hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2fa69fe5895adb75dd648b5f3f0d038073fc5f78 | 10,194 | py | Python | tests/unit/analytics/histogram/test_histogram.py | thehyve/Fractalis | 5591112e5bc994eea5baf3d28caa7e5dfee85a57 | [
"Apache-2.0"
] | null | null | null | tests/unit/analytics/histogram/test_histogram.py | thehyve/Fractalis | 5591112e5bc994eea5baf3d28caa7e5dfee85a57 | [
"Apache-2.0"
] | 6 | 2018-11-02T10:00:04.000Z | 2021-09-13T14:15:36.000Z | tests/unit/analytics/histogram/test_histogram.py | thehyve/Fractalis | 5591112e5bc994eea5baf3d28caa7e5dfee85a57 | [
"Apache-2.0"
] | 1 | 2018-10-22T08:12:00.000Z | 2018-10-22T08:12:00.000Z | import json
import pytest
import pandas as pd
from fractalis.analytics.tasks.histogram.main import HistogramTask
class TestHistogramTask:
    task = HistogramTask()

    def test_correct_output(self):
        df = pd.DataFrame([[100, 'foo', 1],
                           [101, 'foo', 2],
                           [102, 'foo', 3],
                           [103, 'foo', 4],
                           [104, 'foo', 5],
                           [105, 'foo', 6],
                           [106, 'foo', 7],
                           [107, 'foo', 8],
                           [108, 'foo', 9],
                           [109, 'foo', 10]],
                          columns=['id', 'feature', 'value'])
        cat_df = pd.DataFrame([[100, 'cat', 'A'],
                               [101, 'cat', 'B'],
                               [102, 'cat', 'A'],
                               [103, 'cat', 'B'],
                               [104, 'cat', 'A'],
                               [105, 'cat', 'B'],
                               [106, 'cat', 'A'],
                               [107, 'cat', 'B'],
                               [108, 'cat', 'A'],
                               [109, 'cat', 'B']],
                              columns=['id', 'feature', 'value'])
        result = self.task.main(id_filter=[],
                                bw_factor=0.5,
                                num_bins=10,
                                subsets=[],
                                subset_labels=[],
                                data=df,
                                categories=[cat_df])
        assert all([key in result for key in
                    ['stats', 'subsets', 'subset_labels', 'categories', 'label']])
        assert 'A' in result['stats']
        assert 'B' in result['stats']
        assert 0 in result['stats']['A']
        assert all([stat in result['stats']['A'][0] for stat in
                    ['hist', 'bin_edges', 'mean', 'median', 'std', 'dist']])

    def test_can_handle_nas(self):
        df = pd.DataFrame([[100, 'foo', float('nan')],
                           [101, 'foo', 2],
                           [102, 'foo', float('nan')],
                           [103, 'foo', 4],
                           [104, 'foo', float('nan')],
                           [105, 'foo', 6],
                           [106, 'foo', float('nan')],
                           [107, 'foo', 8],
                           [108, 'foo', float('nan')],
                           [109, 'foo', 10]],
                          columns=['id', 'feature', 'value'])
        result = self.task.main(id_filter=[],
                                bw_factor=0.5,
                                num_bins=10,
                                subsets=[],
                                subset_labels=[],
                                data=df,
                                categories=[])
        assert result['stats'][''][0]['median'] == 6
        assert result['stats'][''][0]['mean'] == 6

    def test_can_handle_negatives(self):
        df = pd.DataFrame([[100, 'foo', -2],
                           [101, 'foo', 2],
                           [102, 'foo', -4],
                           [103, 'foo', 4],
                           [104, 'foo', -6],
                           [105, 'foo', 6],
                           [106, 'foo', -8],
                           [107, 'foo', 8],
                           [108, 'foo', -10],
                           [109, 'foo', 10]],
                          columns=['id', 'feature', 'value'])
        result = self.task.main(id_filter=[],
                                bw_factor=0.5,
                                num_bins=10,
                                subsets=[],
                                subset_labels=[],
                                data=df,
                                categories=[])
        assert result['stats'][''][0]['median'] == 0
        assert result['stats'][''][0]['mean'] == 0

    def test_skips_small_groups(self):
        df = pd.DataFrame([[100, 'foo', 1],
                           [101, 'foo', 2],
                           [102, 'foo', float('nan')],
                           [103, 'foo', 4],
                           [104, 'foo', float('nan')],
                           [105, 'foo', 6],
                           [106, 'foo', float('nan')],
                           [107, 'foo', 8],
                           [108, 'foo', float('nan')],
                           [109, 'foo', 10]],
                          columns=['id', 'feature', 'value'])
        cat_df = pd.DataFrame([[100, 'cat', 'A'],
                               [101, 'cat', 'B'],
                               [102, 'cat', 'A'],
                               [103, 'cat', 'B'],
                               [104, 'cat', 'A'],
                               [105, 'cat', 'B'],
                               [106, 'cat', 'A'],
                               [107, 'cat', 'B'],
                               [108, 'cat', 'A'],
                               [109, 'cat', 'B']],
                              columns=['id', 'feature', 'value'])
        result = self.task.main(id_filter=[],
                                bw_factor=0.5,
                                num_bins=10,
                                subsets=[],
                                subset_labels=[],
                                data=df,
                                categories=[cat_df])
        assert 'A' not in result['stats']

    def test_skips_empty_groups(self):
        df = pd.DataFrame([[100, 'foo', float('nan')],
                           [101, 'foo', 2],
                           [102, 'foo', float('nan')],
                           [103, 'foo', 4],
                           [104, 'foo', float('nan')],
                           [105, 'foo', 6],
                           [106, 'foo', float('nan')],
                           [107, 'foo', 8],
                           [108, 'foo', float('nan')],
                           [109, 'foo', 10]],
                          columns=['id', 'feature', 'value'])
        cat_df = pd.DataFrame([[100, 'cat', 'A'],
                               [101, 'cat', 'B'],
                               [102, 'cat', 'A'],
                               [103, 'cat', 'B'],
                               [104, 'cat', 'A'],
                               [105, 'cat', 'B'],
                               [106, 'cat', 'A'],
                               [107, 'cat', 'B'],
                               [108, 'cat', 'A'],
                               [109, 'cat', 'B']],
                              columns=['id', 'feature', 'value'])
        result = self.task.main(id_filter=[],
                                bw_factor=0.5,
                                num_bins=10,
                                subsets=[],
                                subset_labels=[],
                                data=df,
                                categories=[cat_df])
        assert 'A' not in result['stats']
        assert 'B' in result['stats']

    def test_throws_error_if_all_groups_empty(self):
        df = pd.DataFrame([[100, 'foo', float('nan')],
                           [101, 'foo', float('nan')],
                           [102, 'foo', float('nan')],
                           [103, 'foo', float('nan')],
                           [104, 'foo', float('nan')],
                           [105, 'foo', float('nan')],
                           [106, 'foo', float('nan')],
                           [107, 'foo', float('nan')],
                           [108, 'foo', float('nan')],
                           [109, 'foo', float('nan')]],
                          columns=['id', 'feature', 'value'])
        cat_df = pd.DataFrame([[100, 'cat', 'A'],
                               [101, 'cat', 'B'],
                               [102, 'cat', 'A'],
                               [103, 'cat', 'B'],
                               [104, 'cat', 'A'],
                               [105, 'cat', 'B'],
                               [106, 'cat', 'A'],
                               [107, 'cat', 'B'],
                               [108, 'cat', 'A'],
                               [109, 'cat', 'B']],
                              columns=['id', 'feature', 'value'])
        with pytest.raises(ValueError) as e:
            self.task.main(id_filter=[],
                           bw_factor=0.5,
                           num_bins=10,
                           subsets=[],
                           subset_labels=[],
                           data=df,
                           categories=[cat_df])
        assert 'selected numerical variable must be non-empty' in e

    def test_output_is_json_serializable(self):
        df = pd.DataFrame([[100, 'foo', 1],
                           [101, 'foo', 2],
                           [102, 'foo', 3],
                           [103, 'foo', 4],
                           [104, 'foo', 5],
                           [105, 'foo', 6],
                           [106, 'foo', 7],
                           [107, 'foo', 8],
                           [108, 'foo', 9],
                           [109, 'foo', 10]],
                          columns=['id', 'feature', 'value'])
        cat_df = pd.DataFrame([[100, 'cat', 'A'],
                               [101, 'cat', 'B'],
                               [102, 'cat', 'A'],
                               [103, 'cat', 'B'],
                               [104, 'cat', 'A'],
                               [105, 'cat', 'B'],
                               [106, 'cat', 'A'],
                               [107, 'cat', 'B'],
                               [108, 'cat', 'A'],
                               [109, 'cat', 'B']],
                              columns=['id', 'feature', 'value'])
        result = self.task.main(id_filter=[],
                                bw_factor=0.5,
                                num_bins=10,
                                subsets=[],
                                subset_labels=[],
                                data=df,
                                categories=[cat_df])
        json.dumps(result)
| 45.508929 | 82 | 0.281342 | 788 | 10,194 | 3.558376 | 0.133249 | 0.035663 | 0.094151 | 0.068474 | 0.794579 | 0.75535 | 0.718616 | 0.714337 | 0.695435 | 0.695435 | 0 | 0.100334 | 0.559054 | 10,194 | 223 | 83 | 45.713004 | 0.523471 | 0 | 0 | 0.824645 | 0 | 0 | 0.083088 | 0 | 0 | 0 | 0 | 0 | 0.061611 | 1 | 0.033175 | false | 0 | 0.018957 | 0 | 0.061611 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2fb0259f5ce9f5ad038266ee3e74530e8e03a27a | 55 | py | Python | community_detect/__init__.py | zhanghuijun-hello/Community-detection-using-attribute-and-structural-similarities.- | b4df9ec3810e2661f4dc29b70bdafa5e0874a80c | [
"Apache-2.0"
] | 12 | 2018-10-10T03:46:42.000Z | 2022-02-24T06:54:55.000Z | community_detect/__init__.py | zhanghuijun-hello/Community-detection-using-attribute-and-structural-similarities.- | b4df9ec3810e2661f4dc29b70bdafa5e0874a80c | [
"Apache-2.0"
] | null | null | null | community_detect/__init__.py | zhanghuijun-hello/Community-detection-using-attribute-and-structural-similarities.- | b4df9ec3810e2661f4dc29b70bdafa5e0874a80c | [
"Apache-2.0"
] | 4 | 2019-04-07T19:49:41.000Z | 2021-06-21T14:23:18.000Z | from community_detect.community_detect import Community | 55 | 55 | 0.927273 | 7 | 55 | 7 | 0.571429 | 0.612245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 55 | 1 | 55 | 55 | 0.942308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2fb67baa2505d449ad90f206ef6fa6766ee02334 | 140 | py | Python | tests/unit/synchronizers/conftest.py | atsgen/tf-vcenter-fabric-manager | bb2cf0a0f80464457e1b884847df77a11259077c | [
"Apache-2.0"
] | 1 | 2022-03-13T06:31:49.000Z | 2022-03-13T06:31:49.000Z | tests/unit/synchronizers/conftest.py | atsgen/tf-vcenter-fabric-manager | bb2cf0a0f80464457e1b884847df77a11259077c | [
"Apache-2.0"
] | null | null | null | tests/unit/synchronizers/conftest.py | atsgen/tf-vcenter-fabric-manager | bb2cf0a0f80464457e1b884847df77a11259077c | [
"Apache-2.0"
] | 1 | 2020-08-25T12:44:56.000Z | 2020-08-25T12:44:56.000Z | import pytest
from cvfm import models
@pytest.fixture
def fabric_vn(project):
    return {"uuid": models.generate_uuid("dvportgroup-1")}
| 15.555556 | 58 | 0.75 | 19 | 140 | 5.421053 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0.135714 | 140 | 8 | 59 | 17.5 | 0.842975 | 0 | 0 | 0 | 1 | 0 | 0.121429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
2fe1495905d8bdeed4d676cfa9ae59997cbdf9b1 | 24 | py | Python | tests/test_fivpy.py | TheilonMacedo/fivpy | 16237ccaaba2226eba8e4db6372c971263de9b5c | [
"MIT"
] | 1 | 2022-01-17T18:25:59.000Z | 2022-01-17T18:25:59.000Z | tests/test_fivpy.py | TheilonMacedo/fivpy | 16237ccaaba2226eba8e4db6372c971263de9b5c | [
"MIT"
] | null | null | null | tests/test_fivpy.py | TheilonMacedo/fivpy | 16237ccaaba2226eba8e4db6372c971263de9b5c | [
"MIT"
] | null | null | null | from fivpy import fivpy
| 12 | 23 | 0.833333 | 4 | 24 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ff1827ebe96f17312a41ba96a89bc131ebcc740 | 35 | py | Python | configHelper/__init__.py | pkropf/configHelper | a433d28ab6315becc964466cf5125caf6dc458ca | [
"MIT"
] | null | null | null | configHelper/__init__.py | pkropf/configHelper | a433d28ab6315becc964466cf5125caf6dc458ca | [
"MIT"
] | null | null | null | configHelper/__init__.py | pkropf/configHelper | a433d28ab6315becc964466cf5125caf6dc458ca | [
"MIT"
] | null | null | null | from .findConfig import findConfig
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
64029ee317d610d62f392def8d6341ee9dd111aa | 73 | py | Python | Chapter 01/Chap01_Example1.1.py | bpbpublications/Programming-Techniques-using-Python | 49b785f37e95a3aad1d36cef51e219ac56e5e9f0 | [
"MIT"
] | null | null | null | Chapter 01/Chap01_Example1.1.py | bpbpublications/Programming-Techniques-using-Python | 49b785f37e95a3aad1d36cef51e219ac56e5e9f0 | [
"MIT"
] | null | null | null | Chapter 01/Chap01_Example1.1.py | bpbpublications/Programming-Techniques-using-Python | 49b785f37e95a3aad1d36cef51e219ac56e5e9f0 | [
"MIT"
] | null | null | null | a=10
print(type(a))
a='Python'
print(type(a))
a=False
print(type(a)) | 12.166667 | 15 | 0.630137 | 15 | 73 | 3.066667 | 0.4 | 0.586957 | 0.652174 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031746 | 0.136986 | 73 | 6 | 16 | 12.166667 | 0.698413 | 0 | 0 | 0.5 | 0 | 0 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
6441b9a684c612ad840c8b3e1623dcf1d403899f | 38 | py | Python | scikits/pulsefit/__init__.py | johnnylee/scikits.pulsefit | d07571d524d974f52a863cd96a823fce6f1fed1e | [
"MIT"
] | 2 | 2015-08-25T15:41:26.000Z | 2016-05-23T01:42:37.000Z | scikits/pulsefit/__init__.py | johnnylee/scikits.pulsefit | d07571d524d974f52a863cd96a823fce6f1fed1e | [
"MIT"
] | 1 | 2015-03-28T00:32:16.000Z | 2017-04-04T10:48:49.000Z | scikits/pulsefit/__init__.py | johnnylee/scikits.pulsefit | d07571d524d974f52a863cd96a823fce6f1fed1e | [
"MIT"
] | null | null | null | from fit_mpocmle import fit_mpoc_mle
| 12.666667 | 36 | 0.868421 | 7 | 38 | 4.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 2 | 37 | 19 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff33a2b87bafdac7f6989d0f28e630183d998f78 | 32 | py | Python | input/input.py | SharkooMaster/PyTerm | f880e75614e62163035f3b187c0fc249b86e7953 | [
"MIT"
] | 1 | 2022-03-29T08:25:56.000Z | 2022-03-29T08:25:56.000Z | input/input.py | SharkooMaster/PyTerm | f880e75614e62163035f3b187c0fc249b86e7953 | [
"MIT"
] | null | null | null | input/input.py | SharkooMaster/PyTerm | f880e75614e62163035f3b187c0fc249b86e7953 | [
"MIT"
] | null | null | null |
import threading
#class input: | 8 | 16 | 0.78125 | 4 | 32 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 4 | 17 | 8 | 0.925926 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff99aae250a9127e6c05a4184d37e2ec4a42db5a | 97 | py | Python | IPython/external/pexpect/__init__.py | dchichkov/ipython | 8096bb8640ee7e7c5ebdf3f428fe69cd390e1cd4 | [
"BSD-3-Clause-Clear"
] | 26 | 2018-02-14T23:52:58.000Z | 2021-08-16T13:50:03.000Z | IPython/external/pexpect/__init__.py | dchichkov/ipython | 8096bb8640ee7e7c5ebdf3f428fe69cd390e1cd4 | [
"BSD-3-Clause-Clear"
] | 3 | 2015-04-01T13:14:57.000Z | 2015-05-26T16:01:37.000Z | IPython/external/pexpect/__init__.py | dchichkov/ipython | 8096bb8640ee7e7c5ebdf3f428fe69cd390e1cd4 | [
"BSD-3-Clause-Clear"
] | 10 | 2018-08-13T19:38:39.000Z | 2020-04-19T03:02:00.000Z | try:
    import pexpect
    from pexpect import *
except ImportError:
    from _pexpect import *
| 16.166667 | 26 | 0.701031 | 11 | 97 | 6.090909 | 0.545455 | 0.328358 | 0.507463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257732 | 97 | 5 | 27 | 19.4 | 0.930556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ffac575b04e0abd5dc4c32f7690d17067567be0a | 55 | py | Python | optim/__init__.py | rattaoup/invclr | 615ce0c51746cd6b13807b844f31453772fc944a | [
"MIT"
] | 10 | 2021-03-06T11:49:27.000Z | 2022-01-24T03:37:09.000Z | optim/__init__.py | rattaoup/invclr | 615ce0c51746cd6b13807b844f31453772fc944a | [
"MIT"
] | null | null | null | optim/__init__.py | rattaoup/invclr | 615ce0c51746cd6b13807b844f31453772fc944a | [
"MIT"
] | 1 | 2021-03-07T20:20:03.000Z | 2021-03-07T20:20:03.000Z | from .scheduler import CosineAnnealingWithLinearRampLR
| 27.5 | 54 | 0.909091 | 4 | 55 | 12.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 55 | 1 | 55 | 55 | 0.980392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
440d17cdff80d4549f4ed22999754bdf84802597 | 41 | py | Python | markov_matrix/__init__.py | Peder2911/markov-matrix | 1875955cfa4e524088202781b0ba4176bfa80a5b | [
"MIT"
] | null | null | null | markov_matrix/__init__.py | Peder2911/markov-matrix | 1875955cfa4e524088202781b0ba4176bfa80a5b | [
"MIT"
] | null | null | null | markov_matrix/__init__.py | Peder2911/markov-matrix | 1875955cfa4e524088202781b0ba4176bfa80a5b | [
"MIT"
] | null | null | null |
from .matrix_chain import matrix_chain
| 13.666667 | 39 | 0.829268 | 6 | 41 | 5.333333 | 0.666667 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 2 | 40 | 20.5 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
441dbf942352be2d67fa799c3484b8f0d3c6547a | 102 | py | Python | ghtools/migrators/__init__.py | alphagov/ghtools | be10c9251197c4c170e617f8328c1f94f5f45dca | [
"MIT"
] | 3 | 2015-02-09T12:19:40.000Z | 2016-07-20T18:19:11.000Z | ghtools/migrators/__init__.py | alphagov/ghtools | be10c9251197c4c170e617f8328c1f94f5f45dca | [
"MIT"
] | 3 | 2015-02-06T13:39:31.000Z | 2016-10-03T09:33:33.000Z | ghtools/migrators/__init__.py | alphagov/ghtools | be10c9251197c4c170e617f8328c1f94f5f45dca | [
"MIT"
] | 3 | 2017-10-12T10:33:20.000Z | 2021-04-10T19:55:50.000Z | from . import repo
from . import issues
from . import comments
from . import hooks
from . import wiki
| 17 | 22 | 0.754902 | 15 | 102 | 5.133333 | 0.466667 | 0.649351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196078 | 102 | 5 | 23 | 20.4 | 0.939024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4467780dbc90a97105cf46cb57cf1f6a883a8b8a | 25 | py | Python | app/rooms/examples/eg009_assign_form_to_form_group/__init__.py | olegliubimov/code-examples-python | 7af8c58138a9dd0f3b0be12eff1768ae23e449d3 | [
"MIT"
] | 21 | 2020-05-13T21:08:44.000Z | 2022-02-18T01:32:16.000Z | app/rooms/examples/eg009_assign_form_to_form_group/__init__.py | olegliubimov/code-examples-python | 7af8c58138a9dd0f3b0be12eff1768ae23e449d3 | [
"MIT"
] | 8 | 2020-11-23T09:28:04.000Z | 2022-02-02T12:04:08.000Z | app/rooms/examples/eg009_assign_form_to_form_group/__init__.py | olegliubimov/code-examples-python | 7af8c58138a9dd0f3b0be12eff1768ae23e449d3 | [
"MIT"
] | 26 | 2020-05-12T22:20:01.000Z | 2022-03-09T10:57:27.000Z | from .views import eg009
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.16 | 25 | 1 | 25 | 25 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4473066bc220c71669c29972bbb8f3f9076201c2 | 467 | py | Python | app/__init__.py | ganggas95/simdus_app | 0c57e11c712912f61d29ca4b63dfa1fe38bb067c | [
"MIT"
] | null | null | null | app/__init__.py | ganggas95/simdus_app | 0c57e11c712912f61d29ca4b63dfa1fe38bb067c | [
"MIT"
] | null | null | null | app/__init__.py | ganggas95/simdus_app | 0c57e11c712912f61d29ca4b63dfa1fe38bb067c | [
"MIT"
] | 1 | 2020-02-12T09:23:08.000Z | 2020-02-12T09:23:08.000Z | from app.create_app import app as app_instance
from app.auth_app.urls import auth_bp
from app.dashboard_app.urls import dashboard_bp
from app.alamat_app.urls import alamat_bp, api_alamat_bp
from app.keluarga_app.urls import kk_bp
import app.loader
app_instance.register_blueprint(auth_bp)
app_instance.register_blueprint(dashboard_bp)
app_instance.register_blueprint(alamat_bp)
app_instance.register_blueprint(kk_bp)
app_instance.register_blueprint(api_alamat_bp)
| 31.133333 | 56 | 0.873662 | 78 | 467 | 4.871795 | 0.217949 | 0.173684 | 0.25 | 0.368421 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070664 | 467 | 14 | 57 | 33.357143 | 0.875576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.545455 | 0 | 0.545455 | 0.454545 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
925706c19b5401609de283ad671ebb3144b00de8 | 92 | py | Python | src/services/__init__.py | jordansilva/raspberry-f1-dashboard | 96446a348d036a75f4699bab4459eabec16705f8 | [
"Apache-2.0"
] | null | null | null | src/services/__init__.py | jordansilva/raspberry-f1-dashboard | 96446a348d036a75f4699bab4459eabec16705f8 | [
"Apache-2.0"
] | null | null | null | src/services/__init__.py | jordansilva/raspberry-f1-dashboard | 96446a348d036a75f4699bab4459eabec16705f8 | [
"Apache-2.0"
] | null | null | null | from .f12020.f12020socket import F12020Socket
from .f12019.f12019socket import F12019Socket
| 30.666667 | 45 | 0.869565 | 10 | 92 | 8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.357143 | 0.086957 | 92 | 2 | 46 | 46 | 0.595238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
925df8eba0dcef0726e8b361007d7055dc73a625 | 163 | py | Python | asserts/general.py | nazarii-piontko/ToDo-BDD | 5418e712609c686e3a0220889c694f05560e2f31 | [
"MIT"
] | 1 | 2021-01-17T15:28:50.000Z | 2021-01-17T15:28:50.000Z | asserts/general.py | nazarii-piontko/node-todo-bdd | 5418e712609c686e3a0220889c694f05560e2f31 | [
"MIT"
] | null | null | null | asserts/general.py | nazarii-piontko/node-todo-bdd | 5418e712609c686e3a0220889c694f05560e2f31 | [
"MIT"
] | 1 | 2022-02-07T21:44:54.000Z | 2022-02-07T21:44:54.000Z | from unittest import TestCase
class GeneralAssert(TestCase):
    """
    Assert class with general asserts, e.g. assertLess, assertEquals, etc.
    """
    pass
| 18.111111 | 74 | 0.687117 | 18 | 163 | 6.222222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220859 | 163 | 8 | 75 | 20.375 | 0.88189 | 0.429448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
92623171ee08dbe5737d985ebdf23a833ae28ea1 | 2,968 | py | Python | util/sns-subscription-filter-policy-string/test/unit/test_set_filter_policy.py | carvantes/aws-serverless-event-fork-pipelines | db74c3c4c1d5a3f52925b7d18feb45c4f0f7dd8a | [
"MIT-0"
] | 126 | 2019-03-25T22:38:52.000Z | 2020-08-13T19:14:02.000Z | util/sns-subscription-filter-policy-string/test/unit/test_set_filter_policy.py | carvantes/aws-serverless-event-fork-pipelines | db74c3c4c1d5a3f52925b7d18feb45c4f0f7dd8a | [
"MIT-0"
] | 2 | 2019-05-24T01:26:06.000Z | 2020-04-29T13:03:55.000Z | util/sns-subscription-filter-policy-string/test/unit/test_set_filter_policy.py | carvantes/aws-serverless-event-fork-pipelines | db74c3c4c1d5a3f52925b7d18feb45c4f0f7dd8a | [
"MIT-0"
] | 29 | 2019-03-27T07:51:21.000Z | 2020-08-10T04:07:29.000Z | import pytest
import set_filter_policy
SUBSCRIPTION_ARN = 'theSubscription'
FILTER_POLICY = '{"pet": ["dog", "cat"]}'
def test_create(mocker):
    mocker.patch.object(set_filter_policy, 'SNS')
    response = set_filter_policy.create(_mock_event(), None)
    assert response == {
        'Status': 'SUCCESS',
        'PhysicalResourceId': SUBSCRIPTION_ARN + '-filterpolicy'
    }
    set_filter_policy.SNS.set_subscription_attributes.assert_called_with(
        SubscriptionArn=SUBSCRIPTION_ARN,
        AttributeName='FilterPolicy',
        AttributeValue=FILTER_POLICY
    )


def test_create_sns_exception(mocker):
    mocker.patch.object(set_filter_policy, 'SNS')
    set_filter_policy.SNS.set_subscription_attributes.side_effect = Exception('boom!')
    response = set_filter_policy.create(_mock_event(), None)
    assert response == {
        'Status': 'FAILED',
        'Reason': 'Error setting subscription filter policy: boom!',
        'PhysicalResourceId': SUBSCRIPTION_ARN + '-filterpolicy'
    }


def test_update(mocker):
    mocker.patch.object(set_filter_policy, 'SNS')
    response = set_filter_policy.update(_mock_event(), None)
    assert response == {
        'Status': 'SUCCESS',
        'PhysicalResourceId': SUBSCRIPTION_ARN + '-filterpolicy'
    }
    set_filter_policy.SNS.set_subscription_attributes.assert_called_with(
        SubscriptionArn=SUBSCRIPTION_ARN,
        AttributeName='FilterPolicy',
        AttributeValue=FILTER_POLICY
    )


def test_update_sns_exception(mocker):
    mocker.patch.object(set_filter_policy, 'SNS')
    set_filter_policy.SNS.set_subscription_attributes.side_effect = Exception('boom!')
    response = set_filter_policy.update(_mock_event(), None)
    assert response == {
        'Status': 'FAILED',
        'Reason': 'Error setting subscription filter policy: boom!',
        'PhysicalResourceId': SUBSCRIPTION_ARN + '-filterpolicy'
    }


def test_delete(mocker):
    mocker.patch.object(set_filter_policy, 'SNS')
    response = set_filter_policy.delete(_mock_event(), None)
    assert response == {
        'Status': 'SUCCESS',
        'PhysicalResourceId': SUBSCRIPTION_ARN + '-filterpolicy'
    }
    set_filter_policy.SNS.set_subscription_attributes.assert_called_with(
        SubscriptionArn=SUBSCRIPTION_ARN,
        AttributeName='FilterPolicy',
        AttributeValue='{}'
    )


def test_delete_sns_exception(mocker):
    mocker.patch.object(set_filter_policy, 'SNS')
    set_filter_policy.SNS.set_subscription_attributes.side_effect = Exception('boom!')
    response = set_filter_policy.delete(_mock_event(), None)
    assert response == {
        'Status': 'FAILED',
        'Reason': 'Error setting subscription filter policy: boom!',
        'PhysicalResourceId': SUBSCRIPTION_ARN + '-filterpolicy'
    }


def _mock_event():
    return {
        'ResourceProperties': {
            'SubscriptionArn': SUBSCRIPTION_ARN,
            'FilterPolicy': FILTER_POLICY
        }
    }
| 29.979798 | 86 | 0.693733 | 299 | 2,968 | 6.551839 | 0.150502 | 0.159265 | 0.145482 | 0.11026 | 0.887187 | 0.887187 | 0.887187 | 0.887187 | 0.887187 | 0.887187 | 0 | 0 | 0.195081 | 2,968 | 98 | 87 | 30.285714 | 0.820008 | 0 | 0 | 0.635135 | 0 | 0 | 0.193396 | 0 | 0 | 0 | 0 | 0 | 0.121622 | 1 | 0.094595 | false | 0 | 0.027027 | 0.013514 | 0.135135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
926b2716faa9477242cedefc4806afd4f2da2138 | 71 | py | Python | office365/teams/teamsApp.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | 544 | 2016-08-04T17:10:16.000Z | 2022-03-31T07:17:20.000Z | office365/teams/teamsApp.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | 438 | 2016-10-11T12:24:22.000Z | 2022-03-31T19:30:35.000Z | office365/teams/teamsApp.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | 202 | 2016-08-22T19:29:40.000Z | 2022-03-30T20:26:15.000Z | from office365.entity import Entity
class TeamsApp(Entity):
    pass
| 11.833333 | 35 | 0.760563 | 9 | 71 | 6 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.183099 | 71 | 5 | 36 | 14.2 | 0.87931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
92b8cda6b17e5ec15fa96eacf47c76cab71084ed | 3,851 | py | Python | main.py | chicm/open-solution-googleai-object-detection | 187d316238ccd14096e4b96e4e9e78a9e655f45f | [
"MIT"
] | null | null | null | main.py | chicm/open-solution-googleai-object-detection | 187d316238ccd14096e4b96e4e9e78a9e655f45f | [
"MIT"
] | null | null | null | main.py | chicm/open-solution-googleai-object-detection | 187d316238ccd14096e4b96e4e9e78a9e655f45f | [
"MIT"
] | 1 | 2018-08-25T14:46:18.000Z | 2018-08-25T14:46:18.000Z | import click
from src.pipeline_manager import PipelineManager

pipeline_manager = PipelineManager()


@click.group()
def main():
    pass


@main.command()
@click.option('-p', '--pipeline_name', help='pipeline to be trained', required=True)
@click.option('-d', '--dev_mode', help='if true only a small sample of data will be used', is_flag=True, required=False)
def train(pipeline_name, dev_mode):
    pipeline_manager.train(pipeline_name, dev_mode)


@main.command()
@click.option('-p', '--pipeline_name', help='pipeline to be trained', required=True)
@click.option('-d', '--dev_mode', help='if true only a small sample of data will be used', is_flag=True, required=False)
@click.option('-c', '--chunk_size', help='size of the chunks to run evaluation on', type=int, default=None,
              required=False)
def evaluate(pipeline_name, dev_mode, chunk_size):
    pipeline_manager.evaluate(pipeline_name, dev_mode, chunk_size)


@main.command()
@click.option('-p', '--pipeline_name', help='pipeline to be trained', required=True)
@click.option('-d', '--dev_mode', help='if true only a small sample of data will be used', is_flag=True, required=False)
@click.option('-s', '--submit_predictions', help='submit predictions if true', is_flag=True, required=False)
@click.option('-c', '--chunk_size', help='size of the chunks to run prediction on', type=int, default=None,
              required=False)
def predict(pipeline_name, dev_mode, submit_predictions, chunk_size):
    pipeline_manager.predict(pipeline_name, dev_mode, submit_predictions, chunk_size)


@main.command()
@click.option('-p', '--pipeline_name', help='pipeline to be trained', required=True)
@click.option('-s', '--submit_predictions', help='submit predictions if true', is_flag=True, required=False)
@click.option('-d', '--dev_mode', help='if true only a small sample of data will be used', is_flag=True, required=False)
@click.option('-c', '--chunk_size', help='size of the chunks to run evaluation and prediction on', type=int,
              default=None, required=False)
def train_evaluate_predict(pipeline_name, submit_predictions, dev_mode, chunk_size):
    pipeline_manager.train(pipeline_name, dev_mode)
    pipeline_manager.evaluate(pipeline_name, dev_mode, chunk_size)
    pipeline_manager.predict(pipeline_name, dev_mode, submit_predictions, chunk_size)


@main.command()
@click.option('-p', '--pipeline_name', help='pipeline to be trained', required=True)
@click.option('-d', '--dev_mode', help='if true only a small sample of data will be used', is_flag=True, required=False)
@click.option('-c', '--chunk_size', help='size of the chunks to run evaluation and prediction on', type=int,
              default=None, required=False)
def train_evaluate(pipeline_name, dev_mode, chunk_size):
    pipeline_manager.train(pipeline_name, dev_mode)
    pipeline_manager.evaluate(pipeline_name, dev_mode, chunk_size)


@main.command()
@click.option('-p', '--pipeline_name', help='pipeline to be trained', required=True)
@click.option('-s', '--submit_predictions', help='submit predictions if true', is_flag=True, required=False)
@click.option('-d', '--dev_mode', help='if true only a small sample of data will be used', is_flag=True, required=False)
@click.option('-c', '--chunk_size', help='size of the chunks to run prediction on', type=int, default=None,
              required=False)
def evaluate_predict(pipeline_name, submit_predictions, dev_mode, chunk_size):
    pipeline_manager.evaluate(pipeline_name, dev_mode, chunk_size)
    pipeline_manager.predict(pipeline_name, dev_mode, submit_predictions, chunk_size)


@main.command()
@click.option('-f', '--submission_filepath', help='filepath to json submission file', required=True)
def submit_predictions(submission_filepath):
    pipeline_manager.make_submission(submission_filepath)


if __name__ == "__main__":
    main()


# Source: pirates/leveleditor/worldData/CaveBTemplate.py from Willy5s/Pirates-Online-Rewritten (BSD-3-Clause)
from pandac.PandaModules import Point3, VBase3
objectStruct = {'Objects': {'1172185213.66sdnaik': {'Type': 'Island Game Area','Name': 'CaveBTemplate','File': '','Instanced': True,'Objects': {'1172185301.05sdnaik': {'Type': 'Locator Node','Name': 'portal_interior_1','Hpr': VBase3(-92.814, 0.0, 0.0),'Pos': Point3(408.102, 203.835, 1.938),'Scale': VBase3(1.0, 1.0, 1.0)},'1172185301.08sdnaik': {'Type': 'Locator Node','Name': 'portal_interior_2','Hpr': VBase3(-0.234, -0.244, 0.739),'Pos': Point3(-535.085, 236.444, 77.638),'Scale': VBase3(1.0, 1.0, 1.0)},'1172893180.14kmuller': {'Type': 'Tunnel Cap','Hpr': VBase3(-89.933, 0.0, 0.0),'Pos': Point3(-530.764, 233.107, 82.679),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnelcap_cave_interior'}},'1172893192.18kmuller': {'Type': 'Tunnel Cap','Hpr': Point3(0.0, 0.0, 0.0),'Pos': Point3(-476.043, 262.701, 122.229),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnelcap_cave_interior'}},'1172893216.81kmuller': {'Type': 'Tunnel Cap','Hpr': Point3(0.0, 0.0, 0.0),'Pos': Point3(-436.771, 259.368, 146.301),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnelcap_cave_interior'}},'1172893544.75kmuller': {'Type': 'Tunnel Cap','Hpr': VBase3(-29.142, 0.38, 0.0),'Pos': Point3(408.785, 196.489, 3.052),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0),'Model': 'models/tunnels/tunnelcap_cave_interior'}},'1176755520.41dzlu': {'Type': 'Light - Dynamic','Attenuation': '0.005','ConeAngle': '120.0000','DropOff': '6.8182','Flickering': False,'Hpr': VBase3(-110.238, -3.38, 94.315),'Intensity': '1.5758','LightType': 'SPOT','Pos': Point3(-538.19, 242.893, 99.248),'Visual': {'Color': (1, 1, 1, 1),'Model': 'models/props/light_tool_bulb'}},'1176755691.11dzlu': {'Type': 'Light - Dynamic','Attenuation': '0.005','ConeAngle': '120.0000','DropOff': '2.7273','Flickering': False,'Hpr': VBase3(42.452, 40.037, -92.62),'Intensity': '1.4545','LightType': 'SPOT','Pos': 
Point3(-301.72, -166.094, 66.363),'Visual': {'Color': (1, 1, 1, 1),'Model': 'models/props/light_tool_bulb'}},'1176756704.88dzlu': {'Type': 'Light - Dynamic','Attenuation': '0.005','ConeAngle': '60.0000','DropOff': '0.0000','Flickering': False,'Hpr': Point3(0.0, 0.0, 0.0),'Intensity': '0.1515','LightType': 'AMBIENT','Pos': Point3(66.477, -201.119, 35.177),'Visual': {'Color': (1, 1, 1, 1),'Model': 'models/props/light_tool_bulb'}}},'Visual': {'Model': 'models/caves/cave_b_zero'}}},'Node Links': [],'Layers': {},'ObjectIds': {'1172185213.66sdnaik': '["Objects"]["1172185213.66sdnaik"]','1172185301.05sdnaik': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1172185301.05sdnaik"]','1172185301.08sdnaik': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1172185301.08sdnaik"]','1172893180.14kmuller': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1172893180.14kmuller"]','1172893192.18kmuller': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1172893192.18kmuller"]','1172893216.81kmuller': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1172893216.81kmuller"]','1172893544.75kmuller': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1172893544.75kmuller"]','1176755520.41dzlu': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1176755520.41dzlu"]','1176755691.11dzlu': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1176755691.11dzlu"]','1176756704.88dzlu': '["Objects"]["1172185213.66sdnaik"]["Objects"]["1176756704.88dzlu"]'}}


# Source: scrap_utils/__init__.py from bizzyvinci/scrap-utils (MIT)
"""
============
Scrap Utils
============
This module provides some functions that might save you from repetition
in scraping projects.
Functions
---------
+-------------------+-----------------------------------------------+
| load_json | Load json from file |
+-------------------+-----------------------------------------------+
| dump_json | Dump json into filepath |
+-------------------+-----------------------------------------------+
| to_csv | Save dataset to csv file |
+-------------------+-----------------------------------------------+
| read_csv | Read dataset from csv file |
+-------------------+-----------------------------------------------+
| get | Send a GET request with requests library |
+-------------------+-----------------------------------------------+
| post | Send a POST request with requests library |
+-------------------+-----------------------------------------------+
"""
from .file import *
from .requests import *
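The table above only names the helpers; a minimal sketch of what a `load_json`/`dump_json` pair typically looks like, built on the stdlib `json` module (an illustrative reimplementation, not the package's actual source):

```python
import json
import os
import tempfile


def load_json(filepath, encoding="utf-8"):
    # parse a JSON file and return the resulting object
    with open(filepath, encoding=encoding) as f:
        return json.load(f)


def dump_json(data, filepath, encoding="utf-8", indent=4):
    # serialize `data` as JSON into `filepath`
    with open(filepath, "w", encoding=encoding) as f:
        json.dump(data, f, indent=indent)


path = os.path.join(tempfile.gettempdir(), "scrap_utils_demo.json")
dump_json({"page": 1, "items": ["a", "b"]}, path)
print(load_json(path))  # → {'page': 1, 'items': ['a', 'b']}
```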


# Source: python/fbs/OutputMatrix.py from oliverlee/biketest (BSD-2-Clause)
# automatically generated, do not modify
# namespace: fbs

import flatbuffers


class OutputMatrix(object):
    __slots__ = ['_tab']

    # OutputMatrix
    def Init(self, buf, pos):
        self._tab = flatbuffers.table.Table(buf, pos)

    # OutputMatrix
    def C00(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(0))
    # OutputMatrix
    def C01(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(8))
    # OutputMatrix
    def C02(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(16))
    # OutputMatrix
    def C03(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(24))
    # OutputMatrix
    def C04(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(32))
    # OutputMatrix
    def C10(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(40))
    # OutputMatrix
    def C11(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(48))
    # OutputMatrix
    def C12(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(56))
    # OutputMatrix
    def C13(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(64))
    # OutputMatrix
    def C14(self): return self._tab.Get(flatbuffers.number_types.Float64Flags, self._tab.Pos + flatbuffers.number_types.UOffsetTFlags.py_type(72))


def CreateOutputMatrix(builder, c00, c01, c02, c03, c04, c10, c11, c12, c13, c14):
    builder.Prep(8, 80)
    builder.PrependFloat64(c14)
    builder.PrependFloat64(c13)
    builder.PrependFloat64(c12)
    builder.PrependFloat64(c11)
    builder.PrependFloat64(c10)
    builder.PrependFloat64(c04)
    builder.PrependFloat64(c03)
    builder.PrependFloat64(c02)
    builder.PrependFloat64(c01)
    builder.PrependFloat64(c00)
    return builder.Offset()


# Source: python/pbase/io/__init__.py from renehorstmann/pbase (MIT)
from .csv import *
from .stl import *
from .ply import *


# Source: src/modules/http/__init__.py from RecicladoraSanMiguel/recsm_odoo_image_manager (MIT)
from .main import HTTP


# Source: bulkmail/__init__.py from roughweed/csv-bulk-email (MIT)
from .mailer import TextMail


# Source: app/views/students/__init__.py from edejeed/SSIS-WEB-BASED-APP (MIT)
from flask import Blueprint
student = Blueprint('student', __name__)
from . import routes


# Source: 2020/day09/day09_02.py from bandarji/aoc (Apache-2.0)
print('I put it all in the first file')


# Source: src/core/base/exporters/__init__.py from Epihaius/panda3dstudio (BSD-3-Clause)
from . import bam, obj


# Source: polecat/data/examples/starwars/starwars/project.py from furious-luke/polecat (MIT)
from polecat.project import Project
from .models import * # noqa


class StarWarsProject(Project):
    pass


# Source: Python/problem008.py from emergent/ProjectEuler (Unlicense)
#! /usr/bin/env python3
'''
Problem 8 - Project Euler
http://projecteuler.net/index.php?section=problems&id=8
'''
digits = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
""".replace("\n",'')
from functools import reduce
from operator import mul
maxnum = 0
# len(digits) - 12 so the final window of 13 adjacent digits is included
for i in range(0, len(digits) - 12):
    maxnum = max(maxnum, reduce(mul, list(map(int, digits[i:i+13]))))
print(maxnum)
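The same sliding-window max-product technique, factored into a reusable helper (the function name and the toy input are mine, not part of the original solution):

```python
from functools import reduce
from operator import mul


def max_window_product(digits, width):
    # largest product of `width` adjacent digits in the string
    return max(
        reduce(mul, (int(d) for d in digits[i:i + width]))
        for i in range(len(digits) - width + 1)
    )


# first ten digits of the Project Euler grid, windows of four
print(max_window_product("7316717653", 4))  # → 630 (7*6*5*3)
```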


# Source: pivot_hottime_month.py from Soonyeon-Kim/TheShadowTree_in_Seoul (Unlicense)
import pandas as pd
filename1_f = '2018/pivot_living_people_2018'
filename2_f = '2017/pivot_living_people_2017'

for idx in range(1, 13, 1):
    filename1 = filename1_f + '{0:02d}.csv'.format(idx)
    df01 = pd.read_csv(filename1, encoding='cp949')
    # mean living population per hour of day
    df_per_time = pd.pivot_table(data=df01, values='Total_Living_people',
                                 index=['YYYYMM', 'si', 'gu', 'dong'],
                                 columns='H', aggfunc='mean')
    # hot window: hours 13 to 15
    df_per_hot = df_per_time.iloc[:, 13:16]
    df_per_hot['hot_mean'] = round((df_per_hot.loc[:, 13] + df_per_hot.loc[:, 14] + df_per_hot.loc[:, 15]) / 3, 0)
    mon = str(df_per_hot.index[0][0])
    df_per_hot['month'] = mon[4:6]
    df_hottime_month = pd.pivot_table(data=df_per_hot, values='hot_mean',
                                      index=['si', 'gu', 'dong', 'month'], aggfunc='mean')
    filename_pivot_f = 'pivot_hottime_month_2018'
    filename_pivot = filename_pivot_f + '{0:02d}.csv'.format(idx)
    df_hottime_month.to_csv(filename_pivot, encoding='cp949')
    print(filename_pivot + ' saved')

    filename2 = filename2_f + '{0:02d}.csv'.format(idx)
    df02 = pd.read_csv(filename2, encoding='cp949')
    # mean living population per hour of day
    df_per_time = pd.pivot_table(data=df02, values='Total_Living_people',
                                 index=['YYYYMM', 'si', 'gu', 'dong'],
                                 columns='H', aggfunc='mean')
    # hot window: hours 13 to 15
    df_per_hot = df_per_time.iloc[:, 13:16]
    df_per_hot['hot_mean'] = round((df_per_hot.loc[:, 13] + df_per_hot.loc[:, 14] + df_per_hot.loc[:, 15]) / 3, 0)
    mon = str(df_per_hot.index[0][0])
    df_per_hot['month'] = mon[4:6]
    df_hottime_month = pd.pivot_table(data=df_per_hot, values='hot_mean',
                                      index=['si', 'gu', 'dong', 'month'], aggfunc='mean')
    filename_pivot_f = 'pivot_hottime_month_2017'
    filename_pivot = filename_pivot_f + '{0:02d}.csv'.format(idx)
    df_hottime_month.to_csv(filename_pivot, encoding='cp949')
    print(filename_pivot + ' saved')
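The pivot-then-average step used above can be seen on a toy frame (synthetic data; the column names mirror the script's real ones):

```python
import pandas as pd

df = pd.DataFrame({
    'dong': ['A', 'A', 'A', 'B', 'B', 'B'],
    'H': [13, 14, 15, 13, 14, 15],
    'Total_Living_people': [100, 120, 140, 80, 60, 70],
})

# rows = district, columns = hour-of-day, cells = mean population
per_time = pd.pivot_table(df, values='Total_Living_people',
                          index='dong', columns='H', aggfunc='mean')
# average the 13:00-15:00 columns, as the script's hot_mean does
hot_mean = per_time[[13, 14, 15]].mean(axis=1).round(0)
print(hot_mean.to_dict())  # → {'A': 120.0, 'B': 70.0}
```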


# Source: trendalert/__init__.py from jonleonATX/donchian_trend_alert (MIT)
from trendalert.alert import *


# Source: tests/st/pynative/test_tensor_augassign.py from httpsgithu/mindspore (Apache-2.0)
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
""" test_tensor_setitem """
import numpy as np
import pytest
from mindspore import Tensor, context
from mindspore import dtype as mstype
def setup_module():
    context.set_context(mode=context.PYNATIVE_MODE)
# GPU: does not support op "FloorMod"
@pytest.mark.level1
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tensor_augassign_by_slice():
    input_np_3d = np.arange(120).reshape(4, 5, 6).astype(np.float32)
    input_tensor_3d = Tensor(input_np_3d, mstype.float32)

    index_slice_1 = slice(1, None, None)
    index_slice_2 = slice(None, 4, None)
    index_slice_3 = slice(-3, 4, None)
    index_slice_4 = slice(2, -1, None)
    index_slice_7 = slice(1, 5, None)
    index_slice_8 = slice(-5, 3, None)

    value_number = 3
    value_list_1_ele = [2]
    value_list_mul_ele = [10, 20, 30, 40, 50, 60]
    value_list_much_ele = [10, 20, 30, 40, 50, 60, 70]

    input_tensor_3d[index_slice_1] += value_number
    input_np_3d[index_slice_1] += value_number
    assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)

    input_tensor_3d[index_slice_2] -= value_list_1_ele
    input_np_3d[index_slice_2] -= value_list_1_ele
    assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)

    input_tensor_3d[index_slice_3] *= value_list_mul_ele
    input_np_3d[index_slice_3] *= value_list_mul_ele
    assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)

    input_tensor_3d[index_slice_4] /= value_number
    input_np_3d[index_slice_4] /= value_number
    assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)

    input_tensor_3d[index_slice_7] /= value_number
    input_np_3d[index_slice_7] /= value_number
    assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)

    input_tensor_3d[index_slice_8] += value_number
    input_np_3d[index_slice_8] += value_number
    assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)

    with pytest.raises(ValueError):
        input_tensor_3d[index_slice_8] /= value_list_much_ele
# GPU: does not support op "FloorMod"
@pytest.mark.level1
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tesnsor_augassign_by_ellipsis():
input_np_3d = np.arange(24).reshape(2, 3, 4).astype(np.float32)
input_tensor_3d = Tensor(input_np_3d, mstype.float32)
value_number_1, value_number_2 = 1, 2.0
value_np_1 = np.array([1])
value_np_2 = np.array([1, 2, 3, 4])
value_np_3 = np.arange(12).reshape(3, 4)
value_tensor_1 = Tensor(value_np_1)
value_tensor_2 = Tensor(value_np_2)
value_tensor_3 = Tensor(value_np_3)
value_tuple_1_ele = (0.5,)
value_tuple_4_ele = (0.1, 0.2, 0.3, 0.4)
value_list_1_ele = [1.5]
value_list_4_ele = [1.1, 1.2, 1.3, 1.4]
input_tensor_3d[...] += value_number_1
input_np_3d[...] += value_number_1
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] -= value_number_2
input_np_3d[...] -= value_number_2
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] *= value_tensor_1
input_np_3d[...] *= value_np_1
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] /= value_tensor_2
input_np_3d[...] /= value_np_2
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] /= value_tensor_3
input_np_3d[...] /= value_np_3
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] -= value_tuple_1_ele
input_np_3d[...] -= value_tuple_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] *= value_tuple_4_ele
input_np_3d[...] *= value_tuple_4_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] -= value_list_1_ele
input_np_3d[...] -= value_list_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[...] *= value_list_4_ele
input_np_3d[...] *= value_list_4_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
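As a sanity check on the ellipsis cases, `a[...] op= v` is a whole-array in-place operation with the usual broadcasting; a NumPy-only sketch:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4).astype(np.float32)
b = a.copy()
a[...] *= [0.1, 0.2, 0.3, 0.4]                   # ellipsis selects the whole array
b *= np.array([0.1, 0.2, 0.3, 0.4], np.float32)  # same broadcast, same result
assert np.allclose(a, b)
```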


# GPU: does not support op "FloorMod"
@pytest.mark.level1
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tensor_augassign_by_bool():
input_np_3d = np.arange(120).reshape(4, 5, 6).astype(np.float32)
input_tensor_3d = Tensor(input_np_3d, mstype.float32)
index_bool_1 = True
index_bool_2 = False
value_number = 1
value_np_1 = np.array([1], np.float32)
value_np_2 = np.array([1, 2, 3, 4, 5, 6], np.float32)
value_np_3 = np.arange(1, 31).astype(np.float32).reshape(5, 6)
value_np_4 = np.arange(1, 121).astype(np.float32).reshape(4, 5, 6)
value_tensor_1 = Tensor(value_np_1, mstype.float32)
value_tensor_2 = Tensor(value_np_2, mstype.float32)
value_tensor_3 = Tensor(value_np_3, mstype.float32)
value_tensor_4 = Tensor(value_np_4, mstype.float32)
value_tuple_1_ele = (0.5,)
value_tuple_6_ele = (0.1, 0.2, 0.3, 0.4, 0.5, 0.6)
value_list_1_ele = [1.5]
value_list_6_ele = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]
input_tensor_3d[index_bool_1] += value_number
input_np_3d[index_bool_1] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] -= value_tensor_1
input_np_3d[index_bool_1] -= value_np_1
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] *= value_tensor_2
input_np_3d[index_bool_1] *= value_np_2
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] -= value_tensor_3
input_np_3d[index_bool_1] -= value_np_3
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] //= value_tensor_4
input_np_3d[index_bool_1] //= value_np_4
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] %= value_tuple_1_ele
input_np_3d[index_bool_1] %= value_tuple_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] %= value_tuple_6_ele
input_np_3d[index_bool_1] %= value_tuple_6_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] %= value_list_1_ele
input_np_3d[index_bool_1] %= value_list_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_bool_1] -= value_list_6_ele
input_np_3d[index_bool_1] -= value_list_6_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
with pytest.raises(IndexError):
input_tensor_3d[index_bool_2] *= value_tensor_2
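The scalar-bool cases rely on NumPy's rule that a `True` index prepends a length-1 axis (a writable view of the whole array), while `False` selects a length-0 axis, which is why the tensor path rejects it with IndexError. A NumPy-only sketch, assuming nothing beyond stock NumPy:

```python
import numpy as np

a = np.arange(12).reshape(3, 4).astype(np.float32)
assert a[True].shape == (1, 3, 4)     # True prepends a length-1 axis
assert a[False].shape == (0, 3, 4)    # False selects nothing
a[True] += 1                          # updates every element of a in place
assert a[0, 0] == 1.0
```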


# GPU: does not support op "FloorMod"
@pytest.mark.level1
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tensor_augassign_by_number():
input_np_1d = np.arange(4).astype(np.float32)
input_tensor_1d = Tensor(input_np_1d, mstype.float32)
input_np_3d = np.arange(80).reshape(4, 5, 4).astype(np.float32)
input_tensor_3d = Tensor(input_np_3d, mstype.float32)
number_index_1, number_index_2, number_index_3, number_index_4 = 0, 3, 4, 3.4
value_number = 2
value_np_scalar = np.array(5)
value_np_1_ele = np.array([1])
value_np_1d = np.array([1, 2, 3, 4])
value_np_2d = np.arange(20).reshape(5, 4)
value_tensor_scalar = Tensor(value_np_scalar, mstype.float32)
value_tensor_1_ele = Tensor(value_np_1_ele, mstype.float32)
value_tensor_1d = Tensor(value_np_1d, mstype.float32)
value_tensor_2d = Tensor(value_np_2d, mstype.float32)
value_tuple_1_ele = (100,)
value_tuple_mul_ele = (10, 20, 30, 40)
value_tuple_much_ele = (10, 20, 30, 40, 10)
value_tuple_empty = ()
value_list_1_ele = [101]
value_list_mul_ele = [11, 21, 31, 41]
value_list_much_ele = [12, 22, 33, 43, 18]
value_list_empty = []
input_tensor_1d[number_index_1] += value_number
input_np_1d[number_index_1] += value_number
assert np.allclose(input_tensor_1d.asnumpy(), input_np_1d, 0.0001, 0.0001)
input_tensor_1d[number_index_2] -= value_number
input_np_1d[number_index_2] -= value_number
assert np.allclose(input_tensor_1d.asnumpy(), input_np_1d, 0.0001, 0.0001)
input_tensor_3d[number_index_1] *= value_number
input_np_3d[number_index_1] *= value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[number_index_2] /= value_number
input_np_3d[number_index_2] /= value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_1d[number_index_1] //= value_tensor_scalar
input_np_1d[number_index_1] //= value_np_scalar
assert np.allclose(input_tensor_1d.asnumpy(), input_np_1d, 0.0001, 0.0001)
input_tensor_3d[number_index_1] *= value_tensor_scalar
input_np_3d[number_index_1] *= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[number_index_2] %= value_tensor_1_ele
input_np_3d[number_index_2] %= value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[number_index_1] += value_tensor_1d
input_np_3d[number_index_1] += value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[number_index_2] -= value_tensor_2d
input_np_3d[number_index_2] -= value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_1d[number_index_1] += value_tuple_1_ele
input_np_1d[number_index_1] += value_tuple_1_ele
assert np.allclose(input_tensor_1d.asnumpy(), input_np_1d, 0.0001, 0.0001)
input_tensor_3d[number_index_1] -= value_tuple_1_ele
input_np_3d[number_index_1] -= value_tuple_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[number_index_1] *= value_tuple_mul_ele
input_np_3d[number_index_1] *= value_tuple_mul_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_1d[number_index_2] += value_list_1_ele
input_np_1d[number_index_2] += value_list_1_ele
assert np.allclose(input_tensor_1d.asnumpy(), input_np_1d, 0.0001, 0.0001)
input_tensor_3d[number_index_1] -= value_list_1_ele
input_np_3d[number_index_1] -= value_list_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[number_index_2] *= value_list_mul_ele
input_np_3d[number_index_2] *= value_list_mul_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
with pytest.raises(IndexError):
input_tensor_1d[number_index_3] += value_number
with pytest.raises(IndexError):
input_tensor_3d[number_index_3] -= value_number
with pytest.raises(IndexError):
input_tensor_1d[number_index_4] *= value_number
with pytest.raises(IndexError):
input_tensor_3d[number_index_4] /= value_number
with pytest.raises(ValueError):
input_tensor_1d[number_index_1] *= value_tuple_mul_ele
with pytest.raises(ValueError):
input_tensor_3d[number_index_1] *= value_tuple_much_ele
with pytest.raises(RuntimeError):
input_tensor_1d[number_index_1] /= value_tuple_empty
with pytest.raises(ValueError):
input_tensor_3d[number_index_2] //= value_list_much_ele
with pytest.raises(ValueError):
input_tensor_3d[number_index_2] *= value_list_empty
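The error cases above track NumPy's integer-index rules: an index past the axis length raises IndexError, and so does a non-integer scalar index. A minimal NumPy-only sketch:

```python
import numpy as np

a = np.arange(4).astype(np.float32)
a[0] += 2                     # single-element in-place update
assert a[0] == 2.0
errors = []
for bad in (4, 3.4):          # out of range, then a non-integer scalar
    try:
        a[bad] += 1
    except IndexError:
        errors.append(bad)
```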


# GPU: does not support op "FloorMod"
@pytest.mark.level0
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tensor_augassign_by_tensor():
input_np_3d = np.arange(120).reshape(4, 5, 6).astype(np.float32)
input_tensor_3d = Tensor(input_np_3d, mstype.float32)
index_np_1d_1ele = np.random.randint(4, size=1)
index_np_1d = np.random.randint(4, size=6)
index_np_2d = np.random.randint(4, size=(5, 6))
index_np_3d = np.random.randint(4, size=(4, 5, 6))
index_tensor_1d_1ele = Tensor(index_np_1d_1ele, mstype.int32)
index_tensor_1d = Tensor(index_np_1d, mstype.int32)
index_tensor_2d = Tensor(index_np_2d, mstype.int32)
index_tensor_3d = Tensor(index_np_3d, mstype.int32)
value_number = 1
value_np_1 = np.array([1])
value_np_2 = np.array([1, 2, 3, 4, 5, 6])
value_np_3 = np.arange(1, 31).reshape(5, 6)
value_np_4 = np.arange(1, 181).reshape(6, 5, 6)
value_tensor_1 = Tensor(value_np_1)
value_tensor_2 = Tensor(value_np_2)
value_tensor_3 = Tensor(value_np_3)
value_tensor_4 = Tensor(value_np_4)
value_tuple_1_ele = (0.5,)
value_tuple_6_ele = (0.1, 0.2, 0.3, 0.4, 0.5, 0.6)
value_list_1_ele = [1.5]
value_list_6_ele = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]
input_tensor_3d[index_tensor_1d_1ele] += value_number
input_np_3d[index_np_1d_1ele] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d_1ele] -= value_tensor_2
input_np_3d[index_np_1d_1ele] -= value_np_2
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d_1ele] /= value_tuple_6_ele
input_np_3d[index_np_1d_1ele] /= value_tuple_6_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d_1ele] *= value_list_1_ele
input_np_3d[index_np_1d_1ele] *= value_list_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d] += value_number
input_np_3d[index_np_1d] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d] -= value_tensor_1
input_np_3d[index_np_1d] -= value_np_1
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d] /= value_tuple_1_ele
input_np_3d[index_np_1d] /= value_tuple_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_1d] += value_list_6_ele
input_np_3d[index_np_1d] += value_list_6_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_2d] -= value_number
input_np_3d[index_np_2d] -= value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_2d] *= value_tensor_2
input_np_3d[index_np_2d] *= value_np_2
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_2d] /= value_tensor_4
input_np_3d[index_np_2d] /= value_np_4
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_2d] += value_tuple_6_ele
input_np_3d[index_np_2d] += value_tuple_6_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_2d] -= value_list_1_ele
input_np_3d[index_np_2d] -= value_list_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_3d] *= value_number
input_np_3d[index_np_3d] *= value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_3d] /= value_tensor_1
input_np_3d[index_np_3d] /= value_np_1
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_3d] += value_tensor_3
input_np_3d[index_np_3d] += value_np_3
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_3d] /= value_tuple_1_ele
input_np_3d[index_np_3d] /= value_tuple_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tensor_3d] -= value_list_6_ele
input_np_3d[index_np_3d] -= value_list_6_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
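Integer-array (fancy) indexing augassign expands to a get/compute/set round trip, so with duplicate indices the update is applied once per selected position, not accumulated. A NumPy-only sketch:

```python
import numpy as np

a = np.arange(120).reshape(4, 5, 6).astype(np.float32)
b = a.copy()
idx = np.array([0, 2, 0])             # note the duplicate index 0
a[idx] += 1.0
b[idx] = b[idx] + 1.0                 # explicit expansion of the augassign
assert np.allclose(a, b)
assert a[0, 0, 0] == 1.0              # +1 applied once, not twice, for the duplicate
```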


# GPU: does not support op "FloorMod"
@pytest.mark.level0
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tensor_augassign_by_list():
input_np_3d = np.arange(120).reshape(4, 5, 6).astype(np.float32)
input_tensor_3d = Tensor(input_np_3d, mstype.float32)
list_index_empty = []
list_index_int_1 = [2]
list_index_int_2 = [3, 1]
list_index_int_overflow = [4, 2]
list_index_bool_1 = [False, False, False, False]
list_index_bool_2 = [True, True, True, True]
list_index_bool_3 = [True, False, True, False]
list_index_mix_1 = [True, 0]
list_index_mix_2 = [3, False]
value_number = 2
value_np_scalar = np.array(100)
value_np_1_ele = np.array([1])
value_np_1d = np.array([1, 2, 3, 4, 5, 6])
value_np_2d = np.arange(1, 31).reshape(5, 6)
value_np_3d = np.arange(1, 61).reshape(2, 5, 6)
value_tensor_scalar = Tensor(value_np_scalar, mstype.float32)
value_tensor_1_ele = Tensor(value_np_1_ele, mstype.float32)
value_tensor_1d = Tensor(value_np_1d, mstype.float32)
value_tensor_2d = Tensor(value_np_2d, mstype.float32)
value_tensor_3d = Tensor(value_np_3d, mstype.float32)
input_tensor_3d[list_index_int_1] += value_number
input_np_3d[list_index_int_1] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_1] += value_tensor_scalar
input_np_3d[list_index_int_1] += value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_1] -= value_tensor_1_ele
input_np_3d[list_index_int_1] -= value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_1] *= value_tensor_1d
input_np_3d[list_index_int_1] *= value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_1] /= value_tensor_2d
input_np_3d[list_index_int_1] /= value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_2] += value_number
input_np_3d[list_index_int_2] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_2] //= value_tensor_scalar
input_np_3d[list_index_int_2] //= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_2] *= value_tensor_1_ele
input_np_3d[list_index_int_2] *= value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_2] %= value_tensor_1d
input_np_3d[list_index_int_2] %= value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_2] += value_tensor_2d
input_np_3d[list_index_int_2] += value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_int_2] -= value_tensor_3d
input_np_3d[list_index_int_2] -= value_np_3d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_2] += value_number
input_np_3d[list_index_bool_2] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_2] *= value_tensor_scalar
input_np_3d[list_index_bool_2] *= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_2] /= value_tensor_1_ele
input_np_3d[list_index_bool_2] /= value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_2] //= value_tensor_1d
input_np_3d[list_index_bool_2] //= value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_2] %= value_tensor_2d
input_np_3d[list_index_bool_2] %= value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_3] += value_number
input_np_3d[list_index_bool_3] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_3] *= value_tensor_scalar
input_np_3d[list_index_bool_3] *= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_3] += value_tensor_1_ele
input_np_3d[list_index_bool_3] += value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_3] -= value_tensor_1d
input_np_3d[list_index_bool_3] -= value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_3] *= value_tensor_2d
input_np_3d[list_index_bool_3] *= value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_bool_3] /= value_tensor_3d
input_np_3d[list_index_bool_3] /= value_np_3d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_1] += value_number
input_np_3d[list_index_mix_1] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_1] *= value_tensor_scalar
input_np_3d[list_index_mix_1] *= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_1] += value_tensor_1_ele
input_np_3d[list_index_mix_1] += value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_1] -= value_tensor_1d
input_np_3d[list_index_mix_1] -= value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_1] *= value_tensor_2d
input_np_3d[list_index_mix_1] *= value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_1] /= value_tensor_3d
input_np_3d[list_index_mix_1] /= value_np_3d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_2] += value_number
input_np_3d[list_index_mix_2] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_2] *= value_tensor_scalar
input_np_3d[list_index_mix_2] *= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_2] += value_tensor_1_ele
input_np_3d[list_index_mix_2] += value_np_1_ele
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_2] -= value_tensor_1d
input_np_3d[list_index_mix_2] -= value_np_1d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_2] *= value_tensor_2d
input_np_3d[list_index_mix_2] *= value_np_2d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[list_index_mix_2] /= value_tensor_3d
input_np_3d[list_index_mix_2] /= value_np_3d
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
with pytest.raises(IndexError):
input_tensor_3d[list_index_empty] += value_number
with pytest.raises(IndexError):
input_tensor_3d[list_index_int_overflow] += value_number
with pytest.raises(IndexError):
input_tensor_3d[list_index_bool_1] += value_number
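For the list-index cases, an integer list gathers along the first axis while a full-length bool list acts as a mask over it; an empty list or an out-of-range entry fails. A NumPy-only sketch of the accepted forms:

```python
import numpy as np

a = np.arange(120).reshape(4, 5, 6).astype(np.float32)
a[[3, 1]] *= 2.0                      # integer list gathers rows 3 and 1
a[[True, False, True, False]] += 1.0  # bool list masks the first axis
assert a[0, 0, 0] == 1.0              # row 0: untouched by the gather, then masked in
assert a[1, 0, 1] == 62.0             # row 1: element 31 doubled, excluded by the mask
```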


# GPU: does not support op "FloorMod"
@pytest.mark.level1
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
def test_tensor_augassign_by_tuple():
input_np_3d = np.arange(120).reshape(4, 5, 6).astype(np.float32)
input_tensor_3d = Tensor(input_np_3d, mstype.float32)
index_tuple_1 = (slice(1, 3, 1), ..., [1, 3, 2])
index_tuple_2 = (2, 3, 4)
index_tuple_4 = ([2, 3], True)
index_tuple_5 = (False, 3)
index_tuple_6 = (False, slice(3, 1, -1))
index_tuple_7 = (..., slice(None, 6, 2))
value_number = 2
value_np_scalar = np.array(100)
value_tensor_scalar = Tensor(value_np_scalar, mstype.float32)
input_tensor_3d[index_tuple_1] += value_number
input_np_3d[index_tuple_1] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tuple_1] -= Tensor(np.ones((2, 5, 3)), mstype.float32)
input_np_3d[index_tuple_1] -= np.ones((2, 5, 3))
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tuple_2] *= value_tensor_scalar
input_np_3d[index_tuple_2] *= value_np_scalar
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tuple_4] //= value_number
input_np_3d[index_tuple_4] //= value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
input_tensor_3d[index_tuple_7] += value_number
input_np_3d[index_tuple_7] += value_number
assert np.allclose(input_tensor_3d.asnumpy(), input_np_3d, 0.0001, 0.0001)
with pytest.raises(IndexError):
input_tensor_3d[index_tuple_5] *= value_number
with pytest.raises(IndexError):
input_tensor_3d[index_tuple_6] %= value_number
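The tuple cases combine a slice, an ellipsis, and an integer list in one index; for `index_tuple_1` the selected block has shape (2, 5, 3), which is exactly what the `np.ones((2, 5, 3))` value above must match. A NumPy-only sketch:

```python
import numpy as np

a = np.arange(120).reshape(4, 5, 6).astype(np.float32)
sel = (slice(1, 3, 1), ..., [1, 3, 2])
assert a[sel].shape == (2, 5, 3)      # rows 1-2, all of axis 1, columns 1, 3, 2
a[sel] += 2
assert a[1, 0, 1] == 33.0             # was 31 before the update
```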
# --- test/_e2e/update_templates/python/http/func.py (senthilnathan/kn-plugin-func, Apache-2.0) ---
from parliament import Context
def main(context: Context):
return "HELLO PYTHON FUNCTION", 200
# --- application/cms/__init__.py (AlexKouzy/ethnicity-facts-and-figures-publisher, MIT) ---
from flask import Blueprint
cms_blueprint = Blueprint("cms", __name__, url_prefix="/cms")
from application.cms.views import create_measure # noqa
# --- urbanonto_tools/ontology_initial_import/start_ups/import_validator.py (przemekgradzki/urbanonto, MIT) ---
import sys
from ontology_initial_import.orchestrators.excel_import_validation_orchestrator import orchestrate_import_validation
orchestrate_import_validation(excel_file_path=str(sys.argv[1]))
# --- create_database.py (flebel/littlesleeper_noise_recorder, BSD-3-Clause) ---
#!/usr/bin/env python
from recorder import engine
from recorder.models import Base, NoiseEvent, NoiseSource
Base.metadata.create_all(engine)
# --- study/hfppnetwork/hfppnetwork/sms/tests.py (NASA-Tournament-Lab/CoECI-CMS-Healthcare-Fraud-Prevention, Apache-2.0) ---
from django.test import TestCase
from hfppnetwork.sms.models import Partner, Study, BeneficiaryClaimData,\
CarrierClaimData, InpatientClaimData, OutpatientClaimData
from django.http.response import HttpResponseRedirect
from django.contrib.auth.models import User
from xml.etree import ElementTree
from hfppnetwork.sms import helper
# Create your tests here.
def test_data_create(request):
if not isinstance(request.user, User):
return HttpResponseRedirect('/login/')
#helper.pull_hub_roles()
#helper.pull_hub_partner('091f80d7-8ecb-429c-8f0b-caeaae18dcd8');
#helper.add_hub_partner('user4', 'org4', '1', 'false', 'pass4')
#helper.edit_hub_partner('349d9967-7bc1-4f0b-ba0f-150f8861fa98', 'user4e', 'org4e', '1', 'false', 'pass4e')
#helper.delete_hub_partner('349d9967-7bc1-4f0b-ba0f-150f8861fa98')
Partner.objects.all().delete()
# Partner.objects.create() already saves the row; no explicit .save() is needed.
Partner.objects.create(hfpp_network_id='1', company_name="partner 1 company", city="C", state=1,
                       region='region', division='division', number_of_insured=0, owner=request.user,
                       count_of_data_requests_received=0,
                       count_of_data_requests_sent=0,
                       count_of_data_requests_declined=0,
                       count_of_data_requests_responded=0,
                       count_of_data_requests_pending=0,
                       reciprocity=0)
Partner.objects.create(hfpp_network_id='2', company_name="partner 2 company", city="C", state=1,
                       region='region', division='division', number_of_insured=0, owner=request.user,
                       count_of_data_requests_received=0,
                       count_of_data_requests_sent=0,
                       count_of_data_requests_declined=0,
                       count_of_data_requests_responded=0,
                       count_of_data_requests_pending=0,
                       reciprocity=0)
"""
# Bulk variant kept for reference: seeds twelve test partners in a loop,
# equivalent to the twelve near-identical create() calls it replaces.
for i in range(1, 13):
    Partner.objects.create(hfpp_network_id='hfpp_partner_%d' % i,
                           company_name="partner %d company" % i, city="C", state=1,
                           region='region', division='division', number_of_insured=0,
                           owner=request.user,
                           count_of_data_requests_received=0,
                           count_of_data_requests_sent=0,
                           count_of_data_requests_declined=0,
                           count_of_data_requests_responded=0,
                           count_of_data_requests_pending=0,
                           reciprocity=10000.00)
"""
    return HttpResponseRedirect('/studies')


def test_data_clear(request):
    if not isinstance(request.user, User):
        return HttpResponseRedirect('/login/')
    Partner.objects.all().delete()
    return HttpResponseRedirect('/studies/')
def test_parse(request):
    print('!!!', BeneficiaryClaimData().bene_birth_dt)
    study = Study.objects.get(pk=16)

    root = ElementTree.parse('test_files/beneficiary_summary.xml')
    # ElementTree paths should not start with '//'; use './/' for a recursive search.
    for beneficiary_summary in root.findall('.//BeneficiarySummary'):
        print('!!!code', beneficiary_summary.find('./BeneficiaryCode').text)
        properties = helper.parseBeneficiaryClaim({}, beneficiary_summary)
        print(properties)
        BeneficiaryClaimData.objects.create(study=study, **properties)

    root = ElementTree.parse('test_files/carrier_claim.xml')
    for carrier_claim in root.findall('.//CarrierClaim'):
        properties = helper.parseCarrierClaimData({}, carrier_claim)
        print(properties)
        CarrierClaimData.objects.create(study=study, **properties)

    root = ElementTree.parse('test_files/inpatient_claim.xml')
    for inpatient_claim in root.findall('.//InpatientClaim'):
        properties = helper.parseInpatientClaimData({}, inpatient_claim)
        print(properties)
        InpatientClaimData.objects.create(study=study, **properties)

    root = ElementTree.parse('test_files/outpatient_claim.xml')
    for outpatient_claim in root.findall('.//OutpatientClaim'):
        properties = helper.parseOutpatientClaimData({}, outpatient_claim)
        print(properties)
        OutpatientClaimData.objects.create(study=study, **properties)

    return HttpResponseRedirect('/studies')

# --- src/factiva/news/stream/__init__.py | repo: cerritows/factiva-news-python | license: MIT ---

from factiva.news import BulkNewsBase


class Stream(BulkNewsBase):
    pass

# --- tests/conftest.py | repo: jaisw7/shenfun | license: BSD-2-Clause ---

import os
import pytest


def pytest_configure(config):
    os.environ['pytest'] = 'True'


def pytest_unconfigure(config):
    del os.environ['pytest']
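The conftest above advertises test runs through an environment variable, so library code can branch on whether it is running under pytest. A minimal sketch of that pattern; `in_pytest` is a hypothetical helper, not part of shenfun:

```python
import os


def in_pytest():
    # Hypothetical helper: detects the flag that pytest_configure()
    # above sets and pytest_unconfigure() removes.
    return os.environ.get('pytest') == 'True'


os.environ['pytest'] = 'True'   # what pytest_configure(config) does
assert in_pytest()
del os.environ['pytest']        # what pytest_unconfigure(config) does
assert not in_pytest()
```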

# --- tests/test_pools_fyi.py | repo: jhhb/pydefipulsedata | license: MIT ---

import unittest
import responses

from defipulsedata import PoolsFyi


class TestWrapper(unittest.TestCase):
    @responses.activate
    def test_get_exchanges(self):
        url_without_params = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/exchanges?api-key=mock-key'
        responses.add(responses.GET, url_without_params, json='{}', status=200)

        PoolsFyi(api_key='mock-key').get_exchanges()
        self.assertEqual(responses.calls[0].request.url, url_without_params)

        responses.reset()
        url_with_params = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/exchanges?tags=stable&platform=bancor&direction=asc&orderBy=platform&offset=1&limit=200&api-key=mock-key'
        all_params = {
            'tags': 'stable',
            'platform': 'bancor',
            'direction': 'asc',
            'orderBy': 'platform',
            'offset': 1,
            'limit': 200,
        }
        responses.add(responses.GET, url_with_params, json='{}', status=200)

        PoolsFyi(api_key='mock-key').get_exchanges(params=all_params)
        self.assertEqual(
            responses.calls[0].request.url,
            url_with_params,
            'it correctly serializes the query params',
        )

    @responses.activate
    def test_get_returns(self):
        address = '0x0000000000000000000000000000000000000000'
        expected_url = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/returns/0x0000000000000000000000000000000000000000?api-key=mock-key'
        responses.add(responses.GET, expected_url, json='{}', status=200)

        PoolsFyi(api_key='mock-key').get_returns(address=address)
        self.assertEqual(responses.calls[0].request.url, expected_url)

    @responses.activate
    def test_get_liquidity(self):
        address = '0x0000000000000000000000000000000000000000'
        expected_url = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v0/liquidity/0x0000000000000000000000000000000000000000?api-key=mock-key'
        responses.add(responses.GET, expected_url, json='{}', status=200)

        PoolsFyi(api_key='mock-key').get_liquidity(address=address)
        self.assertEqual(responses.calls[0].request.url, expected_url)

    @responses.activate
    def test_get_exchange(self):
        address = '0x0000000000000000000000000000000000000000'
        expected_url = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/exchange/0x0000000000000000000000000000000000000000?api-key=mock-key'
        responses.add(responses.GET, expected_url, json='{}', status=200)

        PoolsFyi(api_key='mock-key').get_exchange(address=address)
        self.assertEqual(responses.calls[0].request.url, expected_url)

    @responses.activate
    def test_get_trades(self):
        address = '0x0000000000000000000000000000000000000000'
        url_without_params = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/trades/0x0000000000000000000000000000000000000000?api-key=mock-key'
        responses.add(responses.GET, url_without_params, json='{}', status=200)

        PoolsFyi(api_key='mock-key').get_trades(address=address)
        self.assertEqual(responses.calls[0].request.url, url_without_params)

        responses.reset()
        url_with_all_params = 'https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/trades/0x0000000000000000000000000000000000000000?from=2020-10-21&to=2020-10-31&platform=bancor&direction=asc&orderBy=platform&offset=1&limit=200&api-key=mock-key'
        responses.add(responses.GET, url_with_all_params, json='{}', status=200)
        all_params = {
            'from': '2020-10-21',
            'to': '2020-10-31',
            'platform': 'bancor',
            'direction': 'asc',
            'orderBy': 'platform',
            'offset': 1,
            'limit': 200,
        }

        PoolsFyi(api_key='mock-key').get_trades(address=address, params=all_params)
        self.assertEqual(responses.calls[0].request.url, url_with_all_params)
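The URLs asserted in the tests above follow a simple pattern: a fixed base, an endpoint path, optional query parameters in insertion order, and the `api-key` appended last. A hedged sketch of one way such a URL could be assembled; this is illustrative, not the library's actual implementation:

```python
from urllib.parse import urlencode

BASE = 'https://data-api.defipulse.com/api/v1/blocklytics/pools'


def build_url(endpoint, api_key, params=None):
    # Assumption: params keep insertion order (Python 3.7+ dicts) and
    # api-key goes last, matching the URLs asserted in the tests above.
    query = dict(params or {})
    query['api-key'] = api_key
    return '%s/%s?%s' % (BASE, endpoint, urlencode(query))


print(build_url('v1/exchanges', 'mock-key'))
# https://data-api.defipulse.com/api/v1/blocklytics/pools/v1/exchanges?api-key=mock-key
```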

# --- scripts/validation/single_task_validator_test.py | repo: malawski/cloudworkflowsimulator | license: Apache-2.0 ---

import unittest
from validation import single_task_validator
from validation.parsed_log_loader import TaskLog, TransferLog, VMLog

IRRELEVANT_TASK_ATTRIBUTES = {
    'id': 'some_id',
    'workflow': 'some_workflow',
    'task_id': 'some_task_id',
    'vm': 1,
    'result': 'OK',
}

IRRELEVANT_TRANSFER_ATTRIBUTES = {
    'id': 'some_id',
    'vm': 1,
    'direction': 'UPLOAD',
    'job_id': 23,
    'file_id': 'file.txt',
}

IRRELEVANT_VM_ATTRIBUTES = {
    'id': 'some_id',
    'price_for_billing_unit': 1.,
    'cores': 1,
}


class SingleTaskValidatorTest(unittest.TestCase):
    def test_should_pass_when_valid_task(self):
        task = TaskLog(started=3.0, finished=5.0, **IRRELEVANT_TASK_ATTRIBUTES)
        result = single_task_validator.validate_task(task)
        self.assertTrue(result.is_valid)

    def test_should_return_some_message_when_fails(self):
        task = TaskLog(started=single_task_validator.MISSING_VALUE,
                       finished=single_task_validator.MISSING_VALUE,
                       **IRRELEVANT_TASK_ATTRIBUTES)
        result = single_task_validator.validate_task(task)
        self.assertTrue(result.message)

    def test_should_fail_when_task_has_not_started(self):
        task = TaskLog(started=single_task_validator.MISSING_VALUE,
                       finished=5.0, **IRRELEVANT_TASK_ATTRIBUTES)
        result = single_task_validator.validate_task(task)
        self.assertFalse(result.is_valid)

    def test_should_fail_when_task_has_not_ended(self):
        task = TaskLog(finished=single_task_validator.MISSING_VALUE,
                       started=5.0, **IRRELEVANT_TASK_ATTRIBUTES)
        result = single_task_validator.validate_task(task)
        self.assertFalse(result.is_valid)

    def test_should_hold_task_time_order(self):
        task = TaskLog(started=5.0, finished=3.0, **IRRELEVANT_TASK_ATTRIBUTES)
        result = single_task_validator.validate_task(task)
        self.assertFalse(result.is_valid)

    def test_should_pass_when_valid_transfer(self):
        task = TransferLog(started=3.0, finished=5.0, **IRRELEVANT_TRANSFER_ATTRIBUTES)
        result = single_task_validator.validate_transfer(task)
        self.assertTrue(result.is_valid)

    def test_should_return_some_message_when_transfer_validation_fails(self):
        task = TransferLog(started=single_task_validator.MISSING_VALUE,
                           finished=single_task_validator.MISSING_VALUE,
                           **IRRELEVANT_TRANSFER_ATTRIBUTES)
        result = single_task_validator.validate_transfer(task)
        self.assertTrue(result.message)

    def test_should_fail_when_transfer_has_not_started(self):
        task = TransferLog(started=single_task_validator.MISSING_VALUE,
                           finished=5.0, **IRRELEVANT_TRANSFER_ATTRIBUTES)
        result = single_task_validator.validate_transfer(task)
        self.assertFalse(result.is_valid)

    def test_should_fail_when_transfer_has_not_ended(self):
        task = TransferLog(finished=single_task_validator.MISSING_VALUE,
                           started=5.0, **IRRELEVANT_TRANSFER_ATTRIBUTES)
        result = single_task_validator.validate_transfer(task)
        self.assertFalse(result.is_valid)

    def test_should_hold_transfer_time_order(self):
        task = TransferLog(started=5.0, finished=3.0, **IRRELEVANT_TRANSFER_ATTRIBUTES)
        result = single_task_validator.validate_transfer(task)
        self.assertFalse(result.is_valid)

    def test_should_pass_when_valid_vm(self):
        task = VMLog(started=3.0, finished=5.0, **IRRELEVANT_VM_ATTRIBUTES)
        result = single_task_validator.validate_vm(task)
        self.assertTrue(result.is_valid)

    def test_should_return_some_message_when_vm_validation_fails(self):
        task = VMLog(started=single_task_validator.MISSING_VALUE,
                     finished=single_task_validator.MISSING_VALUE,
                     **IRRELEVANT_VM_ATTRIBUTES)
        result = single_task_validator.validate_vm(task)
        self.assertTrue(result.message)

    def test_should_fail_when_vm_has_not_started(self):
        task = VMLog(started=single_task_validator.MISSING_VALUE,
                     finished=5.0, **IRRELEVANT_VM_ATTRIBUTES)
        result = single_task_validator.validate_vm(task)
        self.assertFalse(result.is_valid)

    def test_should_fail_when_vm_has_not_ended(self):
        task = VMLog(finished=single_task_validator.MISSING_VALUE,
                     started=5.0, **IRRELEVANT_VM_ATTRIBUTES)
        result = single_task_validator.validate_vm(task)
        self.assertFalse(result.is_valid)

    def test_should_hold_vm_time_order(self):
        task = VMLog(started=5.0, finished=3.0, **IRRELEVANT_VM_ATTRIBUTES)
        result = single_task_validator.validate_vm(task)
        self.assertFalse(result.is_valid)


if __name__ == '__main__':
    unittest.main()
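The contract these tests exercise is narrow: each validator returns an object with `is_valid` and `message`, and fails when either timestamp is `MISSING_VALUE` or the end precedes the start. A hypothetical reimplementation of that contract for illustration; the real logic lives in `validation/single_task_validator.py`, and the `MISSING_VALUE` sentinel here is an assumption:

```python
from collections import namedtuple

MISSING_VALUE = None  # assumption: sentinel used by the real validator
ValidationResult = namedtuple('ValidationResult', 'is_valid message')


def validate_interval(started, finished):
    # Mirrors the three failure modes the tests above check for.
    if started == MISSING_VALUE:
        return ValidationResult(False, 'event has not started')
    if finished == MISSING_VALUE:
        return ValidationResult(False, 'event has not finished')
    if finished < started:
        return ValidationResult(False, 'event finished before it started')
    return ValidationResult(True, '')


assert validate_interval(3.0, 5.0).is_valid
assert not validate_interval(5.0, 3.0).is_valid
```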

# --- test-framework/test-suites/integration/tests/add/test_add_host_route.py | repo: knutsonchris/stacki | license: BSD-3-Clause ---

import json
from textwrap import dedent


class TestAddHostRoute:
    def test_no_args(self, host):
        result = host.run('stack add host route')
        assert result.rc == 255
        assert result.stderr == dedent('''\
            error - "host" argument is required
            {host ...} {address=string} {gateway=string} [interface=string] [netmask=string] [syncnow=string]
            ''')

    def test_no_host(self, host):
        result = host.run(
            'stack add host route address=192.168.0.2 gateway=192.168.0.1'
        )
        assert result.rc == 255
        assert result.stderr == dedent('''\
            error - "host" argument is required
            {host ...} {address=string} {gateway=string} [interface=string] [netmask=string] [syncnow=string]
            ''')

    def test_no_address(self, host):
        result = host.run(
            'stack add host route frontend-0-0 gateway=192.168.0.1'
        )
        assert result.rc == 255
        assert result.stderr == dedent('''\
            error - "address" parameter is required
            {host ...} {address=string} {gateway=string} [interface=string] [netmask=string] [syncnow=string]
            ''')

    def test_no_gateway(self, host):
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.2'
        )
        assert result.rc == 255
        assert result.stderr == dedent('''\
            error - "gateway" parameter is required
            {host ...} {address=string} {gateway=string} [interface=string] [netmask=string] [syncnow=string]
            ''')

    def test_with_subnet(self, host):
        # Add the route
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.2 gateway=private'
        )
        assert result.rc == 0

        # Check that it is there now
        result = host.run('stack list host route frontend-0-0 output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.255',
                'network': '192.168.0.2',
                'source': 'H',
                'subnet': 'private'
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.0',
                'network': '224.0.0.0',
                'source': 'G',
                'subnet': 'private'
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.255',
                'network': '255.255.255.255',
                'source': 'G',
                'subnet': 'private'
            }
        ]

    def test_with_gateway_and_netmask(self, host):
        # Add the route
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.2 '
            'gateway=192.168.0.1 netmask=255.255.255.0'
        )
        assert result.rc == 0

        # Check that it is there now
        result = host.run('stack list host route frontend-0-0 output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'gateway': '192.168.0.1',
                'host': 'frontend-0-0',
                'interface': None,
                'netmask': '255.255.255.0',
                'network': '192.168.0.2',
                'source': 'H',
                'subnet': None
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.0',
                'network': '224.0.0.0',
                'source': 'G',
                'subnet': 'private'
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.255',
                'network': '255.255.255.255',
                'source': 'G',
                'subnet': 'private'
            }
        ]

    def test_with_interface(self, host):
        # Add the route
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.2 '
            'gateway=192.168.0.1 interface=eth0'
        )
        assert result.rc == 0

        # Check that it is there now
        result = host.run('stack list host route frontend-0-0 output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'gateway': '192.168.0.1',
                'host': 'frontend-0-0',
                'interface': 'eth0',
                'netmask': '255.255.255.255',
                'network': '192.168.0.2',
                'source': 'H',
                'subnet': None
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.0',
                'network': '224.0.0.0',
                'source': 'G',
                'subnet': 'private'
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.255',
                'network': '255.255.255.255',
                'source': 'G',
                'subnet': 'private'
            }
        ]

    def test_duplicate(self, host, add_environment):
        # Add the route
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.2 '
            'gateway=192.168.0.1 netmask=255.255.255.0'
        )
        assert result.rc == 0

        # Add it again and make sure it errors out
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.2 '
            'gateway=192.168.0.1 netmask=255.255.255.0'
        )
        assert result.rc == 255
        assert result.stderr == 'error - route for "192.168.0.2" already exists\n'

    def test_with_syncnow(self, host, revert_routing_table, revert_etc):
        # Add a route with sync now so it is added to the routing table
        result = host.run(
            'stack add host route frontend-0-0 address=192.168.0.3 '
            'gateway=192.168.0.2 interface=eth1 syncnow=true'
        )
        assert result.rc == 0

        # Confirm it is in the DB
        result = host.run('stack list host route frontend-0-0 output-format=json')
        assert result.rc == 0
        assert json.loads(result.stdout) == [
            {
                'gateway': '192.168.0.2',
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.255',
                'network': '192.168.0.3',
                'source': 'H',
                'subnet': None
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.0',
                'network': '224.0.0.0',
                'source': 'G',
                'subnet': 'private'
            },
            {
                'gateway': None,
                'host': 'frontend-0-0',
                'interface': 'eth1',
                'netmask': '255.255.255.255',
                'network': '255.255.255.255',
                'source': 'G',
                'subnet': 'private'
            }
        ]

        # Also check that the test route is in our routing table
        result = host.run('ip route list')
        assert result.rc == 0
        assert '192.168.0.3 via 192.168.0.2 dev eth1' in result.stdout

# --- zephyr/backend/Tests/test_MiniZephyr.py | repo: uwoseis/zephyr-cli | license: MIT ---

import unittest
import numpy as np

from zephyr.backend import MiniZephyr, MiniZephyr25D, SimpleSource, AnalyticalHelmholtz


class TestMiniZephyr(unittest.TestCase):

    @staticmethod
    def _elementNorm(arr):
        return np.sqrt((arr.conj()*arr).sum()) / arr.size

    def setUp(self):
        pass

    @staticmethod
    def test_cleanExecution():
        systemConfig = {
            'c':    2500.,  # m/s
            'rho':  1.,     # density
            'nx':   100,    # count
            'nz':   200,    # count
            'freq': 2e2,
        }

        xs = 50
        zs = 100
        sloc = np.array([xs, zs]).reshape((1, 2))

        Ainv = MiniZephyr(systemConfig)
        src = SimpleSource(systemConfig)

        q = src(sloc)
        u = Ainv*q

    @staticmethod
    def test_cleanExecution25D():
        systemConfig = {
            'c':        2500.,  # m/s
            'rho':      1.,     # density
            'nx':       100,    # count
            'nz':       200,    # count
            'freq':     2e2,
            'nky':      4,
            'parallel': False,
        }

        xs = 50
        zs = 100
        sloc = np.array([xs, zs]).reshape((1, 2))

        Ainv = MiniZephyr25D(systemConfig)
        src = SimpleSource(systemConfig)

        q = src(sloc)
        u = Ainv*q

    @staticmethod
    def test_cleanExecution25DParallel():
        systemConfig = {
            'c':        2500.,  # m/s
            'rho':      1.,     # density
            'nx':       100,    # count
            'nz':       200,    # count
            'freq':     2e2,
            'nky':      4,
            'parallel': True,
        }

        xs = 50
        zs = 100
        sloc = np.array([xs, zs]).reshape((1, 2))

        Ainv = MiniZephyr25D(systemConfig)
        src = SimpleSource(systemConfig)

        q = src(sloc)
        u = Ainv*q

    def test_compareAnalytical(self):
        systemConfig = {
            'c':    2500.,  # m/s
            'rho':  1.,     # kg/m^3
            'nx':   100,    # count
            'nz':   200,    # count
            'freq': 2e2,    # Hz
        }

        xs = 25
        zs = 25
        sloc = np.array([xs, zs]).reshape((1, 2))

        Ainv = MiniZephyr(systemConfig)
        src = SimpleSource(systemConfig)

        q = src(sloc)
        uMZ = Ainv*q

        AH = AnalyticalHelmholtz(systemConfig)
        uAH = AH(sloc)

        nx = systemConfig['nx']
        nz = systemConfig['nz']

        uMZr = uMZ.reshape((nz, nx))
        uAHr = uAH.reshape((nz, nx))

        segAHr = uAHr[40:180, 40:80]
        segMZr = uMZr[40:180, 40:80]

        error = self._elementNorm((segAHr - segMZr) / abs(segAHr))
        self.assertTrue(error < 1e-2)

    def test_compareAnalytical25D(self):
        systemConfig = {
            'c':    2500.,  # m/s
            'rho':  1.,     # kg/m^3
            'nx':   100,    # count
            'nz':   200,    # count
            'freq': 2e2,    # Hz
            'nky':  20,
            '3D':   True,
        }

        xs = 25
        zs = 25
        sloc = np.array([xs, zs]).reshape((1, 2))

        Ainv = MiniZephyr25D(systemConfig)
        src = SimpleSource(systemConfig)

        q = src(sloc)
        uMZ = Ainv*q

        AH = AnalyticalHelmholtz(systemConfig)
        uAH = AH(sloc)

        nx = systemConfig['nx']
        nz = systemConfig['nz']

        uMZr = uMZ.reshape((nz, nx))
        uAHr = uAH.reshape((nz, nx))

        segAHr = uAHr[40:180, 40:80]
        segMZr = uMZr[40:180, 40:80]

        error = self._elementNorm((segAHr - segMZr) / abs(segAHr))
        print(error)
        self.assertTrue(error < 1e-2)


if __name__ == '__main__':
    unittest.main()
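The `_elementNorm` helper above is the L2 norm of the (possibly complex) residual divided by the element count, used to compare the numerical field against the analytical one. A stdlib-only restatement of the same quantity, for illustration rather than as a replacement for the numpy version:

```python
import math


def element_norm(values):
    # Same quantity as _elementNorm above, without numpy:
    # sqrt(sum(|v|^2)) divided by the number of elements.
    total = sum(abs(complex(v)) ** 2 for v in values)
    return math.sqrt(total) / len(values)


print(element_norm([3, 4]))  # sqrt(9 + 16) / 2 = 2.5
```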

# --- app/processor.py | repo: isakcodes/website | license: MIT ---

from app.utils import get_substitutions_templates


def variables_processor(request=None):
    c = get_substitutions_templates()
    return c
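`variables_processor` has the shape of a Django template context processor: it takes the request and returns a dict merged into every template context. If it is wired up that way (an assumption; the repo could equally use another framework), the registration would look roughly like this, with the dotted path assumed from the file location:

```python
# settings.py (sketch) -- 'app.processor.variables_processor' assumes
# this file lives at app/processor.py in the project.
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.request',
                'app.processor.variables_processor',  # the function above
            ],
        },
    },
]
```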

# --- afnumpy/linalg/__init__.py | repo: FilipeMaia/afnumpy | license: BSD-2-Clause ---

from .linalg import *

# --- ftpclient/__init__.py | repo: gusenov/ftp-client-py | license: MIT ---

from ftpclient.ftp_item_type import *
from ftpclient.ftp_item import *
from ftpclient.ftp_item_iterator import *
from ftpclient.ftp_connection import *
from ftpclient.ftp_utils import *
from ftpclient.logger import *

# --- captcha/commands/__init__.py | repo: crafter-hub/Kreusada-Cogs | license: MIT ---

from .global_settings import OwnerCommands
from .settings import Settings

# --- libotp/__init__.py | repo: P1ayerOne/src | license: BSD-3-Clause ---

from .movement.CImpulse import CImpulse
from .movement.CMover import CMover
from .movement.CMoverGroup import CMoverGroup
from .nametag import *
from .settings.Settings import Settings

# --- enginessl/__init__.py | repo: XXXalice/EngineSSL | license: MIT ---

#Anywhere module!!!!!!!!!!!!!!!!!!!!!!!
9442d1e50a29b44080b985d5f24c734e56c78dc7 | 178 | py | Python | practicer/app.py | DominikPott/practicer | 1e0f10d3cc9ec17ead067708e3334223fbeb72ea | [
"MIT"
] | 1 | 2021-10-01T09:15:08.000Z | 2021-10-01T09:15:08.000Z | practicer/app.py | DominikPott/practicer | 1e0f10d3cc9ec17ead067708e3334223fbeb72ea | [
"MIT"
] | 3 | 2021-04-18T11:13:25.000Z | 2021-04-19T16:36:47.000Z | practicer/app.py | DominikPott/practicer | 1e0f10d3cc9ec17ead067708e3334223fbeb72ea | [
"MIT"
] | null | null | null | import practicer.api
import practicer.gui.pyside.app
if __name__ == '__main__':
exercises = practicer.api.exercises()
practicer.gui.pyside.app.run(exercises=exercises)
| 22.25 | 53 | 0.758427 | 22 | 178 | 5.772727 | 0.5 | 0.23622 | 0.283465 | 0.330709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123596 | 178 | 7 | 54 | 25.428571 | 0.814103 | 0 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
946e852e1462f87955c3433bcaa5d899a662d3ae | 307 | py | Python | core/dal/migration.py | oboforty/metaindex | 290d6b581fb1c074e28d42dc750ab878585e2eb2 | [
"MIT"
] | null | null | null | core/dal/migration.py | oboforty/metaindex | 290d6b581fb1c074e28d42dc750ab878585e2eb2 | [
"MIT"
] | null | null | null | core/dal/migration.py | oboforty/metaindex | 290d6b581fb1c074e28d42dc750ab878585e2eb2 | [
"MIT"
] | null | null | null | from .entities.dbdata.ChEBIData import ChEBIData
from .entities.dbdata.HMDBData import HMDBData
from .entities.dbdata.PubChemData import PubChemData
from .entities.dbdata.LipidMapsData import LipidMapsData
from .entities.dbdata.KEGGData import KeggData
from .entities.SecondaryID import SecondaryID
| 38.375 | 57 | 0.840391 | 35 | 307 | 7.371429 | 0.285714 | 0.27907 | 0.348837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104235 | 307 | 7 | 58 | 43.857143 | 0.938182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
947059b90fc2746b22af47acf0256d37579e20f6 | 44 | py | Python | Services/__init__.py | carlCarlson6/NERwithBERT | 109733c3816e39b0eff201a3e69acddf8a121844 | [
"MIT"
] | 1 | 2020-10-11T08:47:43.000Z | 2020-10-11T08:47:43.000Z | Services/__init__.py | carlCarlson6/NERwithBERT | 109733c3816e39b0eff201a3e69acddf8a121844 | [
"MIT"
] | null | null | null | Services/__init__.py | carlCarlson6/NERwithBERT | 109733c3816e39b0eff201a3e69acddf8a121844 | [
"MIT"
] | null | null | null | from Services.DataService import DataService | 44 | 44 | 0.909091 | 5 | 44 | 8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
849c85646aa206f320a608a3bff28781b87728e1 | 160 | py | Python | apps/gallery/templatetags/template_filters.py | mrtaalebi/sitigo | cce8b4f5299b58d7365789ead416d4568b443743 | [
"Apache-2.0"
] | null | null | null | apps/gallery/templatetags/template_filters.py | mrtaalebi/sitigo | cce8b4f5299b58d7365789ead416d4568b443743 | [
"Apache-2.0"
] | 8 | 2020-02-12T01:02:15.000Z | 2022-03-11T23:53:39.000Z | apps/gallery/templatetags/template_filters.py | mrtaalebi/sitigo | cce8b4f5299b58d7365789ead416d4568b443743 | [
"Apache-2.0"
] | null | null | null | from django import template
register = template.Library()
@register.filter
def modulo(a, b):
return a % b
@register.filter(name='len')
def length(a):
    return len(a)
| 13.333333 | 29 | 0.69375 | 24 | 160 | 4.625 | 0.541667 | 0.252252 | 0.306306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19375 | 160 | 11 | 30 | 14.545455 | 0.860465 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0.25 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
84c93f4bee43a3ba74663eaaa4af82078e49a741 | 165 | py | Python | students/K33401/Savonik_Nikita/Lr4/api/admin.py | Bot228/ITMO_ICT_WebDevelopment_2020-2021 | 4d3691507c2f01eb4b905f4e40c1e59de850f72d | [
"MIT"
] | null | null | null | students/K33401/Savonik_Nikita/Lr4/api/admin.py | Bot228/ITMO_ICT_WebDevelopment_2020-2021 | 4d3691507c2f01eb4b905f4e40c1e59de850f72d | [
"MIT"
] | null | null | null | students/K33401/Savonik_Nikita/Lr4/api/admin.py | Bot228/ITMO_ICT_WebDevelopment_2020-2021 | 4d3691507c2f01eb4b905f4e40c1e59de850f72d | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import *
admin.site.register(User)
admin.site.register(Car)
admin.site.register(Order)
admin.site.register(CarToOrder) | 23.571429 | 32 | 0.812121 | 24 | 165 | 5.583333 | 0.5 | 0.268657 | 0.507463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 165 | 7 | 33 | 23.571429 | 0.875817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
84d4a7638b432ccc7712a9a3cffb5d937964aead | 13,158 | py | Python | tests/core/test_rdd.py | Imbruced/geo_pyspark | 26da16d48168789c5f2bb75b5fdec1f515bf9cb1 | [
"Apache-2.0"
] | 7 | 2019-10-10T05:47:37.000Z | 2020-09-08T06:37:03.000Z | tests/core/test_rdd.py | Imbruced/geo_pyspark | 26da16d48168789c5f2bb75b5fdec1f515bf9cb1 | [
"Apache-2.0"
] | 3 | 2019-12-16T16:49:57.000Z | 2021-08-23T20:43:32.000Z | tests/core/test_rdd.py | Imbruced/geo_pyspark | 26da16d48168789c5f2bb75b5fdec1f515bf9cb1 | [
"Apache-2.0"
] | 3 | 2019-10-17T16:10:41.000Z | 2022-01-24T12:56:21.000Z | import logging
from pyspark import StorageLevel
from shapely.geometry import Point
from geo_pyspark.core.SpatialRDD import PointRDD, PolygonRDD, CircleRDD
from geo_pyspark.core.enums import GridType, FileDataSplitter, IndexType
from geo_pyspark.core.enums.join_build_side import JoinBuildSide
from geo_pyspark.core.geom_types import Envelope
from geo_pyspark.core.spatialOperator import RangeQuery, KNNQuery, JoinQuery
from geo_pyspark.core.spatialOperator.join_params import JoinParams
import os
from tests.polygon_properties import polygon_rdd_input_location, polygon_rdd_start_offset, polygon_rdd_end_offset, \
polygon_rdd_splitter, polygon_rdd_index_type
from tests.test_base import TestBase
from tests.tools import tests_path
resource_folder = "resources"
point_rdd_input_location = os.path.join(tests_path, resource_folder, "arealm-small.csv")
point_rdd_splitter = FileDataSplitter.CSV
point_rdd_index_type = IndexType.RTREE
point_rdd_num_partitions = 5
point_rdd_offset = 1
knn_query_point = Point(-84.01, 34.01)
range_query_window = Envelope(-90.01, -80.01, 30.01, 40.01)
join_query_partitionin_type = GridType.QUADTREE
each_query_loop_times = 1
class TestSpatialRDD(TestBase):
def test_empty_constructor_test(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd_copy = PointRDD()
object_rdd_copy.rawJvmSpatialRDD = object_rdd.rawJvmSpatialRDD
object_rdd_copy.analyze()
def test_spatial_range_query(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False)
for i in range(each_query_loop_times):
result_size = RangeQuery.SpatialRangeQuery(
object_rdd, range_query_window, False, False
).count()
logging.info(result_size)
def test_range_query_using_index(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.buildIndex(point_rdd_index_type, False)
for i in range(each_query_loop_times):
result_size = RangeQuery.SpatialRangeQuery(
                object_rdd, range_query_window, False, True).count()
def test_knn_query(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
for i in range(each_query_loop_times):
result = KNNQuery.SpatialKnnQuery(object_rdd, knn_query_point, 1000, False)
def test_knn_query_with_index(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.buildIndex(point_rdd_index_type, False)
for i in range(each_query_loop_times):
result = KNNQuery.SpatialKnnQuery(object_rdd, knn_query_point, 1000, True)
    def test_spatial_join(self):
query_window_rdd = PolygonRDD(
self.sc,
polygon_rdd_input_location,
polygon_rdd_start_offset,
polygon_rdd_end_offset,
polygon_rdd_splitter,
True
)
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.analyze()
object_rdd.spatialPartitioning(join_query_partitionin_type)
query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
for x in range(each_query_loop_times):
result_size = JoinQuery.SpatialJoinQuery(
                object_rdd, query_window_rdd, False, True).count()
def test_spatial_join_using_index(self):
query_window = PolygonRDD(
self.sc,
polygon_rdd_input_location,
polygon_rdd_start_offset,
polygon_rdd_end_offset,
polygon_rdd_splitter,
True
)
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.analyze()
object_rdd.spatialPartitioning(join_query_partitionin_type)
query_window.spatialPartitioning(object_rdd.getPartitioner())
object_rdd.buildIndex(point_rdd_index_type, True)
for i in range(each_query_loop_times):
result_size = JoinQuery.SpatialJoinQuery(
object_rdd, query_window, True, False).count()
def test_spatial_join_using_index_on_polygons(self):
query_window = PolygonRDD(
self.sc,
polygon_rdd_input_location,
polygon_rdd_start_offset,
polygon_rdd_end_offset,
polygon_rdd_splitter,
True
)
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.analyze()
object_rdd.spatialPartitioning(join_query_partitionin_type)
query_window.spatialPartitioning(object_rdd.getPartitioner())
query_window.buildIndex(polygon_rdd_index_type, True)
for i in range(each_query_loop_times):
result_size = JoinQuery.SpatialJoinQuery(
object_rdd,
query_window,
True,
False
).count()
def test_spatial_join_query_using_index_on_polygons(self):
query_window_rdd = PolygonRDD(
self.sc,
polygon_rdd_input_location,
polygon_rdd_start_offset,
polygon_rdd_end_offset,
polygon_rdd_splitter,
True
)
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.analyze()
object_rdd.spatialPartitioning(join_query_partitionin_type)
query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
for i in range(each_query_loop_times):
result_size = JoinQuery.SpatialJoinQuery(
object_rdd, query_window_rdd, True, False
)
def test_spatial_join_query_and_build_index_on_points_on_the_fly(self):
query_window = PolygonRDD(
self.sc,
polygon_rdd_input_location,
polygon_rdd_start_offset,
polygon_rdd_end_offset,
polygon_rdd_splitter,
True
)
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.analyze()
object_rdd.spatialPartitioning(join_query_partitionin_type)
query_window.spatialPartitioning(object_rdd.getPartitioner())
for i in range(each_query_loop_times):
result_size = JoinQuery.SpatialJoinQuery(
object_rdd,
query_window,
True,
False
).count()
def test_spatial_join_query_and_build_index_on_polygons_on_the_fly(self):
query_window_rdd = PolygonRDD(
self.sc,
polygon_rdd_input_location,
polygon_rdd_start_offset,
polygon_rdd_end_offset,
polygon_rdd_splitter,
True
)
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
object_rdd.analyze()
object_rdd.spatialPartitioning(join_query_partitionin_type)
query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
for i in range(each_query_loop_times):
join_params = JoinParams(False, polygon_rdd_index_type, JoinBuildSide.LEFT)
resultSize = JoinQuery.spatialJoin(
query_window_rdd,
object_rdd,
join_params
).count()
def test_distance_join_query(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
query_window_rdd = CircleRDD(object_rdd, 0.1)
object_rdd.analyze()
object_rdd.spatialPartitioning(GridType.QUADTREE)
query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
for i in range(each_query_loop_times):
result_size = JoinQuery.DistanceJoinQuery(
object_rdd,
query_window_rdd,
False,
True).count()
def test_distance_join_query_using_index(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False
)
query_window_rdd = CircleRDD(object_rdd, 0.1)
object_rdd.analyze()
object_rdd.spatialPartitioning(GridType.QUADTREE)
query_window_rdd.spatialPartitioning(object_rdd.getPartitioner())
object_rdd.buildIndex(IndexType.RTREE, True)
for i in range(each_query_loop_times):
result_size = JoinQuery.DistanceJoinQuery(
object_rdd,
query_window_rdd,
True,
True
            ).count()
def test_earthdata_format_mapper(self):
pass
# input_location = "test/data/modis/modis.csv"
# splitter = FileDataSplitter.CSV
# index_type = IndexType.RTREE
# query_envelope = Envelope(-90.01, -80.01, 30.01, 40.01)
# num_partitions = 5
# loop_times = 1
# hdf_increment = 5
# hdf_offset = 2
# hdf_root_group_name = "MOD_Swath_LST"
# hdf_data_variable_name = "LST"
# url_prefix = "test/resources/modis/"
# hdf_daya_variable_list = ["LST", "QC", "Error_LST", "Emis_31", "Emis_32"]
#
# earth_data_hdf_point = EarthdataHDFPointMapper(
# hdf_increment, hdf_offset, hdf_root_group_name,
# hdf_daya_variable_list, hdf_data_variable_name, url_prefix)
# spatial_rdd = PointRDD(
# sc,
# input_location,
# num_partitions,
# earth_data_hdf_point)
#
# i = 0
# while i < loop_times:
# result_size = 0
# result_size = RangeQuery.SpatialRangeQuery(
# spatial_rdd,
# query_envelope,
# False,
# False
# ).count
# i = i + 1
def test_crs_transformed_spatial_range_query(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False,
newLevel=StorageLevel.DISK_ONLY,
sourceEpsgCRSCode="epsg:4326",
targetEpsgCode="epsg:3005"
)
for i in range(each_query_loop_times):
result_size = RangeQuery.SpatialRangeQuery(
object_rdd, range_query_window, False, False
)
def test_crs_tranformed_spatial_range_query_using_index(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,
Offset=point_rdd_offset,
splitter=point_rdd_splitter,
carryInputData=False,
newLevel=StorageLevel.DISK_ONLY,
sourceEpsgCRSCode="epsg:4326",
targetEpsgCode="epsg:3005"
)
object_rdd.buildIndex(point_rdd_index_type, False)
for i in range(each_query_loop_times):
result_size = RangeQuery.SpatialRangeQuery(
object_rdd,
range_query_window,
False,
True
            ).count()
| 34.901857 | 116 | 0.634595 | 1,381 | 13,158 | 5.637219 | 0.117306 | 0.073988 | 0.04727 | 0.04316 | 0.770841 | 0.755556 | 0.747848 | 0.734489 | 0.734489 | 0.721516 | 0 | 0.008937 | 0.302706 | 13,158 | 376 | 117 | 34.994681 | 0.839564 | 0.065815 | 0 | 0.721854 | 0 | 0 | 0.004976 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05298 | false | 0.003311 | 0.043046 | 0 | 0.099338 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
84f3c6661549303c63945b738e354e05cc7cc546 | 331 | py | Python | run_tests.py | bibikar/optimizations_bench | 683767e7caaae804f95220feee5a76b016199d21 | [
"MIT"
] | 3 | 2017-05-10T11:09:17.000Z | 2019-05-14T14:04:19.000Z | run_tests.py | bibikar/optimizations_bench | 683767e7caaae804f95220feee5a76b016199d21 | [
"MIT"
] | 4 | 2017-04-15T12:03:23.000Z | 2019-07-25T18:01:57.000Z | run_tests.py | bibikar/optimizations_bench | 683767e7caaae804f95220feee5a76b016199d21 | [
"MIT"
] | 4 | 2017-04-15T12:07:42.000Z | 2020-04-16T01:36:19.000Z | # Copyright (C) 2017 Intel Corporation
#
# SPDX-License-Identifier: MIT
import os
# warning: this is sanity test for Travis CI. The arguments are really bad for real perf testing, use default arguments instead
os.system('miniconda3/envs/intel3/bin/python numpy/umath/umath_mem_bench.py -v --size 10 --goal-time 0.01 --repeats 1')
| 41.375 | 127 | 0.767372 | 54 | 331 | 4.666667 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041812 | 0.132931 | 331 | 7 | 128 | 47.285714 | 0.836237 | 0.577039 | 0 | 0 | 0 | 0.5 | 0.785185 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ca60c9dfd7b19db688cb67d1d3ec4c8ccd88870d | 42 | py | Python | app/models/__init__.py | c17r/hnhiring | c009d95b641702bc987f0925bb738f6e3684cabd | [
"MIT"
] | 1 | 2022-01-11T06:04:01.000Z | 2022-01-11T06:04:01.000Z | app/models/__init__.py | c17r/hnhiring | c009d95b641702bc987f0925bb738f6e3684cabd | [
"MIT"
] | null | null | null | app/models/__init__.py | c17r/hnhiring | c009d95b641702bc987f0925bb738f6e3684cabd | [
"MIT"
] | 1 | 2022-01-11T06:04:04.000Z | 2022-01-11T06:04:04.000Z | from .entry import *
from .month import *
| 14 | 20 | 0.714286 | 6 | 42 | 5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 42 | 2 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca629c66809566fbcc8ec2b2e2ba469baff0a626 | 17 | py | Python | evidential_deep_learning/__init__.py | Dariusrussellkish/evidential-deep-learning | d973b958cce51fc7b297a43c4b62b9ea131b3bad | [
"Apache-2.0"
] | 3 | 2021-04-08T03:41:58.000Z | 2022-02-19T13:55:40.000Z | evidential_deep_learning/__init__.py | Dariusrussellkish/evidential-deep-learning | d973b958cce51fc7b297a43c4b62b9ea131b3bad | [
"Apache-2.0"
] | 7 | 2020-11-13T18:47:55.000Z | 2022-03-12T00:30:13.000Z | detectionModules/camera/tf/__init__.py | Impeekay/shop-analytics-pi | 4e02068775b700da3f0e01a612fdc5cc29c85eaf | [
"MIT"
] | 3 | 2020-05-11T06:59:28.000Z | 2020-06-08T16:59:54.000Z | from . import tf
| 8.5 | 16 | 0.705882 | 3 | 17 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 17 | 1 | 17 | 17 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca72d5319cc9ccd921e77bf352f93015c0e5b29b | 84 | py | Python | spotdl/__init__.py | khjxiaogu/spotify-downloader | a8dcb8d998da0769bbe210f2808d16b346453c23 | [
"MIT"
] | 4,698 | 2017-06-20T22:37:10.000Z | 2022-03-28T13:38:07.000Z | spotdl/__init__.py | Delgan/spotify-downloader | 8adf3e8d6b98269b1538dd91c9a44ed345c77545 | [
"MIT"
] | 690 | 2017-06-20T20:08:42.000Z | 2022-02-26T23:36:07.000Z | spotdl/__init__.py | Delgan/spotify-downloader | 8adf3e8d6b98269b1538dd91c9a44ed345c77545 | [
"MIT"
] | 741 | 2017-06-21T23:32:51.000Z | 2022-03-07T12:11:54.000Z | from spotdl.version import __version__
from spotdl.command_line.core import Spotdl
| 21 | 43 | 0.857143 | 12 | 84 | 5.583333 | 0.583333 | 0.298507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 84 | 3 | 44 | 28 | 0.893333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0461d34f23656ca59e3546fc4f47aa8648da725b | 4,831 | py | Python | tests/unit/molior/test_buildstates.py | gaod/molior | ced63e19a5666112e6ed1553cce7cec2f5c16429 | [
"Apache-2.0"
] | 72 | 2019-07-22T19:19:17.000Z | 2022-03-14T17:08:19.000Z | tests/unit/molior/test_buildstates.py | gaod/molior | ced63e19a5666112e6ed1553cce7cec2f5c16429 | [
"Apache-2.0"
] | 19 | 2019-08-02T13:55:22.000Z | 2022-01-20T08:49:43.000Z | tests/unit/molior/test_buildstates.py | gaod/molior | ced63e19a5666112e6ed1553cce7cec2f5c16429 | [
"Apache-2.0"
] | 8 | 2019-07-24T02:47:47.000Z | 2021-11-10T07:02:14.000Z | import asyncio
import sys
from mock import MagicMock, mock, Mock
sys.modules['aiofile'] = mock.MagicMock()
from molior.model.build import Build # noqa: E402
from molior.model.maintainer import Maintainer # noqa: F401
def logmock(build):
build.log_state = MagicMock()
build.parent.log_state = MagicMock()
if build.parent.parent:
build.parent.parent.log_state = MagicMock()
build.log = Mock(side_effect=asyncio.coroutine(lambda a, **args: None))
build.parent.log = Mock(side_effect=asyncio.coroutine(lambda a, **args: None))
if build.parent.parent:
build.parent.parent.log = Mock(side_effect=asyncio.coroutine(lambda a, **args: None))
build.logtitle = Mock(side_effect=asyncio.coroutine(lambda a, **args: None))
build.parent.logtitle = Mock(side_effect=asyncio.coroutine(lambda a, **args: None))
if build.parent.parent:
build.parent.parent.logtitle = Mock(side_effect=asyncio.coroutine(lambda a, **args: None))
def test_src_build_failed():
"""
Tests whether a sourcebuild was set to failed correctly
"""
src_build = Build(buildtype="source")
src_build.parent = Build(buildtype="build")
logmock(src_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(src_build.set_failed())
assert src_build.buildstate == "build_failed"
assert src_build.parent.buildstate == "build_failed"
def test_deb_build_failed():
"""
Tests whether a debian build was set to failed correctly
"""
deb_build = Build(buildtype="deb")
deb_build.parent = Build(buildtype="source")
deb_build.parent.parent = Build(buildtype="build")
logmock(deb_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(deb_build.set_failed())
assert deb_build.buildstate == "build_failed"
assert deb_build.parent.parent.buildstate == "build_failed"
def test_src_build_publish_failed():
"""
Tests whether a sourcebuild was set to publish failed when
the publish failed
"""
src_build = Build(buildtype="source")
src_build.parent = Build(buildtype="build")
logmock(src_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(src_build.set_publish_failed())
assert src_build.buildstate == "publish_failed"
assert src_build.parent.buildstate == "build_failed"
def test_deb_build_publish_failed():
"""
Tests whether a debian was set to publish failed when
the publish failed
"""
deb_build = Build(buildtype="deb")
deb_build.parent = Build(buildtype="source")
deb_build.parent.parent = Build(buildtype="build")
logmock(deb_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(deb_build.set_publish_failed())
assert deb_build.buildstate == "publish_failed"
assert deb_build.parent.parent.buildstate == "build_failed"
def test_deb_build_successful_only_build():
"""
Tests whether a debian was set to successful correctly
"""
deb_build = Build(id=1337, buildtype="deb")
deb_build.parent = Build(buildtype="source")
deb_build.parent.parent = Build(buildtype="build")
deb_build.parent.children = [deb_build]
logmock(deb_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(deb_build.set_successful())
assert deb_build.buildstate == "successful"
assert deb_build.parent.parent.buildstate == "successful"
def test_deb_build_successful_all_successful():
"""
Tests whether a debian was set to successful correctly
with multiple builds
"""
deb_build = Build(
id=1337,
buildtype="deb"
)
deb_build.parent = Build(buildtype="source")
deb_build.parent.parent = Build(buildtype="build")
other_build = Build(buildtype="source")
other_build.buildstate = "successful"
deb_build.parent.children = [deb_build, other_build]
logmock(deb_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(deb_build.set_successful())
assert deb_build.buildstate == "successful"
assert deb_build.parent.parent.buildstate == "successful"
def test_deb_build_successful_other_failed():
"""
Tests whether a debian was set to successful correctly
with multiple builds and the other build has failed
"""
deb_build = Build(
id=1337,
buildtype="deb"
)
deb_build.parent = Build(buildtype="source")
deb_build.parent.parent = Build(buildtype="build")
other_build = Build(buildtype="source")
other_build.buildstate = "build_failed"
deb_build.parent.children = [deb_build, other_build]
logmock(deb_build)
loop = asyncio.get_event_loop()
loop.run_until_complete(deb_build.set_successful())
assert deb_build.buildstate == "successful"
assert deb_build.parent.parent.buildstate != "successful"
| 29.820988 | 98 | 0.711033 | 623 | 4,831 | 5.284109 | 0.105939 | 0.111786 | 0.076549 | 0.060753 | 0.894897 | 0.823208 | 0.809842 | 0.809842 | 0.765492 | 0.72661 | 0 | 0.004543 | 0.17988 | 4,831 | 161 | 99 | 30.006211 | 0.82635 | 0.108466 | 0 | 0.591398 | 0 | 0 | 0.070029 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 1 | 0.086022 | false | 0 | 0.053763 | 0 | 0.139785 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
04bd8b750758d2c826ec76ede579a81dc93fd46d | 33 | py | Python | app/constants/__init__.py | AOSC-Dev/modern-paste | 0d47dc8911a17d84e61c14a650620a41c98b6d95 | [
"MIT"
] | 1 | 2020-04-08T22:09:54.000Z | 2020-04-08T22:09:54.000Z | app/constants/__init__.py | AOSC-Dev/modern-paste | 0d47dc8911a17d84e61c14a650620a41c98b6d95 | [
"MIT"
] | null | null | null | app/constants/__init__.py | AOSC-Dev/modern-paste | 0d47dc8911a17d84e61c14a650620a41c98b6d95 | [
"MIT"
] | null | null | null | from .build_environment import *
| 16.5 | 32 | 0.818182 | 4 | 33 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e053315844aca778f515e1289af429508d185bf | 71 | py | Python | graph_rl/policies/__init__.py | nicoguertler/graphrl | 21a1cefc53e5c457745570460de0d99e68622e57 | [
"MIT"
] | 1 | 2022-01-04T15:21:55.000Z | 2022-01-04T15:21:55.000Z | graph_rl/policies/__init__.py | nicoguertler/graph_rl | 21a1cefc53e5c457745570460de0d99e68622e57 | [
"MIT"
] | null | null | null | graph_rl/policies/__init__.py | nicoguertler/graph_rl | 21a1cefc53e5c457745570460de0d99e68622e57 | [
"MIT"
] | null | null | null | from .policy import Policy
from .tianshou_policy import TianshouPolicy
| 23.666667 | 43 | 0.859155 | 9 | 71 | 6.666667 | 0.555556 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 44 | 35.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e05fb0c041d988a67683a0b9a90b5d54e49c6d9 | 2,707 | py | Python | tests/test_io.py | asfadmin/hyp3-autorift | 328fe5de886702874b5204087a492b1b76604859 | [
"BSD-3-Clause"
] | 1 | 2021-02-17T18:17:07.000Z | 2021-02-17T18:17:07.000Z | tests/test_io.py | asfadmin/hyp3-autorift | 328fe5de886702874b5204087a492b1b76604859 | [
"BSD-3-Clause"
] | 33 | 2020-09-14T17:25:07.000Z | 2022-03-02T19:37:15.000Z | tests/test_io.py | ASFHyP3/hyp3-autorift | d135e3435e4aeed0475cd8a979135996cfa30b68 | [
"BSD-3-Clause"
] | null | null | null | import pytest
from hyp3lib import DemError
from hyp3_autorift import geometry, io
from hyp3_autorift.process import DEFAULT_PARAMETER_FILE
def test_find_jpl_parameter_info():
lat_limits = (55, 56)
lon_limits = (40, 41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
parameter_info = io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
assert parameter_info['name'] == 'NPS'
lat_limits = (54, 55)
lon_limits = (40, 41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
parameter_info = io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
assert parameter_info['name'] == 'N37'
lat_limits = (54, 55)
lon_limits = (-40, -41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
parameter_info = io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
assert parameter_info['name'] == 'N24'
lat_limits = (-54, -55)
lon_limits = (-40, -41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
parameter_info = io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
assert parameter_info['name'] == 'S24'
lat_limits = (-55, -56)
lon_limits = (40, 41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
parameter_info = io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
assert parameter_info['name'] == 'S37'
lat_limits = (-56, -57)
lon_limits = (40, 41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
parameter_info = io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
assert parameter_info['name'] == 'SPS'
lat_limits = (-90, -91)
lon_limits = (40, 41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
with pytest.raises(DemError):
io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
lat_limits = (90, 91)
lon_limits = (40, 41)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
with pytest.raises(DemError):
io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
lat_limits = (55, 56)
lon_limits = (180, 181)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
with pytest.raises(DemError):
io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
lat_limits = (55, 56)
lon_limits = (-180, -181)
polygon = geometry.polygon_from_bbox(x_limits=lat_limits, y_limits=lon_limits)
with pytest.raises(DemError):
io.find_jpl_parameter_info(polygon, DEFAULT_PARAMETER_FILE)
| 39.808824 | 82 | 0.738456 | 384 | 2,707 | 4.815104 | 0.117188 | 0.161709 | 0.118983 | 0.118983 | 0.904813 | 0.904813 | 0.904813 | 0.904813 | 0.904813 | 0.904813 | 0 | 0.041703 | 0.158478 | 2,707 | 67 | 83 | 40.402985 | 0.769974 | 0 | 0 | 0.672727 | 0 | 0 | 0.015515 | 0 | 0 | 0 | 0 | 0 | 0.109091 | 1 | 0.018182 | false | 0 | 0.072727 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# torchOnVideo/datasets/DAVIS/denoising/__init__.py (torchOnVideo/torchOnVideo, MIT license)
from .test_vnlnet import TestVNLNet
# hls-writer/hls_writer.py (19etweinstock/hls4ml-sherylll, Apache-2.0 license)
from __future__ import print_function
import tarfile
import yaml
from shutil import copyfile
import numpy as np
import os
import re

def hls_writer(layer_list, yamlConfig):

    filedir = os.path.dirname(os.path.abspath(__file__))

    ###################
    ## myproject.cpp
    ###################

    f = open(os.path.join(filedir,'../hls-template/firmware/myproject.cpp'),'r')
    fout = open('{}/firmware/{}.cpp'.format(yamlConfig['OutputDir'], yamlConfig['ProjectName']),'w')

    # Set some variables to make the routine below a bit smoother
    do_batchnorm = False
    is_dense = False
    is_conv2d = False
    for i in range(1,len(layer_list)+1):
        if layer_list[i-1]['class_name'] == 'BatchNormalization': do_batchnorm = True
    for i in range(1,len(layer_list)+1):
        if layer_list[i-1]['class_name']=='Conv2D':
            is_conv2d = True
            break
    if not is_conv2d:
        for i in range(1,len(layer_list)+1):
            if layer_list[i-1]['class_name']=='Dense':
                is_dense = True
                break

    activation_layers = ['Activation', 'LeakyReLU', 'ThresholdedReLU', 'ELU', 'PReLU']
    # lines to add to .cpp for sublayers
    sublayerlines = []
    # lines to add to .h for sublayers
    sublayerlines_h = []
    for line in f.readlines():
        #Add headers to weights and biases
        if 'myproject' in line:
            newline = line.replace('myproject',yamlConfig['ProjectName'])
        elif 'input_t data[N_INPUTS]' in line and layer_list[0]['class_name']=='Conv1D':
            newline = line.replace('input_t data[N_INPUTS]','input_t data[Y_INPUTS_1][N_CHAN_1]')
        elif 'input_t data[N_INPUTS]' in line and layer_list[0]['class_name']=='Conv2D':
            newline = line.replace('input_t data[N_INPUTS]','input_t data[IN_HEIGHT_1][IN_WIDTH_1][N_CHAN_1]')
        elif 'input_t data[N_INPUTS]' in line and layer_list[0]['class_name']=='BatchNormalization' and is_conv2d:
            newline = line.replace('input_t data[N_INPUTS]','input_t data[IN_HEIGHT_1][IN_WIDTH_1][N_FILT_1]')
        elif 'const_size_in = N_INPUTS' in line and layer_list[0]['class_name']=='Conv1D':
            newline = line.replace('const_size_in = N_INPUTS','const_size_in = Y_INPUTS_1*N_CHAN_1')
        elif 'const_size_in = N_INPUTS' in line and layer_list[0]['class_name']=='Conv2D':
            newline = line.replace('const_size_in = N_INPUTS','const_size_in = IN_HEIGHT_1*IN_WIDTH_1*N_CHAN_1')
        elif 'const_size_in = N_INPUTS' in line and layer_list[0]['class_name']=='BatchNormalization' and is_conv2d:
            newline = line.replace('const_size_in = N_INPUTS','const_size_in = IN_HEIGHT_1*IN_WIDTH_1*N_FILT_1')
        elif '//hls-fpga-machine-learning insert weights' in line:
            newline = line
            for i in range(1,len(layer_list)+1):
                if layer_list[i-1]['class_name'] == 'BatchNormalization':
                    newline += '#include "weights/beta{}.h"\n'.format(i)
                    newline += '#include "weights/scale{}.h"\n'.format(i)
                    newline += '#include "weights/mean{}.h"\n'.format(i)
                elif 'Pooling' in layer_list[i-1]['class_name']:
                    pass # No weights for pooling
                else:
                    if layer_list[i-1]['n_part']>1:
                        for i_part in range(layer_list[i-1]['n_part']):
                            newline += '#include "weights/w{}_{}.h"\n'.format(i,i_part)
                            newline += '#include "weights/b{}_{}.h"\n'.format(i,i_part)
                    elif layer_list[i-1]['class_name'] not in activation_layers:
                        newline += '#include "weights/w{}.h"\n'.format(i)
                        newline += '#include "weights/b{}.h"\n'.format(i)
                    if layer_list[i-1].get('activation') == 'PReLU':
                        newline += '#include "weights/a{}.h"\n'.format(i)
                    elif layer_list[i-1]['class_name'] == 'PReLU':
                        newline += '#include "weights/a{}.h"\n'.format(i)
        #Add input/output type
        elif '//hls-fpga-machine-learning insert IO' in line:
            newline = line
            if yamlConfig["IOType"] == "io_parallel":
                newline += '    #pragma HLS ARRAY_RESHAPE variable=data complete dim=0 \n'
                newline += '    #pragma HLS ARRAY_RESHAPE variable=res complete dim=0 \n'
                newline += '    #pragma HLS INTERFACE ap_vld port=data,res \n'
                newline += '    #pragma HLS PIPELINE \n'
            if yamlConfig["IOType"] == "io_serial":
                newline += '    #pragma HLS INTERFACE axis port=data,res \n'
                newline += '    #pragma HLS DATAFLOW \n'
        #Add layers
        elif '//hls-fpga-machine-learning insert layers' in line:
            newline = line + '\n'
            for i in range(1,len(layer_list)+1):
                #Input to compute_layer
                #First layer and dense
                if i==1 and (layer_list[i-1]['class_name']=='Dense' or (layer_list[i-1]['class_name']=='BatchNormalization' and is_dense)):
                    input_type = 'input_t'
                    input_object = 'data'
                    n_in = 'N_INPUTS'
                #Layer is Dense and previous layer was Conv1D
                elif layer_list[i-1]['class_name']=='Dense' and layer_list[i-2]['class_name']=='Conv1D':
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    n_in = 'Y_OUTPUTS_{}*N_FILT_{}'.format(i-1,i-1)
                #Layer is Dense and previous layer was Conv2D
                elif layer_list[i-1]['class_name']=='Dense' and layer_list[i-2]['class_name']=='Conv2D':
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    n_in = 'IN_HEIGHT_{}*IN_WIDTH_{}*N_FILT_{}'.format(i-1,i-1,i-1)
                #Layer is Dense, BatchNormalization or Activation
                elif layer_list[i-1]['class_name']=='Dense' or layer_list[i-1]['class_name'] in activation_layers:
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    n_in = 'N_LAYER_{}'.format(i-1)
                elif is_dense and layer_list[i-1]['class_name']=='BatchNormalization':
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    n_in = 'N_LAYER_{}'.format(i-1)
                    n_filt = 'N_FILT_{}'.format(i-1)
                elif (i==1 and layer_list[i-1]['class_name']=='BatchNormalization' and is_conv2d):
                    input_type = 'input_t'
                    input_object = 'data'
                    in_height = 'IN_HEIGHT_{}'.format(i)
                    in_width = 'IN_WIDTH_{}'.format(i)
                    n_chan = 'N_FILT_{}'.format(i)
                elif is_conv2d and layer_list[i-1]['class_name']=='BatchNormalization':
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    n_in = 'OUT_HEIGHT_{}*OUT_WIDTH_{}*N_FILT_{}'.format(i-1,i-1,i-1)
                    n_filt = 'N_FILT_{}'.format(i-1)
                #First layer and Conv1D
                elif (i==1 and layer_list[i-1]['class_name']=='Conv1D'):
                    input_type = 'input_t'
                    input_object = 'data'
                    y_in = 'Y_INPUTS_{}'.format(i)
                    n_chan = 'N_CHAN_{}'.format(i)
                #Layer is Conv1D
                elif layer_list[i-1]['class_name']=='Conv1D':
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    y_in = 'Y_INPUTS_{}'.format(i)
                    n_chan = 'N_CHAN_{}'.format(i)
                #First layer and Conv2D
                elif (i==1 and layer_list[i-1]['class_name']=='Conv2D'):
                    input_type = 'input_t'
                    input_object = 'data'
                    in_height = 'IN_HEIGHT_{}'.format(i)
                    in_width = 'IN_WIDTH_{}'.format(i)
                    n_chan = 'N_CHAN_{}'.format(i)
                #Layer is Conv2D
                elif layer_list[i-1]['class_name']=='Conv2D':
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    in_height = 'IN_HEIGHT_{}'.format(i)
                    in_width = 'IN_WIDTH_{}'.format(i)
                    n_chan = 'N_CHAN_{}'.format(i)
                #Pooling layer
                elif 'Pooling' in layer_list[i-1]['class_name']:
                    input_type = 'layer{}_t'.format(i-1)
                    input_object = 'layer{}_out'.format(i-1)
                    output_object = 'layer{}_out'.format(i)
                    in_height = 'IN_HEIGHT_{}'.format(i)
                    in_width = 'IN_WIDTH_{}'.format(i)
                    out_height = 'OUT_HEIGHT_{}'.format(i)
                    out_width = 'OUT_WIDTH_{}'.format(i)
                    n_filt = 'N_FILT_{}'.format(i)
                #Currently doesn't allow all combinations

                #Outputs of compute_layer and activation
                if i==len(layer_list) and layer_list[i-1]['class_name']=='Dense':
                    output_type = 'result_t'
                    output_object = 'res'
                    n_out = 'N_OUTPUTS'
                    if layer_list[i-1]['class_name'] in activation_layers: input_type = 'result_t'
                elif i==len(layer_list) and layer_list[i-1]['class_name'] in activation_layers and is_dense:
                    output_type = 'result_t'
                    output_object = 'res'
                    n_out = 'N_OUTPUTS'
                    input_type = 'result_t'
                elif i==len(layer_list) and is_dense and layer_list[i-1]['class_name']=='BatchNormalization':
                    output_type = 'result_t'
                    output_object = 'res'
                    n_out = 'N_OUTPUTS'
                elif i==len(layer_list) and is_conv2d and layer_list[i-1]['class_name']=='BatchNormalization':
                    output_type = 'layer{}_t'.format(i)
                    output_object = 'layer{}_out'.format(i)
                    out_height = 'OUT_HEIGHT_{}'.format(i)
                    out_width = 'OUT_WIDTH_{}'.format(i)
                    n_filt = 'N_FILT_{}'.format(i)
                elif (i==len(layer_list)-1 and is_dense and layer_list[i-1]['class_name']=='BatchNormalization' and layer_list[i]['class_name'] in activation_layers):
                    output_type = 'result_t'
                    output_object = 'layer{}_out'.format(i)
                    n_out = 'N_OUTPUTS'
                elif layer_list[i-1]['class_name']=='Dense' or (layer_list[i-1]['class_name']=='BatchNormalization' and is_dense) or (layer_list[i-1]['class_name'] in activation_layers and is_dense):
                    output_type = 'layer{}_t'.format(i)
                    output_object = 'layer{}_out'.format(i)
                    n_out = 'N_LAYER_{}'.format(i)
                elif layer_list[i-1]['class_name']=='Conv1D':
                    output_type = 'layer{}_t'.format(i)
                    output_object = 'layer{}_out'.format(i)
                    y_out = 'Y_OUTPUTS_{}'.format(i)
                    n_filt = 'N_FILT_{}'.format(i)
                elif layer_list[i-1]['class_name']=='Conv2D' or (is_conv2d and layer_list[i-1]['class_name']=='BatchNormalization'):
                    output_type = 'layer{}_t'.format(i)
                    output_object = 'layer{}_out'.format(i)
                    out_height = 'OUT_HEIGHT_{}'.format(i)
                    out_width = 'OUT_WIDTH_{}'.format(i)
                    n_filt = 'N_FILT_{}'.format(i)

                #Currently assumes end with dense
                if i != len(layer_list):
                    if layer_list[i-1]['class_name']=='Dense' or (layer_list[i-1]['class_name']=='BatchNormalization' and is_dense) or (layer_list[i-1]['class_name'] in activation_layers and is_dense):
                        newline += '    {} layer{}_out[{}];\n'.format(output_type,i,n_out)
                    elif layer_list[i-1]['class_name']=='Conv1D' or 'Pooling1D' in layer_list[i-1]['class_name']:
                        newline += '    {} layer{}_out[{}*{}];\n'.format(output_type,i,y_out,n_filt)
                    elif layer_list[i-1]['class_name']=='Conv2D' or 'Pooling2D' in layer_list[i-1]['class_name']:
                        newline += '    {} layer{}_out[{}*{}*{}];\n'.format(output_type,i,out_height,out_width,n_filt)
                    elif layer_list[i-1]['class_name']=='BatchNormalization' and is_conv2d:
                        if i != 1: newline += '    {} layer{}_out[{}*{}*{}];\n'.format(output_type,i,out_height,out_width,n_filt)
                        else: newline += '    {} layer{}_out[{}*{}*{}];\n'.format(output_type,i,in_height,in_width,n_filt)
                    if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=layer{}_out complete dim=0\n'.format(i)
                    if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=layer{}_out depth=1\n'.format(i)
                #github Issue 53
                #Compute Dense layer
                #if layer_list[i-1]['activation'] == "linear" and layer_list[i-1]['class_name']=='Dense':
                #    newline += '    nnet::compute_layer<{}, {}, config{}>({}, {}, w{}, b{});\n'.format(input_type, output_type, i, input_object, output_object, i, i)
                #elif layer_list[i-1]['class_name']=='Dense':
                if layer_list[i-1]['class_name']=='Dense':
                    newline += '    {} logits{}[{}];\n'.format(output_type,i,n_out)
                    if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=logits{} complete dim=0\n'.format(i)
                    if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=logits{} depth=1\n'.format(i)
                    if layer_list[i-1]['n_part']==1 or yamlConfig["IOType"]=="io_serial":
                        # Use one layer if there's only 1 partition, or if we're using serial mode
                        newline += '    nnet::compute_layer<{}, {}, config{}>({}, logits{}, w{}, b{});\n'.format(input_type, output_type, i, input_object, i, i, i)
                    else:
                        # initialize arrays for sublayer outputs
                        newline += '    compute_layer{}({}, logits{});\n'.format(i, input_object, i)
                        sublayerline = 'void compute_layer{}({} {}[{}], {} logits{}[{}]) {{\n'.format(i, input_type, input_object, n_in, output_type, i, n_out)
                        sublayerline_h = 'void compute_layer{}({} {}[{}], {} logits{}[{}]);\n'.format(i, input_type, input_object, n_in, output_type, i, n_out)
                        sublayerlines_h.append(sublayerline_h)
                        for i_part in range(0, layer_list[i-1]['n_part']):
                            n_subout = layer_list[i-1]['n_subout'][i_part]
                            sublayerline += '    {} logits{}_{}[{}];\n'.format(output_type,i,i_part,n_subout)
                            if yamlConfig["IOType"] == "io_parallel": sublayerline += '    #pragma HLS ARRAY_PARTITION variable=logits{}_{} complete dim=0\n'.format(i,i_part)
                            if yamlConfig["IOType"] == "io_serial": sublayerline += '    #pragma HLS STREAM variable=logits{}_{} depth=1\n'.format(i,i_part)
                        # initialize arrays for merged partial outputs
                        for i_part in range(1, layer_list[i-1]['n_part']-1):
                            n_mergeout = sum([layer_list[i-1]['n_subout'][kk] for kk in range(0, i_part+1)])
                            sublayerline += '    {} logits{}_0to{}[{}];\n'.format(output_type,i,i_part,n_mergeout)
                            if yamlConfig["IOType"] == "io_parallel": sublayerline += '    #pragma HLS ARRAY_PARTITION variable=logits{}_0to{} complete dim=0\n'.format(i,i_part)
                            if yamlConfig["IOType"] == "io_serial": sublayerline += '    #pragma HLS STREAM variable=logits{}_0to{} depth=1\n'.format(i,i_part)
                        # compute sublayer outputs
                        for i_part in range(0, layer_list[i-1]['n_part']):
                            sublayerline += '    nnet::compute_layer<{}, {}, config{}_{}>({}, logits{}_{}, w{}_{}, b{}_{});\n'.format(input_type, output_type, i, i_part, input_object, i, i_part, i, i_part, i, i_part)
                        # merge sublayer outputs
                        for i_part in range(0, layer_list[i-1]['n_part']-1):
                            n_subout = layer_list[i-1]['n_subout'][i_part+1]
                            n_mergeout = sum([layer_list[i-1]['n_subout'][kk] for kk in range(0, i_part+1)])
                            if layer_list[i-1]['n_part']==2:
                                sublayerline += '    nnet::merge<{}, {}, {}>(logits{}_{}, logits{}_{}, logits{});\n'.format(output_type, n_mergeout, n_subout, i, i_part, i, i_part+1, i)
                            elif i_part==0:
                                sublayerline += '    nnet::merge<{}, {}, {}>(logits{}_{}, logits{}_{}, logits{}_0to{});\n'.format(output_type, n_mergeout, n_subout, i, i_part, i, i_part+1, i, i_part+1)
                            elif i_part==layer_list[i-1]['n_part']-2:
                                sublayerline += '    nnet::merge<{}, {}, {}>(logits{}_0to{}, logits{}_{}, logits{});\n'.format(output_type, n_mergeout, n_subout, i, i_part, i, i_part+1, i)
                            else:
                                sublayerline += '    nnet::merge<{}, {}, {}>(logits{}_0to{}, logits{}_{}, logits{}_0to{});\n'.format(output_type, n_mergeout, n_subout, i, i_part, i, i_part+1, i, i_part+1)
                        sublayerline += '}\n'
                        sublayerlines.append(sublayerline)
                elif layer_list[i-1]['class_name']=='Conv1D':
                    if i>1 and layer_list[i-2]['class_name']=='Conv1D':
                        newline += '    {} conv_layer{}_in[{}][{}];\n'.format(input_type,i,y_in,n_chan)
                        if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=conv_layer{}_in complete dim=0\n'.format(i)
                        if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=conv_layer{}_in depth=1\n'.format(i)
                        newline += '    nnet::unflatten<{}, {}, {}>({}, conv_layer{}_in);\n'.format(input_type, y_in, n_chan, input_object, i)
                        newline += '    {} conv_layer{}_out[{}][{}];\n'.format(output_type,i,y_out,n_filt)
                        if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=conv_layer{}_out complete dim=0\n'.format(i)
                        if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=conv_layer{}_out depth=1\n'.format(i)
                        newline += '    nnet::conv_1d<{}, {}, config{}>(conv_layer{}_in, conv_layer{}_out, w{}, b{});\n'.format(input_type, output_type, i, i, i, i, i)
                    else:
                        newline += '    {} conv_layer{}_out[{}][{}];\n'.format(output_type,i,y_out,n_filt)
                        if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=conv_layer{}_out complete dim=0\n'.format(i)
                        if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=conv_layer{}_out depth=1\n'.format(i)
                        newline += '    nnet::conv_1d<{}, {}, config{}>({}, conv_layer{}_out, w{}, b{});\n'.format(input_type, output_type, i, input_object, i, i, i)
                    newline += '    {} logits{}[{}*{}];\n'.format(output_type,i,y_out,n_filt)
                    if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=logits{} complete dim=0\n'.format(i)
                    if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=logits{} depth=1\n'.format(i)
                    newline += '    nnet::flatten<{}, {}, {}>(conv_layer{}_out, logits{});\n'.format(input_type, y_out, n_filt, i, i)
                elif layer_list[i-1]['class_name']=='Conv2D':
                    if i>1 and (layer_list[i-2]['class_name']=='Conv2D' or layer_list[i-2]['class_name']=='BatchNormalization'):
                        newline += '    {} conv2d_layer{}_in[{}][{}][{}];\n'.format(input_type,i,in_height,in_width,n_chan)
                        if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=conv2d_layer{}_in complete dim=0\n'.format(i)
                        if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=conv2d_layer{}_in depth=1\n'.format(i)
                        newline += '    nnet::unflatten<{}, {}, {}, {}>({}, conv2d_layer{}_in);\n'.format(input_type, in_height, in_width, n_chan, input_object, i)
                        newline += '    {} conv2d_layer{}_out[{}][{}][{}];\n'.format(output_type,i,out_height,out_width,n_filt)
                        if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=conv2d_layer{}_out complete dim=0\n'.format(i)
                        if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=conv2d_layer{}_out depth=1\n'.format(i)
                        newline += '    nnet::conv_2d<{}, {}, config{}>(conv2d_layer{}_in, conv2d_layer{}_out, w{}, b{});\n'.format(input_type, output_type, i, i, i, i, i)
                    else:
                        newline += '    {} conv2d_layer{}_out[{}][{}][{}];\n'.format(output_type,i,out_height,out_width,n_filt)
                        if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=conv2d_layer{}_out complete dim=0\n'.format(i)
                        if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=conv2d_layer{}_out depth=1\n'.format(i)
                        newline += '    nnet::conv_2d<{}, {}, config{}>({}, conv2d_layer{}_out, w{}, b{});\n'.format(input_type, output_type, i, input_object, i, i, i)
                    newline += '    {} logits{}[{}*{}*{}];\n'.format(output_type,i,out_height,out_width,n_filt)
                    if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=logits{} complete dim=0\n'.format(i)
                    if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=logits{} depth=1\n'.format(i)
                    newline += '    nnet::flatten<{}, {}, {}, {}>(conv2d_layer{}_out, logits{});\n'.format(output_type, out_height, out_width, n_filt, i, i)
                elif layer_list[i-1]['class_name'] == 'BatchNormalization' and is_dense:
                    newline += '    nnet::normalize<{}, {}, config{}>({}, {}, scale{}, beta{}, mean{});\n'.format(input_type, output_type, i, input_object, output_object, i, i, i)
                elif i==1 and layer_list[i-1]['class_name'] == 'BatchNormalization' and is_conv2d:
                    newline += '    {} logits{}[{}*{}*{}];\n'.format(output_type,i,in_height,in_width,n_filt)
                    if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=logits{} complete dim=0\n'.format(i)
                    if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=logits{} depth=1\n'.format(i)
                    newline += '    nnet::flatten<{}, {}, {}, {}>({}, logits{});\n'.format(input_type, in_height, in_width, n_filt, input_object, i)
                    newline += '    nnet::normalize<{}, {}, config{}>(logits{}, {}, scale{}, beta{}, mean{});\n'.format(output_type, output_type, i, i, output_object, i, i, i)
                elif layer_list[i-1]['class_name'] == 'BatchNormalization' and is_conv2d:
                    newline += '    nnet::normalize<{}, {}, config{}>({}, {}, scale{}, beta{}, mean{});\n'.format(input_type, output_type, i, input_object, output_object, i, i, i)
                elif 'Pooling' in layer_list[i-1]['class_name']:
                    info = layer_list[i-1]['class_name'].split('Pooling')
                    d = int(info[1].split('D')[0]) # n dimensions
                    if d == 1:
                        newline += '    nnet::pooling1d<{}, config{}>({}, {});\n'.format(input_type, i, input_object, output_object)
                    elif d == 2:
                        # Unflatten if needed: if the last layer is activation or batchnorm
                        unflatten = layer_list[i-2]['class_name'] in activation_layers
                        unflatten |= 'activation' in layer_list[i-2].keys()
                        unflatten |= layer_list[i-2]['class_name'] == 'BatchNormalization'
                        if unflatten:
                            # Add the unflatten layer
                            inshape = ''.join('[{0}]'.format(dim) for dim in [in_height, in_width, n_filt])
                            newline += '    {} pool2d_layer{}_in{};\n'.format(input_type, i, inshape)
                            if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=pool2d_layer{}_in complete dim=0\n'.format(i)
                            if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=pool2d_layer{}_in depth=1\n'.format(i)
                            newline += '    nnet::unflatten<{}, {}, {}, {}>({}, pool2d_layer{}_in);\n'.format(input_type, in_height, in_width, n_filt, input_object, i)
                            outshape = ''.join('[{0}]'.format(dim) for dim in [out_height, out_width, n_filt])
                            newline += '    {} pool2d_layer{}_out{};\n'.format(input_type, i, outshape)
                            if yamlConfig["IOType"] == "io_parallel": newline += '    #pragma HLS ARRAY_PARTITION variable=pool2d_layer{}_out complete dim=0\n'.format(i)
                            if yamlConfig["IOType"] == "io_serial": newline += '    #pragma HLS STREAM variable=pool2d_layer{}_out depth=1\n'.format(i)
                            # Do the pooling layer
                            newline += '    nnet::pooling2d<{}, config{i}>(pool2d_layer{i}_in, pool2d_layer{i}_out);\n'.format(input_type, i=i)
                            # Flatten the pooling output (pool2d_layer{i}_out only exists in this branch)
                            newline += '    nnet::flatten<{}, {}, {}, {}>(pool2d_layer{}_out, layer{}_out);\n'.format(input_type, out_height, out_width, n_filt, i, i)
                        else:
                            # i must be passed as a keyword argument so the named {i} field resolves
                            newline += '    nnet::pooling2d<{}, config{i}>({}, {});\n'.format(input_type, input_object, output_object, i=i)
                #Activations
                if layer_list[i-1]['class_name'] in activation_layers or 'activation' in layer_list[i-1].keys():
                    if layer_list[i-1]['class_name'] not in activation_layers:
                        act_input_type = output_type
                        act_input_object = "logits" + str(i)
                    else:
                        act_input_type = input_type
                        act_input_object = input_object
                    activation_name = layer_list[i-1]['activation']+'_config'+str(i)
                    activation_param = layer_list[i-1].get('activ_param')
                    if layer_list[i-1]['activation'] == "relu":
                        newline += '    nnet::relu<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "LeakyReLU":
                        newline += '    nnet::leaky_relu<{}, {}, {}>({}, {}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, activation_param, output_object)
                    elif layer_list[i-1]['activation'] == "ThresholdedReLU":
                        newline += '    nnet::thresholded_relu<{}, {}, {}>({}, {}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, activation_param, output_object)
                    elif layer_list[i-1]['activation'].lower() == "elu":
                        if activation_param:
                            newline += '    nnet::elu<{}, {}, {}>({}, {}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, activation_param, output_object)
                        else:
                            newline += '    nnet::elu<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "selu":
                        newline += '    nnet::selu<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "PReLU":
                        newline += '    nnet::prelu<{}, {}, {}>({}, a{}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, i, output_object)
                    elif layer_list[i-1]['activation'] == "softmax":
                        newline += '    nnet::softmax<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "sigmoid":
                        newline += '    nnet::sigmoid<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "hard_sigmoid":
                        newline += '    nnet::hard_sigmoid<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "tanh":
                        newline += '    nnet::tanh<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "linear":
                        #github Issue 53
                        newline += '    nnet::linear<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "softsign":
                        newline += '    nnet::softsign<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    elif layer_list[i-1]['activation'] == "softplus":
                        newline += '    nnet::softplus<{}, {}, {}>({}, {});\n'.format(act_input_type, output_type, activation_name, act_input_object, output_object)
                    else:
                        raise Exception('ERROR: MISSING ACTIVATION')

                newline += '\n'

        #Just copy line
        else:
            newline = line

        fout.write(newline)
    for sublayerline in sublayerlines:
        fout.write('\n')
        fout.write(sublayerline)
        fout.write('\n')
    f.close()
    fout.close()

    ###################
    ## parameters.h
    ###################

    f = open(os.path.join(filedir,'../hls-template/firmware/parameters.h'),'r')
    fout = open('{}/firmware/parameters.h'.format(yamlConfig['OutputDir']),'w')
    dense_config_template = """struct config{index} : nnet::layer_config {{
        static const unsigned n_in = {n_in};
        static const unsigned n_out = {n_out};
        static const unsigned io_type = nnet::{iotype};
        static const unsigned reuse_factor = {reuse};
        static const unsigned n_zeros = {nzeros};
        static const bool store_weights_in_bram = false;
        typedef accum_default_t accum_t;
        typedef bias_default_t bias_t;
        typedef weight_default_t weight_t;
        }};\n"""

    dense_sub_config_template = """struct config{index}_{i_part} : nnet::layer_config {{
        static const unsigned n_in = {n_in};
        static const unsigned n_out = {n_out};
        static const unsigned io_type = nnet::{iotype};
        static const unsigned reuse_factor = {reuse};
        static const unsigned n_zeros = {nzeros};
        static const bool store_weights_in_bram = false;
        typedef accum_default_t accum_t;
        typedef bias_default_t bias_t;
        typedef weight_default_t weight_t;
        }};\n"""

    batchnorm_config_template = """struct config{index} : nnet::batchnorm_config {{
        static const unsigned n_in = {n_in};
        static const unsigned n_filt = {n_filt};
        static const unsigned io_type = nnet::{iotype};
        static const unsigned reuse_factor = {reuse};
        static const bool store_weights_in_bram = false;
        typedef beta_default_t beta_t;
        typedef scale_default_t scale_t;
        typedef mean_default_t mean_t;
        }};\n"""

    conv_config_template = """struct config{index} : nnet::conv_config {{
        static const unsigned pad_left = {pad_left};
        static const unsigned pad_right = {pad_right};
        static const unsigned y_in = {y_in};
        static const unsigned n_chan = {n_chan};
        static const unsigned y_filt = {y_filt};
        static const unsigned n_filt = {n_filt};
        static const unsigned stride = {stride};
        static const unsigned y_out = {y_out};
        static const unsigned reuse_factor = {reuse};
        static const unsigned n_zeros = {nzeros};
        static const bool store_weights_in_bram = false;
        typedef accum_default_t accum_t;
        typedef bias_default_t bias_t;
        typedef weight_default_t weight_t;
        }};\n"""

    conv2d_config_template = """struct config{index} : nnet::conv2d_config {{
        static const unsigned pad_top = {pad_top};
        static const unsigned pad_bottom = {pad_bottom};
        static const unsigned pad_left = {pad_left};
        static const unsigned pad_right = {pad_right};
        static const unsigned in_height = {in_height};
        static const unsigned in_width = {in_width};
        static const unsigned n_chan = {n_chan};
        static const unsigned filt_height = {filt_height};
        static const unsigned filt_width = {filt_width};
        static const unsigned n_filt = {n_filt};
        static const unsigned stride_height = {stride_height};
        static const unsigned stride_width = {stride_width};
        static const unsigned out_height = {out_height};
        static const unsigned out_width = {out_width};
        static const unsigned reuse_factor = {reuse};
        static const unsigned n_zeros = {nzeros};
        static const bool store_weights_in_bram = false;
        typedef accum_default_t accum_t;
        typedef bias_default_t bias_t;
        typedef weight_default_t weight_t;
        }};\n"""

    activ_config_template = """struct {type}_config{index} : nnet::activ_config {{
        static const unsigned n_in = {n_in};
        static const unsigned table_size = 1024;
        static const unsigned io_type = nnet::{iotype};
        }};\n"""

    pooling1d_config_template = """struct config{index} : nnet::pooling1d_config {{
        static const unsigned n_in = {n_in};
        static const unsigned pool_size = {pool_size};
        static const unsigned n_out = {n_out};
        static const unsigned pad_left = {pad_left};
        static const unsigned pad_right = {pad_right};
        static const unsigned stride = {stride};
        static const nnet::Pool_Op pool_op = nnet::{Op};
        }};\n"""

    pooling2d_config_template = """struct config{index} : nnet::pooling2d_config {{
        static const unsigned in_height = {in_height};
        static const unsigned in_width = {in_width};
        static const unsigned n_filt = {n_filt};
        static const unsigned stride_height = {stride_height};
        static const unsigned stride_width = {stride_width};
        static const unsigned pool_height = {pool_height};
        static const unsigned pool_width = {pool_width};
        static const unsigned out_height = {out_height};
        static const unsigned out_width = {out_width};
        static const unsigned pad_top = {pad_top};
        static const unsigned pad_bottom = {pad_bottom};
        static const unsigned pad_left = {pad_left};
        static const unsigned pad_right = {pad_right};
        static const nnet::Pool_Op pool_op = nnet::{Op};
        static const unsigned reuse = {reuse};
        }};\n"""
for line in f.readlines():

    # Insert numbers
    if '//hls-fpga-machine-learning insert numbers' in line:
        newline = line
        newline += 'typedef {precision} accum_default_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
        newline += 'typedef {precision} weight_default_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
        newline += 'typedef {precision} bias_default_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
        newline += 'typedef {precision} input_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
        newline += 'typedef {precision} result_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
        if do_batchnorm:
            newline += 'typedef {precision} beta_default_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
            newline += 'typedef {precision} mean_default_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
            newline += 'typedef {precision} scale_default_t;\n'.format(precision=yamlConfig["DefaultPrecision"])
        for i in range(1, len(layer_list)+1):
            if i==1 and layer_list[i-1]['class_name']=='Dense':
                newline += '#define N_INPUTS {}\n'.format(layer_list[i-1]['n_in'])
                newline += '#define N_LAYER_1 {}\n'.format(layer_list[i-1]['n_out'])
            elif i==1 and layer_list[i-1]['class_name']=='BatchNormalization' and is_dense:
                newline += '#define N_INPUTS {}\n'.format(layer_list[i-1]['n_in'])
                newline += '#define N_LAYER_1 {}\n'.format(layer_list[i-1]['n_out'])
                newline += '#define N_FILT_1 {}\n'.format(layer_list[i-1]['n_filt'])
            elif i==1 and layer_list[i-1]['class_name']=='BatchNormalization' and is_conv2d:
                newline += '#define N_INPUTS {}\n'.format(layer_list[i-1]['n_in'])
                newline += '#define N_LAYER_{} {}\n'.format(i, layer_list[i-1]['n_out'])
                newline += '#define IN_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['in_height'])
                newline += '#define IN_WIDTH_{} {}\n'.format(i, layer_list[i-1]['in_width'])
                newline += '#define N_FILT_{} {}\n'.format(i, layer_list[i-1]['n_filt'])
            elif i==len(layer_list) and layer_list[i-1]['class_name']=='Dense':
                newline += '#define N_OUTPUTS {}\n'.format(layer_list[i-1]['n_out'])
            elif i==len(layer_list) and layer_list[i-1]['class_name'] in activation_layers:
                newline += '#define N_OUTPUTS {}\n'.format(layer_list[i-2]['n_out'])
            elif i==len(layer_list) and layer_list[i-1]['class_name']=='BatchNormalization':
                newline += '#define N_OUTPUTS {}\n'.format(layer_list[i-1]['n_out'])
                newline += '#define N_FILT_{} {}\n'.format(i-1, layer_list[i-1]['n_filt'])
            elif layer_list[i-1]['class_name']=='Dense':
                newline += '#define N_LAYER_{} {}\n'.format(i, layer_list[i-1]['n_out'])
            elif is_dense and layer_list[i-1]['class_name']=='BatchNormalization':
                newline += '#define N_LAYER_{} {}\n'.format(i, layer_list[i-1]['n_out'])
                newline += '#define N_FILT_{} {}\n'.format(i-1, layer_list[i-1]['n_filt'])
            elif layer_list[i-1]['class_name'] in activation_layers:
                newline += '#define N_LAYER_{} {}\n'.format(i, layer_list[i-2]['n_out'])
            elif layer_list[i-1]['class_name']=='Conv1D':
                newline += '#define Y_INPUTS_{} {}\n'.format(i, layer_list[i-1]['y_in'])
                newline += '#define N_CHAN_{} {}\n'.format(i, layer_list[i-1]['n_chan'])
                newline += '#define Y_OUTPUTS_{} {}\n'.format(i, layer_list[i-1]['y_out'])
                newline += '#define N_FILT_{} {}\n'.format(i, layer_list[i-1]['n_filt'])
            elif layer_list[i-1]['class_name']=='Conv2D':
                newline += '#define IN_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['in_height'])
                newline += '#define IN_WIDTH_{} {}\n'.format(i, layer_list[i-1]['in_width'])
                newline += '#define N_CHAN_{} {}\n'.format(i, layer_list[i-1]['n_chan'])
                newline += '#define OUT_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['out_height'])
                newline += '#define OUT_WIDTH_{} {}\n'.format(i, layer_list[i-1]['out_width'])
                newline += '#define N_FILT_{} {}\n'.format(i, layer_list[i-1]['n_filt'])
            elif layer_list[i-1]['class_name']=='BatchNormalization' and is_conv2d:
                newline += '#define N_LAYER_{} {}\n'.format(i, layer_list[i-1]['n_out'])
                newline += '#define OUT_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['in_height'])
                newline += '#define OUT_WIDTH_{} {}\n'.format(i, layer_list[i-1]['in_width'])
                newline += '#define N_FILT_{} {}\n'.format(i, layer_list[i-1]['n_filt'])
            elif 'Pooling' in layer_list[i-1]['class_name']:
                info = layer_list[i-1]['class_name'].split('Pooling')
                d = int(info[1].split('D')[0])
                op = info[0]
                if d == 1:
                    newline += '#define Y_INPUTS_{} {}\n'.format(i, layer_list[i-1]['y_in'])
                    newline += '#define Y_OUTPUTS_{} {}\n'.format(i, layer_list[i-1]['y_out'])
                    newline += '#define POOL_SIZE_{} {}\n'.format(i, layer_list[i-1]['pool_size'])
                elif d == 2:
                    newline += '#define IN_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['in_height'])
                    newline += '#define IN_WIDTH_{} {}\n'.format(i, layer_list[i-1]['in_width'])
                    newline += '#define OUT_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['out_height'])
                    newline += '#define OUT_WIDTH_{} {}\n'.format(i, layer_list[i-1]['out_width'])
                    newline += '#define POOL_HEIGHT_{} {}\n'.format(i, layer_list[i-1]['pool_height'])
                    newline += '#define POOL_WIDTH_{} {}\n'.format(i, layer_list[i-1]['pool_width'])
                    newline += '#define N_FILT_{} {}\n'.format(i, layer_list[i-1]['n_filt'])
                    newline += '#define N_LAYER_{} {}\n'.format(i, layer_list[i-1]['n_out'])
    elif '//hls-fpga-machine-learning insert layer-precision' in line:
        newline = line
        for i in range(1, len(layer_list)):
            # if layer_list[i-1]['class_name']=='Dense':
            #     newline += 'typedef {precision} layer{index}_t;\n'.format(precision=yamlConfig["DefaultPrecision"], index=i)
            newline += 'typedef {precision} layer{index}_t;\n'.format(precision=yamlConfig["DefaultPrecision"], index=i)
    elif "//hls-fpga-machine-learning insert layer-config" in line:
        newline = line
        for i in range(1, len(layer_list)+1):
            # Select the dimension macro names for this layer. Note: the first
            # branch matches any first-layer BatchNormalization, so the two
            # i==1 BatchNormalization branches that follow it are shadowed and
            # never taken.
            if i==1 and (layer_list[i-1]['class_name']=='Dense' or layer_list[i-1]['class_name']=='BatchNormalization'):
                layer_in_name = "N_INPUTS"
                layer_out_name = "N_LAYER_1"
                layer_n_filt_name = "N_FILT_1"
            elif i==1 and layer_list[i-1]['class_name']=='BatchNormalization' and is_conv2d:
                layer_in_name = "IN_HEIGHT_{}*IN_WIDTH_{}*N_FILT_{}".format(i, i, i)
                layer_out_name = "N_LAYER_1"
                layer_n_filt_name = "N_FILT_{}".format(i)
            elif i==1 and layer_list[i-1]['class_name']=='BatchNormalization' and is_dense:
                layer_in_name = "N_INPUTS"
                layer_out_name = "N_LAYER_1"
                layer_n_filt_name = "N_FILT_{}".format(i)
            elif is_dense and layer_list[i-1]['class_name']=='BatchNormalization':
                layer_in_name = "N_LAYER_{}".format(i-1)
                layer_out_name = "N_LAYER_{}".format(i)
                layer_n_filt_name = "N_FILT_{}".format(i-1)
            elif i==len(layer_list) and layer_list[i-1]['class_name']=='Dense' and layer_list[i-2]['class_name']=='Conv1D':
                layer_in_name = "Y_OUTPUTS_{}*N_FILT_{}".format(i-1, i-1)
                layer_out_name = "N_OUTPUTS"
            elif i==len(layer_list) and layer_list[i-1]['class_name']=='Dense' and layer_list[i-2]['class_name']=='Conv2D':
                layer_in_name = "OUT_HEIGHT_{}*OUT_WIDTH_{}*N_FILT_{}".format(i-1, i-1, i-1)
                layer_out_name = "N_OUTPUTS"
            elif layer_list[i-1]['class_name']=='Dense' and layer_list[i-2]['class_name']=='Conv1D':
                layer_in_name = "Y_OUTPUTS_{}*N_FILT_{}".format(i-1, i-1)
                layer_out_name = "N_LAYER_{}".format(i)
            elif layer_list[i-1]['class_name']=='Dense' and layer_list[i-2]['class_name']=='Conv2D':
                layer_in_name = "OUT_HEIGHT_{}*OUT_WIDTH_{}*N_FILT_{}".format(i-1, i-1, i-1)
                layer_out_name = "N_LAYER_{}".format(i)
            elif i==len(layer_list) and (layer_list[i-1]['class_name']=='Dense' or (is_dense and layer_list[i-1]['class_name'] in activation_layers) or (is_dense and layer_list[i-1]['class_name']=='BatchNormalization')):
                layer_in_name = "N_LAYER_{}".format(i-1)
                layer_out_name = "N_OUTPUTS"
            elif layer_list[i-1]['class_name']=='Dense' or (is_dense and layer_list[i-1]['class_name'] in activation_layers):
                layer_in_name = "N_LAYER_{}".format(i-1)
                layer_out_name = "N_LAYER_{}".format(i)
            elif layer_list[i-1]['class_name']=='Conv1D':
                layer_y_in_name = "Y_INPUTS_{}".format(i)
                layer_n_chan_name = "N_CHAN_{}".format(i)
                layer_y_out_name = "Y_OUTPUTS_{}".format(i)
                layer_n_filt_name = "N_FILT_{}".format(i)
            elif layer_list[i-1]['class_name']=='Conv2D':  # or (is_conv2d and layer_list[i-1]['class_name']=='BatchNormalization'):
                layer_in_height_name = "IN_HEIGHT_{}".format(i)
                layer_in_width_name = "IN_WIDTH_{}".format(i)
                layer_n_chan_name = "N_CHAN_{}".format(i)
                layer_out_height_name = "OUT_HEIGHT_{}".format(i)
                layer_out_width_name = "OUT_WIDTH_{}".format(i)
                layer_n_filt_name = "N_FILT_{}".format(i)
                layer_in_name = "N_LAYER_{}".format(i-1)
            elif is_conv2d and layer_list[i-1]['class_name']=='BatchNormalization':
                layer_in_name = "OUT_HEIGHT_{}*OUT_WIDTH_{}*N_FILT_{}".format(i-1, i-1, i-1)
                layer_out_name = "N_LAYER_{}".format(i)
                layer_n_filt_name = "N_FILT_{}".format(i-1)
            elif 'Pooling' in layer_list[i-1]['class_name']:
                info = layer_list[i-1]['class_name'].split('Pooling')
                d = int(info[1].split('D')[0])
                op = info[0]
                if d == 1:
                    layer_y_in_name = "Y_INPUTS_{}".format(i)
                    layer_y_out_name = "Y_OUTPUTS_{}".format(i)
                    layer_n_filt_name = "N_FILT_{}".format(i)
                elif d == 2:
                    layer_in_height_name = "IN_HEIGHT_{}".format(i)
                    layer_in_width_name = "IN_WIDTH_{}".format(i)
                    layer_out_height_name = "OUT_HEIGHT_{}".format(i)
                    layer_out_width_name = "OUT_WIDTH_{}".format(i)
                    layer_n_filt_name = "N_FILT_{}".format(i)
                    layer_in_name = "N_LAYER_{}".format(i-1)
            if layer_list[i-1]['class_name']=='Dense':
                if layer_list[i-1]['n_part']==1:
                    newline += dense_config_template.format(
                        index=str(i),
                        n_in=layer_in_name,
                        n_out=layer_out_name,
                        iotype=yamlConfig["IOType"],
                        reuse=yamlConfig["ReuseFactor"],
                        nzeros=layer_list[i-1]['weights_n_zeros'])
                else:
                    for i_part in range(0, layer_list[i-1]['n_part']):
                        newline += dense_sub_config_template.format(
                            index=str(i),
                            i_part=i_part,
                            n_in=layer_in_name,
                            n_out=layer_list[i-1]['n_subout'][i_part],
                            iotype=yamlConfig["IOType"],
                            reuse=yamlConfig["ReuseFactor"],
                            nzeros=layer_list[i-1]['weights_n_subzeros'][i_part])
                newline += activ_config_template.format(
                    type=layer_list[i-1]['activation'],
                    index=str(i),
                    n_in=layer_out_name,
                    iotype=yamlConfig["IOType"])
            elif layer_list[i-1]['class_name']=='BatchNormalization':
                newline += batchnorm_config_template.format(
                    index=str(i),
                    n_in=layer_in_name,
                    n_out=layer_out_name,
                    n_filt=layer_n_filt_name,
                    iotype=yamlConfig["IOType"],
                    reuse=yamlConfig["ReuseFactor"])
            elif layer_list[i-1]['class_name'] in activation_layers:
                newline += activ_config_template.format(
                    type=layer_list[i-1]['activation'],
                    index=str(i),
                    n_in=layer_out_name,
                    iotype=yamlConfig["IOType"])
            elif layer_list[i-1]['class_name']=='Conv1D':
                newline += conv_config_template.format(
                    index=str(i),
                    pad_left=layer_list[i-1]['pad_left'],
                    pad_right=layer_list[i-1]['pad_right'],
                    y_in=layer_y_in_name,
                    n_chan=layer_n_chan_name,
                    y_out=layer_y_out_name,
                    n_filt=layer_n_filt_name,
                    y_filt=layer_list[i-1]['y_filt'],
                    stride=layer_list[i-1]['stride'],
                    iotype=yamlConfig["IOType"],
                    reuse=yamlConfig["ReuseFactor"],
                    nzeros=layer_list[i-1]['weights_n_zeros'])
                newline += activ_config_template.format(
                    type=layer_list[i-1]['activation'],
                    index=str(i),
                    n_in='{}*{}'.format(layer_y_out_name, layer_n_filt_name),
                    iotype=yamlConfig["IOType"])
            elif layer_list[i-1]['class_name']=='Conv2D':
                newline += conv2d_config_template.format(
                    index=str(i),
                    pad_top=layer_list[i-1]['pad_top'],
                    pad_bottom=layer_list[i-1]['pad_bottom'],
                    pad_left=layer_list[i-1]['pad_left'],
                    pad_right=layer_list[i-1]['pad_right'],
                    in_height=layer_in_height_name,
                    in_width=layer_in_width_name,
                    n_chan=layer_n_chan_name,
                    out_height=layer_out_height_name,
                    out_width=layer_out_width_name,
                    n_filt=layer_n_filt_name,
                    filt_height=layer_list[i-1]['filt_height'],
                    filt_width=layer_list[i-1]['filt_width'],
                    stride_height=layer_list[i-1]['stride_height'],
                    stride_width=layer_list[i-1]['stride_width'],
                    iotype=yamlConfig["IOType"],
                    reuse=yamlConfig["ReuseFactor"],
                    nzeros=layer_list[i-1]['weights_n_zeros'])
                newline += activ_config_template.format(
                    type=layer_list[i-1]['activation'],
                    index=str(i),
                    n_in='{}*{}*{}'.format(layer_out_height_name, layer_out_width_name, layer_n_filt_name),
                    iotype=yamlConfig["IOType"])
            elif 'Pooling' in layer_list[i-1]['class_name']:
                info = layer_list[i-1]['class_name'].split('Pooling')
                d = int(info[1].split('D')[0])
                op = info[0]
                if d == 1:
                    newline += pooling1d_config_template.format(
                        index=str(i),
                        n_in=layer_y_in_name,
                        n_out=layer_y_out_name,
                        stride=layer_list[i-1]['stride'],
                        pool_size=layer_list[i-1]['pool_size'],
                        pad_left=layer_list[i-1]['pad_left'],
                        pad_right=layer_list[i-1]['pad_right'],
                        Op=op)
                elif d == 2:
                    newline += pooling2d_config_template.format(
                        index=str(i),
                        in_height=layer_in_height_name,
                        in_width=layer_in_width_name,
                        out_height=layer_out_height_name,
                        out_width=layer_out_width_name,
                        n_filt=layer_n_filt_name,
                        stride_height=layer_list[i-1]['stride_height'],
                        stride_width=layer_list[i-1]['stride_width'],
                        pool_height=layer_list[i-1]['pool_height'],
                        pool_width=layer_list[i-1]['pool_width'],
                        pad_left=layer_list[i-1]['pad_left'],
                        pad_right=layer_list[i-1]['pad_right'],
                        pad_top=layer_list[i-1]['pad_top'],
                        pad_bottom=layer_list[i-1]['pad_bottom'],
                        Op=op,
                        reuse=yamlConfig["ReuseFactor"])
    else:
        newline = line
    fout.write(newline)
f.close()
fout.close()
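The loop above re-emits the template file line by line, appending generated code after each `//hls-fpga-machine-learning insert ...` marker. A minimal, self-contained sketch of that substitution pattern (toy template and typedef list, not the real hls4ml templates):

```python
def expand_template(template_lines, precisions):
    # Copy every template line through unchanged, appending one typedef
    # per (name, precision) pair right after the "insert numbers" marker.
    out = []
    for line in template_lines:
        out.append(line)
        if '//hls-fpga-machine-learning insert numbers' in line:
            for name, prec in precisions:
                out.append('typedef {} {};\n'.format(prec, name))
    return out

lines = ['#include "parameters.h"\n',
         '//hls-fpga-machine-learning insert numbers\n',
         'void myproject();\n']
expanded = expand_template(lines, [('input_t', 'ap_fixed<16,6>')])
```

The same marker-plus-append idea drives every section of the writer; only the generated payload differs.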
###################
## test bench
###################
f = open(os.path.join(filedir,'../hls-template/myproject_test.cpp'),'r')
fout = open('{}/{}_test.cpp'.format(yamlConfig['OutputDir'], yamlConfig['ProjectName']),'w')

for line in f.readlines():

    # Insert numbers
    if 'myproject' in line:
        newline = line.replace('myproject', yamlConfig['ProjectName'])
    elif '//hls-fpga-machine-learning insert data' in line and (layer_list[0]['class_name']=='Dense' or (is_dense and layer_list[0]['class_name']=='BatchNormalization')):
        newline = line
        newline += '  input_t data_str[N_INPUTS] = {'
        for i in range(0, layer_list[0]['n_in']-1):
            newline += '0,'
        newline += '0};\n'
    elif '//hls-fpga-machine-learning insert data' in line and layer_list[0]['class_name']=='Conv1D':
        newline = line
        newline += '  input_t data_str[Y_INPUTS_1][N_CHAN_1] = {'
        for i in range(0, layer_list[0]['y_in']*layer_list[0]['n_chan']-1):
            newline += '0,'
        newline += '0};\n'
    elif '//hls-fpga-machine-learning insert data' in line and layer_list[0]['class_name']=='Conv2D':
        newline = line
        newline += '  input_t data_str[IN_HEIGHT_1][IN_WIDTH_1][N_CHAN_1] = {'
        for i in range(0, layer_list[0]['in_height']*layer_list[0]['in_width']*layer_list[0]['n_chan']-1):
            newline += '0,'
        newline += '0};\n'
    elif '//hls-fpga-machine-learning insert data' in line and is_conv2d and layer_list[0]['class_name']=='BatchNormalization':
        newline = line
        newline += '  input_t data_str[IN_HEIGHT_1][IN_WIDTH_1][N_FILT_1] = {'
        for i in range(0, layer_list[0]['in_height']*layer_list[0]['in_width']*layer_list[0]['n_filt']-1):
            newline += '0,'
        newline += '0};\n'
    else:
        newline = line
    fout.write(newline)
f.close()
fout.close()
#######################
## myproject.h
#######################
f = open(os.path.join(filedir,'../hls-template/firmware/myproject.h'),'r')
fout = open('{}/firmware/{}.h'.format(yamlConfig['OutputDir'], yamlConfig['ProjectName']),'w')

for line in f.readlines():
    if 'MYPROJECT' in line:
        newline = line.replace('MYPROJECT', format(yamlConfig['ProjectName'].upper()))
    elif 'void myproject(' in line:
        newline = 'void {}(\n'.format(yamlConfig['ProjectName'])
    elif 'input_t data[N_INPUTS]' in line and layer_list[0]['class_name']=='Conv1D':
        newline = line.replace('input_t data[N_INPUTS]','input_t data[Y_INPUTS_1][N_CHAN_1]')
    elif 'input_t data[N_INPUTS]' in line and layer_list[0]['class_name']=='Conv2D':
        newline = line.replace('input_t data[N_INPUTS]','input_t data[IN_HEIGHT_1][IN_WIDTH_1][N_CHAN_1]')
    elif 'input_t data[N_INPUTS]' in line and layer_list[0]['class_name']=='BatchNormalization' and is_conv2d:
        newline = line.replace('input_t data[N_INPUTS]','input_t data[IN_HEIGHT_1][IN_WIDTH_1][N_FILT_1]')
    elif '#endif' in line:
        # Emit any sublayer declarations before closing the include guard;
        # build the text in newline so it is written exactly once below.
        newline = ''
        for sublayerline_h in sublayerlines_h:
            newline += sublayerline_h
        newline += '\n#endif\n'
    else:
        newline = line
    fout.write(newline)
f.close()
fout.close()
#######################
## build_prj.tcl
#######################
nnetdir = os.path.abspath(os.path.join(filedir, "../nnet_utils"))
relpath = os.path.relpath(nnetdir, start=yamlConfig['OutputDir'])
relpath = relpath.replace("\\", "\\\\")

f = open(os.path.join(filedir,'../hls-template/build_prj.tcl'),'r')
fout = open('{}/build_prj.tcl'.format(yamlConfig['OutputDir']),'w')

for line in f.readlines():
    line = line.replace('myproject', yamlConfig['ProjectName'])
    line = line.replace('nnet_utils', relpath)
    if 'set_part {xc7vx690tffg1927-2}' in line:
        line = 'set_part {{{}}}\n'.format(yamlConfig['XilinxPart'])
    elif 'create_clock -period 5 -name default' in line:
        line = 'create_clock -period {} -name default\n'.format(yamlConfig['ClockPeriod'])
    fout.write(line)
f.close()
fout.close()

###################
# Tarball output
###################
with tarfile.open(yamlConfig['OutputDir'] + '.tar.gz', mode='w:gz') as archive:
    archive.add(yamlConfig['OutputDir'], recursive=True)
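The build_prj.tcl rewriting above is a plain substitute-and-override pass over the template script. A standalone sketch of that pass (hypothetical helper, with toy part/clock values):

```python
def rewrite_tcl(lines, project, part, clock_period):
    # Substitute the project name everywhere, then override the template's
    # default set_part/create_clock directives with the configured values.
    out = []
    for line in lines:
        line = line.replace('myproject', project)
        if 'set_part {xc7vx690tffg1927-2}' in line:
            line = 'set_part {{{}}}\n'.format(part)
        elif 'create_clock -period 5 -name default' in line:
            line = 'create_clock -period {} -name default\n'.format(clock_period)
        out.append(line)
    return out

tcl = rewrite_tcl(['open_project myproject_prj\n',
                   'set_part {xc7vx690tffg1927-2}\n',
                   'create_clock -period 5 -name default\n'],
                  'mynet', 'xcku115-flvb2104-2-i', 10)
```

Note the `{{{}}}` format string: doubled braces emit literal `{`/`}` around the substituted part name.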
#######################################
## Config module
#######################################
def parse_config(config_file):
    print("Loading configuration from", config_file)
    with open(config_file, 'r') as config:
        return yaml.load(config, Loader=yaml.Loader)
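`parse_config` just loads the YAML into a plain dict; the writer then indexes keys such as `OutputDir`, `ProjectName`, `XilinxPart`, `ClockPeriod`, `IOType`, `ReuseFactor`, and `DefaultPrecision`. A hypothetical minimal configuration, shown as the dict the loader would return (example values only):

```python
# Stand-in for the dict parse_config returns from the YAML file.
yamlConfig = {
    'OutputDir': 'my-hls-test',            # directory the project is generated into
    'ProjectName': 'myproject',            # base name for the generated sources
    'XilinxPart': 'xcku115-flvb2104-2-i',  # substituted into set_part in build_prj.tcl
    'ClockPeriod': 5,                      # ns, substituted into create_clock
    'IOType': 'io_parallel',
    'ReuseFactor': 1,
    'DefaultPrecision': 'ap_fixed<16,6>',  # used for all typedefs in parameters.h
}
```

Every key above is read somewhere in the writer code in this file, so a config missing any of them would raise a `KeyError` at generation time.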
#######################################
## Print a bias or weight array to C++
#######################################
def print_array_to_cpp(name, a, odir, i_part=0, n_part=1, i_subout=0, n_subout=1):
    # put output in subdir for tarballing later;
    # check if we're doing a sublayer
    if n_part > 1:
        f = open("{}/firmware/weights/{}_{}.h".format(odir, name, i_part), "w")
        if len(a.shape) == 2:  # dense weight
            a = a[:, i_subout:i_subout+n_subout]
        elif len(a.shape) == 1:  # bias
            a = a[i_subout:i_subout+n_subout]
    else:
        f = open("{}/firmware/weights/{}.h".format(odir, name), "w")

    # count zeros
    zero_ctr = 0
    for x in np.nditer(a, order='C'):
        if x == 0:
            zero_ctr += 1

    # meta data
    f.write("//Numpy array shape {}\n".format(a.shape))
    f.write("//Min {:.12f}\n".format(np.min(a)))
    f.write("//Max {:.12f}\n".format(np.max(a)))
    f.write("//Number of zeros {}\n".format(zero_ctr))
    f.write("\n")

    # c++ variable
    if re.match(r"^w\d*$", name) or re.match(r"^a\d*$", name):
        if n_part > 1:
            f.write("weight_default_t {}_{}".format(name, i_part))
        else:
            f.write("weight_default_t {}".format(name))
    elif re.match(r"^b\d*$", name):
        if n_part > 1:
            f.write("bias_default_t {}_{}".format(name, i_part))
        else:
            f.write("bias_default_t {}".format(name))
    elif re.match(r"^beta\d*$", name):
        f.write("beta_default_t {}".format(name))
    elif re.match(r"^mean\d*$", name):
        f.write("mean_default_t {}".format(name))
    elif re.match(r"^scale\d*$", name):
        f.write("scale_default_t {}".format(name))
    else:
        raise Exception('ERROR: Unknown weights type')

    # HLS doesn't like 3D arrays... unrolling to 1D;
    # also doing this for all (including 2D) arrays now
    f.write("[{}]".format(np.prod(a.shape)))
    f.write(" = {")

    # fill the c++ array, not including internal brackets
    # for the multidimensional case
    i = 0
    for x in np.nditer(a, order='C'):
        if i == 0:
            f.write("%.12f" % x)
        else:
            f.write(", %.12f" % x)
        i = i + 1
    f.write("};\n")
    f.close()
    return zero_ctr
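The function above flattens any weight tensor into a 1-D C array initializer and counts its zeros. A dependency-free sketch of the same formatting rule (hypothetical helper, plain lists instead of numpy):

```python
def format_weight_array(name, values):
    # Emit a 1-D C initializer the way print_array_to_cpp does,
    # counting zero entries along the way.
    zero_ctr = sum(1 for v in values if v == 0)
    body = ", ".join("%.12f" % v for v in values)
    decl = "weight_default_t {}[{}] = {{{}}};".format(name, len(values), body)
    return decl, zero_ctr

decl, zeros = format_weight_array('w1', [0.5, 0.0, -1.25, 0.0])
```

The zero count mirrors `weights_n_zeros`, which the writer feeds back into the layer config templates as `nzeros`.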
8e48b3e32ce540a8b8786ac0436dba73e0383c16 | 40 | py | Python | jupyterlabpymolpysnips/LabelFormat/sigdist.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | ["MIT"] | stars: null | issues: null | forks: null
cmd.do('set label_distance_digits, 2;')
f3fe0b0a63c8b5c8338f561f4acf42f7903f0f8e | 144 | py | Python | Rig/__init__.py | jazzboysc/SERiggingTools | 41289589b88bc812240f6f87359456dbc1a209cd | ["MIT"] | stars: 4 (2020-06-10 to 2021-04-22) | issues: null | forks: null
import SERigSpineComponent
import SERigBipedLimbComponent
import SERigBipedNeckComponent
import SERigHumanFacialComponent
import SERigCustomData
6d0e94804ef3f903919f7dee0dbde9620d167311 | 12,051 | py | Python | cloudmersive_image_api_client/api/resize_api.py | Cloudmersive/Cloudmersive.APIClient.Python.ImageRecognition | 280666acc0b34d905ff54fe2aaec1768a0a3d0e7 | ["Apache-2.0"] | stars: 1 (2018-06-24) | issues: null | forks: null
# coding: utf-8
"""
    imageapi

    Image Recognition and Processing APIs let you use Machine Learning to recognize and process images, and also perform useful image modification operations.  # noqa: E501

    OpenAPI spec version: v1
    Generated by: https://github.com/swagger-api/swagger-codegen.git
"""

from __future__ import absolute_import

import re  # noqa: F401

# python 2 and python 3 compatibility library
import six

from cloudmersive_image_api_client.api_client import ApiClient


class ResizeApi(object):
    """NOTE: This class is auto generated by the swagger code generator program.

    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client

    def resize_post(self, max_width, max_height, image_file, **kwargs):  # noqa: E501
        """Resize an image while preserving aspect ratio  # noqa: E501

        Resize an image to a maximum width and maximum height, while preserving the image's original aspect ratio. Resize is EXIF-aware.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.resize_post(max_width, max_height, image_file, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int max_width: Maximum width of the output image - final image will be as large as possible while less than or equal to this width (required)
        :param int max_height: Maximum height of the output image - final image will be as large as possible while less than or equal to this height (required)
        :param file image_file: Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.resize_post_with_http_info(max_width, max_height, image_file, **kwargs)  # noqa: E501
        else:
            (data) = self.resize_post_with_http_info(max_width, max_height, image_file, **kwargs)  # noqa: E501
            return data

    def resize_post_with_http_info(self, max_width, max_height, image_file, **kwargs):  # noqa: E501
        """Resize an image while preserving aspect ratio  # noqa: E501

        Resize an image to a maximum width and maximum height, while preserving the image's original aspect ratio. Resize is EXIF-aware.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.resize_post_with_http_info(max_width, max_height, image_file, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int max_width: Maximum width of the output image - final image will be as large as possible while less than or equal to this width (required)
        :param int max_height: Maximum height of the output image - final image will be as large as possible while less than or equal to this height (required)
        :param file image_file: Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['max_width', 'max_height', 'image_file']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method resize_post" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'max_width' is set
        if ('max_width' not in params or
                params['max_width'] is None):
            raise ValueError("Missing the required parameter `max_width` when calling `resize_post`")  # noqa: E501
        # verify the required parameter 'max_height' is set
        if ('max_height' not in params or
                params['max_height'] is None):
            raise ValueError("Missing the required parameter `max_height` when calling `resize_post`")  # noqa: E501
        # verify the required parameter 'image_file' is set
        if ('image_file' not in params or
                params['image_file'] is None):
            raise ValueError("Missing the required parameter `image_file` when calling `resize_post`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'max_width' in params:
            path_params['maxWidth'] = params['max_width']  # noqa: E501
        if 'max_height' in params:
            path_params['maxHeight'] = params['max_height']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'image_file' in params:
            local_var_files['imageFile'] = params['image_file']  # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/octet-stream'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['multipart/form-data'])  # noqa: E501

        # Authentication setting
        auth_settings = ['Apikey']  # noqa: E501

        return self.api_client.call_api(
            '/image/resize/preserveAspectRatio/{maxWidth}/{maxHeight}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='str',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def resize_resize_simple(self, width, height, image_file, **kwargs):  # noqa: E501
        """Resize an image  # noqa: E501

        Resize an image to a specific width and specific height. Resize is EXIF-aware.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.resize_resize_simple(width, height, image_file, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int width: Width of the output image - final image will be exactly this width (required)
        :param int height: Height of the output image - final image will be exactly this height (required)
        :param file image_file: Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.resize_resize_simple_with_http_info(width, height, image_file, **kwargs)  # noqa: E501
        else:
            (data) = self.resize_resize_simple_with_http_info(width, height, image_file, **kwargs)  # noqa: E501
            return data

    def resize_resize_simple_with_http_info(self, width, height, image_file, **kwargs):  # noqa: E501
        """Resize an image  # noqa: E501

        Resize an image to a specific width and specific height. Resize is EXIF-aware.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.resize_resize_simple_with_http_info(width, height, image_file, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int width: Width of the output image - final image will be exactly this width (required)
        :param int height: Height of the output image - final image will be exactly this height (required)
        :param file image_file: Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['width', 'height', 'image_file']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method resize_resize_simple" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'width' is set
        if ('width' not in params or
                params['width'] is None):
            raise ValueError("Missing the required parameter `width` when calling `resize_resize_simple`")  # noqa: E501
        # verify the required parameter 'height' is set
        if ('height' not in params or
                params['height'] is None):
            raise ValueError("Missing the required parameter `height` when calling `resize_resize_simple`")  # noqa: E501
        # verify the required parameter 'image_file' is set
        if ('image_file' not in params or
                params['image_file'] is None):
            raise ValueError("Missing the required parameter `image_file` when calling `resize_resize_simple`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'width' in params:
            path_params['width'] = params['width']  # noqa: E501
        if 'height' in params:
            path_params['height'] = params['height']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'image_file' in params:
            local_var_files['imageFile'] = params['image_file']  # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/octet-stream'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['multipart/form-data'])  # noqa: E501

        # Authentication setting
        auth_settings = ['Apikey']  # noqa: E501

        return self.api_client.call_api(
            '/image/resize/target/{width}/{height}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='str',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
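Every generated endpoint above follows the same validate-then-dispatch shape: collect keyword arguments, reject unknown ones with `TypeError`, and require each declared parameter with `ValueError` before calling `call_api`. A standalone sketch of just that validation step (hypothetical helper, not part of the generated client):

```python
def check_params(required, allowed, **kwargs):
    # Mirror the generated guard clauses: unknown keywords raise
    # TypeError, missing required parameters raise ValueError.
    for key in kwargs:
        if key not in allowed:
            raise TypeError("Got an unexpected keyword argument '%s'" % key)
    for name in required:
        if kwargs.get(name) is None:
            raise ValueError("Missing the required parameter `%s`" % name)
    return kwargs

ok = check_params(['max_width'], ['max_width', 'async_req'], max_width=128)
```

Validating eagerly like this surfaces caller mistakes before any HTTP request is attempted.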
6d367636a287debacfea0dfca2dc7061254737b0 | 373 | py | Python | cumulusci/tasks/github/__init__.py | davisagli/CumulusCI | fd74c324ad3ff662484b159395c639879011e711 | ["BSD-3-Clause"] | stars: 163 (2018-09-13 to 2022-03-25) | issues: 1,280 (2018-09-11 to 2022-03-31) | forks: 125 (2015-01-17 to 2018-09-06)
from cumulusci.tasks.github.merge import MergeBranch
from cumulusci.tasks.github.pull_request import PullRequests
from cumulusci.tasks.github.release import CreateRelease
from cumulusci.tasks.github.release_report import ReleaseReport
from cumulusci.tasks.github.tag import CloneTag
__all__ = ("MergeBranch", "PullRequests", "CreateRelease", "ReleaseReport", "CloneTag")
edc8c5089031d5c9426108a4055e9aefa35df01a | 17,910 | py | Python | lotlan_scheduler/parser/LoTLanLexer.py | iml130/lotlan-scheduler | b576f853706d614a918dccd9572cc2c2b666bbe4 | [
"Apache-2.0"
] | null | null | null | lotlan_scheduler/parser/LoTLanLexer.py | iml130/lotlan-scheduler | b576f853706d614a918dccd9572cc2c2b666bbe4 | [
"Apache-2.0"
] | null | null | null | lotlan_scheduler/parser/LoTLanLexer.py | iml130/lotlan-scheduler | b576f853706d614a918dccd9572cc2c2b666bbe4 | [
"Apache-2.0"
] | null | null | null | # Generated from LoTLanLexer.g4 by ANTLR 4.8
from antlr4 import *
from io import StringIO
from typing import TextIO
import sys
def serializedATN():
with StringIO() as buf:
buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\2/")
buf.write("\u01aa\b\1\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6")
buf.write("\4\7\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r")
buf.write("\t\r\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22")
buf.write("\4\23\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30")
buf.write("\t\30\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35")
buf.write("\4\36\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4")
buf.write("%\t%\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t")
buf.write("-\4.\t.\4/\t/\3\2\3\2\3\2\3\2\3\2\3\2\3\2\3\2\3\2\3\2")
buf.write("\3\2\3\2\3\2\3\2\3\3\3\3\3\3\3\3\3\3\3\3\3\3\3\3\3\4\3")
buf.write("\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\4")
buf.write("\3\4\3\4\3\4\3\4\3\4\3\4\3\4\3\5\3\5\3\5\3\5\3\5\3\6\6")
buf.write("\6\u0093\n\6\r\6\16\6\u0094\3\6\3\6\3\7\3\7\7\7\u009b")
buf.write("\n\7\f\7\16\7\u009e\13\7\3\7\3\7\3\b\7\b\u00a3\n\b\f\b")
buf.write("\16\b\u00a6\13\b\3\b\3\b\3\t\3\t\3\t\3\t\3\t\5\t\u00af")
buf.write("\n\t\3\n\3\n\6\n\u00b3\n\n\r\n\16\n\u00b4\3\n\3\n\3\13")
buf.write("\3\13\3\13\6\13\u00bc\n\13\r\13\16\13\u00bd\3\13\3\13")
buf.write("\3\13\3\13\3\f\3\f\3\f\3\f\3\f\3\f\3\r\3\r\3\r\7\r\u00cd")
buf.write("\n\r\f\r\16\r\u00d0\13\r\3\r\3\r\7\r\u00d4\n\r\f\r\16")
buf.write("\r\u00d7\13\r\3\16\3\16\3\16\3\16\3\16\3\16\3\16\3\16")
buf.write("\3\16\3\16\3\17\3\17\3\17\3\17\3\17\3\17\3\17\3\17\3\17")
buf.write("\3\17\3\17\3\17\3\20\3\20\3\20\3\20\3\20\3\20\3\20\3\20")
buf.write("\3\21\3\21\3\21\3\21\3\21\3\21\3\21\3\21\3\21\3\21\3\21")
buf.write("\3\21\3\21\3\22\3\22\3\22\3\22\3\22\3\22\3\22\3\22\3\22")
buf.write("\3\22\3\22\3\23\3\23\3\23\3\23\3\23\3\23\3\24\3\24\3\24")
buf.write("\3\24\3\25\3\25\3\25\3\25\3\25\3\25\3\25\3\25\3\26\3\26")
buf.write("\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\26")
buf.write("\3\27\3\27\3\27\3\27\3\27\3\27\3\27\3\27\3\27\3\27\3\27")
buf.write("\3\27\3\30\3\30\3\31\3\31\3\32\3\32\3\33\3\33\3\34\3\34")
buf.write("\3\35\3\35\3\36\3\36\3\36\3\37\3\37\3 \3 \3 \3!\3!\3!")
buf.write("\3\"\3\"\3\"\3#\3#\3#\3#\3$\3$\3$\3%\3%\3&\3&\3&\3&\3")
buf.write("&\3&\3&\3&\5&\u0165\n&\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3\'")
buf.write("\3\'\3\'\5\'\u0171\n\'\3(\3(\7(\u0175\n(\f(\16(\u0178")
buf.write("\13(\3)\3)\7)\u017c\n)\f)\16)\u017f\13)\3*\3*\6*\u0183")
buf.write("\n*\r*\16*\u0184\3*\3*\3+\3+\6+\u018b\n+\r+\16+\u018c")
buf.write("\3+\3+\3,\3,\3,\3-\6-\u0195\n-\r-\16-\u0196\3.\6.\u019a")
buf.write("\n.\r.\16.\u019b\3.\3.\6.\u01a0\n.\r.\16.\u01a1\3/\6/")
buf.write("\u01a5\n/\r/\16/\u01a6\3/\3/\2\2\60\4\3\6\4\b\5\n\6\f")
buf.write("\7\16\b\20\t\22\2\24\n\26\13\30\f\32\r\34\16\36\17 \20")
buf.write("\"\21$\22&\23(\24*\25,\26.\27\60\30\62\31\64\32\66\33")
buf.write("8\34:\35<\36>\37@ B!D\"F#H$J%L&N\'P(R)T*V+X,Z-\\.^/\4")
buf.write("\2\3\13\5\2\13\f\17\17\"\"\3\2\f\f\4\2\13\13\"\"\5\2\13")
buf.write("\13\17\17\"\"\3\2c|\6\2\62;C\\aac|\3\2C\\\6\2\"\")),,")
buf.write("\61;\3\2\62;\2\u01b9\2\4\3\2\2\2\2\6\3\2\2\2\2\b\3\2\2")
buf.write("\2\2\n\3\2\2\2\2\f\3\2\2\2\2\16\3\2\2\2\3\20\3\2\2\2\3")
buf.write("\24\3\2\2\2\3\26\3\2\2\2\3\30\3\2\2\2\3\32\3\2\2\2\3\34")
buf.write("\3\2\2\2\3\36\3\2\2\2\3 \3\2\2\2\3\"\3\2\2\2\3$\3\2\2")
buf.write("\2\3&\3\2\2\2\3(\3\2\2\2\3*\3\2\2\2\3,\3\2\2\2\3.\3\2")
buf.write("\2\2\3\60\3\2\2\2\3\62\3\2\2\2\3\64\3\2\2\2\3\66\3\2\2")
buf.write("\2\38\3\2\2\2\3:\3\2\2\2\3<\3\2\2\2\3>\3\2\2\2\3@\3\2")
buf.write("\2\2\3B\3\2\2\2\3D\3\2\2\2\3F\3\2\2\2\3H\3\2\2\2\3J\3")
buf.write("\2\2\2\3L\3\2\2\2\3N\3\2\2\2\3P\3\2\2\2\3R\3\2\2\2\3T")
buf.write("\3\2\2\2\3V\3\2\2\2\3X\3\2\2\2\3Z\3\2\2\2\3\\\3\2\2\2")
buf.write("\3^\3\2\2\2\4`\3\2\2\2\6n\3\2\2\2\bv\3\2\2\2\n\u008c\3")
buf.write("\2\2\2\f\u0092\3\2\2\2\16\u0098\3\2\2\2\20\u00a4\3\2\2")
buf.write("\2\22\u00ae\3\2\2\2\24\u00b0\3\2\2\2\26\u00b8\3\2\2\2")
buf.write("\30\u00c3\3\2\2\2\32\u00c9\3\2\2\2\34\u00d8\3\2\2\2\36")
buf.write("\u00e2\3\2\2\2 \u00ee\3\2\2\2\"\u00f6\3\2\2\2$\u0103\3")
buf.write("\2\2\2&\u010e\3\2\2\2(\u0114\3\2\2\2*\u0118\3\2\2\2,\u0120")
buf.write("\3\2\2\2.\u012d\3\2\2\2\60\u0139\3\2\2\2\62\u013b\3\2")
buf.write("\2\2\64\u013d\3\2\2\2\66\u013f\3\2\2\28\u0141\3\2\2\2")
buf.write(":\u0143\3\2\2\2<\u0145\3\2\2\2>\u0148\3\2\2\2@\u014a\3")
buf.write("\2\2\2B\u014d\3\2\2\2D\u0150\3\2\2\2F\u0153\3\2\2\2H\u0157")
buf.write("\3\2\2\2J\u015a\3\2\2\2L\u0164\3\2\2\2N\u0170\3\2\2\2")
buf.write("P\u0172\3\2\2\2R\u0179\3\2\2\2T\u0180\3\2\2\2V\u0188\3")
buf.write("\2\2\2X\u0190\3\2\2\2Z\u0194\3\2\2\2\\\u0199\3\2\2\2^")
buf.write("\u01a4\3\2\2\2`a\7V\2\2ab\7g\2\2bc\7o\2\2cd\7r\2\2de\7")
buf.write("n\2\2ef\7c\2\2fg\7v\2\2gh\7g\2\2hi\7\"\2\2ij\3\2\2\2j")
buf.write("k\5R)\2kl\3\2\2\2lm\b\2\2\2m\5\3\2\2\2no\7V\2\2op\7c\2")
buf.write("\2pq\7u\2\2qr\7m\2\2rs\7\"\2\2st\3\2\2\2tu\b\3\2\2u\7")
buf.write("\3\2\2\2vw\7V\2\2wx\7t\2\2xy\7c\2\2yz\7p\2\2z{\7u\2\2")
buf.write("{|\7r\2\2|}\7q\2\2}~\7t\2\2~\177\7v\2\2\177\u0080\7Q\2")
buf.write("\2\u0080\u0081\7t\2\2\u0081\u0082\7f\2\2\u0082\u0083\7")
buf.write("g\2\2\u0083\u0084\7t\2\2\u0084\u0085\7U\2\2\u0085\u0086")
buf.write("\7v\2\2\u0086\u0087\7g\2\2\u0087\u0088\7r\2\2\u0088\u0089")
buf.write("\7\"\2\2\u0089\u008a\3\2\2\2\u008a\u008b\b\4\2\2\u008b")
buf.write("\t\3\2\2\2\u008c\u008d\5R)\2\u008d\u008e\7\"\2\2\u008e")
buf.write("\u008f\3\2\2\2\u008f\u0090\b\5\2\2\u0090\13\3\2\2\2\u0091")
buf.write("\u0093\t\2\2\2\u0092\u0091\3\2\2\2\u0093\u0094\3\2\2\2")
buf.write("\u0094\u0092\3\2\2\2\u0094\u0095\3\2\2\2\u0095\u0096\3")
buf.write("\2\2\2\u0096\u0097\b\6\3\2\u0097\r\3\2\2\2\u0098\u009c")
buf.write("\7%\2\2\u0099\u009b\n\3\2\2\u009a\u0099\3\2\2\2\u009b")
buf.write("\u009e\3\2\2\2\u009c\u009a\3\2\2\2\u009c\u009d\3\2\2\2")
buf.write("\u009d\u009f\3\2\2\2\u009e\u009c\3\2\2\2\u009f\u00a0\b")
buf.write("\7\3\2\u00a0\17\3\2\2\2\u00a1\u00a3\t\4\2\2\u00a2\u00a1")
buf.write("\3\2\2\2\u00a3\u00a6\3\2\2\2\u00a4\u00a2\3\2\2\2\u00a4")
buf.write("\u00a5\3\2\2\2\u00a5\u00a7\3\2\2\2\u00a6\u00a4\3\2\2\2")
buf.write("\u00a7\u00a8\7\f\2\2\u00a8\21\3\2\2\2\u00a9\u00aa\7\"")
buf.write("\2\2\u00aa\u00ab\7\"\2\2\u00ab\u00ac\7\"\2\2\u00ac\u00af")
buf.write("\7\"\2\2\u00ad\u00af\7\13\2\2\u00ae\u00a9\3\2\2\2\u00ae")
buf.write("\u00ad\3\2\2\2\u00af\23\3\2\2\2\u00b0\u00b2\7%\2\2\u00b1")
buf.write("\u00b3\n\3\2\2\u00b2\u00b1\3\2\2\2\u00b3\u00b4\3\2\2\2")
buf.write("\u00b4\u00b2\3\2\2\2\u00b4\u00b5\3\2\2\2\u00b5\u00b6\3")
buf.write("\2\2\2\u00b6\u00b7\b\n\4\2\u00b7\25\3\2\2\2\u00b8\u00b9")
buf.write("\5\22\t\2\u00b9\u00bb\7%\2\2\u00ba\u00bc\n\3\2\2\u00bb")
buf.write("\u00ba\3\2\2\2\u00bc\u00bd\3\2\2\2\u00bd\u00bb\3\2\2\2")
buf.write("\u00bd\u00be\3\2\2\2\u00be\u00bf\3\2\2\2\u00bf\u00c0\7")
buf.write("\f\2\2\u00c0\u00c1\3\2\2\2\u00c1\u00c2\b\13\4\2\u00c2")
buf.write("\27\3\2\2\2\u00c3\u00c4\7G\2\2\u00c4\u00c5\7p\2\2\u00c5")
buf.write("\u00c6\7f\2\2\u00c6\u00c7\3\2\2\2\u00c7\u00c8\b\f\5\2")
buf.write("\u00c8\31\3\2\2\2\u00c9\u00ca\5\22\t\2\u00ca\u00ce\5P")
buf.write("(\2\u00cb\u00cd\t\5\2\2\u00cc\u00cb\3\2\2\2\u00cd\u00d0")
buf.write("\3\2\2\2\u00ce\u00cc\3\2\2\2\u00ce\u00cf\3\2\2\2\u00cf")
buf.write("\u00d1\3\2\2\2\u00d0\u00ce\3\2\2\2\u00d1\u00d5\5\60\30")
buf.write("\2\u00d2\u00d4\t\5\2\2\u00d3\u00d2\3\2\2\2\u00d4\u00d7")
buf.write("\3\2\2\2\u00d5\u00d3\3\2\2\2\u00d5\u00d6\3\2\2\2\u00d6")
buf.write("\33\3\2\2\2\u00d7\u00d5\3\2\2\2\u00d8\u00d9\5\22\t\2\u00d9")
buf.write("\u00da\7N\2\2\u00da\u00db\7q\2\2\u00db\u00dc\7e\2\2\u00dc")
buf.write("\u00dd\7c\2\2\u00dd\u00de\7v\2\2\u00de\u00df\7k\2\2\u00df")
buf.write("\u00e0\7q\2\2\u00e0\u00e1\7p\2\2\u00e1\35\3\2\2\2\u00e2")
buf.write("\u00e3\5\22\t\2\u00e3\u00e4\7R\2\2\u00e4\u00e5\7c\2\2")
buf.write("\u00e5\u00e6\7t\2\2\u00e6\u00e7\7c\2\2\u00e7\u00e8\7o")
buf.write("\2\2\u00e8\u00e9\7g\2\2\u00e9\u00ea\7v\2\2\u00ea\u00eb")
buf.write("\7g\2\2\u00eb\u00ec\7t\2\2\u00ec\u00ed\7u\2\2\u00ed\37")
buf.write("\3\2\2\2\u00ee\u00ef\5\22\t\2\u00ef\u00f0\7T\2\2\u00f0")
buf.write("\u00f1\7g\2\2\u00f1\u00f2\7r\2\2\u00f2\u00f3\7g\2\2\u00f3")
buf.write("\u00f4\7c\2\2\u00f4\u00f5\7v\2\2\u00f5!\3\2\2\2\u00f6")
buf.write("\u00f7\5\22\t\2\u00f7\u00f8\7E\2\2\u00f8\u00f9\7q\2\2")
buf.write("\u00f9\u00fa\7p\2\2\u00fa\u00fb\7u\2\2\u00fb\u00fc\7v")
buf.write("\2\2\u00fc\u00fd\7t\2\2\u00fd\u00fe\7c\2\2\u00fe\u00ff")
buf.write("\7k\2\2\u00ff\u0100\7p\2\2\u0100\u0101\7v\2\2\u0101\u0102")
buf.write("\7u\2\2\u0102#\3\2\2\2\u0103\u0104\5\22\t\2\u0104\u0105")
buf.write("\7V\2\2\u0105\u0106\7t\2\2\u0106\u0107\7c\2\2\u0107\u0108")
buf.write("\7p\2\2\u0108\u0109\7u\2\2\u0109\u010a\7r\2\2\u010a\u010b")
buf.write("\7q\2\2\u010b\u010c\7t\2\2\u010c\u010d\7v\2\2\u010d%\3")
buf.write("\2\2\2\u010e\u010f\5\22\t\2\u010f\u0110\7H\2\2\u0110\u0111")
buf.write("\7t\2\2\u0111\u0112\7q\2\2\u0112\u0113\7o\2\2\u0113\'")
buf.write("\3\2\2\2\u0114\u0115\5\22\t\2\u0115\u0116\7V\2\2\u0116")
buf.write("\u0117\7q\2\2\u0117)\3\2\2\2\u0118\u0119\5\22\t\2\u0119")
buf.write("\u011a\7Q\2\2\u011a\u011b\7p\2\2\u011b\u011c\7F\2\2\u011c")
buf.write("\u011d\7q\2\2\u011d\u011e\7p\2\2\u011e\u011f\7g\2\2\u011f")
buf.write("+\3\2\2\2\u0120\u0121\5\22\t\2\u0121\u0122\7V\2\2\u0122")
buf.write("\u0123\7t\2\2\u0123\u0124\7k\2\2\u0124\u0125\7i\2\2\u0125")
buf.write("\u0126\7i\2\2\u0126\u0127\7g\2\2\u0127\u0128\7t\2\2\u0128")
buf.write("\u0129\7g\2\2\u0129\u012a\7f\2\2\u012a\u012b\7D\2\2\u012b")
buf.write("\u012c\7{\2\2\u012c-\3\2\2\2\u012d\u012e\5\22\t\2\u012e")
buf.write("\u012f\7H\2\2\u012f\u0130\7k\2\2\u0130\u0131\7p\2\2\u0131")
buf.write("\u0132\7k\2\2\u0132\u0133\7u\2\2\u0133\u0134\7j\2\2\u0134")
buf.write("\u0135\7g\2\2\u0135\u0136\7f\2\2\u0136\u0137\7D\2\2\u0137")
buf.write("\u0138\7{\2\2\u0138/\3\2\2\2\u0139\u013a\7?\2\2\u013a")
buf.write("\61\3\2\2\2\u013b\u013c\7.\2\2\u013c\63\3\2\2\2\u013d")
buf.write("\u013e\7\60\2\2\u013e\65\3\2\2\2\u013f\u0140\7*\2\2\u0140")
buf.write("\67\3\2\2\2\u0141\u0142\7+\2\2\u01429\3\2\2\2\u0143\u0144")
buf.write("\7>\2\2\u0144;\3\2\2\2\u0145\u0146\7>\2\2\u0146\u0147")
buf.write("\7?\2\2\u0147=\3\2\2\2\u0148\u0149\7@\2\2\u0149?\3\2\2")
buf.write("\2\u014a\u014b\7@\2\2\u014b\u014c\7?\2\2\u014cA\3\2\2")
buf.write("\2\u014d\u014e\7?\2\2\u014e\u014f\7?\2\2\u014fC\3\2\2")
buf.write("\2\u0150\u0151\7#\2\2\u0151\u0152\7?\2\2\u0152E\3\2\2")
buf.write("\2\u0153\u0154\7c\2\2\u0154\u0155\7p\2\2\u0155\u0156\7")
buf.write("f\2\2\u0156G\3\2\2\2\u0157\u0158\7q\2\2\u0158\u0159\7")
buf.write("t\2\2\u0159I\3\2\2\2\u015a\u015b\7#\2\2\u015bK\3\2\2\2")
buf.write("\u015c\u015d\7V\2\2\u015d\u015e\7t\2\2\u015e\u015f\7w")
buf.write("\2\2\u015f\u0165\7g\2\2\u0160\u0161\7V\2\2\u0161\u0162")
buf.write("\7T\2\2\u0162\u0163\7W\2\2\u0163\u0165\7G\2\2\u0164\u015c")
buf.write("\3\2\2\2\u0164\u0160\3\2\2\2\u0165M\3\2\2\2\u0166\u0167")
buf.write("\7H\2\2\u0167\u0168\7c\2\2\u0168\u0169\7n\2\2\u0169\u016a")
buf.write("\7u\2\2\u016a\u0171\7g\2\2\u016b\u016c\7H\2\2\u016c\u016d")
buf.write("\7C\2\2\u016d\u016e\7N\2\2\u016e\u016f\7U\2\2\u016f\u0171")
buf.write("\7G\2\2\u0170\u0166\3\2\2\2\u0170\u016b\3\2\2\2\u0171")
buf.write("O\3\2\2\2\u0172\u0176\t\6\2\2\u0173\u0175\t\7\2\2\u0174")
buf.write("\u0173\3\2\2\2\u0175\u0178\3\2\2\2\u0176\u0174\3\2\2\2")
buf.write("\u0176\u0177\3\2\2\2\u0177Q\3\2\2\2\u0178\u0176\3\2\2")
buf.write("\2\u0179\u017d\t\b\2\2\u017a\u017c\t\7\2\2\u017b\u017a")
buf.write("\3\2\2\2\u017c\u017f\3\2\2\2\u017d\u017b\3\2\2\2\u017d")
buf.write("\u017e\3\2\2\2\u017eS\3\2\2\2\u017f\u017d\3\2\2\2\u0180")
buf.write("\u0182\7$\2\2\u0181\u0183\t\7\2\2\u0182\u0181\3\2\2\2")
buf.write("\u0183\u0184\3\2\2\2\u0184\u0182\3\2\2\2\u0184\u0185\3")
buf.write("\2\2\2\u0185\u0186\3\2\2\2\u0186\u0187\7$\2\2\u0187U\3")
buf.write("\2\2\2\u0188\u018a\7$\2\2\u0189\u018b\t\t\2\2\u018a\u0189")
buf.write("\3\2\2\2\u018b\u018c\3\2\2\2\u018c\u018a\3\2\2\2\u018c")
buf.write("\u018d\3\2\2\2\u018d\u018e\3\2\2\2\u018e\u018f\7$\2\2")
buf.write("\u018fW\3\2\2\2\u0190\u0191\7$\2\2\u0191\u0192\7$\2\2")
buf.write("\u0192Y\3\2\2\2\u0193\u0195\t\n\2\2\u0194\u0193\3\2\2")
buf.write("\2\u0195\u0196\3\2\2\2\u0196\u0194\3\2\2\2\u0196\u0197")
buf.write("\3\2\2\2\u0197[\3\2\2\2\u0198\u019a\t\n\2\2\u0199\u0198")
buf.write("\3\2\2\2\u019a\u019b\3\2\2\2\u019b\u0199\3\2\2\2\u019b")
buf.write("\u019c\3\2\2\2\u019c\u019d\3\2\2\2\u019d\u019f\7\60\2")
buf.write("\2\u019e\u01a0\t\n\2\2\u019f\u019e\3\2\2\2\u01a0\u01a1")
buf.write("\3\2\2\2\u01a1\u019f\3\2\2\2\u01a1\u01a2\3\2\2\2\u01a2")
buf.write("]\3\2\2\2\u01a3\u01a5\t\5\2\2\u01a4\u01a3\3\2\2\2\u01a5")
buf.write("\u01a6\3\2\2\2\u01a6\u01a4\3\2\2\2\u01a6\u01a7\3\2\2\2")
buf.write("\u01a7\u01a8\3\2\2\2\u01a8\u01a9\b/\4\2\u01a9_\3\2\2\2")
buf.write("\26\2\3\u0094\u009c\u00a4\u00ae\u00b4\u00bd\u00ce\u00d5")
buf.write("\u0164\u0170\u0176\u017d\u0184\u018c\u0196\u019b\u01a1")
buf.write("\u01a6\6\7\3\2\b\2\2\2\3\2\6\2\2")
return buf.getvalue()
class LoTLanLexer(Lexer):
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
BLOCK = 1
TEMPLATE = 1
TASK = 2
TRANSPORT_ORDER_STEP = 3
INSTANCE = 4
WHITESPACE = 5
COMMENT = 6
NEW_LINE = 7
COMMENT_IN_BLOCK = 8
COMMENT_LINE_IN_BLOCK = 9
END_IN_BLOCK = 10
ASSIGNMENT = 11
LOCATION = 12
PARAMETERS = 13
REPEAT = 14
CONSTRAINTS = 15
TRANSPORT = 16
FROM = 17
TO = 18
ON_DONE = 19
TRIGGERED_BY = 20
FINISHED_BY = 21
EQUAL = 22
COMMA = 23
DOT = 24
E_LEFT_PARENTHESIS = 25
E_RIGHT_PARENTHESIS = 26
E_LESS_THAN = 27
E_LESS_THAN_OR_EQUAL = 28
E_GREATER_THAN = 29
E_GREATER_THAN_OR_EQUAL = 30
E_EQUAL = 31
E_NOT_EQUAL = 32
E_BOOLEAN_AND = 33
E_BOOLEAN_OR = 34
E_BOOLEAN_NOT = 35
E_TRUE = 36
E_FALSE = 37
STARTS_WITH_LOWER_C_STR = 38
STARTS_WITH_UPPER_C_STR = 39
STRING_VALUE = 40
NUMERIC_VALUE = 41
EMPTY_VALUE = 42
INTEGER = 43
FLOAT = 44
WHITESPACE_BLOCK = 45
channelNames = [ u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN" ]
modeNames = [ "DEFAULT_MODE", "BLOCK" ]
literalNames = [ "<INVALID>",
"'Task '", "'TransportOrderStep '", "'End'", "'='", "','", "'.'",
"'('", "')'", "'<'", "'<='", "'>'", "'>='", "'=='", "'!='",
"'and'", "'or'", "'!'", "'\"\"'" ]
symbolicNames = [ "<INVALID>",
"TEMPLATE", "TASK", "TRANSPORT_ORDER_STEP", "INSTANCE", "WHITESPACE",
"COMMENT", "NEW_LINE", "COMMENT_IN_BLOCK", "COMMENT_LINE_IN_BLOCK",
"END_IN_BLOCK", "ASSIGNMENT", "LOCATION", "PARAMETERS", "REPEAT",
"CONSTRAINTS", "TRANSPORT", "FROM", "TO", "ON_DONE", "TRIGGERED_BY",
"FINISHED_BY", "EQUAL", "COMMA", "DOT", "E_LEFT_PARENTHESIS",
"E_RIGHT_PARENTHESIS", "E_LESS_THAN", "E_LESS_THAN_OR_EQUAL",
"E_GREATER_THAN", "E_GREATER_THAN_OR_EQUAL", "E_EQUAL", "E_NOT_EQUAL",
"E_BOOLEAN_AND", "E_BOOLEAN_OR", "E_BOOLEAN_NOT", "E_TRUE",
"E_FALSE", "STARTS_WITH_LOWER_C_STR", "STARTS_WITH_UPPER_C_STR",
"STRING_VALUE", "NUMERIC_VALUE", "EMPTY_VALUE", "INTEGER", "FLOAT",
"WHITESPACE_BLOCK" ]
ruleNames = [ "TEMPLATE", "TASK", "TRANSPORT_ORDER_STEP", "INSTANCE",
"WHITESPACE", "COMMENT", "NEW_LINE", "INDENTATION", "COMMENT_IN_BLOCK",
"COMMENT_LINE_IN_BLOCK", "END_IN_BLOCK", "ASSIGNMENT",
"LOCATION", "PARAMETERS", "REPEAT", "CONSTRAINTS", "TRANSPORT",
"FROM", "TO", "ON_DONE", "TRIGGERED_BY", "FINISHED_BY",
"EQUAL", "COMMA", "DOT", "E_LEFT_PARENTHESIS", "E_RIGHT_PARENTHESIS",
"E_LESS_THAN", "E_LESS_THAN_OR_EQUAL", "E_GREATER_THAN",
"E_GREATER_THAN_OR_EQUAL", "E_EQUAL", "E_NOT_EQUAL", "E_BOOLEAN_AND",
"E_BOOLEAN_OR", "E_BOOLEAN_NOT", "E_TRUE", "E_FALSE",
"STARTS_WITH_LOWER_C_STR", "STARTS_WITH_UPPER_C_STR",
"STRING_VALUE", "NUMERIC_VALUE", "EMPTY_VALUE", "INTEGER",
"FLOAT", "WHITESPACE_BLOCK" ]
grammarFileName = "LoTLanLexer.g4"
def __init__(self, input=None, output:TextIO = sys.stdout):
super().__init__(input, output)
self.checkVersion("4.8")
self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
self._actions = None
self._predicates = None
| 60.100671 | 103 | 0.571971 | 4,042 | 17,910 | 2.492083 | 0.142751 | 0.119329 | 0.064628 | 0.074258 | 0.303286 | 0.218009 | 0.158046 | 0.148119 | 0.142361 | 0.13968 | 0 | 0.325643 | 0.150586 | 17,910 | 297 | 104 | 60.30303 | 0.336489 | 0.002345 | 0 | 0 | 1 | 0.42446 | 0.606919 | 0.541144 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007194 | false | 0 | 0.014388 | 0 | 0.223022 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b66353f8693e58fc97e6fb5a60a8342f22b9a0d4 | 38 | py | Python | src/helpers/__init__.py | saketkc/moca_web | 38dfbdd9eeb739322ff3722727e43f1f4da07d3f | [
"BSD-2-Clause"
] | null | null | null | src/helpers/__init__.py | saketkc/moca_web | 38dfbdd9eeb739322ff3722727e43f1f4da07d3f | [
"BSD-2-Clause"
] | 4 | 2016-03-14T00:39:41.000Z | 2016-03-21T19:05:32.000Z | src/helpers/__init__.py | saketkc/moca_web | 38dfbdd9eeb739322ff3722727e43f1f4da07d3f | [
"BSD-2-Clause"
] | null | null | null | from .exceptions import MocaException
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b67129e0d8010cac4539d7e2deab786f8c96191c | 219 | py | Python | general_test_file.py | JHolderguru/phat_bakery | 3b0e67b08188daa571668fe29c3110266608ed14 | [
"MIT"
] | null | null | null | general_test_file.py | JHolderguru/phat_bakery | 3b0e67b08188daa571668fe29c3110266608ed14 | [
"MIT"
] | null | null | null | general_test_file.py | JHolderguru/phat_bakery | 3b0e67b08188daa571668fe29c3110266608ed14 | [
"MIT"
] | null | null | null | from general_functions import *
f_name = return_formatted_name(name)
# test setup
#print('Testing function return formatted{} with ' ' jon ----->' '')
print(return_formatted_name(name))
| 21.9 | 96 | 0.634703 | 25 | 219 | 5.32 | 0.64 | 0.338346 | 0.285714 | 0.345865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246575 | 219 | 9 | 97 | 24.333333 | 0.806061 | 0.484018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fcc861bf2685091d26cbb4c6300eba37f057e7f1 | 426 | py | Python | octicons16px/mention.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/mention.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/mention.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_MENTION = """
<svg class="octicon octicon-mention" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M4.75 2.37a6.5 6.5 0 006.5 11.26.75.75 0 01.75 1.298 8 8 0 113.994-7.273.754.754 0 01.006.095v1.5a2.75 2.75 0 01-5.072 1.475A4 4 0 1112 8v1.25a1.25 1.25 0 002.5 0V7.867a6.5 6.5 0 00-9.75-5.496V2.37zM10.5 8a2.5 2.5 0 10-5 0 2.5 2.5 0 005 0z"></path></svg>
"""
| 85.2 | 398 | 0.661972 | 111 | 426 | 2.531532 | 0.522523 | 0.035587 | 0.021352 | 0.02847 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.432432 | 0.131455 | 426 | 4 | 399 | 106.5 | 0.327027 | 0 | 0 | 0 | 0 | 0.333333 | 0.941176 | 0.105882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fce28da097b0539af27c3d4c3c7eae51ceb17849 | 1,297 | py | Python | profiles_api/permissions.py | hemant-mehra/profile_rest_api | 3d753deb472b07950e957f7abda6522d8b46724c | [
"MIT"
] | null | null | null | profiles_api/permissions.py | hemant-mehra/profile_rest_api | 3d753deb472b07950e957f7abda6522d8b46724c | [
"MIT"
] | null | null | null | profiles_api/permissions.py | hemant-mehra/profile_rest_api | 3d753deb472b07950e957f7abda6522d8b46724c | [
"MIT"
] | null | null | null | from rest_framework import permissions
class UpdateOwnProfile(permissions.BasePermission):
"""Allow uset to edit thier own profile"""
def has_object_permission(self,request,view,obj):
"""check user is trying their own profile"""
# safe methods are mehtod which doest change anything like get() so other user can use this method
# we have to restrict only unsafe method like put post delete
if request.method in permissions.SAFE_METHODS:
return True
        # Compare the id of the logged-in user with the id of the object being updated
return obj.id == request.user.id
# Created for the feed
class UpdateOwnStatus(permissions.BasePermission):
"""Allow user to update thier own status"""
def has_object_permission(self,request,view,obj):
"""check user is trying their own status"""
# safe methods are mehtod which doest change anything like get() so other user can use this method
# we have to restrict only unsafe method like put post delete
if request.method in permissions.SAFE_METHODS:
return True
        # Compare the id of the logged-in user with the id of the object being updated
return obj.user_profile.id == request.user.id | 37.057143 | 106 | 0.666153 | 172 | 1,297 | 4.976744 | 0.406977 | 0.051402 | 0.070093 | 0.051402 | 0.703271 | 0.703271 | 0.703271 | 0.703271 | 0.703271 | 0.703271 | 0 | 0 | 0.278335 | 1,297 | 35 | 107 | 37.057143 | 0.91453 | 0.471858 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
fce6045d5eea32a646864f6c887265b6d469487f | 3,638 | py | Python | oxe-api/test/resource/media/test_delete_document.py | CybersecurityLuxembourg/openxeco | 8d4e5578bde6a07f5d6d569b16b4de224abf7bf0 | [
"BSD-2-Clause"
] | null | null | null | oxe-api/test/resource/media/test_delete_document.py | CybersecurityLuxembourg/openxeco | 8d4e5578bde6a07f5d6d569b16b4de224abf7bf0 | [
"BSD-2-Clause"
] | null | null | null | oxe-api/test/resource/media/test_delete_document.py | CybersecurityLuxembourg/openxeco | 8d4e5578bde6a07f5d6d569b16b4de224abf7bf0 | [
"BSD-2-Clause"
] | null | null | null | from test.BaseCase import BaseCase
from datetime import datetime
import os
import shutil
from unittest.mock import patch
class TestDeleteDocument(BaseCase):
@BaseCase.login
@BaseCase.grant_access("/media/delete_document")
@patch('resource.media.delete_document.DOCUMENT_FOLDER',
os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_delete_document_temp"))
def test_ok(self, token):
if not os.path.exists(os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_delete_document_temp")):
os.makedirs(os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_delete_document_temp"))
shutil.copy(
os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_delete_document", "empty_pdf.pdf"),
os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_delete_document_temp", "50")
)
self.assertTrue(os.path.exists(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_delete_document_temp", "50")))
self.db.insert({
"id": 50,
"filename": "empty_pdf.pdf",
"size": 10,
"creation_date": datetime.today(),
}, self.db.tables["Document"])
payload = {
"id": 50
}
response = self.application.post('/media/delete_document',
headers=self.get_standard_post_header(token),
json=payload)
self.assertEqual(200, response.status_code)
self.assertEqual(self.db.get_count(self.db.tables["Document"]), 0)
self.assertFalse(os.path.exists(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_delete_document_temp", "50")))
if os.path.exists(os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_delete_document_temp")):
shutil.rmtree(os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_delete_document_temp"))
@BaseCase.login
@BaseCase.grant_access("/media/delete_document")
@patch('resource.media.delete_document.DOCUMENT_FOLDER',
os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_delete_document_temp"))
def test_delete_unexisting_file(self, token):
self.db.insert({
"id": 50,
"filename": "empty_pdf.pdf",
"size": 10,
"creation_date": datetime.today(),
}, self.db.tables["Document"])
payload = {
"id": 50
}
response = self.application.post('/media/delete_document',
headers=self.get_standard_post_header(token),
json=payload)
self.assertEqual(200, response.status_code)
self.assertEqual(self.db.get_count(self.db.tables["Document"]), 0)
@BaseCase.login
@BaseCase.grant_access("/media/delete_document")
@patch('resource.media.delete_document.DOCUMENT_FOLDER',
os.path.join(os.path.dirname(os.path.realpath(__file__)),
"test_delete_document_temp"))
def test_delete_unexisting_record(self, token):
payload = {
"id": 50
}
response = self.application.post('/media/delete_document',
headers=self.get_standard_post_header(token),
json=payload)
self.assertEqual(200, response.status_code) | 42.302326 | 118 | 0.591259 | 404 | 3,638 | 5.034653 | 0.168317 | 0.109145 | 0.054081 | 0.064897 | 0.883972 | 0.883972 | 0.883972 | 0.883972 | 0.883972 | 0.883972 | 0 | 0.01185 | 0.280924 | 3,638 | 86 | 119 | 42.302326 | 0.765673 | 0 | 0 | 0.690141 | 0 | 0 | 0.18604 | 0.142896 | 0 | 0 | 0 | 0 | 0.098592 | 1 | 0.042254 | false | 0 | 0.070423 | 0 | 0.126761 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1eb4d7426cb431181c15a6e61c6e335572e1c322 | 40 | py | Python | visoptslider/__init__.py | yuki-koyama/visoptslider | 6443107392e9cb5ee4d215f9eec30e780957bae6 | [
"MIT"
] | 11 | 2019-02-28T13:02:02.000Z | 2021-03-10T09:56:25.000Z | visoptslider/__init__.py | yuki-koyama/visoptslider | 6443107392e9cb5ee4d215f9eec30e780957bae6 | [
"MIT"
] | 6 | 2019-07-09T23:38:17.000Z | 2019-09-16T05:23:38.000Z | visoptslider/__init__.py | yuki-koyama/visoptslider | 6443107392e9cb5ee4d215f9eec30e780957bae6 | [
"MIT"
] | 3 | 2019-03-19T22:33:44.000Z | 2021-03-10T09:56:29.000Z | from .visoptslider import SlidersWidget
| 20 | 39 | 0.875 | 4 | 40 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ee34b659ce47eb935d4747a1fcb9923d63dae3c | 3,594 | py | Python | software_station_pkg.py | frostygoth/software-station | 063ea43ffd22af9f399e2d3fcb2e9a1f646235c0 | [
"BSD-3-Clause"
] | null | null | null | software_station_pkg.py | frostygoth/software-station | 063ea43ffd22af9f399e2d3fcb2e9a1f646235c0 | [
"BSD-3-Clause"
] | null | null | null | software_station_pkg.py | frostygoth/software-station | 063ea43ffd22af9f399e2d3fcb2e9a1f646235c0 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3.6
from subprocess import Popen, PIPE
def available_package_origin():
cmd = "pkg rquery '%o' | cut -d '/' -f1"
pkg_out = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
lst = list(set(pkg_out.stdout.read().splitlines()))
lst.sort()
return lst
def available_package_list():
cmd = "pkg rquery '%o:%n:%v:%sh:%c'"
pkg_out = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
lst = list(set(pkg_out.stdout.read().splitlines()))
lst.sort()
return lst
def installed_package_origin():
cmd = "pkg query '%o' | cut -d '/' -f1"
pkg_out = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
lst = list(set(pkg_out.stdout.read().splitlines()))
lst.sort()
return lst
def installed_package_list():
cmd = "pkg query '%o:%n:%v:%sh:%c'"
pkg_out = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
lst = list(set(pkg_out.stdout.read().splitlines()))
lst.sort()
return lst
def available_package_dictionary(origin_list):
pkg_list = available_package_list()
installed_pkg_list = installed_package_list()
avail = str(len(pkg_list))
pkg_dict = {'avail': avail, 'all': {}}
for origin in origin_list:
pkg_dict[origin] = {}
for pkg in pkg_list:
if pkg in installed_pkg_list:
boolean = True
else:
boolean = False
pi = pkg.split(':')
pl = pi[0].split('/')
pkg_info = {
'origin': pi[0],
'name': pi[1],
'version': pi[2],
'size': pi[3],
'comment': pi[4],
'installed': boolean
}
pkg_dict[pl[0]].update({pi[1]: pkg_info})
pkg_dict['all'].update({pi[1]: pkg_info})
return pkg_dict
def installed_package_dictionary(origin_list):
pkg_list = installed_package_list()
avail = str(len(pkg_list))
pkg_dict = {'avail': avail, 'all': {}}
for origin in origin_list:
pkg_dict[origin] = {}
for pkg in pkg_list:
pi = pkg.split(':')
pl = pi[0].split('/')
pkg_info = {
'origin': pi[0],
'name': pi[1],
'version': pi[2],
'size': pi[3],
'comment': pi[4],
'installed': True
}
pkg_dict[pl[0]].update({pi[1]: pkg_info})
pkg_dict['all'].update({pi[1]: pkg_info})
return pkg_dict
def search_packages(search):
cmd = f"pkg search -Q name {search} | grep 'Name ' | cut -d : -f2 | " \
"cut -d ' ' -f2"
output = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
lst = output.stdout.read().splitlines()
return lst
def delete_packages(pkg):
cmd = f"pkg delete -y {pkg}"
fetch = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
return fetch.stdout
def fetch_packages(pkg):
cmd = f"pkg fetch -y {pkg}"
fetch = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
return fetch.stdout
def install_packages(pkg):
cmd = f"pkg install -y {pkg}"
fetch = Popen(cmd, shell=True, stdout=PIPE, close_fds=True,
universal_newlines=True, encoding='utf-8')
return fetch.stdout
| 30.201681 | 76 | 0.577908 | 478 | 3,594 | 4.186192 | 0.171548 | 0.034983 | 0.051974 | 0.067966 | 0.813093 | 0.786107 | 0.75912 | 0.75912 | 0.75912 | 0.75912 | 0 | 0.012228 | 0.271842 | 3,594 | 118 | 77 | 30.457627 | 0.752388 | 0.0064 | 0 | 0.65625 | 0 | 0.010417 | 0.109244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0 | 0.010417 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
94a5fb252a58d29382556e0132027547c7f9e3e9 | 27 | py | Python | server/server/second_hand/__init__.py | aweijx/MMW_YNU | 0f4aa38c9b359cb7282a322eb3f258f9b7b7eb47 | [
"Apache-2.0"
] | 2 | 2020-11-16T06:15:09.000Z | 2021-09-07T09:32:55.000Z | server/server/second_hand/__init__.py | aweijx/MMW_YNU | 0f4aa38c9b359cb7282a322eb3f258f9b7b7eb47 | [
"Apache-2.0"
] | null | null | null | server/server/second_hand/__init__.py | aweijx/MMW_YNU | 0f4aa38c9b359cb7282a322eb3f258f9b7b7eb47 | [
"Apache-2.0"
] | null | null | null | from .second_hand import *
| 13.5 | 26 | 0.777778 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
94abaa2b5284608487aca2f8ab79e18c0e71ce29 | 141 | py | Python | py_import/py_import_2.py | StanLepunK/PYTHON_basics | da803bd72824de281677f3ba4c5d7bd44a7460fb | [
"MIT"
] | null | null | null | py_import/py_import_2.py | StanLepunK/PYTHON_basics | da803bd72824de281677f3ba4c5d7bd44a7460fb | [
"MIT"
] | null | null | null | py_import/py_import_2.py | StanLepunK/PYTHON_basics | da803bd72824de281677f3ba4c5d7bd44a7460fb | [
"MIT"
] | null | null | null | from py_lib import *
# import a Python file from the folder
print(add(1,2))
print(mult(2,2))
print(div(2,2))
print(sub(2,2))
print(mod(11,2)) | 20.142857 | 38 | 0.70922 | 29 | 141 | 3.413793 | 0.586207 | 0.242424 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086614 | 0.099291 | 141 | 7 | 39 | 20.142857 | 0.692913 | 0.255319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0.833333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
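The script star-imports `add`, `mult`, `div`, `sub` and `mod` from `py_lib`, which is not shown. A minimal sketch of what that module presumably defines (the signatures are inferred from the calls above, not taken from the real py_lib):

```python
# Hypothetical py_lib contents, inferred from the calls in the script.
def add(a, b):
    return a + b

def mult(a, b):
    return a * b

def div(a, b):
    return a / b

def sub(a, b):
    return a - b

def mod(a, b):
    return a % b

print(add(1, 2))   # 3
print(mod(11, 2))  # 1
```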
94ff3054c17ba8ee4ea2652b7bddba97f99b21c2 | 164 | py | Python | Capitulo_02/exercise2_11.py | thiagosouzalink/my_codes-exercices-book-curso_intensivo_de_python | 841aa855a7450ad3d0ba65393ba0b6debcd6a770 | [
"MIT"
] | null | null | null | Capitulo_02/exercise2_11.py | thiagosouzalink/my_codes-exercices-book-curso_intensivo_de_python | 841aa855a7450ad3d0ba65393ba0b6debcd6a770 | [
"MIT"
] | null | null | null | Capitulo_02/exercise2_11.py | thiagosouzalink/my_codes-exercices-book-curso_intensivo_de_python | 841aa855a7450ad3d0ba65393ba0b6debcd6a770 | [
"MIT"
] | null | null | null | """
2.11 – Zen of Python: Type import this in a Python terminal session and take a look at the additional principles.
"""
# Display the Zen of Python
import this | 27.333333 | 119 | 0.737805 | 30 | 164 | 4.066667 | 0.7 | 0.196721 | 0.180328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.195122 | 164 | 6 | 120 | 27.333333 | 0.893939 | 0.865854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bf6c5a7a12b2f8c028819a682348c12529b90ac6 | 89 | py | Python | flask_restify/fields/__init__.py | BetaS/flask-restify | 0636cb50e1896a9cacbf4fbc6191c6a2df67b601 | [
"MIT"
] | null | null | null | flask_restify/fields/__init__.py | BetaS/flask-restify | 0636cb50e1896a9cacbf4fbc6191c6a2df67b601 | [
"MIT"
] | null | null | null | flask_restify/fields/__init__.py | BetaS/flask-restify | 0636cb50e1896a9cacbf4fbc6191c6a2df67b601 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .base import *
from .string import *
from .number import *
| 14.833333 | 23 | 0.629213 | 12 | 89 | 4.666667 | 0.666667 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.202247 | 89 | 5 | 24 | 17.8 | 0.774648 | 0.235955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bf9dd4fa763a4fde664db05a2f4924d326cba206 | 2,857 | py | Python | poultrybook/logbook/migrations/0001_initial.py | AlexGolovaschenko/PoultryBook | 41e735a6e16c1113888993e7aa9142df318bcb26 | [
"MIT"
] | null | null | null | poultrybook/logbook/migrations/0001_initial.py | AlexGolovaschenko/PoultryBook | 41e735a6e16c1113888993e7aa9142df318bcb26 | [
"MIT"
] | null | null | null | poultrybook/logbook/migrations/0001_initial.py | AlexGolovaschenko/PoultryBook | 41e735a6e16c1113888993e7aa9142df318bcb26 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.12 on 2022-02-07 18:09
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Room',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('number', models.IntegerField()),
            ],
        ),
        migrations.CreateModel(
            name='TextRecord',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('timestamp', models.DateTimeField(auto_now_add=True, verbose_name='Время и дата')),
                ('content_type', models.CharField(max_length=100, verbose_name='Тип записи')),
                ('value', models.CharField(max_length=200, verbose_name='Значение')),
                ('room', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='logbook.room', verbose_name='Помещение')),
            ],
            options={
                'verbose_name': 'Запись (text)',
                'verbose_name_plural': 'Записи (text)',
            },
        ),
        migrations.CreateModel(
            name='IntegerRecord',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('timestamp', models.DateTimeField(auto_now_add=True, verbose_name='Время и дата')),
                ('content_type', models.CharField(max_length=100, verbose_name='Тип записи')),
                ('value', models.IntegerField(verbose_name='Значение')),
                ('room', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='logbook.room', verbose_name='Помещение')),
            ],
            options={
                'verbose_name': 'Запись (integer)',
                'verbose_name_plural': 'Записи (integer)',
            },
        ),
        migrations.CreateModel(
            name='FloatRecord',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('timestamp', models.DateTimeField(auto_now_add=True, verbose_name='Время и дата')),
                ('content_type', models.CharField(max_length=100, verbose_name='Тип записи')),
                ('value', models.FloatField(verbose_name='Значение')),
                ('room', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='logbook.room', verbose_name='Помещение')),
            ],
            options={
                'verbose_name': 'Запись (float)',
                'verbose_name_plural': 'Записи (float)',
            },
        ),
    ]
| 43.953846 | 134 | 0.574379 | 278 | 2,857 | 5.726619 | 0.26259 | 0.15201 | 0.035176 | 0.055276 | 0.705402 | 0.705402 | 0.705402 | 0.705402 | 0.705402 | 0.705402 | 0 | 0.013705 | 0.284914 | 2,857 | 64 | 135 | 44.640625 | 0.765541 | 0.016101 | 0 | 0.561404 | 1 | 0 | 0.171591 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035088 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
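TextRecord, IntegerRecord and FloatRecord repeat the same five-column layout, with only the type of the value column changing. A plain-Python sketch of that shared shape (field specs are reduced to strings here; the real migration uses Django field objects):

```python
def record_fields(value_field):
    # Column layout shared by the three *Record models above;
    # only the 'value' column differs between them.
    return [
        ("id", "BigAutoField(primary_key=True)"),
        ("timestamp", "DateTimeField(auto_now_add=True)"),
        ("content_type", "CharField(max_length=100)"),
        ("value", value_field),
        ("room", "ForeignKey('logbook.room', on_delete=CASCADE)"),
    ]

text_fields = record_fields("CharField(max_length=200)")
float_fields = record_fields("FloatField()")
print([name for name, _ in text_fields])
```

In the models themselves the same deduplication is usually achieved with an abstract base model holding the shared fields.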
449df9b38d07b1e7180a8e80d92249cd0bd83922 | 655 | py | Python | Python/jump-to-python/List.py | leeheefull/blog-source | 5f8370de5b0f62801fffc9e5f0f0bcb98dc2e6d1 | [
"MIT"
] | null | null | null | Python/jump-to-python/List.py | leeheefull/blog-source | 5f8370de5b0f62801fffc9e5f0f0bcb98dc2e6d1 | [
"MIT"
] | null | null | null | Python/jump-to-python/List.py | leeheefull/blog-source | 5f8370de5b0f62801fffc9e5f0f0bcb98dc2e6d1 | [
"MIT"
] | null | null | null | # Using a list
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(a) # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(a[4]) # 5
# Empty list 1
a = list()
print(a) # []
# Empty list 2
a = []
print(a) # []
# Initializing a list
n = 10
a = [27] * n
print(a) # [27, 27, 27, 27, 27, 27, 27, 27, 27]
# -index
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(a[-1]) # 9
print(a[-3]) # 7
# Slicing
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(a[1:4]) # [2, 3, 4]
# Comprehension
a = [i for i in range(20) if i % 2 == 0]
print(a) # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
a = [i * i for i in range(1, 10)]
print(a) # [1, 4, 9, 16, 25, 36, 49, 64, 81]
# Initializing a 2D array
n = 3
m = 4
a = [[0] * m for _ in range(n)]
| 16.794872 | 48 | 0.450382 | 153 | 655 | 1.921569 | 0.261438 | 0.204082 | 0.142857 | 0.163265 | 0.363946 | 0.282313 | 0.282313 | 0.282313 | 0.221088 | 0.221088 | 0 | 0.241901 | 0.29313 | 655 | 38 | 49 | 17.236842 | 0.393089 | 0.367939 | 0 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.454545 | 0 | 0 | 1 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
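The comprehension in the last line is the safe way to build the 2D array: the superficially equivalent `[[0] * m] * n` copies references to a single inner list, so mutating one row mutates them all:

```python
n, m = 3, 4

aliased = [[0] * m] * n             # n references to ONE inner list
safe = [[0] * m for _ in range(n)]  # n independent inner lists

aliased[0][0] = 1
safe[0][0] = 1

print(aliased[1][0])  # 1 (every "row" changed)
print(safe[1][0])     # 0 (only the first row changed)
```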
44cea0a78ef35877e93d6f8755bb787ba7c61681 | 129 | py | Python | googlemail/login.py | orlandodiaz/gmail | 2a188e1b15140b64a65d114a91a3600b79bee929 | [
"MIT"
] | 1 | 2022-02-16T00:29:27.000Z | 2022-02-16T00:29:27.000Z | googlemail/login.py | orlandordiaz/gmail | 2a188e1b15140b64a65d114a91a3600b79bee929 | [
"MIT"
] | null | null | null | googlemail/login.py | orlandordiaz/gmail | 2a188e1b15140b64a65d114a91a3600b79bee929 | [
"MIT"
] | null | null | null | from .gmail import Gmail
def login(username, password):
    gmail = Gmail(username, password)
    gmail.login()
    return gmail
| 21.5 | 37 | 0.705426 | 16 | 129 | 5.6875 | 0.5 | 0.351648 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20155 | 129 | 6 | 38 | 21.5 | 0.883495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
780f0c65c488a577c760e4c660cbe9e292725d05 | 152 | py | Python | kedro_mlflow/pipeline/__init__.py | akruszewski/kedro-mlflow | 330cab52642a0993e957740726e7d453282f1588 | [
"Apache-2.0"
] | null | null | null | kedro_mlflow/pipeline/__init__.py | akruszewski/kedro-mlflow | 330cab52642a0993e957740726e7d453282f1588 | [
"Apache-2.0"
] | null | null | null | kedro_mlflow/pipeline/__init__.py | akruszewski/kedro-mlflow | 330cab52642a0993e957740726e7d453282f1588 | [
"Apache-2.0"
] | null | null | null | from .modular_pipeline_ml import pipeline_ml
from .pipeline_ml import (
    KedroMlflowPipelineMLDatasetsError,
    KedroMlflowPipelineMLInputsError,
)
| 25.333333 | 44 | 0.835526 | 13 | 152 | 9.461538 | 0.538462 | 0.243902 | 0.260163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 152 | 5 | 45 | 30.4 | 0.924812 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7832364cfea2a33546a5d6e6c469f0785cff24e2 | 49 | py | Python | problem_10/__init__.py | oltionzefi/daily-coding-problem | 4fe3ec53e1f3c7d299849671fdfead462d548cd3 | [
"MIT"
] | null | null | null | problem_10/__init__.py | oltionzefi/daily-coding-problem | 4fe3ec53e1f3c7d299849671fdfead462d548cd3 | [
"MIT"
] | null | null | null | problem_10/__init__.py | oltionzefi/daily-coding-problem | 4fe3ec53e1f3c7d299849671fdfead462d548cd3 | [
"MIT"
] | null | null | null | from .problem_10 import job_scheduler, Scheduler
| 24.5 | 48 | 0.857143 | 7 | 49 | 5.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.102041 | 49 | 1 | 49 | 49 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
15614eb4a1cd5d7795afc52f8ff3ee8d5efb1cd8 | 64 | py | Python | tests/test_questions.py | hullux/Questioner | bde697d6457841aef66383fe12b9f64af197454b | [
"MIT"
] | null | null | null | tests/test_questions.py | hullux/Questioner | bde697d6457841aef66383fe12b9f64af197454b | [
"MIT"
] | 6 | 2021-03-18T21:16:41.000Z | 2022-02-10T07:09:03.000Z | tests/test_questions.py | hullux/Questioner | bde697d6457841aef66383fe12b9f64af197454b | [
"MIT"
] | null | null | null | import unittest
class TestQuestion(unittest.TestCase):
    pass
| 16 | 38 | 0.796875 | 7 | 64 | 7.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 64 | 4 | 39 | 16 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
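The test class above is still an empty placeholder. A sketch of how a first test might be wired up (the question payload shape here is hypothetical; the real Questioner models and API are not shown in the source):

```python
import unittest

class TestQuestionSketch(unittest.TestCase):
    def setUp(self):
        # Hypothetical question payload; stands in for the real model.
        self.question = {"title": "Why?", "body": "Because."}

    def test_question_has_title(self):
        self.assertEqual(self.question["title"], "Why?")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestQuestionSketch)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```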
158278a5be3c98642acde63fd2c020fefc24c163 | 152 | py | Python | bobtex/admin.py | nanderv/bobtex | 6b1f8702804fbb342ae089e6fe503d54cc07b00f | [
"BSD-3-Clause"
] | 4 | 2020-06-15T14:48:18.000Z | 2020-10-02T14:27:35.000Z | bobtex/admin.py | nanderv/bobtex | 6b1f8702804fbb342ae089e6fe503d54cc07b00f | [
"BSD-3-Clause"
] | 6 | 2020-06-15T11:27:58.000Z | 2021-04-13T10:41:20.000Z | bobtex/admin.py | nanderv/bobtex | 6b1f8702804fbb342ae089e6fe503d54cc07b00f | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from projects.models import User
admin.site.register(User, UserAdmin)
| 25.333333 | 47 | 0.835526 | 22 | 152 | 5.772727 | 0.545455 | 0.15748 | 0.267717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098684 | 152 | 5 | 48 | 30.4 | 0.927007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
01ac93f9477c0133ec693f4f8c65a601e74a35f7 | 21 | py | Python | pysal/explore/momepy/__init__.py | martinfleis/pysal | d2e0667d825d403efe7182ecda210dc152ec206d | [
"BSD-3-Clause"
] | 941 | 2015-01-12T22:25:55.000Z | 2022-03-27T15:41:29.000Z | pysal/explore/momepy/__init__.py | anekekarina99/pysal | bd8c954d34b4694416830a852e26fe40d64424f2 | [
"BSD-3-Clause"
] | 589 | 2015-01-09T03:58:03.000Z | 2022-02-26T02:17:15.000Z | pysal/explore/momepy/__init__.py | anekekarina99/pysal | bd8c954d34b4694416830a852e26fe40d64424f2 | [
"BSD-3-Clause"
] | 303 | 2015-01-10T02:59:04.000Z | 2022-03-05T04:21:55.000Z | from momepy import *
| 10.5 | 20 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
01e86e02e124a30e3355a0de7e9ed5269eb07289 | 179 | py | Python | DjangoRestFrameworkNestedJSON/django_trial_proj/django_trial_app/admin.py | rishidevc/stkovrflw | c33dffbce887f32f609a10dd717d594390ceac8b | [
"MIT"
] | null | null | null | DjangoRestFrameworkNestedJSON/django_trial_proj/django_trial_app/admin.py | rishidevc/stkovrflw | c33dffbce887f32f609a10dd717d594390ceac8b | [
"MIT"
] | 5 | 2020-05-04T03:11:14.000Z | 2021-06-10T20:20:38.000Z | DjangoRestFrameworkNestedJSON/django_trial_proj/django_trial_app/admin.py | rishidevc/stkovrflw | c33dffbce887f32f609a10dd717d594390ceac8b | [
"MIT"
] | 1 | 2019-07-31T18:28:34.000Z | 2019-07-31T18:28:34.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.contrib import admin
from .models import User, Dob
admin.site.register(User)
admin.site.register(Dob) | 22.375 | 39 | 0.77095 | 26 | 179 | 5.115385 | 0.615385 | 0.135338 | 0.255639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0.117318 | 179 | 8 | 40 | 22.375 | 0.835443 | 0.117318 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bf00494c0eb90c98ba5ecbe11e86dbbabaa5490f | 198 | py | Python | grafeas/api/__init__.py | nyc/client-python | e73eab8953abf239305080673f7c96a54b776f72 | [
"Apache-2.0"
] | 6 | 2018-01-22T21:54:56.000Z | 2020-07-26T14:52:13.000Z | grafeas/api/__init__.py | nyc/client-python | e73eab8953abf239305080673f7c96a54b776f72 | [
"Apache-2.0"
] | 6 | 2018-07-12T12:56:16.000Z | 2021-07-13T00:33:24.000Z | grafeas/api/__init__.py | nyc/client-python | e73eab8953abf239305080673f7c96a54b776f72 | [
"Apache-2.0"
] | 19 | 2018-07-12T11:08:44.000Z | 2022-03-09T06:17:04.000Z | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from grafeas.api.grafeas_api import GrafeasApi
from grafeas.api.grafeas_projects_api import GrafeasProjectsApi
| 24.75 | 63 | 0.848485 | 27 | 198 | 5.925926 | 0.518519 | 0.1875 | 0.175 | 0.2625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 0.116162 | 198 | 7 | 64 | 28.285714 | 0.908571 | 0.207071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
172df05c232918c0aa17f5473856bc902a607451 | 1,273 | py | Python | DJANGO/MascotasPasto/Modulo2/models.py | 11jamith/Frameworks-7A-2020B | fd920a8266053871c99d63b86a6abf9be962178c | [
"MIT"
] | null | null | null | DJANGO/MascotasPasto/Modulo2/models.py | 11jamith/Frameworks-7A-2020B | fd920a8266053871c99d63b86a6abf9be962178c | [
"MIT"
] | null | null | null | DJANGO/MascotasPasto/Modulo2/models.py | 11jamith/Frameworks-7A-2020B | fd920a8266053871c99d63b86a6abf9be962178c | [
"MIT"
] | null | null | null | from django.db import models

# Create your models here.


class afiliados(models.Model):
    id = models.AutoField(primary_key=True)
    nombre = models.TextField()
    apellidos = models.TextField()
    numero_movil = models.IntegerField()
    direccion = models.TextField()
    email = models.TextField()
    id_ciudad = models.ForeignKey(
        'ciudades', on_delete=models.SET_NULL, null=True)
    estado = models.CharField(max_length=1)
    fecha_creacion = models.DateField()
    fecha_modificacion = models.DateField()


class paises(models.Model):
    id = models.AutoField(primary_key=True)
    codigo = models.CharField(max_length=10)
    nombre = models.TextField()
    abreviatura = models.CharField(max_length=4)
    estado = models.CharField(max_length=1)
    fecha_creacion = models.DateField()
    fecha_modificacion = models.DateField()


class ciudades(models.Model):
    id = models.AutoField(primary_key=True)
    codigo = models.CharField(max_length=10)
    nombre = models.TextField()
    abreviatura = models.CharField(max_length=4)
    id_pais = models.ForeignKey(
        'paises', on_delete=models.SET_NULL, null=True)
    estado = models.CharField(max_length=1)
    fecha_creacion = models.DateField()
    fecha_modificacion = models.DateField()
| 34.405405 | 57 | 0.721131 | 151 | 1,273 | 5.927152 | 0.304636 | 0.117318 | 0.140782 | 0.18771 | 0.72067 | 0.72067 | 0.72067 | 0.72067 | 0.673743 | 0.673743 | 0 | 0.008515 | 0.169678 | 1,273 | 37 | 58 | 34.405405 | 0.838221 | 0.018853 | 0 | 0.612903 | 0 | 0 | 0.011218 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032258 | 0 | 0.935484 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
17638966417ef17919ac65c888de3ca2df1d5a28 | 175 | py | Python | week-02/defReturnValues.py | norbertbodo91/pythonExercises | 9cd773c5d6ce3280d19a84ef12b8fd478ff09613 | [
"MIT"
] | null | null | null | week-02/defReturnValues.py | norbertbodo91/pythonExercises | 9cd773c5d6ce3280d19a84ef12b8fd478ff09613 | [
"MIT"
] | null | null | null | week-02/defReturnValues.py | norbertbodo91/pythonExercises | 9cd773c5d6ce3280d19a84ef12b8fd478ff09613 | [
"MIT"
] | null | null | null | def make_green(name):
    new_name = "Green " + name
    return new_name

def greet_by_name(name):
    print("Well hi there,", name)

name = make_green("Tojas")
greet_by_name(name)
| 17.5 | 31 | 0.708571 | 29 | 175 | 4 | 0.448276 | 0.206897 | 0.189655 | 0.258621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 175 | 9 | 32 | 19.444444 | 0.789116 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0 | 0.428571 | 0.142857 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
177d2c27c8e1780d58e86fa8f0913679250560b9 | 164 | py | Python | models/__init__.py | mfatiho/CrowdCounting-P2PNet | b89ecf9b374bee8973c331bb44b99611152cd3ac | [
"BSD-3-Clause"
] | 89 | 2021-08-09T12:51:34.000Z | 2022-03-25T09:06:40.000Z | models/__init__.py | FeiGeChuanShu/CrowdCounting-P2PNet | a7c5a9546d0b5be16367db393fbbd81427c11b82 | [
"BSD-3-Clause"
] | 24 | 2021-08-16T09:17:38.000Z | 2022-03-30T08:29:02.000Z | models/__init__.py | FeiGeChuanShu/CrowdCounting-P2PNet | a7c5a9546d0b5be16367db393fbbd81427c11b82 | [
"BSD-3-Clause"
] | 25 | 2021-08-12T09:37:30.000Z | 2022-03-18T07:46:17.000Z | from .p2pnet import build
# build the P2PNet model
# set training to 'True' during training
def build_model(args, training=False):
    return build(args, training)
| 27.333333 | 40 | 0.762195 | 24 | 164 | 5.166667 | 0.625 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.158537 | 164 | 6 | 41 | 27.333333 | 0.884058 | 0.371951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
bd5e92687989b5c1fbcc842489791723ead234ea | 1,026 | py | Python | whoahqa/views/__init__.py | onaio/who-adolescent-hqa | 108a7e60b025d0723247f5f02eab2c4d41f5a02a | [
"Apache-2.0"
] | null | null | null | whoahqa/views/__init__.py | onaio/who-adolescent-hqa | 108a7e60b025d0723247f5f02eab2c4d41f5a02a | [
"Apache-2.0"
] | 2 | 2018-01-09T08:58:11.000Z | 2019-01-18T09:20:14.000Z | whoahqa/views/__init__.py | onaio/who-adolescent-hqa | 108a7e60b025d0723247f5f02eab2c4d41f5a02a | [
"Apache-2.0"
] | null | null | null | from whoahqa.views.auth import oauth_authorize, oauth_callback # noqa
from whoahqa.views.clinics import ClinicViews # noqa
from whoahqa.views.default_views import default # noqa
from whoahqa.views.default_views import set_locale # noqa
from whoahqa.views.request_methods import get_request_user, can_list_clinics # noqa
from whoahqa.views.request_methods import can_view_clinics # noqa
from whoahqa.views.request_methods import is_super_user # noqa
from whoahqa.views.request_methods import can_access_clinics # noqa
from whoahqa.views.request_methods import can_view_municipality # noqa
from whoahqa.views.request_methods import can_create_period # noqa
from whoahqa.views.request_methods import can_view_state # noqa
from whoahqa.views.request_methods import can_list_state # noqa
from whoahqa.views.submissions import SubmissionViews # noqa
from whoahqa.views.users import UserViews # noqa
from whoahqa.views.municipalities import MunicipalityViews # noqa
from whoahqa.views.states import StateViews # noqa
| 60.352941 | 84 | 0.840156 | 145 | 1,026 | 5.731034 | 0.248276 | 0.211793 | 0.308063 | 0.361011 | 0.574007 | 0.537906 | 0.537906 | 0.398315 | 0.186522 | 0.129964 | 0 | 0 | 0.111111 | 1,026 | 16 | 85 | 64.125 | 0.911184 | 0.076998 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bd7e5fe7eac67ab30db9a9d56d3065eb51c9be79 | 252 | py | Python | ca_on_greater_sudbury/people.py | dcycle/scrapers-ca | 4c7a6cd01d603221b5b3b7a400d2e5ca0c6e916f | [
"MIT"
] | null | null | null | ca_on_greater_sudbury/people.py | dcycle/scrapers-ca | 4c7a6cd01d603221b5b3b7a400d2e5ca0c6e916f | [
"MIT"
] | null | null | null | ca_on_greater_sudbury/people.py | dcycle/scrapers-ca | 4c7a6cd01d603221b5b3b7a400d2e5ca0c6e916f | [
"MIT"
] | null | null | null | from utils import CSVScraper
class GreaterSudburyPersonScraper(CSVScraper):
    # http://opendata.greatersudbury.ca/datasets/elected-officials-2014-csv
    csv_url = 'http://opendata.greatersudbury.ca/datasets/cc23919fdcff4f5fa2290dbc01571df5_0.csv'
| 36 | 97 | 0.81746 | 26 | 252 | 7.846154 | 0.692308 | 0.117647 | 0.254902 | 0.27451 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 0.083333 | 252 | 6 | 98 | 42 | 0.787879 | 0.27381 | 0 | 0 | 0 | 0 | 0.447514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bd8b903e836c2f6627bf6aed85b071a1732c8ce9 | 80 | py | Python | ovopy/model/__init__.py | riza-azmi/ovopy | e8b644565b2afd5876c17dbefd400025c462d734 | [
"MIT"
] | 25 | 2019-04-02T14:29:48.000Z | 2019-12-17T03:27:42.000Z | ovopy/model/__init__.py | riza-azmi/ovopy | e8b644565b2afd5876c17dbefd400025c462d734 | [
"MIT"
] | 4 | 2020-04-06T03:00:58.000Z | 2021-12-12T16:02:39.000Z | ovopy/model/__init__.py | riza-azmi/ovopy | e8b644565b2afd5876c17dbefd400025c462d734 | [
"MIT"
] | 8 | 2019-04-02T08:16:51.000Z | 2019-12-12T13:06:50.000Z | # -*- coding: utf-8 -*-
from . import auth
from . import etc
from . import error | 20 | 23 | 0.65 | 12 | 80 | 4.333333 | 0.666667 | 0.576923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 0.2 | 80 | 4 | 24 | 20 | 0.796875 | 0.2625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bda4795429b24a2b9e6365097d05105e7191fff0 | 2,486 | py | Python | lime/ToBeErased/FranckCondon/Hermite.py | binggu56/lime | 07f60c5105f0bedb11ac389fd671f4f1737a71fe | [
"MIT"
] | 4 | 2020-01-15T11:52:23.000Z | 2021-01-05T19:40:36.000Z | lime/ToBeErased/FranckCondon/Hermite.py | binggu56/scitools | 3f7ce3d8411a23186c73f1bb87a8778e039fbd0b | [
"MIT"
] | null | null | null | lime/ToBeErased/FranckCondon/Hermite.py | binggu56/scitools | 3f7ce3d8411a23186c73f1bb87a8778e039fbd0b | [
"MIT"
] | 3 | 2020-02-14T07:10:44.000Z | 2021-04-14T17:49:45.000Z | #import sympy as sym
# from scipy.special import hermite
from mpmath import hermite
import numpy as np
# def DHermite(n):
# """ Physicist's Hermite polynomials generated by dynamic programming
# Until we can install sympy, cannot deal with n>10
# """
# d = {}
# d[0] = lambda x: 1+0*x
# d[1] = lambda x: 2*x
# d[2] = lambda x: 4*x**2 - 2
# d[3] = lambda x: 8*x**3 - 12*x
# d[4] = lambda x: 16*x**4 - 48*x**2 + 12
# d[5] = lambda x: 32*x**5 - 160*x**3 + 120*x
# d[6] = lambda x: 64*x**6 - 480*x**4 + 720*x**2 - 120
# d[7] = lambda x: 128*x**7 - 1344*x**5 + 3360*x**3 - 1680*x
# d[8] = lambda x: 256*x**8 - 3584*x**6 + 13440*x**4 - 13440*x**2 + 1680
# d[9] = lambda x: 512*x**9 - 9216*x**7 + 48384*x**5 - 80640*x**3 + 30240*x
# d[10] = lambda x: 1024*x**10 - 23040*x**8 + 161280*x**6 - 403200*x**4 + 302400*x**2 - 30240
# if (n > 10):
# print("Error, n > 10")
# return
# # X = sym.Symbol('X')
# # for i in range(10, n+1):
# # d[i] = 2*x*d[i-1] - d[i-1].diff(x, 1)
# # H = sym.simplify(d[n])
# # h = sym.lambdify(x, H)
# # return h
# return d[n]
def iHermite(n):
    """
    Generates Fn(x) such that Fn(x) = (j**n)*Hn(jx)
    Which also follows the recurrence Fn(x) = -2xFn-1(x) + 2(n-1)Fn-2(x)
    """
    #x = sym.Symbol('x')
    # d = {}
    # d[0] = lambda x: 1 # DHermite(0)(0)
    # d[1] = lambda x: -2*x # 1j*DHermite(1)(1j)
    # d[2] = lambda x: 4*x**2 + 2
    # d[3] = lambda x: -8*x**3 - 12*x
    # d[4] = lambda x: 16*x**4 + 48*x**2 + 12
    # d[5] = lambda x: -32*x**5 - 160*x**3 - 120*x
    # d[6] = lambda x: 64*x**6 + 480*x**4 + 720*x**2 + 120
    # d[7] = lambda x: -128*x**7 - 1344*x**5 - 3360*x**3 - 1680*x
    # d[8] = lambda x: 256*x**8 + 3584*x**6 + 13440*x**4 + 13440*x**2 + 1680
    # d[9] = lambda x: -512*x**9 - 9216*x**7 - 48384*x**5 - 80640*x**3 - 30240*x
    # d[10] = lambda x: 1024*x**10 + 23040*x**8 + 161280*x**6 + 403200*x**4 + 302400*x**2 + 30240
    # if (n > 10):
    #     print("Error, n > 10")
    #     return
    # ## for i in range(11, n+1):
    # ##     d[i] = -2*x*d[i-1] + 2*(i-1)*d[i-2]
    # ## F = sym.simplify(d[n])
    # ## f = sym.lambdify(x, F)
    # return d[n]
    return lambda x: np.real((1j)**n * hermite(n, 1j * x))


if __name__ == '__main__':
    import numpy as np
    from scipy import special
    x = np.linspace(-1, 1)
    from lime.style import curve
    h = iHermite(3)
| 32.285714 | 97 | 0.483508 | 476 | 2,486 | 2.508403 | 0.220588 | 0.134841 | 0.01005 | 0.01005 | 0.509213 | 0.509213 | 0.472362 | 0.472362 | 0.472362 | 0.457286 | 0 | 0.203966 | 0.290024 | 2,486 | 76 | 98 | 32.710526 | 0.472521 | 0.8214 | 0 | 0.2 | 1 | 0 | 0.022284 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.5 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
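The recurrence quoted in the docstring, Fn(x) = -2x·Fn-1(x) + 2(n-1)·Fn-2(x) with F0 = 1 and F1(x) = -2x, gives a dependency-free way to cross-check `iHermite` against the tabulated polynomials in the commented-out block. A sketch (mpmath/numpy are not needed for it):

```python
def F(n, x):
    # Iterative form of Fn(x) = -2*x*F(n-1)(x) + 2*(n-1)*F(n-2)(x),
    # with F0 = 1 and F1(x) = -2*x.
    if n == 0:
        return 1.0
    prev2, prev1 = 1.0, -2.0 * x
    for k in range(2, n + 1):
        prev2, prev1 = prev1, -2.0 * x * prev1 + 2.0 * (k - 1) * prev2
    return prev1

# Agrees with the tabulated F2(x) = 4x**2 + 2 and F3(x) = -8x**3 - 12x:
print(F(2, 0.5))  # 3.0
print(F(3, 0.5))  # -7.0
```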
bda95d254af556669e703312c9aef04b10bde649 | 5,920 | py | Python | KerbalStuff/email.py | AlexanderDzhoganov/KerbalStuff | c8a5ab38ff3f28324870662d1248342a3fef17ef | [
"MIT"
] | 1 | 2019-04-15T10:30:17.000Z | 2019-04-15T10:30:17.000Z | KerbalStuff/email.py | AlexanderDzhoganov/KerbalStuff | c8a5ab38ff3f28324870662d1248342a3fef17ef | [
"MIT"
] | null | null | null | KerbalStuff/email.py | AlexanderDzhoganov/KerbalStuff | c8a5ab38ff3f28324870662d1248342a3fef17ef | [
"MIT"
] | null | null | null | import smtplib
import pystache
import os
import html.parser
from email.mime.text import MIMEText
from werkzeug.utils import secure_filename
from flask import url_for
from KerbalStuff.database import db
from KerbalStuff.objects import User
from KerbalStuff.config import _cfg, _cfgi
def send_confirmation(user, followMod=None):
    if _cfg("smtp-host") == "":
        return
    smtp = smtplib.SMTP(_cfg("smtp-host"), _cfgi("smtp-port"))
    smtp.login(_cfg("smtp-user"), _cfg("smtp-password"))
    with open("emails/confirm-account") as f:
        if followMod != None:
            message = MIMEText(pystache.render(f.read(), { 'user': user, "domain": _cfg("domain"),
                'confirmation': user.confirmation + "?f=" + followMod }))
        else:
            message = MIMEText(html.parser.HTMLParser().unescape(
                pystache.render(f.read(), { 'user': user, "domain": _cfg("domain"), 'confirmation': user.confirmation })))
    message['X-MC-Important'] = "true"
    message['X-MC-PreserveRecipients'] = "false"
    message['Subject'] = "Welcome to Kerbal Stuff!"
    message['From'] = "support@kerbalstuff.com"
    message['To'] = user.email
    smtp.sendmail("support@kerbalstuff.com", [ user.email ], message.as_string())
    smtp.quit()
def send_reset(user):
    if _cfg("smtp-host") == "":
        return
    smtp = smtplib.SMTP(_cfg("smtp-host"), _cfgi("smtp-port"))
    smtp.login(_cfg("smtp-user"), _cfg("smtp-password"))
    with open("emails/password-reset") as f:
        message = MIMEText(html.parser.HTMLParser().unescape(
            pystache.render(f.read(), { 'user': user, "domain": _cfg("domain"), 'confirmation': user.passwordReset })))
    message['X-MC-Important'] = "true"
    message['X-MC-PreserveRecipients'] = "false"
    message['Subject'] = "Reset your password on Kerbal Stuff"
    message['From'] = "support@kerbalstuff.com"
    message['To'] = user.email
    smtp.sendmail("support@kerbalstuff.com", [ user.email ], message.as_string())
    smtp.quit()
def send_grant_notice(mod, user):
    if _cfg("smtp-host") == "":
        return
    smtp = smtplib.SMTP(_cfg("smtp-host"), _cfgi("smtp-port"))
    smtp.login(_cfg("smtp-user"), _cfg("smtp-password"))
    with open("emails/grant-notice") as f:
        message = MIMEText(html.parser.HTMLParser().unescape(
            pystache.render(f.read(), { 'user': user, "domain": _cfg("domain"),
                'mod': mod, 'url': url_for('mods.mod', id=mod.id, mod_name=mod.name) })))
    message['X-MC-Important'] = "true"
    message['X-MC-PreserveRecipients'] = "false"
    message['Subject'] = "You've been asked to co-author a mod on Kerbal Stuff"
    message['From'] = "support@kerbalstuff.com"
    message['To'] = user.email
    smtp.sendmail("support@kerbalstuff.com", [ user.email ], message.as_string())
    smtp.quit()
def send_update_notification(mod, version, user):
    if _cfg("smtp-host") == "":
        return
    followers = [u.email for u in mod.followers]
    changelog = version.changelog
    if changelog:
        changelog = '\n'.join([' ' + l for l in changelog.split('\n')])
    targets = list()
    for follower in followers:
        targets.append(follower)
    if len(targets) == 0:
        return
    smtp = smtplib.SMTP(_cfg("smtp-host"), _cfgi("smtp-port"))
    smtp.login(_cfg("smtp-user"), _cfg("smtp-password"))
    with open("emails/mod-updated") as f:
        message = MIMEText(html.parser.HTMLParser().unescape(pystache.render(f.read(),
            {
                'mod': mod,
                'user': user,
                'domain': _cfg("domain"),
                'latest': version,
                'url': '/mod/' + str(mod.id) + '/' + secure_filename(mod.name)[:64],
                'changelog': changelog
            })))
    message['X-MC-PreserveRecipients'] = "false"
    message['Subject'] = user.username + " has just updated " + mod.name + "!"
    message['From'] = "support@kerbalstuff.com"
    message['To'] = ";".join(targets)
    smtp.sendmail("support@kerbalstuff.com", targets, message.as_string())
    smtp.quit()
def send_autoupdate_notification(mod):
    if _cfg("smtp-host") == "":
        return
    # Collect follower addresses; bail out early if nobody is subscribed.
    targets = [u.email for u in mod.followers]
    if not targets:
        return
    changelog = mod.default_version().changelog
    if changelog:
        changelog = '\n'.join([' ' + l for l in changelog.split('\n')])
    smtp = smtplib.SMTP(_cfg("smtp-host"), _cfgi("smtp-port"))
    smtp.login(_cfg("smtp-user"), _cfg("smtp-password"))
    with open("emails/mod-autoupdated") as f:
        message = MIMEText(html.parser.HTMLParser().unescape(pystache.render(f.read(),
            {
                'mod': mod,
                'domain': _cfg("domain"),
                'latest': mod.default_version(),
                'url': '/mod/' + str(mod.id) + '/' + secure_filename(mod.name)[:64],
                'changelog': changelog
            })))
    message['X-MC-PreserveRecipients'] = "false"
    message['Subject'] = mod.name + " is compatible with KSP " + mod.versions[0].ksp_version + "!"
    message['From'] = "support@kerbalstuff.com"
    message['To'] = ";".join(targets)
    smtp.sendmail("support@kerbalstuff.com", targets, message.as_string())
    smtp.quit()
def send_bulk_email(users, subject, body):
    if _cfg("smtp-host") == "":
        return
    targets = list(users)
    smtp = smtplib.SMTP(_cfg("smtp-host"), _cfgi("smtp-port"))
    smtp.login(_cfg("smtp-user"), _cfg("smtp-password"))
    message = MIMEText(body)
    message['X-MC-PreserveRecipients'] = "false"
    message['Subject'] = subject
    message['From'] = "support@kerbalstuff.com"
    message['To'] = ";".join(targets)
    smtp.sendmail("support@kerbalstuff.com", targets, message.as_string())
    smtp.quit()