hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7e8043c93402109216fe51849d85a9d9b4f5d5c6 | 38 | py | Python | src/__init__.py | shousper/pancake-hipchat-bot | a4aaaa6ff0d33daad1cae356a0f26fcbc64cce71 | [
"MIT"
] | null | null | null | src/__init__.py | shousper/pancake-hipchat-bot | a4aaaa6ff0d33daad1cae356a0f26fcbc64cce71 | [
"MIT"
] | null | null | null | src/__init__.py | shousper/pancake-hipchat-bot | a4aaaa6ff0d33daad1cae356a0f26fcbc64cce71 | [
"MIT"
] | null | null | null | from config import *
from bot import * | 19 | 20 | 0.763158 | 6 | 38 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 2 | 21 | 19 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e83dc0d05e20128b12e3312812672fd11dc8593 | 139 | py | Python | thualign/tokenizers/__init__.py | bryant1410/Mask-Align | 329690919d6885a8fcdf13beef6cf98ff6a2d51a | [
"BSD-3-Clause"
] | 27 | 2021-05-11T07:24:59.000Z | 2022-03-25T05:23:45.000Z | thualign/tokenizers/__init__.py | bryant1410/Mask-Align | 329690919d6885a8fcdf13beef6cf98ff6a2d51a | [
"BSD-3-Clause"
] | 11 | 2021-10-02T05:56:01.000Z | 2022-03-30T02:32:36.000Z | thualign/tokenizers/__init__.py | bryant1410/Mask-Align | 329690919d6885a8fcdf13beef6cf98ff6a2d51a | [
"BSD-3-Clause"
] | 11 | 2021-06-04T05:23:39.000Z | 2022-03-19T19:40:55.000Z | from thualign.tokenizers.tokenizer import Tokenizer, WhiteSpaceTokenizer
from thualign.tokenizers.unicode_tokenizer import UnicodeTokenizer | 69.5 | 72 | 0.906475 | 14 | 139 | 8.928571 | 0.571429 | 0.192 | 0.352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057554 | 139 | 2 | 73 | 69.5 | 0.954198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e440f42c2ea3dfe07a635417f945f486399e88e | 30 | py | Python | src/amulet/__init__.py | Amulet-Team/Amulet-cli | 3d4e5d05ff3fc4869baedcfebab9aa8e62dfb3db | [
"MIT"
] | 1 | 2019-09-28T23:35:01.000Z | 2019-09-28T23:35:01.000Z | src/amulet/__init__.py | Amulet-Team/Amulet-cli | 3d4e5d05ff3fc4869baedcfebab9aa8e62dfb3db | [
"MIT"
] | null | null | null | src/amulet/__init__.py | Amulet-Team/Amulet-cli | 3d4e5d05ff3fc4869baedcfebab9aa8e62dfb3db | [
"MIT"
] | null | null | null | from .api import world_loader
| 15 | 29 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e49d531cad47055e11d809144b24c4407db414c | 141 | py | Python | hanibal/fiscaloriginal/report/__init__.py | Christian-Castro/castro_odoo8 | 8247fdb20aa39e043b6fa0c4d0af509462ab3e00 | [
"Unlicense"
] | null | null | null | hanibal/fiscaloriginal/report/__init__.py | Christian-Castro/castro_odoo8 | 8247fdb20aa39e043b6fa0c4d0af509462ab3e00 | [
"Unlicense"
] | null | null | null | hanibal/fiscaloriginal/report/__init__.py | Christian-Castro/castro_odoo8 | 8247fdb20aa39e043b6fa0c4d0af509462ab3e00 | [
"Unlicense"
] | null | null | null | # import factura_reporte
import retencion_reporte
# import compra_reporte
# import requisicioncompra_reporte
import ride_factura_electronica
| 23.5 | 34 | 0.886525 | 16 | 141 | 7.4375 | 0.5 | 0.436975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092199 | 141 | 5 | 35 | 28.2 | 0.929688 | 0.546099 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e74f7d15a4bc418e11e69d5d224deece5c8f557 | 36 | py | Python | lgw/__init__.py | ebridges/lgw | c5a0b51bb6d3e5e9d6c1fa10ba186ba7f56c8de4 | [
"Apache-2.0"
] | 1 | 2020-05-25T19:01:26.000Z | 2020-05-25T19:01:26.000Z | lgw/__init__.py | ebridges/lgw | c5a0b51bb6d3e5e9d6c1fa10ba186ba7f56c8de4 | [
"Apache-2.0"
] | 5 | 2019-12-05T10:55:56.000Z | 2020-06-05T17:48:12.000Z | lgw/__init__.py | ebridges/lgw | c5a0b51bb6d3e5e9d6c1fa10ba186ba7f56c8de4 | [
"Apache-2.0"
] | null | null | null | from lgw.version import __version__
| 18 | 35 | 0.861111 | 5 | 36 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7ed10ce9a9fc41722382c81c7c7caad7e58c60de | 7,654 | py | Python | feature_selection_viz.py | georgetown-analytics/Trade-Imbalances | 122ce10e40362d1ed94132a47aa7a69a6da46281 | [
"MIT"
] | 1 | 2020-12-17T15:19:42.000Z | 2020-12-17T15:19:42.000Z | feature_selection_viz.py | georgetown-analytics/Dinein-or-Takeout-Chicago | 122ce10e40362d1ed94132a47aa7a69a6da46281 | [
"MIT"
] | 1 | 2018-09-28T01:16:55.000Z | 2018-09-28T01:16:55.000Z | feature_selection_viz.py | georgetown-analytics/Dinein-or-Takeout-Chicago | 122ce10e40362d1ed94132a47aa7a69a6da46281 | [
"MIT"
] | 1 | 2018-09-28T01:14:37.000Z | 2018-09-28T01:14:37.000Z | #get data and setup
myConnection = psycopg2.connect( host=host, user=user, password=password, dbname=dbname )
import pandas as pd
data = pd.read_sql("Select * FROM final_data_thayn;", con=myConnection)
data_unclean = pd.read_sql("Select * FROM food_inspection_predict;", con=myConnection)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from yellowbrick.features import Rank1D, Rank2D
data.dropna(inplace=True)
#set x and y
cols=['price', 'rating', 'review_count', 'is_african', 'is_asian_fusion', 'is_bakeries', 'is_bars',
'is_breakfast_brunch', 'is_buffets', 'is_cafes', 'is_caribbean',
'is_chinese', 'is_deli', 'is_eastern_european', 'is_european',
'is_fast_food', 'is_hawaiian', 'is_health_food', 'is_icecream',
'is_indian', 'is_italian', 'is_japanese', 'is_korean', 'is_latin',
'is_mediterranean', 'is_mexican', 'is_middleasten', 'is_new_american',
'is_piza', 'is_seafood', 'is_south_east_asian', 'is_southern',
'is_street_food', 'is_sweets', 'is_thai', 'is_other_category',
'is_pickup', 'is_delivery', 'is_restaurant_reservation', 'Canvass',
'Complaint', 'reinspection', 'License', 'FoodPoison', 'high_risk_1',
'medium_risk_2', 'low_risk_2', 'grocery', 'Bakery', 'Mobile']
X = data[cols]
y = data['pass']
#histogram of price
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(data['price'], bins = 10, range = (data['price'].min(),data['price'].max()))
plt.title('Price distribution')
plt.xlabel('Price')
plt.ylabel('Count of Price')
plt.show()
#factorplot with price and pass
g = sns.factorplot("price", col="pass", col_wrap=4,
data=data[data.price.notnull()], kind="count", size=4, aspect=.8)
#factorplot with rating and pass
g = sns.factorplot("rating", col="pass", col_wrap=4,
data=data[data.rating.notnull()], kind="count", size=4, aspect=.8)
g.savefig("rating_results.png")
#factorplot with risk and pass
g = sns.factorplot("risk", col="results", col_wrap=4,
data=data_unclean[data_unclean.risk.notnull()], kind="count", size=4, aspect=.8)
#pairplots
g = sns.pairplot(data=data[['price', 'rating', 'review_count', 'pass']], hue='pass')
g.savefig("pairplot.png")
g = sns.pairplot(data=data[['high_risk_1',
'medium_risk_2', 'low_risk_2', 'pass']], hue='pass')
g.savefig("pairplot_2.png")
#1D and 2D feature analysis
#1D
features = [
'price', 'rating', 'review_count', 'high_risk_1',
'medium_risk_2', 'low_risk_2', 'is_pickup', 'is_delivery',
'is_restaurant_reservation', 'Canvass',
'Complaint', 'reinspection', 'License', 'FoodPoison'
]
X = data[features]
y = data['pass']
visualizer = Rank1D(features=features, algorithm='shapiro')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="1D_features.png") # Draw/show/poof the data
#2D
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="2D_features.png") # Draw/show/poof the data
#1D with other features but including rating
features = ['rating',
'is_african', 'is_asian_fusion', 'is_bakeries', 'is_bars',
'is_breakfast_brunch', 'is_buffets', 'is_cafes', 'is_caribbean',
'is_chinese', 'is_deli', 'is_eastern_european', 'is_european',
'is_fast_food', 'is_hawaiian', 'is_health_food', 'is_icecream',
]
X = data[features]
y = data['pass']
visualizer = Rank1D(features=features, algorithm='shapiro')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="1D_features_v2.png") # Draw/show/poof the data
#2D with same features as above but without rating and with review_count
features = ['review_count',
'is_african', 'is_asian_fusion', 'is_bakeries', 'is_bars',
'is_breakfast_brunch', 'is_buffets', 'is_cafes', 'is_caribbean',
'is_chinese', 'is_deli', 'is_eastern_european', 'is_european',
'is_fast_food', 'is_hawaiian', 'is_health_food', 'is_icecream',
]
X = data[features]
y = data['pass']
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="2D_features_v2.png") # Draw/show/poof the data
#1D with new features but still with rating
features = ['rating',
'is_indian', 'is_italian', 'is_japanese', 'is_korean', 'is_latin',
'is_mediterranean', 'is_mexican', 'is_middleasten', 'is_new_american',
'is_piza', 'is_seafood', 'is_south_east_asian', 'is_southern',
'is_street_food', 'is_sweets', 'is_thai', 'is_other_category',
]
X = data[features]
y = data['pass']
visualizer = Rank1D(features=features, algorithm='shapiro')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="1D_features_v3.png") # Draw/show/poof the data
#2D with same features as above but swapping rating with review_count
features = ['review_count',
'is_indian', 'is_italian', 'is_japanese', 'is_korean', 'is_latin',
'is_mediterranean', 'is_mexican', 'is_middleasten', 'is_new_american',
'is_piza', 'is_seafood', 'is_south_east_asian', 'is_southern',
'is_street_food', 'is_sweets', 'is_thai', 'is_other_category',
]
X = data[features]
y = data['pass']
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="2D_features_v3.png") # Draw/show/poof the data
#1D and 2D with all features, impossible to see
features = [
'price', 'rating', 'review_count', 'is_african', 'is_asian_fusion', 'is_bakeries', 'is_bars',
'is_breakfast_brunch', 'is_buffets', 'is_cafes', 'is_caribbean',
'is_chinese', 'is_deli', 'is_eastern_european', 'is_european',
'is_fast_food', 'is_hawaiian', 'is_health_food', 'is_icecream',
'is_indian', 'is_italian', 'is_japanese', 'is_korean', 'is_latin',
'is_mediterranean', 'is_mexican', 'is_middleasten', 'is_new_american',
'is_piza', 'is_seafood', 'is_south_east_asian', 'is_southern',
'is_street_food', 'is_sweets', 'is_thai', 'is_other_category',
'is_pickup', 'is_delivery', 'is_restaurant_reservation', 'Canvass',
'Complaint', 'reinspection', 'License', 'FoodPoison', 'high_risk_1',
'medium_risk_2', 'low_risk_2', 'grocery', 'Bakery', 'Mobile',
]
X = data[features]
y = data['pass']
visualizer = Rank1D(features=features, algorithm='shapiro')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="1D_features_all.png") # Draw/show/poof the data
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof(outpath="2D_features_all.png")
| 40.930481 | 116 | 0.673112 | 1,017 | 7,654 | 4.821042 | 0.167158 | 0.032837 | 0.031205 | 0.024475 | 0.821334 | 0.775444 | 0.746482 | 0.704059 | 0.675505 | 0.664083 | 0 | 0.00958 | 0.181735 | 7,654 | 186 | 117 | 41.150538 | 0.773272 | 0.13248 | 0 | 0.62406 | 0 | 0 | 0.395486 | 0.018782 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.090226 | 0.082707 | 0 | 0.082707 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
7d07923cd15aaa96b7df997c3b77f4a9adf66a21 | 2,243 | py | Python | search/__init__.py | Neurs1/search | cb75a30819080aabb875670199b5108b43f55e6b | [
"MIT"
] | 1 | 2022-01-22T02:44:11.000Z | 2022-01-22T02:44:11.000Z | search/__init__.py | Neurs1/search | cb75a30819080aabb875670199b5108b43f55e6b | [
"MIT"
] | null | null | null | search/__init__.py | Neurs1/search | cb75a30819080aabb875670199b5108b43f55e6b | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
from requests import get
from urllib.parse import quote
def google(query, max_results = 10, lang = "en", proxies = {}):
	page = get(f"https://www.google.com/search?q={quote(query, safe='')}&num={max_results}&hl={lang}", headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ''Chrome/61.0.3163.100 Safari/537.36'}, proxies=proxies).text
chunk = BeautifulSoup(page, "lxml").find_all("div", attrs={"class": "yuRUbf"})
results = {}
for i in range(len(chunk)):
results.update({i: {"title": chunk[i].find("h3").text, "url": chunk[i].find("a", href = True)["href"]}})
return results
def yahoo(query, proxies = {}):
	page = get(f"https://search.yahoo.com/search?p={quote(query, safe='')}", headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ''Chrome/61.0.3163.100 Safari/537.36'}, proxies=proxies).text
chunk = BeautifulSoup(page, "lxml").find_all("div", attrs={"class": "options-toggle"})
results = {}
for i in range(len(chunk)):
results.update({i: {"title": chunk[i].find("h3").text, "url": chunk[i].find("a", href = True)["href"]}})
return results
def bing(query, proxies = {}):
	page = get(f"https://www.bing.com/search?q={quote(query, safe='')}", headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ''Chrome/61.0.3163.100 Safari/537.36'}, proxies=proxies).text
chunk = BeautifulSoup(page, "lxml").find_all("li", attrs={"class": "b_algo"})
results = {}
for i in range(len(chunk)):
results.update({i: {"title": chunk[i].find("h2").text, "url": chunk[i].find("a", href = True)["href"]}})
return results
def aol(query, proxies = {}):
	page = get(f"https://search.aol.com/aol/search?q={quote(query, safe='')}", headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ''Chrome/61.0.3163.100 Safari/537.36'}, proxies=proxies).text
chunk = BeautifulSoup(page, "lxml").find_all("div", attrs={"class": "options-toggle"})
results = {}
for i in range(len(chunk)):
results.update({i: {"title": chunk[i].find("h3").text, "url": chunk[i].find("a", href = True)["href"]}})
return results | 64.085714 | 266 | 0.645118 | 346 | 2,243 | 4.16185 | 0.234104 | 0.027778 | 0.055556 | 0.041667 | 0.864583 | 0.864583 | 0.810417 | 0.767361 | 0.767361 | 0.767361 | 0 | 0.062468 | 0.122158 | 2,243 | 35 | 267 | 64.085714 | 0.668867 | 0 | 0 | 0.548387 | 0 | 0.16129 | 0.401515 | 0.016488 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.096774 | 0 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d2e89c156f3e79643527490e9e5fe835cbcb486 | 15,920 | py | Python | lib/datasets_rel/dataset_catalog_rel.py | champon1020/TRACE | 8ed0aed87e153af66f02502887a4de0d39867209 | [
"MIT"
] | 34 | 2021-08-19T05:59:58.000Z | 2022-03-26T09:26:54.000Z | lib/datasets_rel/dataset_catalog_rel.py | champon1020/TRACE | 8ed0aed87e153af66f02502887a4de0d39867209 | [
"MIT"
] | 8 | 2021-09-15T05:27:23.000Z | 2022-02-27T12:38:03.000Z | lib/datasets_rel/dataset_catalog_rel.py | champon1020/TRACE | 8ed0aed87e153af66f02502887a4de0d39867209 | [
"MIT"
] | 6 | 2021-09-16T10:51:38.000Z | 2022-03-05T22:48:54.000Z | # Adapted from Detectron.pytorch/lib/datasets/dataset_catalog.py
# for this project by Ji Zhang,2019
#-----------------------------------------------------------------------------
# Copyright (c) 2017-present, Facebook, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
"""Collection of available datasets."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
from core.config import cfg
# Path to data dir
_DATA_DIR = cfg.DATA_DIR
# Required dataset entry keys
IM_DIR = 'image_directory'
ANN_FN = 'annotation_file'
ANN_FN2 = 'annotation_file2'
ANN_FN3 = 'predicate_file'
ANN_FN4 = 'name_mapping_file'
ANN_FN5 = 'name_list_file'
# Optional dataset entry keys
IM_PREFIX = 'image_prefix'
DEVKIT_DIR = 'devkit_directory'
RAW_DIR = 'raw_dir'
# Available datasets
DATASETS = {
# OpenImages_v4 rel dataset for relationship task
'oi_rel_train': {
IM_DIR:
_DATA_DIR + '/openimages_v4/train',
ANN_FN:
_DATA_DIR + '/openimages_v4/rel/detections_train.json',
ANN_FN2:
_DATA_DIR + '/openimages_v4/rel/rel_only_annotations_train.json',
ANN_FN3:
_DATA_DIR + '/openimages_v4/rel/rel_9_predicates.json',
},
'oi_rel_train_mini': {
IM_DIR:
_DATA_DIR + '/openimages_v4/train',
ANN_FN:
_DATA_DIR + '/openimages_v4/rel/detections_train.json',
ANN_FN2:
_DATA_DIR + '/openimages_v4/rel/rel_only_annotations_train_mini.json',
ANN_FN3:
_DATA_DIR + '/openimages_v4/rel/rel_9_predicates.json',
},
'oi_rel_val': {
IM_DIR:
_DATA_DIR + '/openimages_v4/train',
ANN_FN:
_DATA_DIR + '/openimages_v4/rel/detections_val.json',
ANN_FN2:
_DATA_DIR + '/openimages_v4/rel/rel_only_annotations_val.json',
ANN_FN3:
_DATA_DIR + '/openimages_v4/rel/rel_9_predicates.json',
},
'oi_rel_val_mini': {
IM_DIR:
_DATA_DIR + '/openimages_v4/train',
ANN_FN:
_DATA_DIR + '/openimages_v4/rel/detections_val.json',
ANN_FN2:
_DATA_DIR + '/openimages_v4/rel/rel_only_annotations_val_mini.json',
ANN_FN3:
_DATA_DIR + '/openimages_v4/rel/rel_9_predicates.json',
},
# for Kaggle test
'oi_kaggle_rel_test': {
IM_DIR:
_DATA_DIR + '/openimages_v4/rel/kaggle_test_images/challenge2018_test',
ANN_FN: # pseudo annotation
_DATA_DIR + '/openimages_v4/rel/kaggle_test_images/detections_test.json',
ANN_FN2:
_DATA_DIR + '/openimages_v4/rel/kaggle_test_images/all_rel_only_annotations_test.json',
ANN_FN3:
_DATA_DIR + '/openimages_v4/rel/rel_9_predicates.json',
},
# VG dataset
'vg_train': {
IM_DIR:
_DATA_DIR + '/vg/VG_100K',
ANN_FN:
_DATA_DIR + '/vg/detections_train.json',
ANN_FN2:
_DATA_DIR + '/vg/rel_annotations_train.json',
ANN_FN3:
_DATA_DIR + '/vg/predicates.json',
},
'vg_val': {
IM_DIR:
_DATA_DIR + '/vg/VG_100K',
ANN_FN:
_DATA_DIR + '/vg/detections_val.json',
ANN_FN2:
_DATA_DIR + '/vg/rel_annotations_val.json',
ANN_FN3:
_DATA_DIR + '/vg/predicates.json',
},
# VRD dataset
'vrd_train': {
IM_DIR:
_DATA_DIR + '/vrd/train_images',
ANN_FN:
_DATA_DIR + '/vrd/detections_train.json',
ANN_FN2:
_DATA_DIR + '/vrd/new_annotations_train.json',
ANN_FN3:
_DATA_DIR + '/vrd/predicates.json',
},
'vrd_val': {
IM_DIR:
_DATA_DIR + '/vrd/val_images',
ANN_FN:
_DATA_DIR + '/vrd/detections_val.json',
ANN_FN2:
_DATA_DIR + '/vrd/new_annotations_val.json',
ANN_FN3:
_DATA_DIR + '/vrd/predicates.json',
},
# vidvrd
'vidvrd_train': {
IM_DIR:
_DATA_DIR + '/vidvrd/frames',
ANN_FN:
_DATA_DIR + '/vidvrd/annotations/detections_train.json',
ANN_FN2:
_DATA_DIR + '/vidvrd/annotations/new_annotations_train.json',
ANN_FN3:
_DATA_DIR + '/vidvrd/annotations/predicates.json',
ANN_FN4:
_DATA_DIR + '/vidvrd/annotations/train_fname_mapping.json',
ANN_FN5:
_DATA_DIR + '/vidvrd/annotations/train_fname_list.json',
},
'vidvrd_val': {
IM_DIR:
_DATA_DIR + '/vidvrd/frames',
ANN_FN:
_DATA_DIR + '/vidvrd/annotations/detections_val.json',
ANN_FN2:
_DATA_DIR + '/vidvrd/annotations/new_annotations_val.json',
ANN_FN3:
_DATA_DIR + '/vidvrd/annotations/predicates.json',
ANN_FN4:
_DATA_DIR + '/vidvrd/annotations/val_fname_mapping.json',
ANN_FN5:
_DATA_DIR + '/vidvrd/annotations/val_fname_list.json',
},
# ActionGenome dataset
'ag_train': {
IM_DIR:
_DATA_DIR + '/ag/frames',
ANN_FN:
_DATA_DIR + '/ag/annotations/detections_train.json',
ANN_FN2:
_DATA_DIR + '/ag/annotations/new_annotations_train.json',
ANN_FN3:
_DATA_DIR + '/ag/annotations/predicates.json',
ANN_FN4:
_DATA_DIR + '/ag/annotations/train_fname_mapping.json',
ANN_FN5:
_DATA_DIR + '/ag/annotations/train_fname_list.json',
},
'ag_val': {
IM_DIR:
_DATA_DIR + '/ag/frames',
ANN_FN:
_DATA_DIR + '/ag/annotations/detections_val.json',
ANN_FN2:
_DATA_DIR + '/ag/annotations/new_annotations_val.json',
ANN_FN3:
_DATA_DIR + '/ag/annotations/predicates.json',
ANN_FN4:
_DATA_DIR + '/ag/annotations/val_fname_mapping.json',
ANN_FN5:
_DATA_DIR + '/ag/annotations/val_fname_list.json',
},
# GQA dataset
'gqa_train': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/detections_train.json',
ANN_FN2:
_DATA_DIR + '/gqa/rel_annotations_train.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships.json',
},
'gqa_val': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/detections_val.json',
ANN_FN2:
_DATA_DIR + '/gqa/rel_annotations_val.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships.json',
},
'gqa_all': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships.json',
},
'gqa_1st_of_3': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all_1st_of_3.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships.json',
},
'gqa_2nd_of_3': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all_2nd_of_3.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships.json',
},
'gqa_3rd_of_3': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all_3rd_of_3.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships.json',
},
# GQA no_plural_verb dataset
'gqa_verb_train': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/detections_train_no_plural.json',
ANN_FN2:
_DATA_DIR + '/gqa/rel_annotations_verb_no_plural_train.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships_verb.json',
},
'gqa_verb_val': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/detections_val_no_plural.json',
ANN_FN2:
_DATA_DIR + '/gqa/rel_annotations_verb_no_plural_val.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships_verb.json',
},
'gqa_verb_all': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships_verb.json',
},
'gqa_verb_1st_of_3': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all_1st_of_3.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships_verb.json',
},
'gqa_verb_2nd_of_3': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
ANN_FN2:
_DATA_DIR + '/gqa/dummy_rel_annotations_all_2nd_of_3.json',
ANN_FN3:
_DATA_DIR + '/gqa/relationships_verb.json',
},
'gqa_verb_3rd_of_3': {
IM_DIR:
_DATA_DIR + '/gqa/images',
ANN_FN:
_DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_3rd_of_3.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    'gqa_verb_1st_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_1st_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    'gqa_verb_2nd_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_2nd_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    'gqa_verb_3rd_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_3rd_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    'gqa_verb_4th_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_4th_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    'gqa_verb_5th_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_5th_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    'gqa_verb_6th_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_6th_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_verb.json',
    },
    # GQA no_plural_spt dataset
    'gqa_spt_train': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/detections_train_no_plural.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/rel_annotations_spt_no_plural_train.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_spt.json',
    },
    'gqa_spt_val': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/detections_val_no_plural.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/rel_annotations_spt_no_plural_val.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_spt.json',
    },
    # GQA no_plural_misc dataset
    'gqa_misc_train': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/detections_train_no_plural.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/rel_annotations_misc_no_plural_train.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_val': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/detections_val_no_plural.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/rel_annotations_misc_no_plural_val.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_1st_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_1st_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_2nd_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_2nd_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_3rd_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_3rd_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_4th_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_4th_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_5th_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_5th_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
    'gqa_misc_6th_of_6': {
        IM_DIR:
            _DATA_DIR + '/gqa/images',
        ANN_FN:
            _DATA_DIR + '/gqa/dummy_detections_no_plural_all.json',
        ANN_FN2:
            _DATA_DIR + '/gqa/dummy_rel_annotations_all_6th_of_6.json',
        ANN_FN3:
            _DATA_DIR + '/gqa/relationships_misc.json',
    },
}
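Consumers of this catalog select an entry by dataset key and read its path fields. A minimal, self-contained sketch of that lookup pattern follows; the `DATASETS` name, the string-valued key constants, and the `/data` root are illustrative assumptions, not the module's real identifiers (the catalog dict's actual name is defined outside this excerpt).

```python
# Sketch of catalog lookup; DATASETS, IM_DIR, ANN_FN and the /data root are
# illustrative stand-ins for the real module's catalog dict and key constants.
_DATA_DIR = "/data"
IM_DIR, ANN_FN = "image_directory", "annotation_file"

DATASETS = {
    "gqa_spt_val": {
        IM_DIR: _DATA_DIR + "/gqa/images",
        ANN_FN: _DATA_DIR + "/gqa/detections_val_no_plural.json",
    },
}


def get_paths(name):
    """Return (image dir, annotation file) for a catalog entry."""
    if name not in DATASETS:
        raise KeyError("Unknown dataset: " + name)
    entry = DATASETS[name]
    return entry[IM_DIR], entry[ANN_FN]
```

Keeping the keys as module-level constants (rather than bare strings at each call site) is what lets every entry in the catalog above share one schema.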


# --- src/tests/test_dataset.py (danipab12/Lab3, MIT) ---
from unittest import TestCase


class TestDataset(TestCase):
    pass
    # TODO: write tests


# --- src/Factory.py (zillwa/BrokerBot, MIT) ---
from DataHandler import DataHandler
from ExecutionHandler import ExecutionHandler
from Strategy import Strategy
# NOTE: AlpacaDataHandler and AlpacaExecutionHandler are used below but were
# never imported in the original; these import paths are assumptions.
from DataHandler import AlpacaDataHandler
from ExecutionHandler import AlpacaExecutionHandler


class DH_factory:
    # Overview: Class that creates and returns any DH object

    def construct_dh(self, enum, params):
        """
        Overview: Constructs and returns proper DH based on passed in enum
        Params: ENUM for DH_api
                params is list containing DH parameters
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid DH object based on parameter
        Throws: ValueError if parameter is invalid
        """
        if enum == 1:
            return self.dh_alpaca(params)
        elif enum == 2:
            return self.dh_binance(params)
        elif enum == 3:
            return self.dh_polygon(params)
        elif enum == 4:
            return self.dh_ibkr(params)
        elif enum == 5:
            return self.dh_alpha(params)
        else:
            raise ValueError("Invalid ENUM")

    def dh_alpaca(self, params):
        """
        Overview: Constructs and returns alpaca DH based on params
        Params: params is a list of parameters for the alpaca api DH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid alpaca DH object
        Throws: none
        """
        dh = AlpacaDataHandler(params[0], params[1], params[2], params[3])
        return dh

    def dh_binance(self, params):
        """
        Overview: Constructs and returns binance DH based on params
        Params: params is a list of parameters for the binance api DH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid binance DH object
        Throws: none
        """
        pass

    def dh_polygon(self, params):
        """
        Overview: Constructs and returns polygon DH based on params
        Params: params is a list of parameters for the polygon api DH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid polygon DH object
        Throws: none
        """
        pass

    def dh_ibkr(self, params):
        """
        Overview: Constructs and returns ibkr DH based on params
        Params: params is a list of parameters for the ibkr api DH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid ibkr DH object
        Throws: none
        """
        pass

    def dh_alpha(self, params):
        """
        Overview: Constructs and returns alpha DH based on params
        Params: params is a list of parameters for the alpha api DH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid alpha DH object
        Throws: none
        """
        pass


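The if/elif chain in `construct_dh` can equivalently be expressed as a dict-based dispatch, which keeps the enum-to-constructor mapping in one place. A self-contained sketch of the pattern; the stand-in constructors below are illustrative, not the project's real handler classes:

```python
# Dict-based dispatch equivalent to DH_factory.construct_dh's if/elif chain.
# make_alpaca_dh / make_binance_dh are stand-ins for the real handler classes.
def make_alpaca_dh(params):
    return ("alpaca", list(params))


def make_binance_dh(params):
    return ("binance", list(params))


_DH_CONSTRUCTORS = {1: make_alpaca_dh, 2: make_binance_dh}


def construct_dh(enum, params):
    """Look up the constructor for enum, raising on unknown values."""
    try:
        ctor = _DH_CONSTRUCTORS[enum]
    except KeyError:
        raise ValueError("Invalid ENUM")
    return ctor(params)
```

Adding a new handler then means adding one dict entry instead of another elif branch.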
class EH_factory:
    # Overview: Class that creates and returns any EH object

    def construct_eh(self, enum, params):
        """
        Overview: Constructs and returns proper EH based on passed in enum
        Params: ENUM for EH_api
                params is list containing EH parameters
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid EH object based on parameter
        Throws: ValueError if parameter is invalid
        """
        if enum == 1:
            return self.eh_alpaca(params)
        elif enum == 2:
            return self.eh_binance(params)
        elif enum == 3:
            return self.eh_ibkr(params)
        elif enum == 4:
            return self.eh_alpha(params)
        else:
            raise ValueError("Invalid ENUM")

    def eh_alpaca(self, params):
        """
        Overview: Constructs and returns alpaca EH based on params
        Params: params is a list of parameters for the alpaca api EH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid alpaca EH object
        Throws: none
        """
        eh = AlpacaExecutionHandler(params[0], params[1], params[2])
        return eh

    def eh_binance(self, params):
        """
        Overview: Constructs and returns binance EH based on params
        Params: params is a list of parameters for the binance api EH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid binance EH object
        Throws: none
        """
        pass

    def eh_ibkr(self, params):
        """
        Overview: Constructs and returns ibkr EH based on params
        Params: params is a list of parameters for the ibkr api EH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid ibkr EH object
        Throws: none
        """
        pass

    def eh_alpha(self, params):
        """
        Overview: Constructs and returns alpha EH based on params
        Params: params is a list of parameters for the alpha api EH
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid alpha EH object
        Throws: none
        """
        pass


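The nine strategy enums in `Strategy_factory` below form a 3x3 grid of duration (short/medium/long) by risk (low/medium/high), with enum k constructing `Strategy` profile index k-1. Note the original dispatch sends enum 4 to `medium_low_risk`, which looks like a copy-paste slip; this sketch assumes the intended mapping, where enum 4 is short/medium risk:

```python
# Derive the (duration, risk) name for a 1-based strategy enum, assuming the
# intended 3x3 grid (enum 4 -> short_medium_risk, matching Strategy index 3).
DURATIONS = ["short", "medium", "long"]
RISKS = ["low", "medium", "high"]


def profile_name(enum):
    """Map enum 1-9 onto its duration_risk profile name."""
    if not 1 <= enum <= 9:
        raise ValueError("Invalid ENUM")
    idx = enum - 1  # same index passed as Strategy's first argument
    return "{}_{}_risk".format(DURATIONS[idx % 3], RISKS[idx // 3])
```

This table-driven view makes the off-by-one between the enum and the `Strategy(idx, ...)` argument explicit.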
class Strategy_factory:
    # Overview: Class that creates and returns any Strategy object

    def construct_strat(self, enum, params):
        """
        Overview: Constructs and returns proper strategy based on passed in enum
        Params: ENUM for the strategy profile
                params is list containing Strategy parameters
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid Strategy object based on parameter
        Throws: ValueError if parameter is invalid
        """
        if enum == 1:
            return self.short_low_risk(params)
        elif enum == 2:
            return self.medium_low_risk(params)
        elif enum == 3:
            return self.long_low_risk(params)
        elif enum == 4:
            return self.short_medium_risk(params)
        elif enum == 5:
            return self.medium_medium_risk(params)
        elif enum == 6:
            return self.long_medium_risk(params)
        elif enum == 7:
            return self.short_high_risk(params)
        elif enum == 8:
            return self.medium_high_risk(params)
        elif enum == 9:
            return self.long_high_risk(params)
        else:
            raise ValueError("Invalid ENUM")

    def short_low_risk(self, params):
        """
        Overview: Constructs and returns short_low_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid short_low_risk strategy object
        Throws: none
        """
        strat = Strategy(0, params[0], params[1], params[2], params[3])
        return strat

    def medium_low_risk(self, params):
        """
        Overview: Constructs and returns medium_low_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid medium_low_risk strategy object
        Throws: none
        """
        strat = Strategy(1, params[0], params[1], params[2], params[3])
        return strat

    def long_low_risk(self, params):
        """
        Overview: Constructs and returns long_low_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid long_low_risk strategy object
        Throws: none
        """
        strat = Strategy(2, params[0], params[1], params[2], params[3])
        return strat

    def short_medium_risk(self, params):
        """
        Overview: Constructs and returns short_medium_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid short_medium_risk strategy object
        Throws: none
        """
        strat = Strategy(3, params[0], params[1], params[2], params[3])
        return strat

    def medium_medium_risk(self, params):
        """
        Overview: Constructs and returns medium_medium_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid medium_medium_risk strategy object
        Throws: none
        """
        strat = Strategy(4, params[0], params[1], params[2], params[3])
        return strat

    def long_medium_risk(self, params):
        """
        Overview: Constructs and returns long_medium_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid long_medium_risk strategy object
        Throws: none
        """
        strat = Strategy(5, params[0], params[1], params[2], params[3])
        return strat

    def short_high_risk(self, params):
        """
        Overview: Constructs and returns short_high_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid short_high_risk strategy object
        Throws: none
        """
        strat = Strategy(6, params[0], params[1], params[2], params[3])
        return strat

    def medium_high_risk(self, params):
        """
        Overview: Constructs and returns medium_high_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid medium_high_risk strategy object
        Throws: none
        """
        strat = Strategy(7, params[0], params[1], params[2], params[3])
        return strat

    def long_high_risk(self, params):
        """
        Overview: Constructs and returns long_high_risk strat based on parameters
        Params: params is a list of parameters for the strategy
        Requires: none
        Modifies: none
        Effects: none
        Returns: Valid long_high_risk strategy object
        Throws: none
        """
        strat = Strategy(8, params[0], params[1], params[2], params[3])
        return strat


# --- rapid7vmconsole/__init__.py (kiblik/vm-console-client-python, MIT) ---
# coding: utf-8
# flake8: noqa
"""
Python InsightVM API Client
OpenAPI spec version: 3
Contact: support@rapid7.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import apis into sdk package
from rapid7vmconsole.api.administration_api import AdministrationApi
from rapid7vmconsole.api.asset_api import AssetApi
from rapid7vmconsole.api.asset_discovery_api import AssetDiscoveryApi
from rapid7vmconsole.api.asset_group_api import AssetGroupApi
from rapid7vmconsole.api.credential_api import CredentialApi
from rapid7vmconsole.api.policy_api import PolicyApi
from rapid7vmconsole.api.policy_override_api import PolicyOverrideApi
from rapid7vmconsole.api.remediation_api import RemediationApi
from rapid7vmconsole.api.report_api import ReportApi
from rapid7vmconsole.api.root_api import RootApi
from rapid7vmconsole.api.scan_api import ScanApi
from rapid7vmconsole.api.scan_engine_api import ScanEngineApi
from rapid7vmconsole.api.scan_template_api import ScanTemplateApi
from rapid7vmconsole.api.site_api import SiteApi
from rapid7vmconsole.api.tag_api import TagApi
from rapid7vmconsole.api.user_api import UserApi
from rapid7vmconsole.api.vulnerability_api import VulnerabilityApi
from rapid7vmconsole.api.vulnerability_check_api import VulnerabilityCheckApi
from rapid7vmconsole.api.vulnerability_exception_api import VulnerabilityExceptionApi
from rapid7vmconsole.api.vulnerability_result_api import VulnerabilityResultApi
# import ApiClient
from rapid7vmconsole.api_client import ApiClient
from rapid7vmconsole.configuration import Configuration
# import models into sdk package
from rapid7vmconsole.models.account import Account
from rapid7vmconsole.models.additional_information import AdditionalInformation
from rapid7vmconsole.models.address import Address
from rapid7vmconsole.models.adhoc_scan import AdhocScan
from rapid7vmconsole.models.advisory_link import AdvisoryLink
from rapid7vmconsole.models.agent import Agent
from rapid7vmconsole.models.alert import Alert
from rapid7vmconsole.models.assessment_result import AssessmentResult
from rapid7vmconsole.models.asset import Asset
from rapid7vmconsole.models.asset_create import AssetCreate
from rapid7vmconsole.models.asset_created_or_updated_reference import AssetCreatedOrUpdatedReference
from rapid7vmconsole.models.asset_group import AssetGroup
from rapid7vmconsole.models.asset_history import AssetHistory
from rapid7vmconsole.models.asset_policy import AssetPolicy
from rapid7vmconsole.models.asset_policy_assessment import AssetPolicyAssessment
from rapid7vmconsole.models.asset_policy_item import AssetPolicyItem
from rapid7vmconsole.models.asset_tag import AssetTag
from rapid7vmconsole.models.asset_vulnerabilities import AssetVulnerabilities
from rapid7vmconsole.models.authentication_settings import AuthenticationSettings
from rapid7vmconsole.models.authentication_source import AuthenticationSource
from rapid7vmconsole.models.available_report_format import AvailableReportFormat
from rapid7vmconsole.models.backups_size import BackupsSize
from rapid7vmconsole.models.bad_request_error import BadRequestError
from rapid7vmconsole.models.cpu_info import CPUInfo
from rapid7vmconsole.models.configuration import Configuration
from rapid7vmconsole.models.console_command_output import ConsoleCommandOutput
from rapid7vmconsole.models.content_description import ContentDescription
from rapid7vmconsole.models.create_authentication_source import CreateAuthenticationSource
from rapid7vmconsole.models.created_or_updated_reference import CreatedOrUpdatedReference
from rapid7vmconsole.models.created_reference import CreatedReference
from rapid7vmconsole.models.created_reference_asset_group_id_link import CreatedReferenceAssetGroupIDLink
from rapid7vmconsole.models.created_reference_credential_id_link import CreatedReferenceCredentialIDLink
from rapid7vmconsole.models.created_reference_discovery_query_id_link import CreatedReferenceDiscoveryQueryIDLink
from rapid7vmconsole.models.created_reference_engine_id_link import CreatedReferenceEngineIDLink
from rapid7vmconsole.models.created_reference_policy_override_id_link import CreatedReferencePolicyOverrideIDLink
from rapid7vmconsole.models.created_reference_scan_id_link import CreatedReferenceScanIDLink
from rapid7vmconsole.models.created_reference_scan_template_id_link import CreatedReferenceScanTemplateIDLink
from rapid7vmconsole.models.created_reference_user_id_link import CreatedReferenceUserIDLink
from rapid7vmconsole.models.created_reference_vulnerability_exception_id_link import CreatedReferenceVulnerabilityExceptionIDLink
from rapid7vmconsole.models.created_reference_vulnerability_validation_id_link import CreatedReferenceVulnerabilityValidationIDLink
from rapid7vmconsole.models.created_referenceint_link import CreatedReferenceintLink
from rapid7vmconsole.models.criterion import Criterion
from rapid7vmconsole.models.database import Database
from rapid7vmconsole.models.database_connection_settings import DatabaseConnectionSettings
from rapid7vmconsole.models.database_settings import DatabaseSettings
from rapid7vmconsole.models.database_size import DatabaseSize
from rapid7vmconsole.models.discovery_asset import DiscoveryAsset
from rapid7vmconsole.models.discovery_connection import DiscoveryConnection
from rapid7vmconsole.models.discovery_search_criteria import DiscoverySearchCriteria
from rapid7vmconsole.models.disk_free import DiskFree
from rapid7vmconsole.models.disk_info import DiskInfo
from rapid7vmconsole.models.disk_total import DiskTotal
from rapid7vmconsole.models.dynamic_site import DynamicSite
from rapid7vmconsole.models.engine_pool import EnginePool
from rapid7vmconsole.models.environment_properties import EnvironmentProperties
from rapid7vmconsole.models.error import Error
from rapid7vmconsole.models.exception_scope import ExceptionScope
from rapid7vmconsole.models.excluded_asset_groups import ExcludedAssetGroups
from rapid7vmconsole.models.excluded_scan_targets import ExcludedScanTargets
from rapid7vmconsole.models.exploit import Exploit
from rapid7vmconsole.models.exploit_source import ExploitSource
from rapid7vmconsole.models.exploit_source_link import ExploitSourceLink
from rapid7vmconsole.models.features import Features
from rapid7vmconsole.models.file import File
from rapid7vmconsole.models.fingerprint import Fingerprint
from rapid7vmconsole.models.global_scan import GlobalScan
from rapid7vmconsole.models.group_account import GroupAccount
from rapid7vmconsole.models.host_name import HostName
from rapid7vmconsole.models.i_meta_data import IMetaData
from rapid7vmconsole.models.included_asset_groups import IncludedAssetGroups
from rapid7vmconsole.models.included_scan_targets import IncludedScanTargets
from rapid7vmconsole.models.info import Info
from rapid7vmconsole.models.install_size import InstallSize
from rapid7vmconsole.models.installation_total_size import InstallationTotalSize
from rapid7vmconsole.models.internal_server_error import InternalServerError
from rapid7vmconsole.models.jvm_info import JVMInfo
from rapid7vmconsole.models.json_node import JsonNode
from rapid7vmconsole.models.license import License
from rapid7vmconsole.models.license_limits import LicenseLimits
from rapid7vmconsole.models.license_policy_scanning import LicensePolicyScanning
from rapid7vmconsole.models.license_policy_scanning_benchmarks import LicensePolicyScanningBenchmarks
from rapid7vmconsole.models.license_reporting import LicenseReporting
from rapid7vmconsole.models.license_scanning import LicenseScanning
from rapid7vmconsole.models.link import Link
from rapid7vmconsole.models.links import Links
from rapid7vmconsole.models.locale_preferences import LocalePreferences
from rapid7vmconsole.models.malware_kit import MalwareKit
from rapid7vmconsole.models.matched_solution import MatchedSolution
from rapid7vmconsole.models.memory_free import MemoryFree
from rapid7vmconsole.models.memory_info import MemoryInfo
from rapid7vmconsole.models.memory_total import MemoryTotal
from rapid7vmconsole.models.not_found_error import NotFoundError
from rapid7vmconsole.models.operating_system import OperatingSystem
from rapid7vmconsole.models.operating_system_cpe import OperatingSystemCpe
from rapid7vmconsole.models.pci import PCI
from rapid7vmconsole.models.page_info import PageInfo
from rapid7vmconsole.models.page_of_agent import PageOfAgent
from rapid7vmconsole.models.page_of_asset import PageOfAsset
from rapid7vmconsole.models.page_of_asset_group import PageOfAssetGroup
from rapid7vmconsole.models.page_of_asset_policy import PageOfAssetPolicy
from rapid7vmconsole.models.page_of_asset_policy_item import PageOfAssetPolicyItem
from rapid7vmconsole.models.page_of_discovery_connection import PageOfDiscoveryConnection
from rapid7vmconsole.models.page_of_exploit import PageOfExploit
from rapid7vmconsole.models.page_of_global_scan import PageOfGlobalScan
from rapid7vmconsole.models.page_of_malware_kit import PageOfMalwareKit
from rapid7vmconsole.models.page_of_operating_system import PageOfOperatingSystem
from rapid7vmconsole.models.page_of_policy import PageOfPolicy
from rapid7vmconsole.models.page_of_policy_asset import PageOfPolicyAsset
from rapid7vmconsole.models.page_of_policy_control import PageOfPolicyControl
from rapid7vmconsole.models.page_of_policy_group import PageOfPolicyGroup
from rapid7vmconsole.models.page_of_policy_item import PageOfPolicyItem
from rapid7vmconsole.models.page_of_policy_override import PageOfPolicyOverride
from rapid7vmconsole.models.page_of_policy_rule import PageOfPolicyRule
from rapid7vmconsole.models.page_of_report import PageOfReport
from rapid7vmconsole.models.page_of_scan import PageOfScan
from rapid7vmconsole.models.page_of_site import PageOfSite
from rapid7vmconsole.models.page_of_software import PageOfSoftware
from rapid7vmconsole.models.page_of_tag import PageOfTag
from rapid7vmconsole.models.page_of_user import PageOfUser
from rapid7vmconsole.models.page_of_vulnerability import PageOfVulnerability
from rapid7vmconsole.models.page_of_vulnerability_category import PageOfVulnerabilityCategory
from rapid7vmconsole.models.page_of_vulnerability_check import PageOfVulnerabilityCheck
from rapid7vmconsole.models.page_of_vulnerability_exception import PageOfVulnerabilityException
from rapid7vmconsole.models.page_of_vulnerability_finding import PageOfVulnerabilityFinding
from rapid7vmconsole.models.page_of_vulnerability_reference import PageOfVulnerabilityReference
from rapid7vmconsole.models.policy import Policy
from rapid7vmconsole.models.policy_asset import PolicyAsset
from rapid7vmconsole.models.policy_benchmark import PolicyBenchmark
from rapid7vmconsole.models.policy_control import PolicyControl
from rapid7vmconsole.models.policy_group import PolicyGroup
from rapid7vmconsole.models.policy_item import PolicyItem
from rapid7vmconsole.models.policy_metadata_resource import PolicyMetadataResource
from rapid7vmconsole.models.policy_override import PolicyOverride
from rapid7vmconsole.models.policy_override_reviewer import PolicyOverrideReviewer
from rapid7vmconsole.models.policy_override_scope import PolicyOverrideScope
from rapid7vmconsole.models.policy_override_submitter import PolicyOverrideSubmitter
from rapid7vmconsole.models.policy_rule import PolicyRule
from rapid7vmconsole.models.policy_rule_assessment_resource import PolicyRuleAssessmentResource
from rapid7vmconsole.models.policy_summary_resource import PolicySummaryResource
from rapid7vmconsole.models.privileges import Privileges
from rapid7vmconsole.models.range_resource import RangeResource
from rapid7vmconsole.models.reference_with_alert_id_link import ReferenceWithAlertIDLink
from rapid7vmconsole.models.reference_with_asset_id_link import ReferenceWithAssetIDLink
from rapid7vmconsole.models.reference_with_endpoint_id_link import ReferenceWithEndpointIDLink
from rapid7vmconsole.models.reference_with_engine_id_link import ReferenceWithEngineIDLink
from rapid7vmconsole.models.reference_with_report_id_link import ReferenceWithReportIDLink
from rapid7vmconsole.models.reference_with_scan_schedule_id_link import ReferenceWithScanScheduleIDLink
from rapid7vmconsole.models.reference_with_site_id_link import ReferenceWithSiteIDLink
from rapid7vmconsole.models.reference_with_tag_id_link import ReferenceWithTagIDLink
from rapid7vmconsole.models.reference_with_user_id_link import ReferenceWithUserIDLink
from rapid7vmconsole.models.references_with_asset_group_id_link import ReferencesWithAssetGroupIDLink
from rapid7vmconsole.models.references_with_asset_id_link import ReferencesWithAssetIDLink
from rapid7vmconsole.models.references_with_engine_id_link import ReferencesWithEngineIDLink
from rapid7vmconsole.models.references_with_reference_with_endpoint_id_link_service_link import ReferencesWithReferenceWithEndpointIDLinkServiceLink
from rapid7vmconsole.models.references_with_site_id_link import ReferencesWithSiteIDLink
from rapid7vmconsole.models.references_with_solution_natural_id_link import ReferencesWithSolutionNaturalIDLink
from rapid7vmconsole.models.references_with_tag_id_link import ReferencesWithTagIDLink
from rapid7vmconsole.models.references_with_user_id_link import ReferencesWithUserIDLink
from rapid7vmconsole.models.references_with_vulnerability_check_id_link import ReferencesWithVulnerabilityCheckIDLink
from rapid7vmconsole.models.references_with_vulnerability_check_type_id_link import ReferencesWithVulnerabilityCheckTypeIDLink
from rapid7vmconsole.models.references_with_vulnerability_natural_id_link import ReferencesWithVulnerabilityNaturalIDLink
from rapid7vmconsole.models.references_with_web_application_id_link import ReferencesWithWebApplicationIDLink
from rapid7vmconsole.models.remediation_resource import RemediationResource
from rapid7vmconsole.models.repeat import Repeat
from rapid7vmconsole.models.report import Report
from rapid7vmconsole.models.report_config_category_filters import ReportConfigCategoryFilters
from rapid7vmconsole.models.report_config_database_credentials_resource import ReportConfigDatabaseCredentialsResource
from rapid7vmconsole.models.report_config_database_resource import ReportConfigDatabaseResource
from rapid7vmconsole.models.report_config_filters_resource import ReportConfigFiltersResource
from rapid7vmconsole.models.report_config_scope_resource import ReportConfigScopeResource
from rapid7vmconsole.models.report_email import ReportEmail
from rapid7vmconsole.models.report_email_smtp import ReportEmailSmtp
from rapid7vmconsole.models.report_filters import ReportFilters
from rapid7vmconsole.models.report_frequency import ReportFrequency
from rapid7vmconsole.models.report_instance import ReportInstance
from rapid7vmconsole.models.report_repeat import ReportRepeat
from rapid7vmconsole.models.report_scope import ReportScope
from rapid7vmconsole.models.report_size import ReportSize
from rapid7vmconsole.models.report_storage import ReportStorage
from rapid7vmconsole.models.report_template import ReportTemplate
from rapid7vmconsole.models.resources_alert import ResourcesAlert
from rapid7vmconsole.models.resources_asset_group import ResourcesAssetGroup
from rapid7vmconsole.models.resources_asset_tag import ResourcesAssetTag
from rapid7vmconsole.models.resources_authentication_source import ResourcesAuthenticationSource
from rapid7vmconsole.models.resources_available_report_format import ResourcesAvailableReportFormat
from rapid7vmconsole.models.resources_configuration import ResourcesConfiguration
from rapid7vmconsole.models.resources_database import ResourcesDatabase
from rapid7vmconsole.models.resources_discovery_asset import ResourcesDiscoveryAsset
from rapid7vmconsole.models.resources_engine_pool import ResourcesEnginePool
from rapid7vmconsole.models.resources_file import ResourcesFile
from rapid7vmconsole.models.resources_group_account import ResourcesGroupAccount
from rapid7vmconsole.models.resources_matched_solution import ResourcesMatchedSolution
from rapid7vmconsole.models.resources_policy_override import ResourcesPolicyOverride
from rapid7vmconsole.models.resources_report_instance import ResourcesReportInstance
from rapid7vmconsole.models.resources_report_template import ResourcesReportTemplate
from rapid7vmconsole.models.resources_role import ResourcesRole
from rapid7vmconsole.models.resources_scan_engine import ResourcesScanEngine
from rapid7vmconsole.models.resources_scan_schedule import ResourcesScanSchedule
from rapid7vmconsole.models.resources_scan_template import ResourcesScanTemplate
from rapid7vmconsole.models.resources_shared_credential import ResourcesSharedCredential
from rapid7vmconsole.models.resources_site_credential import ResourcesSiteCredential
from rapid7vmconsole.models.resources_site_shared_credential import ResourcesSiteSharedCredential
from rapid7vmconsole.models.resources_smtp_alert import ResourcesSmtpAlert
from rapid7vmconsole.models.resources_snmp_alert import ResourcesSnmpAlert
from rapid7vmconsole.models.resources_software import ResourcesSoftware
from rapid7vmconsole.models.resources_solution import ResourcesSolution
from rapid7vmconsole.models.resources_sonar_query import ResourcesSonarQuery
from rapid7vmconsole.models.resources_syslog_alert import ResourcesSyslogAlert
from rapid7vmconsole.models.resources_tag import ResourcesTag
from rapid7vmconsole.models.resources_user import ResourcesUser
from rapid7vmconsole.models.resources_user_account import ResourcesUserAccount
from rapid7vmconsole.models.resources_vulnerability_validation_resource import ResourcesVulnerabilityValidationResource
from rapid7vmconsole.models.resources_web_form_authentication import ResourcesWebFormAuthentication
from rapid7vmconsole.models.resources_web_header_authentication import ResourcesWebHeaderAuthentication
from rapid7vmconsole.models.review import Review
from rapid7vmconsole.models.risk_modifier_settings import RiskModifierSettings
from rapid7vmconsole.models.risk_settings import RiskSettings
from rapid7vmconsole.models.risk_trend_all_assets_resource import RiskTrendAllAssetsResource
from rapid7vmconsole.models.risk_trend_resource import RiskTrendResource
from rapid7vmconsole.models.role import Role
from rapid7vmconsole.models.scan import Scan
from rapid7vmconsole.models.scan_engine import ScanEngine
from rapid7vmconsole.models.scan_events import ScanEvents
from rapid7vmconsole.models.scan_schedule import ScanSchedule
from rapid7vmconsole.models.scan_scope import ScanScope
from rapid7vmconsole.models.scan_settings import ScanSettings
from rapid7vmconsole.models.scan_size import ScanSize
from rapid7vmconsole.models.scan_targets_resource import ScanTargetsResource
from rapid7vmconsole.models.scan_template import ScanTemplate
from rapid7vmconsole.models.scan_template_asset_discovery import ScanTemplateAssetDiscovery
from rapid7vmconsole.models.scan_template_database import ScanTemplateDatabase
from rapid7vmconsole.models.scan_template_discovery import ScanTemplateDiscovery
from rapid7vmconsole.models.scan_template_discovery_performance import ScanTemplateDiscoveryPerformance
from rapid7vmconsole.models.scan_template_discovery_performance_packets_rate import ScanTemplateDiscoveryPerformancePacketsRate
from rapid7vmconsole.models.scan_template_discovery_performance_parallelism import ScanTemplateDiscoveryPerformanceParallelism
from rapid7vmconsole.models.scan_template_discovery_performance_scan_delay import ScanTemplateDiscoveryPerformanceScanDelay
from rapid7vmconsole.models.scan_template_discovery_performance_timeout import ScanTemplateDiscoveryPerformanceTimeout
from rapid7vmconsole.models.scan_template_service_discovery import ScanTemplateServiceDiscovery
from rapid7vmconsole.models.scan_template_service_discovery_tcp import ScanTemplateServiceDiscoveryTcp
from rapid7vmconsole.models.scan_template_service_discovery_udp import ScanTemplateServiceDiscoveryUdp
from rapid7vmconsole.models.scan_template_vulnerability_check_categories import ScanTemplateVulnerabilityCheckCategories
from rapid7vmconsole.models.scan_template_vulnerability_check_individual import ScanTemplateVulnerabilityCheckIndividual
from rapid7vmconsole.models.scan_template_vulnerability_checks import ScanTemplateVulnerabilityChecks
from rapid7vmconsole.models.scan_template_web_spider import ScanTemplateWebSpider
from rapid7vmconsole.models.scan_template_web_spider_paths import ScanTemplateWebSpiderPaths
from rapid7vmconsole.models.scan_template_web_spider_patterns import ScanTemplateWebSpiderPatterns
from rapid7vmconsole.models.scan_template_web_spider_performance import ScanTemplateWebSpiderPerformance
from rapid7vmconsole.models.scheduled_scan_targets import ScheduledScanTargets
from rapid7vmconsole.models.search_criteria import SearchCriteria
from rapid7vmconsole.models.service import Service
from rapid7vmconsole.models.service_link import ServiceLink
from rapid7vmconsole.models.service_unavailable_error import ServiceUnavailableError
from rapid7vmconsole.models.settings import Settings
from rapid7vmconsole.models.shared_credential import SharedCredential
from rapid7vmconsole.models.shared_credential_account import SharedCredentialAccount
from rapid7vmconsole.models.site import Site
from rapid7vmconsole.models.site_create_resource import SiteCreateResource
from rapid7vmconsole.models.site_credential import SiteCredential
from rapid7vmconsole.models.site_discovery_connection import SiteDiscoveryConnection
from rapid7vmconsole.models.site_organization import SiteOrganization
from rapid7vmconsole.models.site_shared_credential import SiteSharedCredential
from rapid7vmconsole.models.site_update_resource import SiteUpdateResource
from rapid7vmconsole.models.smtp_alert import SmtpAlert
from rapid7vmconsole.models.smtp_settings import SmtpSettings
from rapid7vmconsole.models.snmp_alert import SnmpAlert
from rapid7vmconsole.models.software import Software
from rapid7vmconsole.models.software_cpe import SoftwareCpe
from rapid7vmconsole.models.solution import Solution
from rapid7vmconsole.models.solution_match import SolutionMatch
from rapid7vmconsole.models.sonar_criteria import SonarCriteria
from rapid7vmconsole.models.sonar_criterion import SonarCriterion
from rapid7vmconsole.models.sonar_query import SonarQuery
from rapid7vmconsole.models.static_site import StaticSite
from rapid7vmconsole.models.steps import Steps
from rapid7vmconsole.models.submission import Submission
from rapid7vmconsole.models.summary import Summary
from rapid7vmconsole.models.swagger_discovery_search_criteria_filter import SwaggerDiscoverySearchCriteriaFilter
from rapid7vmconsole.models.swagger_search_criteria_filter import SwaggerSearchCriteriaFilter
from rapid7vmconsole.models.syslog_alert import SyslogAlert
from rapid7vmconsole.models.tag import Tag
from rapid7vmconsole.models.tag_asset_source import TagAssetSource
from rapid7vmconsole.models.tag_link import TagLink
from rapid7vmconsole.models.tagged_asset_references import TaggedAssetReferences
from rapid7vmconsole.models.telnet import Telnet
from rapid7vmconsole.models.token_resource import TokenResource
from rapid7vmconsole.models.unauthorized_error import UnauthorizedError
from rapid7vmconsole.models.unique_id import UniqueId
from rapid7vmconsole.models.update_id import UpdateId
from rapid7vmconsole.models.update_info import UpdateInfo
from rapid7vmconsole.models.update_settings import UpdateSettings
from rapid7vmconsole.models.user import User
from rapid7vmconsole.models.user_account import UserAccount
from rapid7vmconsole.models.user_create_role import UserCreateRole
from rapid7vmconsole.models.user_edit import UserEdit
from rapid7vmconsole.models.user_role import UserRole
from rapid7vmconsole.models.version_info import VersionInfo
from rapid7vmconsole.models.vulnerabilities import Vulnerabilities
from rapid7vmconsole.models.vulnerability import Vulnerability
from rapid7vmconsole.models.vulnerability_category import VulnerabilityCategory
from rapid7vmconsole.models.vulnerability_check import VulnerabilityCheck
from rapid7vmconsole.models.vulnerability_check_type import VulnerabilityCheckType
from rapid7vmconsole.models.vulnerability_cvss import VulnerabilityCvss
from rapid7vmconsole.models.vulnerability_cvss_v2 import VulnerabilityCvssV2
from rapid7vmconsole.models.vulnerability_cvss_v3 import VulnerabilityCvssV3
from rapid7vmconsole.models.vulnerability_events import VulnerabilityEvents
from rapid7vmconsole.models.vulnerability_exception import VulnerabilityException
from rapid7vmconsole.models.vulnerability_finding import VulnerabilityFinding
from rapid7vmconsole.models.vulnerability_reference import VulnerabilityReference
from rapid7vmconsole.models.vulnerability_validation_resource import VulnerabilityValidationResource
from rapid7vmconsole.models.vulnerability_validation_source import VulnerabilityValidationSource
from rapid7vmconsole.models.web_application import WebApplication
from rapid7vmconsole.models.web_form_authentication import WebFormAuthentication
from rapid7vmconsole.models.web_header_authentication import WebHeaderAuthentication
from rapid7vmconsole.models.web_page import WebPage
from rapid7vmconsole.models.web_settings import WebSettings
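These re-exports are swagger-codegen output: every generated model becomes importable from the package root (`from rapid7vmconsole import Tag` rather than the full `rapid7vmconsole.models.tag` path). A minimal sketch of the shape such generated model classes typically share — the class body and field names here are illustrative stand-ins, not the real `Tag` schema:

```python
# Hypothetical stand-in for a swagger-codegen model such as
# rapid7vmconsole.models.tag.Tag; the field names are illustrative only.
class Tag:
    # Generated models carry a type map used for (de)serialisation.
    swagger_types = {"id": "int", "name": "str", "type": "str"}

    def __init__(self, id=None, name=None, type=None):
        self.id = id
        self.name = name
        self.type = type

    def to_dict(self):
        # Serialise only the declared attributes, as the generated code does.
        return {attr: getattr(self, attr) for attr in self.swagger_types}


tag = Tag(id=42, name="production", type="custom")
print(tag.to_dict())
```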
# --- paths.py | kamperh/bucktsong_eskmeans | MIT ---
buckeye_datadir = "/home/kamperh/endgame/datasets/buckeye/"
xitsonga_datadir = "/home/kamperh/endgame/datasets/zerospeech2015/xitsonga_wavs/"
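The two constants above are absolute dataset roots consumed elsewhere in the repository. A sketch of how a consumer might join such a root to per-utterance filenames — `wav_path` and the utterance-ID layout are hypothetical, not taken from the repo:

```python
from pathlib import PurePosixPath

buckeye_datadir = "/home/kamperh/endgame/datasets/buckeye/"

def wav_path(datadir, utt_id):
    # PurePosixPath absorbs the trailing slash on the configured root.
    return PurePosixPath(datadir) / f"{utt_id}.wav"

print(wav_path(buckeye_datadir, "s0101a"))
# /home/kamperh/endgame/datasets/buckeye/s0101a.wav
```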
# --- pytype/tools/merge_pyi/test_data/decoration.py | ashwinprasadme/pytype | Apache-2.0 ---
def decoration(func):
return func
@decoration
def f1(a):
pass
| 10.142857 | 21 | 0.661972 | 10 | 71 | 4.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.239437 | 71 | 6 | 22 | 11.833333 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.2 | 0 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
# --- babycenter/settings/__init__.py | praekeltfoundation/molo-babycenter | BSD-2-Clause ---
from .dev import *  # noqa
# --- tests/034.py | fangyuchen86/mini-pysonar | BSD-3-Clause ---
class A:
a = 1
class B:
a = A()
class C:
a = B()
o = C()
def f(x):
return x.a
f(o)
def g(x):
return x.a
g(g(o))
def h(x):
return x.a
h(h(h(o)))
| 7.769231 | 15 | 0.366337 | 41 | 202 | 1.804878 | 0.292683 | 0.283784 | 0.324324 | 0.364865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.450495 | 202 | 25 | 16 | 8.08 | 0.657658 | 0 | 0 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0 | 0.1875 | 0.75 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
# --- tests/unittests/test_config.py | movermeyer/SeqFindR | ECL-2.0 ---
from context import config
# --- server/lib/drivers/BLERemote/__init__.py | frdfsnlght/RemoteControl | MIT ---
| 10.333333 | 29 | 0.806452 | 4 | 31 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 2 | 30 | 15.5 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1dd72c6955315098aceb117945ad921d0d15b527 | 156 | py | Python | madry_files/__init__.py | anguyen8/sam | 6f9525adacb65b4f5e00bbea23a1e37c9008db27 | [
"MIT"
] | 41 | 2020-03-06T05:42:28.000Z | 2022-03-23T08:23:40.000Z | madry_files/__init__.py | anguyen8/sam | 6f9525adacb65b4f5e00bbea23a1e37c9008db27 | [
"MIT"
] | 11 | 2020-03-09T14:04:27.000Z | 2022-03-12T00:17:41.000Z | madry_files/__init__.py | anguyen8/sam | 6f9525adacb65b4f5e00bbea23a1e37c9008db27 | [
"MIT"
] | 4 | 2020-03-06T06:07:12.000Z | 2020-07-30T02:48:11.000Z | from .resnet import *
from .vgg import *
from .wide_resnet import wide_resnet50
from .leaky_resnet import *
from .alexnet import *
from .googlenet import *
| 22.285714 | 38 | 0.775641 | 22 | 156 | 5.363636 | 0.409091 | 0.338983 | 0.271186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.153846 | 156 | 6 | 39 | 26 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
69c0d704a6b42fc6ebc2ac508e31e90765da0250 | 23,744 | py | Python | tests/integration/routes/test_confirmation_email.py | ONSdigital/eq-questionnaire-runner | cac38e81714b03e3e85c56f9098adc01e7ccc703 | [
"MIT"
] | 3 | 2020-09-28T13:21:21.000Z | 2021-05-05T14:14:51.000Z | tests/integration/routes/test_confirmation_email.py | ONSdigital/eq-questionnaire-runner | cac38e81714b03e3e85c56f9098adc01e7ccc703 | [
"MIT"
] | 402 | 2019-11-06T17:23:03.000Z | 2022-03-31T16:03:35.000Z | tests/integration/routes/test_confirmation_email.py | ONSdigital/eq-questionnaire-runner | cac38e81714b03e3e85c56f9098adc01e7ccc703 | [
"MIT"
] | 10 | 2020-03-03T14:23:27.000Z | 2022-01-31T12:21:21.000Z | from unittest.mock import MagicMock
from app import settings
from app.cloud_tasks.exceptions import CloudTaskCreationFailed
from tests.integration.integration_test_case import IntegrationTestCase
class TestEmailConfirmation(
IntegrationTestCase
): # pylint: disable=too-few-public-methods
def setUp(self):
settings.CONFIRMATION_EMAIL_LIMIT = 2
super().setUp()
def _launch_and_complete_questionnaire(self):
self.launchSurvey("test_confirmation_email")
self.post({"answer_id": "Yes"})
self.post()
def test_bad_signature_confirmation_email_sent(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# When I try to view the sent page with an incorrect email hash
self.get("/submitted/confirmation-email/sent?email=bad-signature")
# Then a BadRequest error is returned
self.assertBadRequest()
self.assertEqualPageTitle(
"An error has occurred - Confirmation email test schema"
)
def test_missing_email_param_confirmation_email_sent(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# When I try to view the sent page with no email param
self.get("/submitted/confirmation-email/sent")
# Then a BadRequest error is returned
self.assertBadRequest()
def test_bad_signature_confirm_email(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
# When I try to view the confirm email page with an incorrect email hash
self.get("/submitted/confirmation-email/confirm?email=bad-signature")
# Then a BadRequest error is returned
self.assertBadRequest()
self.assertEqualPageTitle(
"An error has occurred - Confirmation email test schema"
)
def test_missing_email_param_confirm_email(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
# When I try to view the confirm email page with no email param
self.get("/submitted/confirmation-email/confirm")
# Then a BadRequest error is returned
self.assertBadRequest()
def test_confirm_email_with_confirmation_email_not_set(self):
# Given I launch the test_thank_you_census_individual questionnaire, which doesn't have email confirmation set in the schema
self.launchSurvey("test_thank_you_census_individual")
self.post()
self.post()
# When I try to view the confirm email page
self.get("/submitted/confirmation-email/confirm?email=email-hash")
# Then I get routed to the thank you page
self.assertInUrl("/submitted/thank-you/")
self.assertNotInBody("Is this email address correct?")
def test_confirmation_email_send_with_confirmation_email_not_set(self):
# Given I launch the test_thank_you_census_individual questionnaire, which doesn't have email confirmation set in the schema
self.launchSurvey("test_thank_you_census_individual")
self.post()
self.post()
# When I try to view the confirmation email send page
self.get("/submitted/confirmation-email/send")
# Then I get routed to the thank you page
self.assertInUrl("/submitted/thank-you/")
self.assertNotInBody("Send a confirmation email")
def test_bad_signature_confirmation_email_send(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I try to view the confirm email page with an incorrect email hash
self.get("/submitted/confirmation-email/send?email=bad-signature")
# Then a BadRequest error is returned
self.assertBadRequest()
self.assertEqualPageTitle(
"An error has occurred - Confirmation email test schema"
)
def test_thank_you_page_get_not_allowed(self):
# Given I launch the test_confirmation_email questionnaire
self.launchSurvey("test_confirmation_email")
# When I try to view the thank you page without completing the questionnaire
self.get("/submitted/thank-you/")
# Then I get shown a 404 error
self.assertStatusNotFound()
def test_thank_you_page_post_not_allowed(self):
# Given I launch the test_confirmation_email questionnaire
self.launchSurvey("test_confirmation_email")
# When I try to POST to the thank you page without completing the questionnaire
self.post(url="/submitted/thank-you/")
# Then I get shown a 404 error
self.assertStatusNotFound()
def test_email_confirmation_page_get_not_allowed(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I try to view the confirmation email sent page without sending an email
self.get("/submitted/confirmation-email/sent")
# Then I get shown a 404 error
self.assertStatusNotFound()
def test_census_themed_schema_with_confirmation_email_true(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I am on the thank you page, Then there is an confirmation email form
self.assertInUrl("/submitted/thank-you/")
self.assertInBody("Get confirmation email")
self.assertEqualPageTitle(
"Thank you for completing the census - Confirmation email test schema"
)
def test_census_themed_schema_with_confirmation_email_not_set(self):
# Given I launch the test_thank_you_census_individual questionnaire, which doesn't have email confirmation set in the schema
self.launchSurvey("test_thank_you_census_individual")
# When I complete the questionnaire
self.post()
self.post()
# Then on the thank you page I don't get a confirmation email form
self.assertInUrl("/submitted/thank-you/")
self.assertNotInBody("Get confirmation email")
def test_default_themed_schema_with_confirmation_email_not_set(self):
# Given I launch the test_checkbox questionnaire, which doesn't have email confirmation set in the schema
self.launchSurvey("test_checkbox")
# When I complete the questionnaire
self.post({"mandatory-checkbox-answer": "Tuna"})
self.post({"non-mandatory-checkbox-answer": "Pineapple"})
self.post({"single-checkbox-answer": "Estimate"})
self.post()
# Then on the thank you page I don't get a confirmation email form
self.assertInUrl("/submitted/thank-you/")
self.assertNotInBody("Get confirmation email")
def test_confirm_email_missing_answer(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I enter a valid email but don't provide an answer on the confirm email page
self.post({"email": "email@example.com"})
self.post()
# Then I get an error on the confirm email page
self.assertEqualPageTitle(
"Error: Confirm your email address - Confirmation email test schema"
)
self.assertInBody("There is a problem with your answer")
self.assertInBody("Select an answer")
def test_confirm_email_no(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I enter a valid email but answer no on the confirm email page
self.post({"email": "email@example.com"})
self.post({"confirm-email": "No, I need to change it"})
# Then I get redirect to the confirmation email send page with the email pre-filled
self.assertInUrl("/submitted/confirmation-email/send")
self.assertInBody("Send a confirmation email")
self.assertInBody("email@example.com")
def test_confirm_email_yes(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I enter a valid email submit and answer yes on the confirm email page
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# Then I get confirmation that the email has been sent
self.assertInUrl("confirmation-email/sent")
self.assertInBody(
'Make sure you <a href="/sign-out">leave this page</a> or close your browser if using a shared device'
)
def test_confirm_email_confirmation_email_limit_reached(
self,
):
# Given I launch and complete the test_confirmation_email questionnaire and reach the email confirmation limit
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
self.get("/submitted/confirmation-email/send/")
self.post({"email": "email@example.com"})
confirm_email_url = self.last_url
self.post({"confirm-email": "Yes, send the confirmation email"})
# When I try to access the confirm email page
self.get(confirm_email_url)
# Then I get routed to the thank you page
self.assertInUrl("/submitted/thank-you/")
def test_thank_you_page_confirmation_email_white_space(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I enter a valid email which has leading and trailing whitespace
self.post({"email": " email@example.com "})
self.post({"confirm-email": "Yes, send the confirmation email"})
# Then I get confirmation that the email has been sent
self.assertInUrl("confirmation-email/sent")
self.assertInBody("A confirmation email has been sent to email@example.com")
def test_thank_you_missing_email(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I fail to enter an email and submit
self.post()
# Then I get an error message on the thank you page
self.assertInUrl("/submitted/thank-you/")
self.assertInBody("There is a problem with this page")
self.assertInBody("Enter an email address")
self.assertEqualPageTitle(
"Error: Thank you for completing the census - Confirmation email test schema"
)
def test_thank_you_incorrect_email_format(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I fail to enter an email in the correct format and submit
self.post({"email": "incorrect-format"})
# Then I get an error message on the thank you page
self.assertInUrl("thank-you")
self.assertInBody("There is a problem with this page")
self.assertInBody(
"Enter an email address in a valid format, for example name@example.com"
)
self.assertEqualPageTitle(
"Error: Thank you for completing the census - Confirmation email test schema"
)
def test_thank_you_email_invalid_tld(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I enter an email with an invalid TLD and submit
self.post({"email": "a@a.a"})
# Then I get an error message on the thank you page
self.assertInUrl("thank-you")
self.assertInBody("There is a problem with this page")
self.assertInBody(
"Enter an email address in a valid format, for example name@example.com"
)
def test_thank_you_email_invalid_and_invalid_tld(self):
# Given I launch and complete the test_confirmation_email questionnaire
self._launch_and_complete_questionnaire()
# When I enter an invalid email with an invalid TLD and submit
self.post({"email": "a@@a.a"})
# Then I get a single error message on the thank you page
self.assertInUrl("thank-you")
self.assertInBody("There is a problem with this page")
self.assertInBody(
"Enter an email address in a valid format, for example name@example.com"
)
self.assertNotInBody('data-qa="error-link-2"')
def test_confirmation_email_page_missing_email(self):
# Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
# When I go to the confirmation email page and submit, but fail to enter an email
self.get("/submitted/confirmation-email/send/")
self.post()
# Then I get an error message on the confirmation email page
self.assertInUrl("/submitted/confirmation-email/send/")
self.assertInBody("There is a problem with this page")
self.assertInBody("Enter an email address")
self.assertEqualPageTitle(
"Error: Confirmation email - Confirmation email test schema"
)
def test_confirmation_email_page_incorrect_email_format(self):
# Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
# When I go to the confirmation email page and submit, but fail to enter an email in the correct format
self.get("/submitted/confirmation-email/send/")
self.post({"email": "invalid-format"})
# Then I get an error message on the confirmation email page
self.assertInUrl("/submitted/confirmation-email/send/")
self.assertInBody("There is a problem with this page")
self.assertInBody(
"Enter an email address in a valid format, for example name@example.com"
)
self.assertEqualPageTitle(
"Error: Confirmation email - Confirmation email test schema"
)
def test_confirmation_email_page(self):
# Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# When I go to the confirmation email page and submit with a valid email
self.get("/submitted/confirmation-email/send/")
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# Then I get confirmation that the email has been sent
self.assertInUrl("confirmation-email/sent")
self.assertInBody("A confirmation email has been sent to email@example.com")
def test_confirmation_email_page_white_space(self):
# Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# When I go to the confirmation email page and submit with a valid email which has leading and trailing whitespace
self.get("/submitted/confirmation-email/send/")
self.post({"email": " email@example.com "})
self.post({"confirm-email": "Yes, send the confirmation email"})
# Then I get confirmation that the email has been sent
self.assertInUrl("confirmation-email/sent")
self.assertInBody("A confirmation email has been sent to email@example.com")
def test_send_another_email_link_is_not_present_on_thank_you_page_when_confirmation_limit_hit(
self,
):
# Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
self._launch_and_complete_questionnaire()
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# When I reach the limit of the number of confirmation emails able to be sent
self.get("/submitted/thank-you/")
self.post({"email": "email@example.com"})
self.post({"confirm-email": "Yes, send the confirmation email"})
# Then I no longer see the option to send a confirmation email
self.get("/submitted/thank-you/")
self.assertInUrl("/submitted/thank-you/")
self.assertNotInBody("Get confirmation email")
    def test_send_another_email_link_is_not_present_on_confirmation_sent_page_when_confirmation_limit_hit(
        self,
    ):
        # Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})

        # When I reach the limit of the number of confirmation emails able to be sent
        self.get("/submitted/confirmation-email/send/")
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})

        # Then I no longer see the option to send another confirmation email
        self.assertInUrl("confirmation-email/sent")
        self.assertNotInBody("send another confirmation email.")

    def test_visiting_send_another_email_page_redirects_to_thank_you_page_when_limit_exceeded(
        self,
    ):
        # Given I launch and complete the test_confirmation_email questionnaire and have reached the email limit
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})
        self.get("/submitted/confirmation-email/send/")
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})

        # When I try to access the send another email page
        self.get("/submitted/confirmation-email/send/")

        # Then I should be redirected to the thank you page
        self.assertInUrl("/submitted/thank-you/")
        self.assertNotInBody("Get confirmation email")

    def test_submitting_email_on_thank_you_page_reloads_the_page_when_limit_exceeded(
        self,
    ):
        # Given I launch and complete the test_confirmation_email questionnaire and have reached the email limit
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})
        self.assertInUrl("confirmation-email/sent")

        # Load the thank you page with the email form
        self.get("/submitted/thank-you/")

        # Set the new email limit so the limit will be reached on the next request
        self._application.config["CONFIRMATION_EMAIL_LIMIT"] = 1

        # When I try to submit another email
        self.post({"email": "email@example.com"})

        # Then the thank you page should be reloaded without the email form
        self.assertInUrl("/submitted/thank-you/")
        self.assertNotInBody("Get confirmation email")

    def test_submitting_email_on_send_another_email_page_redirect_to_thank_you_when_limit_exceeded(
        self,
    ):
        # Given I launch and complete the test_confirmation_email questionnaire and have reached the email limit
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})
        self.assertInUrl("confirmation-email/sent")

        # Load the send another email page with the email form
        self.get("/submitted/confirmation-email/send/")

        # Set the new email limit so the limit will be reached on the next request
        self._application.config["CONFIRMATION_EMAIL_LIMIT"] = 1

        # When I try to submit another email
        self.post({"email": "email@example.com"})

        # I should be redirected to the thank you page
        self.assertInUrl("/submitted/thank-you/")
        self.assertNotInBody("Get confirmation email")

    def test_500_publish_failed(self):
        publisher = self._application.eq["cloud_tasks"]
        publisher.create_task = MagicMock(side_effect=CloudTaskCreationFailed)

        # Given I launch and complete the test_confirmation_email questionnaire and submit with a valid email from the thank you page
        self._launch_and_complete_questionnaire()

        # When the email fulfilment request fails to publish
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})

        # Then an error page is shown
        self.assertEqualPageTitle(
            "Sorry, there was a problem sending the confirmation email - Confirmation email test schema"
        )
        self.assertInSelector(self.last_url, "p[data-qa=retry]")

    def test_attempting_to_deserialize_email_hash_from_different_session_fails(self):
        # Given I request a confirmation to my email address
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})

        # When I use the email hash in a different session
        query_params = self.last_url.split("?")[-1]
        self.exit()
        self._launch_and_complete_questionnaire()
        self.post({"email": "new-email@new-example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})
        self.get(f"/submitted/confirmation-email/sent?{query_params}")

        # Then a BadRequest error is returned
        self.assertBadRequest()
        self.assertEqualPageTitle(
            "An error has occurred - Confirmation email test schema"
        )

    def test_head_request_on_email_confirmation(self):
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.head(self.last_url)
        self.assertStatusOK()

    def test_head_request_on_email_send(self):
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "No, I need to change it"})
        self.head(self.last_url)
        self.assertStatusOK()

    def test_head_request_on_email_sent(self):
        self._launch_and_complete_questionnaire()
        self.post({"email": "email@example.com"})
        self.post({"confirm-email": "Yes, send the confirmation email"})
        self.head(self.last_url)
        self.assertStatusOK()
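The tests above all hinge on a per-submission send counter compared against `CONFIRMATION_EMAIL_LIMIT`. As a minimal stdlib sketch of that guard (the class and method names here are illustrative, not the app's actual implementation):

```python
class ConfirmationEmailLimiter:
    """Tracks confirmation emails sent for one submission session."""

    def __init__(self, limit):
        self.limit = limit
        self.sent = 0

    def can_send(self):
        # Mirrors the check that hides the "send another" link / form.
        return self.sent < self.limit

    def record_send(self):
        if not self.can_send():
            raise RuntimeError("confirmation email limit reached")
        self.sent += 1


limiter = ConfirmationEmailLimiter(limit=2)
limiter.record_send()
limiter.record_send()
print(limiter.can_send())  # False: further send pages redirect to thank-you
```

Lowering the limit mid-session, as two of the tests do via `self._application.config`, is equivalent to shrinking `limit` after some sends have been recorded.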
# src/__init__.py (repo: hcji/ssw, license: BSD-2-Clause)
from .sswobj import *
# rlalgorithms_tf2/trpo/__init__.py (repo: unsignedrant/rlalgorithms-tf2, license: MIT)
from rlalgorithms_tf2.trpo.cli import cli_args
# TADV/models/resnet.py (repo: jfc43/eval-transductive-robustness, license: Apache-2.0)
"""
ResNet.
Taken from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py.
"""
import torch
import utils.torch
from .classifier import Classifier
from .resnet_block import ResNetBlock
import torch.nn as nn


class ResNet(Classifier):
    """
    Simple classifier.
    """

    def __init__(self, N_class, resolution=(1, 32, 32), blocks=[3, 3, 3], normalization=True, channels=64, **kwargs):
        """
        Initialize classifier.

        :param N_class: number of classes to classify
        :type N_class: int
        :param resolution: resolution (assumed to be square)
        :type resolution: int
        :param blocks: layers per block
        :type blocks: [int]
        :param normalization: normalization to use
        :type normalization: None or torch.nn.Module
        :param channels: channels to start with
        :type channels: int
        """

        super(ResNet, self).__init__(N_class, resolution, **kwargs)

        self.blocks = blocks
        """ ([int]) Blocks. """

        self.channels = channels
        """ (int) Channels. """

        self.normalization = normalization
        """ (callable) Normalization. """

        self.inplace = False
        """ (bool) Inplace. """

        conv1 = torch.nn.Conv2d(self.resolution[0], self.channels, kernel_size=3, stride=1, padding=1, bias=False)
        torch.nn.init.kaiming_normal_(conv1.weight, mode='fan_out', nonlinearity='relu')
        self.append_layer('conv1', conv1)

        if self.normalization:
            norm1 = torch.nn.BatchNorm2d(self.channels)
            torch.nn.init.constant_(norm1.weight, 1)
            torch.nn.init.constant_(norm1.bias, 0)
            self.append_layer('norm1', norm1)

        relu = torch.nn.ReLU(inplace=self.inplace)
        self.append_layer('relu1', relu)

        downsampled = 1
        for i in range(len(self.blocks)):
            in_planes = (2 ** max(0, i - 1)) * self.channels
            out_planes = (2 ** i) * self.channels
            layers = self.blocks[i]
            stride = 2 if i > 0 else 1

            downsample = None
            if stride != 1 or in_planes != out_planes:
                conv = torch.nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
                torch.nn.init.kaiming_normal_(conv.weight, mode='fan_out', nonlinearity='relu')
                if self.normalization:
                    bn = torch.nn.BatchNorm2d(out_planes)
                    torch.nn.init.constant_(bn.weight, 1)
                    torch.nn.init.constant_(bn.bias, 0)
                    downsample = torch.nn.Sequential(*[conv, bn])
                else:
                    downsample = torch.nn.Sequential(*[conv])

            sequence = []
            sequence.append(ResNetBlock(in_planes, out_planes, stride=stride, downsample=downsample, normalization=self.normalization))
            for _ in range(1, layers):
                sequence.append(ResNetBlock(out_planes, out_planes, stride=1, downsample=None, normalization=self.normalization))

            self.append_layer('block%d' % i, torch.nn.Sequential(*sequence))
            downsampled *= stride

        representation = out_planes
        pool = torch.nn.AvgPool2d((self.resolution[1] // downsampled, self.resolution[2] // downsampled), stride=1)
        self.append_layer('avgpool', pool)

        view = utils.torch.View(-1, representation)
        self.append_layer('view', view)

        gain = torch.nn.init.calculate_gain('relu')
        logits = torch.nn.Linear(representation, self._N_output)
        torch.nn.init.kaiming_normal_(logits.weight, gain)
        torch.nn.init.constant_(logits.bias, 0)
        self.append_layer('logits', logits)


class ResNetTwoBranch(torch.nn.Module):
    """
    Simple classifier.
    """

    def __init__(self, N_class, resolution=(1, 32, 32), blocks=[3, 3, 3], normalization=True, channels=64, **kwargs):
        """
        Initialize classifier.

        :param N_class: number of classes to classify
        :type N_class: int
        :param resolution: resolution (assumed to be square)
        :type resolution: int
        :param blocks: layers per block
        :type blocks: [int]
        :param normalization: normalization to use
        :type normalization: None or torch.nn.Module
        :param channels: channels to start with
        :type channels: int
        """

        super(ResNetTwoBranch, self).__init__(**kwargs)

        self.N_class = N_class
        self.resolution = resolution

        self.blocks = blocks
        """ ([int]) Blocks. """

        self.channels = channels
        """ (int) Channels. """

        self.normalization = normalization
        """ (callable) Normalization. """

        self.inplace = False
        """ (bool) Inplace. """

        self.feature_layers = nn.Sequential()

        conv1 = torch.nn.Conv2d(self.resolution[0], self.channels, kernel_size=3, stride=1, padding=1, bias=False)
        torch.nn.init.kaiming_normal_(conv1.weight, mode='fan_out', nonlinearity='relu')
        self.feature_layers.add_module('conv1', conv1)

        if self.normalization:
            norm1 = torch.nn.BatchNorm2d(self.channels)
            torch.nn.init.constant_(norm1.weight, 1)
            torch.nn.init.constant_(norm1.bias, 0)
            self.feature_layers.add_module('norm1', norm1)

        relu = torch.nn.ReLU(inplace=self.inplace)
        self.feature_layers.add_module('relu1', relu)

        downsampled = 1
        for i in range(len(self.blocks)):
            in_planes = (2 ** max(0, i - 1)) * self.channels
            out_planes = (2 ** i) * self.channels
            layers = self.blocks[i]
            stride = 2 if i > 0 else 1

            downsample = None
            if stride != 1 or in_planes != out_planes:
                conv = torch.nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
                torch.nn.init.kaiming_normal_(conv.weight, mode='fan_out', nonlinearity='relu')
                if self.normalization:
                    bn = torch.nn.BatchNorm2d(out_planes)
                    torch.nn.init.constant_(bn.weight, 1)
                    torch.nn.init.constant_(bn.bias, 0)
                    downsample = torch.nn.Sequential(*[conv, bn])
                else:
                    downsample = torch.nn.Sequential(*[conv])

            sequence = []
            sequence.append(ResNetBlock(in_planes, out_planes, stride=stride, downsample=downsample, normalization=self.normalization))
            for _ in range(1, layers):
                sequence.append(ResNetBlock(out_planes, out_planes, stride=1, downsample=None, normalization=self.normalization))

            self.feature_layers.add_module('block%d' % i, torch.nn.Sequential(*sequence))
            downsampled *= stride

        representation = out_planes
        pool = torch.nn.AvgPool2d((self.resolution[1] // downsampled, self.resolution[2] // downsampled), stride=1)
        self.feature_layers.add_module('avgpool', pool)

        view = utils.torch.View(-1, representation)
        self.feature_layers.add_module('view', view)

        self.classifier_layers = nn.Sequential()
        gain = torch.nn.init.calculate_gain('relu')
        logits = torch.nn.Linear(representation, self.N_class)
        torch.nn.init.kaiming_normal_(logits.weight, gain)
        torch.nn.init.constant_(logits.bias, 0)
        self.classifier_layers.add_module('logits', logits)

        self.dense_layers = nn.Sequential()
        self.dense_layers.add_module("d0", nn.Linear(representation, 256))
        self.dense_layers.add_module("d1", nn.BatchNorm1d(256))
        self.dense_layers.add_module("d2", nn.ReLU())
        self.dense_layers.add_module("d3", nn.Linear(256, 1))

    def forward(self, x, return_d=False):
        feature = self.feature_layers(x)
        cls_output = self.classifier_layers(feature)
        d_output = self.dense_layers(feature)
        if return_d:
            return cls_output, d_output
        else:
            return cls_output
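Each `ResNetBlock` stacked above computes a residual update: the block's transform is added to an identity (or 1x1-projected) shortcut before the nonlinearity. A framework-free numpy sketch of that idea, independent of the repo's classes (`f` here is a hypothetical stand-in for the block's conv-bn path):

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


def residual_block(x, f, shortcut=lambda x: x):
    # y = relu(f(x) + shortcut(x)): the shortcut lets gradients bypass f,
    # which is what the `downsample` branch above preserves when shapes change.
    return relu(f(x) + shortcut(x))


x = np.array([-1.0, 2.0, 3.0])
f = lambda v: 0.5 * v  # stand-in for the learned transform
print(residual_block(x, f))  # relu(1.5 * x) elementwise
```

When `stride != 1` or the channel count changes, the identity shortcut is replaced by the strided 1x1 convolution built in the `downsample` branch, so the addition still type-checks.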
# data/dataset.py (repo: xuhangc/shadow, license: MIT)
import os
from torch.utils.data import Dataset
import torch
from PIL import Image
import torchvision.transforms.functional as TF
import random


def is_image_file(filename):
    return any(filename.endswith(extension) for extension in ['jpeg', 'JPEG', 'jpg', 'png', 'JPG', 'PNG', 'gif', 'tif'])


class DataLoaderTrain(Dataset):
    def __init__(self, img_dir, img_options=None):
        super(DataLoaderTrain, self).__init__()

        inp_files = sorted(os.listdir(os.path.join(img_dir, 'input')))
        tar_files = sorted(os.listdir(os.path.join(img_dir, 'target')))
        mask_files = sorted(os.listdir(os.path.join(img_dir, 'mask')))

        self.inp_filenames = [os.path.join(
            img_dir, 'input', x) for x in inp_files if is_image_file(x)]
        self.tar_filenames = [os.path.join(
            img_dir, 'target', x) for x in tar_files if is_image_file(x)]
        self.mask_filenames = [os.path.join(
            img_dir, 'mask', x) for x in mask_files if is_image_file(x)]

        self.img_options = img_options
        self.sizex = len(self.tar_filenames)  # get the size of target
        self.ps = self.img_options['patch_size']

    def __len__(self):
        return self.sizex

    def __getitem__(self, index):
        index_ = index % self.sizex

        inp_path = self.inp_filenames[index_]
        tar_path = self.tar_filenames[index_]
        mask_path = self.mask_filenames[index_]

        inp_img = Image.open(inp_path).convert('RGB')
        tar_img = Image.open(tar_path).convert('RGB')
        mask_img = Image.open(mask_path)

        inp_img = TF.to_tensor(inp_img)
        inp_img = TF.resize(inp_img, [self.ps, self.ps])
        tar_img = TF.to_tensor(tar_img)
        tar_img = TF.resize(tar_img, [self.ps, self.ps])
        mask_img = TF.to_tensor(mask_img)
        mask_img = TF.resize(mask_img, [self.ps, self.ps])

        hh, ww = tar_img.shape[1], tar_img.shape[2]

        rr = random.randint(0, hh - self.ps)
        cc = random.randint(0, ww - self.ps)
        aug = random.randint(0, 8)

        # Crop patch
        inp_img = inp_img[:, rr:rr + self.ps, cc:cc + self.ps]
        tar_img = tar_img[:, rr:rr + self.ps, cc:cc + self.ps]
        mask_img = mask_img[:, rr:rr + self.ps, cc:cc + self.ps]

        # Data Augmentations
        if aug == 1:
            inp_img = inp_img.flip(1)
            tar_img = tar_img.flip(1)
            mask_img = mask_img.flip(1)
        elif aug == 2:
            inp_img = inp_img.flip(2)
            tar_img = tar_img.flip(2)
            mask_img = mask_img.flip(2)
        elif aug == 3:
            inp_img = torch.rot90(inp_img, dims=(1, 2))
            tar_img = torch.rot90(tar_img, dims=(1, 2))
            mask_img = torch.rot90(mask_img, dims=(1, 2))
        elif aug == 4:
            inp_img = torch.rot90(inp_img, dims=(1, 2), k=2)
            tar_img = torch.rot90(tar_img, dims=(1, 2), k=2)
            mask_img = torch.rot90(mask_img, dims=(1, 2), k=2)
        elif aug == 5:
            inp_img = torch.rot90(inp_img, dims=(1, 2), k=3)
            tar_img = torch.rot90(tar_img, dims=(1, 2), k=3)
            mask_img = torch.rot90(mask_img, dims=(1, 2), k=3)
        elif aug == 6:
            inp_img = torch.rot90(inp_img.flip(1), dims=(1, 2))
            tar_img = torch.rot90(tar_img.flip(1), dims=(1, 2))
            mask_img = torch.rot90(mask_img.flip(1), dims=(1, 2))
        elif aug == 7:
            inp_img = torch.rot90(inp_img.flip(2), dims=(1, 2))
            tar_img = torch.rot90(tar_img.flip(2), dims=(1, 2))
            mask_img = torch.rot90(mask_img.flip(2), dims=(1, 2))

        filename = os.path.splitext(os.path.split(tar_path)[-1])[0]

        return inp_img, tar_img, mask_img, filename


class DataLoaderVal(Dataset):
    def __init__(self, img_dir, img_options=None, rgb_dir2=None):
        super(DataLoaderVal, self).__init__()

        inp_files = sorted(os.listdir(os.path.join(img_dir, 'input')))
        tar_files = sorted(os.listdir(os.path.join(img_dir, 'target')))
        mask_files = sorted(os.listdir(os.path.join(img_dir, 'mask')))

        self.inp_filenames = [os.path.join(
            img_dir, 'input', x) for x in inp_files if is_image_file(x)]
        self.tar_filenames = [os.path.join(
            img_dir, 'target', x) for x in tar_files if is_image_file(x)]
        self.mask_filenames = [os.path.join(
            img_dir, 'mask', x) for x in mask_files if is_image_file(x)]

        self.img_options = img_options
        self.sizex = len(self.tar_filenames)  # get the size of target
        self.ps = self.img_options['patch_size']

    def __len__(self):
        return self.sizex

    def __getitem__(self, index):
        index_ = index % self.sizex

        inp_path = self.inp_filenames[index_]
        tar_path = self.tar_filenames[index_]
        mask_path = self.mask_filenames[index_]

        inp_img = Image.open(inp_path).convert('RGB')
        tar_img = Image.open(tar_path).convert('RGB')
        mask_img = Image.open(mask_path)

        inp_img = TF.to_tensor(inp_img)
        inp_img = TF.resize(inp_img, [self.ps, self.ps])
        tar_img = TF.to_tensor(tar_img)
        tar_img = TF.resize(tar_img, [self.ps, self.ps])
        mask_img = TF.to_tensor(mask_img)
        mask_img = TF.resize(mask_img, [self.ps, self.ps])

        filename = os.path.splitext(os.path.split(tar_path)[-1])[0]

        return inp_img, tar_img, mask_img, filename


class DataLoaderTest(Dataset):
    def __init__(self, img_dir, img_options):
        super(DataLoaderTest, self).__init__()

        inp_files = sorted(os.listdir(os.path.join(img_dir, 'input')))
        tar_files = sorted(os.listdir(os.path.join(img_dir, 'target')))
        mask_files = sorted(os.listdir(os.path.join(img_dir, 'mask')))

        self.inp_filenames = [os.path.join(
            img_dir, 'input', x) for x in inp_files if is_image_file(x)]
        self.tar_filenames = [os.path.join(
            img_dir, 'target', x) for x in tar_files if is_image_file(x)]
        self.mask_filenames = [os.path.join(
            img_dir, 'mask', x) for x in mask_files if is_image_file(x)]

        self.img_options = img_options
        self.inp_size = len(self.tar_filenames)
        # self.ps = self.img_options['patch_size']

    def __len__(self):
        return self.inp_size

    def __getitem__(self, index):
        inp_path = self.inp_filenames[index]
        tar_path = self.tar_filenames[index]
        mask_path = self.mask_filenames[index]

        inp_img = Image.open(inp_path).convert('RGB')
        tar_img = Image.open(tar_path).convert('RGB')
        mask_img = Image.open(mask_path)

        inp_img = TF.to_tensor(inp_img)
        # inp_img = TF.resize(inp_img, [self.ps, self.ps])
        tar_img = TF.to_tensor(tar_img)
        # tar_img = TF.resize(tar_img, [self.ps, self.ps])
        mask_img = TF.to_tensor(mask_img)
        # mask_img = TF.resize(mask_img, [self.ps, self.ps])

        filename = os.path.splitext(os.path.split(tar_path)[-1])[0]

        return inp_img, tar_img, mask_img, filename
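The `aug` branch in `DataLoaderTrain` enumerates the eight symmetries of a square patch (identity, two flips, three rotations, two flip-then-rotate combinations), applied identically to input, target, and mask. A small numpy sketch of the same dispatch idea, independent of torch (axis conventions are illustrative, not a bit-exact port of `torch.rot90`):

```python
import numpy as np


def augment(img, aug):
    # img: (C, H, W) array; aug in 0..7 selects one square symmetry,
    # mirroring the flip / rot90 branches of DataLoaderTrain.__getitem__.
    if aug == 1:
        return img[:, ::-1, :]   # flip along H
    if aug == 2:
        return img[:, :, ::-1]   # flip along W
    if 3 <= aug <= 5:
        return np.rot90(img, k=aug - 2, axes=(1, 2))  # 90/180/270 degrees
    if aug == 6:
        return np.rot90(img[:, ::-1, :], axes=(1, 2))
    if aug == 7:
        return np.rot90(img[:, :, ::-1], axes=(1, 2))
    return img  # aug == 0 (or 8): unchanged


x = np.arange(4).reshape(1, 2, 2)
assert (augment(x, 0) == x).all()  # identity branch leaves the patch as-is
```

Because the same `aug` value is applied to all three tensors, pixel correspondence between input, target, and mask is preserved under every branch.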
# mikaponics/ecommerce/signals.py (repo: mikaponics/mikaponics-back, license: BSD-3-Clause)
from django.core.management import call_command
from django.dispatch import receiver
# lib/geojson/factory.py (repo: davasqueza/eriskco_conector_CloudSQL, license: Apache-2.0)
from geojson.geometry import Point, LineString, Polygon
from geojson.geometry import MultiLineString, MultiPoint, MultiPolygon
from geojson.geometry import GeometryCollection
from geojson.feature import Feature, FeatureCollection
from geojson.base import GeoJSON
from geojson.crs import Named, Linked

name = Named
link = Linked
# pybamm/models/submodels/particle/size_distribution/__init__.py (repo: manjunathnilugal/PyBaMM, license: BSD-3-Clause)
from .base_distribution import BaseSizeDistribution
from .fickian_diffusion import FickianDiffusion
from .x_averaged_fickian_diffusion import XAveragedFickianDiffusion
from .uniform_profile import UniformProfile
from .x_averaged_uniform_profile import XAveragedUniformProfile
# tests/error/missing_required_arg01.py (repo: ktok07b6/polyphony, license: MIT)
# missing_required_arg01() missing required argument x
from polyphony import testbench


def missing_required_arg01(x):
    return x


@testbench
def test():
    missing_required_arg01()


test()
# backend/utils/elements.py (repo: LuisFernandoBenatto/scrape-fundamentus, license: MIT)
class Header:
    SEARCH_INPUT = '/html/body/div[1]/div[1]/form/fieldset/input[1]'


class AssetsPage:
    TICKET = '/html/body/div[1]/div[2]/table[1]/tbody/tr[1]/td[2]/span'
    SUBSECTOR = '/html/body/div[1]/div[2]/table[1]/tbody/tr[5]/td[2]/span/a'
    STOCK_DIV_YIELD = (
        '/html/body/div[1]/div[2]/table[3]/tbody/tr[9]/td[4]/span'
    )
    P_L = '/html/body/div[1]/div[2]/table[3]/tbody/tr[2]/td[4]/span'
    STOCK_P_VP = '/html/body/div[1]/div[2]/table[3]/tbody/tr[3]/td[4]/span'
    EBITDA = '/html/body/div[1]/div[2]/table[3]/tbody/tr[10]/td[4]/span'
    ROE = '/html/body/div[1]/div[2]/table[3]/tbody/tr[9]/td[6]/span'
    ROIC = '/html/body/div[1]/div[2]/table[3]/tbody/tr[8]/td[6]/span'
    MIN_PRICE = '/html/body/div[1]/div[2]/table[1]/tbody/tr[3]/td[4]/span'
    MAX_PRICE = '/html/body/div[1]/div[2]/table[1]/tbody/tr[4]/td[4]/span'
    PRICE = '/html/body/div[1]/div[2]/table[1]/tbody/tr[1]/td[4]/span'

    # FIIs
    SEGMENT = '/html/body/div[1]/div[2]/table[1]/tbody/tr[4]/td[2]/span/a'
    FII_DIV_YIELD = '/html/body/div[1]/div[2]/table[3]/tbody/tr[3]/td[4]/span'
    FII_P_VP = '/html/body/div[1]/div[2]/table[3]/tbody/tr[4]/td[4]/span'

    # To get asset type
    SEGMENT_OR_SUBSECTOR_LABEL = (
        '/html/body/div[1]/div[2]/table[1]/tbody/tr[4]/td[1]/span[2]'
    )
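These constants are absolute XPaths presumably consumed elsewhere by a browser driver (e.g. something like `driver.find_element(By.XPATH, AssetsPage.TICKET)` in Selenium; the driver code is not in this file, so that usage is an assumption). The same positional-predicate idea can be sketched with the stdlib's limited XPath support on a toy tree:

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature of the scraped page's layout (not the real site HTML).
html = (
    "<html><body><div><table>"
    "<tr><td/><td><span>PETR4</span></td></tr>"
    "</table></div></body></html>"
)
root = ET.fromstring(html)

# ElementTree paths are relative (no leading '/html') but do support
# positional predicates like div[1] and td[2], the same mechanism the
# constants above rely on.
ticket = root.find("./body/div[1]/table[1]/tr[1]/td[2]/span")
print(ticket.text)  # PETR4
```

Centralizing the XPaths as class attributes keeps the brittle selectors in one place, so a site layout change only touches this module.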
# templator/__init__.py (repo: dongrama/templator, license: MIT)
from . import __main__
# python/oneflow/test/modules/test_optim_adagrad.py (repo: zzk0/oneflow, license: Apache-2.0)
"""
Copyright 2020 The OneFlow Authors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import tempfile
import unittest
from collections import OrderedDict
import numpy as np
from test_util import GenArgList
from optimizer_test_util import clip_grad_norm_np
import oneflow as flow
from oneflow.nn.parameter import Parameter
def compare_with_numpy_adagrad(
test_case,
device,
x_shape,
learning_rate,
train_iters,
lr_decay,
weight_decay,
initial_accumulator_value,
eps,
reload_state_step,
save_load_by_pickle,
):
random_grad_seq = []
for _ in range(train_iters):
random_grad_seq.append(np.random.uniform(size=x_shape).astype(np.float32))
init_value = np.random.uniform(size=x_shape).astype(np.float32)
def train_by_oneflow():
x = Parameter(flow.Tensor(init_value, device=flow.device(device)))
adagrad = flow.optim.Adagrad(
[
{
"params": [x],
"lr": learning_rate,
"eps": eps,
"weight_decay": weight_decay,
}
],
lr_decay=lr_decay,
initial_accumulator_value=initial_accumulator_value,
)
def train_one_iter(grad):
grad_tensor = flow.tensor(
grad, requires_grad=False, device=flow.device(device)
)
loss = flow.sum(x * grad_tensor)
loss.backward()
adagrad.step()
adagrad.zero_grad()
for i in range(train_iters):
train_one_iter(random_grad_seq[i])
if i == reload_state_step:
state_dict = adagrad.state_dict()
adagrad = flow.optim.Adagrad([x])
if save_load_by_pickle:
with tempfile.NamedTemporaryFile("wb", delete=False) as f:
file_name = f.name
import pickle
pickle.dump(state_dict, f)
with open(file_name, "rb") as f:
state_dict = pickle.load(f)
adagrad.load_state_dict(state_dict)
return x
def train_by_numpy():
x = init_value
st = np.ones_like(x) * initial_accumulator_value
def train_one_iter(iter, grad):
grad = grad + weight_decay * x
lr = learning_rate / (1 + (iter - 1) * lr_decay)
s = st + grad * grad
param = x - lr / (np.sqrt(s) + eps) * grad
return (param, s)
for i in range(1, train_iters + 1):
(x, st) = train_one_iter(i, random_grad_seq[i - 1])
return x
oneflow_res = train_by_oneflow().numpy()
numpy_res = train_by_numpy()
test_case.assertTrue(
np.allclose(oneflow_res.flatten(), numpy_res.flatten(), rtol=1e-3, atol=1e-3)
)


# NOTE: despite the "adam" in the name, this exercises Adagrad with gradient clipping.
def compare_with_numpy_adam_clip_grad(
    test_case,
    device,
    x_shape,
    learning_rate,
    train_iters,
    lr_decay,
    weight_decay,
    initial_accumulator_value,
    eps,
    clip_grad_max_norm,
    clip_grad_norm_type,
    reload_state_step,
    save_load_by_pickle,
):
    random_grad_seq = []
    for _ in range(train_iters):
        random_grad_seq.append(np.random.uniform(size=x_shape).astype(np.float32))
    init_value = np.random.uniform(size=x_shape).astype(np.float32)

    def train_by_oneflow():
        x = Parameter(flow.Tensor(init_value, device=flow.device(device)))
        adagrad = flow.optim.Adagrad(
            [
                {
                    "params": [x],
                    "lr": learning_rate,
                    "eps": eps,
                    "weight_decay": weight_decay,
                    "clip_grad_max_norm": clip_grad_max_norm,
                    "clip_grad_norm_type": clip_grad_norm_type,
                }
            ],
            lr_decay=lr_decay,
            initial_accumulator_value=initial_accumulator_value,
        )

        def train_one_iter(grad):
            grad_tensor = flow.tensor(
                grad, requires_grad=False, device=flow.device(device)
            )
            loss = flow.sum(x * grad_tensor)
            loss.backward()
            adagrad.clip_grad()
            adagrad.step()
            adagrad.zero_grad()

        for i in range(train_iters):
            train_one_iter(random_grad_seq[i])
            if i == reload_state_step:
                state_dict = adagrad.state_dict()
                adagrad = flow.optim.Adagrad([x])
                if save_load_by_pickle:
                    with tempfile.NamedTemporaryFile("wb", delete=False) as f:
                        file_name = f.name
                        import pickle

                        pickle.dump(state_dict, f)
                    with open(file_name, "rb") as f:
                        state_dict = pickle.load(f)
                adagrad.load_state_dict(state_dict)
        return x

    def train_by_numpy():
        x = init_value
        st = np.ones_like(x) * initial_accumulator_value

        def train_one_iter(iter, grad):
            total_norm, grad = clip_grad_norm_np(
                grad, clip_grad_max_norm, clip_grad_norm_type
            )
            grad = grad + weight_decay * x
            lr = learning_rate / (1 + (iter - 1) * lr_decay)
            s = st + grad * grad
            param = x - lr / (np.sqrt(s) + eps) * grad
            return (param, s)

        for i in range(1, train_iters + 1):
            (x, st) = train_one_iter(i, random_grad_seq[i - 1])
        return x

    oneflow_res = train_by_oneflow().numpy()
    numpy_res = train_by_numpy()
    test_case.assertTrue(
        np.allclose(oneflow_res.flatten(), numpy_res.flatten(), rtol=1e-3, atol=1e-3)
    )


@flow.unittest.skip_unless_1n1d()
class TestAdagrad(flow.unittest.TestCase):
    def test_adagrad(test_case):
        arg_dict = OrderedDict()
        arg_dict["device"] = ["cpu", "cuda"]
        arg_dict["x_shape"] = [(10,)]
        arg_dict["learning_rate"] = [1, 1e-3]
        arg_dict["train_iters"] = [10]
        arg_dict["lr_decay"] = [0.9, 0.75]
        arg_dict["weight_decay"] = [0.0, 0.1]
        arg_dict["initial_accumulator_value"] = [1.0, 2.1]
        arg_dict["eps"] = [1e-08, 1e-07]
        arg_dict["reload_state_step"] = [5]  # save and load optim state
        arg_dict["save_load_by_pickle"] = [False, True]
        for arg in GenArgList(arg_dict):
            compare_with_numpy_adagrad(test_case, *arg)

    def test_adagrad_clip_grad(test_case):
        arg_dict = OrderedDict()
        arg_dict["device"] = ["cpu", "cuda"]
        arg_dict["x_shape"] = [(10,)]
        arg_dict["learning_rate"] = [1, 1e-3]
        arg_dict["train_iters"] = [10]
        arg_dict["lr_decay"] = [0.9, 0.75]
        arg_dict["weight_decay"] = [0.0, 0.1]
        arg_dict["initial_accumulator_value"] = [1.0, 2.1]
        arg_dict["eps"] = [1e-08, 1e-07]
        arg_dict["clip_grad_max_norm"] = [0, 0.5, 1.0]
        arg_dict["clip_grad_norm_type"] = ["inf", "-inf", 0.0, 1.0, 2.0, 3.5]
        arg_dict["reload_state_step"] = [5]  # save and load optim state
        arg_dict["save_load_by_pickle"] = [False, True]
        for arg in GenArgList(arg_dict):
            compare_with_numpy_adam_clip_grad(test_case, *arg)


if __name__ == "__main__":
    unittest.main()
| 32.551867 | 85 | 0.580497 | 1,009 | 7,845 | 4.222993 | 0.17443 | 0.042713 | 0.053978 | 0.02253 | 0.789721 | 0.787139 | 0.774701 | 0.774701 | 0.74161 | 0.74161 | 0 | 0.019184 | 0.315615 | 7,845 | 240 | 86 | 32.6875 | 0.774446 | 0.080816 | 0 | 0.789474 | 0 | 0 | 0.055401 | 0.006943 | 0 | 0 | 0 | 0 | 0.010526 | 1 | 0.063158 | false | 0 | 0.052632 | 0 | 0.152632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
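The `train_by_numpy` oracle in both comparisons boils down to a single Adagrad step. A standalone sketch of that update, in plain numpy and mirroring the formulas above (`adagrad_step` is illustrative, not part of any OneFlow API):

```python
import numpy as np

def adagrad_step(x, state, grad, step, lr, lr_decay=0.0, weight_decay=0.0, eps=1e-8):
    """One Adagrad update; `state` is the running sum of squared gradients."""
    grad = grad + weight_decay * x               # L2 penalty folded into the gradient
    cur_lr = lr / (1 + (step - 1) * lr_decay)    # learning-rate decay, step counted from 1
    state = state + grad * grad
    x = x - cur_lr / (np.sqrt(state) + eps) * grad
    return x, state
```

With `lr_decay` and `weight_decay` at 0 this reduces to the textbook Adagrad rule, which is exactly what the `assertTrue(np.allclose(...))` checks compare the OneFlow optimizer against.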
2d7dcec3a2cdd6cc375d04a19fc00ca6fb921b41 | 393 | py | Python | test/test_utils.py | e1mo/mediawiki-dump | eefc3668b01f105b8740d370f012c14c19084f89 | [
"MIT"
] | null | null | null | test/test_utils.py | e1mo/mediawiki-dump | eefc3668b01f105b8740d370f012c14c19084f89 | [
"MIT"
] | null | null | null | test/test_utils.py | e1mo/mediawiki-dump | eefc3668b01f105b8740d370f012c14c19084f89 | [
"MIT"
] | null | null | null | from mediawiki_dump.utils import parse_date_string


def test_parse_date_string():
    # new Date(1085451568000).toGMTString()
    # "Tue, 25 May 2004 02:19:28 GMT"
    assert parse_date_string('1970-01-01T00:00:00Z').timestamp() == 0
    assert parse_date_string('2004-05-25T02:19:28Z').timestamp() == 1085451568
    assert parse_date_string('2018-10-29T16:01:01Z').timestamp() == 1540828861
| 39.3 | 78 | 0.732824 | 58 | 393 | 4.758621 | 0.655172 | 0.163043 | 0.271739 | 0.228261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25656 | 0.127226 | 393 | 9 | 79 | 43.666667 | 0.548105 | 0.175573 | 0 | 0 | 0 | 0 | 0.186916 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
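The assertions pin down the expected behaviour of `parse_date_string`. A minimal implementation that satisfies them — an assumption; the real `mediawiki_dump.utils.parse_date_string` may be more permissive — is a strict ISO-8601 "Zulu" parse treated as UTC:

```python
from datetime import datetime, timezone

def parse_date_string(date):
    """Parse MediaWiki dump timestamps like '2004-05-25T02:19:28Z' as UTC datetimes."""
    return datetime.strptime(date, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
```

Attaching `timezone.utc` matters: a naive datetime's `.timestamp()` is interpreted in local time, which would break the epoch assertions on any non-UTC machine.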
2d9adce39bdb58a80d24395c8235ecbf82e2c96a | 4,582 | py | Python | attendees/whereabouts/serializers/place_serializer.py | xjlin0/attendees32 | 25913c75ea8d916dcb065a23f2fa68bea558f77c | [
"MIT"
] | null | null | null | attendees/whereabouts/serializers/place_serializer.py | xjlin0/attendees32 | 25913c75ea8d916dcb065a23f2fa68bea558f77c | [
"MIT"
] | 5 | 2022-01-21T03:26:40.000Z | 2022-02-04T17:32:16.000Z | attendees/whereabouts/serializers/place_serializer.py | xjlin0/attendees32 | 25913c75ea8d916dcb065a23f2fa68bea558f77c | [
"MIT"
] | null | null | null | from address.models import Address, Locality, State
from rest_framework import serializers
from attendees.whereabouts.models import Place
from attendees.whereabouts.serializers import AddressSerializer


class PlaceSerializer(serializers.ModelSerializer):
    """
    Generic relation: https://www.django-rest-framework.org/api-guide/relations/#generic-relationships
    """

    street = serializers.CharField(read_only=True)
    address = AddressSerializer(required=False)

    class Meta:
        model = Place
        # fields = '__all__'
        fields = [
            f.name for f in model._meta.fields if f.name not in ["is_removed"]
        ] + [
            "street",
            "address",
        ]

    def create(self, validated_data):
        """
        Create or update `Place` instance, given the validated data.
        """
        place_data = self._kwargs.get("data", {})
        place_id = place_data.get("id")
        address_data = place_data.get("address")
        address_id = address_data.get("id")
        locality = validated_data.get("address", {}).get("locality")
        if address_id and locality:
            new_city = address_data.get("city")
            new_zip = address_data.get("postal_code")
            new_state = State.objects.filter(pk=address_data.get("state_id")).first()
            if new_state:
                locality, locality_created = Locality.objects.update_or_create(
                    name=new_city,
                    postal_code=new_zip,
                    state=new_state,
                    defaults={
                        "name": new_city,
                        "postal_code": new_zip,
                        "state": new_state,
                    },
                )
            address_data["locality"] = locality
            address, address_created = Address.objects.update_or_create(
                id=address_id,
                defaults=address_data,
            )
            validated_data["address"] = address
            place, place_created = Place.objects.update_or_create(
                id=place_id,
                defaults=validated_data,
            )
        else:  # user is creating new address, new_address is to bypass DRF model validations
            new_address_data = address_data.get("new_address", {})
            del validated_data["address"]
            place, place_created = Place.objects.update_or_create(
                id=place_id,
                defaults=validated_data,
            )
            place.address = new_address_data
            place.save()
        return place

    def update(self, instance, validated_data):
        """
        Update and return an existing `Place` instance, given the validated data.
        """
        place_data = self._kwargs.get("data", {})
        address_data = place_data.get("address")
        address_id = address_data.get("id")
        locality = validated_data.get("address", {}).get("locality")
        if address_id and locality:
            new_city = address_data.get("city")
            new_zip = address_data.get("postal_code")
            new_state = State.objects.filter(pk=address_data.get("state_id")).first()
            if new_state:
                locality, locality_created = Locality.objects.update_or_create(
                    name=new_city,
                    postal_code=new_zip,
                    state=new_state,
                    defaults={
                        "name": new_city,
                        "postal_code": new_zip,
                        "state": new_state,
                    },
                )
            address_data["locality"] = locality
            address, address_created = Address.objects.update_or_create(
                id=address_id,
                defaults=address_data,
            )
            validated_data["address"] = address
            place, place_created = Place.objects.update_or_create(
                id=instance.id,
                defaults=validated_data,
            )
        else:  # user is creating new address, new_address is to bypass DRF model validations
            new_address_data = address_data.get("new_address", {})
            del validated_data["address"]
            place, place_created = Place.objects.update_or_create(
                id=instance.id,
                defaults=validated_data,
            )
            place.address = new_address_data
            place.save()
        return place
| 35.796875 | 102 | 0.559799 | 468 | 4,582 | 5.230769 | 0.183761 | 0.089869 | 0.05719 | 0.068627 | 0.737745 | 0.737745 | 0.737745 | 0.737745 | 0.737745 | 0.737745 | 0 | 0.001006 | 0.349192 | 4,582 | 127 | 103 | 36.07874 | 0.81992 | 0.093845 | 0 | 0.701031 | 0 | 0 | 0.069506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020619 | false | 0 | 0.041237 | 0 | 0.123711 | 0.010309 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
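Both `create` and `update` above lean on Django's `Model.objects.update_or_create`, which looks a row up by the keyword filters and either updates it with `defaults` or creates it, returning an `(object, created)` pair. A framework-free sketch of those semantics — the dict-backed `FakeManager` is purely illustrative, not Django code:

```python
class FakeManager:
    """Mimics Model.objects.update_or_create on an in-memory dict keyed by id."""

    def __init__(self):
        self.rows = {}

    def update_or_create(self, id=None, defaults=None):
        created = id not in self.rows        # True only when the key is new
        row = self.rows.setdefault(id, {})
        row.update(defaults or {})           # `defaults` always overwrite
        return row, created
```

This return shape is why the serializer unpacks `place, place_created = ...` even though `place_created` goes unused.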
2de241b49eed99c0d4161c752cbffe143b4fe566 | 96 | py | Python | venv/lib/python3.8/site-packages/tomlkit/_utils.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/tomlkit/_utils.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/tomlkit/_utils.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/75/7e/79/1fb67803e5d41160c00edc2f8fd7a0a0f06ada87cafd03390913b64a5e | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93138b1ff18785b9786633a55ba463df6b674bb4 | 3,054 | py | Python | project/apps/adjudication/tests/test_api_as_admin.py | dbinetti/barberscore | 13c3d8193834bd2bb79922e28d3f5ab1675bdffd | [
"BSD-2-Clause"
] | 13 | 2017-08-07T15:45:49.000Z | 2019-07-03T13:58:50.000Z | project/apps/adjudication/tests/test_api_as_admin.py | barberscore/barberscore-api | 2aa9f8598c18c28ba1d4a294f76fd055619f803e | [
"BSD-2-Clause"
] | 309 | 2017-07-14T02:34:12.000Z | 2022-01-14T21:37:02.000Z | project/apps/adjudication/tests/test_api_as_admin.py | dbinetti/barberscore-django | 16fbd9945becda0a765bbdf52ad459a63655128f | [
"BSD-2-Clause"
] | 5 | 2017-08-07T14:01:07.000Z | 2019-06-24T19:44:55.000Z |
# Third-Party
import pytest
from rest_framework import status
# Django
from django.urls import reverse
pytestmark = pytest.mark.django_db
def test_appearance_endpoint(admin_api_client, appearance, django_assert_max_num_queries):
    with django_assert_max_num_queries(10):
        path = reverse('appearance-list')
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
    with django_assert_max_num_queries(10):
        path = reverse('appearance-detail', args=(str(appearance.id),))
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK


def test_outcome_endpoint(admin_api_client, outcome, django_assert_max_num_queries):
    with django_assert_max_num_queries(10):
        path = reverse('outcome-list')
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
    with django_assert_max_num_queries(10):
        path = reverse('outcome-detail', args=(str(outcome.id),))
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK


def test_panelist_endpoint(admin_api_client, panelist, django_assert_max_num_queries):
    with django_assert_max_num_queries(10):
        path = reverse('panelist-list')
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
    with django_assert_max_num_queries(10):
        path = reverse('panelist-detail', args=(str(panelist.id),))
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK


def test_round_endpoint(admin_api_client, round, django_assert_max_num_queries):
    with django_assert_max_num_queries(10):
        path = reverse('round-list')
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
    with django_assert_max_num_queries(10):
        path = reverse('round-detail', args=(str(round.id),))
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK


def test_score_endpoint(admin_api_client, score, django_assert_max_num_queries):
    with django_assert_max_num_queries(10):
        path = reverse('score-list')
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
    with django_assert_max_num_queries(10):
        path = reverse('score-detail', args=(str(score.id),))
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK


def test_song_endpoint(admin_api_client, song, django_assert_max_num_queries):
    with django_assert_max_num_queries(10):
        path = reverse('song-list')
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
    with django_assert_max_num_queries(10):
        path = reverse('song-detail', args=(str(song.id),))
        response = admin_api_client.get(path)
        assert response.status_code == status.HTTP_200_OK
| 40.184211 | 90 | 0.727898 | 422 | 3,054 | 4.893365 | 0.104265 | 0.069734 | 0.122034 | 0.156901 | 0.784019 | 0.784019 | 0.784019 | 0.784019 | 0.784019 | 0.784019 | 0 | 0.023885 | 0.177472 | 3,054 | 75 | 91 | 40.72 | 0.798169 | 0.005894 | 0 | 0.62069 | 0 | 0 | 0.049472 | 0 | 0 | 0 | 0 | 0 | 0.517241 | 1 | 0.103448 | false | 0 | 0.051724 | 0 | 0.155172 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9344257529fb195f910ee818a261d67f006879a2 | 26 | py | Python | k_selenium_cookies/models/__init__.py | kkristof200/selenium_cookies | 399232ca159a4c7b665d6f43e442ef451528e49f | [
"MIT"
] | null | null | null | k_selenium_cookies/models/__init__.py | kkristof200/selenium_cookies | 399232ca159a4c7b665d6f43e442ef451528e49f | [
"MIT"
] | null | null | null | k_selenium_cookies/models/__init__.py | kkristof200/selenium_cookies | 399232ca159a4c7b665d6f43e442ef451528e49f | [
"MIT"
] | null | null | null | from .cookie import Cookie | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
936b06c4890fe65b84f1602c54ef12c76127afc4 | 41 | py | Python | chainer_graphics/image/__init__.py | Idein/chainer-graphics | 3646fd961003297ff7e3f5efb71360c16d5eb9f5 | [
"MIT"
] | 3 | 2019-07-01T04:38:50.000Z | 2021-12-03T06:22:58.000Z | chainer_graphics/image/__init__.py | Idein/chainer-graphics | 3646fd961003297ff7e3f5efb71360c16d5eb9f5 | [
"MIT"
] | null | null | null | chainer_graphics/image/__init__.py | Idein/chainer-graphics | 3646fd961003297ff7e3f5efb71360c16d5eb9f5 | [
"MIT"
] | 1 | 2021-12-03T06:22:59.000Z | 2021-12-03T06:22:59.000Z | from .basic import *
from .warp import *
| 13.666667 | 20 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 21 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa8ec02a229d569378266eb2da22ed5241f93773 | 47 | py | Python | visigoth/containers/grid/__init__.py | visigoths/visigoth | c5297148209d630f6668f0e5ba3039a8856d8320 | [
"MIT"
] | null | null | null | visigoth/containers/grid/__init__.py | visigoths/visigoth | c5297148209d630f6668f0e5ba3039a8856d8320 | [
"MIT"
] | 1 | 2021-01-26T16:55:48.000Z | 2021-09-03T15:29:14.000Z | visigoth/containers/grid/__init__.py | visigoths/visigoth | c5297148209d630f6668f0e5ba3039a8856d8320 | [
"MIT"
] | null | null | null | from visigoth.containers.grid.grid import Grid
| 23.5 | 46 | 0.851064 | 7 | 47 | 5.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fad3a43b591796c2a1bd488d579a02a9074e5f1c | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/core/tests/test_simd_module.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/core/tests/test_simd_module.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/core/tests/test_simd_module.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/10/12/5b/1e6f46721543fe1910bd541f0be034199ac8517fb3644c7c8e265441ef | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.46875 | 0 | 96 | 1 | 96 | 96 | 0.427083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
87859c0f671fef1b5328775f5417dcacc3e59f73 | 104 | py | Python | test/data/dir_test/recursive/bare_plugins.py | darricktheprogrammer/blocks | 2f6b4e04a833371eee3bbc3d846c180d3b09d8a1 | [
"MIT"
] | null | null | null | test/data/dir_test/recursive/bare_plugins.py | darricktheprogrammer/blocks | 2f6b4e04a833371eee3bbc3d846c180d3b09d8a1 | [
"MIT"
] | null | null | null | test/data/dir_test/recursive/bare_plugins.py | darricktheprogrammer/blocks | 2f6b4e04a833371eee3bbc3d846c180d3b09d8a1 | [
"MIT"
] | null | null | null | from blocks.base import IPlugin


class BarePlugin1(IPlugin):
    pass


class BarePlugin2(IPlugin):
    pass
| 10.4 | 31 | 0.778846 | 13 | 104 | 6.230769 | 0.692308 | 0.271605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.153846 | 104 | 9 | 32 | 11.555556 | 0.897727 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
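`bare_plugins.py` only declares subclasses, so the loader under test presumably scans modules for `IPlugin` subclasses. A minimal sketch of such discovery — the `IPlugin` stub and `find_plugins` are assumptions, not the `blocks` package API:

```python
class IPlugin:
    """Stand-in for blocks.base.IPlugin (assumed to be a plain base class)."""

def find_plugins(namespace):
    """Collect concrete IPlugin subclasses from a module's namespace dict."""
    return [obj for obj in namespace.values()
            if isinstance(obj, type)
            and issubclass(obj, IPlugin)
            and obj is not IPlugin]

class DemoPlugin1(IPlugin):
    pass

class DemoPlugin2(IPlugin):
    pass
```

In the real package, `namespace` would come from importing each discovered file and reading `vars(module)`, which is why "bare" plugin modules like the one above need nothing beyond the class definitions.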
87e1584920473dd78c1f71153131d123c286837c | 28 | py | Python | vnpy/api/sec/__init__.py | xiumingxu/vnpy-xx | 8b2d9ecdabcb7931d46fd92fad2d3701b7e66975 | [
"MIT"
] | null | null | null | vnpy/api/sec/__init__.py | xiumingxu/vnpy-xx | 8b2d9ecdabcb7931d46fd92fad2d3701b7e66975 | [
"MIT"
] | null | null | null | vnpy/api/sec/__init__.py | xiumingxu/vnpy-xx | 8b2d9ecdabcb7931d46fd92fad2d3701b7e66975 | [
"MIT"
] | null | null | null | from .sec_constant import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e200fad02fb48df60cd8aa1d785826232ec922ec | 138 | py | Python | photographer/homepage/views.py | MehmetUzel/photographer-appoinment | 6a6580d0da5c22aeb34699db0d9ee61ad19ea931 | [
"MIT"
] | null | null | null | photographer/homepage/views.py | MehmetUzel/photographer-appoinment | 6a6580d0da5c22aeb34699db0d9ee61ad19ea931 | [
"MIT"
] | null | null | null | photographer/homepage/views.py | MehmetUzel/photographer-appoinment | 6a6580d0da5c22aeb34699db0d9ee61ad19ea931 | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def home(response):
    return render(response, "homepage/home.html", {})
| 19.714286 | 53 | 0.731884 | 18 | 138 | 5.611111 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 138 | 6 | 54 | 23 | 0.863248 | 0.166667 | 0 | 0 | 0 | 0 | 0.159292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
3578b7255a53f99dcdf4a00f0feb771403106e60 | 116 | py | Python | dynamo/prediction/__init__.py | davisidarta/dynamo-release | 0dbd769f52ea07f3cdaa8fb31022ceb89938c382 | [
"BSD-3-Clause"
] | null | null | null | dynamo/prediction/__init__.py | davisidarta/dynamo-release | 0dbd769f52ea07f3cdaa8fb31022ceb89938c382 | [
"BSD-3-Clause"
] | null | null | null | dynamo/prediction/__init__.py | davisidarta/dynamo-release | 0dbd769f52ea07f3cdaa8fb31022ceb89938c382 | [
"BSD-3-Clause"
] | null | null | null | """Mapping Vector Field of Single Cells
"""
from .fate import fate, fate_bias
from .state_graph import state_graph
| 19.333333 | 39 | 0.775862 | 18 | 116 | 4.833333 | 0.666667 | 0.229885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146552 | 116 | 5 | 40 | 23.2 | 0.878788 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35ae74723315a8ce0b7df3115efb0e8fa6e70ffd | 5,059 | py | Python | grano/test/views/test_filters.py | ANCIR/grano | cee2ec1974df5df2bc6ed5e214f6bd5d201397a4 | [
"MIT"
] | 30 | 2018-08-23T15:42:17.000Z | 2021-11-16T13:11:36.000Z | grano/test/views/test_filters.py | ANCIR/grano | cee2ec1974df5df2bc6ed5e214f6bd5d201397a4 | [
"MIT"
] | null | null | null | grano/test/views/test_filters.py | ANCIR/grano | cee2ec1974df5df2bc6ed5e214f6bd5d201397a4 | [
"MIT"
] | 5 | 2019-05-30T11:36:53.000Z | 2021-08-11T16:17:14.000Z | import unittest
import flask
from grano import authz
from grano.lib.args import single_arg
from grano.views import filters
from grano.core import db
from grano.model import Entity
from grano.test.test_authz import make_test_app, BaseAuthTestCase
from grano.test.test_authz import _project_and_permission
from werkzeug.exceptions import BadRequest


class AllEntitiesTestCase(BaseAuthTestCase):
    def setUp(self):
        self.app = make_test_app()
        Entity.all().delete()
        # Consistently include an extra private project with Entity
        # that should not show in any test results
        project, permission = _project_and_permission(private=True)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD)
        db.session.add(entity)

    def test_all_entities__private(self):
        project, permission = _project_and_permission(private=True)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 0)

    def test_all_entities__private_reader_published(self):
        project, permission = _project_and_permission(
            reader=True, private=True)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 1)

    def test_all_entities__private_reader_draft(self):
        project, permission = _project_and_permission(
            reader=True, private=True)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD - 1)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 0)

    def test_all_entities__private_editor_published(self):
        project, permission = _project_and_permission(
            editor=True, private=True)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 1)

    def test_all_entities__private_editor_draft(self):
        project, permission = _project_and_permission(
            editor=True, private=True)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD - 1)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 1)

    def test_all_entities__not_private_published(self):
        project, permission = _project_and_permission(private=False)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 1)

    def test_all_entities__not_private_draft(self):
        project, permission = _project_and_permission(
            reader=True, private=False)
        entity = Entity(project=project, status=authz.PUBLISHED_THRESHOLD - 1)
        db.session.add(entity)
        db.session.commit()
        with self.app.test_request_context():
            flask.session['id'] = 1
            self.app.preprocess_request()
            q = db.session.query(Entity)
            self.assertEqual(filters.for_entities(q, Entity).count(), 0)


class SingleArgTestCase(BaseAuthTestCase):
    def setUp(self):
        self.app = make_test_app()

    def test_single_arg(self):
        with self.app.test_request_context('/?a=b'):
            self.assertEqual(single_arg('a'), 'b')

    def test_single_arg__bad_request(self):
        with self.app.test_request_context('/?a=b&a=c'):
            with self.assertRaises(BadRequest):
                single_arg('a')

    def test_single_arg__allow_empty_duplicates(self):
        with self.app.test_request_context('/?a=b&a='):
            self.assertEqual(single_arg('a'), 'b')


if __name__ == '__main__':
    unittest.main()
| 38.618321 | 78 | 0.663768 | 607 | 5,059 | 5.298188 | 0.136738 | 0.061567 | 0.034204 | 0.046642 | 0.82556 | 0.817786 | 0.776741 | 0.752799 | 0.752799 | 0.713308 | 0 | 0.004377 | 0.232259 | 5,059 | 130 | 79 | 38.915385 | 0.823635 | 0.019371 | 0 | 0.694444 | 0 | 0 | 0.009883 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 1 | 0.111111 | false | 0 | 0.101852 | 0 | 0.231481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
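The three `single_arg` tests encode its contract: a single value passes through, conflicting duplicates raise `BadRequest`, and empty duplicates are ignored. A possible implementation against a multi-dict of raw query values — an assumption about grano's actual helper, which reads `flask.request.args` and raises werkzeug's `BadRequest`; the stand-in exception keeps the sketch dependency-free:

```python
class BadRequest(ValueError):
    """Stand-in for werkzeug.exceptions.BadRequest."""

def single_arg(args, name):
    """args: mapping of name -> list of raw query values (MultiDict-like)."""
    values = [v for v in args.get(name, []) if v != '']  # drop empty duplicates
    if len(values) > 1:
        raise BadRequest("Conflicting values for %s" % name)
    return values[0] if values else None
```

Filtering out empty strings before counting is what makes `/?a=b&a=` legal while `/?a=b&a=c` is rejected.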
35ba4b1068b75d2b05913c704960d35ab02b073e | 60 | py | Python | hlib/python/hlib/__init__.py | zzzDavid/heterocl | 977aae575d54a30c5bf6d869e8f71bdc815cf7e9 | [
"Apache-2.0"
] | 236 | 2019-05-19T01:48:11.000Z | 2022-03-31T09:03:54.000Z | hlib/python/hlib/__init__.py | zzzDavid/heterocl | 977aae575d54a30c5bf6d869e8f71bdc815cf7e9 | [
"Apache-2.0"
] | 248 | 2019-05-17T19:18:36.000Z | 2022-03-30T21:25:47.000Z | hlib/python/hlib/__init__.py | AlgaPeng/heterocl-2 | b5197907d1fe07485466a63671a2a906a861c939 | [
"Apache-2.0"
] | 85 | 2019-05-17T20:09:27.000Z | 2022-02-28T20:19:00.000Z | from . import op
from . import frontend
from . import utils
| 15 | 22 | 0.75 | 9 | 60 | 5 | 0.555556 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 60 | 3 | 23 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35bfd2949da03317bb8097e927a06b2d668c619f | 40 | py | Python | amaranth_soc/csr/__init__.py | jfng/amaranth-soc | 217d4ea76ad3b3bbf146980d168bc7b3b9d95a18 | [
"BSD-2-Clause"
] | 28 | 2020-01-28T18:22:04.000Z | 2021-11-10T12:50:14.000Z | amaranth_soc/csr/__init__.py | jfng/amaranth-soc | 217d4ea76ad3b3bbf146980d168bc7b3b9d95a18 | [
"BSD-2-Clause"
] | 24 | 2020-02-05T15:37:38.000Z | 2021-09-16T11:54:36.000Z | amaranth_soc/csr/__init__.py | jfng/amaranth-soc | 217d4ea76ad3b3bbf146980d168bc7b3b9d95a18 | [
"BSD-2-Clause"
] | 14 | 2020-02-07T15:25:27.000Z | 2021-10-11T05:33:17.000Z | from .bus import *
from .event import *
| 13.333333 | 20 | 0.7 | 6 | 40 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 40 | 2 | 21 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ea6a0b16219f54534ec27e08409670ce1d4d7d4f | 3,358 | py | Python | test/test_algos/test_noise_handling/test_ssracos.py | HowardHu97/ZOOpt | 01568e8e6b0e65ac310d362af2da5245ac375e53 | [
"MIT"
] | 1 | 2018-11-03T12:05:00.000Z | 2018-11-03T12:05:00.000Z | test/test_algos/test_noise_handling/test_ssracos.py | HowardHu97/ZOOpt | 01568e8e6b0e65ac310d362af2da5245ac375e53 | [
"MIT"
] | null | null | null | test/test_algos/test_noise_handling/test_ssracos.py | HowardHu97/ZOOpt | 01568e8e6b0e65ac310d362af2da5245ac375e53 | [
"MIT"
] | null | null | null | import numpy as np
from zoopt import Dimension, Objective, Parameter, Opt


def ackley(solution):
    """
    Ackley function for continuous optimization
    """
    x = solution.get_x()
    bias = 0.2
    ave_seq = sum([(i - bias) * (i - bias) for i in x]) / len(x)
    ave_cos = sum([np.cos(2.0 * np.pi * (i - bias)) for i in x]) / len(x)
    value = -20 * np.exp(-0.2 * np.sqrt(ave_seq)) - np.exp(ave_cos) + 20.0 + np.e
    return value


def ackley_noise_creator(mu, sigma):
    """
    Ackley function under noise
    """
    return lambda solution: ackley(solution) + np.random.normal(mu, sigma, 1)


class TestSSRacos(object):
    def test_performance(self):
        ackley_noise_func = ackley_noise_creator(0, 0.1)
        dim_size = 100  # dimensions
        dim_regs = [[-1, 1]] * dim_size  # dimension range
        dim_tys = [True] * dim_size  # dimension type : real
        dim = Dimension(dim_size, dim_regs, dim_tys)  # form up the dimension object
        objective = Objective(ackley_noise_func, dim)  # form up the objective function
        budget = 20000  # number of calls to the objective function
        # suppression=True means optimize with value suppression, which is a noise
        # handling method. non_update_allowed=200 and resample_times=50 mean that
        # if the best solution doesn't change for 200 budgets, it will be
        # re-evaluated 50 times. balance_rate is the parameter for the exponential
        # weight average of several evaluations of one sample.
        parameter = Parameter(budget=budget, noise_handling=True, suppression=True,
                              non_update_allowed=200, resample_times=50,
                              balance_rate=0.5)
        parameter.set_positive_size(5)
        sol = Opt.min(objective, parameter)
        assert sol.get_value() < 4
def test_resample(self):
ackley_noise_func = ackley_noise_creator(0, 0.1)
dim_size = 100 # dimensions
dim_regs = [[-1, 1]] * dim_size # dimension range
dim_tys = [True] * dim_size # dimension type : real
dim = Dimension(dim_size, dim_regs, dim_tys) # form up the dimension object
objective = Objective(ackley_noise_func, dim) # form up the objective function
budget = 20000 # 20*dim_size # number of calls to the objective function
        # resampling=True means optimize with re-sampling, a commonly used noise-handling method
        # resample_times=10 means each solution is evaluated 10 times and the values are averaged
parameter = Parameter(budget=budget, noise_handling=True, resampling=True, resample_times=10)
parameter.set_positive_size(5)
sol = Opt.min(objective, parameter)
assert sol.get_value() < 4 | 50.878788 | 114 | 0.68642 | 468 | 3,358 | 4.788462 | 0.260684 | 0.031236 | 0.026774 | 0.037483 | 0.800089 | 0.800089 | 0.800089 | 0.800089 | 0.78581 | 0.78581 | 0 | 0.032171 | 0.231686 | 3,358 | 66 | 115 | 50.878788 | 0.836434 | 0.444908 | 0 | 0.540541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 1 | 0.108108 | false | 0 | 0.054054 | 0 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ea81845170d7e5523c3c3fb9e0f2c3c3d6259836 | 195 | py | Python | bin/frequency.py | mikiec84/linkshop | 72959ceca0003be226edeca6496f915502831596 | [
"Apache-2.0"
] | 6 | 2017-07-18T15:28:33.000Z | 2020-03-03T14:45:45.000Z | bin/frequency.py | mikiec84/linkshop | 72959ceca0003be226edeca6496f915502831596 | [
"Apache-2.0"
] | null | null | null | bin/frequency.py | mikiec84/linkshop | 72959ceca0003be226edeca6496f915502831596 | [
"Apache-2.0"
] | 3 | 2017-09-09T00:36:48.000Z | 2020-03-03T14:45:49.000Z | #!/usr/bin/env python3
"""Command-line wrapper for enumeration.cli_frequency."""
import loadPath # Adds the project path.
import linkograph.enumeration
linkograph.enumeration.cli_frequency()
| 21.666667 | 57 | 0.789744 | 24 | 195 | 6.333333 | 0.75 | 0.184211 | 0.302632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 0.102564 | 195 | 8 | 58 | 24.375 | 0.862857 | 0.492308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ea84873d804c6f8751171cd8d4318c9ce2ddcca5 | 12,089 | py | Python | tests/integration/test_acctload.py | kcburge/awsrun | b348bff36381dd08063bc6494ca79426d294f744 | [
"MIT"
] | 48 | 2019-11-16T15:22:05.000Z | 2020-08-30T20:56:34.000Z | tests/integration/test_acctload.py | kcburge/awsrun | b348bff36381dd08063bc6494ca79426d294f744 | [
"MIT"
] | 5 | 2021-01-16T15:50:31.000Z | 2022-03-30T01:32:42.000Z | tests/integration/test_acctload.py | kcburge/awsrun | b348bff36381dd08063bc6494ca79426d294f744 | [
"MIT"
] | 7 | 2020-10-27T09:36:57.000Z | 2021-08-30T16:10:26.000Z | #
# Copyright 2019 FMR LLC <opensource@fidelity.com>
#
# SPDX-License-Identifier: MIT
#
import json
from datetime import datetime, timedelta
from pathlib import Path
import pytest
import yaml
from freezegun import freeze_time
from awsrun import acctload
@pytest.fixture()
def expected_from_loader():
return [
{"id": "100200300400", "env": "prod", "status": "active"},
{"id": "200300400100", "env": "nonprod", "status": "active"},
{"id": "300400100200", "env": "dev", "status": "suspended"},
]
@pytest.fixture()
def csv_string():
return """id, env, status
"100200300400", prod, active
"200300400100", nonprod, active
"300400100200", dev, suspended
"""
@pytest.fixture()
def json_string():
return """
[
{
"id": "100200300400",
"env": "prod",
"status": "active"
},
{
"id": "200300400100",
"env": "nonprod",
"status": "active"
},
{
"id": "300400100200",
"env": "dev",
"status": "suspended"
}
]
"""
@pytest.fixture()
def yaml_string():
return """
- id: '100200300400'
env: prod
status: active
- id: '200300400100'
env: nonprod
status: active
- id: '300400100200'
env: dev
status: suspended
"""
@pytest.fixture()
def json_cache(tmpdir):
with open(tmpdir.join("awsrun.dat"), "w") as f:
f.write(
"""
[
{
"id": "100200300400",
"env": "prod",
"status": "active"
},
{
"id": "200300400100",
"env": "nonprod",
"status": "active"
},
{
"id": "300400100200",
"env": "dev",
"status": "suspended"
}
]
"""
)
@pytest.mark.parametrize("max_age", [0, 300])
def test_json_loader_without_cache(tmpdir, mocker, expected_from_loader, max_age):
mock_resp = mocker.Mock()
mock_resp.status_code = 200
mock_resp.json.return_value = expected_from_loader
mock_get = mocker.patch("requests.Session.get", return_value=mock_resp)
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
url = "http://example.com/accts.json"
acctload.JSONAccountLoader(url, max_age=max_age)
# requests.get should be called as no cache exists on the filesystem
mock_get.assert_called_once()
(url_called,), kwargs = mock_get.call_args
assert url == url_called
# Make sure the accts were loaded and passed to the MetaAccountLoader
(accts,), kwargs = mock_mal.call_args
assert accts == expected_from_loader
if max_age == 0:
# Make sure it did not cache data if max age was 0
with pytest.raises(FileNotFoundError):
open(tmpdir.join("awsrun.dat"))
else:
# Make sure the json loader cached the results if max age > 0
with open(tmpdir.join("awsrun.dat")) as f:
cached_accts = json.load(f)
assert accts == cached_accts
@pytest.mark.parametrize("max_age", [0, 300])
def test_yaml_loader_without_cache(
tmpdir, mocker, yaml_string, expected_from_loader, max_age
):
mock_resp = mocker.Mock()
mock_resp.status_code = 200
mock_resp.text = yaml_string
mock_get = mocker.patch("requests.Session.get", return_value=mock_resp)
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
url = "http://example.com/accts.yaml"
acctload.YAMLAccountLoader(url, max_age=max_age)
# requests.get should be called as no cache exists on the filesystem
mock_get.assert_called_once()
(url_called,), kwargs = mock_get.call_args
assert url == url_called
# Make sure the accts were loaded and passed to the MetaAccountLoader
(accts,), kwargs = mock_mal.call_args
assert accts == expected_from_loader
if max_age == 0:
# Make sure it did not cache data if max age was 0
with pytest.raises(FileNotFoundError):
open(tmpdir.join("awsrun.dat"))
else:
# Make sure the yaml loader cached the results if max age > 0
with open(tmpdir.join("awsrun.dat")) as f:
cached_accts = yaml.safe_load(f)
assert accts == cached_accts
@pytest.mark.parametrize("max_age", [0, 300])
def test_csv_loader_without_cache(
tmpdir, mocker, csv_string, expected_from_loader, max_age
):
mock_resp = mocker.Mock()
    mock_resp.status_code = 200
mock_resp.text = csv_string
mock_get = mocker.patch("requests.Session.get", return_value=mock_resp)
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
url = "http://example.com/accts.csv"
acctload.CSVAccountLoader(url, max_age=max_age)
# requests.get should be called as no cache exists on the filesystem
mock_get.assert_called_once()
(url_called,), kwargs = mock_get.call_args
assert url == url_called
# Make sure the accts were loaded and passed to the MetaAccountLoader
(accts,), kwargs = mock_mal.call_args
# csv loader returns a list of OrderedDicts, but json loader returns a list
# of dicts, so to share the fixture between tests, we convert the ordered
# dicts to plain dicts.
accts = [dict(a) for a in accts]
assert accts == expected_from_loader
if max_age == 0:
# Make sure it did not cache data if max age was 0
with pytest.raises(FileNotFoundError):
open(tmpdir.join("awsrun.dat"))
else:
        # Make sure the csv loader cached the results (as JSON) if max age > 0
with open(tmpdir.join("awsrun.dat")) as f:
cached_accts = json.load(f)
assert accts == cached_accts
def test_json_loader_with_cache(tmpdir, mocker, json_cache, expected_from_loader):
    mock_get = mocker.patch("requests.Session.get")
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
acctload.JSONAccountLoader("http://example.com/acct.json", max_age=86400)
# requests.get should not be called as a cache exists on the filesystem
mock_get.assert_not_called()
# Make sure the accts were loaded and passed to the MetaAccountLoader
(accts,), kwargs = mock_mal.call_args
assert accts == expected_from_loader
def test_yaml_loader_with_cache(tmpdir, mocker, json_cache, expected_from_loader):
    mock_get = mocker.patch("requests.Session.get")
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
acctload.YAMLAccountLoader("http://example.com/acct.yaml", max_age=86400)
# requests.get should not be called as a cache exists on the filesystem
mock_get.assert_not_called()
# Make sure the accts were loaded and passed to the MetaAccountLoader
(accts,), kwargs = mock_mal.call_args
assert accts == expected_from_loader
def test_json_loader_with_expired_cache(
tmpdir, mocker, json_cache, expected_from_loader
):
mock_resp = mocker.Mock()
mock_resp.status_code = 200
mock_resp.json.return_value = expected_from_loader
mock_get = mocker.patch("requests.Session.get", return_value=mock_resp)
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
    # We'll compare the mtimes of the cache file before and after to ensure
    # the file was replaced with a newer version.
cache_date_before = Path(tmpdir.join("awsrun.dat")).stat().st_mtime
# Fast-forward the time to the future by a day and a few seconds beyond
# when the cache is valid, which will force a fresh fetch of data.
with freeze_time(datetime.utcnow() + timedelta(days=1, seconds=5)):
acctload.JSONAccountLoader("http://example.com/acct.json", max_age=86400)
# requests.get should be called when cache is expired to refresh it
mock_get.assert_called_once()
# Compare the date of the cache file to make sure it was updated
cache_date_after = Path(tmpdir.join("awsrun.dat")).stat().st_mtime
assert cache_date_before < cache_date_after
# Make sure the json loader cached the results
with open(tmpdir.join("awsrun.dat")) as f:
cached_accts = json.load(f)
assert cached_accts == expected_from_loader
def test_yaml_loader_with_expired_cache(
tmpdir, mocker, json_cache, yaml_string, expected_from_loader
):
mock_resp = mocker.Mock()
mock_resp.status_code = 200
mock_resp.text = yaml_string
mock_get = mocker.patch("requests.Session.get", return_value=mock_resp)
mocker.patch("tempfile.gettempdir", return_value=tmpdir)
    # We'll compare the mtimes of the cache file before and after to ensure
    # the file was replaced with a newer version.
cache_date_before = Path(tmpdir.join("awsrun.dat")).stat().st_mtime
# Fast-forward the time to the future by a day and a few seconds beyond
# when the cache is valid, which will force a fresh fetch of data.
with freeze_time(datetime.utcnow() + timedelta(days=1, seconds=5)):
acctload.YAMLAccountLoader("http://example.com/acct.yaml", max_age=86400)
# requests.get should be called when cache is expired to refresh it
mock_get.assert_called_once()
# Compare the date of the cache file to make sure it was updated
cache_date_after = Path(tmpdir.join("awsrun.dat")).stat().st_mtime
assert cache_date_before < cache_date_after
    # Make sure the yaml loader cached the results
with open(tmpdir.join("awsrun.dat")) as f:
cached_accts = yaml.safe_load(f)
assert cached_accts == expected_from_loader
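The caching behaviour these tests exercise follows one pattern: serve accounts from a cache file on disk while it is younger than `max_age` seconds, otherwise fetch fresh data and (when `max_age > 0`) rewrite the cache. A minimal standalone sketch of that pattern (an editor's illustration, not awsrun's actual implementation):

```python
import json
import os
import tempfile
import time

def load_with_cache(fetch, cache_path, max_age):
    # Serve from the cache file if it exists and is fresh enough.
    if max_age > 0 and os.path.exists(cache_path):
        if time.time() - os.path.getmtime(cache_path) < max_age:
            with open(cache_path) as f:
                return json.load(f)
    data = fetch()
    if max_age > 0:
        with open(cache_path, "w") as f:
            json.dump(data, f)
    return data

calls = []
def fetch():
    calls.append(1)
    return [{"id": "100200300400"}]

path = os.path.join(tempfile.mkdtemp(), "awsrun.dat")
first = load_with_cache(fetch, path, max_age=300)    # fetches and writes cache
second = load_with_cache(fetch, path, max_age=300)   # served from cache
```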
@pytest.mark.parametrize(
"delimiter, csv_content",
[
(
None,
"""id, env, status
100200300400, prod, active
200300400100, nonprod, active
300400100200, dev, suspended""",
),
(
",",
"""id, env, status
100200300400, prod, active
200300400100, nonprod, active
300400100200, dev, suspended""",
),
(
"\t",
"""id\tenv\tstatus
100200300400\tprod\tactive
200300400100\tnonprod\tactive
300400100200\tdev\tsuspended""",
),
],
)
def test_csv_account_loader_with_file_url(
tmp_path, mocker, csv_content, delimiter, expected_from_loader
):
# Create the CSV file on disk as the csv loader will read it
csv_file = tmp_path / "accts.csv"
with csv_file.open("w") as f:
f.write(csv_content)
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
url = "file://" + csv_file.as_posix()
if delimiter:
acctload.CSVAccountLoader(url, delimiter=delimiter)
else:
acctload.CSVAccountLoader(url)
(accts,), kwargs = mock_mal.call_args
# csv loader returns a list of OrderedDicts, but json loader returns a list
# of dicts, so to share the fixture between tests, we convert the ordered
# dicts to plain dicts.
accts = [dict(a) for a in accts]
assert accts == expected_from_loader
def test_json_account_loader_with_file_url(
tmp_path, mocker, json_string, expected_from_loader
):
json_file = tmp_path / "accts.json"
with json_file.open("w") as f:
f.write(json_string)
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
url = "file://" + json_file.as_posix()
acctload.JSONAccountLoader(url, max_age=0)
(accts,), kwargs = mock_mal.call_args
assert accts == expected_from_loader
def test_yaml_account_loader_with_file_url(
tmp_path, mocker, yaml_string, expected_from_loader
):
yaml_file = tmp_path / "accts.yaml"
with yaml_file.open("w") as f:
f.write(yaml_string)
mock_mal = mocker.patch("awsrun.acctload.MetaAccountLoader.__init__")
url = "file://" + yaml_file.as_posix()
acctload.YAMLAccountLoader(url, max_age=0)
(accts,), kwargs = mock_mal.call_args
assert accts == expected_from_loader
| 31.981481 | 82 | 0.674167 | 1,610 | 12,089 | 4.86087 | 0.119255 | 0.0207 | 0.052901 | 0.031561 | 0.892921 | 0.862382 | 0.854076 | 0.847176 | 0.820215 | 0.807181 | 0 | 0.03754 | 0.219952 | 12,089 | 377 | 83 | 32.066313 | 0.792365 | 0.192075 | 0 | 0.565957 | 0 | 0 | 0.197079 | 0.03718 | 0 | 0 | 0 | 0 | 0.106383 | 1 | 0.06383 | false | 0 | 0.029787 | 0.017021 | 0.110638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa2477dffa14eb1ef2bb44dcf3782d8fd79405a4 | 35 | py | Python | server-src/modules.py | Artingl/Fluffy | e51ca77651a67ea6206dcbfa0a3436c032f3a3ed | [
"Apache-2.0"
] | null | null | null | server-src/modules.py | Artingl/Fluffy | e51ca77651a67ea6206dcbfa0a3436c032f3a3ed | [
"Apache-2.0"
] | null | null | null | server-src/modules.py | Artingl/Fluffy | e51ca77651a67ea6206dcbfa0a3436c032f3a3ed | [
"Apache-2.0"
] | null | null | null | import users
import directMessages
| 11.666667 | 21 | 0.885714 | 4 | 35 | 7.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 2 | 22 | 17.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a4bf4c088466c4ab3f55e1dbc80581a1586a9c10 | 4,636 | py | Python | social/tests/test_send_comment.py | Mangeneh/akkaskhooneh-backend | 2a81e73fbe0d55d5821ba1670a997bd8851c4af6 | [
"MIT"
] | 7 | 2018-09-17T18:34:49.000Z | 2019-09-15T11:39:15.000Z | social/tests/test_send_comment.py | Mangeneh/akkaskhooneh-backend | 2a81e73fbe0d55d5821ba1670a997bd8851c4af6 | [
"MIT"
] | 9 | 2019-10-21T17:12:21.000Z | 2022-03-11T23:28:14.000Z | social/tests/test_send_comment.py | Mangeneh/akkaskhooneh-backend | 2a81e73fbe0d55d5821ba1670a997bd8851c4af6 | [
"MIT"
] | 1 | 2019-11-29T16:12:12.000Z | 2019-11-29T16:12:12.000Z | from random import choice
from string import ascii_letters
from django.test import TestCase
from authentication.models import User
from social.models import Posts, Followers, Comment
from rest_framework import status
class SendCommentTest(TestCase):
def create(self, email, username, password):
user = User.objects.create(email=email, username=username, password='')
user.set_password(password)
user.save()
return user
def setUp(self):
self.password = 'sjkkensks'
self.user1 = self.create('t@t.com', 'test', self.password)
self.user2 = self.create('tt@tt.com', 'test2', self.password)
self.client.login(email=self.user1.email, password=self.password)
def to_private(self, user):
user.is_private = True
user.save()
def create_post(self, owner):
post = Posts.objects.create(owner=owner, picture='1.png')
return post
def test_public_comment_post(self):
post = self.create_post(self.user2)
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(1, Comment.objects.filter(
user=self.user1, post=post).count())
def test_public_already_commented_post(self):
post = self.create_post(self.user2)
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_public_comment_my_post_post(self):
post = self.create_post(self.user1)
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_private_comment_my_post_post(self):
post = self.create_post(self.user1)
self.to_private(self.user1)
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_private_comment_post(self):
post = self.create_post(self.user2)
self.to_private(self.user2)
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(0, Comment.objects.filter(
user=self.user1, post=post).count())
def test_private_following_comment_post(self):
post = self.create_post(self.user2)
self.to_private(self.user2)
Followers.objects.create(user=self.user1, following=self.user2)
response = self.client.post(
"/social/comment/", {'post_id': post.id, 'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(1, Comment.objects.filter(
user=self.user1, post=post).count())
def test_no_post_id_comment(self):
post = self.create_post(self.user2)
response = self.client.post("/social/comment/", {'content': 'sks'})
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(0, Comment.objects.filter(
user=self.user1, post=post).count())
def test_no_content_comment(self):
post = self.create_post(self.user2)
response = self.client.post("/social/comment/", {'post_id': post.id})
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(0, Comment.objects.filter(
user=self.user1, post=post).count())
def test_empty_content_comment(self):
post = self.create_post(self.user2)
response = self.client.post(
"/social/comment/", {'content': '', 'post_id': post.id})
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(0, Comment.objects.filter(
user=self.user1, post=post).count())
def test_big_content_comment(self):
post = self.create_post(self.user2)
response = self.client.post("/social/comment/", {'content': ''.join(
choice(ascii_letters) for _ in range(1010)), 'post_id': post.id})
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(0, Comment.objects.filter(
user=self.user1, post=post).count())
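Taken together, the cases above pin down a single visibility rule: anyone may comment on a public post, while a private post accepts comments only from its owner or the owner's followers. A plain-Python sketch of that rule (a set of (follower, followed) pairs stands in for the Followers model; this is an editor's illustration, not the view's actual code):

```python
def can_comment(commenter, owner, owner_is_private, followers):
    # followers: set of (follower, followed) username pairs.
    if not owner_is_private or commenter == owner:
        return True
    return (commenter, owner) in followers

followers = {("test", "test2")}
public_ok = can_comment("test", "test2", owner_is_private=False, followers=set())
private_blocked = can_comment("test3", "test2", owner_is_private=True, followers=followers)
private_follower_ok = can_comment("test", "test2", owner_is_private=True, followers=followers)
```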
| 43.327103 | 79 | 0.659836 | 583 | 4,636 | 5.070326 | 0.135506 | 0.073072 | 0.052097 | 0.081867 | 0.727673 | 0.727673 | 0.727673 | 0.727673 | 0.727673 | 0.725304 | 0 | 0.018498 | 0.207075 | 4,636 | 106 | 80 | 43.735849 | 0.785637 | 0 | 0 | 0.582418 | 0 | 0 | 0.081752 | 0 | 0 | 0 | 0 | 0 | 0.186813 | 1 | 0.153846 | false | 0.076923 | 0.065934 | 0 | 0.252747 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a4c5ca325881ea6465d0973d7eafa9c24b39727d | 1,228 | py | Python | backend/api/tests/utils.py | Leryud/doccano-lilo | b67c17431bedd76624346a0dbc41dd226cb1a0b5 | [
"MIT"
] | null | null | null | backend/api/tests/utils.py | Leryud/doccano-lilo | b67c17431bedd76624346a0dbc41dd226cb1a0b5 | [
"MIT"
] | null | null | null | backend/api/tests/utils.py | Leryud/doccano-lilo | b67c17431bedd76624346a0dbc41dd226cb1a0b5 | [
"MIT"
] | null | null | null | from rest_framework import status
from rest_framework.test import APITestCase
class CRUDMixin(APITestCase):
url = ''
data = {}
def assert_fetch(self, user=None, expected=status.HTTP_403_FORBIDDEN):
if user:
self.client.force_login(user)
response = self.client.get(self.url)
self.assertEqual(response.status_code, expected)
return response
def assert_create(self, user=None, expected=status.HTTP_403_FORBIDDEN):
if user:
self.client.force_login(user)
response = self.client.post(self.url, data=self.data, format='json')
self.assertEqual(response.status_code, expected)
return response
def assert_update(self, user=None, expected=status.HTTP_403_FORBIDDEN):
if user:
self.client.force_login(user)
response = self.client.patch(self.url, data=self.data, format='json')
self.assertEqual(response.status_code, expected)
return response
def assert_delete(self, user=None, expected=status.HTTP_403_FORBIDDEN):
if user:
self.client.force_login(user)
response = self.client.delete(self.url)
self.assertEqual(response.status_code, expected)
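CRUDMixin centralises the request-plus-status-code assertion so concrete test cases only declare `url` and `data`. A dependency-free sketch of the same pattern, with the HTTP client stubbed out (an editor's illustration, not the DRF code above):

```python
class StubClient:
    # Stand-in for APITestCase's self.client: every request is rejected
    # with 403 unless a user has logged in first.
    def __init__(self):
        self.user = None

    def force_login(self, user):
        self.user = user

    def get(self, url):
        return {"url": url, "status_code": 200 if self.user else 403}

class CRUDMixin:
    url = ""

    def __init__(self):
        self.client = StubClient()

    def assert_fetch(self, user=None, expected=403):
        if user:
            self.client.force_login(user)
        response = self.client.get(self.url)
        assert response["status_code"] == expected
        return response

class ProjectTests(CRUDMixin):
    url = "/v1/projects"

anonymous = ProjectTests().assert_fetch(expected=403)
logged_in = ProjectTests().assert_fetch(user="alice", expected=200)
```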
| 35.085714 | 77 | 0.680782 | 152 | 1,228 | 5.355263 | 0.243421 | 0.09828 | 0.058968 | 0.09828 | 0.816953 | 0.816953 | 0.816953 | 0.816953 | 0.749386 | 0.749386 | 0 | 0.012552 | 0.221498 | 1,228 | 34 | 78 | 36.117647 | 0.838912 | 0 | 0 | 0.535714 | 0 | 0 | 0.006515 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4d5f4aaf8d8d3b0d3cb2372eaa66ad648ddbed2 | 64 | py | Python | package_test1.py | lwinthida/python-exercises | 47a75422bf97c7694db99517ea93cb236662db79 | [
"MIT"
] | null | null | null | package_test1.py | lwinthida/python-exercises | 47a75422bf97c7694db99517ea93cb236662db79 | [
"MIT"
] | null | null | null | package_test1.py | lwinthida/python-exercises | 47a75422bf97c7694db99517ea93cb236662db79 | [
"MIT"
] | null | null | null | import package_python.ex41py3
package_python.ex41py3.convert()
| 16 | 32 | 0.859375 | 8 | 64 | 6.625 | 0.625 | 0.490566 | 0.754717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.0625 | 64 | 3 | 33 | 21.333333 | 0.783333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
35260403c0f8b26cbac6e913538193cf941589d9 | 139 | py | Python | src/django_website/blog/admin.py | jdheinz/project-ordo_ab_chao | 4063f93b297bab43cff6ca64fa5ba103f0c75158 | [
"MIT"
] | 2 | 2019-09-23T18:42:32.000Z | 2019-09-27T00:33:38.000Z | src/django_website/blog/admin.py | jdheinz/project-ordo_ab_chao | 4063f93b297bab43cff6ca64fa5ba103f0c75158 | [
"MIT"
] | 6 | 2021-03-19T03:25:33.000Z | 2022-02-10T08:48:14.000Z | src/django_website/blog/admin.py | jdheinz/project-ordo_ab_chao | 4063f93b297bab43cff6ca64fa5ba103f0c75158 | [
"MIT"
] | 6 | 2019-09-23T18:53:41.000Z | 2020-02-06T00:20:06.000Z | from django.contrib import admin
from .models import BlogPost
# register BlogPost instance with django admin
admin.site.register(BlogPost) | 27.8 | 46 | 0.834532 | 19 | 139 | 6.105263 | 0.578947 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115108 | 139 | 5 | 47 | 27.8 | 0.943089 | 0.316547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3546825b281f49189f3035dceb7f10879fb10604 | 10,138 | py | Python | cheetah/plot.py | BiocomputeLab/cheetah | cf5b32e6de7af5c4bddc715817b5a9b3b3f5e658 | [
"MIT"
] | null | null | null | cheetah/plot.py | BiocomputeLab/cheetah | cf5b32e6de7af5c4bddc715817b5a9b3b3f5e658 | [
"MIT"
] | null | null | null | cheetah/plot.py | BiocomputeLab/cheetah | cf5b32e6de7af5c4bddc715817b5a9b3b3f5e658 | [
"MIT"
] | 1 | 2021-06-25T01:01:31.000Z | 2021-06-25T01:01:31.000Z |
import numpy as np
import matplotlib.pyplot as plt
plt.ioff()
from mpl_toolkits.axes_grid1 import ImageGrid
def plot_acc_loss(acc, val_acc, loss, val_loss, filename, save_as='pdf'):
'''Function to plot training / validation accuracy and loss'''
epochs = range(1, len(acc) + 1)
# Overall accuracy
plt.figure(figsize=(5, 4))
plt.plot(epochs, acc, 'r', label='training acc')
plt.plot(epochs, val_acc, 'b', label='validation acc')
plt.xlabel('epochs')
plt.ylabel('overall accuracy')
plt.title('Training and validation accuracy')
plt.grid()
legend = plt.legend()
legend.get_frame().set_alpha(1)
plt.savefig(filename + '_acc' + '.' + save_as, bbox_inches='tight')
# Loss
plt.figure(figsize=(5, 4))
plt.plot(epochs, loss, 'r', label='training loss')
plt.plot(epochs, val_loss, 'b', label='validation loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.title('Training and validation loss')
plt.grid()
legend = plt.legend()
legend.get_frame().set_alpha(1)
plt.savefig(filename + '_loss' + '.' + save_as, bbox_inches='tight')
def plot_predprob(y_pred, class_names=None, n_classes=3, xtick_int=50,
ytick_int=50, show_plt=True, save_imag=True,
imag_name='pred_prob', save_as='pdf'):
    '''Function to plot predicted probability masks'''
    if class_names is None:
class_names = [str(k) for k in range(1, y_pred.shape[-1]+1)]
fig = plt.figure(figsize=(11, 4))
grid = ImageGrid(fig, rect=[0.085, 0.07, 0.85, 0.9],
nrows_ncols=(1, n_classes),
axes_pad=0.25,
share_all=True,
cbar_location="right",
cbar_mode="single",
cbar_size="7%",
cbar_pad=0.2)
for i in range(0, y_pred.shape[-1]):
ax = grid[i]
im = ax.imshow(y_pred[:, :, i], vmin=0, vmax=1, cmap='jet',
interpolation='nearest')
ax.set_xlim(0, y_pred[:, :, i].shape[1])
ax.set_ylim(0, y_pred[:, :, i].shape[0])
ax.set_ylim(ax.get_ylim()[::-1])
ax.set_xticks(np.arange(0, y_pred.shape[1]+1, xtick_int))
ax.set_yticks(np.arange(0, y_pred.shape[0]+1, ytick_int))
ax.set_xlabel(r'image width [pixel]')
ax.set_ylabel(r'image height [pixel]')
ax.set_title('Class ' + class_names[i])
ax.cax.colorbar(im)
ax.cax.toggle_label(True)
if show_plt == True:
plt.show()
if save_imag == True:
plt.savefig(imag_name + '.' + save_as, bbox_inches='tight')
if show_plt == False:
# Clear memory (or matplotlib history) although the figure
# is not shown
plt.close()
def plot_segmask(y_pred, y_true=None, class_to_plot=2, xtick_int=50,
ytick_int=50, show_plt=True, save_imag=True,
imag_name='pred_mask', save_as='pdf'):
'''Function to plot segmentation mask (2 classes)'''
m_temp = np.argmax(y_pred, axis=-1) + 1
pred_mask = (m_temp*(m_temp==class_to_plot)) * (1.0/class_to_plot)
# Plot prediction mask and ground truth
if y_true is not None:
g_temp = np.argmax(y_true, axis=-1) + 1
gr_truth = (g_temp*(g_temp==class_to_plot)) * (1.0/class_to_plot)
grid_cmap = ['jet', 'gray']
grid_imag = [pred_mask, gr_truth]
grid_title = [r'Segmentation mask', r'Ground truth']
fig = plt.figure(figsize=(6.8, 4))
grid = ImageGrid(fig, rect=[0.1, 0.07, 0.85, 0.9],
nrows_ncols=(1, 2),
axes_pad=0.25,
share_all=True)
for i in range(0, 2):
ax = grid[i]
ax.imshow(grid_imag[i], vmin=0, vmax=1, cmap=grid_cmap[i])
ax.set_xlim(0, grid_imag[i].shape[1])
ax.set_ylim(0, grid_imag[i].shape[0])
ax.set_ylim(ax.get_ylim()[::-1])
ax.set_xticks(np.arange(0, pred_mask.shape[1]+1, xtick_int))
ax.set_yticks(np.arange(0, pred_mask.shape[0]+1, ytick_int))
ax.set_xlabel(r'image width [pixel]')
ax.set_ylabel(r'image height [pixel]')
ax.set_title(grid_title[i])
else:
# Plot prediction mask
plt.figure(figsize=(4.5, 4))
plt.imshow(pred_mask, vmin=0, vmax=1, cmap='jet')
        ax = plt.gca()
        ax.set_xlim(0, pred_mask.shape[1])
        ax.set_ylim(0, pred_mask.shape[0])
        ax.set_ylim(ax.get_ylim()[::-1])
plt.xticks(np.arange(0, pred_mask.shape[1]+1, xtick_int))
plt.yticks(np.arange(0, pred_mask.shape[0]+1, ytick_int))
plt.xlabel(r'image width [pixel]')
plt.ylabel(r'image height [pixel]')
plt.title(r'Segmentation mask')
if show_plt == True:
plt.show()
if save_imag == True:
plt.savefig(imag_name + '.' + save_as, bbox_inches='tight')
if show_plt == False:
# Clear memory (or matplotlib history) although the figure
# is not shown
plt.close()
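The mask construction at the top of plot_segmask is the part worth spelling out: argmax over the class axis yields 1-based per-pixel labels, and multiplying by the boolean (m == class_to_plot) then rescaling maps the chosen class to 1.0 and every other pixel to 0.0. A small numpy check of that step (an editor's illustration):

```python
import numpy as np

y_pred = np.zeros((2, 2, 3))
y_pred[0, 0, 1] = 1.0   # pixel (0, 0) predicted as class 2
y_pred[0, 1, 2] = 1.0   # pixel (0, 1) predicted as class 3
y_pred[1, :, 0] = 1.0   # bottom row predicted as class 1

class_to_plot = 2
m = np.argmax(y_pred, axis=-1) + 1                         # 1-based labels
mask = (m * (m == class_to_plot)) * (1.0 / class_to_plot)  # chosen class -> 1.0
```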
def plot_segmask_input(y_pred, x_in, y_true=None, class_to_plot=2,
input_to_plot=1, input_name='channel 1',
xtick_int=50, ytick_int=50, show_plt=True,
save_imag=True, imag_name='pred_mask_input',
save_as='pdf'):
'''Function to plot segmentation mask (2 classes)
including 1 input channel'''
m_temp = np.argmax(y_pred, axis=-1) + 1
pred_mask = (m_temp*(m_temp==class_to_plot)) * (1.0/class_to_plot)
if y_true is not None:
# Plot prediction mask, ground truth and input image (1 channel)
g_temp = np.argmax(y_true, axis=-1) + 1
gr_truth = (g_temp*(g_temp==class_to_plot)) * (1.0/class_to_plot)
grid_cmap = ['jet', 'gray', 'gray']
grid_imag = [pred_mask, gr_truth, x_in[..., input_to_plot-1]]
grid_title = [r'Segmentation mask', r'Ground truth', input_name]
fig_width = 10.5
n_cols = 3
else:
# Plot prediction mask and input image (1 channel)
grid_cmap = ['jet', 'gray']
grid_imag = [pred_mask, x_in[..., input_to_plot-1]]
grid_title = [r'Segmentation mask', input_name]
fig_width = 6.8
n_cols = 2
fig = plt.figure(figsize=(fig_width, 4))
grid = ImageGrid(fig, rect=[0.1, 0.07, 0.85, 0.9],
nrows_ncols=(1, n_cols),
axes_pad=0.25,
share_all=True)
for i in range(0, n_cols):
ax = grid[i]
ax.imshow(grid_imag[i], vmin=0, vmax=1, cmap=grid_cmap[i])
        ax.set_xlim(0, grid_imag[i].shape[1])
        ax.set_ylim(0, grid_imag[i].shape[0])
        ax.set_ylim(ax.get_ylim()[::-1])
ax.set_xticks(np.arange(0, pred_mask.shape[1]+1, xtick_int))
ax.set_yticks(np.arange(0, pred_mask.shape[0]+1, ytick_int))
ax.set_xlabel(r'image width [pixel]')
ax.set_ylabel(r'image height [pixel]')
ax.set_title(grid_title[i])
if show_plt == True:
plt.show()
if save_imag == True:
plt.savefig(imag_name + '.' + save_as, bbox_inches='tight')
if show_plt == False:
# Clear memory (or matplotlib history) although the figure
# is not shown
plt.close()
def plot_segmask_3cl_input(y_pred, x_in, y_true=None, class_to_plot=(2, 3),
input_to_plot=1, input_name='channel 1',
xtick_int=50, ytick_int=50, show_plt=True,
save_imag=True, imag_name='pred_mask_input',
save_as='pdf'):
    '''Function to plot segmentation mask (3 classes)
    including 1 input channel;
    prototype version that could be merged with plot_segmask_input'''
m_temp = np.argmax(y_pred, axis=-1) + 1
pred_mask = ((m_temp*(m_temp==class_to_plot[0])) * (0.5/class_to_plot[0])
) + (
(m_temp*(m_temp==class_to_plot[1])) * (1.0/class_to_plot[1]))
if y_true is not None:
# Plot prediction mask, ground truth and input image (1 channel)
g_temp = np.argmax(y_true, axis=-1) + 1
gr_truth = ((g_temp*(g_temp==class_to_plot[0])) * (0.5/class_to_plot[0])
) + (
(g_temp*(g_temp==class_to_plot[1])) * (1.0/class_to_plot[1]))
grid_cmap = ['jet', 'gray', 'gray']
grid_imag = [pred_mask, gr_truth, x_in[..., input_to_plot-1]]
grid_title = [r'Segmentation mask', r'Ground truth', input_name]
fig_width = 10.5
n_cols = 3
else:
# Plot prediction mask and input image (1 channel)
grid_cmap = ['jet', 'gray']
grid_imag = [pred_mask, x_in[..., input_to_plot-1]]
grid_title = [r'Segmentation mask', input_name]
fig_width = 6.8
n_cols = 2
fig = plt.figure(figsize=(fig_width, 4))
grid = ImageGrid(fig, rect=[0.1, 0.07, 0.85, 0.9],
nrows_ncols=(1, n_cols),
axes_pad=0.25,
share_all=True)
for i in range(0, n_cols):
ax = grid[i]
ax.imshow(grid_imag[i], vmin=0, vmax=1, cmap=grid_cmap[i])
ax.set_xlim(0, grid_imag[i].shape[0])
ax.set_ylim(0, grid_imag[i].shape[1])
ax.set_xticks(np.arange(0, pred_mask.shape[1]+1, xtick_int))
ax.set_yticks(np.arange(0, pred_mask.shape[0]+1, ytick_int))
ax.set_xlabel(r'image width [pixel]')
ax.set_ylabel(r'image height [pixel]')
ax.set_title(grid_title[i])
if show_plt == True:
plt.show()
if save_imag == True:
plt.savefig(imag_name + '.' + save_as, bbox_inches='tight')
if show_plt == False:
# Clear memory (or matplotlib history) although the figure
# is not shown
plt.close()
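# ---------------------------------------------------------------------------
# Standalone numpy sketch (illustration only, not called by the functions
# above): the class-to-grey-value encoding used for pred_mask and gr_truth
# boils down to mapping the first plotted class to 0.5, the second to 1.0
# and every other label to 0.0.
import numpy as np


def _encode_two_class_mask(labels, cls_a, cls_b):
    """Return a float mask with cls_a -> 0.5, cls_b -> 1.0, rest -> 0.0."""
    return 0.5 * (labels == cls_a) + 1.0 * (labels == cls_b)
# ---------------------------------------------------------------------------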
# === File: pandoc_chem_struct/__init__.py
# === Repo: scotthartley/pandoc-chem-struct (license: MIT)
from .pandoc_chem_struct import *
# === File: src/iOS/toga_iOS/libs/__init__.py
# === Repo: luizoti/toga (license: BSD-3-Clause)
from .core_graphics import *  # NOQA
from .foundation import * # NOQA
from .uikit import * # NOQA
from .webkit import * # NOQA
# === File: src/waldur_vmware/signals.py
# === Repo: opennode/nodeconductor-assembly-waldur (license: MIT)
from django.dispatch import Signal
# providing_args=['vm']
vm_created = Signal()
# providing_args=['vm']
vm_updated = Signal()
# === File: ECommerce/populate.py
# === Repo: suhasbs/EcommerceWebsite (license: MIT)
import csv
from ecom_webapp.models import Product, Books, Laptop, Furniture, ProductImages
import pandas as pd
# with open('csv_files/book_data_educ.csv') as f:
# reader = csv.reader(f)
# header = next(reader)
# print reader
# ctr=0
# for row in reader:
# ctr+=1
# if ctr==1:
# continue
# # if is_ascii(row[6]):
# print str(row[6].split('(')[0])
# # print row[2]
# # for i in range(21, 41):
# # Books.objects.get(pdt_id='BOOK_'+str(i)).delete()
# # Product.objects.get(pdt_id='BOOK_'+str(i)).delete()
# highs= row[4]
# highs_list = highs.split(',')
# for h in highs_list:
# if 'Publisher:' in h:
# publisher = h[10:]
# # print publisher
# Product.objects.create(pdt_id="BOOK_"+str(int(row[0])+41), brand_name=str(row[6].split('(')[0]), units_in_stock=200, description=row[4])
# Books.objects.create(pdt=Product.objects.get(pk="BOOK_"+str(int(row[0])+41)), genre=row[3], summary=row[2], publisher=publisher)
# with open('csv_files/laptops.csv') as f:
# reader = csv.reader(f)
# header = next(reader)
# ctr=0
# for row in reader:
# ctr+=1
# if ctr<=57:
# continue
# hd_cap = row[1]
# name = row[2]
# model_no = row[3]
# processor = row[5]+" "+row[6]+" "+row[7]
# ram = row[8]
# display_size = row[9]
# Product.objects.create(pdt_id="LPT_"+str(int(row[0])+1), brand_name=name, units_in_stock=200)
# Laptop.objects.create(laptop=Product.objects.get(pk="LPT_"+str(int(row[0])+1)), model_no=model_no, display_size=display_size, harddisk_capacity=hd_cap, ram=ram, processor=processor)
# with open('csv_files/furniture.csv') as f:
# reader = csv.reader(f)
# header = next(reader)
# ctr=0
# for row in reader:
# ctr+=1
# if ctr<=2:
# continue
# # hd_cap = row[1]
# name = row[1]
# description = row[4]+','+row[5]+","+row[6]
# unit_weight = row[8]
# # processor = row[5]+" "+row[6]+" "+row[7]
# dim = row[3]+" x "+row[9]+" x "+row[2]
# print dim
# # display_size = row[9]
# fur_type = row[7]
# material = row[6]
# Product.objects.create(pdt_id="FUR_"+str(int(row[0])+1), brand_name=name, units_in_stock=200, unit_weight=unit_weight, description=description)
# Furniture.objects.create(furniture=Product.objects.get(pk="FUR_"+str(int(row[0])+1)), type=fur_type,dimensions=dim, material=material)
# with open('csv_files/mobile_data.csv') as f:
# reader = csv.reader(f)
# header = next(reader)
# # ctr=0
# data = {'ind':[], 'image_url':[]}
# for row in reader:
# # ctr+=1
# # if ctr<=2:
# # continue
# # hd_cap = row[1]
# # name = row[1]
# # description = row[4]+','+row[5]+","+row[6]
# # unit_weight = row[8]
# # # processor = row[5]+" "+row[6]+" "+row[7]
# # dim = row[3]+" x "+row[9]+" x "+row[2]
# # print dim
# # display_size = row[9]
# # fur_type = row[7]
# # material = row[6]
# images = row[-1]
# images = images.split('\n')
# # print images
# if row[0]=="12":
# for image in images:
# if image:
# if Product.objects.filter(pk='MOB_'+str(int(row[0])+1)) and not ProductImages.objects.filter(product=Product.objects.get(pk='MOB_'+str(int(row[0])+1)), image_url=image) :
# ProductImages.objects.create(product=Product.objects.get(pk='MOB_'+str(int(row[0])+1)), image_url=image)
# data['ind'].append(int(row[0])+1)
# data['image_url'].append(image)
# print images.split('\n')
# ProductImages.objects.create(product=Product.objects.get(pk='MOB_1'), image_url='example.jpeg')
# ProductImages.objects.create(product=Product.objects.get(pk='MOB_1'), image_url='example2.jpeg')
# print ProductImages.objects.filter(pk='MOB_1')[1].image_url
# Product.objects.create(pdt_id="FUR_"+str(int(row[0])+1), brand_name=name, units_in_stock=200, unit_weight=unit_weight, description=description)
# Furniture.objects.create(furniture=Product.objects.get(pk="FUR_"+str(int(row[0])+1)), type=fur_type,dimensions=dim, material=material)
# df = pd.DataFrame(data)
# df.to_csv('csv_files/mobile_images.csv')
# print df
# Querying from Images Table
# print ProductImages.objects.filter(product=Product.objects.get(pk='MOB_1'))
# with open('csv_files/book_data_educ.csv') as f:
# reader = csv.reader(f)
# header = next(reader)
# # ctr=0
# data = {'ind':[], 'image_url':[]}
# for row in reader:
# images = row[-2]
# images = images.split('\n')
# for image in images:
# if image:
# if Product.objects.filter(pk='BOOK_'+str(int(row[0])+41)) and not ProductImages.objects.filter(product=Product.objects.get(pk='BOOK_'+str(int(row[0])+21)), image_url=image) :
# ProductImages.objects.create(product=Product.objects.get(pk='BOOK_'+str(int(row[0])+41)), image_url=image)
# print "Inserting book:"+str('BOOK_'+str(int(row[0])+41)+" "+image)
# with open('csv_files/furniture.csv') as f:
# reader = csv.reader(f)
# header = next(reader)
# # ctr=0
# data = {'ind':[], 'image_url':[]}
# for row in reader:
# images = row[-1]
# images = images.split('\n')
# for image in images:
# if image:
# if Product.objects.filter(pk='FUR_'+str(int(row[0])+1)) and not ProductImages.objects.filter(product=Product.objects.get(pk='FUR_'+str(int(row[0])+1)), image_url=image) :
# ProductImages.objects.create(product=Product.objects.get(pk='FUR_'+str(int(row[0])+1)), image_url=image)
# print "Inserting furniture:"+str('FUR_'+str(int(row[0])+1)+" "+image)
with open('csv_files/laptops.csv') as f:
    print "here"
    reader = csv.reader(f)
    header = next(reader)
    # ctr=0
    data = {'ind': [], 'image_url': []}
    for row in reader:
        images = row[-1]
        images = images.split('\n')
        # print images
        for image in images:
            if image:
                if Product.objects.filter(pk='LPT_'+str(int(row[0])+1)) and not ProductImages.objects.filter(product=Product.objects.get(pk='LPT_'+str(int(row[0])+1)), image_url=image):
                    ProductImages.objects.create(product=Product.objects.get(pk='LPT_'+str(int(row[0])+1)), image_url=image)
                    print "Inserting laptop:"+str('LPT_'+str(int(row[0])+1)+" "+image)
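# ---------------------------------------------------------------------------
# Standalone sketch (illustration only): the image column of each CSV row
# holds several URLs separated by newlines, and the inner loop above boils
# down to this helper that keeps only the non-empty entries.
def split_image_urls(cell):
    """Return the non-empty, newline-separated URLs stored in one CSV cell."""
    return [url for url in cell.split('\n') if url]
# ---------------------------------------------------------------------------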
52c87a9e691e32953216ba1241539ec221660683 | 125 | py | Python | wouso/games/quiz/admin.py | AlexandruGhergut/wouso | f26244ff58ae626808ae8c58ccc93d21f9f2666f | [
"Apache-2.0"
] | 117 | 2015-01-02T18:07:33.000Z | 2021-01-06T22:36:25.000Z | wouso/games/quiz/admin.py | AlexandruGhergut/wouso | f26244ff58ae626808ae8c58ccc93d21f9f2666f | [
"Apache-2.0"
] | 229 | 2015-01-12T07:07:58.000Z | 2019-10-12T08:27:01.000Z | wouso/games/quiz/admin.py | AlexandruGhergut/wouso | f26244ff58ae626808ae8c58ccc93d21f9f2666f | [
"Apache-2.0"
] | 96 | 2015-01-07T05:26:09.000Z | 2020-06-25T07:28:51.000Z | from django.contrib import admin
from models import QuizUser, Quiz
admin.site.register(Quiz)
admin.site.register(QuizUser)
# === File: gslib/tests/test_rm.py
# === Repo: jterrace/gsutil (licenses: ECL-2.0, Apache-2.0)
# Copyright 2013 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gslib.tests.testcase as testcase
from gslib.util import Retry
from gslib.tests.util import ObjectToURI as suri
class TestRm(testcase.GsUtilIntegrationTestCase):
  """Integration tests for rm command."""
  def test_all_versions_current(self):
    """Test that 'rm -a' for an object with a current version works."""
    bucket_uri = self.CreateVersionedBucket()
    key_uri = bucket_uri.clone_replace_name('foo')
    key_uri.set_contents_from_string('bar')
    g1 = key_uri.generation
    key_uri.set_contents_from_string('baz')
    g2 = key_uri.generation
    stderr = self.RunGsUtil(['-m', 'rm', '-a', suri(key_uri)],
                            return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 2)
    self.assertIn('Removing %s#%s...' % (suri(key_uri), g1), stderr)
    self.assertIn('Removing %s#%s...' % (suri(key_uri), g2), stderr)

    # Use @Retry as hedge against bucket listing eventual consistency.
    @Retry(AssertionError, tries=3, delay=1, backoff=1)
    def _Check1():
      stdout = self.RunGsUtil(['ls', '-a', suri(bucket_uri)],
                              return_stdout=True)
      self.assertEqual(stdout, '')
    _Check1()
  def test_all_versions_no_current(self):
    """Test that 'rm -a' for an object without a current version works."""
    bucket_uri = self.CreateVersionedBucket()
    key_uri = bucket_uri.clone_replace_name('foo')
    key_uri.set_contents_from_string('bar')
    g1 = key_uri.generation
    key_uri.set_contents_from_string('baz')
    g2 = key_uri.generation
    stderr = self.RunGsUtil(['rm', suri(key_uri)], return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 1)
    self.assertIn('Removing %s...' % suri(key_uri), stderr)
    stderr = self.RunGsUtil(['-m', 'rm', '-a', suri(key_uri)],
                            return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 2)
    self.assertIn('Removing %s#%s...' % (suri(key_uri), g1), stderr)
    self.assertIn('Removing %s#%s...' % (suri(key_uri), g2), stderr)

    # Use @Retry as hedge against bucket listing eventual consistency.
    @Retry(AssertionError, tries=3, delay=1, backoff=1)
    def _Check1():
      stdout = self.RunGsUtil(['ls', '-a', suri(bucket_uri)],
                              return_stdout=True)
      self.assertEqual(stdout, '')
    _Check1()
  def test_fails_for_missing_obj(self):
    bucket_uri = self.CreateVersionedBucket()
    stderr = self.RunGsUtil(['rm', '-a', '%s/foo' % suri(bucket_uri)],
                            return_stderr=True, expected_status=1)
    self.assertIn('Not Found', stderr)
  def test_remove_all_versions_recursive_on_bucket(self):
    """Test that 'rm -ar' works on bucket."""
    bucket_uri = self.CreateVersionedBucket()
    k1_uri = bucket_uri.clone_replace_name('foo')
    k2_uri = bucket_uri.clone_replace_name('foo2')
    k1_uri.set_contents_from_string('bar')
    k2_uri.set_contents_from_string('bar2')
    k1g1 = k1_uri.generation
    k2g1 = k2_uri.generation
    k1_uri.set_contents_from_string('baz')
    k2_uri.set_contents_from_string('baz2')
    k1g2 = k1_uri.generation
    k2g2 = k2_uri.generation
    stderr = self.RunGsUtil(['rm', '-ar', suri(bucket_uri)],
                            return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 4)
    self.assertIn('Removing %s#%s...' % (suri(k1_uri), k1g1), stderr)
    self.assertIn('Removing %s#%s...' % (suri(k1_uri), k1g2), stderr)
    self.assertIn('Removing %s#%s...' % (suri(k2_uri), k2g1), stderr)
    self.assertIn('Removing %s#%s...' % (suri(k2_uri), k2g2), stderr)

    # Use @Retry as hedge against bucket listing eventual consistency.
    @Retry(AssertionError, tries=3, delay=1, backoff=1)
    def _Check1():
      stdout = self.RunGsUtil(['ls', '-a', suri(bucket_uri)],
                              return_stdout=True)
      self.assertEqual(stdout, '')
    _Check1()
  def test_remove_all_versions_recursive_on_subdir(self):
    """Test that 'rm -ar' works on subdir."""
    bucket_uri = self.CreateVersionedBucket()
    k1_uri = bucket_uri.clone_replace_name('dir/foo')
    k2_uri = bucket_uri.clone_replace_name('dir/foo2')
    k1_uri.set_contents_from_string('bar')
    k2_uri.set_contents_from_string('bar2')
    k1g1 = k1_uri.generation
    k2g1 = k2_uri.generation
    k1_uri.set_contents_from_string('baz')
    k2_uri.set_contents_from_string('baz2')
    k1g2 = k1_uri.generation
    k2g2 = k2_uri.generation
    stderr = self.RunGsUtil(['rm', '-ar', '%s/dir' % suri(bucket_uri)],
                            return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 4)
    self.assertIn('Removing %s#%s...' % (suri(k1_uri), k1g1), stderr)
    self.assertIn('Removing %s#%s...' % (suri(k1_uri), k1g2), stderr)
    self.assertIn('Removing %s#%s...' % (suri(k2_uri), k2g1), stderr)
    self.assertIn('Removing %s#%s...' % (suri(k2_uri), k2g2), stderr)

    # Use @Retry as hedge against bucket listing eventual consistency.
    @Retry(AssertionError, tries=3, delay=1, backoff=1)
    def _Check1():
      stdout = self.RunGsUtil(['ls', '-a', suri(bucket_uri)],
                              return_stdout=True)
      self.assertEqual(stdout, '')
    _Check1()
  def test_some_missing(self):
    """Test that 'rm -a' fails when some but not all uris don't exist."""
    bucket_uri = self.CreateVersionedBucket()
    key_uri = bucket_uri.clone_replace_name('foo')
    key_uri.set_contents_from_string('bar')
    stderr = self.RunGsUtil(['rm', '-a', suri(key_uri),
                             '%s/missing' % suri(bucket_uri)],
                            return_stderr=True, expected_status=1)
    self.assertEqual(stderr.count('Removing gs://'), 2)
    self.assertIn('Not Found', stderr)
  def test_some_missing_force(self):
    """Test that 'rm -af' succeeds despite hidden first uri."""
    bucket_uri = self.CreateVersionedBucket()
    key_uri = bucket_uri.clone_replace_name('foo')
    key_uri.set_contents_from_string('bar')
    stderr = self.RunGsUtil(['rm', '-af', suri(key_uri),
                             '%s/missing' % suri(bucket_uri)],
                            return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 2)

    # Use @Retry as hedge against bucket listing eventual consistency.
    @Retry(AssertionError, tries=3, delay=1, backoff=1)
    def _Check1():
      stdout = self.RunGsUtil(['ls', '-a', suri(bucket_uri)],
                              return_stdout=True)
      self.assertEqual(stdout, '')
    _Check1()
  def test_folder_objects_deleted(self):
    """Test for 'rm -r' of a folder with a dir_$folder$ marker."""
    bucket_uri = self.CreateVersionedBucket()
    key_uri = bucket_uri.clone_replace_name('abc/o1')
    key_uri.set_contents_from_string('foobar')
    folderkey = bucket_uri.clone_replace_name('abc_$folder$')
    folderkey.set_contents_from_string('')
    stderr = self.RunGsUtil(['rm', '-r', '%s/abc' % suri(bucket_uri)],
                            return_stderr=True)
    self.assertEqual(stderr.count('Removing gs://'), 2)

    # Use @Retry as hedge against bucket listing eventual consistency.
    @Retry(AssertionError, tries=3, delay=1, backoff=1)
    def _Check1():
      stdout = self.RunGsUtil(['ls', suri(bucket_uri)], return_stdout=True)
      self.assertEqual(stdout, '')
    _Check1()
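# ---------------------------------------------------------------------------
# Standalone sketch (not gslib.util.Retry's actual implementation) of the
# retry-with-backoff pattern the _Check1 helpers above rely on: an eventually
# consistent listing may briefly fail the assertion, so the check is re-run a
# few times before the failure is treated as real.
import functools
import time


def _simple_retry(exc_type, tries=3, delay=1, backoff=1):
  def decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
      wait = delay
      for attempt in range(tries):
        try:
          return func(*args, **kwargs)
        except exc_type:
          if attempt == tries - 1:
            raise
          time.sleep(wait)
          wait *= backoff
    return wrapper
  return decorator
# ---------------------------------------------------------------------------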
52ff99a5d12fa28c606cf668535176710c574a8c | 1,564 | py | Python | src/main/python/sql_smith/builder/criteria_builder.py | fbraem/sql-smith | b4dbf3ffec02fd11c6f3c074e48325e3fdad46fb | [
"MIT"
] | null | null | null | src/main/python/sql_smith/builder/criteria_builder.py | fbraem/sql-smith | b4dbf3ffec02fd11c6f3c074e48325e3fdad46fb | [
"MIT"
] | null | null | null | src/main/python/sql_smith/builder/criteria_builder.py | fbraem/sql-smith | b4dbf3ffec02fd11c6f3c074e48325e3fdad46fb | [
"MIT"
] | null | null | null | from sql_smith.functions import criteria, listing
class CriteriaBuilder:
    def __init__(self, statement: 'StatementInterface'):
        self._statement = statement

    def between(self, start, end) -> 'CriteriaInterface':
        return criteria('{} BETWEEN {} AND {}', self._statement, start, end)

    def not_between(self, start, end) -> 'CriteriaInterface':
        return criteria('{} NOT BETWEEN {} AND {}', self._statement, start, end)

    def in_(self, *args) -> 'CriteriaInterface':
        return criteria('{} IN ({})', self._statement, listing(args))

    def not_in(self, *args) -> 'CriteriaInterface':
        return criteria('{} NOT IN ({})', self._statement, listing(args))

    def eq(self, value) -> 'CriteriaInterface':
        return criteria('{} = {}', self._statement, value)

    def not_eq(self, value) -> 'CriteriaInterface':
        return criteria('{} != {}', self._statement, value)

    def gt(self, value) -> 'CriteriaInterface':
        return criteria('{} > {}', self._statement, value)

    def gte(self, value) -> 'CriteriaInterface':
        return criteria('{} >= {}', self._statement, value)

    def lt(self, value) -> 'CriteriaInterface':
        return criteria('{} < {}', self._statement, value)

    def lte(self, value) -> 'CriteriaInterface':
        return criteria('{} <= {}', self._statement, value)

    def is_null(self) -> 'CriteriaInterface':
        return criteria('{} IS NULL', self._statement)

    def is_not_null(self) -> 'CriteriaInterface':
        return criteria('{} IS NOT NULL', self._statement)
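# ---------------------------------------------------------------------------
# Standalone sketch (illustration only, not sql_smith's criteria()): every
# builder method above pairs a '{}' template with a statement and its
# operands; rendering such a template is essentially positional formatting.
def _fill_template(template, *parts):
    """Render a criteria-style '{}' template with its operands."""
    return template.format(*parts)
# ---------------------------------------------------------------------------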
# === File: disentanglement_lib/methods/shared/layers.py
# === Repo: homaralex/disentanglement_lib (license: Apache-2.0)
import gin
import numpy as np
import tensorflow as tf
from tensorflow.python.layers.core import Dense
from tensorflow.python.ops import init_ops
from tensorflow.python.eager import context
from tensorflow.python.framework import common_shapes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_math_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn
from tensorflow.python.ops import standard_ops
_EPS = 1e-8
@gin.configurable('masked_layer', whitelist=['mask_trainable'])
class _BaseMaskedLayer:
  def __init__(self, perc_sparse=0, mask_trainable=False, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.perc_sparse = perc_sparse
    self.mask_trainable = mask_trainable

  @property
  def mask_shape(self):
    return self.kernel.shape[-2:]

  def _init_mask(self):
    mask_val = (np.random.random(self.mask_shape) >= self.perc_sparse).astype('float')
    self.mask = self.add_weight(
        name='mask',
        shape=self.mask_shape,
        initializer=init_ops.Constant(mask_val),
        trainable=self.mask_trainable,
        dtype=self.dtype,
    )

  def build(self, input_shape):
    super().build(input_shape)
    self.built = False
    self._init_mask()
    self.built = True
class MaskedConv2d(_BaseMaskedLayer, tf.layers.Conv2D):
  def call(self, inputs):
    outputs = self._convolution_op(
        inputs,
        # that's the actual change
        self.kernel * self.mask
    )
    if self.use_bias:
      if self.data_format == 'channels_first':
        if self.rank == 1:
          # nn.bias_add does not accept a 1D input tensor.
          bias = array_ops.reshape(self.bias, (1, self.filters, 1))
          outputs += bias
        else:
          outputs = nn.bias_add(outputs, self.bias, data_format='NCHW')
      else:
        outputs = nn.bias_add(outputs, self.bias, data_format='NHWC')
    if self.activation is not None:
      return self.activation(outputs)
    return outputs
def masked_conv2d(
    inputs,
    filters,
    kernel_size,
    strides=(1, 1),
    padding='valid',
    data_format='channels_last',
    dilation_rate=(1, 1),
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=init_ops.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None,
    perc_sparse=0,
):
  layer = MaskedConv2d(
      filters=filters,
      kernel_size=kernel_size,
      strides=strides,
      padding=padding,
      data_format=data_format,
      dilation_rate=dilation_rate,
      activation=activation,
      use_bias=use_bias,
      kernel_initializer=kernel_initializer,
      bias_initializer=bias_initializer,
      kernel_regularizer=kernel_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      kernel_constraint=kernel_constraint,
      bias_constraint=bias_constraint,
      trainable=trainable,
      name=name,
      _reuse=reuse,
      _scope=name,
      perc_sparse=perc_sparse,
  )
  return layer.apply(inputs)
class MaskedDense(_BaseMaskedLayer, Dense):
  def call(self, inputs):
    inputs = ops.convert_to_tensor(inputs)
    rank = common_shapes.rank(inputs)
    if rank > 2:
      # Broadcasting is required for the inputs.
      outputs = standard_ops.tensordot(
          inputs,
          # that's the actual change
          self.kernel * self.mask,
          [[rank - 1], [0]]
      )
      # Reshape the output back to the original ndim of the input.
      if not context.executing_eagerly():
        shape = inputs.shape.as_list()
        output_shape = shape[:-1] + [self.units]
        outputs.set_shape(output_shape)
    else:
      # Cast the inputs to self.dtype, which is the variable dtype. We do not
      # cast if `should_cast_variables` is True, as in that case the variable
      # will be automatically casted to inputs.dtype.
      if not self._mixed_precision_policy.should_cast_variables:
        inputs = math_ops.cast(inputs, self.dtype)
      outputs = gen_math_ops.mat_mul(
          inputs,
          # that's the actual change
          self.kernel * self.mask,
      )
    if self.use_bias:
      outputs = nn.bias_add(outputs, self.bias)
    if self.activation is not None:
      return self.activation(outputs)  # pylint: disable=not-callable
    return outputs
def masked_dense(
    inputs, units,
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=init_ops.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None,
    perc_sparse=0,
):
  layer = MaskedDense(
      units=units,
      activation=activation,
      use_bias=use_bias,
      kernel_initializer=kernel_initializer,
      bias_initializer=bias_initializer,
      kernel_regularizer=kernel_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      kernel_constraint=kernel_constraint,
      bias_constraint=bias_constraint,
      trainable=trainable,
      name=name,
      _scope=name,
      _reuse=reuse,
      perc_sparse=perc_sparse,
  )
  return layer.apply(inputs)
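# ---------------------------------------------------------------------------
# Standalone numpy sketch (illustration only, not used by the layers above):
# the masked layers multiply the kernel elementwise by a fixed binary mask,
# so on average a fraction `perc_sparse` of input->output connections is
# zeroed out, as in _BaseMaskedLayer._init_mask.
import numpy as np


def _make_binary_mask(shape, perc_sparse, seed=None):
  """Return a float 0/1 mask where roughly perc_sparse of entries are zero."""
  rng = np.random.RandomState(seed)
  return (rng.random_sample(shape) >= perc_sparse).astype('float')
# ---------------------------------------------------------------------------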
# TODO other parameterization version
class _BaseVDLayer:
  def __init__(
      self,
      training_phase,
      *args,
      **kwargs,
  ):
    super().__init__(*args, **kwargs)
    self.training_phase = training_phase

  @property
  def mask_shape(self):
    return self.kernel.shape[-2:]

  def _build(self):
    self.log_sigma_2 = self.add_weight(
        name='vd_log_sigma_2',
        shape=self.mask_shape,
        initializer=init_ops.Constant(-10.),
        trainable=True,
        dtype=self.dtype,
    )

  def build(self, input_shape):
    super().build(input_shape)
    self.built = False
    self._build()
    self.built = True

  def get_log_alpha(self):
    log_alpha = tf.clip_by_value(self.log_sigma_2 - tf.log(tf.square(self.kernel) + _EPS), -8., 8.)
    return tf.identity(log_alpha, name='log_alpha')

  @property
  def vd_threshold(self):
    # TODO maybe this should be a little more elegant
    return gin.query_parameter('vd_vae.vd_threshold')

  def _get_outputs(self, inputs, layer_op):
    log_alpha = self.get_log_alpha()
    if self.training_phase:
      mu = layer_op(inputs, self.kernel)
      std = tf.sqrt(
          layer_op(
              tf.square(inputs),
              tf.exp(log_alpha) * tf.square(self.kernel),
          ) + _EPS,
      )
      noisy_out = mu + std * tf.random_normal(tf.shape(std))
      outputs = noisy_out
    else:
      select_mask = tf.cast(tf.less(log_alpha, self.vd_threshold), tf.float32)
      masked_out = layer_op(inputs, self.kernel * select_mask)
      outputs = masked_out
    return outputs
class VDConv2D(_BaseVDLayer, tf.layers.Conv2D):
  def call(self, inputs):
    outputs = self._get_outputs(inputs, self._convolution_op)
    if self.use_bias:
      if self.data_format == 'channels_first':
        if self.rank == 1:
          # nn.bias_add does not accept a 1D input tensor.
          bias = array_ops.reshape(self.bias, (1, self.filters, 1))
          outputs += bias
        else:
          outputs = nn.bias_add(outputs, self.bias, data_format='NCHW')
      else:
        outputs = nn.bias_add(outputs, self.bias, data_format='NHWC')
    if self.activation is not None:
      return self.activation(outputs)
    return outputs
def vd_conv2d(
    inputs,
    filters,
    kernel_size,
    training_phase,
    strides=(1, 1),
    padding='valid',
    data_format='channels_last',
    dilation_rate=(1, 1),
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=init_ops.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None,
):
  layer = VDConv2D(
      filters=filters,
      kernel_size=kernel_size,
      strides=strides,
      padding=padding,
      data_format=data_format,
      dilation_rate=dilation_rate,
      activation=activation,
      use_bias=use_bias,
      kernel_initializer=kernel_initializer,
      bias_initializer=bias_initializer,
      kernel_regularizer=kernel_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      kernel_constraint=kernel_constraint,
      bias_constraint=bias_constraint,
      trainable=trainable,
      name=name,
      _reuse=reuse,
      _scope=name,
      training_phase=training_phase,
  )
  return layer.apply(inputs)
class VDDense(_BaseVDLayer, Dense):
  def call(self, inputs):
    inputs = ops.convert_to_tensor(inputs)
    rank = common_shapes.rank(inputs)
    if rank > 2:
      # Broadcasting is required for the inputs.
      def _broadcasted_tensordot(_inputs, _kernel):
        return standard_ops.tensordot(
            _inputs,
            _kernel,
            [[rank - 1], [0]]
        )
      outputs = self._get_outputs(inputs, _broadcasted_tensordot)
      # Reshape the output back to the original ndim of the input.
      if not context.executing_eagerly():
        shape = inputs.shape.as_list()
        output_shape = shape[:-1] + [self.units]
        outputs.set_shape(output_shape)
    else:
      # Cast the inputs to self.dtype, which is the variable dtype. We do not
      # cast if `should_cast_variables` is True, as in that case the variable
      # will be automatically casted to inputs.dtype.
      if not self._mixed_precision_policy.should_cast_variables:
        inputs = math_ops.cast(inputs, self.dtype)
      outputs = self._get_outputs(inputs, gen_math_ops.mat_mul)
    if self.use_bias:
      outputs = nn.bias_add(outputs, self.bias)
    if self.activation is not None:
      return self.activation(outputs)  # pylint: disable=not-callable
    return outputs
def vd_dense(
inputs, units,
training_phase,
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=init_ops.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
reuse=None,
):
layer = VDDense(
units=units,
activation=activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
trainable=trainable,
name=name,
_scope=name,
_reuse=reuse,
training_phase=training_phase,
)
return layer.apply(inputs)
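`vd_dense` (like `vd_conv2d` above) follows the TF1 functional-layer pattern: build a fully configured layer object, then immediately `apply` it to the inputs. A toy sketch of the same pattern with a hypothetical `ScaleLayer` (not part of any library):

```python
class ScaleLayer:
    """Toy layer object: configured once, then applied to inputs."""
    def __init__(self, factor, name=None):
        self.factor = factor
        self.name = name

    def apply(self, inputs):
        return [self.factor * x for x in inputs]

def scale(inputs, factor, name=None):
    """Functional wrapper mirroring vd_dense: configure, then apply."""
    layer = ScaleLayer(factor=factor, name=name)
    return layer.apply(inputs)

out = scale([1.0, 2.0, 3.0], factor=2.0)
```

The wrapper exists so call sites can stay one-line while the layer object still owns all configuration (and, in the real code, its variables and scope).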
class _BaseSoftmaxLayer:
def __init__(
self,
temperature,
scale_temperature,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self._temperature = temperature
self.scale_temperature = scale_temperature
@property
def temperature(self):
if self.scale_temperature:
return self._temperature / tf.cast(self.kernel.shape[-2], tf.float32)
return self._temperature
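When `scale_temperature` is set, the property divides the base temperature by `kernel.shape[-2]` (the layer's fan-in), so wider layers get a lower effective temperature and hence a sharper softmax over the kernel magnitudes. A plain-Python mirror of that property (assuming, as in the class, that the second-to-last kernel dimension is the fan-in):

```python
def effective_temperature(base_temperature, fan_in, scale_temperature):
    """Mirror of the `temperature` property above."""
    if scale_temperature:
        return base_temperature / float(fan_in)
    return base_temperature

t_plain = effective_temperature(1.0, 64, scale_temperature=False)
t_scaled = effective_temperature(1.0, 64, scale_temperature=True)
```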
class SoftmaxConv2d(_BaseSoftmaxLayer, tf.layers.Conv2D):
def call(self, inputs):
softmax_kernel = self.kernel * tf.nn.softmax(
logits=tf.reshape(
tf.reduce_max(tf.abs(self.kernel), axis=(0, 1)),
(self.kernel.shape[2], -1),
) / self.temperature,
axis=0,
)
outputs = self._convolution_op(
inputs,
# the actual change relative to the base Conv2D: convolve with the softmax-gated kernel
softmax_kernel,
)
if self.use_bias:
if self.data_format == 'channels_first':
if self.rank == 1:
# nn.bias_add does not accept a 1D input tensor.
bias = array_ops.reshape(self.bias, (1, self.filters, 1))
outputs += bias
else:
outputs = nn.bias_add(outputs, self.bias, data_format='NCHW')
else:
outputs = nn.bias_add(outputs, self.bias, data_format='NHWC')
if self.activation is not None:
return self.activation(outputs)
return outputs
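The gating in `SoftmaxConv2d.call` reduces the kernel's absolute values over the spatial dimensions, then takes a softmax over the input-channel axis separately for each output channel. A pure-Python sketch of just that gate computation (operating on nested lists of shape `(kh, kw, cin, cout)`; no TensorFlow required):

```python
import math

def conv_kernel_gates(kernel, temperature=1.0):
    """Spatial max of |w| per (input, output) channel pair, followed by a
    softmax over input channels for every output channel."""
    kh, kw = len(kernel), len(kernel[0])
    cin, cout = len(kernel[0][0]), len(kernel[0][0][0])
    logits = [[max(abs(kernel[i][j][c][o])
                   for i in range(kh) for j in range(kw)) / temperature
               for o in range(cout)]
              for c in range(cin)]
    gates = [[0.0] * cout for _ in range(cin)]
    for o in range(cout):
        col = [logits[c][o] for c in range(cin)]
        m = max(col)
        exps = [math.exp(v - m) for v in col]
        s = sum(exps)
        for c in range(cin):
            gates[c][o] = exps[c] / s
    return gates

# toy 1x1 kernel, 2 input channels, 1 output channel
gates = conv_kernel_gates([[[[2.0], [0.0]]]], temperature=1.0)
```

Per output channel the gates sum to one over input channels, so the gated kernel softly selects which input channels each filter attends to.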
def softmax_conv2d(
inputs,
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=init_ops.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
reuse=None,
temperature=1.,
scale_temperature=False,
):
layer = SoftmaxConv2d(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
trainable=trainable,
name=name,
_reuse=reuse,
_scope=name,
temperature=temperature,
scale_temperature=scale_temperature,
)
return layer.apply(inputs)
class SoftmaxDense(_BaseSoftmaxLayer, Dense):
def call(self, inputs):
inputs = ops.convert_to_tensor(inputs)
rank = common_shapes.rank(inputs)
softmax_kernel = self.kernel * tf.nn.softmax(
logits=tf.abs(self.kernel) / self.temperature,
axis=0,
)
if rank > 2:
# Broadcasting is required for the inputs.
outputs = standard_ops.tensordot(
inputs,
# the actual change relative to the base Dense: contract with the softmax-gated kernel
softmax_kernel,
[[rank - 1], [0]]
)
# Reshape the output back to the original ndim of the input.
if not context.executing_eagerly():
shape = inputs.shape.as_list()
output_shape = shape[:-1] + [self.units]
outputs.set_shape(output_shape)
else:
# Cast the inputs to self.dtype, which is the variable dtype. We do not
# cast if `should_cast_variables` is True, since in that case the variable
# is automatically cast to inputs.dtype.
if not self._mixed_precision_policy.should_cast_variables:
inputs = math_ops.cast(inputs, self.dtype)
outputs = gen_math_ops.mat_mul(
inputs,
# the actual change relative to the base Dense: multiply by the softmax-gated kernel
softmax_kernel,
)
if self.use_bias:
outputs = nn.bias_add(outputs, self.bias)
if self.activation is not None:
return self.activation(outputs) # pylint: disable=not-callable
return outputs
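In `SoftmaxDense` the gate acts directly on the weight matrix: each column is multiplied elementwise by `softmax(|w| / T)` over its rows, so a low temperature concentrates each column's mass on its largest-magnitude entry. A pure-Python sketch of one column (illustrative only):

```python
import math

def softmax_gated_column(weights_col, temperature):
    """One kernel column gated as in SoftmaxDense: w * softmax(|w| / T)."""
    logits = [abs(w) / temperature for w in weights_col]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [w * e / s for w, e in zip(weights_col, exps)]

col = [0.5, -2.0, 1.0]
soft = softmax_gated_column(col, temperature=1.0)    # mild reweighting
sharp = softmax_gated_column(col, temperature=0.01)  # near one-hot gate
```

At `temperature=0.01` the gate is essentially one-hot on the entry with the largest magnitude, while at `temperature=1.0` every weight is merely shrunk toward zero.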
def softmax_dense(
inputs, units,
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=init_ops.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
reuse=None,
temperature=1.,
scale_temperature=False,
):
layer = SoftmaxDense(
units=units,
activation=activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
trainable=trainable,
name=name,
_scope=name,
_reuse=reuse,
temperature=temperature,
scale_temperature=scale_temperature,
)
return layer.apply(inputs)
class CodeNorm(tf.keras.layers.Layer):
def __init__(
self,
num_latent,
training_phase,
ema_decay=.9,
):
super().__init__()
self.num_latent = num_latent
self.training_phase = training_phase
self.ema_decay = ema_decay
def build(self, input_shape):
self.ema_mean = self.add_weight(
name='ema_mean',
shape=(self.num_latent,),
initializer=init_ops.Zeros(),
trainable=False,
dtype=tf.float32,
)
self.ema_var = self.add_weight(
name='ema_var',
shape=(self.num_latent,),
initializer=init_ops.Zeros(),
trainable=False,
dtype=tf.float32,
)
def call(self, means, logvar):
var = tf.exp(logvar)
if self.training_phase:
norm_means = tf.reduce_mean(means, axis=0)
norm_vars = tf.reduce_mean(tf.square(means) + tf.square(var), axis=0) - tf.square(norm_means)
ema_mu = self.ema_decay * self.ema_mean + (1 - self.ema_decay) * norm_means
ema_var = self.ema_decay * self.ema_var + (1 - self.ema_decay) * norm_vars
# In graph mode a bare tf.assign op is never executed; tie the EMA updates
# to the outputs via control dependencies so they actually run.
update_ops = [tf.assign(self.ema_mean, ema_mu), tf.assign(self.ema_var, ema_var)]
with tf.control_dependencies(update_ops):
means = (means - norm_means) / (tf.sqrt(norm_vars) + _EPS)
var = var / (norm_vars + _EPS)
else:
means = (means - self.ema_mean) / (tf.sqrt(self.ema_var) + _EPS)
var = var / (self.ema_var + _EPS)
logvar = tf.log(var + 1e-17)
return means, logvar
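The running statistics in `CodeNorm` follow the standard exponential-moving-average recursion `ema ← decay·ema + (1 − decay)·batch_stat`; starting from 0 with a constant input x, n updates yield x·(1 − decay^n). A minimal check of that recursion in plain Python:

```python
def ema_update(running, batch_stat, decay):
    """One EMA step, as used for ema_mean and ema_var above."""
    return decay * running + (1.0 - decay) * batch_stat

running_mean = 0.0
for batch_mean in [1.0, 1.0, 1.0, 1.0]:
    running_mean = ema_update(running_mean, batch_mean, decay=0.9)
```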
def dropout_conv2d(
training_phase,
rate,
inputs,
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=init_ops.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
reuse=None,
):
inputs = tf.keras.layers.SpatialDropout2D(rate=rate, data_format=data_format)(inputs, training=training_phase)
conv = tf.layers.conv2d(
inputs=inputs,
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
trainable=trainable,
name=name,
reuse=reuse,
)
return conv
def dropout_dense(
training_phase,
rate,
inputs, units,
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=init_ops.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
reuse=None,
):
dropout = tf.keras.layers.Dropout(rate=rate)(inputs, training=training_phase)
dense = tf.layers.dense(
dropout, units,
activation=activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
activity_regularizer=activity_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
trainable=trainable,
name=name,
reuse=reuse
)
return dense
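Both wrappers above apply dropout at training time only; `tf.keras.layers.Dropout` uses inverted dropout, zeroing each unit with probability `rate` and rescaling the survivors by `1 / (1 - rate)` so the expected activation is unchanged. A seeded pure-Python sketch of that masking (illustrative, not the TF implementation):

```python
import random

def inverted_dropout(inputs, rate, rng):
    """Keep each unit with probability (1 - rate); rescale survivors."""
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in inputs]

rng = random.Random(0)
out = inverted_dropout([1.0] * 10000, rate=0.5, rng=rng)
mean_out = sum(out) / len(out)
```

Because of the rescaling, no compensation is needed at inference time, which is why the inference branch of these wrappers is simply the identity.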
| 31.115493 | 114 | 0.604155 | 2,392 | 22,092 | 5.332776 | 0.091555 | 0.016463 | 0.035121 | 0.018031 | 0.790608 | 0.749922 | 0.744904 | 0.729539 | 0.715036 | 0.701003 | 0 | 0.00619 | 0.312557 | 22,092 | 709 | 115 | 31.159379 | 0.833739 | 0.059705 | 0 | 0.754561 | 0 | 0 | 0.011088 | 0 | 0 | 0 | 0 | 0.00141 | 0 | 1 | 0.05141 | false | 0 | 0.021559 | 0.006633 | 0.137645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e36807d7323009a5486a48f0b5526fad29a58ce | 198 | py | Python | lejian/rcmd/__init__.py | PuZheng/LEJAIN-backend | 1647b63cb409842566f3d2cd9771f8b8856c1a03 | [
"MIT"
] | null | null | null | lejian/rcmd/__init__.py | PuZheng/LEJAIN-backend | 1647b63cb409842566f3d2cd9771f8b8856c1a03 | [
"MIT"
] | 13 | 2015-10-23T04:43:51.000Z | 2015-12-19T14:30:33.000Z | lejian/rcmd/__init__.py | PuZheng/lejian-backend | 1647b63cb409842566f3d2cd9771f8b8856c1a03 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
from flask import Blueprint
rcmd_ws = Blueprint("rcmd_ws", __name__, static_folder="static",
template_folder="templates")
import lejian.rcmd.views
| 24.75 | 64 | 0.671717 | 24 | 198 | 5.166667 | 0.708333 | 0.209677 | 0.241935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0.20202 | 198 | 7 | 65 | 28.285714 | 0.778481 | 0.106061 | 0 | 0 | 0 | 0 | 0.125714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
eadbd08b3a29df03d2cb4f2a2df9c024b9db4537 | 127 | py | Python | tamilnlp/__init__.py | AshokR/TamilNLP | ef8c81ba90a466732401b24790fd6e07b88f2adb | [
"Apache-2.0"
] | 64 | 2016-06-29T05:55:20.000Z | 2022-02-13T08:48:29.000Z | tamilnlp/__init__.py | Ezhil-Language-Foundation/TamilNLP | 3d898a6ce7daf7a740b945219c9b2bbbee44a37f | [
"Apache-2.0"
] | 8 | 2016-08-06T17:12:48.000Z | 2021-01-18T14:00:04.000Z | tamilnlp/__init__.py | AshokR/TamilNLP | ef8c81ba90a466732401b24790fd6e07b88f2adb | [
"Apache-2.0"
] | 18 | 2016-08-06T17:00:35.000Z | 2021-02-16T10:55:44.000Z | from .ConvertAmritaToRDR import *
from .TextSummaryExtractor import *
from .WikiByCategory import *
from .WikiByPage import *
| 25.4 | 35 | 0.80315 | 12 | 127 | 8.5 | 0.5 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133858 | 127 | 4 | 36 | 31.75 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eaf9df229486be4757460be23dc826560ae351e7 | 36 | py | Python | testing.py | brownliuinnz/cython | 7fc9dd1369f43239b3e4b5f362fd1a9e1feddf64 | [
"CNRI-Python"
] | null | null | null | testing.py | brownliuinnz/cython | 7fc9dd1369f43239b3e4b5f362fd1a9e1feddf64 | [
"CNRI-Python"
] | null | null | null | testing.py | brownliuinnz/cython | 7fc9dd1369f43239b3e4b5f362fd1a9e1feddf64 | [
"CNRI-Python"
] | null | null | null | import cpp_python
cpp_python.test(5) | 18 | 18 | 0.861111 | 7 | 36 | 4.142857 | 0.714286 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.055556 | 36 | 2 | 18 | 18 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d8120e64af85453d967c001f92dde96175ae06d6 | 59 | py | Python | poetry_template/__main__.py | sitch/common_poetry_template | 478769db0819d8f603f2f379eb6d0a94203d23d9 | [
"MIT"
] | 4 | 2021-07-30T08:52:35.000Z | 2022-03-31T07:57:31.000Z | poetry_template/__main__.py | ImperialCollegeLondon/poetry_template | f9f93efc4b054b99f401ecbca1f48bdac6c0419e | [
"MIT"
] | null | null | null | poetry_template/__main__.py | ImperialCollegeLondon/poetry_template | f9f93efc4b054b99f401ecbca1f48bdac6c0419e | [
"MIT"
] | null | null | null | import poetry_template
print(poetry_template.__version__)
| 14.75 | 34 | 0.881356 | 7 | 59 | 6.571429 | 0.714286 | 0.608696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 59 | 3 | 35 | 19.666667 | 0.836364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
dc3e47e6315cdeaafcbbc3c091409278d9a7ae14 | 55 | py | Python | library/chainer_evaluation/__init__.py | AmirHosseinAmeli/Triple-GAN | 127948d9e22767d315a4b3ca58fc4a56d92ff9d3 | [
"MIT"
] | 29 | 2020-09-03T08:35:47.000Z | 2022-02-10T18:39:29.000Z | library/chainer_evaluation/__init__.py | AmirHosseinAmeli/Triple-GAN | 127948d9e22767d315a4b3ca58fc4a56d92ff9d3 | [
"MIT"
] | 6 | 2020-12-22T14:43:14.000Z | 2022-03-12T00:55:24.000Z | library/chainer_evaluation/__init__.py | AmirHosseinAmeli/Triple-GAN | 127948d9e22767d315a4b3ca58fc4a56d92ff9d3 | [
"MIT"
] | 8 | 2020-10-01T04:03:40.000Z | 2022-03-21T10:23:40.000Z | from . import evaluation
from . import inception_score
| 18.333333 | 29 | 0.818182 | 7 | 55 | 6.285714 | 0.714286 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145455 | 55 | 2 | 30 | 27.5 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc9cbdadec0a8be5683d267638e2c6d43dc44de7 | 5,886 | py | Python | nsrl/policies/exploration_policies.py | taodav/novelty-search-repr-space | 461691104dc3a72b9b4f7ec040b71d95eec434b1 | [
"MIT"
] | 11 | 2020-12-03T13:24:00.000Z | 2022-01-26T21:40:14.000Z | nsrl/policies/exploration_policies.py | taodav/novelty-search-repr-space | 461691104dc3a72b9b4f7ec040b71d95eec434b1 | [
"MIT"
] | null | null | null | nsrl/policies/exploration_policies.py | taodav/novelty-search-repr-space | 461691104dc3a72b9b4f7ec040b71d95eec434b1 | [
"MIT"
] | 2 | 2020-12-17T00:42:34.000Z | 2020-12-19T12:59:11.000Z | import torch
import copy
import numpy as np
from .EpsilonGreedyPolicy import EpsilonGreedyPolicy
from nsrl.helper.pytorch import device
class RewardArgmaxPolicy(EpsilonGreedyPolicy):
def __init__(self, learning_algo, n_actions, random_state, epsilon_start=0):
super(RewardArgmaxPolicy, self).__init__(learning_algo, n_actions, random_state, epsilon_start)
def bestAction(self, state, mode=None, *args, **kwargs):
for m in self.learning_algo.all_models: m.eval()
R = self.learning_algo.R if self.learning_algo._train_reward else None
copy_state = copy.deepcopy(state) # Required because of the "hack" below
state_tensor = torch.tensor(copy_state[0], dtype=torch.float).to(device)
dataset = kwargs.get('dataset', None)
if dataset is None:
raise Exception()
with torch.no_grad():
abstr_state = self.learning_algo.encoder(state_tensor)
all_prev_obs = torch.tensor(dataset.observationsMatchingBatchDim()[0], dtype=torch.float).to(device)
all_prev_states = self.learning_algo.encoder(all_prev_obs)
scores = self.learning_algo.intrRewards_planning(abstr_state, self.learning_algo.transition, all_prev_states, R=R)
return np.argmax(scores, axis=-1), np.max(scores, axis=-1)
class QArgmaxPolicy(EpsilonGreedyPolicy):
def __init__(self, learning_algo, n_actions, random_state, epsilon_start=0):
super(QArgmaxPolicy, self).__init__(learning_algo, n_actions, random_state, epsilon_start)
def bestAction(self, state, mode=None, *args, **kwargs):
for m in self.learning_algo.all_models: m.eval()
with torch.no_grad():
copy_state = copy.deepcopy(state) # Required because of the "hack" below
state_tensor = torch.tensor(copy_state[0], dtype=torch.float).to(device)
q_vals = self.learning_algo.qValues(state_tensor).squeeze(0).cpu().numpy()
return np.argmax(q_vals, axis=-1), np.max(q_vals, axis=-1)
class MCPolicy(EpsilonGreedyPolicy):
def __init__(self, learning_algo, n_actions, random_state, depth=1, epsilon_start=0):
super(MCPolicy, self).__init__(learning_algo, n_actions, random_state, epsilon_start)
self._depth = depth
def bestAction(self, state, mode=None, *args, **kwargs):
for m in self.learning_algo.all_models: m.eval()
with torch.no_grad():
R = self.learning_algo.R if self.learning_algo._train_reward else None
copy_state = copy.deepcopy(state) # Required because of the "hack" below
state_tensor = torch.tensor(copy_state[0], dtype=torch.float).to(device)
dataset = kwargs.get('dataset', None)
if dataset is None:
raise Exception()
abstr_state = self.learning_algo.encoder(state_tensor)
all_prev_obs = torch.tensor(dataset.observationsMatchingBatchDim()[0], dtype=torch.float).to(device)
all_prev_states = self.learning_algo.encoder(all_prev_obs)
scores = self.learning_algo.novelty_d_step_planning(abstr_state, self.learning_algo.Q, self.learning_algo.transition, all_prev_states, R=R, d=self._depth,
b=self.n_actions)
return np.argmax(scores, axis=-1), np.max(scores, axis=-1)
class MCRewardPolicy(EpsilonGreedyPolicy):
def __init__(self, learning_algo, n_actions, random_state, depth=1, epsilon_start=0):
super(MCRewardPolicy, self).__init__(learning_algo, n_actions, random_state, epsilon_start)
self._depth = depth
def bestAction(self, state, mode=None, *args, **kwargs):
for m in self.learning_algo.all_models: m.eval()
with torch.no_grad():
R = self.learning_algo.R if self.learning_algo._train_reward else None
copy_state = copy.deepcopy(state) # Required because of the "hack" below
state_tensor = torch.tensor(copy_state[0], dtype=torch.float).to(device)
dataset = kwargs.get('dataset', None)
if dataset is None:
raise Exception()
# This Q returns all 0s for all predicted Q values
class Q_zeros:
@staticmethod
def predict(abstr_reps):
return torch.zeros((abstr_reps.shape[0], self.n_actions))
abstr_state = self.learning_algo.encoder(state_tensor)
all_prev_obs = torch.tensor(dataset.observationsMatchingBatchDim()[0], dtype=torch.float).to(device)
all_prev_states = self.learning_algo.encoder(all_prev_obs)
scores = self.learning_algo.novelty_d_step_planning(abstr_state, Q_zeros, self.learning_algo.transition, all_prev_states, R=R, d=self._depth,
b=self.n_actions)
return np.argmax(scores, axis=-1), np.max(scores, axis=-1)
class BootstrapDQNPolicy(EpsilonGreedyPolicy):
def __init__(self, learning_algo, n_actions, random_state, epsilon_start=0):
super(BootstrapDQNPolicy, self).__init__(learning_algo, n_actions, random_state, epsilon_start)
self.idx = 0
self.head_num = self.learning_algo.Q.n_heads
def sample_head(self):
self.idx = np.random.randint(self.head_num)
def bestAction(self, state, mode=None, *args, **kwargs):
for m in self.learning_algo.all_models: m.eval()
with torch.no_grad():
copy_state = copy.deepcopy(state) # Required because of the "hack" below
state_tensor = torch.tensor(copy_state[0], dtype=torch.float).to(device)
abstr_state = self.learning_algo.encoder(state_tensor)
# Refer to BootstrappedQFunction here
scores = self.learning_algo.Q(abstr_state, [self.idx])[0].cpu().numpy()[0]
return np.argmax(scores, axis=-1), np.max(scores, axis=-1)
| 47.853659 | 166 | 0.674482 | 774 | 5,886 | 4.869509 | 0.142119 | 0.120987 | 0.14009 | 0.053064 | 0.816928 | 0.816928 | 0.807907 | 0.807907 | 0.796232 | 0.785354 | 0 | 0.006758 | 0.220693 | 5,886 | 122 | 167 | 48.245902 | 0.814912 | 0.045702 | 0 | 0.674157 | 0 | 0 | 0.003745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134831 | false | 0 | 0.05618 | 0.011236 | 0.325843 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dcbbc14dd6503d11cbba1f9e8d27a340328bf876 | 3,345 | py | Python | tests/test_pg_usage_msg.py | jyrgenn/jpylib | a4711d11c012ad72f60d7591e7ac2c9e53d3ddd6 | [
"BSD-3-Clause"
] | null | null | null | tests/test_pg_usage_msg.py | jyrgenn/jpylib | a4711d11c012ad72f60d7591e7ac2c9e53d3ddd6 | [
"BSD-3-Clause"
] | null | null | null | tests/test_pg_usage_msg.py | jyrgenn/jpylib | a4711d11c012ad72f60d7591e7ac2c9e53d3ddd6 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
from jpylib.pgetopt import parse
import unittest
# test replacing the default usage message
class UsageTestcase(unittest.TestCase):
def test_usage0(self):
"""-h/--hounds option"""
ovc, args = parse({
"v": ("verbose", bool, 1, "increase verbosity"),
"z": ("zounds", int, 1, "number of zounds"),
"_arguments": [],
}, ["-v"], exit_on_error=False)
self.assertEqual(ovc.ovc_usage_msg(),
"usage: {} [-vz]".format(ovc._program))
def test_usage1(self):
"""-h/--hounds option"""
ovc, args = parse({
"v": ("verbose", bool, 1, "increase verbosity"),
"z": ("zounds", int, 1, "number of zounds"),
"_arguments": ["mangia"],
}, ["-v", "foo!"], exit_on_error=False)
self.assertEqual(ovc.ovc_usage_msg(),
"usage: {} [-vz] mangia".format(ovc._program))
def test_usage2(self):
"""-h/--hounds option"""
ovc, args = parse({
"v": ("verbose", bool, 1, "increase verbosity"),
"z": ("zounds", int, 1, "number of zounds"),
"_arguments": ["mangia", "[file1 ...]"],
}, ["-v", "foo!"], exit_on_error=False)
self.assertEqual(ovc.ovc_usage_msg(),
"usage: {} [-vz] mangia [file1 ...]".format(
ovc._program))
def test_usage_own(self):
"""-h/--hounds option"""
ovc, args = parse({
"v": ("verbose", bool, 1, "increase verbosity"),
"z": ("zounds", int, 1, "number of zounds"),
"_arguments": ["mangia", "[file1 ...]"],
"_usage": "usage: gniddle [-v] [-z 5] mangia [file1 ...]"
}, ["-v", "foo!"], exit_on_error=False)
self.assertEqual(
ovc.ovc_usage_msg(),
"usage: gniddle [-v] [-z 5] mangia [file1 ...]")
def test_usage_program(self):
"""-h/--hounds option"""
ovc, args = parse({
"v": ("verbose", bool, 1, "increase verbosity"),
"z": ("zounds", int, 1, "number of zounds"),
"_arguments": ["mangia", "[file1 ...]"],
"_program": "schnörkelate",
}, ["-v", "foo!"], exit_on_error=False)
self.assertEqual(ovc.ovc_usage_msg(),
"usage: schnörkelate [-vz] mangia [file1 ...]")
def test_usage_string_arguments(self):
"""_arguments as string"""
ovc, args = parse({
"v": ("verbose", bool, 0, "increase verbosity"),
"_arguments": "...",
"_program": "lala",
})
self.assertEqual(ovc.ovc_usage_msg(), "usage: lala [-v] ...")
def test_usage_empty_string_arguments(self):
"""_arguments as string"""
ovc, args = parse({
"v": ("verbose", bool, 0, "increase verbosity"),
"_arguments": "",
"_program": "lala",
})
self.assertEqual(ovc.ovc_usage_msg(), "usage: lala [-v]")
def test_usage_empty_list_arguments(self):
"""_arguments as string"""
ovc, args = parse({
"v": ("verbose", bool, 0, "increase verbosity"),
"_arguments": [],
"_program": "lala",
}, [])
self.assertEqual(ovc.ovc_usage_msg(), "usage: lala [-v]")
| 35.967742 | 72 | 0.495067 | 341 | 3,345 | 4.671554 | 0.184751 | 0.035154 | 0.060264 | 0.065286 | 0.865035 | 0.799749 | 0.799749 | 0.77715 | 0.77715 | 0.77715 | 0 | 0.011295 | 0.311809 | 3,345 | 92 | 73 | 36.358696 | 0.680712 | 0.06577 | 0 | 0.602941 | 0 | 0 | 0.262082 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.117647 | false | 0 | 0.029412 | 0 | 0.161765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
521b5db27388b4644004b2a74c23a6c05fb14c0d | 32,580 | py | Python | tests/test_memocell_data.py | hoefer-lab/memocell | 5dc08d121e64fbde1ccdce86f0f1390e6918d255 | [
"MIT"
] | null | null | null | tests/test_memocell_data.py | hoefer-lab/memocell | 5dc08d121e64fbde1ccdce86f0f1390e6918d255 | [
"MIT"
] | null | null | null | tests/test_memocell_data.py | hoefer-lab/memocell | 5dc08d121e64fbde1ccdce86f0f1390e6918d255 | [
"MIT"
] | 1 | 2021-05-25T12:54:51.000Z | 2021-05-25T12:54:51.000Z |
# for package testing with pytest call
# in upper directory "$ python setup.py pytest"
# or in this directory "$ py.test test_memocell_[...].py"
# or after pip installation "$ py.test --pyargs memocell"
import memocell as me
import numpy as np
class TestDataClass(object):
### tests for create_data_variable_order()
def test_create_data_variable_order_mean_only_false(self):
mean_only = False
assert(me.Data.create_data_variable_order(['A', 'B', 'C'], mean_only) == (
[{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)},
{'variables': 'C', 'summary_indices': 2, 'count_indices': (2,)}],
[{'variables': ('A', 'A'), 'summary_indices': 0, 'count_indices': (0, 0)},
{'variables': ('B', 'B'), 'summary_indices': 1, 'count_indices': (1, 1)},
{'variables': ('C', 'C'), 'summary_indices': 2, 'count_indices': (2, 2)}],
[{'variables': ('A', 'B'), 'summary_indices': 0, 'count_indices': (0, 1)},
{'variables': ('A', 'C'), 'summary_indices': 1, 'count_indices': (0, 2)},
{'variables': ('B', 'C'), 'summary_indices': 2, 'count_indices': (1, 2)}]))
def test_create_data_variable_order_mean_only_true(self):
mean_only = True
assert(me.Data.create_data_variable_order(['A', 'B', 'C'], mean_only) == (
[{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)},
{'variables': 'C', 'summary_indices': 2, 'count_indices': (2,)}],
[],
[]))
def test_create_data_variable_order_no_alphabetical_order(self):
mean_only = False
assert(me.Data.create_data_variable_order(['C', 'B', 'A'], mean_only) == (
[{'variables': 'C', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)},
{'variables': 'A', 'summary_indices': 2, 'count_indices': (2,)}],
[{'variables': ('C', 'C'), 'summary_indices': 0, 'count_indices': (0, 0)},
{'variables': ('B', 'B'), 'summary_indices': 1, 'count_indices': (1, 1)},
{'variables': ('A', 'A'), 'summary_indices': 2, 'count_indices': (2, 2)}],
[{'variables': ('C', 'B'), 'summary_indices': 0, 'count_indices': (0, 1)},
{'variables': ('C', 'A'), 'summary_indices': 1, 'count_indices': (0, 2)},
{'variables': ('B', 'A'), 'summary_indices': 2, 'count_indices': (1, 2)}]))
def test_create_data_variable_order_no_validation_here(self):
mean_only = False
assert(me.Data.create_data_variable_order(['A', 'A'], mean_only) == (
[{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'A', 'summary_indices': 1, 'count_indices': (1,)}],
[{'variables': ('A', 'A'), 'summary_indices': 0, 'count_indices': (0, 0)},
{'variables': ('A', 'A'), 'summary_indices': 1, 'count_indices': (1, 1)}],
[{'variables': ('A', 'A'), 'summary_indices': 0, 'count_indices': (0, 1)}]))
### tests for process_mean_exist_only
def test_process_mean_exist_only_counts(self):
assert(False == me.Data.process_mean_exist_only('counts', None, None))
def test_process_mean_exist_only_summary_mean_only(self):
assert(True == me.Data.process_mean_exist_only('summary', None, None))
def test_process_mean_exist_only_summary_mean_only_via_empty(self):
var_data = np.empty((2, 0, 3)) # some fake data
cov_data = np.empty((2, 0, 3)) # some fake data
assert(True == me.Data.process_mean_exist_only('summary', var_data, cov_data))
def test_process_mean_exist_only_summary_mean_only_mixed_1(self):
var_data = np.empty((2, 0, 3)) # some fake data
assert(True == me.Data.process_mean_exist_only('summary', var_data, None))
def test_process_mean_exist_only_summary_mean_only_mixed_2(self):
cov_data = np.empty((2, 0, 3)) # some fake data
assert(True == me.Data.process_mean_exist_only('summary', None, cov_data))
def test_process_mean_exist_only_counts_var_and_cov(self):
var_data = np.empty((2, 2, 3)) # some fake data
cov_data = np.empty((2, 1, 3)) # some fake data
assert(False == me.Data.process_mean_exist_only('summary', var_data, cov_data))
def test_process_mean_exist_only_counts_var_only(self):
var_data = np.empty((2, 2, 3)) # some fake data
assert(False == me.Data.process_mean_exist_only('summary', var_data, None))
def test_process_mean_exist_only_counts_cov_only(self):
cov_data = np.empty((2, 1, 3)) # some fake data
assert(False == me.Data.process_mean_exist_only('summary', None, cov_data))
### tests for convert_none_data_to_empty_array
def test_convert_none_data_to_empty_array_none_data(self):
count_data = None
mean_data = None
var_data = None
cov_data = None
num_variables = 2
num_time_values = 3
res_counts, res_mean, res_var, res_cov = me.Data.convert_none_data_to_empty_array(
count_data, mean_data,
var_data, cov_data,
num_variables, num_time_values)
sol_counts = np.empty((0, 2, 3))
sol_mean = np.empty((2, 0, 3))
sol_var = np.empty((2, 0, 3))
sol_cov = np.empty((2, 0, 3))
np.testing.assert_allclose(sol_counts, res_counts)
np.testing.assert_allclose(sol_mean, res_mean)
np.testing.assert_allclose(sol_var, res_var)
np.testing.assert_allclose(sol_cov, res_cov)
def test_convert_none_data_to_empty_array_random_data(self):
# create some random fake data
# with 4 wells, 2 variables, 3 time points
sol_counts = np.random.rand(4, 2, 3)
sol_mean = np.random.rand(2, 2, 3)
sol_var = np.random.rand(2, 2, 3)
sol_cov = np.random.rand(2, 1, 3)
num_variables = 2
num_time_values = 3
res_counts, res_mean, res_var, res_cov = me.Data.convert_none_data_to_empty_array(
sol_counts, sol_mean,
sol_var, sol_cov,
num_variables, num_time_values)
np.testing.assert_allclose(sol_counts, res_counts)
np.testing.assert_allclose(sol_mean, res_mean)
np.testing.assert_allclose(sol_var, res_var)
np.testing.assert_allclose(sol_cov, res_cov)
### tests for bootstrapping methods
def test_bootstrapping_mean(self):
stat_sample, se_stat_sample = me.Data.bootstrapping_mean(np.array([1.0, 2.0, 3.0]), 100000)
assert((stat_sample, round(se_stat_sample, 1)) == (2.0, 0.5))
def test_bootstrapping_variance(self):
stat_sample, se_stat_sample = me.Data.bootstrapping_variance(np.array([1.0, 2.0, 3.0]), 100000)
assert((stat_sample, round(se_stat_sample, 1)) == (1.0, 0.5))
def test_bootstrapping_covariance(self):
stat_sample, se_stat_sample = me.Data.bootstrapping_covariance(np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0]), 10000)
assert((stat_sample, round(se_stat_sample, 1)) == (-1.0, 0.5))
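The three tests above check bootstrap estimates of a statistic's standard error: resample the data with replacement many times, recompute the statistic on each resample, and take the spread of those recomputed values. A seeded pure-Python sketch, independent of the `me.Data.bootstrapping_*` implementations (for `[1, 2, 3]` the standard error of the mean is √(2/9) ≈ 0.47, consistent with the 0.5 these tests expect after rounding):

```python
import random
import statistics

def bootstrap_se_mean(sample, n_resamples, rng):
    """Bootstrap standard error of the sample mean."""
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(sample) for _ in sample]
        means.append(sum(resample) / len(resample))
    return statistics.pstdev(means)

rng = random.Random(1)
se = bootstrap_se_mean([1.0, 2.0, 3.0], n_resamples=20000, rng=rng)
```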
def test_bootstrap_count_data_to_summary_stats_shape(self):
count_data = np.array([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 2., 2., 2., 2., 2., 3., 3.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 2., 2., 3., 4., 4., 5., 5.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]])
data_name = 'test_data'
data = me.Data(data_name)
data.load(['A', 'B'], np.linspace(0.0, 54.0, num=28, endpoint=True), count_data, bootstrap_samples=10)
data_mean, data_var, data_cov = me.Data.bootstrap_count_data_to_summary_stats(
data,
data.data_num_time_values,
data.data_mean_order,
data.data_variance_order,
data.data_covariance_order,
data.data_counts,
data.data_bootstrap_samples)
assert((data_mean.shape, data_var.shape, data_cov.shape) == ((2, 2, 28), (2, 2, 28), (2, 1, 28)))
def test_bootstrap_count_data_to_summary_stats_stat_values(self):
count_data = np.array([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 2., 2., 2., 2., 2., 3., 3.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 2., 2., 3., 4., 4., 5., 5.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]])
data_name = 'test_data'
data = me.Data(data_name)
data.load(['A', 'B'], np.linspace(0.0, 54.0, num=28, endpoint=True), count_data, bootstrap_samples=10)
data_mean, data_var, data_cov = me.Data.bootstrap_count_data_to_summary_stats(
data,
data.data_num_time_values,
data.data_mean_order,
data.data_variance_order,
data.data_covariance_order,
data.data_counts,
data.data_bootstrap_samples)
assert(np.all(data_mean[0, :, :] == np.array([[0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 1. , 1. , 1. , 1. , 1. , 1. , 2. , 2. , 2.5, 3. , 3. ,
4. , 4. ],
[1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. , 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. ]])) == True)
assert(np.all(data_var[0, :, :] == np.array([[0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 2. , 2. ,
2. , 2. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. ]])) == True)
assert(np.all(data_cov[0, :, :] == np.array([[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , -0.5, -0.5, -0.5,
-0.5, -0.5, -0.5, -0.5, 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ]])) == True)
### test basic_sigma method
def test_introduce_basic_sigma(self):
data = np.array([[[0. , 0. , 0. , 0. , 0.02272727,
0.04545455, 0.09090909, 0.15909091, 0.18181818, 0.31818182,
0.45454545, 0.61363636, 0.75 , 0.81818182, 1.02272727,
1.31818182, 1.5 , 1.79545455, 2.25 , 2.61363636,
2.93181818, 3.38636364, 4.13636364, 4.75 , 5.59090909,
6.47727273, 7.40909091, 8.47727273],
[1. , 1. , 1. , 1. , 0.97727273,
0.95454545, 0.90909091, 0.86363636, 0.84090909, 0.72727273,
0.65909091, 0.56818182, 0.5 , 0.43181818, 0.34090909,
0.31818182, 0.25 , 0.22727273, 0.22727273, 0.18181818,
0.15909091, 0.13636364, 0.11363636, 0.09090909, 0.09090909,
0.09090909, 0.09090909, 0.09090909]],
[[0. , 0. , 0. , 0. , 0.02217504,
0.03127734, 0.04332831, 0.06380841, 0.06626216, 0.08355682,
0.10488916, 0.12202523, 0.13359648, 0.12848238, 0.14533142,
0.18722643, 0.19360286, 0.24160564, 0.29076081, 0.34158666,
0.39142742, 0.40884032, 0.51906845, 0.59882769, 0.6827803 ,
0.82338009, 0.94878574, 1.06604142],
[0. , 0. , 0. , 0. , 0.02248729,
0.03151183, 0.04341279, 0.05226862, 0.05494241, 0.06746095,
0.07182489, 0.07404752, 0.07514181, 0.07523978, 0.07086163,
0.0707864 , 0.0651271 , 0.06331053, 0.06288894, 0.05833507,
0.0543081 , 0.05187501, 0.04779238, 0.04355323, 0.04319857,
0.0426004 , 0.04327476, 0.0436796 ]]])
data_bs = np.array([[[0. , 0. , 0. , 0. , 0.02272727,
0.04545455, 0.09090909, 0.15909091, 0.18181818, 0.31818182,
0.45454545, 0.61363636, 0.75 , 0.81818182, 1.02272727,
1.31818182, 1.5 , 1.79545455, 2.25 , 2.61363636,
2.93181818, 3.38636364, 4.13636364, 4.75 , 5.59090909,
6.47727273, 7.40909091, 8.47727273],
[1. , 1. , 1. , 1. , 0.97727273,
0.95454545, 0.90909091, 0.86363636, 0.84090909, 0.72727273,
0.65909091, 0.56818182, 0.5 , 0.43181818, 0.34090909,
0.31818182, 0.25 , 0.22727273, 0.22727273, 0.18181818,
0.15909091, 0.13636364, 0.11363636, 0.09090909, 0.09090909,
0.09090909, 0.09090909, 0.09090909]],
[[0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.10488916, 0.12202523, 0.13359648, 0.12848238, 0.14533142,
0.18722643, 0.19360286, 0.24160564, 0.29076081, 0.34158666,
0.39142742, 0.40884032, 0.51906845, 0.59882769, 0.6827803 ,
0.82338009, 0.94878574, 1.06604142],
[0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.1 , 0.1 , 0.1 , 0.1 ,
0.1 , 0.1 , 0.1 ]]])
assert(np.all(data_bs == me.Data.introduce_basic_sigma(0.1, data))==True)
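Judging from the arrays above — the mean slice `data[0]` passes through unchanged while every standard-error entry in `data[1]` below 0.1 is raised to 0.1 — `introduce_basic_sigma` appears to floor the error slice at the given value. A hypothetical re-implementation of that behaviour, for illustration only (not the library's code):

```python
import numpy as np

def introduce_basic_sigma_sketch(sigma, data):
    """Floor the standard-error slice data[1] at `sigma`; leave data[0] untouched."""
    out = data.copy()
    out[1] = np.maximum(out[1], sigma)
    return out
```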
### test event methods
def test_event_find_first_change_from_inital_conditions_1(self):
data = me.Data('data_init')
assert((True, 2.0) == me.Data.event_find_first_change_from_inital_conditions(data,
np.array([[0.0, 0.0, 1.0, 2.0], [0.0, 0.0, 0.0, 1.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_change_from_inital_conditions_2(self):
data = me.Data('data_init')
assert((False, None) == me.Data.event_find_first_change_from_inital_conditions(data,
np.array([[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_change_from_inital_conditions_3(self):
data = me.Data('data_init')
assert((True, 1.0) == me.Data.event_find_first_change_from_inital_conditions(data,
np.array([[0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_cell_count_increase_1(self):
data = me.Data('data_init')
assert((False, None) == me.Data.event_find_first_cell_count_increase(data,
np.array([[0.0, 0.0, 0.0, 0.0], [4.0, 4.0, 4.0, 4.0], [1.0, 1.0, 1.0, 1.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_cell_count_increase_2(self):
data = me.Data('data_init')
assert((True, 1.0) == me.Data.event_find_first_cell_count_increase(data,
np.array([[0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0], [1.0, 1.0, 2.0, 3.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_cell_count_increase_3(self):
data = me.Data('data_init')
assert((True, 2.0) == me.Data.event_find_first_cell_count_increase(data,
np.array([[4.0, 4.0, 4.0, 4.0], [0.0, 0.0, 0.0, 1.0], [1.0, 1.0, 2.0, 3.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_cell_type_conversion_1(self):
data = me.Data('data_init')
assert((True, 3.0) == me.Data.event_find_first_cell_type_conversion(data,
np.array([[4.0, 4.0, 4.0, 3.0], [0.0, 0.0, 0.0, 1.0], [1.0, 1.0, 1.0, 1.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_cell_count_increase_after_cell_type_conversion_1(self):
data = me.Data('data_init')
assert((False, None) == me.Data.event_find_first_cell_count_increase_after_cell_type_conversion(data,
np.array([[4.0, 4.0, 3.0, 3.0],
[1.0, 1.0, 2.0, 2.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_first_cell_count_increase_after_cell_type_conversion_2(self):
data = me.Data('data_init')
assert((True, 2.0) == me.Data.event_find_first_cell_count_increase_after_cell_type_conversion(data,
np.array([[4.0, 3.0, 3.0, 3.0],
[1.0, 2.0, 2.0, 3.0]]),
np.array([0.0, 1.0, 2.0, 3.0]), diff=True))
def test_event_find_first_cell_count_increase_after_cell_type_conversion_3(self):
data = me.Data('data_init')
assert((True, 3.0) == me.Data.event_find_first_cell_count_increase_after_cell_type_conversion(data,
np.array([[4.0, 3.0, 3.0, 3.0],
[1.0, 2.0, 2.0, 3.0]]),
np.array([0.0, 1.0, 2.0, 3.0]), diff=False))
def test_event_find_first_cell_count_increase_after_cell_type_conversion_4(self):
data = me.Data('data_init')
assert((False, None) == me.Data.event_find_first_cell_count_increase_after_cell_type_conversion(data,
np.array([[4.0, 4.0, 5.0, 6.0],
[1.0, 1.0, 1.0, 1.0]]),
np.array([0.0, 1.0, 2.0, 3.0])))
def test_event_find_second_cell_count_increase_after_first_cell_count_increase_after_cell_type_conversion_1(self):
data = me.Data('data_init')
assert((True, 1.0) == me.Data.event_find_second_cell_count_increase_after_first_cell_count_increase_after_cell_type_conversion(
data,
np.array([[4.0, 3.0, 3.0, 4.0, 5.0],
[1.0, 2.0, 2.0, 2.0, 2.0]]),
np.array([0.0, 1.0, 2.0, 3.0, 4.0]), diff=True))
def test_event_find_second_cell_count_increase_after_first_cell_count_increase_after_cell_type_conversion_2(self):
data = me.Data('data_init')
assert((True, 4.0) == me.Data.event_find_second_cell_count_increase_after_first_cell_count_increase_after_cell_type_conversion(
data,
np.array([[4.0, 3.0, 3.0, 4.0, 5.0],
[1.0, 2.0, 2.0, 2.0, 2.0]]),
np.array([0.0, 1.0, 2.0, 3.0, 4.0]), diff=False))
def test_event_find_third_cell_count_increase_after_first_and_second_cell_count_increase_after_cell_type_conversion_1(self):
data = me.Data('data_init')
assert((True, 1.0) == me.Data.event_find_third_cell_count_increase_after_first_and_second_cell_count_increase_after_cell_type_conversion(
data,
np.array([[4.0, 3.0, 3.0, 4.0, 5.0, 5.0],
[1.0, 2.0, 2.0, 2.0, 2.0, 3.0]]),
np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]), diff=True))
def test_event_find_third_cell_count_increase_after_first_and_second_cell_count_increase_after_cell_type_conversion_2(self):
data = me.Data('data_init')
assert((True, 5.0) == me.Data.event_find_third_cell_count_increase_after_first_and_second_cell_count_increase_after_cell_type_conversion(
data,
np.array([[4.0, 3.0, 3.0, 4.0, 5.0, 5.0],
[1.0, 2.0, 2.0, 2.0, 2.0, 3.0]]),
np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]), diff=False))
### test gamma histogram fitting
def test_gamma_compute_bin_probabilities_sum(self):
data = me.Data('data_init')
data_time_values = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
data.gamma_fit_bins = np.concatenate(([-np.inf], data_time_values, [np.inf]))
assert(0.9999 < sum(data.gamma_compute_bin_probabilities([4.0, 0.5])) < 1.0001)
def test_gamma_compute_bin_probabilities_values(self):
data = me.Data('data_init')
data_time_values = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
data.gamma_fit_bins = np.concatenate(([-np.inf], data_time_values, [np.inf]))
res = np.array([0. , 0.14287654, 0.42365334, 0.28226624, 0.10882377, 0.03204406, 0.01033605])
lower_res = res - 0.0001
upper_res = res + 0.0001
assert(np.all([np.all(lower_res < data.gamma_compute_bin_probabilities([4.0, 0.5])),
np.all(data.gamma_compute_bin_probabilities([4.0, 0.5]) < upper_res)]) == True)
def test_check_bin_digitalisation(self):
data = me.Data('data_init')
data_time_values = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
data.gamma_fit_bins = np.concatenate(([-np.inf], data_time_values, [np.inf]))
assert(np.all(np.array([0, 1, 1, 2, 2, 6]) == np.digitize([0.0, 0.1, 1.0, 1.8, 2.0, 5.2], data.gamma_fit_bins, right=True) - 1) == True)
def test_gamma_fit_binned_waiting_times(self):
data = me.Data('data_init')
data.data_time_values = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
theta = [4.0, 0.5]
waiting_times_arr = np.random.gamma(theta[0], theta[1], 100000)
data.gamma_fit_binned_waiting_times(waiting_times_arr)
theta_fit = data.gamma_fit_theta
assert(3.8 < theta_fit[0] < 4.2)
assert(0.4 < theta_fit[1] < 0.6)
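The tolerance band checked above (shape near 4, scale near 0.5) can be sanity-checked without the binned fit: a simple method-of-moments estimate on the same kind of gamma sample lands in the same range. This is a generic sketch, not the `gamma_fit_binned_waiting_times` implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=0.5, size=100_000)

mean, var = samples.mean(), samples.var()
scale_hat = var / mean        # method of moments: theta = var / mean
shape_hat = mean / scale_hat  # and k = mean / theta (= mean**2 / var)
```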
### test load method
# @pytest.mark.slow
def test_load_count_data(self):
variables = ['A', 'B']
time_values = np.linspace(0.0, 4.0, num=5)
count_data = np.array([[[0.0, 0.0, 2.0, 2.0, 4.0], [1.0, 1.0, 1.0, 1.0, 0.0]],
[[0.0, 1.0, 2.0, 4.0, 4.0], [1.0, 1.0, 0.0, 0.0, 0.0]],
[[0.0, 1.0, 1.0, 4.0, 4.0], [1.0, 1.0, 0.0, 0.0, 0.0]],
[[0.0, 0.0, 0.0, 2.0, 4.0], [1.0, 0.0, 0.0, 0.0, 0.0]]])
data = me.Data('data_init')
data.load(variables, time_values, count_data)
sol_mean = np.array([[[0., 0.5, 1.25, 3., 4. ],
[1., 0.75, 0.25, 0.25, 0. ]],
[[0., 0.25096378, 0.41313568, 0.50182694, 0. ],
[0., 0.21847682, 0.21654396, 0.21624184, 0. ]]])
sol_var = np.array([[[0., 0.33333333, 0.91666667, 1.33333333, 0., ],
[0., 0.25, 0.25, 0.25, 0., ]],
[[0., 0.10247419, 0.39239737, 0.406103, 0., ],
[0., 0.13197367, 0.13279328, 0.13272021, 0., ]]])
sol_cov = np.array([[[ 0., 0.16666667, 0.25, -0.33333333, 0. ]],
[[ 0., 0.11379549, 0.18195518, 0.22722941, 0. ]]])
np.testing.assert_allclose(sol_mean, data.data_mean, rtol=0.1)
np.testing.assert_allclose(sol_var, data.data_variance, rtol=0.1)
np.testing.assert_allclose(sol_cov, data.data_covariance, rtol=0.1)
assert(data.data_mean_exists_only == False)
assert(data.data_num_variables == 2)
assert(data.data_num_time_values == 5)
assert(data.data_mean_order == [{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)}])
assert(data.data_variance_order == [{'variables': ('A', 'A'), 'summary_indices': 0, 'count_indices': (0, 0)},
{'variables': ('B', 'B'), 'summary_indices': 1, 'count_indices': (1, 1)}])
assert(data.data_covariance_order == [{'variables': ('A', 'B'), 'summary_indices': 0, 'count_indices': (0, 1)}])
assert(data.data_type == 'counts')
assert(data.data_num_values == 25)
assert(data.data_num_values_mean_only == 10)
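The first slice of `sol_mean` holds the replicate mean and the second slice its bootstrapped standard error, which is why the comparison uses `rtol=0.1`. For variable A at the second time point the four replicates are 0, 1, 1, 0; a hedged sketch of that computation, independent of the library:

```python
import numpy as np

vals = np.array([0.0, 1.0, 1.0, 0.0])   # variable A across the 4 replicates at t = 1
mean = vals.mean()                      # 0.5, cf. sol_mean[0, 0, 1]

rng = np.random.default_rng(0)
boot_means = rng.choice(vals, size=(10_000, vals.size)).mean(axis=1)
se = boot_means.std()                   # near 0.25, cf. sol_mean[1, 0, 1] ~ 0.251
```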
def test_load_summary_data(self):
variables = ['A', 'B']
time_values = np.linspace(0.0, 4.0, num=5)
sol_mean = np.array([[[0., 0.5, 1.25, 3., 4. ],
[1., 0.75, 0.25, 0.25, 0. ]],
[[0., 0.25096378, 0.41313568, 0.50182694, 0. ],
[0., 0.21847682, 0.21654396, 0.21624184, 0. ]]])
sol_var = np.array([[[0., 0.33333333, 0.91666667, 1.33333333, 0., ],
[0., 0.25, 0.25, 0.25, 0., ]],
[[0., 0.10247419, 0.39239737, 0.406103, 0., ],
[0., 0.13197367, 0.13279328, 0.13272021, 0., ]]])
sol_cov = np.array([[[ 0., 0.16666667, 0.25, -0.33333333, 0. ]],
[[ 0., 0.11379549, 0.18195518, 0.22722941, 0. ]]])
data = me.Data('data_init')
data.load(variables, time_values, None, data_type='summary',
mean_data=sol_mean, var_data=sol_var, cov_data=sol_cov)
np.testing.assert_allclose(sol_mean, data.data_mean)
np.testing.assert_allclose(sol_var, data.data_variance)
np.testing.assert_allclose(sol_cov, data.data_covariance)
assert(data.data_mean_exists_only == False)
assert(data.data_num_variables == 2)
assert(data.data_num_time_values == 5)
assert(data.data_mean_order == [{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)}])
assert(data.data_variance_order == [{'variables': ('A', 'A'), 'summary_indices': 0, 'count_indices': (0, 0)},
{'variables': ('B', 'B'), 'summary_indices': 1, 'count_indices': (1, 1)}])
assert(data.data_covariance_order == [{'variables': ('A', 'B'), 'summary_indices': 0, 'count_indices': (0, 1)}])
assert(data.data_type == 'summary')
assert(data.data_num_values == 25)
assert(data.data_num_values_mean_only == 10)
def test_load_summary_data_mean_only_1(self):
variables = ['A', 'B']
time_values = np.linspace(0.0, 4.0, num=5)
sol_mean = np.array([[[0., 0.5, 1.25, 3., 4. ],
[1., 0.75, 0.25, 0.25, 0. ]],
[[0., 0.25096378, 0.41313568, 0.50182694, 0. ],
[0., 0.21847682, 0.21654396, 0.21624184, 0. ]]])
sol_var = np.empty((2, 0, 5))
sol_cov = np.empty((2, 0, 5))
data = me.Data('data_init')
data.load(variables, time_values, None, data_type='summary',
mean_data=sol_mean, var_data=sol_var, cov_data=sol_cov)
np.testing.assert_allclose(sol_mean, data.data_mean)
np.testing.assert_allclose(sol_var, data.data_variance)
np.testing.assert_allclose(sol_cov, data.data_covariance)
assert(data.data_mean_exists_only == True)
assert(data.data_num_variables == 2)
assert(data.data_num_time_values == 5)
assert(data.data_mean_order == [{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)}])
assert(data.data_variance_order == [])
assert(data.data_covariance_order == [])
assert(data.data_type == 'summary')
assert(data.data_num_values == 10)
assert(data.data_num_values_mean_only == 10)
def test_load_summary_data_mean_only_2(self):
variables = ['A', 'B']
time_values = np.linspace(0.0, 4.0, num=5)
sol_mean = np.array([[[0., 0.5, 1.25, 3., 4. ],
[1., 0.75, 0.25, 0.25, 0. ]],
[[0., 0.25096378, 0.41313568, 0.50182694, 0. ],
[0., 0.21847682, 0.21654396, 0.21624184, 0. ]]])
sol_var = np.empty((2, 0, 5))
sol_cov = np.empty((2, 0, 5))
data = me.Data('data_init')
data.load(variables, time_values, None, data_type='summary', mean_data=sol_mean)
np.testing.assert_allclose(sol_mean, data.data_mean)
np.testing.assert_allclose(sol_var, data.data_variance)
np.testing.assert_allclose(sol_cov, data.data_covariance)
assert(data.data_mean_exists_only == True)
assert(data.data_num_variables == 2)
assert(data.data_num_time_values == 5)
assert(data.data_mean_order == [{'variables': 'A', 'summary_indices': 0, 'count_indices': (0,)},
{'variables': 'B', 'summary_indices': 1, 'count_indices': (1,)}])
assert(data.data_variance_order == [])
assert(data.data_covariance_order == [])
assert(data.data_type == 'summary')
assert(data.data_num_values == 10)
assert(data.data_num_values_mean_only == 10)
# --- bpyCruft/nextLayer.py (repo: feurig/mysorrybot, license: BSD-3-Clause) ---
import bpy
import math
import mathutils
standLocationRadius=56.0
standRadius=3.5
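The four stand centres used below — (0, -56), (0, 56), (-56, 0), (56, 0) — are just `standLocationRadius` rotated around the origin in 90-degree steps. A small sketch (hypothetical, not part of the original script) that reproduces them:

```python
import math

R = 56.0                          # standLocationRadius
angles_deg = (270, 90, 180, 0)    # order in which the script places the stands
locations = [(round(R * math.cos(math.radians(a)), 6),
              round(R * math.sin(math.radians(a)), 6)) for a in angles_deg]
# [(0.0, -56.0), (0.0, 56.0), (-56.0, 0.0), (56.0, 0.0)] up to signed zeros
```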
bpy.ops.mesh.primitive_cylinder_add(vertices=64, radius=59, depth=5.0, location=(0,0, 1.25))
bigHole = bpy.context.selected_objects[0]
bigHole.name="BigHole"
bpy.ops.mesh.primitive_cylinder_add(vertices=64, radius=61, depth=3.0, location=(0,0, 1.5))
nextLayer = bpy.context.selected_objects[0]
nextLayer.name="NextLayer"
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = bigHole
bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['BigHole'].select_set(state=True)
bpy.ops.object.delete()
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=1.5, depth=5.0, location=(0,-(standLocationRadius), 1.25))
screwHole = bpy.context.selected_objects[0]
screwHole.name="ScrewHole"
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=4.0, depth=3.0, location=(0,-(standLocationRadius), 1.5))
stand1 = bpy.context.selected_objects[0]
stand1.name="Stand1"
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = screwHole
bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['ScrewHole'].select_set(state=True)
bpy.ops.object.delete()
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['Stand1'].select_set(state=False)
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = nextLayer
bpy.context.object.modifiers["Boolean"].operation = 'UNION'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.delete()
bpy.data.objects['Stand1'].select_set(state=True)
nextLayer = bpy.context.selected_objects[0]
nextLayer.name="NextLayer"
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=1.5, depth=5.0, location=(0,(standLocationRadius), 1.25))
screwHole = bpy.context.selected_objects[0]
screwHole.name="ScrewHole"
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=(standRadius), depth=3.0, location=(0,(standLocationRadius), 1.5))
stand = bpy.context.selected_objects[0]
stand.name="Stand2"
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = screwHole
bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['ScrewHole'].select_set(state=True)
bpy.ops.object.delete()
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['Stand2'].select_set(state=False)
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = nextLayer
bpy.context.object.modifiers["Boolean"].operation = 'UNION'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.delete()
bpy.data.objects['Stand2'].select_set(state=True)
nextLayer = bpy.context.selected_objects[0]
nextLayer.name="NextLayer"
bpy.context.scene.cursor.location = mathutils.Vector((0.0,0.0,0.0))
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=1.5, depth=5.0,
location=(-(standLocationRadius), 0.0, 1.25))
screwHole = bpy.context.selected_objects[0]
screwHole.name="ScrewHole"
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=(standRadius), depth=3.0,
location=(-(standLocationRadius), 0.0, 1.5))
stand3 = bpy.context.selected_objects[0]
stand3.name="Stand3"
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = screwHole
bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['ScrewHole'].select_set(state=True)
bpy.ops.object.delete()
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['Stand3'].select_set(state=False)
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = nextLayer
bpy.context.object.modifiers["Boolean"].operation = 'UNION'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.delete()
bpy.data.objects['Stand3'].select_set(state=True)
nextLayer = bpy.context.selected_objects[0]
nextLayer.name="NextLayer"
bpy.context.scene.cursor.location = mathutils.Vector((0.0,0.0,0.0))
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=1.5, depth=5.0, location=((standLocationRadius), 0.0, 1.25))
screwHole = bpy.context.selected_objects[0]
screwHole.name="ScrewHole"
bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=(standRadius), depth=3.0, location=((standLocationRadius), 0.0, 1.5))
lastStand = bpy.context.selected_objects[0]
lastStand.name="LastStand"
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = screwHole
bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['ScrewHole'].select_set(state=True)
bpy.ops.object.delete()
bpy.ops.object.select_all(action='DESELECT')
theStand = bpy.data.objects['LastStand']
nextLayer = bpy.data.objects['NextLayer']
nextLayer.select_set(state=True)
# 'LastStand' is still the active object from its creation above, so the
# Boolean union is added to it (as in the previous stand blocks); note that
# bpy.context.active_object is read-only and must not be assigned directly.
bpy.ops.object.modifier_add(type='BOOLEAN')
bpy.context.object.modifiers["Boolean"].object = nextLayer
bpy.context.object.modifiers["Boolean"].operation = 'UNION'
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="Boolean")
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['NextLayer'].select_set(state=True)
bpy.ops.object.delete()
nextLayer = bpy.data.objects['LastStand']
nextLayer.name = "NextLayer"
bpy.ops.object.select_all(action='DESELECT')
nextLayer.select_set(state=True)
bpy.context.scene.cursor.location = mathutils.Vector((0.0,0.0,0.0))
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
nextLayer.rotation_euler[2] = math.radians(-45.0)

# --- scripts/ssc/COREL/config_library.py (repo: MrBellamonte/MT-VAEs-TDA, license: MIT) ---
from fractions import Fraction
import numpy as np
from src.datasets.datasets import Spheres
from src.evaluation.config import ConfigEval
from src.models.COREL.config import ConfigGrid_COREL, ConfigCOREL
from src.models.autoencoder.autoencoders import Autoencoder_MLP
from src.models.loss_collection import L1Loss, TwoSidedHingeLoss, HingeLoss
placeholder_config_corel = ConfigCOREL(
learning_rate=1/1000,
batch_size=16,
n_epochs=2,
weight_decay=0,
early_stopping=5,
rec_loss=L1Loss(),
top_loss=L1Loss(),
rec_loss_weight=1,
top_loss_weight=1,
model_class=Autoencoder_MLP,
model_kwargs={
'input_dim' : 101,
'latent_dim' : 2,
'size_hidden_layers': [128, 64, 32]
},
dataset=Spheres(),
sampling_kwargs={
'n_samples': 500
},
eval=ConfigEval(
active=True,
evaluate_on='test',
save_eval_latent=True,
save_train_latent=True,
online_visualization=True,
k_min=5,
k_max=105,
k_step=25,
),
uid = '',
)
test_grid_local = ConfigGrid_COREL(
learning_rate=[1/1000],
batch_size=[64],
n_epochs=[20],
weight_decay=[10e-5],
early_stopping=[5],
rec_loss=[L1Loss()],
top_loss=[L1Loss()],
rec_loss_weight=[1],
top_loss_weight=[1],
model_class=[Autoencoder_MLP],
model_kwargs={
'input_dim' : [101],
'latent_dim' : [2],
'size_hidden_layers': [[32, 32]]
},
dataset=[Spheres()],
sampling_kwargs={
'n_samples': [64]
},
eval=[ConfigEval(
active = True,
evaluate_on = 'test',
save_eval_latent = True,
save_train_latent = True,
online_visualization = True,
k_min=5,
k_max=105,
k_step=25,
)],
uid = [''],
experiment_dir='/home/simonberg/PycharmProjects/MT-VAEs-TDA/output/test_simulator/TopoAE_testing_COREL',
seed = 1,
verbose = False
)
grid_spheres = ConfigGrid_COREL(
learning_rate=[1/1000],
batch_size=[int(i) for i in np.logspace(6,9,num=4,base = 2.0)],# [int(i) for i in np.logspace(4,9,num=6,base = 2.0)],
n_epochs=[100],
weight_decay=[10e-5],
early_stopping=[5],
rec_loss=[L1Loss()],
top_loss=[L1Loss()],
rec_loss_weight=[1],
top_loss_weight=[i for i in np.logspace(-8,0,num=9,base = 2.0)],
model_class=[Autoencoder_MLP],
model_kwargs={
'input_dim' : [101],
'latent_dim' : [2],
'size_hidden_layers': [[32, 32]]
},
dataset=[Spheres()],
sampling_kwargs={
'n_samples': [640]
},
eval=[ConfigEval(
active = True,
evaluate_on = 'test',
save_eval_latent = True,
save_train_latent = True,
online_visualization = True,
k_min=5,
k_max=105,
k_step=25,
)],
uid = [''],
experiment_dir='/home/simonberg/PycharmProjects/MT-VAEs-TDA/output/output/TopoAE/Spheres/l1',
seed = 1,
verbose = False
)
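For orientation (a quick check, not part of the config module): `np.logspace` with `base=2.0` enumerates powers of two, so `grid_spheres` spans 4 batch sizes times 9 top-loss weights, i.e. 36 runs.

```python
import numpy as np
from itertools import product

batch_sizes = [int(i) for i in np.logspace(6, 9, num=4, base=2.0)]  # [64, 128, 256, 512]
weights = list(np.logspace(-8, 0, num=9, base=2.0))                 # 2**-8 ... 2**0
n_runs = len(list(product(batch_sizes, weights)))                   # 36
```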
grid_spheres_ts = ConfigGrid_COREL(
learning_rate=[1/1000],
batch_size=[int(i) for i in np.logspace(4,9,num=6,base = 2.0)],
n_epochs=[100],
weight_decay=[10e-5],
early_stopping=[5],
rec_loss=[L1Loss()],
top_loss=[TwoSidedHingeLoss(ratio=1/4)],
rec_loss_weight=[1],
top_loss_weight=[i for i in np.logspace(-4,0,num=5,base = 2.0)], #[i for i in np.logspace(-8,0,num=9,base = 2.0)],
model_class=[Autoencoder_MLP],
model_kwargs={
'input_dim' : [101],
'latent_dim' : [2],
'size_hidden_layers': [[32, 32]]
},
dataset=[Spheres()],
sampling_kwargs={
'n_samples': [640]
},
eval=[ConfigEval(
active = True,
evaluate_on = 'test',
save_eval_latent = True,
save_train_latent = True,
online_visualization = False,
k_min=5,
k_max=105,
k_step=25,
)],
uid = [''],
experiment_dir='/home/simonberg/PycharmProjects/MT-VAEs-TDA/output/output/TopoAE/Spheres/ts',
seed = 1,
verbose = False
)
grid_spheres_ts_sq = ConfigGrid_COREL(
learning_rate=[1/1000],
batch_size=[32],
n_epochs=[100],
weight_decay=[10e-5],
early_stopping=[10],
rec_loss=[L1Loss()],
top_loss=[TwoSidedHingeLoss(ratio=1/4, penalty_type= 'squared')],
rec_loss_weight=[1],
top_loss_weight=[i for i in np.logspace(-6,-4,num=3,base = 2.0)], #[i for i in np.logspace(-8,0,num=9,base = 2.0)],
model_class=[Autoencoder_MLP],
model_kwargs={
'input_dim' : [101],
'latent_dim' : [2],
'size_hidden_layers': [[32, 32]]
},
dataset=[Spheres()],
sampling_kwargs={
'n_samples': [640]
},
eval=[ConfigEval(
active = True,
evaluate_on = 'test',
save_eval_latent = True,
save_train_latent = True,
online_visualization = False,
k_min=5,
k_max=105,
k_step=25,
)],
uid = [''],
experiment_dir='/home/simonberg/PycharmProjects/MT-VAEs-TDA/output/output/TopoAE/Spheres/ts_sq',
seed = 1,
verbose = False
)
grid_spheres_ts_large = ConfigGrid_COREL(
learning_rate=[1/1000],
batch_size=[int(i) for i in np.logspace(4,9,num=6,base = 2.0)],
n_epochs=[100],
weight_decay=[10e-5],
early_stopping=[10],
rec_loss=[L1Loss()],
top_loss=[TwoSidedHingeLoss(ratio=1/4)],
rec_loss_weight=[1],
top_loss_weight=[i for i in np.logspace(-8,0,num=9,base = 2.0)],
model_class=[Autoencoder_MLP],
model_kwargs={
'input_dim' : [101],
'latent_dim' : [2],
'size_hidden_layers': [[128, 64, 32]]
},
dataset=[Spheres()],
sampling_kwargs={
'n_samples': [640]
},
eval=[ConfigEval(
active = True,
evaluate_on = 'test',
save_eval_latent = True,
save_train_latent = True,
online_visualization = False,
k_min=5,
k_max=105,
k_step=25,
)],
uid = [''],
experiment_dir='/home/simonberg/PycharmProjects/MT-VAEs-TDA/output/output/TopoAE/Spheres/ts_large2',
seed = 2,
verbose = False
)
# conifg_spheres_fullbatch2_l1 = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[25,50,100,250,500],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[TwoSidedHingeLoss(ratio=1/4)],
# top_loss_weight=[float(Fraction(1/i))for i in np.logspace(-2,9,num=12,base = 2.0)],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [25]
# }
# )
# conifg_spheres_fullbatch2_tshinge = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[25,50,100,250,500],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[TwoSidedHingeLoss(ratio=1/4)],
# top_loss_weight=[float(Fraction(1/i))for i in np.logspace(-2,9,num=12,base = 2.0)],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [25]
# }
# )
#
#
#
# conifg_spheres_fullbatch_l1 = ConfigGrid_COREL(
# learning_rate=[1/1000],
# #batch_size=[int(i) for i in np.logspace(3,9,num=7,base = 2.0)],
# batch_size=[500],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[L1Loss()],
# top_loss_weight=[float(Fraction(1/i))for i in np.logspace(-2,9,num=12,base = 2.0)],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [25]
# }
# )
#
#
#
# test_run_leonhard = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[64, 128],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[L1Loss()],
# top_loss_weight=[1],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [250]
# }
# )
#
# config_grid_Spheres_n3_250_l1 = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[8,16,32, 64, 128, 256, 512],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[L1Loss()],
# top_loss_weight=[1/64,1/32,1/16,1/8,1/4,1/2,1,2,4],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres(n_spheres=3)],
# sampling_kwargs={
# 'n_samples': [250]
# }
# )
#
#
# config_grid_Spheres_n3_250_tshinge = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[8,16,32, 64, 128, 256, 512],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[TwoSidedHingeLoss(ratio=1/2),TwoSidedHingeLoss(ratio=1/4)],
# top_loss_weight=[1/64,1/32,1/16,1/8,1/4,1/2,1,2,4],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres(n_spheres=3)],
# sampling_kwargs={
# 'n_samples': [250]
# }
# )
#
#
#
# config_grid_Spheres_L1 = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[32, 64, 128, 256, 512],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[L1Loss()],
# top_loss_weight=[1/2048, 1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8,1/4,1/2,1,2,4,8,16,32],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [500]
# }
# )
#
#
# config_grid_Spheres_benchmark = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[32, 64, 128, 256, 512],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[L1Loss()],
# top_loss_weight=[0],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [500]
# }
# )
#
# config_grid_Spheres_Hinge = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[32, 64, 128, 256, 512],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[HingeLoss(), HingeLoss(penalty_type='squared')],
# top_loss_weight=[1/2048, 1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8,1/4,1/2,1,2,4,8,16,32],
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [500]
# }
# )
#
#
# config_grid_Spheres_TwoSidedHinge = ConfigGrid_COREL(
# learning_rate=[1/1000],
# batch_size=[32, 64, 128, 256, 512],
# n_epochs=[40],
# rec_loss=[L1Loss()],
# rec_loss_weight=[1],
# top_loss=[TwoSidedHingeLoss(), TwoSidedHingeLoss(penalty_type='squared'),TwoSidedHingeLoss(ratio=1/4), TwoSidedHingeLoss(ratio=1/4,penalty_type='squared')],
# top_loss_weight=[1/2048, 1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8], #[1/4,1/2,1,2,4,8,16,32]
# model_class=[Autoencoder_MLP],
# model_kwargs={
# 'input_dim' : [101],
# 'latent_dim' : [2],
# 'size_hidden_layers': [[128, 64, 32]]
# },
# dataset=[Spheres()],
# sampling_kwargs={
# 'n_samples': [500]
# }
# )
# OLD CONFIGS
config_grid_testSpheres = {
    'train_args': {
        'learning_rate': [1/1000],
        'batch_size': [32, 64, 128],
        'n_epochs': [2],
        'rec_loss': {
            'loss_class': [L1Loss()],
            'weight': [1]
        },
        'top_loss': {
            'loss_class': [L1Loss()],
            'weight': [1]
        },
    },
    'model_args': {
        'model_class': [Autoencoder_MLP],
        'kwargs': {
            'input_dim': [101],
            'latent_dim': [2],
            'size_hidden_layers': [[128, 64, 32]]
        }
    },
    'data_args': {
        'dataset': Spheres(),
        'kwargs': {
            'n_samples': 500
        }
    }
}

config_grid_test_tshinge = {
    'train_args': {
        'learning_rate': [1/1000],
        'batch_size': [32, 64, 128],
        'n_epochs': [2],
        'rec_loss': {
            'loss_class': [L1Loss()],
            'weight': [1]
        },
        'top_loss': {
            'loss_class': [TwoSidedHingeLoss()],
            'weight': [1]
        },
    },
    'model_args': {
        'model_class': [Autoencoder_MLP],
        'kwargs': {
            'input_dim': [101],
            'latent_dim': [2],
            'size_hidden_layers': [[128, 64, 32]]
        }
    },
    'data_args': {
        'dataset': Spheres(),
        'kwargs': {
            'n_samples': 500
        }
    }
}
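The config-grid dicts above list candidate values per hyperparameter; a run is one combination drawn from the cross product of those lists. Below is a minimal, self-contained sketch of how such a grid can be expanded with `itertools.product`. The `expand_grid` helper is hypothetical and for illustration only; the real `ConfigGrid_COREL` class may expand its grids differently.

```python
from itertools import product


def expand_grid(grid):
    """Expand {param: [values...]} into a list of concrete config dicts.

    Hypothetical helper for illustration; not part of ConfigGrid_COREL.
    """
    keys = sorted(grid)
    return [dict(zip(keys, combo)) for combo in product(*(grid[k] for k in keys))]


# The train_args above list 3 batch sizes and one value for everything
# else, so the grid expands to 3 concrete configurations.
runs = expand_grid({
    'learning_rate': [1 / 1000],
    'batch_size': [32, 64, 128],
    'n_epochs': [2],
})
print(len(runs))               # 3
print(runs[0]['batch_size'])   # 32
```

With larger grids (e.g. 7 batch sizes times 9 top-loss weights in `config_grid_Spheres_n3_250_l1`), the same expansion yields 63 runs, which is why the commented-out grids above were trimmed for cluster submission.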

# --- File: PP4E-Examples-1.4/Examples/PP4E/System/helloshell.py (repo: AngelLiang/PP4E, license: MIT) ---

# a Python program
print('The Meaning of Life')

# --- File: xyz/icexmoon/java_notes/ch1/foreach_python/main.py (repo: icexmoon/java-notebook, license: Apache-2.0) ---

for i in range(1, 11):
    print(i, sep="", end=" ")
# 1 2 3 4 5 6 7 8 9 10

# --- File: TAO/Linux_new/bin/pyside/modules.py (repo: dendisuhubdy/grokmachine, license: BSD-3-Clause) ---

import sidetrack
import sttun

# --- File: pysend/__init__.py (repo: growdaisy/pysend, license: MIT) ---

from .classes import Contact, Server, Email

# --- File: pyecs/__init__.py (repo: en0/pyecs, license: MIT) ---

from .entity_manager import EntityManager, EntityManagerOpts
from .game import Game
from .game_builder import GameBuilder
from .system_manager import SystemManager
from .entity import Entity

# --- File: example_project/some_modules/third_modules/a102.py (repo: Yuriy-Leonov/cython_imports_limit_issue, license: MIT) ---

class A102:
    pass

# --- File: kbase_report_state/__main__.py (repo: kbaseIncubator/kbase_report_state, license: MIT) ---

"""Main server CLI for kbase report state."""
from kbase_report_state import serve
if __name__ == "__main__":
    serve()

# --- File: textanonymize/__init__.py (repo: pierrerochet/textanonymize, license: Apache-2.0) ---

__version__ = "0.1.0"
from textanonymize.lang.all import *
from textanonymize.lang.fr import *

# --- File: nVidiaModel.py (repo: scrambleegg7/CarND-Behavioral-Cloning-P3, license: MIT) ---

#
# nVidiaModel
#
import keras
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense, Lambda, Convolution2D, Cropping2D, Conv2D
from keras.layers import Dropout, Activation
from keras.regularizers import l2 # activity_l2
from keras.layers.pooling import MaxPooling2D
from keras.optimizers import SGD, Adam, Nadam
class nVidiaModelClass():

    def __init__(self):
        print(keras.__version__)
        self.kversion = keras.__version__
        # self.buildModel()

    def createPreProcessingLayers(self):
        """
        Creates a model with the initial pre-processing layers.
        """
        model = Sequential()
        model.add(Lambda(lambda x: (x / 127.5) - 1., input_shape=(160, 320, 3)))
        # crop the image: 50 px from the top, 20 px from the bottom
        model.add(Cropping2D(cropping=((50, 20), (0, 0))))
        return model

    def createNormalizedLayers(self):
        """
        Creates a model with the initial pre-processing layers.
        """
        # input is an already-shrunk 66 x 200 x 3 YCrCb image
        model = Sequential()
        model.add(Lambda(lambda x: (x / 127.5) - 1., input_shape=(66, 200, 3)))
        # no cropping needed for the pre-shrunk input
        # model.add(Cropping2D(cropping=((50, 20), (0, 0))))
        return model

    def buildModel(self):
        """
        Creates the nVidia Autonomous Car Group model.
        """
        model = self.createPreProcessingLayers()
        if self.kversion == "1.2.1":
            # Keras v1 API: use Convolution2D to suppress the Keras v2
            # warning that Conv2D should be used.
            model.add(Convolution2D(24, 5, 5, subsample=(2, 2), activation='relu'))
            model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
            model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
            model.add(Convolution2D(64, 3, 3, activation='relu'))
            model.add(Convolution2D(64, 3, 3, activation='relu'))
        else:
            model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu', name="conv1"))
            model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu', name="conv2"))
            model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu', name="conv3"))
            model.add(Conv2D(64, (3, 3), activation='relu', name="conv4"))
            model.add(Conv2D(64, (3, 3), activation='relu', name="conv5"))
        model.add(Flatten())
        model.add(Dense(100))
        model.add(Dense(50))
        model.add(Dense(10))
        model.add(Dense(1))
        return model

    def buildModel_Normal(self):
        """
        Creates the nVidia Autonomous Car Group model (ELU activations).
        """
        model = self.createPreProcessingLayers()
        if self.kversion == "1.2.1":
            # Keras v1 API (see buildModel).
            model.add(Convolution2D(24, 5, 5, subsample=(2, 2), activation='elu'))
            model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='elu'))
            model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='elu'))
            model.add(Convolution2D(64, 3, 3, activation='elu'))
            model.add(Convolution2D(64, 3, 3, activation='elu'))
        else:
            model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='elu', name="conv1"))
            model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu', name="conv2"))
            model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu', name="conv3"))
            model.add(Conv2D(64, (3, 3), activation='elu', name="conv4"))
            model.add(Conv2D(64, (3, 3), activation='elu', name="conv5"))
        model.add(Flatten())
        model.add(Dense(100))
        model.add(Dense(50))
        model.add(Dense(10))
        model.add(Dense(1))
        return model

    def buildModel_drop(self):
        """
        Creates the nVidia Autonomous Car Group model with dropout and L2 regularization.
        """
        model = self.createPreProcessingLayers()
        if self.kversion == "1.2.1":
            # Keras v1 API (see buildModel).
            # 31 x 98 x 24
            model.add(Convolution2D(24, 5, 5, subsample=(2, 2), activation='elu', init="glorot_normal", W_regularizer=l2(0.001)))
            model.add(Dropout(0.1))  # keep_prob 0.9
            # 14 x 47 x 36
            model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='elu', init="glorot_normal", W_regularizer=l2(0.001)))
            model.add(Dropout(0.2))  # keep_prob 0.8
            # 5 x 22 x 48
            model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='elu', init="glorot_normal", W_regularizer=l2(0.001)))
            model.add(Dropout(0.2))  # keep_prob 0.8
            # 3 x 20 x 64
            model.add(Convolution2D(64, 3, 3, subsample=(1, 1), activation='elu', init="glorot_normal", W_regularizer=l2(0.001)))
            model.add(Dropout(0.2))  # keep_prob 0.8
            # 1 x 18 x 64
            model.add(Convolution2D(64, 3, 3, subsample=(1, 1), activation='elu', init="glorot_normal", W_regularizer=l2(0.001)))
            # model.add(Dropout(0.2))  # keep_prob 0.8
            model.add(Flatten())
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(100, activation='elu', init='glorot_normal', W_regularizer=l2(0.001)))
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(50, activation='elu', init='glorot_normal', W_regularizer=l2(0.001)))
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(10, activation='elu', init='glorot_normal', W_regularizer=l2(0.001)))
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(1, activation='linear', init='glorot_normal'))
        else:
            model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='elu', kernel_initializer="he_uniform", kernel_regularizer=l2(0.01), name="conv1"))
            model.add(Dropout(0.1))  # keep_rate 0.9
            model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu', kernel_initializer="he_uniform", kernel_regularizer=l2(0.01), name="conv2"))
            model.add(Dropout(0.2))  # keep_rate 0.8
            model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu', kernel_initializer="he_uniform", kernel_regularizer=l2(0.01), name="conv3"))
            model.add(Dropout(0.2))  # keep_rate 0.8
            model.add(Conv2D(64, (3, 3), activation='elu', kernel_initializer="he_uniform", kernel_regularizer=l2(0.01), name="conv4"))
            model.add(Dropout(0.2))  # keep_rate 0.8
            model.add(Conv2D(64, (3, 3), activation='elu', kernel_initializer="he_uniform", kernel_regularizer=l2(0.01), name="conv5"))
            model.add(Flatten())
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(100, activation='elu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01)))
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(50, activation='elu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01)))
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(10, activation='elu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01)))
            model.add(Dropout(0.5))  # keep_prob 0.5
            model.add(Dense(1, activation='linear', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01)))
        return model


def main():
    nVidia = nVidiaModelClass()
    model = nVidia.buildModel()
    model.summary()


if __name__ == "__main__":
    main()

# --- File: start_scheduler.py (repo: ProgramRipper/biliob-spider, license: MIT) ---

from biliob_scheduler.scheduler import auto_crawl_task
from biliob_scheduler.scheduler import set_schedule
set_schedule()
auto_crawl_task()

# --- File: aaf2/model/ext/typedefs.py (repo: shahbazk8194/pyaaf2, license: MIT) ---

ints = {
}
enums = {
"ColorSitingType" : ("02010105-0000-0000-060e-2b3401040101", "01010100-0000-0000-060e-2b3401040101",{
5 : "LineAlternating",
6 : "VerticalMidpoint",
}
),
"AvidPannerKindType" : ("3659b342-4f19-4316-9309-f139434a94e5", "01010300-0000-0000-060e-2b3401040101",{
1 : "AvidPannerKind_Stereo",
2 : "AvidPannerKind_LCR",
3 : "AvidPannerKind_Quad",
4 : "AvidPannerKind_LCRS",
5 : "AvidPannerKind_5dot0",
6 : "AvidPannerKind_5dot1",
7 : "AvidPannerKind_6dot0",
8 : "AvidPannerKind_6dot1",
9 : "AvidPannerKind_7dot0",
10 : "AvidPannerKind_7dot1",
}
),
"AvidEssenceElementSizeKind" : ("0e040201-0101-0000-060e-2b3401040101", "01010100-0000-0000-060e-2b3401040101",{
0 : "AvidEssenceElementSizeKind_Unknown",
1 : "AvidEssenceElementSizeKind_CBE",
2 : "AvidEssenceElementSizeKind_VBE",
}
),
}
records = {
"BoundsBox" : ("0e040301-0200-0000-060e-2b3401040101", (
("PositionX" ,"03010100-0000-0000-060e-2b3401040101"),
("PositionY" ,"03010100-0000-0000-060e-2b3401040101"),
("Width" ,"03010100-0000-0000-060e-2b3401040101"),
("Height" ,"03010100-0000-0000-060e-2b3401040101"),
),
),
"AvidManifestElement" : ("0e040301-0100-0000-060e-2b3401040101", (
("did" ,"01010100-0000-0000-060e-2b3401040101"),
("sdid" ,"01010100-0000-0000-060e-2b3401040101"),
),
),
"EqualizationBand" : ("c4c670c9-bd44-11d3-80e9-006008143e6f", (
("type" ,"01030100-0000-0000-060e-2b3401040101"),
("frequency" ,"01010300-0000-0000-060e-2b3401040101"),
("gain" ,"01010300-0000-0000-060e-2b3401040101"),
("q" ,"01010300-0000-0000-060e-2b3401040101"),
("enable" ,"01040100-0000-0000-060e-2b3401040101"),
),
),
"RGBColor" : ("e96e6d43-c383-11d3-a069-006094eb75cb", (
("red" ,"01010200-0000-0000-060e-2b3401040101"),
("green" ,"01010200-0000-0000-060e-2b3401040101"),
("blue" ,"01010200-0000-0000-060e-2b3401040101"),
),
),
"AudioSuitePlugInChunk" : ("4e4d8f5f-eefd-11d3-9ff5-0004ac969f50", (
("Version" ,"01010300-0000-0000-060e-2b3401040101"),
("ManufacturerID" ,"0f96cb41-2aa8-11d4-a00f-0004ac969f50"),
("ProductID" ,"0f96cb41-2aa8-11d4-a00f-0004ac969f50"),
("PlugInID" ,"0f96cb41-2aa8-11d4-a00f-0004ac969f50"),
("ChunkID" ,"0f96cb41-2aa8-11d4-a00f-0004ac969f50"),
("Name" ,"3271a34f-f3a1-11d3-9ff5-0004ac969f50"),
("ChunkDataUID" ,"01030100-0000-0000-060e-2b3401040101"),
),
),
}
fixed_arrays = {
"AvidBounds" : ("8bc42732-6bab-11d3-80cf-006008143e6f", "01010100-0000-0000-060e-2b3401040101", 48),
"AvidColor" : ("8bc42733-6bab-11d3-80cf-006008143e6f", "01010100-0000-0000-060e-2b3401040101", 68),
"AvidCrop" : ("8bc4272f-6bab-11d3-80cf-006008143e6f", "01010100-0000-0000-060e-2b3401040101", 32),
"AvidGlobalKeyFrame" : ("09997778-960e-11d3-a04e-006094eb75cb", "01010100-0000-0000-060e-2b3401040101", 16),
"AvidPosition" : ("8bc4272e-6bab-11d3-80cf-006008143e6f", "01010100-0000-0000-060e-2b3401040101", 24),
"AvidScale" : ("8bc42730-6bab-11d3-80cf-006008143e6f", "01010100-0000-0000-060e-2b3401040101", 16),
"AvidSpillSupress" : ("8bc42731-6bab-11d3-80cf-006008143e6f", "01010100-0000-0000-060e-2b3401040101", 8),
"AvidString4" : ("0f96cb41-2aa8-11d4-a00f-0004ac969f50", "01010100-0000-0000-060e-2b3401040101", 4),
"AvidWideString32" : ("3271a34f-f3a1-11d3-9ff5-0004ac969f50", "01010200-0000-0000-060e-2b3401040101", 32),
}
var_arrays = {
"AudioSuitePIChunkArray" : ("4e4d8f60-eefd-11d3-9ff5-0004ac969f50", "4e4d8f5f-eefd-11d3-9ff5-0004ac969f50"),
"AudioSuitePIChunkData" : ("5cf19caf-ef83-11d3-9ff5-0004ac969f50", "01010100-0000-0000-060e-2b3401040101"),
"AvidBagOfBits" : ("ccaa73d1-f538-11d3-a081-006094eb75cb", "01010100-0000-0000-060e-2b3401040101"),
"AvidManifestArray" : ("0e040402-0100-0000-060e-2b3401040101", "0e040301-0100-0000-060e-2b3401040101"),
"AvidTKMNTrackedParamArray" : ("b56a2ec2-fc3b-11d3-9ff7-0004ac969f50", "f9a74d0a-7b30-11d3-a044-006094eb75cb"),
"AvidTKMNTrackerDataArray" : ("b56a2ec3-fc3b-11d3-9ff7-0004ac969f50", "f9a74d0a-7b30-11d3-a044-006094eb75cb"),
"EqualizationBandArray" : ("c4c670ca-bd44-11d3-80e9-006008143e6f", "c4c670c9-bd44-11d3-80e9-006008143e6f"),
"kAAFTypeID_SubDescriptorStrongReferenceVector" : ("05060e00-0000-0000-060e-2b3401040101", "05022600-0000-0000-060e-2b3401040101"),
}
renames = {
}
strings = {
}
streams = {
}
opaques = {
}
extenums = {
"CodingEquationsType" : ("02020106-0000-0000-060e-2b3401040101", {
"0e040501-0201-0000-060e-2b3404010101" : "CodingEquations_ITU2020",
},
),
"ColorPrimariesType" : ("02020105-0000-0000-060e-2b3401040101", {
"04010101-0304-0000-060e-2b340401010d" : "ColorPrimaries_ITU2020",
"0e040501-0301-0000-060e-2b3404010101" : "ColorPrimaries_SMPTE_RP431",
"0e040501-0302-0000-060e-2b3404010101" : "ColorPrimaries_Sony_SGamut3",
"0e040501-0303-0000-060e-2b3404010101" : "ColorPrimaries_Sony_SGamut3_Cine",
},
),
"TransferCharacteristicType" : ("02020102-0000-0000-060e-2b3401040101", {
"0e040501-0101-0000-060e-2b3404010101" : "TransferCharacteristic_DPXPrintingDensity",
"0e040501-0102-0000-060e-2b3404010101" : "TransferCharacteristic_DPXLogarithmic",
"0e040501-0103-0000-060e-2b3404010101" : "TransferCharacteristic_SRGB",
"0e040501-0105-0000-060e-2b3404010101" : "TransferCharacteristic_SMPTE_RP431",
"0e040501-0106-0000-060e-2b3404010101" : "TransferCharacteristic_SMPTE_ST2084",
"0e040501-0108-0000-060e-2b3404010101" : "TransferCharacteristic_ARIB_B67",
"0e040501-010a-0000-060e-2b3404010101" : "TransferCharacteristic_ITU709_Extended2",
"0e060401-0101-0605-060e-2b3404010106" : "TransferCharacteristic_Sony_SLog3",
"0e170000-0001-0101-060e-2b340401010c" : "TransferCharacteristic_ARRI_LogC",
},
),
}
chars = {
}
indirects = {
}
sets = {
}
strongrefs = {
"AvidStrongReference" : ("f9a74d0a-7b30-11d3-a044-006094eb75cb", "0d010101-0101-0100-060e-2b3402060101"),
"kAAFTypeID_SubDescriptorStrongReference" : ("05022600-0000-0000-060e-2b3401040101", "0d010101-0101-5900-060e-2b3402060101"),
}
weakrefs = {
}
| 44.85034 | 135 | 0.662824 | 595 | 6,593 | 7.27563 | 0.359664 | 0.099792 | 0.19404 | 0.205128 | 0.468468 | 0.189882 | 0.127512 | 0.127512 | 0.103488 | 0 | 0 | 0.442828 | 0.176248 | 6,593 | 146 | 136 | 45.157534 | 0.354263 | 0 | 0 | 0.121212 | 0 | 0 | 0.694069 | 0.60003 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |

# --- File: app/stocks/services.py (repo: Monxun/DjangoMLDocker, license: MIT) ---

from .src import finviz

# --- File: src/wai/annotations/domain/image/object_detection/__init__.py (repo: waikato-ufdl/wai-annotations-core, license: Apache-2.0) ---

"""
Package specifying the domain of images annotated with objects detected
within those images.
"""
from ._ImageObjectDetectionDomainSpecifier import ImageObjectDetectionDomainSpecifier
from ._ImageObjectDetectionInstance import ImageObjectDetectionInstance

# --- File: forms/migrations/0002_auto_20150309_1227.py (repo: digideskio/gmmp, license: Apache-2.0) ---

# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):

    dependencies = [
        ('forms', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='internetnewssheet',
            name='equality_rights',
            field=models.CharField(help_text="Scan the full news story and code 'Yes' if it quotes or makes reference to any piece of legislation or policy that promotes gender equality or human rights.", max_length=1, verbose_name='Reference to gender equality / human rights legislation/ policy', choices=[(b'Y', 'Yes'), (b'N', 'No')]),
            preserve_default=True,
        ),
        migrations.AlterField(
            model_name='newspapersheet',
            name='equality_rights',
            field=models.CharField(help_text="Scan the full news story and code 'Yes' if it quotes or makes reference to any piece of legislation or policy that promotes gender equality or human rights.", max_length=1, verbose_name='Reference to gender equality / human rights legislation/ policy', choices=[(b'Y', 'Yes'), (b'N', 'No')]),
            preserve_default=True,
        ),
        migrations.AlterField(
            model_name='radiosheet',
            name='equality_rights',
            field=models.CharField(help_text="Scan the full news story and code 'Yes' if it quotes or makes reference to any piece of legislation or policy that promotes gender equality or human rights.", max_length=1, verbose_name='Reference to gender equality / human rights legislation/ policy', choices=[(b'Y', 'Yes'), (b'N', 'No')]),
            preserve_default=True,
        ),
        migrations.AlterField(
            model_name='televisionsheet',
            name='equality_rights',
            field=models.CharField(help_text="Scan the full news story and code 'Yes' if it quotes or makes reference to any piece of legislation or policy that promotes gender equality or human rights.", max_length=1, verbose_name='Reference to gender equality / human rights legislation/ policy', choices=[(b'Y', 'Yes'), (b'N', 'No')]),
            preserve_default=True,
        ),
    ]
10eacf5ba580ed20fb3df95e04d9d3bb45021dde | 3,420 | py | Python | tests/unit/parameter_tests.py | Pankrat/pika | 9f62cbe032e9b4fa0fe1842587ce0702c3926a3d | [
"BSD-3-Clause"
] | null | null | null | tests/unit/parameter_tests.py | Pankrat/pika | 9f62cbe032e9b4fa0fe1842587ce0702c3926a3d | [
"BSD-3-Clause"
] | null | null | null | tests/unit/parameter_tests.py | Pankrat/pika | 9f62cbe032e9b4fa0fe1842587ce0702c3926a3d | [
"BSD-3-Clause"
] | null | null | null | import unittest
import pika
class ParameterTests(unittest.TestCase):
def test_parameters_accepts_plain_string_virtualhost(self):
parameters = pika.ConnectionParameters(virtual_host="prtfqpeo")
self.assertEqual(parameters.virtual_host, "prtfqpeo")
    def test_parameters_accepts_unicode_virtualhost(self):
parameters = pika.ConnectionParameters(virtual_host=u"prtfqpeo")
self.assertEqual(parameters.virtual_host, "prtfqpeo")
def test_parameters_accept_plain_string_locale(self):
parameters = pika.ConnectionParameters(locale="en_US")
self.assertEqual(parameters.locale, "en_US")
def test_parameters_accept_unicode_locale(self):
parameters = pika.ConnectionParameters(locale=u"en_US")
self.assertEqual(parameters.locale, "en_US")
def test_urlparameters_accepts_plain_string(self):
parameters = pika.URLParameters('amqp://prtfqpeo:oihdglkhcp0@myserver.'
'mycompany.com:5672/prtfqpeo?locale='
'en_US')
self.assertEqual(parameters.port, 5672)
self.assertEqual(parameters.virtual_host, "prtfqpeo")
self.assertEqual(parameters.credentials.password, "oihdglkhcp0")
self.assertEqual(parameters.credentials.username, "prtfqpeo")
self.assertEqual(parameters.locale, "en_US")
def test_urlparameters_accepts_unicode_string(self):
parameters = pika.URLParameters(u'amqp://prtfqpeo:oihdglkhcp0@myserver'
u'.mycompany.com:5672/prtfqpeo?locale='
u'en_US')
self.assertEqual(parameters.port, 5672)
self.assertEqual(parameters.virtual_host, "prtfqpeo")
self.assertEqual(parameters.credentials.password, "oihdglkhcp0")
self.assertEqual(parameters.credentials.username, "prtfqpeo")
self.assertEqual(parameters.locale, "en_US")
def test_urlparameters_uses_default_port_if_not_specified(self):
parameters = pika.URLParameters("amqp://myserver.mycompany.com")
self.assertEqual(parameters.port, pika.URLParameters.DEFAULT_PORT)
def test_urlparameters_uses_default_virtual_host_if_not_specified(self):
parameters = pika.URLParameters("amqp://myserver.mycompany.com")
self.assertEqual(parameters.virtual_host,
pika.URLParameters.DEFAULT_VIRTUAL_HOST)
def test_urlparameters_uses_default_virtual_host_if_only_slash_is_specified(
self
):
parameters = pika.URLParameters("amqp://myserver.mycompany.com/")
self.assertEqual(parameters.virtual_host,
pika.URLParameters.DEFAULT_VIRTUAL_HOST)
def test_urlparameters_uses_default_username_and_password_if_not_specified(
self
):
parameters = pika.URLParameters("amqp://myserver.mycompany.com")
self.assertEqual(parameters.credentials.username,
pika.URLParameters.DEFAULT_USERNAME)
self.assertEqual(parameters.credentials.password,
pika.URLParameters.DEFAULT_PASSWORD)
def test_urlparameters_accepts_blank_username_and_password(self):
parameters = pika.URLParameters("amqp://:@myserver.mycompany.com")
self.assertEqual(parameters.credentials.username, "")
self.assertEqual(parameters.credentials.password, "")
| 47.5 | 80 | 0.700585 | 338 | 3,420 | 6.822485 | 0.136095 | 0.1366 | 0.227667 | 0.124892 | 0.85039 | 0.79098 | 0.705984 | 0.699913 | 0.676062 | 0.676062 | 0 | 0.007375 | 0.207018 | 3,420 | 71 | 81 | 48.169014 | 0.84292 | 0 | 0 | 0.465517 | 0 | 0 | 0.122222 | 0.08538 | 0 | 0 | 0 | 0 | 0.362069 | 1 | 0.189655 | false | 0.12069 | 0.034483 | 0 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
802fbcded5d43773fec02c8d33705590a0b40333 | 208 | py | Python | docs/docs/tutorials/generic-script/replacements/input.py | jorka/cartesi-contentful-test | ca1a0585db9acb453d13c68e11d05bbb96ddf173 | [
"MIT"
] | null | null | null | docs/docs/tutorials/generic-script/replacements/input.py | jorka/cartesi-contentful-test | ca1a0585db9acb453d13c68e11d05bbb96ddf173 | [
"MIT"
] | null | null | null | docs/docs/tutorials/generic-script/replacements/input.py | jorka/cartesi-contentful-test | ca1a0585db9acb453d13c68e11d05bbb96ddf173 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import jwt
payload = jwt.decode(b'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzb21lIjoicGF5bG9hZCJ9.Joh1R2dYzkRvDkqv3sygm5YyK8Gi4ShZqbhK2gxcs2U', 'secret', algorithms=['HS256'])
print(payload)
| 41.6 | 162 | 0.841346 | 17 | 208 | 10.294118 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105528 | 0.043269 | 208 | 4 | 163 | 52 | 0.773869 | 0.081731 | 0 | 0 | 0 | 0 | 0.610526 | 0.552632 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
339acc6b88ff2394cc72d6e5e2b21e7aa8f3e6f7 | 11,473 | py | Python | tests/tests/correctness/EPLAnalytics/Extensions/Prediction/p_cor_001/run.py | rpeach-sag/apama-industry-analytics-kit | a3f6039915501d41251b6f7ec41b0cb8111baf7b | [
"Apache-2.0"
] | 3 | 2019-09-02T18:21:22.000Z | 2020-04-17T16:34:57.000Z | tests/tests/correctness/EPLAnalytics/Extensions/Prediction/p_cor_001/run.py | rpeach-sag/apama-industry-analytics-kit | a3f6039915501d41251b6f7ec41b0cb8111baf7b | [
"Apache-2.0"
] | null | null | null | tests/tests/correctness/EPLAnalytics/Extensions/Prediction/p_cor_001/run.py | rpeach-sag/apama-industry-analytics-kit | a3f6039915501d41251b6f7ec41b0cb8111baf7b | [
"Apache-2.0"
] | null | null | null | # $Copyright (c) 2015 Software AG, Darmstadt, Germany and/or Software AG USA Inc., Reston, VA, USA, and/or Terracotta Inc., San Francisco, CA, USA, and/or Software AG (Canada) Inc., Cambridge, Ontario, Canada, and/or, Software AG (UK) Ltd., Derby, United Kingdom, and/or Software A.G. (Israel) Ltd., Or-Yehuda, Israel and/or their licensors.$
# Use, reproduction, transfer, publication or disclosure is prohibited except as specifically provided for in your License Agreement with Software AG
from industry.framework.AnalyticsBaseTest import AnalyticsBaseTest
from pysys.constants import *
class PySysTest(AnalyticsBaseTest):
def execute(self):
# Start the correlator
correlator = self.startTest(logfile='correlator.log',
inputLog='correlator_input.log',
enableJava=True)
self.injectAnalytic(correlator)
self.injectPrediction(correlator)
self.ready(correlator)
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", [], [], {})')
self.waitForSignal('correlator.log', expr='Error spawning Prediction Analytic instance', condition='==1', timeout=30)
# Sending the config events here, rather than from a file as the plugin instances are relatively slow
# to respond and highly parallel when they do (the latter is good), but I want to keep the logging of each
# trial distinct.
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", ["Input"], ["Output"], {"modelName":"Iris_KM"})')
# Different expression as there's a bug which means the error callback isn't called.
# Unfortunately the plugin has a bug from 9.12 related to the new dynamic model behaviour
# which means it no longer picks up on when a file isn't where it should be. I don't want to have
# to use the File adapter just to check for this.
self.waitForSignal('correlator.log', expr='Error spawning Prediction Analytic instance', condition='==2', timeout=30)
# File not found as we haven't provided the correct directory and it's not in the working dir.
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", ["Input"], ["Output"], {"modelName":"Iris_KM", "pmmlFileName":"EnergyDataModel.pmml"})')
self.waitForSignal('correlator.log', expr='Error spawning Prediction Analytic instance', condition='==3', timeout=30)
# This will induce warnings, but not an error as we can't actually tell which fields are mandatory or not.
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", ["Input"], ["Output"], {"modelName":"Iris_KM", "pmmlFileName":"Iris_KM.pmml", "pmmlFileDirectory":"'+self.PMMLMODELS+'"})')
self.waitForSignal('correlator.log', expr='Analytic Prediction started for inputDataNames', condition='==1', timeout=30)
# Input fields can duplicate mapping, output fields can't.
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", ["Input"], ["Output"], {"SEPAL_LE":"Input.DVALUE", "SEPAL_WI":"Input.DVALUE", "Cluster ID":"Output.DVALUE", "Cluster Affinity for predicted":"Output.DVALUE", "modelName":"Iris_KM", "pmmlFileName":"Iris_KM.pmml", "pmmlFileDirectory":"'+self.PMMLMODELS+'"})')
self.waitForSignal('correlator.log', expr='Error spawning Prediction Analytic instance', condition='==4', timeout=30)
# As above using prefixes and different cases.
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", ["Input"], ["Output"], {"input.SEPAL_LE":"Input.dvalue", "input.SEPAL_WI":"Input.Dvalue", "output.Cluster ID":"Output.dValue", "output.Cluster Affinity for predicted":"Output.DValue", "modelName":"Iris_KM", "pmmlFileName":"Iris_KM.pmml", "pmmlFileDirectory":"'+self.PMMLMODELS+'"})')
self.waitForSignal('correlator.log', expr='Error spawning Prediction Analytic instance', condition='==5', timeout=30)
# Mapping channels not in provided sequence of channels
correlator.sendEventStrings('com.industry.analytics.Analytic("Prediction", ["Input"], ["Output"], {"SEPAL_LE":"Inputx.DVALUE", "Cluster ID":"Outputx.DVALUE", "modelName":"Iris_KM", "pmmlFileName":"Iris_KM.pmml", "pmmlFileDirectory":"'+self.PMMLMODELS+'"})')
self.waitForSignal('correlator.log', expr='Error spawning Prediction Analytic instance', condition='==6', timeout=30)
def validate(self):
# Ensure the test output was correct
exprList=[]
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\[\],\[\],{}\)')
exprList.append('Mandatory param modelName missing.')
exprList.append('Error spawning Prediction Analytic instance.')
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\["Input"\],\["Output"\],{"modelName":"Iris_KM"}\)')
exprList.append('Loaded models: \[\]')
exprList.append('Model Iris_KM not found in PMML file \'\'.')
exprList.append('Error spawning Prediction Analytic instance.')
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\["Input"\],\["Output"\],{"modelName":"Iris_KM","pmmlFileName":"EnergyDataModel.pmml"}\)')
exprList.append('Loaded models: \[\]')
exprList.append('Model Iris_KM not found in PMML file \'EnergyDataModel.pmml\'.')
exprList.append('Error spawning Prediction Analytic instance.')
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\["Input"\],\["Output"\],{"modelName":"Iris_KM","pmmlFileDirectory":".*/tests/tools/models","pmmlFileName":"Iris_KM.pmml"}\)')
exprList.append('Loaded models: \["Iris_KM"\]')
exprList.append('Prediction Analytic using model Iris_KM from Iris_KM.pmml')
exprList.append('Input fields : \["SEPAL_LE","SEPAL_WI","PETAL_LE","PETAL_WI"\]')
exprList.append('Output fields: \["predictedValue_CLASS","Cluster ID","Cluster Affinity for predicted","Cluster Affinity for setosa","Cluster Affinity for versic","Cluster Affinity for virgin"\]')
exprList.append('No map found for model input parameter: SEPAL_LE')
exprList.append('No map found for model input parameter: SEPAL_WI')
exprList.append('No map found for model input parameter: PETAL_LE')
exprList.append('No map found for model input parameter: PETAL_WI')
exprList.append('No map found for model output parameter: predictedValue_CLASS')
exprList.append('No map found for model output parameter: Cluster ID')
exprList.append('No map found for model output parameter: Cluster Affinity for predicted')
exprList.append('No map found for model output parameter: Cluster Affinity for setosa')
exprList.append('No map found for model output parameter: Cluster Affinity for versic')
exprList.append('No map found for model output parameter: Cluster Affinity for virgin')
exprList.append('Analytic Prediction started for inputDataNames \["Input"\]')
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\["Input"\],\["Output"\],{"Cluster Affinity for predicted":"Output.DVALUE","Cluster ID":"Output.DVALUE","SEPAL_LE":"Input.DVALUE","SEPAL_WI":"Input.DVALUE","modelName":"Iris_KM","pmmlFileDirectory":".*/tests/tools/models","pmmlFileName":"Iris_KM.pmml"}\)')
exprList.append('Loaded models: \["Iris_KM"\]')
exprList.append('Prediction Analytic using model Iris_KM from Iris_KM.pmml')
exprList.append('Input fields : \["SEPAL_LE","SEPAL_WI","PETAL_LE","PETAL_WI"\]')
exprList.append('Output fields: \["predictedValue_CLASS","Cluster ID","Cluster Affinity for predicted","Cluster Affinity for setosa","Cluster Affinity for versic","Cluster Affinity for virgin"\]')
exprList.append('Duplicate mapping Input.DVALUE for PMML model input parameters.')
exprList.append('No map found for model input parameter: PETAL_LE')
exprList.append('No map found for model input parameter: PETAL_WI')
exprList.append('No map found for model output parameter: predictedValue_CLASS')
exprList.append('Duplicate mapping Output.DVALUE for PMML model output parameters.')
exprList.append('No map found for model output parameter: Cluster Affinity for setosa')
exprList.append('No map found for model output parameter: Cluster Affinity for versic')
exprList.append('No map found for model output parameter: Cluster Affinity for virgin')
exprList.append('Error spawning Prediction Analytic instance.')
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\["Input"\],\["Output"\],{"input.SEPAL_LE":"Input.dvalue","input.SEPAL_WI":"Input.Dvalue","modelName":"Iris_KM","output.Cluster Affinity for predicted":"Output.DValue","output.Cluster ID":"Output.dValue","pmmlFileDirectory":".*/tests/tools/models","pmmlFileName":"Iris_KM.pmml"}\)')
exprList.append('Loaded models: \["Iris_KM"\]')
exprList.append('Prediction Analytic using model Iris_KM from Iris_KM.pmml')
exprList.append('Input fields : \["SEPAL_LE","SEPAL_WI","PETAL_LE","PETAL_WI"\]')
exprList.append('Output fields: \["predictedValue_CLASS","Cluster ID","Cluster Affinity for predicted","Cluster Affinity for setosa","Cluster Affinity for versic","Cluster Affinity for virgin"\]')
exprList.append('Duplicate mapping Input.Dvalue for PMML model input parameters.')
exprList.append('No map found for model input parameter: PETAL_LE')
exprList.append('No map found for model input parameter: PETAL_WI')
exprList.append('No map found for model output parameter: predictedValue_CLASS')
exprList.append('Duplicate mapping Output.DValue for PMML model output parameters.')
exprList.append('No map found for model output parameter: Cluster Affinity for setosa')
exprList.append('No map found for model output parameter: Cluster Affinity for versic')
exprList.append('No map found for model output parameter: Cluster Affinity for virgin')
exprList.append('Error spawning Prediction Analytic instance.')
exprList.append('Validating com.industry.analytics.Analytic\("Prediction",\["Input"\],\["Output"\],{"Cluster ID":"Outputx.DVALUE","SEPAL_LE":"Inputx.DVALUE","modelName":"Iris_KM","pmmlFileDirectory":".*/tests/tools/models","pmmlFileName":"Iris_KM.pmml"}\)')
exprList.append('Loaded models: \["Iris_KM"\]')
exprList.append('Prediction Analytic using model Iris_KM from Iris_KM.pmml')
exprList.append('Input fields : \["SEPAL_LE","SEPAL_WI","PETAL_LE","PETAL_WI"\]')
exprList.append('Output fields: \["predictedValue_CLASS","Cluster ID","Cluster Affinity for predicted","Cluster Affinity for setosa","Cluster Affinity for versic","Cluster Affinity for virgin"\]')
exprList.append('Data name Inputx not found in the list of inputDataNames: \["Input"\]')
exprList.append('No map found for model input parameter: SEPAL_WI')
exprList.append('No map found for model input parameter: PETAL_LE')
exprList.append('No map found for model input parameter: PETAL_WI')
exprList.append('No map found for model output parameter: predictedValue_CLASS')
exprList.append('Data name Outputx not found in the list of outputDataNames: \["Output"\]')
exprList.append('No map found for model output parameter: Cluster Affinity for predicted')
exprList.append('No map found for model output parameter: Cluster Affinity for setosa')
exprList.append('No map found for model output parameter: Cluster Affinity for versic')
exprList.append('No map found for model output parameter: Cluster Affinity for virgin')
exprList.append('Error spawning Prediction Analytic instance.')
self.assertOrderedGrep("correlator.log", exprList=exprList)
self.checkSanity()
| 84.985185 | 361 | 0.743136 | 1,456 | 11,473 | 5.800824 | 0.157967 | 0.117689 | 0.07246 | 0.067488 | 0.789131 | 0.774213 | 0.76332 | 0.744139 | 0.729813 | 0.726735 | 0 | 0.002764 | 0.117057 | 11,473 | 134 | 362 | 85.619403 | 0.830997 | 0.125076 | 0 | 0.525253 | 0 | 0.151515 | 0.711439 | 0.254577 | 0 | 0 | 0 | 0 | 0.010101 | 1 | 0.020202 | false | 0 | 0.020202 | 0 | 0.050505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33aad98255a4f9215b30bf295b257a9335ef422b | 2,170 | py | Python | epytope/Data/pssms/tepitopepan/mat/DRB1_1437_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/tepitopepan/mat/DRB1_1437_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/tepitopepan/mat/DRB1_1437_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | DRB1_1437_9 = {0: {'A': -999.0, 'E': -999.0, 'D': -999.0, 'G': -999.0, 'F': -0.98558, 'I': -0.014418, 'H': -999.0, 'K': -999.0, 'M': -0.014418, 'L': -0.014418, 'N': -999.0, 'Q': -999.0, 'P': -999.0, 'S': -999.0, 'R': -999.0, 'T': -999.0, 'W': -0.98558, 'V': -0.014418, 'Y': -0.98558}, 1: {'A': 0.0, 'E': 0.1, 'D': -1.3, 'G': 0.5, 'F': 0.8, 'I': 1.1, 'H': 0.8, 'K': 1.1, 'M': 1.1, 'L': 1.0, 'N': 0.8, 'Q': 1.2, 'P': -0.5, 'S': -0.3, 'R': 2.2, 'T': 0.0, 'W': -0.1, 'V': 2.1, 'Y': 0.9}, 2: {'A': 0.0, 'E': -1.2, 'D': -1.3, 'G': 0.2, 'F': 0.8, 'I': 1.5, 'H': 0.2, 'K': 0.0, 'M': 1.4, 'L': 1.0, 'N': 0.5, 'Q': 0.0, 'P': 0.3, 'S': 0.2, 'R': 0.7, 'T': 0.0, 'W': 0.0, 'V': 0.5, 'Y': 0.8}, 3: {'A': 0.0, 'E': -1.0941, 'D': -0.82818, 'G': -1.0282, 'F': 0.8565, 'I': 0.32963, 'H': 0.53731, 'K': -0.23043, 'M': 0.86284, 'L': 0.62278, 'N': 0.0048429, 'Q': -0.12126, 'P': -1.218, 'S': -0.40878, 'R': -0.28052, 'T': -0.69699, 'W': 0.1589, 'V': -0.11258, 'Y': 0.38531}, 4: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0}, 5: {'A': 0.0, 'E': -1.3927, 'D': -2.3212, 'G': -0.66338, 'F': -1.3595, 'I': 0.67186, 'H': -0.12275, 'K': 1.2191, 'M': -0.86634, 'L': 0.19125, 'N': -0.5417, 'Q': -0.32558, 'P': 0.47213, 'S': -0.068092, 'R': 0.97711, 'T': 0.778, 'W': -1.3623, 'V': 1.1455, 'Y': -1.3377}, 6: {'A': 0.0, 'E': -0.64983, 'D': -0.97579, 'G': -0.53871, 'F': 0.44152, 'I': 0.52796, 'H': -0.054496, 'K': -0.54508, 'M': 1.0081, 'L': 0.8556, 'N': 0.28428, 'Q': -0.22833, 'P': -0.11427, 'S': -0.063025, 'R': -0.47747, 'T': -0.18283, 'W': -0.18496, 'V': 0.055751, 'Y': -0.012542}, 7: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0}, 8: {'A': 0.0, 'E': -1.4203, 'D': -1.4631, 
'G': -0.80589, 'F': -0.77971, 'I': -0.20513, 'H': 0.08694, 'K': -0.32941, 'M': -0.20647, 'L': -0.85725, 'N': -1.2245, 'Q': 0.47193, 'P': -1.1884, 'S': 0.73353, 'R': -0.86303, 'T': -1.0877, 'W': -0.95502, 'V': -0.59132, 'Y': -0.82511}} | 2,170 | 2,170 | 0.396774 | 525 | 2,170 | 1.63619 | 0.201905 | 0.114086 | 0.027939 | 0.037253 | 0.223516 | 0.142026 | 0.142026 | 0.142026 | 0.132712 | 0.132712 | 0 | 0.376788 | 0.162212 | 2,170 | 1 | 2,170 | 2,170 | 0.09571 | 0 | 0 | 0 | 0 | 0 | 0.078766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33d7c36961cb44eb4af9088a66b01b561e598314 | 88 | py | Python | butter_exercise/utils/helpers.py | tadeoos/butter_exercise | 3b5a9601bc527214dfd115773ce5cbdd5899f742 | [
"MIT"
] | null | null | null | butter_exercise/utils/helpers.py | tadeoos/butter_exercise | 3b5a9601bc527214dfd115773ce5cbdd5899f742 | [
"MIT"
] | null | null | null | butter_exercise/utils/helpers.py | tadeoos/butter_exercise | 3b5a9601bc527214dfd115773ce5cbdd5899f742 | [
"MIT"
] | null | null | null | from django.utils import timezone
def aware_today():
return timezone.now().date()
| 14.666667 | 33 | 0.727273 | 12 | 88 | 5.25 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 88 | 5 | 34 | 17.6 | 0.851351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
1d6b1377e1cd74a7e0fa0f04f62d9bed88e5f738 | 215 | py | Python | pycgp/evaluator.py | d9w/pyCGP | 8f23bda9d653b9def91e108e7fdad61c029178e1 | [
"MIT"
] | 1 | 2019-05-29T07:38:06.000Z | 2019-05-29T07:38:06.000Z | pycgp/evaluator.py | d9w/pyCGP | 8f23bda9d653b9def91e108e7fdad61c029178e1 | [
"MIT"
] | null | null | null | pycgp/evaluator.py | d9w/pyCGP | 8f23bda9d653b9def91e108e7fdad61c029178e1 | [
"MIT"
] | 3 | 2019-09-15T20:09:17.000Z | 2020-04-10T16:37:29.000Z | from .cgp import CGP
class Evaluator:
def evaluate(self, cgp, it):
raise NotImplementedError('evaluation method not implemented')
def clone(self):
raise NotImplementedError('clone method not implemented')
| 21.5 | 65 | 0.767442 | 26 | 215 | 6.346154 | 0.615385 | 0.290909 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148837 | 215 | 9 | 66 | 23.888889 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0.285047 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
1d8f6a6c3cfe199f1b1e5db3817cbf2e41115863 | 271 | py | Python | runbox/utils.py | burenotti/runbox | 73a24764750544a37738605f66bad91f8c4cb31c | [
"MIT"
] | null | null | null | runbox/utils.py | burenotti/runbox | 73a24764750544a37738605f66bad91f8c4cb31c | [
"MIT"
] | null | null | null | runbox/utils.py | burenotti/runbox | 73a24764750544a37738605f66bad91f8c4cb31c | [
"MIT"
] | null | null | null | from __future__ import annotations
class Placeholder:
def __init__(self, arg_num: int = 0):
self.arg_num = arg_num
def __getitem__(self, arg_num: int) -> Placeholder:
return Placeholder(arg_num=arg_num)
_ = Placeholder()
_1 = _[1]
_2 = _[2]
| 16.9375 | 55 | 0.667897 | 36 | 271 | 4.388889 | 0.472222 | 0.227848 | 0.189873 | 0.164557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023923 | 0.228782 | 271 | 15 | 56 | 18.066667 | 0.732057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0.111111 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d568df24b3b589b8a186c175415b73575d85d046 | 144 | py | Python | rhea/cores/misc/assign.py | meetps/rhea | f8a9a08fb5e14c5c4488ef68a2dff4d18222c2c0 | [
"MIT"
] | 1 | 2022-03-16T23:56:09.000Z | 2022-03-16T23:56:09.000Z | rhea/cores/misc/assign.py | meetps/rhea | f8a9a08fb5e14c5c4488ef68a2dff4d18222c2c0 | [
"MIT"
] | null | null | null | rhea/cores/misc/assign.py | meetps/rhea | f8a9a08fb5e14c5c4488ef68a2dff4d18222c2c0 | [
"MIT"
] | null | null | null |
from myhdl import always_comb
def assign(a, b):
    """Continuous assignment: drive signal ``a`` from signal ``b``."""
    @always_comb
    def beh_assign():
        a.next = b
    return beh_assign,
| 11.076923 | 29 | 0.534722 | 20 | 144 | 3.75 | 0.55 | 0.266667 | 0.346667 | 0.506667 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.340278 | 144 | 12 | 30 | 12 | 0.789474 | 0.034722 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
d5707138c2f23aa580dab3fec9612f45a76c6517 | 265 | py | Python | explainerdashboard/dashboard_components/__init__.py | yanhong-zhao-ef/explainerdashboard | b057d6458988227e7bcebb2a91ea79c771ddcf2f | [
"MIT"
] | 1,178 | 2019-12-20T10:56:17.000Z | 2022-03-30T13:05:48.000Z | explainerdashboard/dashboard_components/__init__.py | yanhong-zhao-ef/explainerdashboard | b057d6458988227e7bcebb2a91ea79c771ddcf2f | [
"MIT"
] | 172 | 2020-03-04T08:15:01.000Z | 2022-03-31T20:23:14.000Z | explainerdashboard/dashboard_components/__init__.py | yanhong-zhao-ef/explainerdashboard | b057d6458988227e7bcebb2a91ea79c771ddcf2f | [
"MIT"
] | 150 | 2020-03-04T04:43:52.000Z | 2022-03-29T06:57:00.000Z | from ..dashboard_methods import *
from .overview_components import *
from .classifier_components import *
from .regression_components import *
from .shap_components import *
from .decisiontree_components import *
from .connectors import *
from .composites import *
| 29.444444 | 38 | 0.815094 | 30 | 265 | 7 | 0.4 | 0.333333 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120755 | 265 | 8 | 39 | 33.125 | 0.901288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d57f6afa8591c558d58e3666f120f246e571daeb | 46 | py | Python | zodo/ner/__init__.py | ZooPhy/zodo-services | b065c3967d831fae1a22a2e9c351d49437d1d02c | [
"Apache-2.0"
] | 1 | 2022-02-06T16:01:08.000Z | 2022-02-06T16:01:08.000Z | zodo/ner/__init__.py | ZooPhy/zodo-services | b065c3967d831fae1a22a2e9c351d49437d1d02c | [
"Apache-2.0"
] | 7 | 2020-09-01T19:18:29.000Z | 2022-02-10T01:45:33.000Z | zodo/ner/__init__.py | ZooPhy/zodo-services | b065c3967d831fae1a22a2e9c351d49437d1d02c | [
"Apache-2.0"
] | 1 | 2020-09-18T21:21:56.000Z | 2020-09-18T21:21:56.000Z | from .models import *
from .ner_utils import * | 23 | 24 | 0.76087 | 7 | 46 | 4.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 2 | 24 | 23 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d59f9b77ba24c5c2b771107e4828b4abf0d92288 | 46 | py | Python | tests/mkinit_dummy_module/submod1.py | Erotemic/ahoy | 9ec8c9a5bdbe10a1d01450660280ed4ea3b9390f | [
"Apache-2.0"
] | 36 | 2018-04-22T21:35:14.000Z | 2022-03-24T10:11:32.000Z | tests/mkinit_dummy_module/submod1.py | Erotemic/ahoy | 9ec8c9a5bdbe10a1d01450660280ed4ea3b9390f | [
"Apache-2.0"
] | 19 | 2018-05-26T02:44:53.000Z | 2022-03-04T17:46:04.000Z | tests/mkinit_dummy_module/submod1.py | Erotemic/ahoy | 9ec8c9a5bdbe10a1d01450660280ed4ea3b9390f | [
"Apache-2.0"
] | 4 | 2018-08-31T22:32:45.000Z | 2020-08-14T18:25:51.000Z | def func1():
pass
def func2():
pass
| 6.571429 | 12 | 0.521739 | 6 | 46 | 4 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.347826 | 46 | 6 | 13 | 7.666667 | 0.733333 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d5a015f2bcc6cba92e7511782ed0941e8922f47c | 4,363 | py | Python | proma/clients/tests/test_views.py | erickgnavar/Proma | 159051f4247700166f063075b3819ae426f6d337 | [
"MIT"
] | 3 | 2018-01-22T08:50:38.000Z | 2021-07-16T04:08:28.000Z | proma/clients/tests/test_views.py | erickgnavar/Proma | 159051f4247700166f063075b3819ae426f6d337 | [
"MIT"
] | 13 | 2019-05-27T03:08:29.000Z | 2020-01-03T03:36:04.000Z | proma/clients/tests/test_views.py | erickgnavar/Proma | 159051f4247700166f063075b3819ae426f6d337 | [
"MIT"
] | 1 | 2019-10-03T17:52:29.000Z | 2019-10-03T17:52:29.000Z | from django.test import RequestFactory, TestCase
from django.urls import resolve, reverse
from mixer.backend.django import mixer
from .. import views
class ClientCreateViewTestCase(TestCase):
def setUp(self):
self.view = views.ClientCreateView.as_view()
self.factory = RequestFactory()
self.user = mixer.blend("users.User")
def test_match_expected_view(self):
url = resolve("/clients/create/")
self.assertEqual(url.func.__name__, self.view.__name__)
    def test_load_successful(self):
request = self.factory.get("/")
request.user = self.user
response = self.view(request)
self.assertEqual(response.status_code, 200)
self.assertIn("form", response.context_data)
def test_create_client(self):
data = {"name": "test", "email": "email@email.com", "alias": "test"}
request = self.factory.post("/", data=data)
request.user = self.user
response = self.view(request)
self.assertEqual(response.status_code, 302)
self.assertEqual(reverse("clients:client-list"), response["location"])
def test_create_client_missing_fields(self):
data = {"name": "test"}
request = self.factory.post("/", data=data)
request.user = self.user
response = self.view(request)
self.assertEqual(response.status_code, 200)
self.assertTrue(len(response.context_data["form"].errors) > 0)
class ClientUpdateViewTestCase(TestCase):
def setUp(self):
self.view = views.ClientUpdateView.as_view()
self.factory = RequestFactory()
self.user = mixer.blend("users.User")
self.client = mixer.blend("clients.Client")
def test_match_expected_view(self):
url = resolve("/clients/1/update/")
self.assertEqual(url.func.__name__, self.view.__name__)
    def test_load_successful(self):
request = self.factory.get("/")
request.user = self.user
response = self.view(request, id=self.client.id)
self.assertEqual(response.status_code, 200)
self.assertIn("form", response.context_data)
def test_update_client(self):
data = {"name": "test", "email": "email@email.com", "alias": "test"}
request = self.factory.post("/", data=data)
request.user = self.user
response = self.view(request, id=self.client.id)
self.assertEqual(response.status_code, 302)
redirect_url = reverse("clients:client-detail", kwargs={"id": self.client.id})
self.assertEqual(redirect_url, response["location"])
def test_update_client_missing_fields(self):
data = {"name": "test"}
request = self.factory.post("/", data=data)
request.user = self.user
response = self.view(request, id=self.client.id)
self.assertEqual(response.status_code, 200)
self.assertTrue(len(response.context_data["form"].errors) > 0)
class ClientListViewTestCase(TestCase):
def setUp(self):
self.view = views.ClientListView.as_view()
self.factory = RequestFactory()
self.user = mixer.blend("users.User")
def test_match_expected_view(self):
url = resolve("/clients/")
self.assertEqual(url.func.__name__, self.view.__name__)
def test_load_sucessful(self):
request = self.factory.get("/")
request.user = self.user
mixer.cycle(5).blend("clients.Client")
response = self.view(request)
self.assertEqual(response.status_code, 200)
self.assertIn("clients", response.context_data)
self.assertIn("filter", response.context_data)
self.assertEqual(response.context_data["clients"].count(), 5)
class ClientDetailViewTestCase(TestCase):
def setUp(self):
self.view = views.ClientDetailView.as_view()
self.factory = RequestFactory()
self.user = mixer.blend("users.User")
self.client = mixer.blend("clients.Client")
def test_match_expected_view(self):
url = resolve("/clients/1/")
self.assertEqual(url.func.__name__, self.view.__name__)
def test_load_sucessful(self):
request = self.factory.get("/")
request.user = self.user
response = self.view(request, id=self.client.id)
self.assertEqual(response.status_code, 200)
self.assertIn("client", response.context_data)
| 37.612069 | 86 | 0.657346 | 512 | 4,363 | 5.433594 | 0.144531 | 0.04601 | 0.074407 | 0.054637 | 0.773904 | 0.773904 | 0.761323 | 0.713875 | 0.713875 | 0.713875 | 0 | 0.008696 | 0.20926 | 4,363 | 115 | 87 | 37.93913 | 0.797681 | 0 | 0 | 0.702128 | 0 | 0 | 0.076553 | 0.004813 | 0 | 0 | 0 | 0 | 0.234043 | 1 | 0.170213 | false | 0 | 0.042553 | 0 | 0.255319 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
634d3f7d3c7fccd56bf576ff154322622db860c6 | 41 | py | Python | app_modules/noti_builder/__init__.py | l337quez/Aplicaci-n-ANDROID-para-control-del-suministro-de-energia- | 19986f11bcf77bc380121b4ec6d073d3c470648f | [
"MIT"
] | 14 | 2016-08-02T20:36:47.000Z | 2019-12-17T07:10:26.000Z | app_modules/noti_builder/__init__.py | l337quez/Aplicaci-n-ANDROID-para-control-del-suministro-de-energia- | 19986f11bcf77bc380121b4ec6d073d3c470648f | [
"MIT"
] | 1 | 2019-03-09T09:46:02.000Z | 2019-03-09T09:46:02.000Z | app_modules/noti_builder/__init__.py | l337quez/Aplicaci-n-ANDROID-para-control-del-suministro-de-energia- | 19986f11bcf77bc380121b4ec6d073d3c470648f | [
"MIT"
] | 3 | 2016-08-02T21:27:46.000Z | 2020-05-11T03:56:05.000Z | from .builder import NotificationBuilder
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |