hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c7eeb6ad96b62ee9ed39e894eafdc5891f6f7f42 | 28 | py | Python | mozumder/management/writers/__init__.py | mozumder/django-mozumder | 887ce303249eac2d77de062fd57023dbc4b782dd | [
"MIT"
] | 1 | 2020-06-13T06:12:16.000Z | 2020-06-13T06:12:16.000Z | mozumder/management/writers/__init__.py | mozumder/django-mozumder | 887ce303249eac2d77de062fd57023dbc4b782dd | [
"MIT"
] | 4 | 2020-06-18T03:53:29.000Z | 2021-06-09T17:56:12.000Z | mozumder/management/writers/__init__.py | mozumder/django-mozumder | 887ce303249eac2d77de062fd57023dbc4b782dd | [
"MIT"
] | null | null | null | from .app import write_app
| 9.333333 | 26 | 0.785714 | 5 | 28 | 4.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 2 | 27 | 14 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1bf3c26f75d5263e015cbfa1925edca979266e2f | 106 | py | Python | globals.py | srdjanko/CarND-LaneLines-P1 | ea09c92b81b983d73317b334b19995d641ad314a | [
"MIT"
] | null | null | null | globals.py | srdjanko/CarND-LaneLines-P1 | ea09c92b81b983d73317b334b19995d641ad314a | [
"MIT"
] | null | null | null | globals.py | srdjanko/CarND-LaneLines-P1 | ea09c92b81b983d73317b334b19995d641ad314a | [
"MIT"
] | null | null | null | import numpy as np
glob_previous_lanes = [np.array([0.0, 0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0, 0.0])] | 35.333333 | 86 | 0.603774 | 27 | 106 | 2.296296 | 0.333333 | 0.451613 | 0.580645 | 0.645161 | 0.483871 | 0.483871 | 0.483871 | 0.483871 | 0.483871 | 0.483871 | 0 | 0.173913 | 0.132075 | 106 | 3 | 86 | 35.333333 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
402f4cae91ca4424dffbefe5b89485e22f37a984 | 13,339 | py | Python | LearningAlgorithm/pain/ANN (New).py | Mirotivo/biovid | 4cc4b1d2afd3f37224c74fe982d67aee99b81dc0 | [
"BSD-2-Clause"
] | null | null | null | LearningAlgorithm/pain/ANN (New).py | Mirotivo/biovid | 4cc4b1d2afd3f37224c74fe982d67aee99b81dc0 | [
"BSD-2-Clause"
] | null | null | null | LearningAlgorithm/pain/ANN (New).py | Mirotivo/biovid | 4cc4b1d2afd3f37224c74fe982d67aee99b81dc0 | [
"BSD-2-Clause"
] | null | null | null | import ClassifierModel
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
# Importing the dataset
dataset = pd.read_csv('features/Table_Step2_159Features-85Subs-5Levels-z.csv')
Combined = dataset.iloc[:,7:].values
# Imputing the missing features and replacing them with the mean value, excluding the first column
# axis 0 is columnwise && axis 1 is rowwise.
Combined[:,1:] = ClassifierModel.ImputeDataSet(Combined)
# Separating each level
# 0.1 level_zero_one, 1 level_one, 2 level_two,3 level_three,4 level_four
level_zero_one,level_one,level_two,level_three,level_four=ClassifierModel.SeparateEachLevel(Combined)
# Classify between the baseline and pain threshold
# axis 0 is columnwise && axis 1 is rowwise.
Combined = np.concatenate((level_zero_one, level_one), axis=0)
# Encoding classes to integer levels
X,y = ClassifierModel.features_lables_split(Combined)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
# Feature Scaling
X_train = ClassifierModel.NormalizeFeatures(X_train)
X_test = ClassifierModel.NormalizeFeatures(X_test)
# Part 2 - Now let's make the ANN!
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras import metrics
from sklearn.decomposition import PCA
pca = PCA(n_components=70)  # adjust yourself
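# Note: n_components must match the input_dim of the first Dense layer below (70 here).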
pca.fit(X_train)
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 35, activation = 'relu',kernel_initializer="uniform", input_dim = 70))
# Adding the second hidden layer
#.add(Dense(output_dim = 80, init = 'uniform', activation = 'relu'))
#classifier.add(Dense(units = 40, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 50, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(output_dim = 20, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 10, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 15, activation = 'relu',kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
ClassifierModel.Visualize_CM(cm,[0,1])
# Evaluation
accuracy,precision,recall,sensitivity,specificity = ClassifierModel.EvaluateClassifier(y_test, y_pred)
print('Accuracy (BvsT1): '+str(accuracy))
print('Precision (BvsT1): '+str(precision))
print('Recall (BvsT1): '+str(recall))
print('Sensitivity (BvsT1): '+str(sensitivity))
print('Specificity (BvsT1): '+str(specificity))
# Classify between the baseline and pain level 2 (BvsT2)
# axis 0 is columnwise && axis 1 is rowwise.
Combined = np.concatenate((level_zero_one, level_two), axis=0)
# Encoding classes to integer levels
X,y = ClassifierModel.features_lables_split(Combined)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
# Feature Scaling
X_train = ClassifierModel.NormalizeFeatures(X_train)
X_test = ClassifierModel.NormalizeFeatures(X_test)
# Part 2 - Now let's make the ANN!
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras import metrics
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 120, activation = 'relu',kernel_initializer="uniform", input_dim = 159))
# Adding the second hidden layer
#.add(Dense(output_dim = 80, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 90, activation = 'relu',kernel_initializer="uniform"))
classifier.add(Dense(units = 40, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 50, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(output_dim = 20, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 30, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 15, activation = 'relu',kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
ClassifierModel.Visualize_CM(cm,[0,1])
# Evaluation
accuracy,precision,recall,sensitivity,specificity = ClassifierModel.EvaluateClassifier(y_test, y_pred)
print('Accuracy (BvsT2): '+str(accuracy))
print('Precision (BvsT2): '+str(precision))
print('Recall (BvsT2): '+str(recall))
print('Sensitivity (BvsT2): '+str(sensitivity))
print('Specificity (BvsT2): '+str(specificity))
# Classify between the baseline and pain level 3 (BvsT3)
# axis 0 is columnwise && axis 1 is rowwise.
Combined = np.concatenate((level_zero_one, level_three), axis=0)
# Encoding classes to integer levels
X,y = ClassifierModel.features_lables_split(Combined)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
# Feature Scaling
X_train = ClassifierModel.NormalizeFeatures(X_train)
X_test = ClassifierModel.NormalizeFeatures(X_test)
# Part 2 - Now let's make the ANN!
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras import metrics
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 120, activation = 'relu',kernel_initializer="uniform", input_dim = 159))
# Adding the second hidden layer
#.add(Dense(output_dim = 80, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 90, activation = 'relu',kernel_initializer="uniform"))
classifier.add(Dense(units = 40, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 50, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(output_dim = 20, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 30, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 15, activation = 'relu',kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
ClassifierModel.Visualize_CM(cm,[0,1])
# Evaluation
accuracy,precision,recall,sensitivity,specificity = ClassifierModel.EvaluateClassifier(y_test, y_pred)
print('Accuracy (BvsT3): '+str(accuracy))
print('Precision (BvsT3): '+str(precision))
print('Recall (BvsT3): '+str(recall))
print('Sensitivity (BvsT3): '+str(sensitivity))
print('Specificity (BvsT3): '+str(specificity))
# Classify between the baseline and pain level 4 (BvsT4)
# axis 0 is columnwise && axis 1 is rowwise.
Combined = np.concatenate((level_zero_one, level_four), axis=0)
# Encoding classes to integer levels
X,y = ClassifierModel.features_lables_split(Combined)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
# Feature Scaling
X_train = ClassifierModel.NormalizeFeatures(X_train)
X_test = ClassifierModel.NormalizeFeatures(X_test)
# Part 2 - Now let's make the ANN!
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras import metrics
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 120, activation = 'relu',kernel_initializer="uniform", input_dim = 159))
# Adding the second hidden layer
#.add(Dense(output_dim = 80, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 90, activation = 'relu',kernel_initializer="uniform"))
classifier.add(Dense(units = 40, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 50, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(output_dim = 20, init = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 30, activation = 'relu',kernel_initializer="uniform"))
#classifier.add(Dense(units = 15, activation = 'relu',kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
# Fitting a Random Forest to the Training set (this replaces the ANN trained above)
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
ClassifierModel.Visualize_CM(cm,[0,1])
# Evaluation
accuracy,precision,recall,sensitivity,specificity = ClassifierModel.EvaluateClassifier(y_test, y_pred)
print('Accuracy (BvsT4): '+str(accuracy))
print('Precision (BvsT4): '+str(precision))
print('Recall (BvsT4): '+str(recall))
print('Sensitivity (BvsT4): '+str(sensitivity))
print('Specificity (BvsT4): '+str(specificity))
# Classify between the baseline and pain levels 1 and 4 (BvsT1vsT4)
# axis 0 is columnwise && axis 1 is rowwise.
Combined = np.concatenate((level_zero_one, level_one), axis=0)
Combined = np.concatenate((Combined, level_four), axis=0)
# Encoding classes to integer levels
X,y = ClassifierModel.features_lables_split(Combined)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
# Feature Scaling
X_train = ClassifierModel.NormalizeFeatures(X_train)
X_test = ClassifierModel.NormalizeFeatures(X_test)
# Fitting a Random Forest to the Training set
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
ClassifierModel.Visualize_CM(cm,[0,1,2])
# Evaluation
accuracy,precision,recall,sensitivity,specificity = ClassifierModel.EvaluateClassifier(y_test, y_pred)
print('Accuracy (BvsT1vsT4): '+str(accuracy))
print('Precision (BvsT1vsT4): '+str(precision))
print('Recall (BvsT1vsT4): '+str(recall))
print('Sensitivity (BvsT1vsT4): '+str(sensitivity))
print('Specificity (BvsT1vsT4): '+str(specificity))
# Classify between the baseline and all four pain levels (BvsT1vsT2vsT3vsT4)
# axis 0 is columnwise && axis 1 is rowwise.
Combined = np.concatenate((level_zero_one, level_one), axis=0)
Combined = np.concatenate((Combined, level_two), axis=0)
Combined = np.concatenate((Combined, level_three), axis=0)
Combined = np.concatenate((Combined, level_four), axis=0)
# Encoding classes to integer levels
X,y = ClassifierModel.features_lables_split(Combined)
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 7)
# Feature Scaling
X_train = ClassifierModel.NormalizeFeatures(X_train)
X_test = ClassifierModel.NormalizeFeatures(X_test)
# Fitting a Random Forest to the Training set
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
ClassifierModel.Visualize_CM(cm,[0,1,2,3,4])
# Evaluation
accuracy,precision,recall,sensitivity,specificity = ClassifierModel.EvaluateClassifier(y_test, y_pred)
print('Accuracy (BvsT1vsT2vsT3vsT4): '+str(accuracy))
print('Precision (BvsT1vsT2vsT3vsT4): '+str(precision))
print('Recall (BvsT1vsT2vsT3vsT4): '+str(recall))
print('Sensitivity (BvsT1vsT2vsT3vsT4): '+str(sensitivity))
print('Specificity (BvsT1vsT2vsT3vsT4): '+str(specificity))
| 36.746556 | 102 | 0.770223 | 1,830 | 13,339 | 5.476503 | 0.098907 | 0.027939 | 0.055678 | 0.052784 | 0.843644 | 0.841648 | 0.841648 | 0.833167 | 0.833167 | 0.829276 | 0 | 0.020817 | 0.114102 | 13,339 | 362 | 103 | 36.848066 | 0.827283 | 0.325512 | 0 | 0.651899 | 0 | 0 | 0.118997 | 0.005961 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.170886 | 0 | 0.170886 | 0.189873 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
404f31179de619313ef31f7c965ce5b6e1d50d1a | 4,472 | py | Python | d3rlpy_addons/models/torch/q_functions.py | jamartinh/d3rlpy-addons | 9561432062cfe83150d17c0908b2013d15d71ee3 | [
"MIT"
] | 1 | 2021-08-28T14:32:15.000Z | 2021-08-28T14:32:15.000Z | d3rlpy_addons/models/torch/q_functions.py | jamartinh/d3rlpy-addons | 9561432062cfe83150d17c0908b2013d15d71ee3 | [
"MIT"
] | null | null | null | d3rlpy_addons/models/torch/q_functions.py | jamartinh/d3rlpy-addons | 9561432062cfe83150d17c0908b2013d15d71ee3 | [
"MIT"
] | null | null | null | from typing import cast
import torch
from torch import nn
from d3rlpy.models.torch import Encoder, EncoderWithAction
from d3rlpy.models.torch.q_functions.mean_q_function import (
ContinuousMeanQFunction,
DiscreteMeanQFunction,
)
from d3rlpy.models.torch.q_functions.qr_q_function import (
ContinuousQRQFunction,
DiscreteQRQFunction,
)
class DiscreteDQRQFunction(DiscreteQRQFunction):
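    """
    Quantile-regression Q-function whose initial outputs are pinned to
    ``q_value_offset``: a frozen clone (``_fc0``) of the final layer is
    subtracted from the live layer (``_fc``), so the network starts at the
    offset and only learns the residual.
    """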
_fc0: nn.Linear
_q_value_offset: float
def __init__(
self,
encoder: Encoder,
action_size: int,
n_quantiles: int,
q_value_offset: float = 0.0,
):
super().__init__(encoder, action_size, n_quantiles)
# initial q_values for approximation
self._q_value_offset = q_value_offset
# get a new instance or clone a frozen copy
self._fc0 = type(self._fc)(encoder.get_feature_size(), n_quantiles)
# copy weights and stuff
self._fc0.load_state_dict(self._fc.state_dict())
# freeze model by freezing parameters
for param in self._fc0.parameters():
param.requires_grad = False
# set fc0 in eval mode only
self._fc0.eval()
def _compute_quantiles(
self, h: torch.Tensor, taus: torch.Tensor
) -> torch.Tensor:
h = cast(
torch.Tensor, (self._fc(h) - self._fc0(h)) + self._q_value_offset
)
return h.view(-1, self._action_size, self._n_quantiles)
class ContinuousDQRQFunction(ContinuousQRQFunction):
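    """Continuous-action variant of the frozen-baseline offset trick above."""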
_fc0: nn.Linear
_q_value_offset: float
def __init__(
self,
encoder: EncoderWithAction,
n_quantiles: int,
q_value_offset: float = 0.0,
):
super().__init__(encoder, n_quantiles)
# initial q_values for approximation
self._q_value_offset = q_value_offset
# get a new instance or clone a frozen copy
self._fc0 = type(self._fc)(
encoder.get_feature_size(), self._n_quantiles
)
# copy weights and stuff
self._fc0.load_state_dict(self._fc.state_dict())
# freeze model by freezing parameters
for param in self._fc0.parameters():
param.requires_grad = False
# set fc0 in eval mode only
self._fc0.eval()
def _compute_quantiles(
self, h: torch.Tensor, taus: torch.Tensor
) -> torch.Tensor:
return cast(
torch.Tensor, (self._fc(h) - self._fc0(h)) + self._q_value_offset
)
class DiscreteDMeanQFunction(DiscreteMeanQFunction):
_fc0: nn.Linear
_q_value_offset: float
def __init__(
self, encoder: Encoder, action_size: int, q_value_offset: float = 0.0
):
super().__init__(encoder=encoder, action_size=action_size)
# initial q_values for approximation
self._q_value_offset = q_value_offset
# get a new instance or clone a frozen copy
self._fc0 = type(self._fc)(encoder.get_feature_size(), 1)
# copy weights and stuff
self._fc0.load_state_dict(self._fc.state_dict())
# freeze model by freezing parameters
for param in self._fc0.parameters():
param.requires_grad = False
# set fc0 in eval mode only
self._fc0.eval()
def forward(self, x: torch.Tensor) -> torch.Tensor:
return cast(
torch.Tensor,
self._fc(self._encoder(x))
- self._fc0(self._encoder(x))
+ self._q_value_offset,
)
class ContinuousDMeanQFunction(ContinuousMeanQFunction):
_fc0: nn.Linear
_q_value_offset: float
def __init__(self, encoder: EncoderWithAction, q_value_offset: float = 0.0):
super().__init__(encoder=encoder)
# initial q_values for approximation
self._q_value_offset = q_value_offset
# get a new instance or clone a frozen copy
self._fc0 = type(self._fc)(encoder.get_feature_size(), 1)
# copy weights and stuff
self._fc0.load_state_dict(self._fc.state_dict())
# freeze model by freezing parameters
for param in self._fc0.parameters():
param.requires_grad = False
# set fc0 in eval mode only
self._fc0.eval()
def forward(self, x: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
return cast(
torch.Tensor,
self._fc(self._encoder(x, action))
- self._fc0(self._encoder(x, action))
+ self._q_value_offset,
)
| 28.484076 | 80 | 0.63797 | 555 | 4,472 | 4.81982 | 0.14955 | 0.04486 | 0.08972 | 0.050841 | 0.815327 | 0.785421 | 0.762243 | 0.762243 | 0.762243 | 0.762243 | 0 | 0.012947 | 0.274598 | 4,472 | 156 | 81 | 28.666667 | 0.811652 | 0.144678 | 0 | 0.581633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.061224 | 0.030612 | 0.306122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
40da05e490dbf132220e957af8ed43eb31d58264 | 9,671 | py | Python | modules/routing.py | ZikeYan/RoutedFusion | 866699ff1eba48cdad20bbde9cc498c17848ac50 | [
"BSD-3-Clause"
] | 100 | 2020-07-10T11:54:46.000Z | 2022-03-16T08:22:22.000Z | modules/routing.py | ZikeYan/RoutedFusion | 866699ff1eba48cdad20bbde9cc498c17848ac50 | [
"BSD-3-Clause"
] | 19 | 2020-09-20T14:32:23.000Z | 2022-01-17T00:39:24.000Z | modules/routing.py | ZikeYan/RoutedFusion | 866699ff1eba48cdad20bbde9cc498c17848ac50 | [
"BSD-3-Clause"
] | 15 | 2020-09-21T14:15:23.000Z | 2022-01-12T23:09:41.000Z | import torch
class UNet(torch.nn.Module):
"""
Basic UNet building block, calling itself recursively.
    Note that the final output has no ReLU applied in the batchnorm variant
    (the plain variant's post block ends with a ReLU).
"""
def __init__(self, Cin, F, Cout, depth, batchnorms=True):
super().__init__()
self.F = F
self.depth = depth
if batchnorms:
self.pre = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(Cin, F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(F),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(F),
torch.nn.ReLU(),
)
self.post = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(3 * F, F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(F),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, Cout, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(Cout),
)
else:
self.pre = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(Cin, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
)
self.post = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(3 * F, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, Cout, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU()
)
if depth > 1:
self.process = UNet(F, 2 * F, 2 * F, depth - 1, batchnorms=batchnorms)
else:
if batchnorms:
self.process = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(2 * F),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(2 * F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(2 * F),
torch.nn.ReLU(),
)
else:
self.process = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(2 * F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
)
self.maxpool = torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
def forward(self, data):
features = self.pre(data)
lower_scale = self.maxpool(features)
lower_features = self.process(lower_scale)
upsampled = torch.nn.functional.interpolate(lower_features, scale_factor=2, mode="bilinear",
align_corners=False)
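        # crop the upsampled features back to the input resolution
        # (2x bilinear upsampling can overshoot odd H/W by one pixel)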
H = data.shape[2]
W = data.shape[3]
upsampled = upsampled[:, :, :H, :W]
output = self.post(torch.cat((features, upsampled), dim=1))
return output
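# Minimal usage sketch (illustrative only; the channel counts and input size
# below are assumptions, not values fixed by this module):
#   net = UNet(Cin=1, F=8, Cout=1, depth=3, batchnorms=False)
#   out = net(torch.rand(2, 1, 240, 320))  # -> shape (2, 1, 240, 320)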
class ConfidenceRouting(torch.nn.Module):
"""
Network for confidence routing in RoutedFusion.
"""
def __init__(self, Cin, F, Cout, depth, batchnorms=True):
super().__init__()
self.F = F
self.depth = depth
if batchnorms:
self.pre = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(Cin, F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(F),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(F),
torch.nn.ReLU(),
)
self.post = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(3 * F, F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(F),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, Cout, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(Cout),
)
else:
self.pre = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(Cin, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
)
self.post = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(3 * F, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, Cout, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU()
)
if depth > 1:
self.process = UNet(F, 2 * F, 2 * F, depth - 1, batchnorms=batchnorms)
else:
if batchnorms:
self.process = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(2 * F),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(2 * F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.BatchNorm2d(2 * F),
torch.nn.ReLU(),
)
else:
self.process = torch.nn.Sequential(
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(2 * F, 2 * F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
)
self.uncertainty = torch.nn.Sequential(torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(3 * F, F, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU(),
torch.nn.ReflectionPad2d(1),
torch.nn.Conv2d(F, Cout, kernel_size=3, stride=1, padding=0),
torch.nn.ReLU())
self.maxpool = torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
def forward(self, data):
features = self.pre(data)
lower_scale = self.maxpool(features)
lower_features = self.process(lower_scale)
upsampled = torch.nn.functional.interpolate(lower_features, scale_factor=2, mode="bilinear",
align_corners=False)
H = data.shape[2]
W = data.shape[3]
upsampled = upsampled[:, :, :H, :W]
output = self.post(torch.cat((features, upsampled), dim=1))
uncertainty = self.uncertainty(torch.cat((features, upsampled), dim=1))
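        # concatenate the prediction and its per-pixel confidence along the channel axis (2 * Cout channels total)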
return torch.cat((output, uncertainty), dim=1)
def get_influence_percentages(self):
"""
This function is intended to return a matrix of influences.
        I.e. for each output channel, it returns the percentage by which that channel is controlled by each input channel.
        Roughly speaking, it iteratively propagates these percentages through the layers using fractional absolute weighting.
Output:
percentages -- C_out x C_in matrix giving the weights
"""
if isinstance(self.pre[1], torch.nn.BatchNorm2d):
print("BatchNorm UNets not supported for influence percentages")
return None
pre1 = self.pre[1].weight.abs().sum(dim=3).sum(dim=2)
pre1 = pre1 / pre1.sum(dim=1, keepdim=True)
pre2 = self.pre[4].weight.abs().sum(dim=3).sum(dim=2)
pre2 = pre2 / pre2.sum(dim=1, keepdim=True)
pre2 = torch.matmul(pre2, pre1)
if isinstance(self.process, UNet):
process2 = torch.matmul(self.process.get_influence_percentages(), pre2)
else:
process1 = self.process[1].weight.abs().sum(dim=3).sum(dim=2)
process1 = process1 / process1.sum(dim=1, keepdim=True)
process1 = torch.matmul(process1, pre2)
process2 = self.process[4].weight.abs().sum(dim=3).sum(dim=2)
process2 = process2 / process2.sum(dim=1, keepdim=True)
process2 = torch.matmul(process2, process1)
post1 = self.post[1].weight.abs().sum(dim=3).sum(dim=2)
post1 = post1 / post1.sum(dim=1, keepdim=True)
post1 = torch.matmul(post1, torch.cat((pre2, process2), dim=0))
post2 = self.post[4].weight.abs().sum(dim=3).sum(dim=2)
post2 = post2 / post2.sum(dim=1, keepdim=True)
post2 = torch.matmul(post2, post1)
return post2
| 41.865801 | 135 | 0.518044 | 1,137 | 9,671 | 4.350923 | 0.121372 | 0.15282 | 0.06226 | 0.09622 | 0.769759 | 0.755205 | 0.746311 | 0.738023 | 0.738023 | 0.708914 | 0 | 0.045872 | 0.357564 | 9,671 | 230 | 136 | 42.047826 | 0.750362 | 0.052528 | 0 | 0.760638 | 0 | 0 | 0.007825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026596 | false | 0 | 0.005319 | 0 | 0.069149 | 0.005319 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc0c08243e4a84c07bf42df1f008b3d55ddce1d3 | 11,599 | py | Python | source/tests/py_tests/unsafe_test.py | Panzerschrek/U-00DC-Sprache | eb677a66d178985433a62eb6b8a50ce2cdb14b1a | [
"BSD-3-Clause"
] | 45 | 2016-06-21T22:28:43.000Z | 2022-03-26T12:21:46.000Z | source/tests/py_tests/unsafe_test.py | Panzerschrek/U-00DC-Sprache | eb677a66d178985433a62eb6b8a50ce2cdb14b1a | [
"BSD-3-Clause"
] | 6 | 2020-07-12T18:00:10.000Z | 2021-11-30T11:20:14.000Z | source/tests/py_tests/unsafe_test.py | Panzerschrek/U-00DC-Sprache | eb677a66d178985433a62eb6b8a50ce2cdb14b1a | [
"BSD-3-Clause"
] | 5 | 2019-09-03T17:20:34.000Z | 2022-01-30T15:10:21.000Z | from py_tests_common import *
def UnsafeBlockDeclaration_Test0():
c_program_text= """
fn Foo()
{
unsafe{}
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionDeclaration_Test0():
c_program_text= """
fn Foo() unsafe;
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionDeclaration_Test1():
c_program_text= """
fn Foo() unsafe : i32;
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallOutsideUnsafeBlock_Test0():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
Bar(); // Regular function call
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 5 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test1():
c_program_text= """
fn Bar( i32 &imut x );
fn Bar( i32 & mut x ) unsafe;
fn Foo()
{
Bar(42); // Ok, safe function selected.
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallOutsideUnsafeBlock_Test2():
c_program_text= """
fn Bar( i32 &imut x );
fn Bar( i32 & mut x ) unsafe;
fn Foo()
{
var i32 mut x= 0;
Bar(x); // Error, unsafe function selected.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 7 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test3():
c_program_text= """
struct S
{
fn constructor() unsafe {}
}
fn Foo()
{
var S s; // Error, implicitly calling unsafe constructor
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 8 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test4():
c_program_text= """
struct S
{
fn constructor( i32 x ) unsafe {}
}
fn Foo()
{
var S s( 666 ); // Error, explicitly calling unsafe constructor
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 8 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test5():
c_program_text= """
struct S
{
fn constructor( S& other ) unsafe {} // unsafe copy constructor
}
fn Foo()
{
var S s0;
var S s1= s0; // Error, calling unsafe copy-constructor
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test6():
c_program_text= """
struct S
{
op= ( mut this, S& other ) unsafe {}
}
fn Foo()
{
var S s0, mut s1;
s1= s0; // Error, calling unsafe copy assignment operator
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test7():
c_program_text= """
struct S
{
op++( mut this ) unsafe {}
}
fn Foo()
{
var S mut s;
++s;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test8():
c_program_text= """
struct S
{
op[]( mut this, i32 x ) unsafe {}
}
fn Foo()
{
var S mut s;
s[0];
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test9():
c_program_text= """
struct S
{
op()( this ) unsafe {}
}
fn Foo()
{
var S mut s;
s();
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test10():
c_program_text= """
fn Bar() unsafe;
fn Foo() unsafe
{
			Bar(); // Even inside an unsafe function, we need an unsafe block to call other unsafe functions.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 5 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test11():
c_program_text= """
struct S
{
fn destructor() unsafe {}
}
fn Foo()
{
var S s;
} // Error, calling unsafe destructor here
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeFunctionCallOutsideUnsafeBlock_Test12():
c_program_text= """
struct S
{
fn destructor() unsafe {}
}
struct B // Error, while generating default-destructor. Currently, classes with unsafe destructor can not be members of other classes.
{
S s;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 6 )
def UnsafeFunctionCallInsideUnsafeBlock_Test0():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
unsafe
{
Bar(); // Ok, we are inside unsafe block
}
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallInsideUnsafeBlock_Test1():
c_program_text= """
struct S
{
fn constructor() unsafe {}
}
fn Foo()
{
unsafe
{
var S s; // Ok, implicitly call here unsafe constructor
}
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallInsideUnsafeBlock_Test2():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
unsafe
{
{
Bar(); // Ok, we are inside unsafe block
}
}
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallInsideUnsafeBlock_Test3():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
unsafe
{
if(true)
{
Bar(); // Ok, we are inside unsafe block
}
}
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallInsideUnsafeBlock_Test4():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
unsafe
{
while(true)
{
Bar(); // Ok, we are inside unsafe block
break;
}
}
}
"""
tests_lib.build_program( c_program_text )
def UnsafeFunctionCallInsideUnsafeBlock_Test5():
c_program_text= """
struct S { fn destructor() unsafe {} }
fn Foo()
{
unsafe
{
var S s;
} // Ok, call unsafe destructor at end of unsafe block.
}
"""
tests_lib.build_program( c_program_text )
def CouldNotOverloadFunction_ForUnsafe_Test0():
c_program_text= """
fn Foo();
fn Foo() unsafe;
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "CouldNotOverloadFunction" )
assert( errors_list[0].src_loc.line == 2 or errors_list[0].src_loc.line == 3 )
def CouldNotOverloadFunction_ForUnsafe_Test1():
c_program_text= """
fn Foo() unsafe;
fn Foo() {} // Trying to create body without 'unsafe'
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "CouldNotOverloadFunction" )
assert( errors_list[0].src_loc.line == 2 or errors_list[0].src_loc.line == 3 )
def FunctionDoesNotOverride_ForUnsafe_Test0():
c_program_text= """
class A polymorph
{
fn virtual Foo( this );
}
class B : A
{
fn virtual override Foo( this ) unsafe; // 'unsafe' breaks 'override' here
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "FunctionDoesNotOverride" )
assert( errors_list[0].src_loc.line == 9 )
def ExplicitAccessToSpecialMethodsIsUnsafe_Test0():
c_program_text= """
struct S {} // have generated default-constructor
fn Foo()
{
var S mut s;
s.constructor;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExplicitAccessToThisMethodIsUnsafe" )
assert( errors_list[0].src_loc.line == 6 )
def ExplicitAccessToSpecialMethodsIsUnsafe_Test1():
c_program_text= """
struct S {} // have generated default-constructor
fn Foo()
{
S::constructor;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExplicitAccessToThisMethodIsUnsafe" )
assert( errors_list[0].src_loc.line == 5 )
def ExplicitAccessToSpecialMethodsIsUnsafe_Test2():
c_program_text= """
struct S { fn destructor(){} }
fn Foo()
{
var S mut s;
s.destructor();
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExplicitAccessToThisMethodIsUnsafe" )
assert( errors_list[0].src_loc.line == 6 )
def ExplicitAccessToSpecialMethodsIsUnsafe_Test3():
c_program_text= """
struct S { fn destructor(){} }
fn Foo()
{
::S::destructor;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExplicitAccessToThisMethodIsUnsafe" )
assert( errors_list[0].src_loc.line == 5 )
def ExplicitAccessToSpecialMethodsIsUnsafe_Test4():
c_program_text= """
struct S { fn constructor( i32 x ){} }
fn Foo()
{
var S mut s(0);
unsafe{ s.constructor(42); } // ok, can access constructor in unsafe block
}
"""
tests_lib.build_program( c_program_text )
def ExplicitAccessToSpecialMethodsIsUnsafe_Test5():
c_program_text= """
struct S { fn destructor(){} }
fn Foo()
{
var S mut s;
unsafe{ s.destructor(); } // ok, can access destructor in unsafe block
}
"""
tests_lib.build_program( c_program_text )
def SafeBlockResetsUnsafe_Test():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
unsafe
{
safe
{
Bar();
}
}
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "UnsafeFunctionCallOutsideUnsafeBlock" )
assert( errors_list[0].src_loc.line == 9 )
def UnsafeInsideUnsafe_Test():
c_program_text= """
fn Bar() unsafe;
fn Foo()
{
unsafe
{
{
unsafe
{
Bar();
}
}
}
}
"""
tests_lib.build_program( c_program_text )
| 23.338028 | 137 | 0.693853 | 1,431 | 11,599 | 5.344514 | 0.088749 | 0.107218 | 0.103556 | 0.088912 | 0.860879 | 0.850157 | 0.83329 | 0.809493 | 0.782296 | 0.72228 | 0 | 0.016138 | 0.182602 | 11,599 | 496 | 138 | 23.385081 | 0.790528 | 0 | 0 | 0.559441 | 0 | 0.002331 | 0.380981 | 0.058195 | 0 | 0 | 0 | 0 | 0.13986 | 1 | 0.076923 | false | 0 | 0.002331 | 0 | 0.079254 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc13f3dc88fc46000cab8adaba79cc069ecf9805 | 28,659 | py | Python | data_preprocess/data_pretreat.py | TAN-OpenLab/TCSE-net | fc6ecf704a9c128a9b5b6853cffa8486ee0f54e8 | [
"Apache-2.0"
] | null | null | null | data_preprocess/data_pretreat.py | TAN-OpenLab/TCSE-net | fc6ecf704a9c128a9b5b6853cffa8486ee0f54e8 | [
"Apache-2.0"
] | null | null | null | data_preprocess/data_pretreat.py | TAN-OpenLab/TCSE-net | fc6ecf704a9c128a9b5b6853cffa8486ee0f54e8 | [
"Apache-2.0"
] | null | null | null | # import numpy as np
# import six.moves.cPickle as pickle
# from data_preprocess import config
# import networkx as nx
# from data_preprocess import Laplacian
# import scipy.sparse
# from collections import Counter
# import gc
# LABEL_NUM = 0
# import math
#
# # trans the original ids to 1~n
# class IndexDict:
# def __init__(self, original_ids):
# self.original_to_new = {}
# self.new_to_original = []
# cnt = 0
# for i in original_ids:
# new = self.original_to_new.get(i, cnt) #get(key,default),
# if new == cnt: #节点i为未加入o_t_n,为节点i编号放入o_t_n
# self.original_to_new[i] = cnt
# cnt += 1
# self.new_to_original.append(i) #去重后的original,对应new_to_original
#
# def new(self, original):
# if type(original) is int:
# return self.original_to_new[original]
# else:
# if type(original[0]) is int:
# return [self.original_to_new[i] for i in original]
# else:
# return [[self.original_to_new[i] for i in l] for l in original]
#
# def original(self, new):
# if type(new) is int:
# return self.new_to_original[new]
# else:
# if type(new[0]) is int:
# return [self.new_to_original[i] for i in new]
# else:
# return [[self.new_to_original[i] for i in l] for l in new]
#
# def length(self):
# return len(self.new_to_original)
#
# #trainsform the sequence to list
# #graphs: 字典{级联id:【【种子】,时间】,【【节点1,节点2】,时间】,【节点3,节点4】,时间】】}
# def sequence2list(flename):
# graphs = {}
# with open(flename, 'r') as f:
# for line in f:
# walks = line.strip().split('\t')
# graphs[walks[0]] = [] #walk[0] = cascadeID
# for i in range(1, len(walks)):
# s = walks[i].split(":")[0] #node
# t = walks[i].split(":")[1] #time
# graphs[walks[0]].append([[str(xx) for xx in s.split(",")],int(t)])
# return graphs
#
# #read label and size from cascade file
# # label: 字典{级联id:label} 级联在三小时后的增量
# # sizes:字典{级联id:级联的边数量} 级联在三小时内的转发量
# # 二者相加 为级联的总转发量
# def read_labelANDsize(filename):
# labels = {}
# sizes = {}
# with open(filename, 'r') as f:
# for line in f:
# profile = line.split('\t')
# labels[profile[0]] = profile[-1]
# sizes[profile[0]] = int(profile[3])
# return labels,sizes
#
# #original_ids:每个图的ID
# def get_original_ids(graphs):
# original_ids = set()
# for graph in graphs.keys():
# for walk in graphs[graph]:
# for i in walk[0]:
# original_ids.add(i)
# print ("length of original ids:",len(original_ids))
# return original_ids
#
# def get_nodes(graph):
# nodes = {}
# j = 0
# for walk in graph:
# for i in walk[0]:
# if i not in nodes.keys():
# nodes[i] = j
# j = j+1
# return nodes
#
# def get_max_deepth(graphs):
# node_deepth = {}
# max_deepth = 0
# #max_leaf = 0
# for graph in graphs.values():
# for walk in graph:
# if walk[1] == 0:
# node_deepth[walk[0][0]] = 1
# else:
# for i in range(len(walk[0]) - 1):
# if walk[0][i] in node_deepth.keys():
# node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
# else:
# node_deepth[walk[0][i]] = 0
# node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
# deepth = max(node_deepth.values())
# max_deepth = max(max_deepth,deepth)
# #leafs = max(Counter(node_deepth.values()).values())
# #max_leaf = max(max_leaf,leafs)
# return max_deepth#,max_leaf
#
# def BFS_label(G,root):
# G_bfs_node = {}
# G_BFS = nx.DiGraph()
# node_ith_no = {}
# i = 0
# G_bfs_node[root] = i
# node_ith_no[root] = 0
# this_root= set()
# this_root.add(root)
# has_visit = set()
# #has_visit.add(root)
# G_BFS.add_node(G_bfs_node.get(root))
# isolates = list(nx.isolates(G))
# G.remove_nodes_from(list(nx.isolates(G)))
# while len(has_visit) != nx.number_of_nodes(G):
# next_root = set()
# for n in this_root:
# node = set()
# if n not in has_visit:
# has_visit.add(n)
# for s in G.successors(n):
# node.add(s)
# next_root.add(s)
# node = sorted(node, key=lambda d:G.nodes[d]['time'])
# j = 0
# for s in node:
# if s not in G_bfs_node.keys():
# i += 1
# G_bfs_node[s] = i
# node_ith_no[s] = j
# j += 1
# G_BFS.add_edge(G_bfs_node.get(n), G_bfs_node.get(s))
# this_root = next_root
# j = 0
# for s in isolates:
# i += 1
# G_BFS[s] = i
# G_BFS.add_node(G_bfs_node.get(s))
# node_ith_no[s] = j
# j += 1
# return G_BFS, G_bfs_node,node_ith_no
#
# def G_time_ordered(G):
# nodes = list(G.nodes)
# nodes = sorted(nodes, key=lambda d: G.nodes[d]['time'])
# nodes_times ={}
# G_time = nx.DiGraph()
# for (m,n) in G.edges():
# nodes_times[m] = nodes.index(m)
# nodes_times[n] = nodes.index(n)
# G_time.add_node(nodes_times.get(m),time = G.nodes[m]['time'])
# G_time.add_node(nodes_times.get(n), time=G.nodes[n]['time'])
# G_time.add_edge(nodes_times.get(m),nodes_times.get(n))
#
# return G_time, nodes_times
#
# #处理数据 将其转化为输入格式, 节点的embedding X【级联总数,num—sequence,max-num,max-num】,级联所在局部网络的拉普拉斯矩阵L,log(Y),每个级联内连接的发生时间
# def write_XYSIZE_data(graphs,labels,sizes,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num, n_time_interval,filename):
# #get the x,y,and size data
# id_data = []
# x_data = []
# y_data = []
# sz_data = []
# time_data = []
# trend_data = []
# Laplacian_data = []
# l=0
# maxdeep =0
#
# for key,graph in graphs.items():
# print(l)
# l+=1
# id = key
# label = labels[key].split()
# y = int(label[LABEL_NUM]) #label
# temp = []
# temp_time = np.zeros([NUM_SEQUENCE, n_time_interval],int)#store time
# temp_trend = np.zeros([1,NUM_SEQUENCE],int)
# size_temp = len(graph)
# if size_temp != sizes[key]:
# print (size_temp,sizes[key])
#
# #nodes_items = get_nodes(graph) #级联的所有节点{节点id:节点编号}
# #nodes_list = nodes_items.values()
# nx_G = nx.DiGraph()
# #nx_G.add_nodes_from(nodes_list)
# #将每个级联内部节点间的邻接矩阵(带有自环)
# temp_dict = {}
# node_deepth = {}
# for walk in graph:
# if walk[1] == 0:
# node_deepth[walk[0][0]] = 0
# nx_G.add_node(walk[0][0], time=walk[1])
# root = walk[0][0]
# for i in range(len(walk[0]) - 1):
# nx_G.add_node(walk[0][i + 1], time=walk[1])
# nx_G.add_edge(walk[0][i], walk[0][i + 1])
# if walk[0][i] in node_deepth.keys():
# node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
# else:
# node_deepth[walk[0][i]] = 0
# node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
#
#
# walk_time = math.floor(walk[1] / ((config.observation+1)/ NUM_SEQUENCE)) # 3*60 *60/180
# time_interval = math.floor(walk[1] / ((config.observation+1)/ n_time_interval))#观察时长内划分六个时间间隔
# temp_time[walk_time, time_interval] = 1
# temp_trend[0, walk_time] +=1
# if not temp_dict.get(walk_time):
# temp_dict[walk_time] = []
# temp_dict[walk_time].append(walk[0])
#
# #temp_dict = sorted(temp_dict.items(), key=lambda a: a[0])
# #G_BFS, G_bfs_node, node_ith_no = BFS_label(nx_G, root)
# G_time, node_times_orders = G_time_ordered(nx_G)
# temp_emb = np.zeros(shape=(max_num, 50))
# for i in range(NUM_SEQUENCE):
# if i in temp_dict.keys():
# for value in temp_dict.get(i):
# for p in range(len(value)):
# d1 = node_deepth[value[p]] // 100 % 10
# d2 = node_deepth[value[p]] //10 % 10
# d3 = node_deepth[value[p]] // 1 % 10
# n1 = node_times_orders[value[p]] //10 % 10
# n2 = node_times_orders[value[p]] //1 % 10
# temp_emb[node_times_orders[value[p]]][10-1-d1] = 1
# temp_emb[node_times_orders[value[p]]][10*2 - 1 - d2] = 1
# temp_emb[node_times_orders[value[p]]][10*3 - 1 - d3] = 1
# temp_emb[node_times_orders[value[p]]][10*4 - 1 - n1] = 1
# temp_emb[node_times_orders[value[p]]][10*5 -1 - n2] = 1
# temp_s = scipy.sparse.coo_matrix(temp_emb, dtype=np.float32)
# temp.append(temp_s)
# else:
# temp_s = temp[i-1]
# temp.append(temp_s)
#
# deep = max(list(node_deepth.values()))
# maxdeep = max(deep,maxdeep)
# #caculate laplacian
# L = Laplacian.calculate_scaled_laplacian_dir(G_time, kind_of_laplacin= 'caslaplacian', lambda_max=None)
# M, M = L.shape
# M = int(M)
# L = L.todense()
# if M < max_num:
# col_padding_L = np.zeros(shape=(M, max_num - M))
# L_col_padding = np.column_stack((L, col_padding_L))
# row_padding = np.zeros(shape=(max_num - M, max_num))
# L_col_row_padding = np.row_stack((L_col_padding, row_padding))
# Lapla = scipy.sparse.coo_matrix(L_col_row_padding, dtype=np.float32)
# else:
# Lapla = scipy.sparse.coo_matrix(L, dtype=np.float32)
#
# time_data.append(temp_time)
# trend_data.append((np.log(temp_trend+1.0)/np.log(2.0)).tolist())
# id_data.append(id)
# x_data.append(temp)
# y_data.append(np.log(y+1.0)/np.log(2.0))
# Laplacian_data.append(Lapla)
# sz_data.append(size_temp)
# gc.collect()
# print('maxdeepth',maxdeep)
# pickle.dump((id_data,x_data,Laplacian_data, y_data, sz_data, time_data, trend_data,index.length()), open(filename,'wb'))
#
# def get_maxsize(sizes):
# max_size = 0
# for cascadeID in sizes:
# max_size = max(max_size,sizes[cascadeID])
# gc.collect()
# return max_size
#
# #级联的最大长度(级联中边的数量)
# def get_max_length(graphs):
# len_sequence = 0
# max_num = 0
# for cascadeID in graphs:
# max_num = max(max_num,len(graphs[cascadeID]))
# for sequence in graphs[cascadeID]:
# len_sequence = max(len_sequence,len(sequence[0]))
# gc.collect()
# return len_sequence
#
# def get_max_node_num(graphs):
# max_num = 0
# for key,graph in graphs.items():
# nodes = get_nodes(graph)
# max_num = max(max_num,len(nodes))
# return max_num
#
# if __name__ == "__main__":
#
# ### data set 数据转换,输入为list###
# graphs_train = sequence2list(config.shortestpath_train) #
# graphs_val = sequence2list(config.shortestpath_val)
# graphs_test = sequence2list(config.shortestpath_test)
#
# # train_depth = get_max_deepth(graphs_train)
# # test_depth = get_max_deepth(graphs_test)
# # val_depth = get_max_deepth(graphs_val)
# # max_depth = max(test_depth,train_depth,val_depth)
# # #max_leaf = max(train_leafs,test_leafs,val_leafs)
# # print('max_depth',max_depth)
# #print('max_leaf',max_leaf)
#
# # get Laplacian ##
# cascade_train = config.cascade_train
# cascade_test = config.cascade_test
# cascade_val = config.cascade_val
#
# ### get labels ###
# labels_train, sizes_train = read_labelANDsize(config.cascade_train) # labels是{id:观测时间后的转发量}以及sizes级联长度{id:级联总的转发数量}
# labels_val, sizes_val = read_labelANDsize(config.cascade_val)
# labels_test, sizes_test = read_labelANDsize(config.cascade_test)
# # NUM_SEQUENCE = max(get_maxsize(sizes_train),get_maxsize(sizes_val),get_maxsize(sizes_test)) #三小时内,转发最多的级联的大小 884
# NUM_SEQUENCE =config.num_squ
# print(NUM_SEQUENCE)
#
# # LEN_SEQUENCE_train = get_max_length(graphs_train) #每个数据集内, 级联内某一传播链的最大长度
# # LEN_SEQUENCE_val = get_max_length(graphs_val)
# # LEN_SEQUENCE_test = get_max_length(graphs_test)
# # LEN_SEQUENCE = max(LEN_SEQUENCE_train,LEN_SEQUENCE_val,LEN_SEQUENCE_test) #26
# # print(LEN_SEQUENCE)
# LEN_SEQUENCE =0
# #
# max_num_train = get_max_node_num(graphs_train) #参与级联的最大节点数
# max_num_test = get_max_node_num(graphs_test)
# max_num_val = get_max_node_num(graphs_val)
# max_num = max(max_num_train, max_num_test, max_num_val)
# print(max_num) #100
# #
# # get the total original_ids and transform the index to 0..n-1
# original_ids = get_original_ids(graphs_train)\
# .union(get_original_ids(graphs_val))\
# .union(get_original_ids(graphs_test))
#
# original_ids.add(-1)
# ## index is new index
# index = IndexDict(original_ids) # dict mapping original node ids to new indices
#
# print("create train")
# write_XYSIZE_data(graphs_train, labels_train,sizes_train,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num,config.n_time_interval, config.train_pkl)
# print("create val an test")
# write_XYSIZE_data(graphs_val, labels_val, sizes_val,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num,config.n_time_interval, config.val_pkl)
# write_XYSIZE_data(graphs_test, labels_test, sizes_test,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num,config.n_time_interval, config.test_pkl)
# pickle.dump((len(original_ids),NUM_SEQUENCE,LEN_SEQUENCE), open(config.information,'wb'))
# print("Finish!!!")
import numpy as np
import six.moves.cPickle as pickle
from data_preprocess import config
import networkx as nx
from data_preprocess import Laplacian
import scipy.sparse
from collections import Counter
import gc
LABEL_NUM = 0
import math
# trans the original ids to 1~n
class IndexDict:
def __init__(self, original_ids):
self.original_to_new = {}
self.new_to_original = []
cnt = 0
for i in original_ids:
            new = self.original_to_new.get(i, cnt)  # dict.get(key, default)
            if new == cnt:  # node i is not yet in original_to_new; assign it the next index
                self.original_to_new[i] = cnt
                cnt += 1
                self.new_to_original.append(i)  # deduplicated originals, mirrored in new_to_original
def new(self, original):
if type(original) is int:
return self.original_to_new[original]
else:
if type(original[0]) is int:
return [self.original_to_new[i] for i in original]
else:
return [[self.original_to_new[i] for i in l] for l in original]
def original(self, new):
if type(new) is int:
return self.new_to_original[new]
else:
if type(new[0]) is int:
return [self.new_to_original[i] for i in new]
else:
return [[self.new_to_original[i] for i in l] for l in new]
def length(self):
return len(self.new_to_original)
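# Hedged usage sketch (hypothetical int ids; new()/original() type-check against int):
#   idx = IndexDict([10, 20, 10])
#   idx.length()    # -> 2 (duplicates collapse)
#   idx.new(10)     # -> 0 (first id seen gets index 0)
#   idx.original(0) # -> 10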
#transform the sequence to list
#graphs: dict {cascade id: [[[seed], time], [[node1, node2], time], [[node3, node4], time]]}
def sequence2list(filename):
    graphs = {}
    with open(filename, 'r') as f:
for line in f:
walks = line.strip().split('\t')
graphs[walks[0]] = [] #walk[0] = cascadeID
for i in range(1, len(walks)):
s = walks[i].split(":")[0] #node
t = walks[i].split(":")[1] #time
graphs[walks[0]].append([[str(xx) for xx in s.split(",")],int(t)])
return graphs
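# Hedged sketch of the expected input format (hypothetical tab-separated line):
#   "c1\tA:0\tA,B:5\tA,C:9"  parses to  {'c1': [[['A'], 0], [['A', 'B'], 5], [['A', 'C'], 9]]}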
#read label and size from cascade file
# labels: dict {cascade id: label}, the cascade's increment after the 3-hour observation window
# sizes: dict {cascade id: number of edges in the cascade}, the retweet count within 3 hours
# their sum is the cascade's total retweet count
def read_labelANDsize(filename):
labels = {}
sizes = {}
with open(filename, 'r') as f:
for line in f:
profile = line.split('\t')
labels[profile[0]] = profile[-1]
sizes[profile[0]] = int(profile[3])
return labels,sizes
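# Hedged sketch of the expected cascade line (hypothetical fields): field 0 is the id,
# field 3 the observed size, and the last field (newline included) the label:
#   "c1\tu1\t0\t42\t7\n"  ->  labels['c1'] = '7\n', sizes['c1'] = 42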
#original_ids: the set of node ids appearing in each graph
def get_original_ids(graphs):
original_ids = set()
for graph in graphs.keys():
for walk in graphs[graph]:
for i in walk[0]:
original_ids.add(i)
print ("length of original ids:",len(original_ids))
return original_ids
def get_nodes(graph):
nodes = {}
j = 0
for walk in graph:
for i in walk[0]:
if i not in nodes.keys():
nodes[i] = j
j = j+1
return nodes
def get_max_deepth(graphs):
node_deepth = {}
max_deepth = 0
#max_leaf = 0
for graph in graphs.values():
for walk in graph:
if walk[1] == 0:
node_deepth[walk[0][0]] = 0
else:
for i in range(len(walk[0]) - 1):
if walk[0][i] in node_deepth.keys():
node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
else:
node_deepth[walk[0][i]] = 0
node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
deepth = max(node_deepth.values())
max_deepth = max(max_deepth,deepth)
#leafs = max(Counter(node_deepth.values()).values())
#max_leaf = max(max_leaf,leafs)
return max_deepth#,max_leaf
def BFS_label(G,root):
G_bfs_node = {}
G_BFS = nx.DiGraph()
node_ith_no = {}
i = 0
G_bfs_node[root] = i
node_ith_no[root] = 0
this_root= set()
this_root.add(root)
has_visit = set()
#has_visit.add(root)
G_BFS.add_node(G_bfs_node.get(root))
    isolates = list(nx.isolates(G))
    G.remove_nodes_from(isolates)
while len(has_visit) != nx.number_of_nodes(G):
next_root = set()
for n in this_root:
node = set()
if n not in has_visit:
has_visit.add(n)
for s in G.successors(n):
node.add(s)
next_root.add(s)
node = sorted(node, key=lambda d:G.nodes[d]['time'])
j = 0
for s in node:
if s not in G_bfs_node.keys():
i += 1
G_bfs_node[s] = i
node_ith_no[s] = j
j += 1
G_BFS.add_edge(G_bfs_node.get(n), G_bfs_node.get(s))
this_root = next_root
j = 0
for s in isolates:
i += 1
        G_bfs_node[s] = i  # bug fix: was G_BFS[s] = i, which item-assigns into the graph object
G_BFS.add_node(G_bfs_node.get(s))
node_ith_no[s] = j
j += 1
return G_BFS, G_bfs_node,node_ith_no
def G_time_ordered(G):
nodes = list(G.nodes)
nodes = sorted(nodes, key=lambda d: G.nodes[d]['time'])
nodes_times ={}
G_time = nx.DiGraph()
for (m,n) in G.edges():
nodes_times[m] = nodes.index(m)
nodes_times[n] = nodes.index(n)
G_time.add_node(nodes_times.get(m),time = G.nodes[m]['time'])
G_time.add_node(nodes_times.get(n), time=G.nodes[n]['time'])
G_time.add_edge(nodes_times.get(m),nodes_times.get(n))
return G_time, nodes_times
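# Hedged example: nodes are relabelled by ascending 'time', so index 0 is the earliest
# spreader. For a hypothetical graph A(t=0) -> B(t=5), A -> C(t=2):
#   sorted order A, C, B gives nodes_times = {A: 0, C: 1, B: 2}
#   and G_time has edges 0 -> 2 and 0 -> 1.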
# Convert the data into the model's input format: node embeddings X [num cascades, num_sequence, max_num, max_num], the Laplacian L of the cascade's local network, log(Y), and the occurrence time of every edge within each cascade
def write_XYSIZE_data(graphs,labels,sizes,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num, n_time_interval,filename):
#get the x,y,and size data
id_data = []
x_data = []
y_data = []
sz_data = []
time_data = []
trend_data = []
Laplacian_data = []
l=0
maxdeep =0
for key,graph in graphs.items():
print(l)
l+=1
id = key
label = labels[key].split()
y = int(label[LABEL_NUM]) #label
temp = []
temp_time = np.zeros([NUM_SEQUENCE, n_time_interval],int)#store time
temp_trend = np.zeros([1,NUM_SEQUENCE],int)
size_temp = len(graph)
if size_temp != sizes[key]:
print (size_temp,sizes[key])
        #nodes_items = get_nodes(graph)  # all nodes of the cascade {node id: node index}
#nodes_list = nodes_items.values()
nx_G = nx.DiGraph()
#nx_G.add_nodes_from(nodes_list)
        # adjacency structure among the nodes inside each cascade (with self-loops)
temp_dict = {}
node_deepth = {}
for walk in graph:
if walk[1] == 0:
node_deepth[walk[0][0]] = 0
nx_G.add_node(walk[0][0], time=walk[1])
root = walk[0][0]
for i in range(len(walk[0]) - 1):
if walk[0][i] not in nx_G.nodes():
node_deepth[walk[0][0]] = 0
nx_G.add_node(walk[0][i], time=0)
nx_G.add_edge(walk[0][i], walk[0][i + 1])
nx_G.add_node(walk[0][i + 1], time=walk[1])
nx_G.add_edge(walk[0][i], walk[0][i + 1])
if walk[0][i] in node_deepth.keys():
node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
else:
node_deepth[walk[0][i]] = 0
node_deepth[walk[0][i + 1]] = node_deepth.get(walk[0][i]) + 1
walk_time = math.floor(walk[1] / ((config.observation+1)/ NUM_SEQUENCE)) # 3*60 *60/180
            time_interval = math.floor(walk[1] / ((config.observation+1)/ n_time_interval))  # the observation window is split into six (n_time_interval) intervals
temp_time[walk_time, time_interval] = 1
temp_trend[0, walk_time] +=1
if not temp_dict.get(walk_time):
temp_dict[walk_time] = []
temp_dict[walk_time].append(walk[0])
#temp_dict = sorted(temp_dict.items(), key=lambda a: a[0])
#G_BFS, G_bfs_node, node_ith_no = BFS_label(nx_G, root)
G_time, node_times_orders = G_time_ordered(nx_G)
temp_emb = np.zeros(shape=(max_num, 50))
for i in range(NUM_SEQUENCE):
if i in temp_dict.keys():
for value in temp_dict.get(i):
for p in range(len(value)):
d1 = node_deepth[value[p]] // 100 % 10
d2 = node_deepth[value[p]] //10 % 10
d3 = node_deepth[value[p]] // 1 % 10
n1 = node_times_orders[value[p]] //10 % 10
n2 = node_times_orders[value[p]] //1 % 10
temp_emb[node_times_orders[value[p]]][10-1-d1] = 1
temp_emb[node_times_orders[value[p]]][10*2 - 1 - d2] = 1
temp_emb[node_times_orders[value[p]]][10*3 - 1 - d3] = 1
temp_emb[node_times_orders[value[p]]][10*4 - 1 - n1] = 1
temp_emb[node_times_orders[value[p]]][10*5 -1 - n2] = 1
temp_s = scipy.sparse.coo_matrix(temp_emb, dtype=np.float32)
temp.append(temp_s)
else:
temp_s = temp[i-1]
temp.append(temp_s)
deep = max(list(node_deepth.values()))
maxdeep = max(deep,maxdeep)
        #calculate the Laplacian
L = Laplacian.calculate_scaled_laplacian_dir(G_time, kind_of_laplacin= 'caslaplacian', lambda_max=None)
        M, _ = L.shape  # L is square; either dimension works
M = int(M)
L = L.todense()
if M < max_num:
col_padding_L = np.zeros(shape=(M, max_num - M))
L_col_padding = np.column_stack((L, col_padding_L))
row_padding = np.zeros(shape=(max_num - M, max_num))
            L_col_row_padding = np.vstack((L_col_padding, row_padding))  # np.row_stack was removed in NumPy 2.0
Lapla = scipy.sparse.coo_matrix(L_col_row_padding, dtype=np.float32)
else:
Lapla = scipy.sparse.coo_matrix(L, dtype=np.float32)
time_data.append(temp_time)
trend_data.append((np.log(temp_trend+1.0)/np.log(2.0)).tolist())
id_data.append(id)
x_data.append(temp)
y_data.append(np.log(y+1.0)/np.log(2.0))
Laplacian_data.append(Lapla)
sz_data.append(size_temp)
gc.collect()
print('maxdeepth',maxdeep)
    with open(filename, 'wb') as f:
        pickle.dump((id_data, x_data, Laplacian_data, y_data, sz_data, time_data, trend_data, index.length()), f)
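# Hedged sketch of the 50-dim positional encoding built in write_XYSIZE_data: the node
# depth's hundreds/tens/units digits (d1, d2, d3) and the time-order's tens/units digits
# (n1, n2) each one-hot a 10-slot segment, indexed from the right of that segment
# (assumes depth < 1000 and time order < 100):
#   def encode_position(depth, order):
#       emb = np.zeros(50)
#       for seg, d in enumerate([depth // 100 % 10, depth // 10 % 10, depth % 10,
#                                order // 10 % 10, order % 10]):
#           emb[10 * (seg + 1) - 1 - d] = 1
#       return emb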
def get_maxsize(sizes):
max_size = 0
for cascadeID in sizes:
max_size = max(max_size,sizes[cascadeID])
gc.collect()
return max_size
# maximum cascade length (number of edges in the cascade)
def get_max_length(graphs):
len_sequence = 0
max_num = 0
for cascadeID in graphs:
max_num = max(max_num,len(graphs[cascadeID]))
for sequence in graphs[cascadeID]:
len_sequence = max(len_sequence,len(sequence[0]))
gc.collect()
return len_sequence
def get_max_node_num(graphs):
max_num = 0
for key,graph in graphs.items():
nodes = get_nodes(graph)
max_num = max(max_num,len(nodes))
return max_num
if __name__ == "__main__":
    ### data set conversion; input is a list ###
graphs_train = sequence2list(config.shortestpath_train) #
graphs_val = sequence2list(config.shortestpath_val)
graphs_test = sequence2list(config.shortestpath_test)
train_depth = get_max_deepth(graphs_train)
test_depth = get_max_deepth(graphs_test)
val_depth = get_max_deepth(graphs_val)
max_depth = max(test_depth,train_depth,val_depth)
#max_leaf = max(train_leafs,test_leafs,val_leafs)
print('max_depth',max_depth)
#print('max_leaf',max_leaf)
# get Laplacian ##
cascade_train = config.cascade_train
cascade_test = config.cascade_test
cascade_val = config.cascade_val
### get labels ###
    labels_train, sizes_train = read_labelANDsize(config.cascade_train) # labels is {id: retweets after the observation window}; sizes is the cascade length {id: total retweets in the cascade}
labels_val, sizes_val = read_labelANDsize(config.cascade_val)
labels_test, sizes_test = read_labelANDsize(config.cascade_test)
    NUM_SEQUENCE = max(get_maxsize(sizes_train),get_maxsize(sizes_val),get_maxsize(sizes_test)) # size of the most-retweeted cascade within three hours: 884
#NUM_SEQUENCE =config.num_squ
print('numsequence')
print(get_maxsize(sizes_train))
print(get_maxsize(sizes_val))
print(get_maxsize(sizes_test))
    LEN_SEQUENCE_train = get_max_length(graphs_train) # per dataset, the maximum length of any propagation chain within a cascade
LEN_SEQUENCE_val = get_max_length(graphs_val)
LEN_SEQUENCE_test = get_max_length(graphs_test)
LEN_SEQUENCE = max(LEN_SEQUENCE_train,LEN_SEQUENCE_val,LEN_SEQUENCE_test) #26
print('LEN_SEQUENCE')
print(LEN_SEQUENCE_train)
print(LEN_SEQUENCE_val)
print(LEN_SEQUENCE_test)
#LEN_SEQUENCE =0
#
    max_num_train = get_max_node_num(graphs_train) # maximum number of nodes participating in a cascade
max_num_test = get_max_node_num(graphs_test)
max_num_val = get_max_node_num(graphs_val)
max_num = max(max_num_train, max_num_test, max_num_val)
print(max_num) #100
#
    # get the total original_ids and transform the index to 0..n-1
original_ids = get_original_ids(graphs_train)\
.union(get_original_ids(graphs_val))\
.union(get_original_ids(graphs_test))
original_ids.add(-1)
## index is new index
    index = IndexDict(original_ids) # dict mapping original node ids to new indices
# print("create train")
# write_XYSIZE_data(graphs_train, labels_train,sizes_train,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num,config.n_time_interval, config.train_pkl)
# print("create val an test")
# write_XYSIZE_data(graphs_val, labels_val, sizes_val,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num,config.n_time_interval, config.val_pkl)
# write_XYSIZE_data(graphs_test, labels_test, sizes_test,LEN_SEQUENCE,NUM_SEQUENCE,index,max_num,config.n_time_interval, config.test_pkl)
# pickle.dump((len(original_ids),NUM_SEQUENCE,LEN_SEQUENCE), open(config.information,'wb'))
# print("Finish!!!") | 39.258904 | 145 | 0.565616 | 3,958 | 28,659 | 3.845124 | 0.064174 | 0.01807 | 0.013404 | 0.009659 | 0.988173 | 0.988173 | 0.988173 | 0.988107 | 0.988107 | 0.988107 | 0 | 0.019883 | 0.306815 | 28,659 | 730 | 146 | 39.258904 | 0.746099 | 0.529188 | 0 | 0.177852 | 0 | 0 | 0.009058 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050336 | false | 0 | 0.030201 | 0.003356 | 0.14094 | 0.04698 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90742a5e4260d68b0d437394227e4d4a691cb2be | 12,635 | py | Python | tests/quick/engine/test_project_q_engine.py | WrathfulSpatula/SimulaQron | eaa5548df2f992e187ee70ccd81f192a1ce93e14 | [
"BSD-3-Clause"
] | 25 | 2017-11-20T08:50:12.000Z | 2018-07-31T19:02:19.000Z | tests/quick/engine/test_project_q_engine.py | WrathfulSpatula/SimulaQron | eaa5548df2f992e187ee70ccd81f192a1ce93e14 | [
"BSD-3-Clause"
] | 23 | 2017-11-21T21:47:28.000Z | 2018-10-03T08:28:41.000Z | tests/quick/engine/test_project_q_engine.py | WrathfulSpatula/SimulaQron | eaa5548df2f992e187ee70ccd81f192a1ce93e14 | [
"BSD-3-Clause"
] | 13 | 2017-11-20T08:50:14.000Z | 2018-09-01T21:44:00.000Z | import unittest
import numpy as np
from simulaqron.toolbox import has_module
from simulaqron.settings import SimBackend
if has_module.main(SimBackend.PROJECTQ.value):
from simulaqron.virtual_node.project_q_simulator import projectQEngine
from simulaqron.virtual_node.basics import noQubitError, quantumError
from projectq.types._qubit import Qubit
_has_module = True
else:
_has_module = False
def if_has_module(test):
def new_test(self):
if _has_module:
test(self)
return new_test
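# Hedged alternative sketch: unittest's built-in skip marker would report skipped tests
# instead of silently passing them (same _has_module flag assumed):
#   @unittest.skipUnless(_has_module, "projectq backend not installed")
#   def test_...(self): ...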
class TestProjectQEngine_init(unittest.TestCase):
@if_has_module
def test_init(self):
eng = projectQEngine("Alice", 0)
self.assertEqual(eng.maxQubits, 10)
self.assertEqual(eng.activeQubits, 0)
self.assertEqual(len(eng.qubitReg), 0)
eng = projectQEngine("Alice", 0, 5)
self.assertEqual(eng.maxQubits, 5)
self.assertEqual(eng.activeQubits, 0)
self.assertEqual(len(eng.qubitReg), 0)
class TestProjectQEngine(unittest.TestCase):
@if_has_module
def setUp(self):
self.eng = projectQEngine("Alice", 0)
    @staticmethod
    def abs_inner_product(state, ref):
        # state is an (R, I) pair of real/imaginary amplitude lists; returns |<ref|state>|,
        # which is 1 when the two states agree up to a global phase
        comb_state = np.array(state[0]) + 1j * np.array(state[1])
        inner = np.dot(comb_state, np.array(ref).conj())
        return np.abs(inner)
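    # e.g. abs_inner_product(([1, 0], [0, 0]), [1, 0]) -> 1.0 for the |0> state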
@if_has_module
def test_add_fresh_qubit(self):
num = self.eng.add_fresh_qubit()
self.assertEqual(num, 0)
self.assertEqual(self.eng.activeQubits, 1)
self.assertEqual(len(self.eng.qubitReg), 1)
self.assertTrue(isinstance(self.eng.qubitReg[num], Qubit))
@if_has_module
def test_add_to_many_fresh_qubits(self):
for _ in range(10):
self.eng.add_fresh_qubit()
with self.assertRaises(noQubitError):
self.eng.add_fresh_qubit()
@if_has_module
def test_add_qubit(self):
new_state = [1, 0]
num = self.eng.add_qubit(new_state)
self.assertEqual(num, 0)
self.assertEqual(self.eng.activeQubits, 1)
self.assertEqual(len(self.eng.qubitReg), 1)
state = self.eng.get_register_RI()[1]
self.assertAlmostEqual(self.abs_inner_product(state, new_state), 1)
@if_has_module
def test_add_qubit_H(self):
new_state = [1 / np.sqrt(2), 1 / np.sqrt(2)]
num = self.eng.add_qubit(new_state)
self.assertEqual(num, 0)
self.assertEqual(self.eng.activeQubits, 1)
self.assertEqual(len(self.eng.qubitReg), 1)
state = self.eng.get_register_RI()[1]
self.assertAlmostEqual(self.abs_inner_product(state, new_state), 1)
@if_has_module
def test_add_unphysical_qubit(self):
new_state = [1, 1]
with self.assertRaises(quantumError):
self.eng.add_qubit(new_state)
@if_has_module
def test_remove_qubit(self):
num = self.eng.add_fresh_qubit()
self.eng.remove_qubit(num)
self.assertEqual(self.eng.activeQubits, 0)
self.assertEqual(len(self.eng.qubitReg), 0)
with self.assertRaises(quantumError):
self.eng.remove_qubit(num)
@if_has_module
def test_get_register_RI(self):
self.eng.add_fresh_qubit()
self.eng.add_fresh_qubit()
state = self.eng.get_register_RI()[1]
self.assertAlmostEqual(self.abs_inner_product(state, [1, 0, 0, 0]), 1)
@if_has_module
def test_H(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
state = self.eng.get_register_RI()[1]
self.assertAlmostEqual(self.abs_inner_product(state, [1 / np.sqrt(2), 1 / np.sqrt(2)]), 1)
@if_has_module
def test_K(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_K(num)
state = self.eng.get_register_RI()[1]
self.assertAlmostEqual(self.abs_inner_product(state, [1 / np.sqrt(2), 1j / np.sqrt(2)]), 1)
@if_has_module
def test_X(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_X(num)
state = self.eng.get_register_RI()[1]
self.assertAlmostEqual(self.abs_inner_product(state, [0, 1]), 1)
@if_has_module
def test_Y(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
self.eng.apply_Y(num)
state = self.eng.get_register_RI()[1]
ref = [-1j / np.sqrt(2), 1j / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_Z(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
self.eng.apply_Z(num)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), -1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_Rx(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_rotation(num, (1, 0, 0), np.pi / 2)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), -1j / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_Ry(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_rotation(num, (0, 1, 0), np.pi / 2)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_Rz(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
self.eng.apply_rotation(num, (0, 0, 1), np.pi / 2)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 1j / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_faulty_rot(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
with self.assertRaises(NotImplementedError):
self.eng.apply_rotation(num, (1, 0, 1), np.pi / 2)
@if_has_module
def test_cnot(self):
num1 = self.eng.add_fresh_qubit()
num2 = self.eng.add_fresh_qubit()
self.eng.apply_H(num1)
self.eng.apply_CNOT(num1, num2)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_cz(self):
num1 = self.eng.add_fresh_qubit()
num2 = self.eng.add_fresh_qubit()
self.eng.apply_H(num1)
self.eng.apply_H(num2)
self.eng.apply_CPHASE(num1, num2)
state = self.eng.get_register_RI()[1]
ref = [1 / 2, 1 / 2, 1 / 2, -1 / 2]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_measure0(self):
num = self.eng.add_fresh_qubit()
m = self.eng.measure_qubit(num)
self.assertEqual(m, 0)
self.assertEqual(self.eng.activeQubits, 0)
@if_has_module
def test_measure1(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_X(num)
m = self.eng.measure_qubit(num)
self.assertEqual(m, 1)
self.assertEqual(self.eng.activeQubits, 0)
@if_has_module
def test_measure_inplace(self):
num = self.eng.add_fresh_qubit()
m = self.eng.measure_qubit_inplace(num)
self.assertEqual(m, 0)
self.assertEqual(self.eng.activeQubits, 1)
@if_has_module
def test_absorb_both_empty(self):
eng2 = projectQEngine("Alice", 0)
self.eng.absorb(eng2)
self.assertEqual(self.eng.activeQubits, 0)
self.assertEqual(len(self.eng.qubitReg), 0)
@if_has_module
def test_absorb_other_empty(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
eng2 = projectQEngine("Alice", 0)
self.eng.absorb(eng2)
self.assertEqual(self.eng.activeQubits, 1)
self.assertEqual(len(self.eng.qubitReg), 1)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_absorb_this_empty_H(self):
eng2 = projectQEngine("Alice", 0)
num = eng2.add_fresh_qubit()
eng2.apply_H(num)
self.eng.absorb(eng2)
self.assertEqual(self.eng.activeQubits, 1)
self.assertEqual(len(self.eng.qubitReg), 1)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_absorb_this_empty_CNOT(self):
eng2 = projectQEngine("Alice", 0)
num1 = eng2.add_fresh_qubit()
num2 = eng2.add_fresh_qubit()
eng2.apply_H(num1)
eng2.apply_CNOT(num1, num2)
self.eng.absorb(eng2)
self.assertEqual(self.eng.activeQubits, 2)
self.assertEqual(len(self.eng.qubitReg), 2)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_absorb_this_empty_GHZ(self):
n = 5
eng2 = projectQEngine("Alice", 0)
qubits = [eng2.add_fresh_qubit() for _ in range(n)]
eng2.apply_H(qubits[0])
for i in range(1, n):
eng2.apply_CNOT(qubits[0], qubits[i])
self.eng.absorb(eng2)
self.assertEqual(self.eng.activeQubits, n)
self.assertEqual(len(self.eng.qubitReg), n)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2)] + [0] * (2 ** n - 2) + [1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_absorb_2GHZ(self):
n = 5
eng2 = projectQEngine("Alice", 0)
for eng in [self.eng, eng2]:
qubits = [eng.add_fresh_qubit() for _ in range(n)]
eng.apply_H(qubits[0])
for i in range(1, n):
eng.apply_CNOT(qubits[0], qubits[i])
self.eng.absorb(eng2)
self.assertEqual(self.eng.activeQubits, 2 * n)
self.assertEqual(len(self.eng.qubitReg), 2 * n)
@if_has_module
def test_absorb_to_big_this_empty(self):
eng2 = projectQEngine("Alice", 0, 11)
for _ in range(11):
eng2.add_fresh_qubit()
with self.assertRaises(quantumError):
self.eng.absorb(eng2)
@if_has_module
def test_absorb_to_big(self):
self.eng.add_fresh_qubit()
eng2 = projectQEngine("Alice", 0)
for _ in range(10):
eng2.add_fresh_qubit()
with self.assertRaises(quantumError):
self.eng.absorb(eng2)
@if_has_module
def test_absorb_parts_both_empty(self):
eng2 = projectQEngine("Alice", 0)
self.eng.absorb_parts(*eng2.get_register_RI(), eng2.activeQubits)
self.assertEqual(self.eng.activeQubits, 0)
self.assertEqual(len(self.eng.qubitReg), 0)
@if_has_module
def test_absorb_parts(self):
self.eng.add_fresh_qubit()
eng2 = projectQEngine("Alice", 0)
eng2.add_fresh_qubit()
self.eng.absorb_parts(*eng2.get_register_RI(), eng2.activeQubits)
self.assertEqual(self.eng.activeQubits, 2)
self.assertEqual(len(self.eng.qubitReg), 2)
state = self.eng.get_register_RI()[1]
ref = [1, 0, 0, 0]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_absorb_parts_EPR(self):
eng2 = projectQEngine("Alice", 0)
num1 = eng2.add_fresh_qubit()
num2 = eng2.add_fresh_qubit()
eng2.apply_H(num1)
eng2.apply_CNOT(num1, num2)
self.eng.absorb_parts(*eng2.get_register_RI(), eng2.activeQubits)
self.assertEqual(self.eng.activeQubits, 2)
self.assertEqual(len(self.eng.qubitReg), 2)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
@if_has_module
def test_absorb_parts_other_empty(self):
num = self.eng.add_fresh_qubit()
self.eng.apply_H(num)
eng2 = projectQEngine("Alice", 0)
self.eng.absorb_parts(*eng2.get_register_RI(), eng2.activeQubits)
self.assertEqual(self.eng.activeQubits, 1)
self.assertEqual(len(self.eng.qubitReg), 1)
state = self.eng.get_register_RI()[1]
ref = [1 / np.sqrt(2), 1 / np.sqrt(2)]
self.assertAlmostEqual(self.abs_inner_product(state, ref), 1)
if __name__ == "__main__":
if _has_module:
unittest.main()
| 34.711538 | 99 | 0.630233 | 1,788 | 12,635 | 4.231544 | 0.068233 | 0.112873 | 0.056701 | 0.064763 | 0.861882 | 0.823949 | 0.784034 | 0.737113 | 0.732355 | 0.70341 | 0 | 0.030604 | 0.242264 | 12,635 | 363 | 100 | 34.807163 | 0.759662 | 0 | 0 | 0.650641 | 0 | 0 | 0.006569 | 0 | 0 | 0 | 0 | 0 | 0.224359 | 1 | 0.121795 | false | 0 | 0.022436 | 0 | 0.157051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
907eef92babb79ecb7a048beb308da9513218409 | 99 | py | Python | heron/bots/helpers/utils.py | thetomcraig/HERON | 11dc5e3c4bcffde200866c108574a950cef7f944 | [
"MIT"
] | 3 | 2018-02-24T22:17:40.000Z | 2021-05-18T21:29:17.000Z | heron/bots/helpers/utils.py | thetomcraig/HERON | 11dc5e3c4bcffde200866c108574a950cef7f944 | [
"MIT"
] | 16 | 2020-06-05T17:29:38.000Z | 2021-09-19T19:54:54.000Z | heron/bots/helpers/utils.py | thetomcraig/HERON | 11dc5e3c4bcffde200866c108574a950cef7f944 | [
"MIT"
] | null | null | null | import random
import logging
def clear_set(set_to_clear):
    for x in set_to_clear:  # avoid a list comprehension used only for side effects
        x.delete()
| 14.142857 | 38 | 0.747475 | 18 | 99 | 3.833333 | 0.611111 | 0.144928 | 0.289855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171717 | 99 | 6 | 39 | 16.5 | 0.841463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90a4cdeb9a0d87cb839c802b110d0222d779c222 | 1,860 | py | Python | Microsoft/unit_aera.py | jsourabh1/6Days6Company | 65899c7c9d4734bd2151a9b54de633b1709ce583 | [
"MIT"
] | 1 | 2022-01-11T02:04:57.000Z | 2022-01-11T02:04:57.000Z | Microsoft/unit_aera.py | jsourabh1/6Days6Company | 65899c7c9d4734bd2151a9b54de633b1709ce583 | [
"MIT"
] | null | null | null | Microsoft/unit_aera.py | jsourabh1/6Days6Company | 65899c7c9d4734bd2151a9b54de633b1709ce583 | [
"MIT"
] | null | null | null | class Solution:
#Function to find unit area of the largest region of 1s.
def findMaxArea(self, grid):
#Code here
def dfs(i,j,grid):
if i >=0 and j >=0 and i < len(grid) and j <len(grid[i]) and grid[i][j] == 1:
# return 0
grid[i][j] = 0
up = dfs(i,j+1,grid)
down = dfs(i,j-1,grid)
left = dfs(i-1,j,grid)
right = dfs(i+1,j,grid)
first=dfs(i-1,j-1,grid)
second=dfs(i-1,j+1,grid)
third=dfs(i+1,j+1,grid)
fourth=dfs(i+1,j-1,grid)
return up + down + left + right + 1+first+second+third+fourth
return 0
# [[0, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, -1, 1, 1, 0, 0, 0],
# [0, -1, -1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, -1, 0, 0, -1, -1, 0, 0, 1, 0, 1, 0, 0],
# [0, -1, 0, 0, -1, -1, 0, 0, 1, 1, 1, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, -1, 1, 1, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0, 0]]
count = 0
for i in range(len(grid)):
for j in range(len(grid[0])):
if grid[i][j] == 1:
count = max(dfs(i,j,grid), count)
return count
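# Hedged note: dfs explores all eight neighbours (including diagonals), so regions are
# 8-connected; visited cells are zeroed in place, mutating the input grid.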
T=1
for i in range(T):
grid=[[0,0,1,0,0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,1,1,1,0,0,0],[0,1,1,0,1,0,0,0,0,0,0,0,0],[0,1,0,0,1,1,0,0,1,0,1,0,0],[0,1,0,0,1,1,0,0,1,1,1,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0],[0,0,0,0,0,0,0,1,1,1,0,0,0],[0,0,0,0,0,0,0,1,1,0,0,0,0]]
obj = Solution()
ans = obj.findMaxArea(grid)
print(ans)
# } Driver Code Ends | 33.818182 | 234 | 0.372043 | 370 | 1,860 | 1.87027 | 0.124324 | 0.367052 | 0.429191 | 0.462428 | 0.42341 | 0.365607 | 0.302023 | 0.302023 | 0.300578 | 0.300578 | 0 | 0.211293 | 0.409677 | 1,860 | 55 | 235 | 33.818182 | 0.418944 | 0.293011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0 | 0 | 0.222222 | 0.037037 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90ac0e0abccb1e62d904481e9a2d6d2945871409 | 99 | py | Python | tests/app/models.py | Stranger6667/djoffers | 3aff8953f56dea88326ffc294c9551b5ef7458ab | [
"MIT"
] | 1 | 2018-05-19T15:58:29.000Z | 2018-05-19T15:58:29.000Z | tests/app/models.py | Stranger6667/djoffers | 3aff8953f56dea88326ffc294c9551b5ef7458ab | [
"MIT"
] | 5 | 2016-09-14T09:00:30.000Z | 2018-05-12T09:54:35.000Z | tests/app/models.py | Stranger6667/djoffers | 3aff8953f56dea88326ffc294c9551b5ef7458ab | [
"MIT"
] | 1 | 2018-02-21T12:54:18.000Z | 2018-02-21T12:54:18.000Z | # coding: utf-8
from djoffers.models import HasOffersModel
class Offer(HasOffersModel):
pass
| 14.142857 | 42 | 0.767677 | 12 | 99 | 6.333333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.161616 | 99 | 6 | 43 | 16.5 | 0.903614 | 0.131313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
90b9be0376bda238c41829e6eb19e8065f66b34c | 7,046 | py | Python | tests/unit-tests/test_rst_footnotes.py | bob-schumaker/confluencebuilder | 4e767395e4abbedb1955c7b1b1449244886ecb24 | [
"BSD-2-Clause"
] | 158 | 2019-03-18T13:42:40.000Z | 2022-03-25T09:46:59.000Z | tests/unit-tests/test_rst_footnotes.py | bob-schumaker/confluencebuilder | 4e767395e4abbedb1955c7b1b1449244886ecb24 | [
"BSD-2-Clause"
] | 192 | 2019-03-15T14:12:25.000Z | 2022-03-27T18:35:48.000Z | tests/unit-tests/test_rst_footnotes.py | bob-schumaker/confluencebuilder | 4e767395e4abbedb1955c7b1b1449244886ecb24 | [
"BSD-2-Clause"
] | 54 | 2019-03-22T14:14:31.000Z | 2022-03-08T06:54:28.000Z | # -*- coding: utf-8 -*-
"""
:copyright: Copyright 2020 Sphinx Confluence Builder Contributors (AUTHORS)
:license: BSD-2-Clause (LICENSE)
"""
from tests.lib import build_sphinx
from tests.lib import parse
from tests.lib import prepare_conf
import os
import unittest
class TestConfluenceRstFootnotes(unittest.TestCase):
@classmethod
def setUpClass(self):
self.config = prepare_conf()
test_dir = os.path.dirname(os.path.realpath(__file__))
self.dataset = os.path.join(test_dir, 'datasets', 'common')
self.filenames = [
'footnotes',
]
def test_storage_rst_footnotes(self):
out_dir = build_sphinx(self.dataset, config=self.config,
filenames=self.filenames)
with parse('footnotes', out_dir) as data:
# ##########################################################
# footnotes
# ##########################################################
footnote_link_containers = data.find_all('sup')
self.assertEqual(len(footnote_link_containers), 3)
# footnote a
container = footnote_link_containers.pop(0)
ac_link = container.find('ac:link')
self.assertIsNotNone(ac_link)
self.assertTrue(ac_link.has_attr('ac:anchor'))
self.assertEqual(ac_link['ac:anchor'], 'id5')
link_body = ac_link.find('ac:plain-text-link-body')
self.assertIsNotNone(link_body)
self.assertEqual(link_body.text, '[1]')
# leader anchor back to this footnote a
anchor_tag = container.find_previous_sibling()
self.assertIsNotNone(anchor_tag)
self.assertEqual(anchor_tag.name, 'ac:structured-macro')
self.assertTrue(anchor_tag.has_attr('ac:name'))
self.assertEqual(anchor_tag['ac:name'], 'anchor')
anchor_param = anchor_tag.find('ac:parameter', recursive=False)
self.assertIsNotNone(anchor_param)
self.assertEqual(anchor_param.text, 'id1')
# footnote b
container = footnote_link_containers.pop(0)
ac_link = container.find('ac:link')
self.assertIsNotNone(ac_link)
self.assertTrue(ac_link.has_attr('ac:anchor'))
self.assertEqual(ac_link['ac:anchor'], 'note')
link_body = ac_link.find('ac:plain-text-link-body')
self.assertIsNotNone(link_body)
self.assertEqual(link_body.text, '[3]') # 3 since 2 was pre-reserved
# leader anchor back to this footnote b
anchor_tag = container.find_previous_sibling()
self.assertIsNotNone(anchor_tag)
self.assertEqual(anchor_tag.name, 'ac:structured-macro')
self.assertTrue(anchor_tag.has_attr('ac:name'))
self.assertEqual(anchor_tag['ac:name'], 'anchor')
anchor_param = anchor_tag.find('ac:parameter', recursive=False)
self.assertIsNotNone(anchor_param)
self.assertEqual(anchor_param.text, 'id2')
# footnote c
container = footnote_link_containers.pop(0)
ac_link = container.find('ac:link')
self.assertIsNotNone(ac_link)
self.assertTrue(ac_link.has_attr('ac:anchor'))
self.assertEqual(ac_link['ac:anchor'], 'id4')
link_body = ac_link.find('ac:plain-text-link-body')
self.assertIsNotNone(link_body)
self.assertEqual(link_body.text, '[2]')
            # leader anchor back to this footnote c
anchor_tag = container.find_previous_sibling()
self.assertIsNotNone(anchor_tag)
self.assertEqual(anchor_tag.name, 'ac:structured-macro')
self.assertTrue(anchor_tag.has_attr('ac:name'))
self.assertEqual(anchor_tag['ac:name'], 'anchor')
anchor_param = anchor_tag.find('ac:parameter', recursive=False)
self.assertIsNotNone(anchor_param)
self.assertEqual(anchor_param.text, 'id3')
# ##########################################################
# footnote table
# ##########################################################
footnote_table = data.find('table')
self.assertIsNotNone(footnote_table)
footnote_rows = footnote_table.find_all('tr')
self.assertEqual(len(footnote_rows), 3)
# footnote a
tds = footnote_rows[0].find_all('td', recursive=False)
self.assertEqual(len(tds), 2)
anchor_tag = tds[0].find('ac:structured-macro',
{'ac:name': 'anchor'})
self.assertIsNotNone(anchor_tag)
anchor_param = anchor_tag.find('ac:parameter')
self.assertIsNotNone(anchor_param)
self.assertEqual(anchor_param.text, 'id4')
ac_link = tds[0].find('ac:link')
self.assertIsNotNone(ac_link)
self.assertTrue(ac_link.has_attr('ac:anchor'))
self.assertEqual(ac_link['ac:anchor'], 'id3')
link_body = ac_link.find('ac:plain-text-link-body')
self.assertIsNotNone(link_body)
self.assertEqual(link_body.text, '2')
self.assertEqual(tds[1].text.strip(), 'footnote 2')
# footnote b
tds = footnote_rows[1].find_all('td', recursive=False)
self.assertEqual(len(tds), 2)
anchor_tag = tds[0].find('ac:structured-macro',
{'ac:name': 'anchor'})
self.assertIsNotNone(anchor_tag)
anchor_param = anchor_tag.find('ac:parameter')
self.assertIsNotNone(anchor_param)
self.assertEqual(anchor_param.text, 'id5')
ac_link = tds[0].find('ac:link')
self.assertIsNotNone(ac_link)
self.assertTrue(ac_link.has_attr('ac:anchor'))
self.assertEqual(ac_link['ac:anchor'], 'id1')
link_body = ac_link.find('ac:plain-text-link-body')
self.assertIsNotNone(link_body)
self.assertEqual(link_body.text, '1')
self.assertEqual(tds[1].text.strip(), 'footnote num')
# footnote c
tds = footnote_rows[2].find_all('td', recursive=False)
self.assertEqual(len(tds), 2)
anchor_tag = tds[0].find('ac:structured-macro',
{'ac:name': 'anchor'})
self.assertIsNotNone(anchor_tag)
anchor_param = anchor_tag.find('ac:parameter')
self.assertIsNotNone(anchor_param)
self.assertEqual(anchor_param.text, 'note')
ac_link = tds[0].find('ac:link')
self.assertIsNotNone(ac_link)
self.assertTrue(ac_link.has_attr('ac:anchor'))
self.assertEqual(ac_link['ac:anchor'], 'id2')
link_body = ac_link.find('ac:plain-text-link-body')
self.assertIsNotNone(link_body)
self.assertEqual(link_body.text, '3')
self.assertEqual(tds[1].text.strip(), 'footnote note')
| 40.034091 | 80 | 0.582458 | 782 | 7,046 | 5.061381 | 0.13555 | 0.054573 | 0.030318 | 0.021223 | 0.767812 | 0.767812 | 0.745073 | 0.717787 | 0.717787 | 0.717787 | 0 | 0.00891 | 0.267244 | 7,046 | 175 | 81 | 40.262857 | 0.757699 | 0.052086 | 0 | 0.6 | 0 | 0 | 0.109917 | 0.021485 | 0 | 0 | 0 | 0 | 0.55 | 1 | 0.016667 | false | 0 | 0.041667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90e51b7ae376b3f11ca35636ff6cdbbd5e076d2b | 40 | py | Python | tools/targets/models_list.py | JeremyRubin/tornado-trails | 68b58e0b8dd455df016cf4b9f62d0f50b692c69c | [
"MIT"
] | 1 | 2017-01-28T14:15:55.000Z | 2017-01-28T14:15:55.000Z | tools/targets/models_list.py | JeremyRubin/tornado-trails | 68b58e0b8dd455df016cf4b9f62d0f50b692c69c | [
"MIT"
] | null | null | null | tools/targets/models_list.py | JeremyRubin/tornado-trails | 68b58e0b8dd455df016cf4b9f62d0f50b692c69c | [
"MIT"
] | null | null | null | from models.BaseHandler import BaseModel | 40 | 40 | 0.9 | 5 | 40 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
291652a616932e7684d3cae54ad5b82cb4350b00 | 77 | py | Python | freight/api/exceptions.py | rob-opsi/freight | 9b8e58ce80f6a2ad21769806bdb7f32e68713ce2 | [
"Apache-2.0"
] | null | null | null | freight/api/exceptions.py | rob-opsi/freight | 9b8e58ce80f6a2ad21769806bdb7f32e68713ce2 | [
"Apache-2.0"
] | null | null | null | freight/api/exceptions.py | rob-opsi/freight | 9b8e58ce80f6a2ad21769806bdb7f32e68713ce2 | [
"Apache-2.0"
] | 1 | 2020-07-03T00:52:08.000Z | 2020-07-03T00:52:08.000Z | from __future__ import absolute_import
class ApiError(Exception):
pass
| 12.833333 | 38 | 0.792208 | 9 | 77 | 6.222222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168831 | 77 | 5 | 39 | 15.4 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
2920efb9463a60d493a1eb855b7178e5b1125b30 | 40 | py | Python | helloGoogle.py | simonczhang/google-cloud | d3dfc2e959a8aa97feb21fc6f4902fc8b50e4857 | [
"MIT"
] | null | null | null | helloGoogle.py | simonczhang/google-cloud | d3dfc2e959a8aa97feb21fc6f4902fc8b50e4857 | [
"MIT"
] | null | null | null | helloGoogle.py | simonczhang/google-cloud | d3dfc2e959a8aa97feb21fc6f4902fc8b50e4857 | [
"MIT"
] | null | null | null | print('Hello World Google Cloud!!!!!')
| 20 | 39 | 0.65 | 5 | 40 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 1 | 40 | 40 | 0.742857 | 0 | 0 | 0 | 0 | 0 | 0.74359 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
292ac6eab5d45da8aa42c146f26a367e0d811af2 | 85 | py | Python | test/test_pytest_direnv.py | brent-moffit/pytest-direnv | cdd44a19386c25d5b464d517f39f692b6fee46ae | [
"MIT"
] | null | null | null | test/test_pytest_direnv.py | brent-moffit/pytest-direnv | cdd44a19386c25d5b464d517f39f692b6fee46ae | [
"MIT"
] | null | null | null | test/test_pytest_direnv.py | brent-moffit/pytest-direnv | cdd44a19386c25d5b464d517f39f692b6fee46ae | [
"MIT"
] | null | null | null | import os
def test_direnv_load():
assert os.getenv("TEST_VAR") == "test value"
| 14.166667 | 48 | 0.682353 | 13 | 85 | 4.230769 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 85 | 5 | 49 | 17 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.211765 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29300de506112f0c4d5bb4761fc0f023f2ebf6bc | 8,260 | py | Python | src/garage/tf/models/cnn.py | bainro/garage | c5afbb19524792d9bbad9b9741f45e1d48ddca3d | [
"MIT"
] | null | null | null | src/garage/tf/models/cnn.py | bainro/garage | c5afbb19524792d9bbad9b9741f45e1d48ddca3d | [
"MIT"
] | null | null | null | src/garage/tf/models/cnn.py | bainro/garage | c5afbb19524792d9bbad9b9741f45e1d48ddca3d | [
"MIT"
] | null | null | null | """CNN in TensorFlow."""
import tensorflow as tf
def cnn(input_var,
filter_dims,
num_filters,
strides,
name,
padding,
hidden_nonlinearity=tf.nn.relu,
hidden_w_init=tf.initializers.glorot_uniform(),
hidden_b_init=tf.zeros_initializer()):
"""Convolutional neural network (CNN).
Note:
Based on 'NHWC' data format: [batch, height, width, channel].
Args:
input_var (tf.Tensor): Input tf.Tensor to the CNN.
filter_dims (tuple[int]): Dimension of the filters. For example,
(3, 5) means there are two convolutional layers. The filter for
first layer is of dimension (3 x 3) and the second one is of
dimension (5 x 5).
num_filters (tuple[int]): Number of filters. For example, (3, 32) means
there are two convolutional layers. The filter for the first layer
has 3 channels and the second one with 32 channels.
strides (tuple[int]): The stride of the sliding window. For example,
(1, 2) means there are two convolutional layers. The stride of the
filter for first layer is 1 and that of the second layer is 2.
name (str): Network name, also the variable scope.
padding (str): The type of padding algorithm to use,
either 'SAME' or 'VALID'.
hidden_nonlinearity (callable): Activation function for intermediate
dense layer(s). It should return a tf.Tensor. Set it to
None to maintain a linear activation.
hidden_w_init (callable): Initializer function for the weight
of intermediate dense layer(s). The function should return a
tf.Tensor.
hidden_b_init (callable): Initializer function for the bias
of intermediate dense layer(s). The function should return a
tf.Tensor.
Return:
tf.Tensor: The output tf.Tensor of the CNN.
"""
with tf.compat.v1.variable_scope(name):
h = input_var
for index, (filter_dim, num_filter,
stride) in enumerate(zip(filter_dims, num_filters,
strides)):
_stride = [1, stride, stride, 1]
h = _conv(h, 'h{}'.format(index), filter_dim, num_filter, _stride,
hidden_w_init, hidden_b_init, padding)
if hidden_nonlinearity is not None:
h = hidden_nonlinearity(h)
# flatten
dim = tf.reduce_prod(h.get_shape()[1:].as_list())
return tf.reshape(h, [-1, dim])
def cnn_with_max_pooling(input_var,
filter_dims,
num_filters,
strides,
name,
pool_shapes,
pool_strides,
padding,
hidden_nonlinearity=tf.nn.relu,
hidden_w_init=tf.initializers.glorot_uniform(),
hidden_b_init=tf.zeros_initializer()):
"""Convolutional neural network (CNN) with max-pooling.
Note:
Based on 'NHWC' data format: [batch, height, width, channel].
Args:
input_var (tf.Tensor): Input tf.Tensor to the CNN.
filter_dims (tuple[int]): Dimension of the filters. For example,
(3, 5) means there are two convolutional layers. The filter for
first layer is of dimension (3 x 3) and the second one is of
dimension (5 x 5).
num_filters (tuple[int]): Number of filters. For example, (3, 32) means
there are two convolutional layers. The filter for the first layer
has 3 channels and the second one with 32 channels.
strides (tuple[int]): The stride of the sliding window. For example,
(1, 2) means there are two convolutional layers. The stride of the
filter for first layer is 1 and that of the second layer is 2.
name (str): Model name, also the variable scope of the cnn.
pool_shapes (tuple[int]): Dimension of the pooling layer(s). For
example, (2, 2) means that all the pooling layers have
shape (2, 2).
pool_strides (tuple[int]): The strides of the pooling layer(s). For
example, (2, 2) means that all the pooling layers have
strides (2, 2).
padding (str): The type of padding algorithm to use,
either 'SAME' or 'VALID'.
hidden_nonlinearity (callable): Activation function for intermediate
dense layer(s). It should return a tf.Tensor. Set it to
None to maintain a linear activation.
hidden_w_init (callable): Initializer function for the weight
of intermediate dense layer(s). The function should return a
tf.Tensor.
hidden_b_init (callable): Initializer function for the bias
of intermediate dense layer(s). The function should return a
tf.Tensor.
Return:
tf.Tensor: The output tf.Tensor of the CNN.
"""
pool_strides = [1, pool_strides[0], pool_strides[1], 1]
pool_shapes = [1, pool_shapes[0], pool_shapes[1], 1]
with tf.compat.v1.variable_scope(name):
h = input_var
for index, (filter_dim, num_filter,
stride) in enumerate(zip(filter_dims, num_filters,
strides)):
_stride = [1, stride, stride, 1]
h = _conv(h, 'h{}'.format(index), filter_dim, num_filter, _stride,
hidden_w_init, hidden_b_init, padding)
if hidden_nonlinearity is not None:
h = hidden_nonlinearity(h)
h = tf.nn.max_pool2d(h,
ksize=pool_shapes,
strides=pool_strides,
padding=padding)
# flatten
dim = tf.reduce_prod(h.get_shape()[1:].as_list())
return tf.reshape(h, [-1, dim])
def _conv(input_var, name, filter_size, num_filter, strides, hidden_w_init,
hidden_b_init, padding):
"""Helper function for performing convolution.
Args:
input_var (tf.Tensor): Input tf.Tensor to the CNN.
name (str): Variable scope of the convolution Op.
filter_size (tuple[int]): Dimension of the filters. For example,
(3, 5) means there are two convolutional layers. The filter for
first layer is of dimension (3 x 3) and the second one is of
dimension (5 x 5).
num_filter (tuple[int]): Number of filters. For example, (3, 32) means
there are two convolutional layers. The filter for the first layer
has 3 channels and the second one with 32 channels.
strides (tuple[int]): The stride of the sliding window. For example,
(1, 2) means there are two convolutional layers. The stride of the
filter for first layer is 1 and that of the second layer is 2.
hidden_w_init (callable): Initializer function for the weight
of intermediate dense layer(s). The function should return a
tf.Tensor.
hidden_b_init (callable): Initializer function for the bias
of intermediate dense layer(s). The function should return a
tf.Tensor.
padding (str): The type of padding algorithm to use,
either 'SAME' or 'VALID'.
Return:
tf.Tensor: The output of the convolution.
"""
# channel from input
input_shape = input_var.get_shape()[-1]
# [filter_height, filter_width, in_channels, out_channels]
w_shape = [filter_size, filter_size, input_shape, num_filter]
b_shape = [1, 1, 1, num_filter]
with tf.compat.v1.variable_scope(name):
weight = tf.compat.v1.get_variable('weight',
w_shape,
initializer=hidden_w_init)
bias = tf.compat.v1.get_variable('bias',
b_shape,
initializer=hidden_b_init)
return tf.nn.conv2d(
input_var, weight, strides=strides, padding=padding) + bias
| 44.648649 | 79 | 0.589588 | 1,068 | 8,260 | 4.437266 | 0.13015 | 0.032074 | 0.02089 | 0.030386 | 0.84406 | 0.816628 | 0.816628 | 0.803967 | 0.787508 | 0.787508 | 0 | 0.015074 | 0.333414 | 8,260 | 184 | 80 | 44.891304 | 0.845623 | 0.579661 | 0 | 0.630769 | 0 | 0 | 0.005178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046154 | false | 0 | 0.015385 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2947c0a21e039c13a52c661a597ac229fed41e01 | 71 | py | Python | katas/kyu_7/find_the_stray_number.py | the-zebulan/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 40 | 2016-03-09T12:26:20.000Z | 2022-03-23T08:44:51.000Z | katas/kyu_7/find_the_stray_number.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | null | null | null | katas/kyu_7/find_the_stray_number.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 36 | 2016-11-07T19:59:58.000Z | 2022-03-31T11:18:27.000Z | def stray(arr):
return reduce(lambda prev, curr: prev ^ curr, arr)
| 23.666667 | 54 | 0.676056 | 11 | 71 | 4.363636 | 0.727273 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197183 | 71 | 2 | 55 | 35.5 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
29561621ce033f62365a6725cd790753eb96befe | 11,413 | py | Python | awesome_panel_extensions/assets/svg_icons.py | Jhsmit/awesome-panel-extensions | 41eba7cf84caa911be4ed0df2a96e16fc1e70263 | [
"CC-BY-4.0"
] | 3 | 2020-07-16T07:28:45.000Z | 2020-07-17T12:53:56.000Z | awesome_panel_extensions/assets/svg_icons.py | MarcSkovMadsen/panel-extensions-template | f41ad8d8fb8502f87de3a4992917cbffb6299012 | [
"CC-BY-4.0"
] | null | null | null | awesome_panel_extensions/assets/svg_icons.py | MarcSkovMadsen/panel-extensions-template | f41ad8d8fb8502f87de3a4992917cbffb6299012 | [
"CC-BY-4.0"
] | null | null | null | """This module provides a collection of SVG Icons"""
# pylint: disable=line-too-long
GIF_SVG = """ <svg class="pnx-icon" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" d="M12.002 4h-10a1 1 0 0 0-1 1v8l2.256-2.354a.5.5 0 0 1 .63-.062l2.66 1.773 3.71-3.71a.5.5 0 0 1 .577-.094l1.777 1.947V5a1 1 0 0 0-1-1zm-10-1a2 2 0 0 0-2 2v8a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V5a2 2 0 0 0-2-2h-10zm4 4.5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
<path fill-rule="evenodd" d="M4 2h10a1 1 0 0 1 1 1v8a1 1 0 0 1-1 1v1a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H4a2 2 0 0 0-2 2h1a1 1 0 0 1 1-1z"/>
</svg>"""
MP4_SVG = """
<svg class="pnx-icon" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" d="M0 1a1 1 0 0 1 1-1h14a1 1 0 0 1 1 1v14a1 1 0 0 1-1 1H1a1 1 0 0 1-1-1V1zm4 0h8v6H4V1zm8 8H4v6h8V9zM1 1h2v2H1V1zm2 3H1v2h2V4zM1 7h2v2H1V7zm2 3H1v2h2v-2zm-2 3h2v2H1v-2zM15 1h-2v2h2V1zm-2 3h2v2h-2V4zm2 3h-2v2h2V7zm-2 3h2v2h-2v-2zm2 3h-2v2h2v-2z"/>
</svg>"""
YOUTUBE_SVG = """
<svg class="pnx-icon" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" role="img" viewBox="0 0 576 512">
<path d="M549.655 124.083c-6.281-23.65-24.787-42.276-48.284-48.597C458.781 25 288 25 288 25S117.22 25 74.629 75.486c-23.497 6.322-42.003 24.947-48.284 48.597-11.412 42.867-11.412 132.305-11.412 132.305s0 89.438 11.412 132.305c6.281 23.65 24.787 41.5 48.284 47.821C117.22 448 288 448 288 448s170.78 0 213.251-11.486c23.497-6.321 42.003-24.171 48.284-47.821 11.412-42.867 11.412-132.305 11.412-132.305s0-89.438-11.412-132.305zm-317.51 213.308V175.185l142.739 81.205-142.739 81.201z"/>
</svg>
"""
DOC_SVG = """
<svg class="pnx-icon" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" d="M1 2.828v9.923c.918-.35 2.107-.692 3.287-.81 1.094-.111 2.278-.039 3.213.492V2.687c-.654-.689-1.782-.886-3.112-.752-1.234.124-2.303.523-3.388.893zm7.5-.141v9.746c.935-.53 2.12-.603 3.213-.493 1.18.12 2.25.461 3.287.811V2.828c-.885-.25-2.154-.769-3.388-.893-1.33-.134-2.458.063-3.112.752zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.258.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
</svg>
"""
CODE_SVG = """
<svg class="pnx-icon" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" d="M4.854 4.146a.5.5 0 0 1 0 .708L1.707 8l3.147 3.146a.5.5 0 0 1-.708.708l-3.5-3.5a.5.5 0 0 1 0-.708l3.5-3.5a.5.5 0 0 1 .708 0zm6.292 0a.5.5 0 0 0 0 .708L14.293 8l-3.147 3.146a.5.5 0 0 0 .708.708l3.5-3.5a.5.5 0 0 0 0-.708l-3.5-3.5a.5.5 0 0 0-.708 0zm-.999-3.124a.5.5 0 0 1 .33.625l-4 13a.5.5 0 0 1-.955-.294l4-13a.5.5 0 0 1 .625-.33z"/>
</svg>
"""
# Source: https://fontawesome.com/icons/external-link-alt
EXTERNAL_LINK = """<svg xmlns="http://www.w3.org/2000/svg" class="pnx-icon" aria-hidden="true" focusable="false" role="img" viewBox="0 0 512 512"><path fill="currentColor" d="M432,320H400a16,16,0,0,0-16,16V448H64V128H208a16,16,0,0,0,16-16V80a16,16,0,0,0-16-16H48A48,48,0,0,0,0,112V464a48,48,0,0,0,48,48H400a48,48,0,0,0,48-48V336A16,16,0,0,0,432,320ZM488,0h-128c-21.37,0-32.05,25.91-17,41l35.73,35.73L135,320.37a24,24,0,0,0,0,34L157.67,377a24,24,0,0,0,34,0L435.28,133.32,471,169c15,15,41,4.5,41-17V24A24,24,0,0,0,488,0Z"/></svg>"""
FACEBOOK = """<svg xmlns="http://www.w3.org/2000/svg" class="pnx-icon" aria-hidden="true" focusable="false" role="img" viewBox="0 0 320 512"><path fill="currentColor" d="M279.14 288l14.22-92.66h-88.91v-60.13c0-25.35 12.42-50.06 52.24-50.06h40.42V6.26S260.43 0 225.36 0c-73.22 0-121.08 44.38-121.08 124.72v70.62H22.89V288h81.39v224h100.17V288z"/></svg>"""
LINKED_IN = """<svg xmlns="http://www.w3.org/2000/svg" class="pnx-icon" aria-hidden="true" focusable="false" role="img" viewBox="0 0 448 512"><path fill="currentColor" d="M100.28 448H7.4V148.9h92.88zM53.79 108.1C24.09 108.1 0 83.5 0 53.8a53.79 53.79 0 0 1 107.58 0c0 29.7-24.1 54.3-53.79 54.3zM447.9 448h-92.68V302.4c0-34.7-.7-79.2-48.29-79.2-48.29 0-55.69 37.7-55.69 76.7V448h-92.78V148.9h89.08v40.8h1.3c12.4-23.5 42.69-48.3 87.88-48.3 94 0 111.28 61.9 111.28 142.3V448z"/></svg>"""
TWITTER = """<svg xmlns="http://www.w3.org/2000/svg" class="pnx-icon" aria-hidden="true" focusable="false" role="img" viewBox="0 0 512 512"><path fill="currentColor" d="M459.37 151.716c.325 4.548.325 9.097.325 13.645 0 138.72-105.583 298.558-298.558 298.558-59.452 0-114.68-17.219-161.137-47.106 8.447.974 16.568 1.299 25.34 1.299 49.055 0 94.213-16.568 130.274-44.832-46.132-.975-84.792-31.188-98.112-72.772 6.498.974 12.995 1.624 19.818 1.624 9.421 0 18.843-1.3 27.614-3.573-48.081-9.747-84.143-51.98-84.143-102.985v-1.299c13.969 7.797 30.214 12.67 47.431 13.319-28.264-18.843-46.781-51.005-46.781-87.391 0-19.492 5.197-37.36 14.294-52.954 51.655 63.675 129.3 105.258 216.365 109.807-1.624-7.797-2.599-15.918-2.599-24.04 0-57.828 46.782-104.934 104.934-104.934 30.213 0 57.502 12.67 76.67 33.137 23.715-4.548 46.456-13.32 66.599-25.34-7.798 24.366-24.366 44.833-46.132 57.827 21.117-2.273 41.584-8.122 60.426-16.243-14.292 20.791-32.161 39.308-52.628 54.253z"/></svg>"""
REDDIT = """<svg xmlns="http://www.w3.org/2000/svg" class="pnx-icon" aria-hidden="true" focusable="false" role="img" viewBox="0 0 512 512"><path fill="currentColor" d="M440.3 203.5c-15 0-28.2 6.2-37.9 15.9-35.7-24.7-83.8-40.6-137.1-42.3L293 52.3l88.2 19.8c0 21.6 17.6 39.2 39.2 39.2 22 0 39.7-18.1 39.7-39.7s-17.6-39.7-39.7-39.7c-15.4 0-28.7 9.3-35.3 22l-97.4-21.6c-4.9-1.3-9.7 2.2-11 7.1L246.3 177c-52.9 2.2-100.5 18.1-136.3 42.8-9.7-10.1-23.4-16.3-38.4-16.3-55.6 0-73.8 74.6-22.9 100.1-1.8 7.9-2.6 16.3-2.6 24.7 0 83.8 94.4 151.7 210.3 151.7 116.4 0 210.8-67.9 210.8-151.7 0-8.4-.9-17.2-3.1-25.1 49.9-25.6 31.5-99.7-23.8-99.7zM129.4 308.9c0-22 17.6-39.7 39.7-39.7 21.6 0 39.2 17.6 39.2 39.7 0 21.6-17.6 39.2-39.2 39.2-22 .1-39.7-17.6-39.7-39.2zm214.3 93.5c-36.4 36.4-139.1 36.4-175.5 0-4-3.5-4-9.7 0-13.7 3.5-3.5 9.7-3.5 13.2 0 27.8 28.5 120 29 149 0 3.5-3.5 9.7-3.5 13.2 0 4.1 4 4.1 10.2.1 13.7zm-.8-54.2c-21.6 0-39.2-17.6-39.2-39.2 0-22 17.6-39.7 39.2-39.7 22 0 39.7 17.6 39.7 39.7-.1 21.5-17.7 39.2-39.7 39.2z"/></svg>"""
ENVELOPE = """<svg xmlns="http://www.w3.org/2000/svg" class="pnx-icon" aria-hidden="true" focusable="false" role="img" viewBox="0 0 512 512"><path fill="currentColor" d="M502.3 190.8c3.9-3.1 9.7-.2 9.7 4.7V400c0 26.5-21.5 48-48 48H48c-26.5 0-48-21.5-48-48V195.6c0-5 5.7-7.8 9.7-4.7 22.4 17.4 52.1 39.5 154.1 113.6 21.1 15.4 56.7 47.8 92.2 47.6 35.7.3 72-32.8 92.3-47.6 102-74.1 131.6-96.3 154-113.7zM256 320c23.2.4 56.6-29.2 73.4-41.4 132.7-96.3 142.8-104.7 173.4-128.7 5.8-4.5 9.2-11.5 9.2-18.9v-19c0-26.5-21.5-48-48-48H48C21.5 64 0 85.5 0 112v19c0 7.4 3.4 14.3 9.2 18.9 30.6 23.9 40.7 32.4 173.4 128.7 16.8 12.2 50.2 41.8 73.4 41.4z"/></svg>"""
BINDER = """<svg version="1.1" id="Layer_1"
xmlns="http://www.w3.org/2000/svg" class="pnx-icon pnx-icon-binder" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 212.118 65.883" enable-background="new 0 0 212.118 65.883" xml:space="preserve">
<switch>
<g>
<g>
<path fill="#545454" d="M50.751,48.727V12.472h7.251v17.547c1.885-2.32,4.544-3.094,7.299-3.094
c6.042,0,10.586,5.269,10.586,11.167S71.344,49.21,65.302,49.21c-2.755,0-5.849-0.87-7.299-3.046v2.562H50.751z M63.078,43.409
c2.9,0,5.076-2.514,5.076-5.317s-2.175-5.317-5.076-5.317s-5.076,2.514-5.076,5.317S60.178,43.409,63.078,43.409z"/>
<path fill="#545454" d="M84.35,15.855c2.32,0,4.254,1.885,4.254,4.254c0,2.32-1.934,4.254-4.254,4.254
c-2.369,0-4.254-1.934-4.254-4.254C80.096,17.741,81.981,15.855,84.35,15.855z M80.724,48.727V27.409h7.251v21.318H80.724z"/>
<path fill="#545454" d="M115.819,48.727h-7.25V36.642c0-2.514-0.967-3.867-3.239-3.867s-4.061,2.417-4.061,5.317v10.635h-7.251
V27.409h7.251v2.61c1.45-1.837,3.722-3.094,6.526-3.094c5.704,0,8.024,3.916,8.024,9.716V48.727z"/>
<path fill="#545454" d="M144.826,48.727h-7.251v-2.562c-1.45,2.176-4.592,3.046-7.299,3.046c-6.043,0-10.587-5.221-10.587-11.118
s4.544-11.167,10.587-11.167c2.707,0,5.414,0.773,7.299,3.094V12.472h7.251V48.727z M132.499,32.774
c-2.9,0-5.075,2.514-5.075,5.317s2.175,5.317,5.075,5.317s5.076-2.514,5.076-5.317S135.399,32.774,132.499,32.774z"/>
<path fill="#545454" d="M173.639,38.962h-16.532c0,2.466,1.74,5.075,4.592,5.075c2.562,0,4.158-1.691,4.206-3.238h7.396
c-1.256,5.607-5.801,8.411-11.456,8.411c-7.348,0-12.423-4.351-12.423-11.118c0-6.671,5.269-11.167,12.423-11.167
c6.478,0,11.843,3.867,11.843,10.683C173.687,38.043,173.639,38.527,173.639,38.962z M166.726,35.53c0,0-0.338-3.964-4.736-3.964
c-4.545,0-4.786,3.964-4.786,3.964H166.726z"/>
<path fill="#545454" d="M185.532,38.092v10.635h-7.251V27.409h7.251v2.61c1.111-1.595,2.949-3.094,5.607-3.094
c0.629,0,1.596,0.193,2.176,0.483v6.574h-0.098c-0.725-0.87-1.788-1.208-2.9-1.208C187.61,32.774,185.532,35.53,185.532,38.092z"
/>
</g>
<circle fill="none" stroke="#F5A252" stroke-width="4.8342" stroke-miterlimit="10" cx="27.879" cy="23.939" r="9.542"/>
<circle fill="none" stroke="#579ACA" stroke-width="4.8342" stroke-miterlimit="10" cx="27.879" cy="42.499" r="9.543"/>
<circle fill="none" stroke="#E66581" stroke-width="4.8342" stroke-miterlimit="10" cx="18.551" cy="33.289" r="9.543"/>
<path fill="none" stroke="#579ACA" stroke-width="4.8342" stroke-miterlimit="10" d="M20.196,36.836
c0.759-1.031,1.74-1.927,2.921-2.607c4.566-2.63,10.401-1.06,13.031,3.507"/>
<path fill="none" stroke="#F5A252" stroke-width="4.8342" stroke-miterlimit="10" d="M19.61,28.701
c-2.63-4.566-1.061-10.401,3.507-13.032c4.567-2.63,10.401-1.059,13.031,3.508"/>
</g>
</switch>
</svg>"""
FAST_COLLAPSED_ICON = """
<svg style="stroke: #E62F63" width="18" height="18" viewBox="0 0 18 18" fill="none" xmlns="http://www.w3.org/2000/svg" slot="collapsed-icon">
<path d="M15.2222 1H2.77778C1.79594 1 1 1.79594 1 2.77778V15.2222C1 16.2041 1.79594 17 2.77778 17H15.2222C16.2041 17 17 16.2041 17 15.2222V2.77778C17 1.79594 16.2041 1 15.2222 1Z" stroke-linecap="round" stroke-linejoin="round"></path>
<path d="M9 5.44446V12.5556" stroke-linecap="round" stroke-linejoin="round"></path>
<path d="M5.44446 9H12.5556" stroke-linecap="round" stroke-linejoin="round"></path>
</svg>
"""
FAST_EXPANDED_ICON = """
<svg style="stroke: #E62F63" width="18" height="18" viewBox="0 0 18 18" fill="none" xmlns="http://www.w3.org/2000/svg" slot="expanded-icon">
<path d="M15.2222 1H2.77778C1.79594 1 1 1.79594 1 2.77778V15.2222C1 16.2041 1.79594 17 2.77778 17H15.2222C16.2041 17 17 16.2041 17 15.2222V2.77778C17 1.79594 16.2041 1 15.2222 1Z" stroke-linecap="round" stroke-linejoin="round"></path>
<path d="M5.44446 9H12.5556" stroke-linecap="round" stroke-linejoin="round"></path>
</svg>
"""
ICONS = {
"binder": BINDER,
"code": CODE_SVG,
"doc": DOC_SVG,
"envelope": ENVELOPE,
"external_link": EXTERNAL_LINK,
"facebook": FACEBOOK,
"gif": GIF_SVG,
"linked_in": LINKED_IN,
"mp4": MP4_SVG,
"reddit": REDDIT,
"twitter": TWITTER,
"youtube": YOUTUBE_SVG,
"collapsed": FAST_COLLAPSED_ICON,
"expanded": FAST_EXPANDED_ICON,
}
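# Registry used to look up inline SVG markup by short name, e.g. ICONS["reddit"].
# CODE_SVG, DOC_SVG and the other constants referenced above are assumed to be
# defined earlier in this module (outside this excerpt).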
| 113 | 1,027 | 0.661439 | 2,792 | 11,413 | 2.694842 | 0.277579 | 0.025253 | 0.013158 | 0.009569 | 0.367757 | 0.329612 | 0.309676 | 0.293993 | 0.284423 | 0.280968 | 0 | 0.47446 | 0.108035 | 11,413 | 100 | 1,028 | 114.13 | 0.264637 | 0.011653 | 0 | 0.263736 | 0 | 0.527473 | 0.949074 | 0.350488 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
462be090cc5a7e40770e0321bc8bd623feadf8bb | 8,867 | py | Python | Ago-Dic-2019/Ricardo_Romero_Medina/Ordinario/database.py | Arbupa/DAS_Sistemas | 52263ab91436b2e5a24ce6f8493aaa2e2fe92fb1 | [
"MIT"
] | 41 | 2017-09-26T09:36:32.000Z | 2022-03-19T18:05:25.000Z | Ago-Dic-2019/Ricardo_Romero_Medina/Ordinario/database.py | Arbupa/DAS_Sistemas | 52263ab91436b2e5a24ce6f8493aaa2e2fe92fb1 | [
"MIT"
] | 67 | 2017-09-11T05:06:12.000Z | 2022-02-14T04:44:04.000Z | Ago-Dic-2019/Ricardo_Romero_Medina/Ordinario/database.py | Arbupa/DAS_Sistemas | 52263ab91436b2e5a24ce6f8493aaa2e2fe92fb1 | [
"MIT"
] | 210 | 2017-09-01T00:10:08.000Z | 2022-03-19T18:05:12.000Z | import sqlite3
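# SQLite data-access layer for a Rick and Morty API exercise: one table each for
# characters (personajes), locations (locaciones) and episodes (episodios),
# cross-referenced through foreign keys.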
class basedatos():
def __init__(self):
self.crear_tabla_personajes()
self.crear_tabla_locaciones()
self.crear_tabla_Episodios()
def crear_tabla_personajes(self):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado a SQLite')
query = '''
CREATE TABLE IF NOT EXISTS personajes(
Id TEXT PRIMARY KEY,
Nombre TEXT,
Status TEXT,
Species TEXT,
Type TEXT,
Origin TEXT,
Location TEXT,
id_locacion TEXT,
Episode TEXT,
id_episodio TEXT,
Url TEXT,
foreign key (id_locacion) references locaciones(Id),
foreign key (id_episodio) references episodios(Id)
);
'''
cursor.execute(query)
row = cursor.fetchall()
print('Tabla creada correctamente', row)
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion', error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def insertar_personajes(self,per):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado')
query = """INSERT INTO personajes VALUES
("{}","{}","{}","{}","{}","{}","{}","{}","{}","{}","{}")""".format(per._Id, per._Nombre, per._Status, per._Species, per._Origen, per._Tipo, per._Location, per._Id_Locacion, per._Episode, per._Id_Episodio, per._Url)
resultado = cursor.execute(query)
conexion.commit()
print('Valor Insertado Correctamente', resultado)
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion',error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def ver_personajes(self):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado')
query = 'SELECT * FROM personajes;'
cursor.execute(query)
rows = cursor.fetchall()
print('Total de registros: ', len(rows))
print('------------Registros-------------')
            for row in rows:
                print('Id: {}\nNombre: {}\nStatus: {}\nSpecies: {}\nTipo: {}\nOrigen: {}\nLocacion: {}\nNumero Locacion: {}\nEpisodio: {}\nNumero Episodio: {}\nURL: {}'.format(*row))
            cursor.close()
            # Print every record, then hand the full result set back to the caller.
            return rows
except sqlite3.Error as error:
print('Error con la conexion',error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def crear_tabla_locaciones(self):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado a SQLite')
query = '''
CREATE TABLE IF NOT EXISTS locaciones(
Id TEXT PRIMARY KEY,
Nombre TEXT,
Type TEXT,
Dimension TEXT,
Residents TEXT,
id_personaje TEXT,
Url TEXT,
foreign key (id_personaje) references personajes (Id)
);
'''
cursor.execute(query)
row = cursor.fetchall()
print('Tabla creada correctamente', row)
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion', error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def insertar_locaciones(self,loc):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado')
query = """INSERT INTO locaciones VALUES
("{}","{}","{}","{}","{}","{}","{}")""".format(loc._Id, loc._Nombre, loc._Tipo, loc._Dimension, loc._Residentes, loc._Id_Residente, loc._Url)
resultado = cursor.execute(query)
conexion.commit()
print('Valor Insertado Correctamente', resultado)
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion',error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def ver_locaciones(self):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado')
query = 'SELECT * FROM locaciones;'
cursor.execute(query)
rows = cursor.fetchall()
print('Total de registros: ', len(rows))
print('------------Registros-------------')
for row in rows:
print('Id: {}\nNombre: {}\nTipo: {}\nDimension: {}\nResidente: {}\nNumero Residente: {}\nURL: {}'.format(*row))
print('Total de registros: ', len(rows))
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion',error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def crear_tabla_Episodios(self):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado a SQLite')
query = '''
CREATE TABLE IF NOT EXISTS episodios(
Id TEXT PRIMARY KEY,
Nombre TEXT,
air_date TEXT,
Episode TEXT,
Character TEXT,
id_personaje TEXT,
Url TEXT,
foreign key (id_personaje) references personajes (Id)
);
'''
cursor.execute(query)
row = cursor.fetchall()
print('Tabla creada correctamente', row)
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion', error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def insertar_episodios(self,epi):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado')
query = """INSERT INTO episodios VALUES
("{}","{}","{}","{}","{}","{}","{}")""".format(epi._Id, epi._Nombre, epi._Al_Aire, epi._Episodio, epi._Personaje, epi._Id_Personaje, epi._Url)
resultado = cursor.execute(query)
conexion.commit()
print('Valor Insertado Correctamente', resultado)
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion',error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
def ver_episodios(self):
try:
conexion = sqlite3.connect('Ordinario/BaseDatos/rickandmorty.db')
cursor = conexion.cursor()
print('Conectado')
query = 'SELECT * FROM episodios;'
cursor.execute(query)
rows = cursor.fetchall()
print('Total de registros: ', len(rows))
print('------------Registros-------------')
for row in rows:
print('Id: {}\nNombre: {}\nPrimera Transmision: {}\nEpisodio: {}\nPersonaje: {}\nNumero Personaje: {}\nURL: {}'.format(*row))
print('Total de registros: ', len(rows))
cursor.close()
except sqlite3.Error as error:
print('Error con la conexion',error)
finally:
if(conexion):
conexion.close()
print('Conexion a SQLite cerrada\n')
if __name__ == '__main__':
db = basedatos()
#db.ver_personajes()
#db.ver_locaciones()
#db.ver_episodios() | 34.501946 | 234 | 0.512687 | 802 | 8,867 | 5.578554 | 0.137157 | 0.018775 | 0.036209 | 0.050291 | 0.798391 | 0.798391 | 0.775816 | 0.775816 | 0.775816 | 0.775816 | 0 | 0.003355 | 0.361227 | 8,867 | 257 | 235 | 34.501946 | 0.786547 | 0.006316 | 0 | 0.781553 | 0 | 0.019417 | 0.384153 | 0.061869 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048544 | false | 0 | 0.004854 | 0 | 0.063107 | 0.218447 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
465443438003613058727e8573e00546439040db | 165 | py | Python | classified/probe/all.py | tehmaze/classified | 479768675cdff0156510741d0c0ea37b9b10c099 | [
"MIT"
] | 6 | 2015-03-17T21:37:58.000Z | 2020-05-20T12:45:57.000Z | classified/probe/all.py | tehmaze/classified | 479768675cdff0156510741d0c0ea37b9b10c099 | [
"MIT"
] | 2 | 2016-01-14T12:42:17.000Z | 2020-05-19T09:38:31.000Z | classified/probe/all.py | tehmaze/classified | 479768675cdff0156510741d0c0ea37b9b10c099 | [
"MIT"
] | 1 | 2016-06-10T12:23:03.000Z | 2016-06-10T12:23:03.000Z | # Project imports
from classified.probe.pan import PAN
from classified.probe.pcap import PCAP
from classified.probe.ssl import SSL
__all__ = ['PAN', 'PCAP', 'SSL']
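# Convenience aggregate: a single import exposes the PAN, PCAP and SSL probes.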
| 23.571429 | 38 | 0.763636 | 24 | 165 | 5.083333 | 0.416667 | 0.344262 | 0.467213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 165 | 6 | 39 | 27.5 | 0.847222 | 0.090909 | 0 | 0 | 0 | 0 | 0.067568 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4655246d86fd27b7f96852b412c0d2094dbac383 | 44 | py | Python | scorebot/__init__.py | bombsimon/hltv-python | 6b882fea88245c239fc034892f75898d65adcb0c | [
"MIT"
] | 10 | 2019-10-14T00:40:50.000Z | 2022-03-30T21:46:35.000Z | scorebot/__init__.py | bombsimon/hltv-python | 6b882fea88245c239fc034892f75898d65adcb0c | [
"MIT"
] | 6 | 2020-07-24T14:21:05.000Z | 2022-03-10T07:32:52.000Z | scorebot/__init__.py | bombsimon/hltv-python | 6b882fea88245c239fc034892f75898d65adcb0c | [
"MIT"
] | 4 | 2020-04-25T08:47:12.000Z | 2022-03-20T14:38:13.000Z | from .game import *
from .scorebot import *
| 14.666667 | 23 | 0.727273 | 6 | 44 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 24 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3f0197e49bc897998f568d9536fdaec3fa92f54 | 129 | py | Python | siesta_utils/__init__.py | semodi/siesta_utils | 3bca36440e31cdda7784a16a50b93f2b88e1550f | [
"BSD-3-Clause"
] | null | null | null | siesta_utils/__init__.py | semodi/siesta_utils | 3bca36440e31cdda7784a16a50b93f2b88e1550f | [
"BSD-3-Clause"
] | null | null | null | siesta_utils/__init__.py | semodi/siesta_utils | 3bca36440e31cdda7784a16a50b93f2b88e1550f | [
"BSD-3-Clause"
] | null | null | null | """
This is the base file of siesta_utils
"""
from . import grid
from . import mat
from . import conversions
from . import xyz
| 14.333333 | 37 | 0.713178 | 20 | 129 | 4.55 | 0.7 | 0.43956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209302 | 129 | 8 | 38 | 16.125 | 0.892157 | 0.286822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
310202927a7c280ed1e91bd9898e32f149ac0a57 | 1,726 | py | Python | exercisemodule1.py | shyed2001/Python_Programming | 93ef958e3d8aa77f9191b550972235ce4fe4a6cb | [
"bzip2-1.0.6"
] | 2 | 2019-05-01T04:32:14.000Z | 2019-05-04T11:28:18.000Z | exercisemodule1.py | shyed2001/python-learning-basics | 93ef958e3d8aa77f9191b550972235ce4fe4a6cb | [
"bzip2-1.0.6"
] | null | null | null | exercisemodule1.py | shyed2001/python-learning-basics | 93ef958e3d8aa77f9191b550972235ce4fe4a6cb | [
"bzip2-1.0.6"
] | null | null | null | #-------------------------------------------------------------------------------
# Name: Return Function
# Purpose: Exercise
#
# Author: Shyed Shahriar Housaini
#
# Created: 01/05/2019
# Copyright: (c) User 2019
# Licence: <your licence>
#-------------------------------------------------------------------------------
monthConv={
"jan": "January",
"feb": "February",
"mar": "March",
"apr": "April",
"may": "May",
"jun": "June",
"jul": "July",
"aug": "August",
"sep":"September",
"oct":"October",
"nov":"November",
"dec":"December"
}
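# dict.get(key[, default]) returns None (or the given default) for a missing key,
# whereas dict[key] raises KeyError.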
print("""
Assigned key value pairs are -
monthConv={
"jan": "January",
"feb": "February",
"mar": "March",
"apr": "April",
"may": "May",
"jun": "June",
"jul": "July",
"aug": "August",
"sep":"September",
"oct":"October",
"nov":"November",
"dec":"December"
}
""")
print("""
print(monthConv["nov"])
print(monthConv.get("mar"))
print(monthConv.get("ma"," Not a valid key"))
""")
print(monthConv["nov"])
print(monthConv.get("mar"))
print(monthConv.get("mov"))
print(monthConv.get("ma"))
print(monthConv.get("ma"," Not a valid key"))
monthConv2={
1: "january",
2: "february",
3: "march",
4: "april",
5: "may",
6: "june"
}
print("""
monthConv2={
1: "january",
2: "february",
3: "march",
4: "april",
5: "may",
6: "june"
}
""")
print("""
print(monthConv2[6])
print(monthConv2.get(3))
print(monthConv2.get(7))
print(monthConv2.get(9," Not a valid key"))
""")
print(monthConv2[6])
print(monthConv2.get("january"))
print(monthConv2.get(3))
print(monthConv2.get(7))
print(monthConv2.get(9," Not a valid key"))
print(monthConv2.get["january"])
print(monthConv["mov"])
print("""
""")
| 18.76087 | 81 | 0.524913 | 193 | 1,726 | 4.694301 | 0.326425 | 0.182119 | 0.15894 | 0.05298 | 0.807947 | 0.783664 | 0.745033 | 0.745033 | 0.695364 | 0.695364 | 0 | 0.030324 | 0.159328 | 1,726 | 91 | 82 | 18.967033 | 0.594073 | 0.188297 | 0 | 0.864865 | 0 | 0 | 0.583655 | 0.114109 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.324324 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3170c00771eea27b977e07210808ea67511be341 | 599 | py | Python | app/dao/db.py | rbarbioni/python-flask-api | b245e167eae6482e80b8b69b33e418eea2b31aff | [
"Apache-2.0"
] | 3 | 2020-07-01T19:42:42.000Z | 2021-06-06T00:47:48.000Z | app/dao/db.py | rbarbioni/python-flask-api | b245e167eae6482e80b8b69b33e418eea2b31aff | [
"Apache-2.0"
] | 1 | 2021-04-04T23:45:29.000Z | 2021-04-04T23:45:29.000Z | app/dao/db.py | rbarbioni/python-flask-api | b245e167eae6482e80b8b69b33e418eea2b31aff | [
"Apache-2.0"
] | null | null | null | def all(session, Model):
return session.query(Model).all()
def query(session, Model, filter):
return session.query(Model).filter(filter)
def query_first(session, Model, filter):
return session.query(Model).filter(filter).first()
def query_all(session, Model, filter):
return session.query(Model).filter(filter).all()
def insert(session, model):
return session.add(model)
def update(session, Model, filter, model):
return session.query(Model).filter(filter).update(model)
def delete(session, Model, filter):
return session.query(Model).filter(filter).delete()
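# Thin query helpers over a SQLAlchemy-style session (assumed API). Note that `all`
# shadows the builtin of the same name and the `filter` parameter shadows the builtin
# filter(). Usage sketch: query_first(session, User, User.name == 'bob') returns the
# first matching row or None.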
| 22.185185 | 60 | 0.722871 | 80 | 599 | 5.3875 | 0.15 | 0.25522 | 0.25058 | 0.320186 | 0.649652 | 0.573086 | 0.491879 | 0.491879 | 0.491879 | 0 | 0 | 0 | 0.135225 | 599 | 26 | 61 | 23.038462 | 0.832046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
31a03ed1a0689adc286790c1a7f849019c090a84 | 107 | py | Python | agpy/__init__.py | bhanuponguru/ag_py | e43103ad2776505aa307abe2299a29c6ce5d9277 | [
"MIT"
] | null | null | null | agpy/__init__.py | bhanuponguru/ag_py | e43103ad2776505aa307abe2299a29c6ce5d9277 | [
"MIT"
] | null | null | null | agpy/__init__.py | bhanuponguru/ag_py | e43103ad2776505aa307abe2299a29c6ce5d9277 | [
"MIT"
] | null | null | null | from . import *
from .window import *
from .keycodes import *
from .keyboard import *
from .timer import *
| 17.833333 | 23 | 0.719626 | 14 | 107 | 5.5 | 0.428571 | 0.519481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186916 | 107 | 5 | 24 | 21.4 | 0.885057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31af26efaf31e4646621c77194c27461260cc893 | 42 | py | Python | __init__.py | PVSemk/segmentation_models.pytorch | 8d9b033be918dfc1e6186d9ef404cc7d2c171e8d | [
"MIT"
] | null | null | null | __init__.py | PVSemk/segmentation_models.pytorch | 8d9b033be918dfc1e6186d9ef404cc7d2c171e8d | [
"MIT"
] | null | null | null | __init__.py | PVSemk/segmentation_models.pytorch | 8d9b033be918dfc1e6186d9ef404cc7d2c171e8d | [
"MIT"
] | null | null | null | from segmentation_models_pytorch import *
| 21 | 41 | 0.880952 | 5 | 42 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ee4406dc9bf92cfadac96d5e4789ff203d741d6 | 2,586 | py | Python | tools/ci/test_autocomplete.py | iPlon-org/esp-idf | a5227db2a75102ca1a17860188c3c352a529a01b | [
"Apache-2.0"
] | 8,747 | 2016-08-18T14:58:24.000Z | 2022-03-31T20:58:55.000Z | tools/ci/test_autocomplete.py | iPlon-org/esp-idf | a5227db2a75102ca1a17860188c3c352a529a01b | [
"Apache-2.0"
] | 8,603 | 2016-08-20T08:55:56.000Z | 2022-03-31T23:04:01.000Z | tools/ci/test_autocomplete.py | iPlon-org/esp-idf | a5227db2a75102ca1a17860188c3c352a529a01b | [
"Apache-2.0"
] | 6,380 | 2016-08-18T18:17:00.000Z | 2022-03-31T22:25:57.000Z | #!/usr/bin/env python
import os
import sys
import unittest
import pexpect
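# Interactive smoke tests: each test spawns a shell, sources the ESP-IDF export
# script and checks that tab-completion for idf.py offers the expected targets.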
class Test(unittest.TestCase):
def test_fish(self):
os.environ['TERM'] = 'vt100'
child = pexpect.spawn('fish -i')
with open(os.environ['IDF_PATH'] + '/fish' + str(sys.version_info.major) + '.out', 'wb') as output:
child.logfile = output
child.sendline('. ./export.fish')
result = child.expect(
['Go to the project directory and run.*idf\\.py build', pexpect.EOF,
pexpect.TIMEOUT], timeout=40)
self.assertEqual(result, 0, 'Export was not successful!')
child.send('idf.py \t\t')
result = child.expect(['all.*app.*app-flash.*bootloader.*', pexpect.EOF, pexpect.TIMEOUT], timeout=40)
self.assertEqual(result, 0, 'Autocompletion for idf.py failed in fish!')
def test_bash(self):
os.environ['TERM'] = 'xterm-256color'
child = pexpect.spawn('bash -i')
with open(os.environ['IDF_PATH'] + '/bash' + str(sys.version_info.major) + '.out', 'wb') as output:
child.logfile = output
child.sendline('. ./export.sh')
child.send('idf.py \t\t')
result = child.expect(
['Go to the project directory and run.*idf\\.py build', pexpect.EOF,
pexpect.TIMEOUT], timeout=40)
self.assertEqual(result, 0, 'Export was not successful!')
result = child.expect(
['all.*app.*app-flash.*bootloader.*bootloader-flash.*build-system-targets.*clean.*', pexpect.EOF,
pexpect.TIMEOUT], timeout=40)
self.assertEqual(result, 0, 'Autocompletion for idf.py failed in bash!')
def test_zsh(self):
child = pexpect.spawn('zsh -i')
with open(os.environ['IDF_PATH'] + '/zsh' + str(sys.version_info.major) + '.out', 'wb') as output:
child.logfile = output
child.sendline('. ./export.sh')
result = child.expect(
['Go to the project directory and run.*idf\\.py build', pexpect.EOF,
pexpect.TIMEOUT], timeout=40)
self.assertEqual(result, 0, 'Export was not successful!')
child.send('idf.py \t\t')
result = child.expect(
['all.*app.*app-flash.*bootloader.*bootloader-flash.*build-system-targets.*clean.*', pexpect.EOF,
pexpect.TIMEOUT], timeout=40)
self.assertEqual(result, 0, 'Autocompletion for idf.py failed in zsh!')
if __name__ == '__main__':
unittest.main()
| 44.586207 | 114 | 0.578113 | 312 | 2,586 | 4.737179 | 0.24359 | 0.030447 | 0.069012 | 0.097429 | 0.809202 | 0.809202 | 0.809202 | 0.758457 | 0.758457 | 0.747632 | 0 | 0.012746 | 0.271848 | 2,586 | 57 | 115 | 45.368421 | 0.772172 | 0.007734 | 0 | 0.530612 | 0 | 0.040816 | 0.28499 | 0.075244 | 0 | 0 | 0 | 0 | 0.122449 | 1 | 0.061224 | false | 0 | 0.081633 | 0 | 0.163265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9eeb15541b84931e72cdfc3031e7c3ea73a14774 | 331 | py | Python | templates/python.flask/{{cookiecutter.project_safe_name}}/app/main/views/misc.py | by46/recipe | 203abd2141a536b66b4e57d073169a49395be1f0 | [
"MIT"
] | null | null | null | templates/python.flask/{{cookiecutter.project_safe_name}}/app/main/views/misc.py | by46/recipe | 203abd2141a536b66b4e57d073169a49395be1f0 | [
"MIT"
] | null | null | null | templates/python.flask/{{cookiecutter.project_safe_name}}/app/main/views/misc.py | by46/recipe | 203abd2141a536b66b4e57d073169a49395be1f0 | [
"MIT"
] | null | null | null | from flask import current_app
from flask import render_template
from app.main import main
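# Miscellaneous views: /version renders the VERSION value from the Flask config
# (presumably stamped at deploy time); /faq.htm serves a static FAQ template.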
@main.route("/version", methods=['GET'])
def version():
return render_template('main/version.html', version=current_app.config['VERSION'])
@main.route("/faq.htm")
def faq():
return render_template('main/faq.html')
| 22.066667 | 87 | 0.703927 | 45 | 331 | 5.066667 | 0.4 | 0.184211 | 0.131579 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154079 | 331 | 14 | 88 | 23.642857 | 0.814286 | 0 | 0 | 0 | 0 | 0 | 0.176656 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | true | 0 | 0.333333 | 0.222222 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
730092245bc786f464d599057fcb22425131b790 | 2,735 | py | Python | transactions/models.py | Guilehm/expense-control-system | c0f8393497f54076cb15008f1bda9efc5081025b | [
"MIT"
] | 1 | 2022-02-16T23:23:02.000Z | 2022-02-16T23:23:02.000Z | transactions/models.py | guilehm/expense-control-system | c0f8393497f54076cb15008f1bda9efc5081025b | [
"MIT"
] | 40 | 2018-07-01T15:49:05.000Z | 2018-09-06T02:37:24.000Z | transactions/models.py | Guilehm/Expense-Control-System | c0f8393497f54076cb15008f1bda9efc5081025b | [
"MIT"
] | 1 | 2019-05-05T13:43:55.000Z | 2019-05-05T13:43:55.000Z | from django.contrib.auth.models import User
from django.core.validators import MinValueValidator
from django.db import models
from django.db.models import Sum
class ExpenseQuerySet(models.QuerySet):
def total(self):
return self.aggregate(Sum('total'))['total__sum'] or 0
class RevenueQuerySet(models.QuerySet):
def total(self):
return self.aggregate(Sum('total'))['total__sum'] or 0
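# Usage sketch: Revenue.objects.filter(user=user).total() yields the Decimal sum of
# the `total` column, with the `or 0` guard covering the empty-queryset case.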
class Revenue(models.Model):
account = models.ForeignKey('bank.BankAccount', on_delete=models.CASCADE, related_name='revenues')
user = models.ForeignKey(User, on_delete=models.CASCADE)
title = models.CharField(max_length=50)
description = models.TextField(blank=True, null=True)
total = models.DecimalField(
max_digits=9,
decimal_places=2,
validators=[MinValueValidator(0), ]
)
competition_date = models.DateField(db_index=True, blank=True, null=True)
due_date = models.DateField(db_index=True)
received_out = models.BooleanField(default=False)
note = models.TextField(blank=True, null=True)
category = models.ForeignKey(
'core.Category',
related_name='revenues',
on_delete=models.CASCADE,
)
tags = models.ManyToManyField(
'core.Tag',
related_name='revenues',
blank=True,
)
objects = RevenueQuerySet.as_manager()
date_added = models.DateTimeField(auto_now_add=True, db_index=True)
date_changed = models.DateTimeField(auto_now=True, db_index=True)
def __str__(self):
return self.title
class Expense(models.Model):
account = models.ForeignKey('bank.BankAccount', on_delete=models.CASCADE, related_name='expenses')
user = models.ForeignKey(User, on_delete=models.CASCADE)
title = models.CharField(max_length=50)
description = models.TextField(blank=True, null=True)
total = models.DecimalField(
max_digits=9,
decimal_places=2,
validators=[MinValueValidator(0), ]
)
competition_date = models.DateField(db_index=True, blank=True, null=True)
due_date = models.DateField(db_index=True)
paid_out = models.BooleanField(default=False)
note = models.TextField(blank=True, null=True)
category = models.ForeignKey(
'core.Category',
related_name='expenses',
on_delete=models.CASCADE,
null=True,
)
tags = models.ManyToManyField(
'core.Tag',
related_name='expenses',
blank=True,
)
objects = ExpenseQuerySet.as_manager()
date_added = models.DateTimeField(auto_now_add=True, db_index=True)
date_changed = models.DateTimeField(auto_now=True, db_index=True)
def __str__(self):
return self.title
| 32.176471 | 102 | 0.695795 | 328 | 2,735 | 5.631098 | 0.25 | 0.038982 | 0.047645 | 0.068219 | 0.788305 | 0.788305 | 0.788305 | 0.741743 | 0.741743 | 0.741743 | 0 | 0.005437 | 0.193053 | 2,735 | 84 | 103 | 32.559524 | 0.831445 | 0.008775 | 0 | 0.695652 | 0 | 0 | 0.056109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057971 | false | 0 | 0.057971 | 0.057971 | 0.637681 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
73292cd965a33861268b4a580907fd3b067bff62 | 17,113 | py | Python | insights/parsers/tests/test_multipath_v4_ll.py | lhuett/insights-core | 1c84eeffc037f85e2bbf60c9a302c83aa1a50cf8 | [
"Apache-2.0"
] | 121 | 2017-05-30T20:23:25.000Z | 2022-03-23T12:52:15.000Z | insights/parsers/tests/test_multipath_v4_ll.py | lhuett/insights-core | 1c84eeffc037f85e2bbf60c9a302c83aa1a50cf8 | [
"Apache-2.0"
] | 1,977 | 2017-05-26T14:36:03.000Z | 2022-03-31T10:38:53.000Z | insights/parsers/tests/test_multipath_v4_ll.py | lhuett/insights-core | 1c84eeffc037f85e2bbf60c9a302c83aa1a50cf8 | [
"Apache-2.0"
] | 244 | 2017-05-30T20:22:57.000Z | 2022-03-26T10:09:39.000Z | from insights.parsers import multipath_v4_ll
from insights.tests import context_wrap
import doctest
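# Parser tests: canned `multipath -v4 -ll` output (both the RHEL 6 and the older
# RHEL 5 formats) is fed through multipath_v4_ll and the extracted device and
# path-group structures are checked field by field.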
MULTIPATH_V4_LL_INFO = """
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
0:0:0:0 sda 8:0 -1 undef ready VMware,Virtual disk running
3:0:0:1 sdb 8:16 -1 undef ready IET,VIRTUAL-DISK running
4:0:0:1 sdc 8:32 -1 undef ready IET,VIRTUAL-DISK running
Oct 28 14:02:44 | *word = 0, len = 1
Oct 28 14:02:44 | *word = E, len = 1
Oct 28 14:02:44 | *word = 1, len = 1
Oct 28 14:02:44 | *word = 0, len = 1
Oct 28 14:02:44 | *word = A, len = 1
Oct 28 14:02:44 | *word = 0, len = 1
mpathg (36f01faf000da360b0000033c528fea6d) dm-2 DELL,MD36xxi
size=54T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 12:0:0:1 sdc 8:32 active ready running
| |- 11:0:0:1 sdi 8:128 active ready running
| |- 15:0:0:1 sdo 8:224 active ready running
| `- 17:0:0:1 sdv 65:80 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 13:0:0:1 sdf 8:80 active ready running
|- 14:0:0:1 sdl 8:176 active ready running
|- 16:0:0:1 sdr 65:16 active ready running
`- 18:0:0:1 sdx 65:112 active ready running
mpathe (36f01faf000da3761000004323aa6fbce) dm-4 DELL,MD36xxi
size=44T features='3 queue_if_no_path pg_init_retries 55' hwhandler='2 rdac' wp=rx
|-+- policy='round-robin 0' prio=1 status=active
| |- 12:0:0:2 sdc 8:32 active ready running
| |- 11:0:0:2 sdi 8:128 active ready running
| |- 15:0:0:2 sdo 8:224 active ready running
| `- 17:0:0:2 sdv 65:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
|- 13:0:0:2 sdf 8:80 active ready running
|- 14:0:0:2 sdl 8:176 active ready running
|- 16:0:0:2 sdr 65:16 active ready running
`- 18:0:0:2 sdx 65:112 active ready running
36001405b1629f80d52a4c898f8856e43 dm-5 LIO-ORG ,block0_sdb
size=2.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 3:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 4:0:0:0 sdb 8:16 active ready running
mpatha (1IET 00080001) dm-0 IET,VIRTUAL-DISK
size=16G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 4:0:0:1 sdc 8:32 active ready running
1IET 00080001 dm-0 IET,VIRTUAL-DISK
size=16G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 4:0:0:1 sdc 8:32 active ready running
mpathb (1IET 00080002) dm-8 COMPELNT,Compellent Vol
size=16G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 4:0:0:1 sdc 8:32 active ready running
1IET 00080007 dm-19 COMPELNT,Compellent Vol
size=16G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 4:0:0:1 sdc 8:32 active ready running
mpathc (test_with_no_devs) dm-1 uninitialized
size=10G features='0' hwhandler='0' wp=rw
""".strip()
MULTIPATH_V4_LL_INFO_RHEL_5 = """
sdz: size = 293601280
sdz: vendor = DGC
sdz: product = RAID 5
sdz: rev = 0430
sdz: h:b:t:l = 5:0:1:6
sdz: tgt_node_name = 0x50060160c4603569
sdz: path checker = emc_clariion (controller setting)
sdz: checker timeout = 60000 ms (sysfs setting)
sdz: state = 2
sr0: blacklisted
sr1: blacklisted
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev
5:0:1:7 sdaa 65:160 0 [undef][ready] DGC,RAID 5
3:0:0:5 sdab 65:176 0 [undef][ready] DGC,RAID 5
3:0:0:6 sdac 65:192 0 [undef][ready] DGC,RAID 5
3:0:0:7 sdad 65:208 0 [undef][ready] DGC,RAID 5
3:0:1:5 sdae 65:224 0 [undef][ready] DGC,RAID 5
3:0:1:6 sdaf 65:240 0 [undef][ready] DGC,RAID 5
3:0:1:7 sdag 66:0 0 [undef][ready] DGC,RAID 5
0:2:0:0 sda 8:0 0 [undef][ready] DELL,PERC 6/i
3:0:0:0 sdb 8:16 0 [undef][ready] DGC,RAID 5
3:0:0:1 sdc 8:32 0 [undef][ready] DGC,RAID 5
3:0:0:2 sdd 8:48 0 [undef][ready] DGC,RAID 5
3:0:0:3 sde 8:64 0 [undef][ready] DGC,RAID 5
3:0:0:4 sdf 8:80 0 [undef][ready] DGC,RAID 5
3:0:1:0 sdg 8:96 0 [undef][ready] DGC,RAID 5
3:0:1:1 sdh 8:112 0 [undef][ready] DGC,RAID 5
3:0:1:2 sdi 8:128 0 [undef][ready] DGC,RAID 5
3:0:1:3 sdj 8:144 0 [undef][ready] DGC,RAID 5
3:0:1:4 sdk 8:160 0 [undef][ready] DGC,RAID 5
5:0:0:0 sdl 8:176 0 [undef][ready] DGC,RAID 5
5:0:0:1 sdm 8:192 0 [undef][ready] DGC,RAID 5
5:0:0:2 sdn 8:208 0 [undef][ready] DGC,RAID 5
5:0:0:3 sdo 8:224 0 [undef][ready] DGC,RAID 5
5:0:0:4 sdp 8:240 0 [undef][ready] DGC,RAID 5
5:0:1:0 sdq 65:0 0 [undef][ready] DGC,RAID 5
5:0:1:1 sdr 65:16 0 [undef][ready] DGC,RAID 5
5:0:1:2 sds 65:32 0 [undef][ready] DGC,RAID 5
5:0:1:3 sdt 65:48 0 [undef][ready] DGC,RAID 5
5:0:1:4 sdu 65:64 0 [undef][ready] DGC,RAID 5
5:0:0:5 sdv 65:80 0 [undef][ready] DGC,RAID 5
5:0:0:6 sdw 65:96 0 [undef][ready] DGC,RAID 5
5:0:0:7 sdx 65:112 0 [undef][ready] DGC,RAID 5
5:0:1:5 sdy 65:128 0 [undef][ready] DGC,RAID 5
5:0:1:6 sdz 65:144 0 [undef][ready] DGC,RAID 5
params = 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 8:160 1000 8:240 1000 round-robin 0 2 1 8:80 1000 65:64 1000
status = 2 0 1 0 2 1 A 0 2 0 8:160 A 0 8:240 A 0 E 0 2 0 8:80 A 0 65:64 A 0
*word = 1, len = 1
*word = queue_if_no_path, len = 16
*word = 1, len = 1
*word = emc, len = 3
sdu: getprio = /sbin/mpath_prio_emc /dev/%n (controller setting)
process 31154 forking to exec '/sbin/mpath_prio_emc /dev/sdu' ((nil))
forked 31158
sdu: prio = 0
*word = 2, len = 1
*word = 1, len = 1
L004 (360060160ade32800f2e3baf47665e211) dm-9 DGC,RAID 5
[size=100G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:4 sdk 8:160 [active][ready]
\_ 5:0:0:4 sdp 8:240 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 3:0:0:4 sdf 8:80 [active][ready]
\_ 5:0:1:4 sdu 65:64 [active][ready]
params = 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 8:64 1000 65:48 1000 round-robin 0 2 1 8:144 1000 8:224 1000
status = 2 0 1 0 2 1 A 0 2 0 8:64 A 0 65:48 A 0 E 0 2 0 8:144 A 0 8:224 A 0
*word = 1, len = 1
*word = queue_if_no_path, len = 16
"""
def test_class_RHEL6():
mp = multipath_v4_ll.MultipathDevices(context_wrap(MULTIPATH_V4_LL_INFO))
assert len(mp.devices) == 7
assert mp.devices[0] == {
"alias": "mpathg",
"wwid": "36f01faf000da360b0000033c528fea6d",
"dm_name": "dm-2",
"venprod": "DELL,MD36xxi",
"size": "54T",
"features": "3 queue_if_no_path pg_init_retries 50",
"hwhandler": "1 rdac",
"wp": "rw",
"path_group": [{
"policy": "round-robin 0",
"prio": "0",
"status": "active",
"path": [
['12:0:0:1', 'sdc', '8:32', 'active', 'ready', 'running'],
['11:0:0:1', 'sdi', '8:128', 'active', 'ready', 'running'],
['15:0:0:1', 'sdo', '8:224', 'active', 'ready', 'running'],
['17:0:0:1', 'sdv', '65:80', 'active', 'ready', 'running']
]
}, {
"policy": "round-robin 0",
"prio": "0",
"status": "enabled",
"path": [
['13:0:0:1', 'sdf', '8:80', 'active', 'ready', 'running'],
['14:0:0:1', 'sdl', '8:176', 'active', 'ready', 'running'],
['16:0:0:1', 'sdr', '65:16', 'active', 'ready', 'running'],
['18:0:0:1', 'sdx', '65:112', 'active', 'ready', 'running']
]
}]
}
assert mp.devices[0].get('size') == '54T'
assert mp.devices[1].get('path_group') == [{
"policy": "round-robin 0",
"prio": "1",
"status": "active",
"path": [
['12:0:0:2', 'sdc', '8:32', 'active', 'ready', 'running'],
['11:0:0:2', 'sdi', '8:128', 'active', 'ready', 'running'],
['15:0:0:2', 'sdo', '8:224', 'active', 'ready', 'running'],
['17:0:0:2', 'sdv', '65:80', 'active', 'ready', 'running']
]
}, {
"policy": "round-robin 0",
"prio": "1",
"status": "enabled",
"path": [
['13:0:0:2', 'sdf', '8:80', 'active', 'ready', 'running'],
['14:0:0:2', 'sdl', '8:176', 'active', 'ready', 'running'],
['16:0:0:2', 'sdr', '65:16', 'active', 'ready', 'running'],
['18:0:0:2', 'sdx', '65:112', 'active', 'ready', 'running']
]
}]
assert mp.devices[2].get('hwhandler') == "0"
assert mp.devices[3].get('alias') == "mpatha"
assert mp.devices[4].get('wwid') == "1IET 00080001"
assert mp.devices[5].get('venprod') == "COMPELNT,Compellent Vol"
assert mp.devices[5].get('dm_name') == "dm-8"
assert mp.devices[6].get('venprod') == "COMPELNT,Compellent Vol"
assert mp.devices[6].get('dm_name') == "dm-19"
# Note that there's no data for the made-up 'mpathc', since there's no
# path group information and only devices with path group information
# get saved.
assert mp.dms == ['dm-2', 'dm-4', 'dm-5', 'dm-0', 'dm-0', 'dm-8', 'dm-19']
assert mp.by_dm['dm-2'] == mp.devices[0]
assert mp.aliases == ['mpathg', 'mpathe', 'mpatha', 'mpathb']
assert mp.by_alias['mpathg'] == mp.devices[0]
assert mp.wwids == [
'36f01faf000da360b0000033c528fea6d',
'36f01faf000da3761000004323aa6fbce',
'36001405b1629f80d52a4c898f8856e43',
'1IET 00080001',
'1IET 00080001',
'1IET 00080002',
'1IET 00080007'
]
assert mp.by_wwid['1IET 00080001'] == mp.devices[4]
# Pseudo list accessors
assert len(mp) == 7
for i, item in enumerate(mp):
assert item == mp.devices[i]
assert item == mp[i]
assert len(mp.raw_info_lines) == 11
assert "===== paths list =====" in mp.raw_info_lines
def test_get_multipath_v4_ll():
multipath_v4_ll_list = multipath_v4_ll.get_multipath_v4_ll(context_wrap(MULTIPATH_V4_LL_INFO))
assert len(multipath_v4_ll_list) == 7
assert multipath_v4_ll_list[0] == {
"alias": "mpathg",
"wwid": "36f01faf000da360b0000033c528fea6d",
"dm_name": "dm-2",
"venprod": "DELL,MD36xxi",
"size": "54T",
"features": "3 queue_if_no_path pg_init_retries 50",
"hwhandler": "1 rdac",
"wp": "rw",
"path_group": [{
"policy": "round-robin 0",
"prio": "0",
"status": "active",
"path": [
['12:0:0:1', 'sdc', '8:32', 'active', 'ready', 'running'],
['11:0:0:1', 'sdi', '8:128', 'active', 'ready', 'running'],
['15:0:0:1', 'sdo', '8:224', 'active', 'ready', 'running'],
['17:0:0:1', 'sdv', '65:80', 'active', 'ready', 'running']
]
}, {
"policy": "round-robin 0",
"prio": "0",
"status": "enabled",
"path": [
['13:0:0:1', 'sdf', '8:80', 'active', 'ready', 'running'],
['14:0:0:1', 'sdl', '8:176', 'active', 'ready', 'running'],
['16:0:0:1', 'sdr', '65:16', 'active', 'ready', 'running'],
['18:0:0:1', 'sdx', '65:112', 'active', 'ready', 'running']
]
}]
}
assert multipath_v4_ll_list[0].get('size') == '54T'
assert multipath_v4_ll_list[1].get('path_group') == [{
"policy": "round-robin 0",
"prio": "1",
"status": "active",
"path": [
['12:0:0:2', 'sdc', '8:32', 'active', 'ready', 'running'],
['11:0:0:2', 'sdi', '8:128', 'active', 'ready', 'running'],
['15:0:0:2', 'sdo', '8:224', 'active', 'ready', 'running'],
['17:0:0:2', 'sdv', '65:80', 'active', 'ready', 'running']
]
}, {
"policy": "round-robin 0",
"prio": "1",
"status": "enabled",
"path": [
['13:0:0:2', 'sdf', '8:80', 'active', 'ready', 'running'],
['14:0:0:2', 'sdl', '8:176', 'active', 'ready', 'running'],
['16:0:0:2', 'sdr', '65:16', 'active', 'ready', 'running'],
['18:0:0:2', 'sdx', '65:112', 'active', 'ready', 'running']
]
}]
assert multipath_v4_ll_list[2].get('hwhandler') == "0"
assert multipath_v4_ll_list[3].get('alias') == "mpatha"
assert multipath_v4_ll_list[4].get('wwid') == "1IET 00080001"
assert multipath_v4_ll_list[5].get('venprod') == "COMPELNT,Compellent Vol"
assert multipath_v4_ll_list[5].get('dm_name') == "dm-8"
assert multipath_v4_ll_list[6].get('venprod') == "COMPELNT,Compellent Vol"
assert multipath_v4_ll_list[6].get('dm_name') == "dm-19"
# Note that there's no data for the made-up 'mpathc', since there's no
# path group information and only devices with path group information
# get saved.
def test_get_multipath_v4_ll_RHEL_5():
"""
Test alternate device line prefixes, and ignoring extra clutter in input.
"""
multipath_v4_ll_list = multipath_v4_ll.get_multipath_v4_ll(context_wrap(MULTIPATH_V4_LL_INFO_RHEL_5))
assert len(multipath_v4_ll_list) == 1
"""
L004 (360060160ade32800f2e3baf47665e211) dm-9 DGC,RAID 5
[size=100G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:4 sdk 8:160 [active][ready]
\_ 5:0:0:4 sdp 8:240 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 3:0:0:4 sdf 8:80 [active][ready]
\_ 5:0:1:4 sdu 65:64 [active][ready]
"""
path_dev = multipath_v4_ll_list[0]
assert path_dev['alias'] == 'L004'
assert path_dev['wwid'] == '360060160ade32800f2e3baf47665e211'
assert path_dev['dm_name'] == 'dm-9'
assert path_dev['venprod'] == 'DGC,RAID 5'
assert path_dev['size'] == '100G'
assert path_dev['features'] == '1 queue_if_no_path'
assert path_dev['hwhandler'] == '1 emc'
assert path_dev['wp'] == 'rw'
assert path_dev['path_group'][0]['policy'] == 'round-robin 0'
assert path_dev['path_group'][0]['prio'] == '1'
assert path_dev['path_group'][0]['status'] == 'active'
assert len(path_dev['path_group'][0]['path']) == 2
paths = path_dev['path_group'][0]['path']
assert len(paths) == 2
assert paths[0] == ['3:0:1:4', 'sdk', '8:160', 'active', 'ready']
MULTIPATH_V4_LL_DOC = """
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
0:0:0:0 sda 8:0 -1 undef ready VMware,Virtual disk running
3:0:0:1 sdb 8:16 -1 undef ready IET,VIRTUAL-DISK running
4:0:0:1 sdc 8:32 -1 undef ready IET,VIRTUAL-DISK running
Oct 28 14:02:44 | *word = 0, len = 1
Oct 28 14:02:44 | *word = E, len = 1
Oct 28 14:02:44 | *word = 1, len = 1
Oct 28 14:02:44 | *word = 0, len = 1
Oct 28 14:02:44 | *word = A, len = 1
Oct 28 14:02:44 | *word = 0, len = 1
mpathg (36f01faf000da360b0000033c528fea6d) dm-2 DELL,MD36xxi
size=54T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 12:0:0:1 sdc 8:32 active ready running
| |- 11:0:0:1 sdi 8:128 active ready running
| |- 15:0:0:1 sdo 8:224 active ready running
| `- 17:0:0:1 sdv 65:80 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 13:0:0:1 sdf 8:80 active ready running
|- 14:0:0:1 sdl 8:176 active ready running
|- 16:0:0:1 sdr 65:16 active ready running
`- 18:0:0:1 sdx 65:112 active ready running
mpathe (36f01faf000da3761000004323aa6fbce) dm-4 DELL,MD36xxi
size=54T features='3 queue_if_no_path pg_init_retries 55' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 13:0:0:2 sdg 8:96 active faulty running
| |- 14:0:0:2 sdm 8:192 active faulty running
| |- 16:0:0:2 sds 65:32 active faulty running
| `- 18:0:0:2 sdy 65:128 active faulty running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 12:0:0:2 sdd 8:48 active faulty running
|- 11:0:0:2 sdj 8:144 active faulty running
|- 15:0:0:2 sdp 8:240 active faulty running
`- 17:0:0:2 sdw 65:96 active faulty running
36001405b1629f80d52a4c898f8856e43 dm-5 LIO-ORG ,block0_sdb
size=2.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 3:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 4:0:0:0 sdb 8:16 active ready running
"""
def test_doc_examples():
env = {
'MultipathDevices': multipath_v4_ll.MultipathDevices,
'mpaths': multipath_v4_ll.MultipathDevices(context_wrap(MULTIPATH_V4_LL_DOC)),
}
failed, total = doctest.testmod(multipath_v4_ll, globs=env)
assert failed == 0
| 43.105793 | 116 | 0.591188 | 2,897 | 17,113 | 3.412151 | 0.090784 | 0.02347 | 0.123824 | 0.045321 | 0.807284 | 0.768943 | 0.731715 | 0.704299 | 0.689327 | 0.603541 | 0 | 0.161491 | 0.230351 | 17,113 | 396 | 117 | 43.214646 | 0.589021 | 0.022907 | 0 | 0.51532 | 0 | 0.114206 | 0.677723 | 0.031273 | 0 | 0 | 0.001102 | 0 | 0.13649 | 1 | 0.011142 | false | 0 | 0.008357 | 0 | 0.019499 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
733c5c6ca77da668bc1c3e90e95aa23118657e58 | 25 | py | Python | crabs/api_caller/__init__.py | jonathanshuai/crabs | 2177f9d829a35a670619e1141d10a0442df25aa8 | [
"BSD-3-Clause"
] | null | null | null | crabs/api_caller/__init__.py | jonathanshuai/crabs | 2177f9d829a35a670619e1141d10a0442df25aa8 | [
"BSD-3-Clause"
] | 6 | 2021-03-18T20:50:52.000Z | 2022-03-11T23:28:02.000Z | crabs/api_caller/__init__.py | jonathanshuai/crabs | 2177f9d829a35a670619e1141d10a0442df25aa8 | [
"BSD-3-Clause"
] | null | null | null | from .crabcaller import * | 25 | 25 | 0.8 | 3 | 25 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
734216486f12fde78fd5abd1f947568c07390014 | 13,843 | py | Python | bot.py | Shikib/reddit_chat | db3effbd8b74835d6ee824d90884f47c2efcf589 | [
"MIT"
] | null | null | null | bot.py | Shikib/reddit_chat | db3effbd8b74835d6ee824d90884f47c2efcf589 | [
"MIT"
] | null | null | null | bot.py | Shikib/reddit_chat | db3effbd8b74835d6ee824d90884f47c2efcf589 | [
"MIT"
] | null | null | null | import string
import json
import random
from nltk import word_tokenize
from nltk import pos_tag
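# Markov-style chat bot: builds forward and reverse n-gram graphs over scored Reddit
# comments (a word-level "style" graph and a POS-level "punctuation" graph), then
# grows a reply outward from the prompt in both directions, weighting longer
# matching contexts more heavily.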
class ContextAwareMarkovBot():
def __init__(self,
ngram_len=5,
size_reweight=True,
punctuation_dataset=None,
style_dataset=None,
subreddit=None):
self.ngram_len = ngram_len
self.size_reweight = int(size_reweight)
self.punctuation_dataset = punctuation_dataset
self.style_dataset = style_dataset
self.subreddit = subreddit
def generate_message(self, prompt):
def _add_to_back(chain):
potential_next_words = {}
for depth in range(1, min(self.ngram_len, len(chain))+1):
try:
gram = chain[-depth:]
except:
break
# Turn the gram into a hashable tuple to read from the style graph
words = " ".join([t[0] for t in gram])
tags = " ".join([t[1] for t in gram])
gram_tuple = (words,tags)
# If this chain of words has never occurred before, continue
if gram_tuple not in self.style_graph:
continue
# Potential next words. Take the top twenty.
all_word_scores = self.style_graph[gram_tuple]
all_words = all_word_scores.keys()
top_words = \
sorted(all_words, key=lambda w: -all_word_scores[w])[:20]
word_scores = {word: all_word_scores[word] for word in top_words}
# Use the part of speech tag information to determine the next POS
if tags not in self.punctuation_graph:
continue
all_pos_scores = self.punctuation_graph[tags]
all_pos = all_pos_scores.keys()
top_pos = sorted(all_pos, key=lambda p: -all_pos_scores[p])
pos_scores = {pos: all_pos_scores[pos] for pos in top_pos}
word_scores = \
{word: 1.0*(word_scores[word]+pos_scores[word[1]])/2
for word in top_words if word[1] in pos_scores}
# Update master word list
for word,score in word_scores.items():
if word not in potential_next_words:
potential_next_words[word] = 0
          # size_reweight toggles the context-length bonus: depth ** 2 when
          # enabled, depth ** 0 == 1 when disabled.
          potential_next_words[word] = \
            potential_next_words[word] + \
            score * (depth ** (2 * self.size_reweight))
# Only consider the top 50 words
all_words = potential_next_words.keys()
top_words = \
sorted(all_words, key=lambda w: -potential_next_words[w])[:50]
potential_next_words = \
{word: potential_next_words[word] for word in top_words}
# Choose the next word proportional to its score
choice = random.random()
scores_sum = sum(potential_next_words.values())
if len(potential_next_words.keys()) == 0:
return None,0
for word,score in potential_next_words.items():
choice -= score*1.0/scores_sum
if word[1] == '.' or choice <= 0:
return word,score
def _add_to_front(chain):
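      # Mirror image of _add_to_back: consults the reverse-direction graphs
      # (rstyle_graph / rpunctuation_graph) so the reply can also grow to the
      # left of the prompt.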
potential_next_words = {}
for depth in range(1, min(self.ngram_len, len(chain))+1):
gram = chain[-depth:]
# Turn the gram into a hashable tuple to read from the style graph
words = " ".join([t[0] for t in gram])
tags = " ".join([t[1] for t in gram])
gram_tuple = (words,tags)
# If this chain of words has never occurred before, continue
if gram_tuple not in self.rstyle_graph:
continue
# Potential next words. Take the top twenty.
all_word_scores = self.rstyle_graph[gram_tuple]
all_words = all_word_scores.keys()
top_words = \
sorted(all_words, key=lambda w: -all_word_scores[w])[:20]
word_scores = {word: all_word_scores[word] for word in top_words}
# Use the part of speech tag information to determine the next POS
if tags not in self.rpunctuation_graph:
continue
all_pos_scores = self.rpunctuation_graph[tags]
all_pos = all_pos_scores.keys()
top_pos = sorted(all_pos, key=lambda p: -all_pos_scores[p])
pos_scores = {pos: all_pos_scores[pos] for pos in top_pos}
word_scores = \
{word: 1.0*(word_scores[word]+pos_scores[word[1]])/2
for word in top_words if word[1] in pos_scores}
# Update master word list
for word,score in word_scores.items():
if word not in potential_next_words:
potential_next_words[word] = 0
          # Same parenthesized exponent as in _add_to_back.
          potential_next_words[word] = \
            potential_next_words[word] + \
            score * (depth ** (2 * self.size_reweight))
# Only consider the top 50 words
all_words = potential_next_words.keys()
top_words = \
sorted(all_words, key=lambda w: -potential_next_words[w])[:50]
potential_next_words = \
{word: potential_next_words[word] for word in top_words}
# Choose the next word proportional to its score
choice = random.random()
scores_sum = sum(potential_next_words.values())
if len(potential_next_words.keys()) == 0:
return None,0
for word,score in potential_next_words.items():
choice -= score*1.0/scores_sum
if word[1] == '.' or choice <= 0:
return word,score
prompt = prompt.lower()
prompt = \
"".join([ch for ch in prompt if ch not in "'"])
words = word_tokenize(prompt)
chain = pos_tag(words)
while len(chain) < 30:
back_word,bscore = _add_to_back(chain)
front_word,fscore = _add_to_front(chain[::-1])
      if back_word is None and front_word is None:
        # Neither graph offered a continuation; stop growing the chain.
        break
      if back_word is not None and bscore > fscore and chain[-1][1] != '.':
        chain.append(back_word)
      elif front_word is not None and chain[0][1] != '.':
        chain = [front_word] + chain
      else:
        break
return " ".join([word[0] for word in chain])
def train_punctuation(self):
# Initialize POS graph
self.punctuation_graph = {}
def _add_message_to_punctuation(message):
score = message[1]
message = message[0]
# Remove contractions and potentially other characters
message = \
"".join([ch for ch in message if ch not in "'"])
words = word_tokenize(message)
tagged_words = pos_tag(words)
for gram_len in range(1, self.ngram_len+1):
# The minus one is to ensure that we always have a word
# right after the gram
for i in range(len(tagged_words)-gram_len-1):
gram = tagged_words[i:i+gram_len]
# Turn the gram into a hashable string.
tags = " ".join([t[1] for t in gram])
# Identify the type of the word that comes after the gram
next_word = tagged_words[i+gram_len][1]
if tags not in self.punctuation_graph:
self.punctuation_graph[tags] = {}
if next_word not in self.punctuation_graph[tags]:
self.punctuation_graph[tags][next_word] = 0
self.punctuation_graph[tags][next_word] += score
# Need to turn the text into the right format
messages = self.extract_messages(self.punctuation_dataset)
for message in messages:
_add_message_to_punctuation(message)
def reverse_train_punctuation(self):
# Initialize POS graph
self.rpunctuation_graph = {}
def _add_message_to_punctuation(message):
score = message[1]
message = message[0]
# Remove contractions and potentially other characters
message = \
"".join([ch for ch in message if ch not in "'"])
words = ['.'] + word_tokenize(message)[::-1]
tagged_words = pos_tag(words)
for gram_len in range(1, self.ngram_len+1):
# The minus one is to ensure that we always have a word
# right after the gram
for i in range(len(tagged_words)-gram_len-1):
gram = tagged_words[i:i+gram_len]
# Turn the gram into a hashable string.
tags = " ".join([t[1] for t in gram])
# Identify the type of the word that comes after the gram
next_word = tagged_words[i+gram_len][1]
if tags not in self.rpunctuation_graph:
self.rpunctuation_graph[tags] = {}
if next_word not in self.rpunctuation_graph[tags]:
self.rpunctuation_graph[tags][next_word] = 0
self.rpunctuation_graph[tags][next_word] += score
# Need to turn the text into the right format
messages = self.extract_messages(self.punctuation_dataset)
for message in messages:
_add_message_to_punctuation(message)
def train_style(self):
# Initialize POS graph
self.style_graph = {}
def _add_message_to_style(message):
score = message[1]
message = message[0]
# Remove contractions and potentially other characters
message = \
"".join([ch for ch in message if ch not in "'"])
words = word_tokenize(message)
tagged_words = pos_tag(words)
for gram_len in range(1, self.ngram_len):
# The minus one is to ensure that we always have a word
# right after the gram
for i in range(len(tagged_words)-gram_len-1):
gram = tagged_words[i:i+gram_len]
# Turn the gram into a hashable tuple.
words = " ".join([t[0] for t in gram])
tags = " ".join([t[1] for t in gram])
gram_tuple = (words,tags)
# Identify the the word that comes after the gram
next_word = tagged_words[i+gram_len]
if gram_tuple not in self.style_graph:
self.style_graph[gram_tuple] = {}
if next_word not in self.style_graph[gram_tuple]:
self.style_graph[gram_tuple][next_word] = 0
self.style_graph[gram_tuple][next_word] += score
# Need to turn the text into the right format
messages = self.extract_messages(self.style_dataset)
for message in messages:
_add_message_to_style(message)
def reverse_train_style(self):
# Initialize POS graph
self.rstyle_graph = {}
def _add_message_to_style(message):
score = message[1]
message = message[0]
# Remove contractions and potentially other characters
message = \
"".join([ch for ch in message if ch not in "'"])
words = ['.'] + word_tokenize(message)[::-1]
tagged_words = pos_tag(words)
for gram_len in range(1, self.ngram_len):
# The minus one is to ensure that we always have a word
# right after the gram
for i in range(len(tagged_words)-gram_len-1):
gram = tagged_words[i:i+gram_len]
# Turn the gram into a hashable tuple.
words = " ".join([t[0] for t in gram])
tags = " ".join([t[1] for t in gram])
gram_tuple = (words,tags)
# Identify the the word that comes after the gram
next_word = tagged_words[i+gram_len]
if gram_tuple not in self.rstyle_graph:
self.rstyle_graph[gram_tuple] = {}
if next_word not in self.rstyle_graph[gram_tuple]:
self.rstyle_graph[gram_tuple][next_word] = 0
self.rstyle_graph[gram_tuple][next_word] += score
# Need to turn the text into the right format
messages = self.extract_messages(self.style_dataset)
for message in messages:
_add_message_to_style(message)
def extract_messages(self, filename):
messages = []
with open(filename) as f:
for i in range(10000):
try:
          message = next(f)  # Python 3 file objects have no .next() method
        except StopIteration:
break
message = json.loads(message)
messages.append((message['body'].lower(), message['score']))
return messages
if __name__ == '__main__':
cmb = ContextAwareMarkovBot(ngram_len=10,
punctuation_dataset='AskReddit',
style_dataset='AskReddit',
subreddit='AskReddit')
cmb.train_style()
cmb.train_punctuation()
cmb.reverse_train_style()
cmb.reverse_train_punctuation()
  print(cmb.generate_message("I wonder"))
| 37.926027 | 82 | 0.537311 | 1,642 | 13,843 | 4.32095 | 0.098051 | 0.047639 | 0.065962 | 0.014094 | 0.834249 | 0.812403 | 0.791825 | 0.745032 | 0.732629 | 0.723326 | 0 | 0.011109 | 0.382215 | 13,843 | 364 | 83 | 38.03022 | 0.818522 | 0.131836 | 0 | 0.651064 | 0 | 0 | 0.00618 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055319 | false | 0 | 0.025532 | 0 | 0.110638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b402b6c30bd40cf69e25b007c5d9719fdf7d4842 | 183 | py | Python | apps/employee/views/its_alive.py | victtorvpb/employee-manager | fa0ea41a80e38378feef1fcbc071d9615f1d5b54 | [
"Apache-2.0"
] | null | null | null | apps/employee/views/its_alive.py | victtorvpb/employee-manager | fa0ea41a80e38378feef1fcbc071d9615f1d5b54 | [
"Apache-2.0"
] | 15 | 2019-08-18T17:20:23.000Z | 2021-06-09T18:15:40.000Z | apps/employee/views/its_alive.py | victtorvpb/employee-manager | fa0ea41a80e38378feef1fcbc071d9615f1d5b54 | [
"Apache-2.0"
] | 1 | 2019-08-20T00:47:04.000Z | 2019-08-20T00:47:04.000Z | from rest_framework.decorators import api_view
from rest_framework.response import Response
@api_view(['GET'])
def its_alive(request):
return Response({'message': 'its_alive'})
| 22.875 | 46 | 0.775956 | 25 | 183 | 5.44 | 0.6 | 0.117647 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10929 | 183 | 7 | 47 | 26.142857 | 0.834356 | 0 | 0 | 0 | 0 | 0 | 0.103825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b40b1558a9aa57bad311c7e28bca41b05079ec96 | 27,782 | py | Python | azure-mgmt-batchai/tests/test_mgmt_batchai_jobs.py | Christina-Kang/azure-sdk-for-python | bbf982eb06aab04b8151f69f1d230b7f5fb96ebf | [
"MIT"
] | 1 | 2022-03-30T22:39:15.000Z | 2022-03-30T22:39:15.000Z | azure-mgmt-batchai/tests/test_mgmt_batchai_jobs.py | Christina-Kang/azure-sdk-for-python | bbf982eb06aab04b8151f69f1d230b7f5fb96ebf | [
"MIT"
] | 54 | 2016-03-25T17:25:01.000Z | 2018-10-22T17:27:54.000Z | azure-mgmt-batchai/tests/test_mgmt_batchai_jobs.py | Christina-Kang/azure-sdk-for-python | bbf982eb06aab04b8151f69f1d230b7f5fb96ebf | [
"MIT"
] | 2 | 2017-01-20T18:25:46.000Z | 2017-05-12T21:31:47.000Z | # coding: utf-8
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
# pylint: disable=line-too-long
import re
from azure.storage.blob import BlockBlobService
from azure.storage.file import FileService
from devtools_testutils import AzureMgmtTestCase, StorageAccountPreparer
from devtools_testutils import ResourceGroupPreparer
from msrestazure.azure_exceptions import CloudError
import azure.mgmt.batchai.models as models
from azure.mgmt.batchai import BatchAIManagementClient
from . import helpers
class JobTestCase(AzureMgmtTestCase):
def setUp(self):
super(JobTestCase, self).setUp()
self.client = helpers.create_batchai_client(self) # type: BatchAIManagementClient
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_job_creation_and_deletion(self, resource_group, location, cluster, storage_account, storage_account_key):
"""Tests simple scenario for a job - submit, check results, delete."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 1,
'echo hi | tee {0}/hi.txt'.format(helpers.JOB_OUTPUT_DIRECTORY_PATH_ENV),
container=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='ubuntu'))
) # type: models.Job
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name, helpers.MINUTE),
models.ExecutionState.succeeded)
# Check standard job output
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'hi\n', u'stderr.txt': u''})
# Check job's output
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.JOB_OUTPUT_DIRECTORY_ID,
{u'hi.txt': u'hi\n'})
# Check that we can access the output files directly in storage using path segment returned by the server
helpers.assert_file_in_file_share(self, storage_account.name, storage_account_key,
job.job_output_directory_path_segment + '/' + helpers.STDOUTERR_FOLDER_NAME,
'stdout.txt', u'hi\n')
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_running_job_deletion(self, resource_group, location, cluster):
"""Tests deletion of a running job."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 1,
'sleep 600')
self.assertEqual(
helpers.wait_for_job_start_running(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.running)
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_running_job_termination(self, resource_group, location, cluster):
"""Tests termination of a running job."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'longrunning', 1,
'sleep 600')
self.assertEqual(
helpers.wait_for_job_start_running(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.running)
self.client.jobs.terminate(resource_group.name, job.name).result()
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name, helpers.MINUTE),
models.ExecutionState.failed)
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer(target_nodes=0, wait=False)
def test_queued_job_termination(self, resource_group, location, cluster):
"""Tests termination of a job in queued state."""
# Create a job which will be in queued state because the cluster has no compute nodes.
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 1, 'true')
self.client.jobs.terminate(resource_group.name, job.name).result()
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name, helpers.MINUTE),
models.ExecutionState.failed)
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_completed_job_termination(self, resource_group, location, cluster):
"""Tests termination of completed job."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 1, 'true')
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name, helpers.MINUTE),
models.ExecutionState.succeeded)
        # Termination of a completed job is a no-op and must not change the execution state.
self.client.jobs.terminate(resource_group.name, job.name).result()
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name, helpers.MINUTE),
models.ExecutionState.succeeded)
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_failed_job_reporting(self, resource_group, location, cluster):
"""Tests if job failure is reported correctly."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 1,
'false')
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.failed)
job = self.client.jobs.get(resource_group.name, job.name)
self.assertEqual(job.execution_info.exit_code, 1)
self.assertEqual(len(job.execution_info.errors), 1)
self.assertEqual(job.execution_info.errors[0].code, 'JobFailed')
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_job_preparation_host(self, resource_group, location, cluster):
"""Tests job preparation execution for a job running on a host."""
# create a job with job preparation which populates input data in $AZ_BATCHAI_INPUT_INPUT/hi.txt
job = helpers.create_custom_job(
self.client, resource_group.name, location, cluster.id, 'job', 1,
'cat $AZ_BATCHAI_INPUT_INPUT/hi.txt',
'mkdir -p $AZ_BATCHAI_INPUT_INPUT && echo hello | tee $AZ_BATCHAI_INPUT_INPUT/hi.txt')
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.succeeded)
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'hello\n',
u'stderr.txt': u'',
u'stdout-job_prep.txt': u'hello\n',
u'stderr-job_prep.txt': u''})
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_job_preparation_container(self, resource_group, location, cluster):
"""Tests job preparation execution for a job running in a container."""
# create a job with job preparation which populates input data in $AZ_BATCHAI_INPUT_INPUT/hi.txt
job = helpers.create_custom_job(
self.client, resource_group.name, location, cluster.id, 'job', 1,
'cat $AZ_BATCHAI_INPUT_INPUT/hi.txt',
'mkdir -p $AZ_BATCHAI_INPUT_INPUT && echo hello | tee $AZ_BATCHAI_INPUT_INPUT/hi.txt',
container=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='ubuntu')))
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.succeeded)
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'hello\n',
u'stderr.txt': u'',
u'stdout-job_prep.txt': u'hello\n',
u'stderr-job_prep.txt': u''})
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_job_host_preparation_failure_reporting(self, resource_group, location, cluster):
"""Tests if job preparation failure is reported correctly."""
# create a job with failing job preparation
job = helpers.create_custom_job(
self.client, resource_group.name, location, cluster.id, 'job', 1, 'true', 'false')
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.failed)
job = self.client.jobs.get(resource_group.name, job.name)
self.assertEqual(job.execution_info.exit_code, 1)
self.assertEqual(len(job.execution_info.errors), 1)
self.assertEqual(job.execution_info.errors[0].code, 'JobPreparationFailed')
print(job.serialize())
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer()
def test_job_container_preparation_failure_reporting(self, resource_group, location, cluster):
"""Tests if job preparation failure is reported correctly."""
# create a job with failing job preparation
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 1, 'true',
'false',
container=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='ubuntu')))
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.failed)
job = self.client.jobs.get(resource_group.name, job.name)
self.assertEqual(job.execution_info.exit_code, 1)
self.assertEqual(len(job.execution_info.errors), 1)
self.assertEqual(job.execution_info.errors[0].code, 'JobPreparationFailed')
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer(target_nodes=2)
def test_password_less_ssh(self, resource_group, location, cluster):
"""Tests if password-less ssh is configured on hosts."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 2,
'ssh 10.0.0.4 echo done && ssh 10.0.0.5 echo done')
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.succeeded)
job = self.client.jobs.get(resource_group.name, job.name)
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'done\ndone\n',
u'stderr.txt': re.compile('Permanently added.*')})
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer(target_nodes=2)
def test_password_less_ssh_in_container(self, resource_group, location, cluster):
"""Tests if password-less ssh is configured in containers."""
job = helpers.create_custom_job(self.client, resource_group.name, location, cluster.id, 'job', 2,
'ssh 10.0.0.5 echo done && ssh 10.0.0.5 echo done',
container=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='ubuntu')))
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.succeeded)
job = self.client.jobs.get(resource_group.name, job.name)
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'done\ndone\n',
u'stderr.txt': re.compile('Permanently added.*')})
self.client.jobs.delete(resource_group.name, job.name).result()
self.assertRaises(CloudError, lambda: self.client.jobs.get(resource_group.name, job.name))
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer(target_nodes=1)
def test_job_level_mounting(self, resource_group, location, cluster, storage_account, storage_account_key):
"""Tests if it's possible to mount external file systems for a job."""
job_name = 'job'
# Create file share and container to mount on the job level
if storage_account.name != helpers.FAKE_STORAGE.name:
files = FileService(storage_account.name, storage_account_key)
files.create_share('jobshare', fail_on_exist=False)
blobs = BlockBlobService(storage_account.name, storage_account_key)
blobs.create_container('jobcontainer', fail_on_exist=False)
job = self.client.jobs.create(
resource_group.name,
job_name,
parameters=models.JobCreateParameters(
location=location,
cluster=models.ResourceId(id=cluster.id),
node_count=1,
mount_volumes=models.MountVolumes(
azure_file_shares=[
models.AzureFileShareReference(
account_name=storage_account.name,
azure_file_url='https://{0}.file.core.windows.net/{1}'.format(
storage_account.name, 'jobshare'),
relative_mount_path='job_afs',
credentials=models.AzureStorageCredentialsInfo(
account_key=storage_account_key
),
)
],
azure_blob_file_systems=[
models.AzureBlobFileSystemReference(
account_name=storage_account.name,
container_name='jobcontainer',
relative_mount_path='job_bfs',
credentials=models.AzureStorageCredentialsInfo(
account_key=storage_account_key
),
)
]
),
# Put standard output on cluster level AFS to check that the job has access to it.
std_out_err_path_prefix='$AZ_BATCHAI_MOUNT_ROOT/{0}'.format(helpers.AZURE_FILES_MOUNTING_PATH),
# Create two output directories on job level AFS and blobfuse.
output_directories=[
models.OutputDirectory(id='OUTPUT1', path_prefix='$AZ_BATCHAI_JOB_MOUNT_ROOT/job_afs'),
models.OutputDirectory(id='OUTPUT2', path_prefix='$AZ_BATCHAI_JOB_MOUNT_ROOT/job_bfs')
],
# Check that the job preparation has access to job level file systems.
job_preparation=models.JobPreparation(
command_line='echo afs > $AZ_BATCHAI_OUTPUT_OUTPUT1/prep_afs.txt; '
'echo bfs > $AZ_BATCHAI_OUTPUT_OUTPUT2/prep_bfs.txt; '
'echo done'
),
                # Check that the job has access to job level file systems.
custom_toolkit_settings=models.CustomToolkitSettings(
command_line='echo afs > $AZ_BATCHAI_OUTPUT_OUTPUT1/job_afs.txt; '
'echo bfs > $AZ_BATCHAI_OUTPUT_OUTPUT2/job_bfs.txt; '
'mkdir $AZ_BATCHAI_OUTPUT_OUTPUT1/afs; '
'echo afs > $AZ_BATCHAI_OUTPUT_OUTPUT1/afs/job_afs.txt; '
'mkdir $AZ_BATCHAI_OUTPUT_OUTPUT2/bfs; '
'echo bfs > $AZ_BATCHAI_OUTPUT_OUTPUT2/bfs/job_bfs.txt; '
'echo done'
)
)
).result()
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.succeeded)
job = self.client.jobs.get(resource_group.name, job.name)
# Assert job and job prep standard output is populated on cluster level filesystem
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'done\n', u'stderr.txt': u'',
u'stdout-job_prep.txt': u'done\n', u'stderr-job_prep.txt': u''})
# Assert files are generated on job level AFS
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name, 'OUTPUT1',
{u'job_afs.txt': u'afs\n', u'prep_afs.txt': u'afs\n', u'afs': None})
# Assert files are generated on job level blobfuse
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name, 'OUTPUT2',
{u'job_bfs.txt': u'bfs\n', u'prep_bfs.txt': u'bfs\n', u'bfs': None})
# Assert subfolders are available via API
helpers.assert_job_files_in_path_are(self, self.client, resource_group.name, job.name, 'OUTPUT1',
'afs', {u'job_afs.txt': u'afs\n'})
helpers.assert_job_files_in_path_are(self, self.client, resource_group.name, job.name, 'OUTPUT2',
'bfs', {u'job_bfs.txt': u'bfs\n'})
# Assert that we can access the output files created on job level mount volumes directly in storage using path
# segment returned by the server.
if storage_account.name != helpers.FAKE_STORAGE.name:
files = FileService(storage_account.name, storage_account_key)
self.assertTrue(
files.exists('jobshare', job.job_output_directory_path_segment +
'/' + helpers.OUTPUT_DIRECTORIES_FOLDER_NAME, 'job_afs.txt'))
blobs = BlockBlobService(storage_account.name, storage_account_key)
self.assertTrue(
blobs.exists('jobcontainer', job.job_output_directory_path_segment +
'/' + helpers.OUTPUT_DIRECTORIES_FOLDER_NAME + '/job_bfs.txt'))
# After the job is done the filesystems should be unmounted automatically, check this by submitting a new job.
checker = self.client.jobs.create(
resource_group.name,
'checker',
parameters=models.JobCreateParameters(
location=location,
cluster=models.ResourceId(id=cluster.id),
node_count=1,
std_out_err_path_prefix='$AZ_BATCHAI_MOUNT_ROOT/{0}'.format(helpers.AZURE_FILES_MOUNTING_PATH),
custom_toolkit_settings=models.CustomToolkitSettings(
command_line='echo job; df | grep -E "job_bfs|job_afs"'
)
)
).result()
        # Check that the job failed because the job level mount volumes are no longer present.
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, checker.name,
helpers.MINUTE),
models.ExecutionState.failed)
# Check that the cluster level AFS was still mounted
helpers.assert_job_files_are(self, self.client, resource_group.name, checker.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'job\n', u'stderr.txt': u''})
@ResourceGroupPreparer(location=helpers.LOCATION)
@StorageAccountPreparer(name_prefix='psdk', location=helpers.LOCATION, playback_fake_resource=helpers.FAKE_STORAGE)
@helpers.ClusterPreparer(target_nodes=1)
def test_job_environment_variables_and_secrets(self, resource_group, location, cluster):
"""Tests if it's possible to mount external file systems for a job."""
job_name = 'job'
job = self.client.jobs.create(
resource_group.name,
job_name,
parameters=models.JobCreateParameters(
location=location,
cluster=models.ResourceId(id=cluster.id),
node_count=1,
std_out_err_path_prefix='$AZ_BATCHAI_MOUNT_ROOT/{0}'.format(helpers.AZURE_FILES_MOUNTING_PATH),
environment_variables=[
models.EnvironmentVariable(name='VARIABLE', value='VALUE')
],
secrets=[
models.EnvironmentVariableWithSecretValue(name='SECRET_VARIABLE', value='SECRET')
],
# Check that the job preparation has access to env variables and secrets.
job_preparation=models.JobPreparation(
command_line='echo $VARIABLE $SECRET_VARIABLE'
),
# Check that the job has access to env variables and secrets.
custom_toolkit_settings=models.CustomToolkitSettings(
command_line='echo $VARIABLE $SECRET_VARIABLE'
)
)
).result() # type: models.Job
self.assertEqual(
helpers.wait_for_job_completion(self.is_live, self.client, resource_group.name, job.name,
helpers.MINUTE),
models.ExecutionState.succeeded)
# Check that environment variables are reported by the server.
self.assertEqual(len(job.environment_variables), 1)
self.assertEqual(job.environment_variables[0].name, 'VARIABLE')
self.assertEqual(job.environment_variables[0].value, 'VALUE')
        # Check that secrets are reported back by the server, but the value is not reported.
self.assertEqual(len(job.secrets), 1)
self.assertEqual(job.secrets[0].name, 'SECRET_VARIABLE')
self.assertIsNone(job.secrets[0].value)
# Check that job and job prep had access to the env variables and secrets.
helpers.assert_job_files_are(self, self.client, resource_group.name, job.name,
helpers.STANDARD_OUTPUT_DIRECTORY_ID,
{u'stdout.txt': u'VALUE SECRET\n', u'stderr.txt': u'',
u'stdout-job_prep.txt': u'VALUE SECRET\n', u'stderr-job_prep.txt': u''})
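# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the test suite): the minimal submission
# pattern these tests exercise. Assumes `client` is an authenticated
# BatchAIManagementClient and that `resource_group`, `location` and
# `cluster_id` already exist; the names used here are hypothetical.
#
#     job = client.jobs.create(
#         resource_group,
#         'example-job',
#         parameters=models.JobCreateParameters(
#             location=location,
#             cluster=models.ResourceId(id=cluster_id),
#             node_count=1,
#             std_out_err_path_prefix='$AZ_BATCHAI_MOUNT_ROOT/afs',
#             custom_toolkit_settings=models.CustomToolkitSettings(
#                 command_line='echo hello'))).result()
# ---------------------------------------------------------------------------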
| 61.737778 | 119 | 0.626485 | 3,059 | 27,782 | 5.487414 | 0.097091 | 0.069701 | 0.076969 | 0.07268 | 0.831586 | 0.821518 | 0.801859 | 0.767544 | 0.721494 | 0.715596 | 0 | 0.004068 | 0.274458 | 27,782 | 449 | 120 | 61.875278 | 0.828695 | 0.106688 | 0 | 0.692308 | 0 | 0.005495 | 0.083893 | 0.02501 | 0 | 0 | 0 | 0 | 0.162088 | 1 | 0.041209 | false | 0.005495 | 0.024725 | 0 | 0.068681 | 0.002747 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b4242c170c7c228d8ac1f21e05664f506fce7d75 | 4,395 | py | Python | python/data-visualization/bar-plot/bar-weights/bar.py | lijiansong/lang | e255709da2b12e09dea45f86d54f77a19b96f13b | [
"WTFPL"
] | 1 | 2020-01-09T03:22:09.000Z | 2020-01-09T03:22:09.000Z | python/data-visualization/bar-plot/bar-weights/bar.py | lijiansong/lang | e255709da2b12e09dea45f86d54f77a19b96f13b | [
"WTFPL"
] | null | null | null | python/data-visualization/bar-plot/bar-weights/bar.py | lijiansong/lang | e255709da2b12e09dea45f86d54f77a19b96f13b | [
"WTFPL"
] | null | null | null | import matplotlib
import matplotlib.pyplot as plt
import numpy as np
def autolabel(ax, rects):
"""Attach a text label above each bar in *rects*, displaying its height."""
for rect in rects:
height = rect.get_height()
if height > 0.0:
ax.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset
textcoords="offset points",
ha='center', va='bottom')
else:
ax.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, -15), # 15 points vertical offset
textcoords="offset points",
ha='center', va='bottom')
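# Note: on Matplotlib >= 3.4 the built-in Axes.bar_label covers the common
# case; the manual version above is kept for the negative-bar offsets.
# Equivalent one-liner for positive bars (assuming a recent Matplotlib):
#     ax.bar_label(rects, padding=3)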
def draw_e2e_lr():
labels = ['BS', 'DP', 'MP', 'TN']
mobilenet_lr = [17.0, -20.0, -10.1, 19.8]
squeezenet_lr = [-9.8, -17.2, -7.4, 37.2]
ssd_mobilenet_lr = [-10.6, -21.0, -13.9, 31.8]
densenet121_lr = [-0.5, 16.8, -7.1, 55.9]
resnet50_lr = [8.9, 2.4, -8.6, 47.4]
ssd_vgg16_lr = [5.3, 24.4, -27.0, 51.2]
net_num = 6
width = 0.35
x0 = [(1+(net_num+1)*i)*width for i in range(len(labels))]
x1 = [(2+(net_num+1)*i)*width for i in range(len(labels))]
x2 = [(3+(net_num+1)*i)*width for i in range(len(labels))]
x3 = [(4+(net_num+1)*i)*width for i in range(len(labels))]
x4 = [(5+(net_num+1)*i)*width for i in range(len(labels))]
x5 = [(6+(net_num+1)*i)*width for i in range(len(labels))]
fig, ax = plt.subplots()
rects_mobilenet = ax.bar(x0, mobilenet_lr, width, label='MobileNet')
rects_squeezenet = ax.bar(x1, squeezenet_lr, width, label='SqueezeNet')
rects_ssd_mobilenet = ax.bar(x2, ssd_mobilenet_lr, width, label='SSD_MobileNetV1')
rects_densenet121 = ax.bar(x3, densenet121_lr, width, label='DenseNet121')
rects_resnet50 = ax.bar(x4, resnet50_lr, width, label='ResNet50')
rects_ssd_vgg16 = ax.bar(x5, ssd_vgg16_lr, width, label='SSD_VGG16')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('LR weights(%)')
ax.set_title('LR weights of hyper-parameters over end-to-end throughput')
ax.set_xticks(x3)
ax.set_xticklabels(labels)
ax.legend()
autolabel(ax, rects_mobilenet)
autolabel(ax, rects_squeezenet)
autolabel(ax, rects_ssd_mobilenet)
autolabel(ax, rects_densenet121)
autolabel(ax, rects_resnet50)
autolabel(ax, rects_ssd_vgg16)
fig.tight_layout()
plt.show()
def draw_hw_lr():
labels = ['BS', 'DP', 'MP', 'TN']
mobilenet_lr = [-1.4, 16.9, 13.8, 29.0]
squeezenet_lr = [-2.5, 21.5, 20.1, 29.7]
ssd_mobilenet_lr = [-0.7, 20.1, 7.7, 23.3]
densenet121_lr = [0.3, 27.4, 17.4, 51.0]
resnet50_lr = [2.2, 22.7, 12.1, 38.0]
ssd_vgg16_lr = [-0.3, 20.1, 5.0, 23.2]
net_num = 6
width = 0.35
x0 = [(1+(net_num+1)*i)*width for i in range(len(labels))]
x1 = [(2+(net_num+1)*i)*width for i in range(len(labels))]
x2 = [(3+(net_num+1)*i)*width for i in range(len(labels))]
x3 = [(4+(net_num+1)*i)*width for i in range(len(labels))]
x4 = [(5+(net_num+1)*i)*width for i in range(len(labels))]
x5 = [(6+(net_num+1)*i)*width for i in range(len(labels))]
fig, ax = plt.subplots()
rects_mobilenet = ax.bar(x0, mobilenet_lr, width, label='MobileNet')
rects_squeezenet = ax.bar(x1, squeezenet_lr, width, label='SqueezeNet')
rects_ssd_mobilenet = ax.bar(x2, ssd_mobilenet_lr, width, label='SSD_MobileNetV1')
rects_densenet121 = ax.bar(x3, densenet121_lr, width, label='DenseNet121')
rects_resnet50 = ax.bar(x4, resnet50_lr, width, label='ResNet50')
rects_ssd_vgg16 = ax.bar(x5, ssd_vgg16_lr, width, label='SSD_VGG16')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('LR weights(%)')
ax.set_title('LR weights of hyper-parameters over hardware throughput')
ax.set_xticks(x3)
ax.set_xticklabels(labels)
ax.legend()
autolabel(ax, rects_mobilenet)
autolabel(ax, rects_squeezenet)
autolabel(ax, rects_ssd_mobilenet)
autolabel(ax, rects_densenet121)
autolabel(ax, rects_resnet50)
autolabel(ax, rects_ssd_vgg16)
fig.tight_layout()
plt.show()
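def draw_lr_grouped(lr_by_net, title, labels=('BS', 'DP', 'MP', 'TN'), width=0.35):
    """Hypothetical helper, not called below: factors out the grouped-bar
    logic duplicated in draw_e2e_lr and draw_hw_lr. `lr_by_net` maps a
    network name to its list of LR weights, one value per label; dicts are
    assumed to preserve insertion order (Python 3.7+)."""
    net_num = len(lr_by_net)
    fig, ax = plt.subplots()
    for offset, (name, values) in enumerate(lr_by_net.items(), start=1):
        xs = [(offset + (net_num + 1) * i) * width for i in range(len(labels))]
        autolabel(ax, ax.bar(xs, values, width, label=name))
    ax.set_ylabel('LR weights(%)')
    ax.set_title(title)
    # Centre the tick under each group, matching the x3 choice used above.
    ax.set_xticks([(net_num // 2 + 1 + (net_num + 1) * i) * width
                   for i in range(len(labels))])
    ax.set_xticklabels(labels)
    ax.legend()
    fig.tight_layout()
    plt.show()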
if __name__ == '__main__':
draw_e2e_lr()
draw_hw_lr()
| 39.241071 | 86 | 0.615472 | 695 | 4,395 | 3.729496 | 0.184173 | 0.032407 | 0.080247 | 0.037037 | 0.810957 | 0.810957 | 0.810957 | 0.810957 | 0.790123 | 0.790123 | 0 | 0.079695 | 0.223436 | 4,395 | 111 | 87 | 39.594595 | 0.679754 | 0.058476 | 0 | 0.688172 | 0 | 0 | 0.082344 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.032258 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b4302d63e4350509e38eb74a1aa6b86644bafefb | 6,379 | py | Python | fireant/tests/dataset/operations/test_cumulative.py | mikeengland/fireant | 63c12728c11f1fb252265459f8b8f384d20414b9 | [
"Apache-2.0"
] | 122 | 2016-08-05T13:34:52.000Z | 2022-03-15T13:21:13.000Z | fireant/tests/dataset/operations/test_cumulative.py | mikeengland/fireant | 63c12728c11f1fb252265459f8b8f384d20414b9 | [
"Apache-2.0"
] | 321 | 2016-08-10T08:48:15.000Z | 2021-07-28T13:08:18.000Z | fireant/tests/dataset/operations/test_cumulative.py | mikeengland/fireant | 63c12728c11f1fb252265459f8b8f384d20414b9 | [
"Apache-2.0"
] | 27 | 2016-08-10T08:11:08.000Z | 2021-08-23T08:14:37.000Z | from unittest import TestCase
from unittest.mock import MagicMock
import pandas as pd
import pandas.testing
from fireant import CumMean, CumProd, CumSum, Field
from fireant.dataset.references import Reference, WeekOverWeek
from fireant.tests.dataset.mocks import (
ElectionOverElection,
dimx1_date_df,
dimx2_date_str_df,
dimx2_date_str_ref_df,
mock_dataset,
)
class CumSumTests(TestCase):
def test_apply_to_timeseries(self):
cumsum = CumSum(mock_dataset.fields.wins)
result = cumsum.apply(dimx1_date_df, None)
expected = pd.Series([2, 4, 6, 8, 10, 12], name='$wins', index=dimx1_date_df.index)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_timeseries_with_uni_dim(self):
cumsum = CumSum(mock_dataset.fields.wins)
result = cumsum.apply(dimx2_date_str_df, None)
expected = pd.Series([2, 0, 0, 2, 2, 2, 4, 4, 4, 6, 4, 6, 6], name='$wins', index=dimx2_date_str_df.index)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_timeseries_with_uni_dim_and_ref(self):
cumsum = CumSum(mock_dataset.fields.wins)
result = cumsum.apply(dimx2_date_str_ref_df, ElectionOverElection(mock_dataset.fields.timestamp))
expected = pd.Series(
[2.0, 0.0, 2.0, 0.0, 4.0, 0.0, 6.0, 2.0, 6.0, 4.0, 6.0], name='$wins_eoe', index=dimx2_date_str_ref_df.index
)
pandas.testing.assert_series_equal(expected, result)
def test_apply_cummulative_for_delta_percent(self):
dataset = MagicMock()
dataset.table._table_name = "table"
field = Field("value", None)
cumsum = CumSum(field)
reference = Reference(field, WeekOverWeek, delta=True, delta_percent=True)
df = pd.DataFrame.from_dict(
{
"$value": [55, 60, 108],
"$value_wow": [50, 50, 100],
"$cumsum(value)": [55, 115, 223],
"$value_wow_delta_percent": [10, 20, 8],
}
)
result = cumsum.apply(df, reference)
pandas.testing.assert_series_equal(pd.Series([10.0, 15.0, 11.5]), result)
class CumProdTests(TestCase):
def test_apply_to_timeseries(self):
cumprod = CumProd(mock_dataset.fields.wins)
result = cumprod.apply(dimx1_date_df, None)
expected = pd.Series([2, 4, 8, 16, 32, 64], name='$wins', index=dimx1_date_df.index)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_timeseries_with_uni_dim(self):
cumprod = CumProd(mock_dataset.fields.wins)
result = cumprod.apply(dimx2_date_str_df, None)
expected = pd.Series([2] + [0] * 12, name='$wins', index=dimx2_date_str_df.index)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_timeseries_with_uni_dim_and_ref(self):
cumprod = CumProd(mock_dataset.fields.wins)
result = cumprod.apply(dimx2_date_str_ref_df, ElectionOverElection(mock_dataset.fields.timestamp))
expected = pd.Series([2.0] + [0.0] * 10, name='$wins_eoe', index=dimx2_date_str_ref_df.index)
pandas.testing.assert_series_equal(expected, result)
def test_apply_cummulative_for_delta_percent(self):
dataset = MagicMock()
dataset.table._table_name = "table"
field = Field("value", None)
cumsum = CumProd(field)
reference = Reference(field, WeekOverWeek, delta=True, delta_percent=True)
df = pd.DataFrame.from_dict(
{
"$value": [55, 60, 108],
"$value_wow": [50, 50, 100],
"$cumprod(value)": [55, 3300, 356400],
"$value_wow_delta_percent": [10, 20, 8],
}
)
result = cumsum.apply(df, reference)
pandas.testing.assert_series_equal(pd.Series([10.0, 32.0, 42.56]), result)
class CumMeanTests(TestCase):
def test_apply_to_timeseries(self):
cummean = CumMean(mock_dataset.fields.votes)
result = cummean.apply(dimx1_date_df, None)
expected = dimx1_date_df['$votes'].astype(float).cumsum() / range(1, len(dimx1_date_df) + 1)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_timeseries_with_uni_dim(self):
cummean = CumMean(mock_dataset.fields.votes)
result = cummean.apply(dimx2_date_str_df, None)
expected = pd.Series(
[
7579518.0,
1076384.0,
6564547.0,
7937233.5,
7465807.5,
8484218.666666666,
8322786.0,
9313940.5,
8614866.75,
9935978.0,
8521509.8,
9091928.0,
9341064.0,
],
name='$votes',
index=dimx2_date_str_df.index,
)
pandas.testing.assert_series_equal(expected, result)
def test_apply_to_timeseries_with_uni_dim_and_ref(self):
cummean = CumMean(mock_dataset.fields.votes)
result = cummean.apply(dimx2_date_str_ref_df, ElectionOverElection(mock_dataset.fields.timestamp))
expected = pd.Series(
[
7579518.0,
1076384.0,
7072032.5,
4685666.5,
7503711.0,
6316507.333333333,
8136969.0,
7688157.0,
8407797.0,
8635351.2,
8364511.166666667,
],
name='$votes_eoe',
index=dimx2_date_str_ref_df.index,
)
pandas.testing.assert_series_equal(expected, result)
def test_apply_cummulative_for_delta_percent(self):
dataset = MagicMock()
dataset.table._table_name = "table"
field = Field("value", None)
cumsum = CumMean(field)
reference = Reference(field, WeekOverWeek, delta=True, delta_percent=True)
df = pd.DataFrame.from_dict(
{
"$value": [55, 60, 108],
"$value_wow": [50, 50, 100],
"$cummean(value)": [55, 57.5, 74 + (1 / 3)],
"$value_wow_delta_percent": [10, 20, 8],
}
)
result = cumsum.apply(df, reference)
pandas.testing.assert_series_equal(pd.Series([10.0, 15.0, 11.5]), result)
| 35.837079 | 120 | 0.604797 | 780 | 6,379 | 4.696154 | 0.158974 | 0.034398 | 0.045864 | 0.0819 | 0.801256 | 0.796615 | 0.790063 | 0.755119 | 0.755119 | 0.749113 | 0 | 0.091108 | 0.282489 | 6,379 | 177 | 121 | 36.039548 | 0.709198 | 0 | 0 | 0.468966 | 0 | 0 | 0.039818 | 0.011287 | 0 | 0 | 0 | 0 | 0.082759 | 1 | 0.082759 | false | 0 | 0.048276 | 0 | 0.151724 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b444d45303e82e585a54f1edfd58ec87812630e1 | 91 | py | Python | h5Nastran/__init__.py | mjredmond/mrNastran | 4fa57c16e93622ad8be3fb2ed221415ed25c5635 | [
"BSD-3-Clause"
] | 3 | 2017-12-02T05:13:05.000Z | 2017-12-07T04:34:13.000Z | h5Nastran/__init__.py | mjredmond/mrNastran | 4fa57c16e93622ad8be3fb2ed221415ed25c5635 | [
"BSD-3-Clause"
] | null | null | null | h5Nastran/__init__.py | mjredmond/mrNastran | 4fa57c16e93622ad8be3fb2ed221415ed25c5635 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import print_function, absolute_import
from .h5_nastran import H5Nastran | 30.333333 | 55 | 0.857143 | 12 | 91 | 5.916667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.120879 | 91 | 3 | 56 | 30.333333 | 0.8625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
b4656df643fc1944f75c2bfa06a3b14a366eacd2 | 297 | py | Python | problems/303_range-sum-query-immutable.py | okuda-seminar/review_leetcode | 9774dbb85b836c3ebab4b24d77774ed05abb7a32 | [
"MIT"
] | null | null | null | problems/303_range-sum-query-immutable.py | okuda-seminar/review_leetcode | 9774dbb85b836c3ebab4b24d77774ed05abb7a32 | [
"MIT"
] | 170 | 2021-05-11T14:03:05.000Z | 2021-11-30T14:22:52.000Z | problems/303_range-sum-query-immutable.py | ryuji0123/review_leetcode | 9774dbb85b836c3ebab4b24d77774ed05abb7a32 | [
"MIT"
] | null | null | null | # n = nums.length
# time = O(n)
# space = O(1)
class NumArray:
def __init__(self, nums: List[int]):
        self.cumulative_sum = list(accumulate([0] + nums))
def sumRange(self, left: int, right: int) -> int:
        return self.cumulative_sum[right + 1] - self.cumulative_sum[left]
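# Example (LeetCode 303): NumArray([-2, 0, 3, -5, 2, -1]).sumRange(0, 2) == 1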
| 29.7 | 71 | 0.612795 | 42 | 297 | 4.166667 | 0.52381 | 0.222857 | 0.274286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013216 | 0.23569 | 297 | 9 | 72 | 33 | 0.757709 | 0.13468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b4658d9f43c54ae2ea2fc56353419388cc31bb9d | 124 | py | Python | grayscale/clang/math/max.py | KennethanCeyer/grayscale | 646a11ea47f2120f317e554c736d8054aa55c4c4 | [
"MIT"
] | null | null | null | grayscale/clang/math/max.py | KennethanCeyer/grayscale | 646a11ea47f2120f317e554c736d8054aa55c4c4 | [
"MIT"
] | null | null | null | grayscale/clang/math/max.py | KennethanCeyer/grayscale | 646a11ea47f2120f317e554c736d8054aa55c4c4 | [
"MIT"
] | null | null | null | from typing import List
from grayscale.clang import dll
def max(nums: List[float]) -> float:
return dll.gs_max(nums)
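# Usage sketch — assumes the grayscale C library is built and loadable via
# `dll`, and that gs_max returns the largest element; note this module-level
# `max` shadows the builtin within this module.
#     from grayscale.clang.math.max import max as gs_max
#     gs_max([3.0, 1.0, 2.0])  # expected: 3.0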
| 15.5 | 36 | 0.725806 | 20 | 124 | 4.45 | 0.65 | 0.157303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 124 | 7 | 37 | 17.714286 | 0.872549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
81eca17d648d69799b04a6eb883103a30bb9582e | 552 | py | Python | qlearning4k/games/game.py | AlexGreason/DeepQLearning | bc554946bf84644b3430aeab9ad27c2c1f08f689 | [
"MIT"
] | null | null | null | qlearning4k/games/game.py | AlexGreason/DeepQLearning | bc554946bf84644b3430aeab9ad27c2c1f08f689 | [
"MIT"
] | null | null | null | qlearning4k/games/game.py | AlexGreason/DeepQLearning | bc554946bf84644b3430aeab9ad27c2c1f08f689 | [
"MIT"
] | null | null | null | class Game:
def __init__(self):
self.reset()
@property
def name(self):
return "Game"
@property
def nb_actions(self):
return 0
def reset(self):
pass
def play(self, action):
pass
def get_state(self):
return None
def get_score(self):
return 0
def is_over(self):
return False
def is_won(self):
return False
def get_frame(self, player=None):
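        # Subclasses that support multiple players are expected to override
        # get_state with a (self, player=None) signature; the base get_state
        # above takes no player and would raise a TypeError here otherwise.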
if player is None:
return self.get_state()
else:
return self.get_state(player)
def draw(self):
return self.get_state()
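# Minimal sketch of a concrete subclass (hypothetical, for illustration):
#
#     class Counter(Game):
#         def reset(self):
#             self.value = 0
#         def play(self, action):
#             self.value += action
#         def get_state(self):
#             return self.value
#         def get_score(self):
#             return self.value
#         def is_over(self):
#             return self.value >= 10
#         def is_won(self):
#             return self.is_over()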
| 13.8 | 35 | 0.628623 | 80 | 552 | 4.175 | 0.35 | 0.209581 | 0.116766 | 0.161677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004975 | 0.271739 | 552 | 39 | 36 | 14.153846 | 0.825871 | 0 | 0 | 0.357143 | 0 | 0 | 0.007797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.392857 | false | 0.071429 | 0 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
81f37e5d92a3cb9be77ba832f0b779004ed5796a | 110 | py | Python | mediafeed/core/views.py | rennerocha/youtube-organizer | a267b3281f183cc4bcf37b3543324084540bfc25 | [
"MIT"
] | 11 | 2020-06-17T18:00:04.000Z | 2020-07-15T13:11:36.000Z | mediafeed/core/views.py | rennerocha/youtube-organizer | a267b3281f183cc4bcf37b3543324084540bfc25 | [
"MIT"
] | 12 | 2020-06-24T19:16:07.000Z | 2020-07-21T13:33:14.000Z | mediafeed/core/views.py | rennerocha/youtube-organizer | a267b3281f183cc4bcf37b3543324084540bfc25 | [
"MIT"
] | null | null | null | from django.shortcuts import redirect
def user_profile(request):
return redirect("/c/renne/computacao")
| 18.333333 | 42 | 0.772727 | 14 | 110 | 6 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 110 | 5 | 43 | 22 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.172727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
81f6ef5dc045d6a8046b8d32045bc73e02891351 | 70 | py | Python | package1/package2/__init__.py | leowindwave/YAPT | ee5ec568ed746f90a18dc514836624d435a7ccdb | [
"CC0-1.0"
] | 4 | 2017-03-06T09:49:11.000Z | 2019-10-16T00:09:38.000Z | package1/package2/__init__.py | leowindwave/YAPT | ee5ec568ed746f90a18dc514836624d435a7ccdb | [
"CC0-1.0"
] | null | null | null | package1/package2/__init__.py | leowindwave/YAPT | ee5ec568ed746f90a18dc514836624d435a7ccdb | [
"CC0-1.0"
] | 7 | 2017-11-02T11:00:30.000Z | 2020-01-31T22:41:27.000Z | print("package1/package2/__init__.py excuted")
from . import module2
| 17.5 | 46 | 0.785714 | 9 | 70 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.1 | 70 | 3 | 47 | 23.333333 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0.536232 | 0.42029 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
c31858a31f560a11e9505ccb53e5f4bff21e18ae | 14,558 | py | Python | python-impl/serdesZ.py | Byeongjee/bls_sigs_ref-fork | ae912b8f701fa93d26886eadef02d5d9c76e99ed | [
"Apache-2.0"
] | 2 | 2020-09-14T20:52:56.000Z | 2021-07-11T14:58:14.000Z | python-impl/serdesZ.py | Byeongjee/bls_sigs_ref-fork | ae912b8f701fa93d26886eadef02d5d9c76e99ed | [
"Apache-2.0"
] | 1 | 2019-10-23T15:06:57.000Z | 2019-10-23T15:06:57.000Z | python-impl/serdesZ.py | Byeongjee/bls_sigs_ref-fork | ae912b8f701fa93d26886eadef02d5d9c76e99ed | [
"Apache-2.0"
] | 1 | 2020-02-10T01:00:33.000Z | 2020-02-10T01:00:33.000Z | #!/usr/bin/python
# vim: syntax=python
#
# point serialization / deserialization
# using the ZCash format
# https://github.com/zkcrypto/pairing/blob/master/src/bls12_381/README.md
# https://github.com/zcash/zcash/issues/2517
# (C) 2019 Riad S. Wahby <rsw@cs.stanford.edu>
#
# see the comment at the top of ../sage-impl/serdesZ.sage for more info
import struct
from consts import p
from curve_ops import from_jacobian, point_eq
from fields import Fq, Fq2, sgn0, sqrt_F2
from serdes import DeserError, SerError, _to_bytes_F1, _to_bytes_F2, \
_from_bytes_F1, _from_bytes_F2, _gx1, _gx2, \
F1_zero, F1_one, F2_zero, F2_one
def serialize(P, compressed=True):
if isinstance(P[0], Fq):
return _serialize_help(P, compressed, _to_bytes_F1, 48, _gx1)
if isinstance(P[0], Fq2):
return _serialize_help(P, compressed, _to_bytes_F2, 96, _gx2)
raise SerError("cannot serialize " + str(P))
def _serialize_help(P, compressed, to_bytes, clen, g):
# point at infinity
if P[2] == 0:
if compressed:
return b'\xc0' + b'\x00' * (clen - 1)
return b'\x40' + b'\x00' * (2 * clen - 1)
(x, y) = from_jacobian(P)
if pow(y, 2) != g(x):
raise SerError("cannot serialize invalid point")
x_str = to_bytes(x)
if not compressed:
return struct.pack("=" + "B" * 2 * clen, *(x_str + to_bytes(y)))
y_neg = sgn0(y) < 0
tag_bits = 0xa0 if y_neg else 0x80
x_str[0] = x_str[0] | tag_bits
return struct.pack("=" + "B" * clen, *x_str)
def deserialize(sp, is_ell2=False):
if not is_ell2:
return _deserialize_help(sp, _from_bytes_F1, 48, _gx1, lambda x: pow(x, (p + 1) // 4), F1_zero, F1_one)
return _deserialize_help(sp, _from_bytes_F2, 96, _gx2, sqrt_F2, F2_zero, F2_one)
def _deserialize_help(sp, from_bytes, clen, g, sqrt_fn, zero, one):
data = list(struct.unpack("=" + "B" * len(sp), sp))
(tag, data[0]) = (data[0] >> 5, data[0] & 0x1f)
if tag in (0b001, 0b011, 0b111):
raise DeserError("cannot deserialize value with invalid tag: %d" % tag)
if tag == 0b000:
# uncompressed point
if len(data) != 2 * clen:
raise DeserError("invalid uncompresed point: length must be %d, got %d" % (2 * clen, len(data)))
x = from_bytes(data[:clen])
y = from_bytes(data[clen:])
if pow(y, 2) != g(x):
raise DeserError("invalid uncompressed point: not on curve")
return (x, y, one)
if tag in (0b010, 0b110):
# point at infinity
expected_len = 2 * clen if tag == 0b010 else clen
if len(data) != expected_len:
raise DeserError("invalid point at infinity: length must be %d, got %d" % (expected_len, len(data)))
        if any(d != 0 for d in data):
raise DeserError("invalid point at infinity: must be all 0s other than tag")
return (zero, one, zero)
if tag in (0b100, 0b101):
# compressed point
if len(data) != clen:
raise DeserError("invalid compressed point: length must be %d, got %d" % (clen, len(data)))
x = from_bytes(data)
# recompute y
gx = g(x)
y = sqrt_fn(gx)
if y is None or pow(y, 2) != gx:
raise DeserError("invalid compressed point: g(x) is nonsquare")
# fix sign of y
y_neg = -1 if tag == 0b101 else 1
y = y_neg * sgn0(y) * y
return (x, y, one)
raise DeserError("invalid tag %d" % tag)
if __name__ == "__main__":
import binascii
import random
import sys
from opt_swu_g1 import opt_swu_map
from opt_swu_g2 import opt_swu2_map
invalid_inputs_1 = [
# infinity points: too short
"c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"4000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
# infinity points: not all zeros
"c00000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000",
"400000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000",
# bad tags
"3a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
"7a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
"fa0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
# wrong length for compresed point
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaa",
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaaaa",
# wrong length for uncompressed point
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
# invalid x-coord
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
# invalid elm of Fp --- equal to p (must be strictly less)
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab",
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab",
# point not on curve
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
]
invalid_inputs_2 = [
# infinity points: too short
"c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"4000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
# infinity points: not all zeros
"c00000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000",
"400000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000",
# bad tags
"3a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"7a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"fa0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
# wrong length for compressed point
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
# wrong length for uncompressed point
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
# invalid x-coord
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaa7",
# invalid elm of Fp --- equal to p (must be strictly less)
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"9a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab",
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab",
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa3a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
# point not on curve
"1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaaa",
]
def test_ell(P):
Puc = deserialize(serialize(P, False), isinstance(P[0], Fq2))
Pc = deserialize(serialize(P, True), isinstance(P[0], Fq2))
assert point_eq(P, Puc)
assert point_eq(P, Pc)
def main():
for Pinf in ((F1_zero, F1_one, F1_zero), (F2_zero, F2_one, F2_zero)):
test_ell(Pinf)
sys.stdout.write('.')
sys.stdout.flush()
for _ in range(0, 32):
sys.stdout.write('.')
sys.stdout.flush()
test_ell(opt_swu_map(Fq(p, random.getrandbits(380))))
test_ell(opt_swu2_map(Fq2(p, random.getrandbits(380), random.getrandbits(380))))
for (ell2, invals) in ((False, invalid_inputs_1), (True, invalid_inputs_2)):
curve_name = "E2" if ell2 else "E1"
for (idx, inval) in enumerate(invals):
try:
deserialize(binascii.unhexlify(inval), ell2)
except DeserError:
sys.stdout.write('*')
sys.stdout.flush()
else:
raise DeserError("expected failed deserialization of #%d on %s" % (idx, curve_name))
sys.stdout.write('\n')
main()
| 73.898477 | 397 | 0.812199 | 828 | 14,558 | 14.129227 | 0.289855 | 0.011539 | 0.013164 | 0.002821 | 0.071203 | 0.046671 | 0.023592 | 0.006838 | 0.006838 | 0.006838 | 0 | 0.535805 | 0.134771 | 14,558 | 196 | 398 | 74.27551 | 0.392982 | 0.060791 | 0 | 0.064748 | 0 | 0 | 0.661169 | 0.625761 | 0 | 1 | 0.00088 | 0 | 0.014388 | 1 | 0.043165 | false | 0 | 0.071942 | 0 | 0.194245 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c34243eb18839812cd504a19a6e986c5e0497445 | 124 | py | Python | main.py | diazcal/data-analytics-spanish-parliament-seat-allocation | 9131302c76edeb8b35ffa3b078232f9f6919ecd8 | [
"MIT"
] | null | null | null | main.py | diazcal/data-analytics-spanish-parliament-seat-allocation | 9131302c76edeb8b35ffa3b078232f9f6919ecd8 | [
"MIT"
] | null | null | null | main.py | diazcal/data-analytics-spanish-parliament-seat-allocation | 9131302c76edeb8b35ffa3b078232f9f6919ecd8 | [
"MIT"
] | null | null | null | from datasets.parties.all import df_regions_votes_and_parties
from datasets.regions.province import df_community_and_provice | 62 | 62 | 0.91129 | 19 | 124 | 5.578947 | 0.631579 | 0.226415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056452 | 124 | 2 | 62 | 62 | 0.905983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c37628f0ba44b8d21af74a0700b20d8e4066269d | 4,292 | py | Python | cmd/trainer/model/model.py | andrewcopp/coup | 184629e546e0f190b6c16ca8b1d8c6a8c94ac578 | [
"MIT"
] | 1 | 2020-02-16T22:10:53.000Z | 2020-02-16T22:10:53.000Z | cmd/trainer/model/model.py | andrewcopp/coup | 184629e546e0f190b6c16ca8b1d8c6a8c94ac578 | [
"MIT"
] | null | null | null | cmd/trainer/model/model.py | andrewcopp/coup | 184629e546e0f190b6c16ca8b1d8c6a8c94ac578 | [
"MIT"
] | null | null | null | import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# The model is responsible for all of the TensorFlow logic.
class Model:
def __init__(self):
self.epsilon = 0.01
def initialize(self, outfile):
tf.reset_default_graph()
n_inputs = 301
n_outputs = 1
n_hidden_layer = 256
with tf.device('/cpu:0'):
weights = {
'hidden_layer': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_layer])),
'out': tf.Variable(tf.truncated_normal([n_hidden_layer, n_outputs]))
}
biases = {
'hidden_layer': tf.Variable(tf.zeros([n_hidden_layer])),
'out': tf.Variable(tf.zeros([n_outputs]))
}
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
saver.save(sess, outfile)
def transfer(self, infile, outfile):
tf.reset_default_graph()  # assumption: parity with the other methods, so restore starts from a clean graph
n_inputs = 301
n_outputs = 1
n_hidden_layer = 256
with tf.device('/cpu:0'):
weights = {
'hidden_layer': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_layer])),
'out': tf.Variable(tf.truncated_normal([n_hidden_layer, n_outputs]))
}
biases = {
'hidden_layer': tf.Variable(tf.zeros([n_hidden_layer])),
'out': tf.Variable(tf.zeros([n_outputs]))
}
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, infile)
saver.save(sess, outfile)
def train(self, infile, outfile, inputs, outputs):
tf.reset_default_graph()
learning_rate = 0.01
n_inputs = 301
n_outputs = 1
n_hidden_layer = 256
with tf.device('/cpu:0'):
features = tf.placeholder(tf.float32, [None, n_inputs])
labels = tf.placeholder(tf.float32, [None, n_outputs])
weights = {
'hidden_layer': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_layer])),
'out': tf.Variable(tf.truncated_normal([n_hidden_layer, n_outputs]))
}
biases = {
'hidden_layer': tf.Variable(tf.zeros([n_hidden_layer])),
'out': tf.Variable(tf.zeros([n_outputs]))
}
layer_1 = tf.add(tf.matmul(features, weights['hidden_layer']), biases['hidden_layer'])
layer_1 = tf.nn.relu(layer_1)
logits = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
logits = tf.nn.tanh(logits)
cost = tf.reduce_sum(tf.pow(logits-labels, 2))/(2*1)  # halved sum of squared errors; the literal 1 hard-codes a batch size of one
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, infile)
sess.run(optimizer, feed_dict={features: inputs, labels: outputs})
saver.save(sess, outfile)
def fit(self, infile, inputs):
tf.reset_default_graph()
n_inputs = 301
n_outputs = 1
n_hidden_layer = 256
with tf.device('/cpu:0'):
features = tf.placeholder(tf.float32, [None, n_inputs])
labels = tf.placeholder(tf.float32, [None, n_outputs])
weights = {
'hidden_layer': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_layer])),
'out': tf.Variable(tf.truncated_normal([n_hidden_layer, n_outputs]))
}
biases = {
'hidden_layer': tf.Variable(tf.zeros([n_hidden_layer])),
'out': tf.Variable(tf.zeros([n_outputs]))
}
layer_1 = tf.add(tf.matmul(features, weights['hidden_layer']), biases['hidden_layer'])
layer_1 = tf.nn.relu(layer_1)
logits = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
logits = tf.nn.tanh(logits)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, infile)
prediction = sess.run(logits, feed_dict={features: inputs})
return prediction
# layout - check
# communication
# saving - check
# data
# model
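# Usage sketch (illustrative; the checkpoint path and data values are placeholders,
# assuming 301-dimensional feature rows and scalar labels as the graphs above define):
# model = Model()
# model.initialize('./ckpt/model.ckpt')                       # build and checkpoint fresh weights
# model.train('./ckpt/model.ckpt', './ckpt/model.ckpt',
#             inputs=[[0.0] * 301], outputs=[[1.0]])          # one gradient-descent step
# prediction = model.fit('./ckpt/model.ckpt', [[0.0] * 301])  # tanh output, shape (1, 1)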
| 30.013986 | 101 | 0.561976 | 513 | 4,292 | 4.495127 | 0.179337 | 0.133565 | 0.083261 | 0.072853 | 0.747181 | 0.717259 | 0.717259 | 0.717259 | 0.717259 | 0.701648 | 0 | 0.019595 | 0.310345 | 4,292 | 142 | 102 | 30.225352 | 0.759459 | 0.026095 | 0 | 0.726316 | 0 | 0 | 0.053918 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.021053 | 0 | 0.094737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c382b8dfb0cfea4ff39a3c7b2f9370c0d9694512 | 66 | py | Python | nntoolbox/vision/losses/__init__.py | nhatsmrt/nn-toolbox | 689b9924d3c88a433f8f350b89c13a878ac7d7c3 | [
"Apache-2.0"
] | 16 | 2019-07-11T15:57:41.000Z | 2020-09-08T13:52:45.000Z | nntoolbox/vision/losses/__init__.py | nhatsmrt/nn-toolbox | 689b9924d3c88a433f8f350b89c13a878ac7d7c3 | [
"Apache-2.0"
] | 1 | 2022-01-18T22:21:57.000Z | 2022-01-18T22:21:57.000Z | nntoolbox/vision/losses/__init__.py | nhatsmrt/nn-toolbox | 689b9924d3c88a433f8f350b89c13a878ac7d7c3 | [
"Apache-2.0"
] | 1 | 2019-08-07T10:07:09.000Z | 2019-08-07T10:07:09.000Z | from .style import *
from .metrics import *
from .robust import *
| 16.5 | 22 | 0.727273 | 9 | 66 | 5.333333 | 0.555556 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 66 | 3 | 23 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5edbd958ecb52bbb780dc5061856b9864bad8b26 | 8,526 | py | Python | lino_book/projects/lydia/tests/dumps/18.12.0/finan_bankstatementitem.py | lino-framework/lino_book | 4eab916832cd8f48ff1b9fc8c2789f0b437da0f8 | [
"BSD-2-Clause"
] | 3 | 2016-08-25T05:58:09.000Z | 2019-12-05T11:13:45.000Z | lino_book/projects/lydia/tests/dumps/18.12.0/finan_bankstatementitem.py | lino-framework/lino_book | 4eab916832cd8f48ff1b9fc8c2789f0b437da0f8 | [
"BSD-2-Clause"
] | 18 | 2016-11-12T21:38:58.000Z | 2019-12-03T17:54:38.000Z | lino_book/projects/lydia/tests/dumps/18.12.0/finan_bankstatementitem.py | lino-framework/lino_book | 4eab916832cd8f48ff1b9fc8c2789f0b437da0f8 | [
"BSD-2-Clause"
] | 9 | 2016-10-15T11:12:33.000Z | 2021-09-22T04:37:37.000Z | # -*- coding: UTF-8 -*-
logger.info("Loading 85 objects to table finan_bankstatementitem...")
# fields: id, seqno, match, amount, dc, remark, account, partner, date, voucher
loader.save(create_finan_bankstatementitem(1,1,u'SLS 25/2015','405.00',True,u'',2,113,None,129))
loader.save(create_finan_bankstatementitem(2,2,u'SLS 26/2015','350.00',True,u'',2,114,None,129))
loader.save(create_finan_bankstatementitem(3,3,u'SLS 41/2015','260.00',True,u'',2,115,None,129))
loader.save(create_finan_bankstatementitem(4,4,u'SLS 42/2015','260.00',True,u'',2,116,None,129))
loader.save(create_finan_bankstatementitem(5,5,u'SLS 43/2015','247.00',True,u'',2,117,None,129))
loader.save(create_finan_bankstatementitem(6,6,u'SLS 46/2015','150.00',True,u'',2,118,None,129))
loader.save(create_finan_bankstatementitem(7,7,u'SLS 45/2015','150.00',True,u'',2,119,None,129))
loader.save(create_finan_bankstatementitem(8,8,u'SLS 44/2015','150.00',True,u'',2,120,None,129))
loader.save(create_finan_bankstatementitem(10,10,u'SLS 47/2015','150.00',True,u'',2,123,None,129))
loader.save(create_finan_bankstatementitem(11,11,u'SLS 31/2015','85.00',True,u'',2,132,None,129))
loader.save(create_finan_bankstatementitem(12,12,u'SLS 32/2015','42.00',True,u'',2,134,None,129))
loader.save(create_finan_bankstatementitem(13,13,u'SLS 33/2015','117.30',True,u'',2,139,None,129))
loader.save(create_finan_bankstatementitem(14,14,u'SLS 34/2015','60.00',True,u'',2,141,None,129))
loader.save(create_finan_bankstatementitem(15,15,u'SLS 35/2015','120.00',True,u'',2,146,None,129))
loader.save(create_finan_bankstatementitem(16,16,u'SLS 29/2015','120.00',True,u'',2,150,None,129))
loader.save(create_finan_bankstatementitem(17,17,u'SLS 30/2015','90.00',True,u'',2,151,None,129))
loader.save(create_finan_bankstatementitem(18,18,u'SLS 37/2015','95.00',True,u'',2,156,None,129))
loader.save(create_finan_bankstatementitem(19,19,u'SLS 38/2015','60.00',True,u'',2,157,None,129))
loader.save(create_finan_bankstatementitem(20,20,u'SLS 40/2015','60.00',True,u'',2,165,None,129))
loader.save(create_finan_bankstatementitem(21,21,u'SLS 36/2015','60.00',True,u'',2,172,None,129))
loader.save(create_finan_bankstatementitem(23,23,u'SLS 28/2015','360.00',True,u'',2,180,None,129))
loader.save(create_finan_bankstatementitem(24,1,u'SLS 43/2015','13.00',True,u'',2,117,None,130))
loader.save(create_finan_bankstatementitem(25,2,u'SLS 27/2015','189.00',True,u'',2,122,None,130))
loader.save(create_finan_bankstatementitem(26,3,u'SLS 32/2015','18.36',True,u'',2,134,None,130))
loader.save(create_finan_bankstatementitem(27,4,u'SLS 33/2015','2.30',False,u'',2,139,None,130))
loader.save(create_finan_bankstatementitem(28,5,u'SLS 37/2015','5.00',True,u'',2,156,None,130))
loader.save(create_finan_bankstatementitem(29,6,u'SLS 39/2015','115.00',True,u'',2,173,None,130))
loader.save(create_finan_bankstatementitem(30,7,u'SLS 48/2015','180.00',True,u'',2,113,None,130))
loader.save(create_finan_bankstatementitem(31,8,u'SLS 49/2015','114.00',True,u'',2,114,None,130))
loader.save(create_finan_bankstatementitem(32,9,u'SLS 64/2015','60.00',True,u'',2,115,None,130))
loader.save(create_finan_bankstatementitem(33,10,u'SLS 65/2015','60.00',True,u'',2,116,None,130))
loader.save(create_finan_bankstatementitem(34,11,u'SLS 66/2015','60.00',True,u'',2,117,None,130))
loader.save(create_finan_bankstatementitem(36,13,u'SLS 68/2015','30.00',True,u'',2,119,None,130))
loader.save(create_finan_bankstatementitem(37,14,u'SLS 67/2015','30.00',True,u'',2,120,None,130))
loader.save(create_finan_bankstatementitem(38,15,u'SLS 50/2015','101.50',True,u'',2,122,None,130))
loader.save(create_finan_bankstatementitem(39,16,u'SLS 70/2015','30.60',True,u'',2,123,None,130))
loader.save(create_finan_bankstatementitem(40,17,u'SLS 54/2015','175.00',True,u'',2,132,None,130))
loader.save(create_finan_bankstatementitem(41,18,u'SLS 55/2015','60.00',True,u'',2,134,None,130))
loader.save(create_finan_bankstatementitem(42,19,u'SLS 56/2015','120.00',True,u'',2,139,None,130))
loader.save(create_finan_bankstatementitem(43,20,u'SLS 57/2015','60.00',True,u'',2,141,None,130))
loader.save(create_finan_bankstatementitem(44,21,u'SLS 58/2015','137.75',True,u'',2,146,None,130))
loader.save(create_finan_bankstatementitem(45,22,u'SLS 52/2015','85.00',True,u'',2,150,None,130))
loader.save(create_finan_bankstatementitem(46,23,u'SLS 53/2015','60.00',True,u'',2,151,None,130))
loader.save(create_finan_bankstatementitem(47,24,u'SLS 60/2015','115.00',True,u'',2,156,None,130))
loader.save(create_finan_bankstatementitem(49,26,u'SLS 63/2015','60.00',True,u'',2,165,None,130))
loader.save(create_finan_bankstatementitem(50,27,u'SLS 59/2015','90.00',True,u'',2,172,None,130))
loader.save(create_finan_bankstatementitem(51,28,u'SLS 62/2015','73.50',True,u'',2,173,None,130))
loader.save(create_finan_bankstatementitem(52,29,u'SLS 51/2015','122.40',True,u'',2,180,None,130))
loader.save(create_finan_bankstatementitem(53,1,u'SLS 27/2015','81.00',True,u'',2,122,None,131))
loader.save(create_finan_bankstatementitem(54,2,u'SLS 32/2015','0.36',False,u'',2,134,None,131))
loader.save(create_finan_bankstatementitem(55,3,u'SLS 49/2015','6.00',True,u'',2,114,None,131))
loader.save(create_finan_bankstatementitem(56,4,u'SLS 69/2015','30.00',True,u'',2,118,None,131))
loader.save(create_finan_bankstatementitem(57,5,u'SLS 50/2015','41.32',True,u'',2,122,None,131))
loader.save(create_finan_bankstatementitem(58,6,u'SLS 70/2015','0.60',False,u'',2,123,None,131))
loader.save(create_finan_bankstatementitem(59,7,u'SLS 58/2015','7.25',True,u'',2,146,None,131))
loader.save(create_finan_bankstatementitem(60,8,u'SLS 61/2015','60.00',True,u'',2,157,None,131))
loader.save(create_finan_bankstatementitem(62,10,u'SLS 51/2015','2.40',False,u'',2,180,None,131))
loader.save(create_finan_bankstatementitem(63,11,u'SLS 71/2015','115.00',True,u'',2,113,None,131))
loader.save(create_finan_bankstatementitem(64,12,u'SLS 72/2015','42.00',True,u'',2,114,None,131))
loader.save(create_finan_bankstatementitem(65,13,u'SLS 73/2015','107.10',True,u'',2,122,None,131))
loader.save(create_finan_bankstatementitem(66,14,u'SLS 77/2015','120.00',True,u'',2,132,None,131))
loader.save(create_finan_bankstatementitem(67,15,u'SLS 78/2015','60.00',True,u'',2,134,None,131))
loader.save(create_finan_bankstatementitem(68,16,u'SLS 79/2015','85.00',True,u'',2,139,None,131))
loader.save(create_finan_bankstatementitem(69,17,u'SLS 80/2015','60.00',True,u'',2,141,None,131))
loader.save(create_finan_bankstatementitem(70,18,u'SLS 81/2015','109.25',True,u'',2,146,None,131))
loader.save(create_finan_bankstatementitem(71,19,u'SLS 75/2015','120.00',True,u'',2,150,None,131))
loader.save(create_finan_bankstatementitem(72,20,u'SLS 76/2015','60.00',True,u'',2,151,None,131))
loader.save(create_finan_bankstatementitem(73,21,u'SLS 83/2015','120.00',True,u'',2,156,None,131))
loader.save(create_finan_bankstatementitem(75,23,u'SLS 86/2015','60.00',True,u'',2,165,None,131))
loader.save(create_finan_bankstatementitem(76,24,u'SLS 82/2015','60.00',True,u'',2,172,None,131))
loader.save(create_finan_bankstatementitem(77,25,u'SLS 85/2015','70.00',True,u'',2,173,None,131))
loader.save(create_finan_bankstatementitem(78,26,u'SLS 74/2015','61.20',True,u'',2,180,None,131))
loader.save(create_finan_bankstatementitem(79,1,u'SLS 50/2015','2.18',True,u'',2,122,None,132))
loader.save(create_finan_bankstatementitem(80,2,u'SLS 62/2015','31.50',True,u'',2,173,None,132))
loader.save(create_finan_bankstatementitem(81,3,u'SLS 72/2015','18.00',True,u'',2,114,None,132))
loader.save(create_finan_bankstatementitem(82,4,u'SLS 73/2015','2.10',False,u'',2,122,None,132))
loader.save(create_finan_bankstatementitem(83,5,u'SLS 81/2015','5.46',True,u'',2,146,None,132))
loader.save(create_finan_bankstatementitem(84,6,u'SLS 84/2015','60.00',True,u'',2,157,None,132))
loader.save(create_finan_bankstatementitem(85,7,u'SLS 85/2015','30.00',True,u'',2,173,None,132))
loader.save(create_finan_bankstatementitem(86,8,u'SLS 74/2015','1.20',False,u'',2,180,None,132))
loader.save(create_finan_bankstatementitem(88,10,u'SLS 14/2015','880.00',True,u'',2,114,None,132))
loader.save(create_finan_bankstatementitem(89,11,u'SLS 15/2015','1050.00',True,u'',2,115,None,132))
loader.save(create_finan_bankstatementitem(90,12,u'SLS 16/2015','196.00',True,u'',2,115,None,132))
loader.save(create_finan_bankstatementitem(91,13,u'SLS 17/2015','459.00',True,u'',2,116,None,132))
loader.save(create_finan_bankstatementitem(92,14,u'SLS 18/2015','880.00',True,u'',2,117,None,132))
loader.flush_deferred_objects()
| 93.692308 | 99 | 0.754164 | 1,642 | 8,526 | 3.810597 | 0.101705 | 0.302381 | 0.217357 | 0.28528 | 0.798306 | 0.778968 | 0.691705 | 0.320121 | 0.266262 | 0.266262 | 0 | 0.21258 | 0.022871 | 8,526 | 90 | 100 | 94.733333 | 0.538471 | 0.011612 | 0 | 0 | 0 | 0 | 0.170703 | 0.003086 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ee9d3ad979c4e3739b43f8dfb84a47d62152a0d | 5,417 | py | Python | frontend/amundsen_application/api/preview/v0.py | defendercrypt/amundsen | 83c728b646020f60cf2270c12e766fe4af8c9948 | [
"Apache-2.0"
] | 2,072 | 2020-08-11T20:16:48.000Z | 2022-03-31T07:04:05.000Z | frontend/amundsen_application/api/preview/v0.py | defendercrypt/amundsen | 83c728b646020f60cf2270c12e766fe4af8c9948 | [
"Apache-2.0"
] | 795 | 2020-08-11T15:24:39.000Z | 2022-03-31T18:56:13.000Z | frontend/amundsen_application/api/preview/v0.py | defendercrypt/amundsen | 83c728b646020f60cf2270c12e766fe4af8c9948 | [
"Apache-2.0"
] | 671 | 2020-08-11T20:39:56.000Z | 2022-03-31T08:39:07.000Z | # Copyright Contributors to the Amundsen project.
# SPDX-License-Identifier: Apache-2.0
import json
import logging
from pkg_resources import iter_entry_points
from http import HTTPStatus
from flask import Response, jsonify, make_response, request, current_app as app
from flask.blueprints import Blueprint
from marshmallow import ValidationError
from werkzeug.utils import import_string
from amundsen_application.models.preview_data import PreviewDataSchema
LOGGER = logging.getLogger(__name__)
PREVIEW_CLIENT_CLASS = None
PREVIEW_CLIENT_INSTANCE = None
for entry_point in iter_entry_points(group='preview_client', name='table_preview_client_class'):
preview_client_class = entry_point.load()
if preview_client_class is not None:
PREVIEW_CLIENT_CLASS = preview_client_class
preview_blueprint = Blueprint('preview', __name__, url_prefix='/api/preview/v0')
@preview_blueprint.route('/', methods=['POST'])
def get_table_preview() -> Response:
global PREVIEW_CLIENT_INSTANCE
global PREVIEW_CLIENT_CLASS
try:
if PREVIEW_CLIENT_INSTANCE is None:
if PREVIEW_CLIENT_CLASS is not None:
PREVIEW_CLIENT_INSTANCE = PREVIEW_CLIENT_CLASS()
logging.warning('Setting preview_client via entry_point is DEPRECATED and '
'will be removed in a future version')
elif (app.config['PREVIEW_CLIENT_ENABLED']
and app.config['PREVIEW_CLIENT'] is not None):
PREVIEW_CLIENT_CLASS = import_string(app.config['PREVIEW_CLIENT'])
PREVIEW_CLIENT_INSTANCE = PREVIEW_CLIENT_CLASS()
else:
payload = jsonify({'previewData': {}, 'msg': 'A client for the preview feature must be configured'})
return make_response(payload, HTTPStatus.NOT_IMPLEMENTED)
response = PREVIEW_CLIENT_INSTANCE.get_preview_data(params=request.get_json())
status_code = response.status_code
preview_data = json.loads(response.data).get('preview_data')
if status_code == HTTPStatus.OK:
# validate the returned table preview data
try:
data = PreviewDataSchema().load(preview_data)
payload = jsonify({'previewData': data, 'msg': 'Success'})
except ValidationError as err:
logging.error('Preview data dump returned errors: ' + str(err.messages))
raise Exception('The preview client did not return a valid PreviewData object')
else:
message = 'Encountered error: Preview client request failed with code ' + str(status_code)
logging.error(message)
# only necessary to pass the error text
payload = jsonify({'previewData': {'error_text': preview_data.get('error_text', '')}, 'msg': message})
return make_response(payload, status_code)
except Exception as e:
message = f'Encountered exception: {str(e)}'
logging.exception(message)
payload = jsonify({'previewData': {}, 'msg': message})
return make_response(payload, HTTPStatus.INTERNAL_SERVER_ERROR)
@preview_blueprint.route('/feature_preview', methods=['POST'])
def get_feature_preview() -> Response:
global PREVIEW_CLIENT_INSTANCE
global PREVIEW_CLIENT_CLASS
try:
if PREVIEW_CLIENT_INSTANCE is None:
if PREVIEW_CLIENT_CLASS is not None:
PREVIEW_CLIENT_INSTANCE = PREVIEW_CLIENT_CLASS()
logging.warning('Setting preview_client via entry_point is DEPRECATED and '
'will be removed in a future version')
elif (app.config['PREVIEW_CLIENT_ENABLED']
and app.config['PREVIEW_CLIENT'] is not None):
PREVIEW_CLIENT_CLASS = import_string(app.config['PREVIEW_CLIENT'])
PREVIEW_CLIENT_INSTANCE = PREVIEW_CLIENT_CLASS()
else:
payload = jsonify({'previewData': {}, 'msg': 'A client for the preview feature must be configured'})
return make_response(payload, HTTPStatus.NOT_IMPLEMENTED)
response = PREVIEW_CLIENT_INSTANCE.get_feature_preview_data(params=request.get_json())
status_code = response.status_code
preview_data = json.loads(response.data).get('preview_data')
if status_code == HTTPStatus.OK:
# validate the returned feature preview data
try:
data = PreviewDataSchema().load(preview_data)
payload = jsonify({'previewData': data, 'msg': 'Success'})
except ValidationError as err:
logging.error('Preview data dump returned errors: ' + str(err.messages))
raise Exception('The preview client did not return a valid PreviewData object')
else:
message = 'Encountered error: Preview client request failed with code ' + str(status_code)
logging.error(message)
# only necessary to pass the error text
payload = jsonify({'previewData': {'error_text': preview_data.get('error_text', '')}, 'msg': message})
return make_response(payload, status_code)
except Exception as e:
message = f'Encountered exception: {str(e)}'
logging.exception(message)
payload = jsonify({'previewData': {}, 'msg': message})
return make_response(payload, HTTPStatus.INTERNAL_SERVER_ERROR)
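# Example request (illustrative; the exact params depend on the configured preview client):
# POST /api/preview/v0/ with a JSON body such as
# {"database": "hive", "cluster": "gold", "schema": "core", "tableName": "orders"}
# A successful call returns {"previewData": {...}, "msg": "Success"} with HTTP 200.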
| 47.938053 | 116 | 0.669559 | 615 | 5,417 | 5.669919 | 0.203252 | 0.149125 | 0.082592 | 0.037855 | 0.79696 | 0.79696 | 0.78004 | 0.78004 | 0.78004 | 0.767995 | 0 | 0.000733 | 0.2446 | 5,417 | 112 | 117 | 48.366071 | 0.851417 | 0.044859 | 0 | 0.747253 | 0 | 0 | 0.199923 | 0.013548 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021978 | false | 0 | 0.120879 | 0 | 0.208791 | 0.043956 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f43cd5b0453abbd73fe68134a618d48c431ddc6 | 95 | py | Python | mineapy/__init__.py | vpandey-om/mineapy | a533196244d17aa69a5846eb6e197bd4899797b0 | [
"Apache-2.0"
] | null | null | null | mineapy/__init__.py | vpandey-om/mineapy | a533196244d17aa69a5846eb6e197bd4899797b0 | [
"Apache-2.0"
] | null | null | null | mineapy/__init__.py | vpandey-om/mineapy | a533196244d17aa69a5846eb6e197bd4899797b0 | [
"Apache-2.0"
] | null | null | null | # __init__.py
from .core.taskEnrich import TaskEnrichment
from .core.rxnExp import ReactionExp
| 23.75 | 43 | 0.831579 | 12 | 95 | 6.25 | 0.75 | 0.213333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 95 | 3 | 44 | 31.666667 | 0.882353 | 0.115789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f7774960f769e54c568bdc74840f7065f2feedc | 54 | py | Python | conceptnet_rocks/__init__.py | ldtoolkit/conceptnet-rocks | 4dda14c6a2a0fdd036a49ad20927a46bd8121848 | [
"Apache-2.0"
] | 9 | 2020-11-17T22:01:21.000Z | 2022-02-06T14:38:59.000Z | conceptnet_rocks/__init__.py | ldtoolkit/conceptnet-rocks | 4dda14c6a2a0fdd036a49ad20927a46bd8121848 | [
"Apache-2.0"
] | null | null | null | conceptnet_rocks/__init__.py | ldtoolkit/conceptnet-rocks | 4dda14c6a2a0fdd036a49ad20927a46bd8121848 | [
"Apache-2.0"
] | null | null | null | from conceptnet_rocks.database import AssertionFinder
| 27 | 53 | 0.907407 | 6 | 54 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 54 | 1 | 54 | 54 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
488f5d67b7ac2fe657a5d2907e565b05c480e805 | 272 | py | Python | src/facet/data/partition/__init__.py | skandupmanyu/facet | 545ade531ecfa617bad346ebef12955afa876cca | [
"Apache-2.0"
] | null | null | null | src/facet/data/partition/__init__.py | skandupmanyu/facet | 545ade531ecfa617bad346ebef12955afa876cca | [
"Apache-2.0"
] | null | null | null | src/facet/data/partition/__init__.py | skandupmanyu/facet | 545ade531ecfa617bad346ebef12955afa876cca | [
"Apache-2.0"
] | null | null | null | """
Partitioners to generate series of numerical and categorical values to be used
as inputs for simulations.
- Numerical partitions are intervals, represented by their central value.
- Categorical partitions are the categories themselves.
"""
from ._partition import *
| 27.2 | 78 | 0.797794 | 34 | 272 | 6.352941 | 0.852941 | 0.12037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150735 | 272 | 9 | 79 | 30.222222 | 0.935065 | 0.867647 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
489e63234b22c8d38c2ca2b00af9632517b032e1 | 132 | py | Python | test/run/t302.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/run/t302.py | csev/skulpt | 9aa25b7dbf29f23ee8d3140d01a6f4353d12e66f | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/run/t302.py | csev/skulpt | 9aa25b7dbf29f23ee8d3140d01a6f4353d12e66f | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | # Test that re-setting the value in a dict doesn't mess with its length
d = {'foo':2}
print len(d), d
d['foo'] = 13
print len(d), d
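# Expected output (under Python 2, which this test's print syntax requires) --
# the length stays 1 both times:
# 1 {'foo': 2}
# 1 {'foo': 13}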
| 22 | 71 | 0.651515 | 29 | 132 | 2.965517 | 0.724138 | 0.069767 | 0.209302 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028302 | 0.19697 | 132 | 5 | 72 | 26.4 | 0.783019 | 0.522727 | 0 | 0.5 | 0 | 0 | 0.098361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
489f5606bf73e9310f93bc1a1bc449ed57c1169d | 259 | py | Python | Demo/Demo_gym/envs/unittest/__init__.py | Remosy/iceHocekeyIRL | 1ffeaf8a9bd9585038629be41a2da552e0a4473b | [
"MIT"
] | null | null | null | Demo/Demo_gym/envs/unittest/__init__.py | Remosy/iceHocekeyIRL | 1ffeaf8a9bd9585038629be41a2da552e0a4473b | [
"MIT"
] | 3 | 2019-03-09T02:35:24.000Z | 2019-09-27T11:05:01.000Z | Demo/Demo_gym/envs/unittest/__init__.py | Remosy/iceHocekeyIRL | 1ffeaf8a9bd9585038629be41a2da552e0a4473b | [
"MIT"
] | null | null | null | from Demo_gym.envs.unittest.cube_crash import CubeCrash
from Demo_gym.envs.unittest.cube_crash import CubeCrashSparse
from Demo_gym.envs.unittest.cube_crash import CubeCrashScreenBecomesBlack
from Demo_gym.envs.unittest.memorize_digits import MemorizeDigits
| 43.166667 | 73 | 0.888031 | 36 | 259 | 6.166667 | 0.388889 | 0.144144 | 0.198198 | 0.27027 | 0.617117 | 0.513514 | 0.513514 | 0.513514 | 0 | 0 | 0 | 0 | 0.065637 | 259 | 5 | 74 | 51.8 | 0.917355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
48a43db6c315d7eabb9426087b90c38a51818939 | 60 | py | Python | analogue_algorithm/__init__.py | tjwixtrom/analogue_algorithm | 2627556b9c8282fa0872f66daecc35b28362fd82 | [
"BSD-3-Clause"
] | 2 | 2019-08-05T13:44:18.000Z | 2022-02-16T14:06:54.000Z | analogue_algorithm/__init__.py | tjwixtrom/adaptive_WRF | 2627556b9c8282fa0872f66daecc35b28362fd82 | [
"BSD-3-Clause"
] | 3 | 2018-07-25T16:33:09.000Z | 2018-08-23T14:57:08.000Z | analogue_algorithm/__init__.py | tjwixtrom/analogue_algorithm | 2627556b9c8282fa0872f66daecc35b28362fd82 | [
"BSD-3-Clause"
] | null | null | null | from .calc import *
from .plots import *
from .wrf import *
| 15 | 20 | 0.7 | 9 | 60 | 4.666667 | 0.555556 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 60 | 3 | 21 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
48e54faa7e5016db1ebd55ddd6a9f5cbbc0d77e4 | 190 | py | Python | tests/utils.py | yuezheng/Kado | ad26a7c3b90a6a956a799471dac1cbfd457cfab5 | [
"MIT"
] | null | null | null | tests/utils.py | yuezheng/Kado | ad26a7c3b90a6a956a799471dac1cbfd457cfab5 | [
"MIT"
] | null | null | null | tests/utils.py | yuezheng/Kado | ad26a7c3b90a6a956a799471dac1cbfd457cfab5 | [
"MIT"
] | null | null | null | import asyncio
loop = asyncio.get_event_loop()
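# The decorator below runs a coroutine-based test synchronously on the module-level loop.
# Usage sketch (hypothetical test name and body):
#   @async_testcase
#   async def test_fetch():
#       assert await fetch() is not None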
def async_testcase(coro):
def wrapper(*args, **kwargs):
return loop.run_until_complete(coro(*args, **kwargs))
return wrapper | 21.111111 | 61 | 0.705263 | 25 | 190 | 5.16 | 0.64 | 0.155039 | 0.248062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173684 | 190 | 9 | 62 | 21.111111 | 0.821656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
5b10b3e85cfd54e39036714308d61eb420624c0e | 353 | py | Python | py_ms_cognitive/__init__.py | mfridson/msf-image-capstone | c338eaf6ef401a8c70000775d0bfafdc5ce3f79c | [
"MIT"
] | null | null | null | py_ms_cognitive/__init__.py | mfridson/msf-image-capstone | c338eaf6ef401a8c70000775d0bfafdc5ce3f79c | [
"MIT"
] | null | null | null | py_ms_cognitive/__init__.py | mfridson/msf-image-capstone | c338eaf6ef401a8c70000775d0bfafdc5ce3f79c | [
"MIT"
] | null | null | null | from .py_ms_cognitive_search.py_ms_cognitive_web_search import PyMsCognitiveWebSearch
from .py_ms_cognitive_search.py_ms_cognitive_news_search import PyMsCognitiveNewsSearch
from .py_ms_cognitive_search.py_ms_cognitive_video_search import PyMsCognitiveVideoSearch
from .py_ms_cognitive_search.py_ms_cognitive_image_search import PyMsCognitiveImageSearch | 88.25 | 89 | 0.934844 | 48 | 353 | 6.291667 | 0.291667 | 0.10596 | 0.344371 | 0.225166 | 0.476821 | 0.476821 | 0.476821 | 0.476821 | 0 | 0 | 0 | 0 | 0.042493 | 353 | 4 | 90 | 88.25 | 0.893491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2933015b557671510b72c295744f7e47ebfdf34 | 77 | py | Python | FuzzyTree/__init__.py | juanmabelda/FuzzyClassifier | 8020756e062ab4d2a36d2a6cb40640fbab69f801 | [
"MIT"
] | 1 | 2021-09-30T08:54:58.000Z | 2021-09-30T08:54:58.000Z | FuzzyTree/__init__.py | juanmabelda/FuzzyClassifier | 8020756e062ab4d2a36d2a6cb40640fbab69f801 | [
"MIT"
] | null | null | null | FuzzyTree/__init__.py | juanmabelda/FuzzyClassifier | 8020756e062ab4d2a36d2a6cb40640fbab69f801 | [
"MIT"
] | 2 | 2018-07-17T03:05:42.000Z | 2021-10-14T08:25:31.000Z | from .FT_optimize import *
from .FuzzyVars import *
from .FuzzyTree import *
| 19.25 | 26 | 0.766234 | 10 | 77 | 5.8 | 0.6 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155844 | 77 | 3 | 27 | 25.666667 | 0.892308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2c61c5ee326af97cd94908abb0c0eb301f2d7ff | 45 | py | Python | gwrappy/gmail/__init__.py | hairizuanbinnoorazman/gwrappy | aae569eb87d0aeac6126ccceac8a208b8dfdcf51 | [
"Apache-2.0"
] | 5 | 2016-09-21T10:27:05.000Z | 2017-03-13T11:37:16.000Z | gwrappy/gmail/__init__.py | hairizuanbinnoorazman/gwrappy | aae569eb87d0aeac6126ccceac8a208b8dfdcf51 | [
"Apache-2.0"
] | 1 | 2021-11-15T17:46:52.000Z | 2021-11-15T17:46:52.000Z | gwrappy/gmail/__init__.py | hairizuanbinnoorazman/gwrappy | aae569eb87d0aeac6126ccceac8a208b8dfdcf51 | [
"Apache-2.0"
] | 2 | 2016-09-21T10:34:59.000Z | 2017-04-05T10:38:10.000Z | from gwrappy.gmail.gmail import GmailUtility
| 22.5 | 44 | 0.866667 | 6 | 45 | 6.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2ca3de2589fd5ff18ebd273e770335616f66368 | 2,620 | py | Python | crowdgezwitscher/twitter/migrations/0006_auto_20170408_2226.py | Strassengezwitscher/Crowdgezwitscher | afdd433acb35c1a554ba79464b744975de065151 | [
"MIT"
] | 4 | 2016-07-22T07:20:31.000Z | 2016-11-13T18:13:34.000Z | crowdgezwitscher/twitter/migrations/0006_auto_20170408_2226.py | Strassengezwitscher/Strassengezwitscher | afdd433acb35c1a554ba79464b744975de065151 | [
"MIT"
] | 402 | 2016-04-26T08:38:17.000Z | 2022-03-11T23:26:49.000Z | crowdgezwitscher/twitter/migrations/0006_auto_20170408_2226.py | Strassengezwitscher/Crowdgezwitscher | afdd433acb35c1a554ba79464b744975de065151 | [
"MIT"
] | 1 | 2018-01-14T16:58:57.000Z | 2018-01-14T16:58:57.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10 on 2017-04-08 22:26
from django.db import migrations
import base
def forwards_func(apps, schema_editor):
"""
Convert Tweet's and TwitterAccount's fields storing ID values to fit UnsignedBigIntegerField.
This applies to Tweet's tweet_id and TwitterAccount's account_id and last_known_tweet_id.
Do so by subtracting 2^63 to use the full 64-bit value range.
"""
Tweet = apps.get_model('twitter', 'Tweet')
TwitterAccount = apps.get_model('twitter', 'TwitterAccount')
db_alias = schema_editor.connection.alias
for tweet in Tweet.objects.using(db_alias).all():
tweet.tweet_id = int(tweet.tweet_id) - 2 ** 63
tweet.save()
for account in TwitterAccount.objects.using(db_alias).all():
account.account_id = int(account.account_id) - 2 ** 63
if account.last_known_tweet_id == '':
account.last_known_tweet_id = 0 - 2 ** 63
else:
account.last_known_tweet_id = int(account.last_known_tweet_id) - 2 ** 63
account.save()
def reverse_func(apps, schema_editor):
"""
Convert Tweet's and TwitterAccount's fields storing ID values to signed 64-bit value range again.
This applies to Tweet's tweet_id and TwitterAccount's account_id and last_known_tweet_id.
Do so by adding 2^63.
"""
Tweet = apps.get_model('twitter', 'Tweet')
TwitterAccount = apps.get_model('twitter', 'TwitterAccount')
db_alias = schema_editor.connection.alias
for tweet in Tweet.objects.using(db_alias).all():
tweet.tweet_id = int(tweet.tweet_id) + 2 ** 63
tweet.save()
for account in TwitterAccount.objects.using(db_alias).all():
account.account_id = int(account.account_id) + 2 ** 63
account.last_known_tweet_id = int(account.last_known_tweet_id) + 2 ** 63
account.save()
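# Worked example: an unsigned ID of 1234567890 is stored as
# 1234567890 - 2**63 = -9223372035620207918, which fits a signed 64-bit column;
# reverse_func adds 2**63 back to recover the original value.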
class Migration(migrations.Migration):
dependencies = [
('twitter', '0005_auto_20170226_1348'),
]
operations = [
migrations.RunPython(forwards_func, reverse_func),
migrations.AlterField(
model_name='tweet',
name='tweet_id',
field=base.fields.UnsignedBigIntegerField(unique=True),
),
migrations.AlterField(
model_name='twitteraccount',
name='account_id',
field=base.fields.UnsignedBigIntegerField(unique=True),
),
migrations.AlterField(
model_name='twitteraccount',
name='last_known_tweet_id',
field=base.fields.UnsignedBigIntegerField(default=0),
),
]
| 33.164557 | 101 | 0.659924 | 334 | 2,620 | 4.982036 | 0.260479 | 0.067308 | 0.075721 | 0.086538 | 0.759615 | 0.731971 | 0.701923 | 0.701923 | 0.701923 | 0.701923 | 0 | 0.032371 | 0.233588 | 2,620 | 78 | 102 | 33.589744 | 0.796315 | 0.199618 | 0 | 0.489796 | 1 | 0 | 0.081015 | 0.011225 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.040816 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2e9df7f3bac1ad8904f4c3a13db92bd82b17e98 | 11,330 | py | Python | Brevet_US_4661747_Edwin_Gray_Power_Tube/Version_3/assembly_v1.py | Jay4C/Python-Macros-For_FreeCAD | 12ce5441a26731377fa43e86ccd2be675740d3a0 | [
"MIT"
] | null | null | null | Brevet_US_4661747_Edwin_Gray_Power_Tube/Version_3/assembly_v1.py | Jay4C/Python-Macros-For_FreeCAD | 12ce5441a26731377fa43e86ccd2be675740d3a0 | [
"MIT"
] | null | null | null | Brevet_US_4661747_Edwin_Gray_Power_Tube/Version_3/assembly_v1.py | Jay4C/Python-Macros-For_FreeCAD | 12ce5441a26731377fa43e86ccd2be675740d3a0 | [
"MIT"
] | null | null | null | import FreeCAD, Part, Drawing, math, Mesh
DOC = FreeCAD.activeDocument()
DOC_NAME = "assembly_v1"
def clear_doc():
# Clear the active document deleting all the objects
for obj in DOC.Objects:
DOC.removeObject(obj.Name)
def setview():
# Rearrange View
FreeCAD.Gui.SendMsgToActiveView("ViewFit")
FreeCAD.Gui.activeDocument().activeView().viewAxometric()
if DOC is None:
FreeCAD.newDocument(DOC_NAME)
FreeCAD.setActiveDocument(DOC_NAME)
DOC = FreeCAD.activeDocument()
else:
clear_doc()
# EPS = tolerance to use when cutting the parts
EPS = 0.10
EPS_C = EPS * -0.5
# part_tank
Mesh.insert(u"part_tank.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_tank").ShapeColor = (0.10,0.10,0.10)
FreeCAD.getDocument("assembly_v1").getObject("part_tank").Placement = App.Placement(App.Vector(0,0,0),App.Rotation(App.Vector(0,0,1),0))
FreeCADGui.getDocument("assembly_v1").getObject("part_tank").Transparency = 80
# part_support_laser_cutting _ 1
Mesh.insert(u"part_support_laser_cutting.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_support_laser_cutting").ShapeColor = (1.00,1.00,0.00)
FreeCAD.getDocument("assembly_v1").getObject("part_support_laser_cutting").Placement = App.Placement(App.Vector(0,0,-2),App.Rotation(App.Vector(0,1,0),0))
# part_tige_filetee_m8_1000l for the cathode
Mesh.insert(u"part_tige_filetee_m8_1000l.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_tige_filetee_m8_1000l").ShapeColor = (0.50,0.50,0.50)
FreeCAD.getDocument("assembly_v1").getObject("part_tige_filetee_m8_1000l").Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, -20),App.Rotation(App.Vector(0,0,1),0))
# part_tige_filetee_m8_1000l for the anode
Mesh.insert(u"part_tige_filetee_m8_1000l.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_tige_filetee_m8_1000l001").ShapeColor = (0.50,0.50,0.50)
FreeCAD.getDocument("assembly_v1").getObject("part_tige_filetee_m8_1000l001").Placement = App.Placement(App.Vector(-50/2 + 4 + 4, 0, -20),App.Rotation(App.Vector(0,0,1),0))
# Rank 1
# part_rondelle_8m for part_tige_filetee_m8_1000l
Mesh.insert(u"part_rondelle_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_rondelle_8m").ShapeColor = (0.30,0.20,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_rondelle_8m").Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, -3.5),App.Rotation(App.Vector(0,0,1),0))
# part_rondelle_8m for part_tige_filetee_m8_1000l001
Mesh.insert(u"part_rondelle_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_rondelle_8m001").ShapeColor = (0.30,0.20,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_rondelle_8m001").Placement = App.Placement(App.Vector(-50/2 + 4 + 4, 0, -3.5),App.Rotation(App.Vector(0,0,1),0))
# part_ecrou_8m for part_tige_filetee_m8_1000l
Mesh.insert(u"part_ecrou_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_ecrou_8m").ShapeColor = (0.25,0.25,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_ecrou_8m").Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, -11.5),App.Rotation(App.Vector(0,0,1),0))
# part_ecrou_8m for part_tige_filetee_m8_1000l001
Mesh.insert(u"part_ecrou_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_ecrou_8m001").ShapeColor = (0.25,0.25,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_ecrou_8m001").Placement = App.Placement(App.Vector(-50/2 + 4 + 4, 0, -11.5),App.Rotation(App.Vector(0,0,1),0))
# Rank 2
# part_rondelle_8m for part_tige_filetee_m8_1000l
Mesh.insert(u"part_rondelle_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_rondelle_8m002").ShapeColor = (0.30,0.20,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_rondelle_8m002").Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, 0),App.Rotation(App.Vector(0,0,1),0))
# part_rondelle_8m for part_tige_filetee_m8_1000l001
Mesh.insert(u"part_rondelle_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_rondelle_8m003").ShapeColor = (0.30,0.20,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_rondelle_8m003").Placement = App.Placement(App.Vector(-50/2 + 4 + 4, 0, 0),App.Rotation(App.Vector(0,0,1),0))
# part_ecrou_8m for part_tige_filetee_m8_1000l
Mesh.insert(u"part_ecrou_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_ecrou_8m002").ShapeColor = (0.25,0.25,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_ecrou_8m002").Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, 1.5),App.Rotation(App.Vector(0,0,1),0))
# part_ecrou_8m for part_tige_filetee_m8_1000l001
Mesh.insert(u"part_ecrou_8m.stl","assembly_v1")
FreeCADGui.getDocument("assembly_v1").getObject("part_ecrou_8m003").ShapeColor = (0.25,0.25,0.20)
FreeCAD.getDocument("assembly_v1").getObject("part_ecrou_8m003").Placement = App.Placement(App.Vector(-50/2 + 4 + 4, 0, 1.5),App.Rotation(App.Vector(0,0,1),0))
number_of_steps_electrode = 180
number_of_steps = number_of_steps_electrode * 2 + 2
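# FreeCAD names repeated insertions "<name>", "<name>001", ..., "<name>010", and so on;
# the index branches in the loops below rebuild that zero-padded suffix from the counter.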
# Insert part_equerre_assemblage_laser_cutting parts to space the electrodes
for i in range(0, number_of_steps):
location = (2.5*i + 9.5)
if i < 1:
Mesh.insert(u"part_equerre_assemblage_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting").Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, location), App.Rotation(App.Vector(0,0,1), 180))
FreeCADGui.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting").ShapeColor = (0.30,0.20,0.20)
elif 1 <= i < 10:
Mesh.insert(u"part_equerre_assemblage_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting00" + str(i)).Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, location), App.Rotation(App.Vector(0,0,1), 180))
FreeCADGui.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting00" + str(i)).ShapeColor = (0.30,0.20,0.20)
elif i < 100:
Mesh.insert(u"part_equerre_assemblage_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting0" + str(i)).Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, location), App.Rotation(App.Vector(0,0,1), 180))
FreeCADGui.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting0" + str(i)).ShapeColor = (0.30,0.20,0.20)
else:
Mesh.insert(u"part_equerre_assemblage_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting" + str(i)).Placement = App.Placement(App.Vector(50/2 - 4 - 4, 0, location), App.Rotation(App.Vector(0,0,1), 180))
FreeCADGui.getDocument("assembly_v1").getObject("part_equerre_assemblage_laser_cutting" + str(i)).ShapeColor = (0.30,0.20,0.20)
# Insert part_electrode_laser_cutting parts for the cathode
Mesh.insert(u"part_electrode_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_electrode_laser_cutting").Placement = App.Placement(App.Vector(0, 0, 11), App.Rotation(App.Vector(0, 0, 1), 0))
FreeCADGui.getDocument("assembly_v1").getObject("part_electrode_laser_cutting").ShapeColor = (0.60,0.40,0.20)
for i in range(0, number_of_steps_electrode):
location = 5*i + 16
if i < 9:
Mesh.insert(u"part_electrode_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_electrode_laser_cutting00" + str(i+1)).Placement = App.Placement(App.Vector(0, 0, location), App.Rotation(App.Vector(0, 0, 1), 0))
FreeCADGui.getDocument("assembly_v1").getObject("part_electrode_laser_cutting00" + str(i+1)).ShapeColor = (0.60,0.40,0.20)
elif i < 99:
Mesh.insert(u"part_electrode_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_electrode_laser_cutting0" + str(i+1)).Placement = App.Placement(App.Vector(0, 0, location), App.Rotation(App.Vector(0, 0, 1), 0))
FreeCADGui.getDocument("assembly_v1").getObject("part_electrode_laser_cutting0" + str(i+1)).ShapeColor = (0.60,0.40,0.20)
else:
Mesh.insert(u"part_electrode_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_electrode_laser_cutting" + str(i+1)).Placement = App.Placement(App.Vector(0, 0, location), App.Rotation(App.Vector(0, 0, 1), 0))
FreeCADGui.getDocument("assembly_v1").getObject("part_electrode_laser_cutting" + str(i+1)).ShapeColor = (0.60,0.40,0.20)
# Insert part_electrode_laser_cutting parts for the anode
for i in range(0, number_of_steps_electrode):
location = 5*i + 13.5
Mesh.insert(u"part_electrode_laser_cutting.stl","assembly_v1")
FreeCAD.getDocument("assembly_v1").getObject("part_electrode_laser_cutting" + str(i + number_of_steps_electrode + 1)).Placement = App.Placement(App.Vector(0, 0, location), App.Rotation(App.Vector(0, 0, 1), 180))
FreeCADGui.getDocument("assembly_v1").getObject("part_electrode_laser_cutting" + str(i + number_of_steps_electrode + 1)).ShapeColor = (0.20,0.40,0.60)
setview()
# Generate PNG files
file = 'assembly_v1_v3_'
# Shaded views
Gui.runCommand('Std_DrawStyle',5)
i = 1
Gui.activeDocument().activeView().viewIsometric()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewFront()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewTop()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewRight()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewRear()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewBottom()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewLeft()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
# Wireframe views
Gui.runCommand('Std_DrawStyle',2)
i += 1
Gui.activeDocument().activeView().viewIsometric()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewFront()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewTop()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewRight()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewRear()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewBottom()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
i += 1
Gui.activeDocument().activeView().viewLeft()
Gui.activeDocument().activeView().saveImage(file + str(i) + '.png',1117,388,'Current')
| 53.696682 | 215 | 0.744837 | 1,697 | 11,330 | 4.769004 | 0.078963 | 0.081552 | 0.111578 | 0.159397 | 0.903497 | 0.890523 | 0.884097 | 0.870011 | 0.849623 | 0.804522 | 0.000088 | 0.078184 | 0.084466 | 11,330 | 210 | 216 | 53.952381 | 0.701918 | 0.074404 | 0 | 0.471831 | 0 | 0 | 0.242473 | 0.111249 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | false | 0 | 0.007042 | 0 | 0.021127 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2f05b8491060776946784bb978aea415af8a3b0 | 267 | py | Python | commons/helm/exceptions.py | unikubehq/commons | d4e64ca400d4ffe388cb9470bfce004a301e4be1 | [
"Apache-2.0"
] | 2 | 2021-06-17T07:50:57.000Z | 2021-08-08T11:53:40.000Z | commons/helm/exceptions.py | unikubehq/commons | d4e64ca400d4ffe388cb9470bfce004a301e4be1 | [
"Apache-2.0"
] | 34 | 2021-06-10T14:30:36.000Z | 2022-02-21T08:23:51.000Z | commons/helm/exceptions.py | unikubehq/commons | d4e64ca400d4ffe388cb9470bfce004a301e4be1 | [
"Apache-2.0"
] | null | null | null | class RepositoryBranchUnavailable(Exception):
pass
class RepositoryAuthenticationFailed(Exception):
pass
class RepositoryCloningFailed(Exception):
pass
class HelmDependencyError(Exception):
pass
class HelmChartRenderError(Exception):
pass
| 14.052632 | 48 | 0.782772 | 20 | 267 | 10.45 | 0.4 | 0.311005 | 0.344498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161049 | 267 | 18 | 49 | 14.833333 | 0.933036 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d2fd6f259b01f9da4b448b00fd8cb8c764c72a79 | 137 | py | Python | backend/app/translation/components/__init__.py | griviala/garpix_page | 55f1d9bc6d1de29d18e15369bebcbef18811b5a4 | [
"MIT"
] | null | null | null | backend/app/translation/components/__init__.py | griviala/garpix_page | 55f1d9bc6d1de29d18e15369bebcbef18811b5a4 | [
"MIT"
] | null | null | null | backend/app/translation/components/__init__.py | griviala/garpix_page | 55f1d9bc6d1de29d18e15369bebcbef18811b5a4 | [
"MIT"
] | null | null | null | from .text import TextComponentTranslationOptions # noqa
from .text_description import TextDescriptionComponentTranslationOptions # noqa
| 45.666667 | 79 | 0.883212 | 11 | 137 | 10.909091 | 0.636364 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087591 | 137 | 2 | 80 | 68.5 | 0.96 | 0.065693 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96212ff19d59c1a6d97c48690333d22c2deef703 | 5,456 | py | Python | numerik061.py | matzegltg/miau | 89b580baccbd258fbfd81bc19b46603a07873f14 | [
"MIT"
] | null | null | null | numerik061.py | matzegltg/miau | 89b580baccbd258fbfd81bc19b46603a07873f14 | [
"MIT"
] | 1 | 2021-05-07T15:50:51.000Z | 2021-05-07T15:50:51.000Z | numerik061.py | matzegltg/miau | 89b580baccbd258fbfd81bc19b46603a07873f14 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
import math as mth
def explizitEuler(a, b, p0, h, n):
t = [0]
y = [p0]
for i in range(0, n):
t.append(t[i]+h)
y.append(y[i]+h*(a*y[i]-b*y[i]**2))
return t,y
def heun(a,b,p0,h,n):
t = [0]
y = [p0]
for i in range(0,n):
t.append(t[i]+h)
ytilde = y[i] + h*(a*y[i]-b*y[i]**2)
y.append(y[i] + h/2 * ((a*y[i]-b*y[i]**2) + (a*ytilde - b*ytilde**2)))
return t, y
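# Both integrators solve the logistic ODE p'(t) = a*p - b*p**2 with p(0) = p0.
# Its closed-form solution, used below as the reference curve, is
# p(t) = a / (b + (a/p0 - b) * exp(-a*t)); with a = 2, b = 0.01, p0 = 1 this is
# exactly the 2 / (0.01 + (2 - 0.01) * exp(-2*t)) expression evaluated in the plots.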
def aufgabeA():
t1, y1 = explizitEuler(a=2, b=0.01, p0=1, h=0.01, n=1000)
t20, y20 = explizitEuler(a=2, b=0.01, p0=20, h=0.01, n=1000)
t100, y100 = explizitEuler(a=2, b=0.01, p0=100, h=0.01, n=1000)
t200, y200 = explizitEuler(a=2, b=0.01, p0=200, h=0.01, n=1000)
t400, y400 = explizitEuler(a=2, b=0.01, p0=400, h=0.01, n=1000)
# Solution with p0 = 1
#t = np.linspace(0,10, 1000)
#y = 2/(0.01+(2-0.01)*np.exp(-2*t))
#plt.plot(t, y, "--", color = "grey", label = "solution for p0 = 1")
plt.plot(t1, y1, label = "p0 = 1")
plt.plot(t20, y20, label = "p0 = 20")
plt.plot(t100, y100, label = "p0 = 100")
plt.plot(t200, y200, label = "p0 = 200")
plt.plot(t400, y400, label = "p0 = 400")
plt.title("Aufgabe 6a")
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(loc="lower right")
plt.grid(True)
plt.show()
def aufgabeAHeun():
t1, y1 = heun(a=2, b=0.01, p0=1, h=0.01, n=1000)
t20, y20 = heun(a=2, b=0.01, p0=20, h=0.01, n=1000)
t100, y100 = heun(a=2, b=0.01, p0=100, h=0.01, n=1000)
t200, y200 = heun(a=2, b=0.01, p0=200, h=0.01, n=1000)
t400, y400 = heun(a=2, b=0.01, p0=400, h=0.01, n=1000)
# Solution with p0 = 1
#t = np.linspace(0,10, 1000)
#y = 2/(0.01+(2-0.01)*np.exp(-2*t))
#plt.plot(t, y, "--", color = "grey", label = "solution for p0 = 1")
plt.plot(t1, y1, label = "p0 = 1")
plt.plot(t20, y20, label = "p0 = 20")
plt.plot(t100, y100, label = "p0 = 100")
plt.plot(t200, y200, label = "p0 = 200")
plt.plot(t400, y400, label = "p0 = 400")
plt.title("Aufgabe 6a - Verfahren von Heun")
plt.xlabel('t')
plt.ylabel('p(t)')
plt.legend(loc="lower right")
plt.grid(True)
plt.show()
def aufgabeB():
t1, y1 = explizitEuler(a=2, b=0.01, p0=1, h=0.01, n=2000)
t20, y20 = explizitEuler(a=2, b=0.01, p0=1, h=0.1, n=200)
t100, y100 = explizitEuler(a=2, b=0.01, p0=1, h=0.5, n=40)
t200, y200 = explizitEuler(a=2, b=0.01, p0=1, h=1, n=20)
# Solution with p0 = 1
t = np.linspace(0,10, 1000)
y = 2/(0.01+(2-0.01)*np.exp(-2*t))
plt.plot(t, y, "--", color = "grey", label = "solution for p0 = 1")
plt.plot(t1, y1, label = "h = 0.01")
plt.plot(t20, y20, label = "h = 0.1")
plt.plot(t100, y100, label = "h = 0.5")
plt.plot(t200, y200, label = "h = 1")
plt.title("Aufgabe 6b")
plt.xlabel('t')
plt.ylabel('p(t)')
plt.legend(loc="lower right")
plt.grid(True)
plt.show()
def aufgabeBHeun():
t1, y1 = heun(a=2, b=0.01, p0=1, h=0.01, n=2000)
t20, y20 = heun(a=2, b=0.01, p0=1, h=0.1, n=200)
t100, y100 = heun(a=2, b=0.01, p0=1, h=0.5, n=40)
t200, y200 = heun(a=2, b=0.01, p0=1, h=1, n=20)
# Solution with p0 = 1
t = np.linspace(0,10, 1000)
y = 2/(0.01+(2-0.01)*np.exp(-2*t))
plt.plot(t, y, "--", color = "grey", label = "solution for p0 = 1")
plt.plot(t1, y1, label = "h = 0.01")
plt.plot(t20, y20, label = "h = 0.1")
plt.plot(t100, y100, label = "h = 0.5")
plt.plot(t200, y200, label = "h = 1")
plt.title("Aufgabe 6b - Verfahren von Heun")
plt.xlabel('t')
plt.ylabel('p(t)')
plt.legend(loc="lower right")
plt.grid(True)
plt.show()
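# Convergence check: the explicit Euler method is first-order accurate (error ~ h)
# while Heun's method is second-order (error ~ h**2), so on the log-log error plot
# produced below the Heun curve should fall off with roughly twice the slope.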
def aufgabeC():
nachse = []
fehler = []
fehlerHeun = []
p3 = 2/(0.01+(2-0.01)*mth.exp(-2*3))
for i in range(5,21):
n = 2**i
h = 3/n
nachse.append(n)
p0 = 1
b = 0.01
a = 2
t1, y1 = explizitEuler(a, b, p0, h, n)
err = abs(y1[2**i] - p3)
fehler.append(err)
nachse = []
for i in range(5,21):
n = 2**i
h = 3/n
nachse.append(n)
p0 = 1
b = 0.01
a = 2
t1, y1 = heun(a, b, p0, h, n)
err = abs(y1[2**i] - p3)
fehlerHeun.append(err)
plt.plot(nachse, fehler, "o-", label = "expliziter Euler Fehler")
plt.plot(nachse, fehlerHeun, "o-", label = "Heun Fehler")
# Solution with p0 = 1
#t = np.linspace(0,10, 1000)
# y = 2/(0.01+(2-0.01)*np.exp(-2*t))
plt.title("Aufgabe 6c - Fehlervergleich")
plt.xlabel('n')
plt.xscale('log')
plt.ylabel('fehler')
plt.yscale('log')
plt.legend(loc="lower right")
plt.grid(True)
plt.show()
def aufgabeCgraphs():
nachse = []
fehler = []
p3 = 2/(0.01+(2-0.01)*mth.exp(-2*3))
for i in range(5,21):
n = 2**i
h = 3/n
nachse.append(n)
p0 = 1
b = 0.01
a = 2
t1, y1 = explizitEuler(a, b, p0, h, n)
err = abs(y1[2**i] - p3)
fehler.append(err)
plt.plot(t1, y1, label = f"n = {n}")
# Solution with p0 = 1
#t = np.linspace(0,10, 1000)
# y = 2/(0.01+(2-0.01)*np.exp(-2*t))
plt.title("Aufgabe 6c")
plt.xlabel('n')
#plt.xscale('log')
plt.ylabel('fehler')
#plt.yscale('log')
plt.legend(loc="lower right")
plt.grid(True)
plt.show()
aufgabeC() | 25.615023 | 78 | 0.506048 | 1,000 | 5,456 | 2.761 | 0.097 | 0.055415 | 0.030424 | 0.026078 | 0.886273 | 0.859109 | 0.851503 | 0.842811 | 0.842811 | 0.818182 | 0 | 0.167005 | 0.277859 | 5,456 | 213 | 79 | 25.615023 | 0.533756 | 0.095491 | 0 | 0.671233 | 0 | 0 | 0.091316 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054795 | false | 0 | 0.020548 | 0 | 0.089041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
824d5ddea51829f38ce291a0eb96fd3507c38844 | 48 | py | Python | torchtracer/utils/__init__.py | OIdiotLin/torchtracer | ca85c414b27e3edabaa2fea17ab9943fa668f952 | [
"MIT"
] | 52 | 2018-11-14T22:14:53.000Z | 2022-03-24T13:03:21.000Z | torchtracer/utils/__init__.py | OIdiotLin/torchtracer | ca85c414b27e3edabaa2fea17ab9943fa668f952 | [
"MIT"
] | 4 | 2018-11-14T08:46:45.000Z | 2020-12-14T11:28:54.000Z | torchtracer/utils/__init__.py | OIdiotLin/torchtracer | ca85c414b27e3edabaa2fea17ab9943fa668f952 | [
"MIT"
] | 6 | 2019-06-05T07:17:06.000Z | 2021-08-31T03:10:35.000Z | from torchtracer.utils.storeman import StoreMan
| 24 | 47 | 0.875 | 6 | 48 | 7 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82757b7f062a9d40e5d11f521661bd62ca0a2744 | 50,604 | py | Python | pytracking-master/ltr/data/processing.py | wsumel/AMMC | ef101878b4a97f07984186ea09146348c0526fa6 | [
"Apache-2.0"
] | 3 | 2021-12-02T11:34:37.000Z | 2021-12-19T09:30:10.000Z | pytracking-master/ltr/data/processing.py | wsumel/AMMC | ef101878b4a97f07984186ea09146348c0526fa6 | [
"Apache-2.0"
] | null | null | null | pytracking-master/ltr/data/processing.py | wsumel/AMMC | ef101878b4a97f07984186ea09146348c0526fa6 | [
"Apache-2.0"
] | null | null | null | import torch
import math
import numpy as np
import torchvision.transforms as transforms
from pytracking import TensorDict
import ltr.data.processing_utils as prutils
def stack_tensors(x):
if isinstance(x, (list, tuple)) and isinstance(x[0], torch.Tensor):
return torch.stack(x)
return x
class BaseProcessing:
""" Base class for Processing. Processing class is used to process the data returned by a dataset, before passing it
through the network. For example, it can be used to crop a search region around the object, apply various data
augmentations, etc."""
def __init__(self, transform=transforms.ToTensor(), train_transform=None, test_transform=None, joint_transform=None):
"""
args:
transform - The set of transformations to be applied on the images. Used only if train_transform or
test_transform is None.
train_transform - The set of transformations to be applied on the train images. If None, the 'transform'
argument is used instead.
test_transform - The set of transformations to be applied on the test images. If None, the 'transform'
argument is used instead.
joint_transform - The set of transformations to be applied 'jointly' on the train and test images. For
example, it can be used to convert both test and train images to grayscale.
"""
self.transform = {'train': transform if train_transform is None else train_transform,
'test': transform if test_transform is None else test_transform,
'joint': joint_transform}
def __call__(self, data: TensorDict):
raise NotImplementedError
class ATOMProcessing(BaseProcessing):
""" The processing class used for training ATOM. The images are processed in the following way.
    First, the target bounding box is jittered by adding some noise. Next, a square region (called the search region),
    centered at the jittered target center and of area search_area_factor^2 times the area of the jittered box, is
cropped from the image. The reason for jittering the target box is to avoid learning the bias that the target is
always at the center of the search region. The search region is then resized to a fixed size given by the
    argument output_sz. A set of proposals is then generated for the test images by jittering the ground truth box.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor, proposal_params,
mode='pair', *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
proposal_params - Arguments for the proposal generation process. See _generate_proposals for details.
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.proposal_params = proposal_params
self.mode = mode
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'train' or 'test' indicating train or test data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def _generate_proposals(self, box):
""" Generates proposals by adding noise to the input box
args:
box - input box
returns:
torch.Tensor - Array of shape (num_proposals, 4) containing proposals
torch.Tensor - Array of shape (num_proposals,) containing IoU overlap of each proposal with the input box. The
IoU is mapped to [-1, 1]
"""
# Generate proposals
num_proposals = self.proposal_params['boxes_per_frame']
proposal_method = self.proposal_params.get('proposal_method', 'default')
if proposal_method == 'default':
proposals = torch.zeros((num_proposals, 4))
gt_iou = torch.zeros(num_proposals)
for i in range(num_proposals):
proposals[i, :], gt_iou[i] = prutils.perturb_box(box, min_iou=self.proposal_params['min_iou'],
sigma_factor=self.proposal_params['sigma_factor'])
elif proposal_method == 'gmm':
proposals, _, _ = prutils.sample_box_gmm(box, self.proposal_params['proposal_sigma'],
num_samples=num_proposals)
gt_iou = prutils.iou(box.view(1,4), proposals.view(-1,4))
# Map to [-1, 1]
gt_iou = gt_iou * 2 - 1
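        # (Added note) worked example of this mapping: IoU 0.0 -> -1.0,
        # IoU 0.5 -> 0.0, IoU 1.0 -> 1.0, matching the [-1, 1] range stated
        # in the docstring.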
return proposals, gt_iou
def __call__(self, data: TensorDict):
"""
args:
data - The input data, should contain the following fields:
                    'train_images', 'test_images', 'train_anno', 'test_anno'
returns:
TensorDict - output data block with following fields:
'train_images', 'test_images', 'train_anno', 'test_anno', 'test_proposals', 'proposal_iou'
"""
# Apply joint transforms
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'] = self.transform['joint'](image=data['train_images'], bbox=data['train_anno'])
data['test_images'], data['test_anno'] = self.transform['joint'](image=data['test_images'], bbox=data['test_anno'], new_roll=False)
for s in ['train', 'test']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
# Crop image region centered at jittered_anno box
crops, boxes, _ = prutils.jittered_center_crop(data[s + '_images'], jittered_anno, data[s + '_anno'],
self.search_area_factor, self.output_sz)
# Apply transforms
data[s + '_images'], data[s + '_anno'] = self.transform[s](image=crops, bbox=boxes, joint=False)
# Generate proposals
frame2_proposals, gt_iou = zip(*[self._generate_proposals(a) for a in data['test_anno']])
data['test_proposals'] = list(frame2_proposals)
data['proposal_iou'] = list(gt_iou)
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
return data
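# --- Added illustrative sketch (not part of the original file) ---------------
# Minimal example of constructing ATOMProcessing. All numeric values below are
# assumptions chosen for illustration, not settings taken from this repository.
def _example_build_atom_processing():
    proposal_params = {'boxes_per_frame': 16, 'min_iou': 0.1,
                       'sigma_factor': [0.01, 0.05, 0.1, 0.2, 0.3]}
    return ATOMProcessing(search_area_factor=5.0, output_sz=288,
                          center_jitter_factor={'train': 0.0, 'test': 4.5},
                          scale_jitter_factor={'train': 0.0, 'test': 0.5},
                          proposal_params=proposal_params, mode='sequence')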
class KLBBregProcessing(BaseProcessing):
""" Based on ATOMProcessing. It supports training ATOM using the Maximum Likelihood or KL-divergence based learning
introduced in [https://arxiv.org/abs/1909.12297] and in PrDiMP [https://arxiv.org/abs/2003.12565].
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor, proposal_params,
mode='pair', *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
proposal_params - Arguments for the proposal generation process. See _generate_proposals for details.
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.proposal_params = proposal_params
self.mode = mode
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'train' or 'test' indicating train or test data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def _generate_proposals(self, box):
"""
"""
# Generate proposals
proposals, proposal_density, gt_density = prutils.sample_box_gmm(box, self.proposal_params['proposal_sigma'],
gt_sigma=self.proposal_params['gt_sigma'],
num_samples=self.proposal_params[
'boxes_per_frame'],
add_mean_box=self.proposal_params.get(
'add_mean_box', False))
return proposals, proposal_density, gt_density
def __call__(self, data: TensorDict):
"""
args:
data - The input data, should contain the following fields:
                    'train_images', 'test_images', 'train_anno', 'test_anno'
returns:
TensorDict - output data block with following fields:
'train_images', 'test_images', 'train_anno', 'test_anno', 'test_proposals', 'proposal_density', 'gt_density'
"""
# Apply joint transforms
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'] = self.transform['joint'](image=data['train_images'], bbox=data['train_anno'])
data['test_images'], data['test_anno'] = self.transform['joint'](image=data['test_images'], bbox=data['test_anno'], new_roll=False)
for s in ['train', 'test']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
# Crop image region centered at jittered_anno box
crops, boxes, _ = prutils.jittered_center_crop(data[s + '_images'], jittered_anno, data[s + '_anno'],
self.search_area_factor, self.output_sz)
# Apply transforms
data[s + '_images'], data[s + '_anno'] = self.transform[s](image=crops, bbox=boxes, joint=False)
# Generate proposals
proposals, proposal_density, gt_density = zip(*[self._generate_proposals(a) for a in data['test_anno']])
data['test_proposals'] = proposals
data['proposal_density'] = proposal_density
data['gt_density'] = gt_density
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
return data
class ATOMwKLProcessing(BaseProcessing):
"""Same as ATOMProcessing but using the GMM-based sampling of proposal boxes used in KLBBregProcessing."""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor, proposal_params,
mode='pair', *args, **kwargs):
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.proposal_params = proposal_params
self.mode = mode
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'train' or 'test' indicating train or test data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def _generate_proposals(self, box):
"""
"""
# Generate proposals
proposals, proposal_density, gt_density = prutils.sample_box_gmm(box, self.proposal_params['proposal_sigma'],
self.proposal_params['gt_sigma'],
self.proposal_params['boxes_per_frame'])
iou = prutils.iou_gen(proposals, box.view(1, 4))
return proposals, proposal_density, gt_density, iou
def __call__(self, data: TensorDict):
# Apply joint transforms
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'] = self.transform['joint'](image=data['train_images'], bbox=data['train_anno'])
data['test_images'], data['test_anno'] = self.transform['joint'](image=data['test_images'], bbox=data['test_anno'], new_roll=False)
for s in ['train', 'test']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
# Crop image region centered at jittered_anno box
crops, boxes, _ = prutils.jittered_center_crop(data[s + '_images'], jittered_anno, data[s + '_anno'],
self.search_area_factor, self.output_sz)
# Apply transforms
data[s + '_images'], data[s + '_anno'] = self.transform[s](image=crops, bbox=boxes, joint=False)
# Generate proposals
proposals, proposal_density, gt_density, proposal_iou = zip(
*[self._generate_proposals(a) for a in data['test_anno']])
data['test_proposals'] = proposals
data['proposal_density'] = proposal_density
data['gt_density'] = gt_density
data['proposal_iou'] = proposal_iou
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
return data
class DiMPProcessing(BaseProcessing):
""" The processing class used for training DiMP. The images are processed in the following way.
    First, the target bounding box is jittered by adding some noise. Next, a square region (called the search region),
    centered at the jittered target center and of area search_area_factor^2 times the area of the jittered box, is
cropped from the image. The reason for jittering the target box is to avoid learning the bias that the target is
always at the center of the search region. The search region is then resized to a fixed size given by the
argument output_sz. A Gaussian label centered at the target is generated for each image. These label functions are
    used for computing the loss of the predicted classification model on the test images. A set of proposals is
also generated for the test images by jittering the ground truth box. These proposals are used to train the
bounding box estimating branch.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor, crop_type='replicate',
max_scale_change=None, mode='pair', proposal_params=None, label_function_params=None, *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
crop_type - If 'replicate', the boundary pixels are replicated in case the search region crop goes out of image.
If 'inside', the search region crop is shifted/shrunk to fit completely inside the image.
If 'inside_major', the search region crop is shifted/shrunk to fit completely inside one axis of the image.
max_scale_change - Maximum allowed scale change when performing the crop (only applicable for 'inside' and 'inside_major')
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
proposal_params - Arguments for the proposal generation process. See _generate_proposals for details.
label_function_params - Arguments for the label generation process. See _generate_label_function for details.
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.crop_type = crop_type
self.mode = mode
self.max_scale_change = max_scale_change
self.proposal_params = proposal_params
self.label_function_params = label_function_params
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'train' or 'test' indicating train or test data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def _generate_proposals(self, box):
""" Generates proposals by adding noise to the input box
args:
box - input box
returns:
torch.Tensor - Array of shape (num_proposals, 4) containing proposals
torch.Tensor - Array of shape (num_proposals,) containing IoU overlap of each proposal with the input box. The
IoU is mapped to [-1, 1]
"""
# Generate proposals
num_proposals = self.proposal_params['boxes_per_frame']
proposal_method = self.proposal_params.get('proposal_method', 'default')
if proposal_method == 'default':
proposals = torch.zeros((num_proposals, 4))
gt_iou = torch.zeros(num_proposals)
for i in range(num_proposals):
proposals[i, :], gt_iou[i] = prutils.perturb_box(box, min_iou=self.proposal_params['min_iou'],
sigma_factor=self.proposal_params['sigma_factor'])
elif proposal_method == 'gmm':
proposals, _, _ = prutils.sample_box_gmm(box, self.proposal_params['proposal_sigma'],
num_samples=num_proposals)
gt_iou = prutils.iou(box.view(1, 4), proposals.view(-1, 4))
else:
raise ValueError('Unknown proposal method.')
# Map to [-1, 1]
gt_iou = gt_iou * 2 - 1
return proposals, gt_iou
def _generate_label_function(self, target_bb):
""" Generates the gaussian label function centered at target_bb
args:
target_bb - target bounding box (num_images, 4)
returns:
torch.Tensor - Tensor of shape (num_images, label_sz, label_sz) containing the label for each sample
"""
gauss_label = prutils.gaussian_label_function(target_bb.view(-1, 4), self.label_function_params['sigma_factor'],
self.label_function_params['kernel_sz'],
self.label_function_params['feature_sz'], self.output_sz,
end_pad_if_even=self.label_function_params.get('end_pad_if_even', True))
return gauss_label
def __call__(self, data: TensorDict):
"""
args:
data - The input data, should contain the following fields:
                    'train_images', 'test_images', 'train_anno', 'test_anno'
returns:
TensorDict - output data block with following fields:
'train_images', 'test_images', 'train_anno', 'test_anno', 'test_proposals', 'proposal_iou',
'test_label' (optional), 'train_label' (optional), 'test_label_density' (optional), 'train_label_density' (optional)
"""
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'] = self.transform['joint'](image=data['train_images'], bbox=data['train_anno'])
data['test_images'], data['test_anno'] = self.transform['joint'](image=data['test_images'], bbox=data['test_anno'], new_roll=False)
for s in ['train', 'test']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
crops, boxes = prutils.target_image_crop(data[s + '_images'], jittered_anno, data[s + '_anno'],
self.search_area_factor, self.output_sz, mode=self.crop_type,
max_scale_change=self.max_scale_change)
data[s + '_images'], data[s + '_anno'] = self.transform[s](image=crops, bbox=boxes, joint=False)
# Generate proposals
if self.proposal_params:
frame2_proposals, gt_iou = zip(*[self._generate_proposals(a) for a in data['test_anno']])
data['test_proposals'] = list(frame2_proposals)
data['proposal_iou'] = list(gt_iou)
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
# Generate label functions
if self.label_function_params is not None:
data['train_label'] = self._generate_label_function(data['train_anno'])
data['test_label'] = self._generate_label_function(data['test_anno'])
return data
class KLDiMPProcessing(BaseProcessing):
""" The processing class used for training PrDiMP that additionally supports the probabilistic classifier and
bounding box regressor. See DiMPProcessing for details.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor, crop_type='replicate',
max_scale_change=None, mode='pair', proposal_params=None,
label_function_params=None, label_density_params=None, *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
crop_type - If 'replicate', the boundary pixels are replicated in case the search region crop goes out of image.
If 'inside', the search region crop is shifted/shrunk to fit completely inside the image.
If 'inside_major', the search region crop is shifted/shrunk to fit completely inside one axis of the image.
max_scale_change - Maximum allowed scale change when performing the crop (only applicable for 'inside' and 'inside_major')
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
proposal_params - Arguments for the proposal generation process. See _generate_proposals for details.
label_function_params - Arguments for the label generation process. See _generate_label_function for details.
label_density_params - Arguments for the label density generation process. See _generate_label_function for details.
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.crop_type = crop_type
self.mode = mode
self.max_scale_change = max_scale_change
self.proposal_params = proposal_params
self.label_function_params = label_function_params
self.label_density_params = label_density_params
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'train' or 'test' indicating train or test data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def _generate_proposals(self, box):
""" Generate proposal sample boxes from a GMM proposal distribution and compute their ground-truth density.
This is used for ML and KL based regression learning of the bounding box regressor.
args:
box - input bounding box
"""
# Generate proposals
proposals, proposal_density, gt_density = prutils.sample_box_gmm(box, self.proposal_params['proposal_sigma'],
gt_sigma=self.proposal_params['gt_sigma'],
num_samples=self.proposal_params['boxes_per_frame'],
add_mean_box=self.proposal_params.get('add_mean_box', False))
return proposals, proposal_density, gt_density
def _generate_label_function(self, target_bb):
""" Generates the gaussian label function centered at target_bb
args:
target_bb - target bounding box (num_images, 4)
returns:
torch.Tensor - Tensor of shape (num_images, label_sz, label_sz) containing the label for each sample
"""
gauss_label = prutils.gaussian_label_function(target_bb.view(-1, 4), self.label_function_params['sigma_factor'],
self.label_function_params['kernel_sz'],
self.label_function_params['feature_sz'], self.output_sz,
end_pad_if_even=self.label_function_params.get('end_pad_if_even', True))
return gauss_label
def _generate_label_density(self, target_bb):
""" Generates the gaussian label density centered at target_bb
args:
target_bb - target bounding box (num_images, 4)
returns:
torch.Tensor - Tensor of shape (num_images, label_sz, label_sz) containing the label for each sample
"""
feat_sz = self.label_density_params['feature_sz'] * self.label_density_params.get('interp_factor', 1)
gauss_label = prutils.gaussian_label_function(target_bb.view(-1, 4), self.label_density_params['sigma_factor'],
self.label_density_params['kernel_sz'],
feat_sz, self.output_sz,
end_pad_if_even=self.label_density_params.get('end_pad_if_even', True),
density=True,
uni_bias=self.label_density_params.get('uni_weight', 0.0))
gauss_label *= (gauss_label > self.label_density_params.get('threshold', 0.0)).float()
if self.label_density_params.get('normalize', False):
g_sum = gauss_label.sum(dim=(-2,-1))
valid = g_sum>0.01
gauss_label[valid, :, :] /= g_sum[valid].view(-1, 1, 1)
gauss_label[~valid, :, :] = 1.0 / (gauss_label.shape[-2] * gauss_label.shape[-1])
gauss_label *= 1.0 - self.label_density_params.get('shrink', 0.0)
return gauss_label
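    # (Added note) With 'normalize' set, each label density sums to 1 over the
    # label grid; degenerate all-zero labels fall back to the uniform density
    # 1.0 / (label_sz * label_sz).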
def __call__(self, data: TensorDict):
"""
args:
data - The input data, should contain the following fields:
                    'train_images', 'test_images', 'train_anno', 'test_anno'
returns:
TensorDict - output data block with following fields:
'train_images', 'test_images', 'train_anno', 'test_anno', 'test_proposals', 'proposal_density', 'gt_density',
'test_label' (optional), 'train_label' (optional), 'test_label_density' (optional), 'train_label_density' (optional)
"""
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'] = self.transform['joint'](image=data['train_images'], bbox=data['train_anno'])
data['test_images'], data['test_anno'] = self.transform['joint'](image=data['test_images'], bbox=data['test_anno'], new_roll=False)
for s in ['train', 'test']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
crops, boxes = prutils.target_image_crop(data[s + '_images'], jittered_anno, data[s + '_anno'],
self.search_area_factor, self.output_sz, mode=self.crop_type,
max_scale_change=self.max_scale_change)
data[s + '_images'], data[s + '_anno'] = self.transform[s](image=crops, bbox=boxes, joint=False)
# Generate proposals
proposals, proposal_density, gt_density = zip(*[self._generate_proposals(a) for a in data['test_anno']])
data['test_proposals'] = proposals
data['proposal_density'] = proposal_density
data['gt_density'] = gt_density
for s in ['train', 'test']:
is_distractor = data.get('is_distractor_{}_frame'.format(s), None)
if is_distractor is not None:
for is_dist, box in zip(is_distractor, data[s+'_anno']):
if is_dist:
box[0] = 99999999.9
box[1] = 99999999.9
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
# Generate label functions
if self.label_function_params is not None:
data['train_label'] = self._generate_label_function(data['train_anno'])
data['test_label'] = self._generate_label_function(data['test_anno'])
if self.label_density_params is not None:
data['train_label_density'] = self._generate_label_density(data['train_anno'])
data['test_label_density'] = self._generate_label_density(data['test_anno'])
return data
class LWLProcessing(BaseProcessing):
""" The processing class used for training LWL. The images are processed in the following way.
    First, the target bounding box (computed using the segmentation mask) is jittered by adding some noise.
    Next, a rectangular region (called the search region), centered at the jittered target center and of area
    search_area_factor^2 times the area of the jittered box, is cropped from the image.
The reason for jittering the target box is to avoid learning the bias that the target is
always at the center of the search region. The search region is then resized to a fixed size given by the
argument output_sz. The argument 'crop_type' determines how out-of-frame regions are handled when cropping the
search region. For instance, if crop_type == 'replicate', the boundary pixels are replicated in case the search
region crop goes out of frame. If crop_type == 'inside_major', the search region crop is shifted/shrunk to fit
completely inside one axis of the image.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor, crop_type='replicate',
max_scale_change=None, mode='pair', new_roll=False, *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - The size (width, height) to which the search region is resized. The aspect ratio is always
preserved when resizing the search region
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
crop_type - Determines how out-of-frame regions are handled when cropping the search region.
If 'replicate', the boundary pixels are replicated in case the search region crop goes out of
image.
If 'inside', the search region crop is shifted/shrunk to fit completely inside the image.
If 'inside_major', the search region crop is shifted/shrunk to fit completely inside one axis
of the image.
max_scale_change - Maximum allowed scale change when shrinking the search region to fit the image
(only applicable to 'inside' and 'inside_major' cropping modes). In case the desired
shrink factor exceeds the max_scale_change, the search region is only shrunk to the
factor max_scale_change. Out-of-frame regions are then handled by replicating the
boundary pixels. If max_scale_change is set to None, unbounded shrinking is allowed.
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
            new_roll - Whether to draw a new random roll for the test frames when applying the joint
                       transformation. If False, the train and test frames share the same roll values. If True, a
                       new random roll is performed for the test frame transformations; thus, if performing random
                       flips, the set of train frames and the set of test frames will be flipped independently.
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.crop_type = crop_type
self.mode = mode
self.max_scale_change = max_scale_change
self.new_roll = new_roll
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'train' or 'test' indicating train or test data
returns:
torch.Tensor - jittered box
"""
if self.scale_jitter_factor.get('mode', 'gauss') == 'gauss':
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
elif self.scale_jitter_factor.get('mode', 'gauss') == 'uniform':
jittered_size = box[2:4] * torch.exp(torch.FloatTensor(2).uniform_(-self.scale_jitter_factor[mode],
self.scale_jitter_factor[mode]))
else:
            raise Exception('Unknown scale jitter mode.')
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode])).float()
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def __call__(self, data: TensorDict):
        # Apply joint transformations, i.e. all train/test frames in a sequence receive the transformation with the
        # same parameters
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'], data['train_masks'] = self.transform['joint'](
image=data['train_images'], bbox=data['train_anno'], mask=data['train_masks'])
data['test_images'], data['test_anno'], data['test_masks'] = self.transform['joint'](
image=data['test_images'], bbox=data['test_anno'], mask=data['test_masks'], new_roll=self.new_roll)
for s in ['train', 'test']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
orig_anno = data[s + '_anno']
# Extract a crop containing the target
crops, boxes, mask_crops = prutils.target_image_crop(data[s + '_images'], jittered_anno,
data[s + '_anno'], self.search_area_factor,
self.output_sz, mode=self.crop_type,
max_scale_change=self.max_scale_change,
masks=data[s + '_masks'])
# Apply independent transformations to each image
data[s + '_images'], data[s + '_anno'], data[s + '_masks'] = self.transform[s](image=crops, bbox=boxes, mask=mask_crops, joint=False)
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
return data
class KYSProcessing(BaseProcessing):
""" The processing class used for training KYS. The images are processed in the following way.
    First, the target bounding box is jittered by adding some noise. Next, a square region (called the search region),
    centered at the jittered target center and of area search_area_factor^2 times the area of the jittered box, is
cropped from the image. The reason for jittering the target box is to avoid learning the bias that the target is
always at the center of the search region. The search region is then resized to a fixed size given by the
argument output_sz. A Gaussian label centered at the target is generated for each image. These label functions are
    used for computing the loss of the predicted classification model on the test images. A set of proposals is
also generated for the test images by jittering the ground truth box. These proposals can be used to train the
bounding box estimating branch.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_param, scale_jitter_param,
proposal_params=None, label_function_params=None, min_crop_inside_ratio=0,
*args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
            center_jitter_param - A dict containing the amount of jittering to be applied to the target center before
                                    extracting the search region. See _generate_synthetic_motion for how the jittering is done.
            scale_jitter_param - A dict containing the amount of jittering to be applied to the target size before
                                    extracting the search region. See _generate_synthetic_motion for how the jittering is done.
proposal_params - Arguments for the proposal generation process. See _generate_proposals for details.
label_function_params - Arguments for the label generation process. See _generate_label_function for details.
min_crop_inside_ratio - Minimum amount of cropped search area which should be inside the image.
See _check_if_crop_inside_image for details.
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_param = center_jitter_param
self.scale_jitter_param = scale_jitter_param
self.proposal_params = proposal_params
self.label_function_params = label_function_params
self.min_crop_inside_ratio = min_crop_inside_ratio
def _check_if_crop_inside_image(self, box, im_shape):
x, y, w, h = box.tolist()
if w <= 0.0 or h <= 0.0:
return False
crop_sz = math.ceil(math.sqrt(w * h) * self.search_area_factor)
x1 = x + 0.5 * w - crop_sz * 0.5
x2 = x1 + crop_sz
y1 = y + 0.5 * h - crop_sz * 0.5
y2 = y1 + crop_sz
w_inside = max(min(x2, im_shape[1]) - max(x1, 0), 0)
h_inside = max(min(y2, im_shape[0]) - max(y1, 0), 0)
crop_area = ((x2 - x1) * (y2 - y1))
if crop_area > 0:
inside_ratio = w_inside * h_inside / crop_area
return inside_ratio > self.min_crop_inside_ratio
else:
return False
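    # (Added worked example) For a 10x10 box at (0, 0) with search_area_factor 4:
    # crop_sz = ceil(sqrt(100) * 4) = 40, the crop spans [-15, 25) on both axes,
    # so at most (25 * 25) / (40 * 40) ~ 0.39 of the crop lies inside the image.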
def _generate_synthetic_motion(self, boxes, images, mode):
num_frames = len(boxes)
out_boxes = []
for i in range(num_frames):
jittered_box = None
for _ in range(10):
orig_box = boxes[i]
jittered_size = orig_box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_param[mode + '_factor'])
if self.center_jitter_param.get(mode + '_mode', 'uniform') == 'uniform':
max_offset = (jittered_size.prod().sqrt() * self.center_jitter_param[mode + '_factor']).item()
offset_factor = (torch.rand(2) - 0.5)
jittered_center = orig_box[0:2] + 0.5 * orig_box[2:4] + max_offset * offset_factor
if self.center_jitter_param.get(mode + '_limit_motion', False) and i > 0:
prev_out_box_center = out_boxes[-1][:2] + 0.5 * out_boxes[-1][2:]
if abs(jittered_center[0] - prev_out_box_center[0]) > out_boxes[-1][2:].prod().sqrt() * 2.5:
jittered_center[0] = orig_box[0] + 0.5 * orig_box[2] + max_offset * offset_factor[0] * -1
if abs(jittered_center[1] - prev_out_box_center[1]) > out_boxes[-1][2:].prod().sqrt() * 2.5:
jittered_center[1] = orig_box[1] + 0.5 * orig_box[3] + max_offset * offset_factor[1] * -1
jittered_box = torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
if self._check_if_crop_inside_image(jittered_box, images[i].shape):
break
else:
jittered_box = torch.tensor([1, 1, 10, 10]).float()
out_boxes.append(jittered_box)
return out_boxes
def _generate_proposals(self, frame2_gt_crop):
# Generate proposals
num_proposals = self.proposal_params['boxes_per_frame']
frame2_proposals = np.zeros((num_proposals, 4))
gt_iou = np.zeros(num_proposals)
sample_p = np.zeros(num_proposals)
for i in range(num_proposals):
frame2_proposals[i, :], gt_iou[i], sample_p[i] = prutils.perturb_box(
frame2_gt_crop,
min_iou=self.proposal_params['min_iou'],
sigma_factor=self.proposal_params['sigma_factor']
)
gt_iou = gt_iou * 2 - 1
return frame2_proposals, gt_iou
def _generate_label_function(self, target_bb, target_absent=None):
gauss_label = prutils.gaussian_label_function(target_bb.view(-1, 4), self.label_function_params['sigma_factor'],
self.label_function_params['kernel_sz'],
self.label_function_params['feature_sz'], self.output_sz,
end_pad_if_even=self.label_function_params.get(
'end_pad_if_even', True))
if target_absent is not None:
gauss_label *= (1 - target_absent).view(-1, 1, 1).float()
return gauss_label
def __call__(self, data: TensorDict):
if self.transform['joint'] is not None:
data['train_images'], data['train_anno'] = self.transform['joint'](image=data['train_images'],
bbox=data['train_anno'])
data['test_images'], data['test_anno'] = self.transform['joint'](image=data['test_images'], bbox=data['test_anno'], new_roll=False)
for s in ['train', 'test']:
# Generate synthetic sequence
jittered_anno = self._generate_synthetic_motion(data[s + '_anno'], data[s + '_images'], s)
# Crop images
crops, boxes, _ = prutils.jittered_center_crop(data[s + '_images'], jittered_anno, data[s + '_anno'],
self.search_area_factor, self.output_sz)
# Add transforms
data[s + '_images'], data[s + '_anno'] = self.transform[s](image=crops, bbox=boxes, joint=False)
if self.proposal_params:
frame2_proposals, gt_iou = zip(*[self._generate_proposals(a.numpy()) for a in data['test_anno']])
data['test_proposals'] = [torch.tensor(p, dtype=torch.float32) for p in frame2_proposals]
data['proposal_iou'] = [torch.tensor(gi, dtype=torch.float32) for gi in gt_iou]
data = data.apply(stack_tensors)
if self.label_function_params is not None:
data['train_label'] = self._generate_label_function(data['train_anno'])
test_target_absent = 1 - (data['test_visible'] * data['test_valid_anno'])
data['test_label'] = self._generate_label_function(data['test_anno'], test_target_absent)
return data
| 54.006403 | 145 | 0.61424 | 6,361 | 50,604 | 4.664833 | 0.06304 | 0.025882 | 0.026792 | 0.014828 | 0.845247 | 0.820847 | 0.812051 | 0.789742 | 0.781249 | 0.775688 | 0 | 0.009156 | 0.298553 | 50,604 | 936 | 146 | 54.064103 | 0.826797 | 0.357146 | 0 | 0.656818 | 0 | 0 | 0.087401 | 0.000723 | 0 | 0 | 0 | 0 | 0.013636 | 1 | 0.079545 | false | 0 | 0.013636 | 0 | 0.177273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
82767a9d9793c1b713066c9b3c5518ba6c69dd27 | 63 | py | Python | external/__init__.py | personalOS1234/Core-OS-2.0 | 12a933776fce246f5425faf479d7811af222f2af | [
"MIT"
] | null | null | null | external/__init__.py | personalOS1234/Core-OS-2.0 | 12a933776fce246f5425faf479d7811af222f2af | [
"MIT"
] | null | null | null | external/__init__.py | personalOS1234/Core-OS-2.0 | 12a933776fce246f5425faf479d7811af222f2af | [
"MIT"
] | null | null | null | #TODO replace all this libraries with ours
from .pyaes import * | 31.5 | 42 | 0.793651 | 10 | 63 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15873 | 63 | 2 | 43 | 31.5 | 0.943396 | 0.650794 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
828ac2174a6739c989e8e1ea209b6e2ac2458536 | 82 | py | Python | odp/job/publish/saeon.py | SAEON/Open-Data-Platform | 8509c39c6f65ba18518e825e2359213ec4c67af5 | [
"MIT"
] | null | null | null | odp/job/publish/saeon.py | SAEON/Open-Data-Platform | 8509c39c6f65ba18518e825e2359213ec4c67af5 | [
"MIT"
] | null | null | null | odp/job/publish/saeon.py | SAEON/Open-Data-Platform | 8509c39c6f65ba18518e825e2359213ec4c67af5 | [
"MIT"
] | null | null | null | from odp.job.publish import Publisher
class SAEONPublisher(Publisher):
pass
| 13.666667 | 37 | 0.780488 | 10 | 82 | 6.4 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158537 | 82 | 5 | 38 | 16.4 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7d994a3e0c083b02d29cc8b8b8f7c78301de92bc | 4,372 | py | Python | whython/values/value_number.py | NexInfinite/whython | 0c4099ed27151e99ba63465acd0abb2e38bba8ad | [
"MIT"
] | 44 | 2021-08-12T00:23:24.000Z | 2022-02-22T08:33:02.000Z | whython/values/value_number.py | NexInfinite/whython | 0c4099ed27151e99ba63465acd0abb2e38bba8ad | [
"MIT"
] | null | null | null | whython/values/value_number.py | NexInfinite/whython | 0c4099ed27151e99ba63465acd0abb2e38bba8ad | [
"MIT"
] | 4 | 2021-08-12T04:02:43.000Z | 2021-08-25T08:58:19.000Z | # *###################
# * IMPORTS
# *###################
from values.value_values import Value
from errors import RTError
import math
# *###################
# * NUMBER
# *###################
class Number(Value):
def __init__(self, value):
super().__init__()
self.value = value
def added_to(self, other):
if isinstance(other, Number):
return Number(self.value + other.value).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def subtracted_by(self, other):
if isinstance(other, Number):
return Number(self.value - other.value).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def multiplied_by(self, other):
if isinstance(other, Number):
return Number(self.value * other.value).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def exponent_by(self, other):
if isinstance(other, Number):
return Number(self.value ** other.value).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def get_comparison_eq(self, other):
if isinstance(other, Number):
return Number(int(self.value == other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def get_comparison_ne(self, other):
if isinstance(other, Number):
return Number(int(self.value != other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def get_comparison_lt(self, other):
if isinstance(other, Number):
return Number(int(self.value < other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def get_comparison_lte(self, other):
if isinstance(other, Number):
return Number(int(self.value <= other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def get_comparison_gt(self, other):
if isinstance(other, Number):
return Number(int(self.value > other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def get_comparison_gte(self, other):
if isinstance(other, Number):
return Number(int(self.value >= other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def anded_by(self, other):
if isinstance(other, Number):
return Number(int(self.value and other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def ored_by(self, other):
if isinstance(other, Number):
return Number(int(self.value or other.value)).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def notted(self):
return Number(1 if self.value == 0 else 0).set_context(self.context), None
def divided_by(self, other):
if isinstance(other, Number):
if other.value == 0:
return None, RTError(
other.pos_start, other.pos_end,
"Division by zero.",
self.context
)
return Number(self.value / other.value).set_context(self.context), None
else:
return None, Value.illegal_operation(self.pos_start, other.pos_end)
def copy(self):
copy = Number(self.value)
copy.set_pos(self.pos_start, self.pos_end)
copy.set_context(self.context)
return copy
def is_true(self):
return self.value != 0
def __repr__(self):
return str(self.value)
Number.null = Number(0)
Number.ignore = Number(None)
Number.false = Number(0)
Number.true = Number(1)
Number.pi = Number(math.pi) | 35.544715 | 90 | 0.627173 | 552 | 4,372 | 4.800725 | 0.110507 | 0.064528 | 0.079245 | 0.118868 | 0.790566 | 0.773962 | 0.773962 | 0.761132 | 0.761132 | 0.761132 | 0 | 0.002438 | 0.249543 | 4,372 | 123 | 91 | 35.544715 | 0.805242 | 0.005947 | 0 | 0.414894 | 0 | 0 | 0.003987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.191489 | false | 0 | 0.031915 | 0.031915 | 0.56383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
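# Added usage sketch (illustrative only): arithmetic on Number returns a
# (result, error) pair, e.g. Number(6).divided_by(Number(3)) gives
# (Number(2.0), None), while Number(6).divided_by(Number(0)) gives
# (None, <RTError with details "Division by zero.">).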
7dea385a46f18cb818a8c7e337ce410ee1b8020e | 22 | py | Python | my_module/my_module/__init__.py | Hekstra-Lab/nesm-python | 5b13cebe0019a589ea234d1191148e120aa75114 | [
"BSD-3-Clause"
] | 1 | 2021-05-07T18:03:30.000Z | 2021-05-07T18:03:30.000Z | my_module/my_module/__init__.py | Hekstra-Lab/nesm-python | 5b13cebe0019a589ea234d1191148e120aa75114 | [
"BSD-3-Clause"
] | 1 | 2021-05-13T14:54:27.000Z | 2021-05-13T14:54:27.000Z | my_module/my_module/__init__.py | Hekstra-Lab/nesm-python | 5b13cebe0019a589ea234d1191148e120aa75114 | [
"BSD-3-Clause"
] | null | null | null | from myfuncs import *
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
818ea9ab02a5c39743786bfc6ceccb71b87ebacb | 49 | py | Python | social/backends/email.py | raccoongang/python-social-auth | 81c0a542d158772bd3486d31834c10af5d5f08b0 | [
"BSD-3-Clause"
] | 1,987 | 2015-01-01T16:12:45.000Z | 2022-03-29T14:24:25.000Z | social/backends/email.py | raccoongang/python-social-auth | 81c0a542d158772bd3486d31834c10af5d5f08b0 | [
"BSD-3-Clause"
] | 731 | 2015-01-01T22:55:25.000Z | 2022-03-10T15:07:51.000Z | virtual/lib/python3.6/site-packages/social/backends/email.py | dennismwaniki67/awards | 80ed10541f5f751aee5f8285ab1ad54cfecba95f | [
"MIT"
] | 1,082 | 2015-01-01T16:27:26.000Z | 2022-03-22T21:18:33.000Z | from social_core.backends.email import EmailAuth
| 24.5 | 48 | 0.877551 | 7 | 49 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
818f86c3e551dc02afd9f458a08b02841b948e64 | 2,512 | py | Python | alpaca/resources/tutorials/ALPACA_atlas_to_ROI.py | C0C0AN/ALPACA | bfe6012ebc7f7df92ddda2eede3b1b41eb39db90 | [
"BSD-3-Clause"
] | 5 | 2018-12-14T14:17:44.000Z | 2020-11-03T03:15:04.000Z | alpaca/resources/tutorials/ALPACA_atlas_to_ROI.py | PeerHerholz/ALPACA | 39b037c38a122d4e8c3cf2cfe465d38a7c31fa99 | [
"BSD-3-Clause"
] | 9 | 2018-06-01T16:11:39.000Z | 2020-03-21T01:37:35.000Z | alpaca/resources/tutorials/ALPACA_atlas_to_ROI.py | C0C0AN/ALPACA | bfe6012ebc7f7df92ddda2eede3b1b41eb39db90 | [
"BSD-3-Clause"
] | 5 | 2018-01-26T15:16:40.000Z | 2020-11-03T03:15:05.000Z | # ---
# jupyter:
# jupytext:
# formats: py,ipynb
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.3.3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Automatic Localization and Parcellation of Auditory Cortex Areas
# ## (ALPACA)
# <img src="../img/ALPACA_logo.png" alt="alpaca logo" width="370" height="250" border="10">
# ## extracting (auditory cortex) regions of interest (ROIs) from atlases of the human brain
# ### This notebook will focus on how to extract regions of interest from atlases of the human brain. As ALPACA is all about the [auditory cortex](https://en.wikipedia.org/wiki/Auditory_cortex), all examples will be used to extract [regions of interest](https://en.wikipedia.org/wiki/Region_of_interest#Medical_imaging) within the auditory cortex. Given that most atlases of the human brain are in a [reference space, e.g. the MNI space](http://www.lead-dbs.org/?p=1241), a section of the notebook will also show how to transform regions of interest from reference space to a participant's respective native space. Comparable to other notebooks of the ALPACA toolbox, the methods and analysis steps described here are easy to adapt for other, more general purposes than "just" auditory neuroscience related topics.
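# #### A minimal, hedged sketch of the idea
# The snippet below is an illustration only: it assumes the Harvard-Oxford maxprob atlas bundled with [nilearn](https://nilearn.github.io), which is not necessarily the atlas used later in this notebook. It pulls a single labelled region (here Heschl's gyrus, home of the primary auditory cortex) out of the atlas as a binary mask.
from nilearn import datasets
from nilearn.image import math_img
# fetch a max-probability cortical atlas: one integer label per voxel
atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')
# look up the label index; the name follows FSL's Harvard-Oxford LUT
idx = atlas.labels.index("Heschl's Gyrus (includes H1 and H2)")
# binary ROI mask: voxels whose label equals idx
roi_mask = math_img('img == %d' % idx, img=atlas.maps)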
# ### Around the brain in 80 atlases
# You might ask yourself: "What's with all that talk about atlases? What actually is an atlas of the human brain?". So, to enable the best possible understanding and to bring (nearly) everyone on the same page, the first section of this notebook will give a brief overview of atlases of the human brain.
| 96.615385 | 802 | 0.716959 | 394 | 2,512 | 4.545685 | 0.36802 | 0.585148 | 0.850921 | 1.098827 | 0.400335 | 0.294807 | 0.294807 | 0.294807 | 0.294807 | 0.274707 | 0 | 0.009477 | 0.117834 | 2,512 | 25 | 803 | 100.48 | 0.798736 | 0.976911 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
819ccf03ef12dc81905b777b7e6c0142e0e730bf | 28 | py | Python | sample_app/__init__.py | gutoxp/flask-vuejs | 86f3eb196d174ccfa3fb62174de83f6101bf91ff | [
"BSD-3-Clause"
] | 122 | 2021-06-21T17:30:29.000Z | 2022-03-25T06:21:38.000Z | sample_app/__init__.py | gutoxp/flask-vuejs | 86f3eb196d174ccfa3fb62174de83f6101bf91ff | [
"BSD-3-Clause"
] | 125 | 2021-09-01T12:06:48.000Z | 2022-03-30T11:32:57.000Z | app/frontend/__init__.py | openstate/coronalert | 9aa24cc0ea75b85e9bda0cfcd6ff592a2c61c95e | [
"CC-BY-4.0"
] | 21 | 2021-06-22T10:08:15.000Z | 2022-03-18T08:57:02.000Z | from .app import create_app
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81a73f999345b2e78f6d1435100ac35570598cf2 | 160 | py | Python | CodingBat/Warmup-1/near_hundred.py | N-l1/dmoj | bbd55ab45731774385805eb31ea790454a3a6819 | [
"MIT"
] | null | null | null | CodingBat/Warmup-1/near_hundred.py | N-l1/dmoj | bbd55ab45731774385805eb31ea790454a3a6819 | [
"MIT"
] | null | null | null | CodingBat/Warmup-1/near_hundred.py | N-l1/dmoj | bbd55ab45731774385805eb31ea790454a3a6819 | [
"MIT"
] | null | null | null | """
Warmup-1 > near_hundred
Find this problem at:
https://codingbat.com/prob/p124676
"""
def near_hundred(n):
return abs(100-n) <= 10 or abs(200-n) <= 10
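# Added worked examples: near_hundred(93) -> True (|100-93| = 7 <= 10);
# near_hundred(89) -> False (|100-89| = 11 and |200-89| = 111, both > 10).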
| 16 | 47 | 0.6625 | 27 | 160 | 3.851852 | 0.777778 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126866 | 0.1625 | 160 | 9 | 48 | 17.777778 | 0.649254 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
81ccae0065c891a162b04752f381d4d01a04d59f | 45,771 | py | Python | Experiment.py | pitcheverlasting/wrr-AED-drivers | d6803bfd80756c8d5f899952940033d45fcf57c4 | [
"Unlicense"
] | null | null | null | Experiment.py | pitcheverlasting/wrr-AED-drivers | d6803bfd80756c8d5f899952940033d45fcf57c4 | [
"Unlicense"
] | null | null | null | Experiment.py | pitcheverlasting/wrr-AED-drivers | d6803bfd80756c8d5f899952940033d45fcf57c4 | [
"Unlicense"
] | null | null | null | __author__ = 'lpeng'
from pylab import *
import pickle, itertools
import scipy.io
import scipy.stats
from scipy.optimize import minimize
from scipy import signal
import statsmodels.tools.eval_measures as evaluate
import pandas as pd
import IO, FFT, Plotting
from PET_Library import Data
import gc
from scipy.interpolate import interp1d
gc.collect()
##=============================path================================
datadir = '../Data'
workspace = '../workspace/201609'
# figdir = '/home/water5/lpeng/Figure/pan_spectral/201609'
figdir = '/home/water5/lpeng/Figure/pan_spectral/201705'
##=============================variable============================
vars = ['time', 'p', 'tavg', 'tmin', 'tmax', 'ea', 'rh', 'tc', 'lc', 'wind', 'ts', 'sun', 'rain', 'pan', 'vpd', 'estavg', 'rh_test']
variables = ['tavg', 'wind', 'ea', 'vpd']
vars_pan_var = ['pan', 'tavg', 'sun', 'wind', 'ea', 'vpd']
vars_penpan = ['tavg', 'tmax', 'tmin', 'p', 'ea', 'wind', 'sun', 'lat', 'elev'] # 'tc'
basinlongs=['Songhuajiang', 'Liaohe', 'Northwestern', 'Haihe', 'Yellow', 'Yangtze', 'Huaihe', 'Southeastern', 'Southwestern', 'Pearl']
geoinfo = load('%s/station_geoinfo' %workspace)
station_number = load('%s/basin_station_number' %workspace)
## Time
styr = 1961
edyr = 2001
stdy = datetime.datetime(styr, 1, 1)
eddy = datetime.datetime(edyr, 12, 31)
dates = pd.date_range(stdy, eddy, freq='D')
tstep = len(dates)
dyears = dates.year
dmonths = dates.month
doys = dates.dayofyear
# doys = vstack([dates[i].timetuple().tm_yday for i in xrange(0, tstep)]) # julian day
## quality check using data availability as criteria
# the station list is for PET models
station_flag = pickle.load(open('station_sunhours_80_flag','rb'))
station_qc = [np.where(station_flag[ibasin][:, 0]==0)[0] for ibasin in xrange(0, 10)]
station_pan_flag = pickle.load(open('station_pan_80_flag','rb'))
station_pan_qc = [np.where(station_pan_flag[ibasin][:, 0]==0)[0] for ibasin in xrange(0, 10)]
good_stations = [intersect1d(station_qc[i], station_pan_qc[i]) for i in xrange(0, 10)]
###======================Datetime Toolkits=============================
def Gapfill(daily):
"return the array, not pandas object"
ts = pd.Series(daily, index=dates).fillna(method='pad').values
return ts
def daily2annual(daily):
ts = pd.Series(daily, index=dates).resample('A', how='mean').values
return ts
def daily2monthly(daily):
ts = pd.Series(daily, index=dates).resample('M', how='mean').values
return ts
def daily2monthly_df(array):
arr = pd.DataFrame(array.T, index=dates).resample('M', how='mean').values
return arr.T
def daily2weekly_df(array):
arr = pd.DataFrame(array.T, index=dates).resample('W', how='mean').values
return arr.T
def daily2annual_df(array):
arr = pd.DataFrame(array.T, index=dates).resample('A', how='mean').values
return arr.T
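# Added usage sketch (illustrative only): resample a random daily series to
# annual means; the demo array is an assumption, not data from this study.
def _example_resample():
    demo = np.random.randn(tstep)
    return daily2annual(demo)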
###====================================================================
def msc_groupby_DI(arid):
# aridrange = [0, 1, 1.5, 2.5, 5, 250]
aridrange = [0, 2, 4, 8, 250]
index_DI = []
for igroup in xrange(0, len(aridrange)-1):
low = aridrange[igroup]
high = aridrange[igroup+1]
index_DI.append(np.where((arid>=low) & (arid<high))[0])
return index_DI
def msc_groupby_DI_more(arid):
# aridrange = [0, 1, 1.5, 2.5, 5, 250]
aridrange = [0, 1, 1.5, 2, 4, 8, 20, 40, 80, 160, 250]
index_DI = []
for igroup in xrange(0, len(aridrange)-1):
low = aridrange[igroup]
high = aridrange[igroup+1]
index_DI.append(np.where((arid>=low) & (arid<high))[0])
return index_DI
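# Added worked example for msc_groupby_DI (hypothetical values): with
# arid = array([0.5, 3.0, 9.0]) and bins [0,2), [2,4), [4,8), [8,250), the
# returned index groups are [array([0]), array([1]), array([]), array([2])].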
##########################################################################
# for spectral coherece analysis
##########################################################################
nf = 513 # number of frequency bins (consistent with a 1024-point FFT segment: 1024/2 + 1)
nbasin = 10
nvar = 8
sampling_frequency = 1/(24.0 * 3600.0) # one sample per day, expressed in Hz (s^-1)
def Coherence_Frequency():
data = scipy.io.loadmat('%s/1_AP.mat' %(datadir))
input = data[variables[0]][0, 0][0:tstep].flatten()
pan = data['pan'][0, 0][0:tstep].flatten()
freq = FFT.Coherence(input, pan, sampling_frequency, 'linear')[0]
return freq
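# Note: with sampling_frequency in Hz, the frequency axis returned above is in
# Hz as well; the plotting routines below read a bin as a period in days via
# freq_ts = sampling_frequency / freq, e.g. a bin with freq_ts == 30 is the
# monthly timescale.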
def cel2Kelvin(input):
input['tmax'] = input['tmax']+273.16
input['tmin'] = input['tmin']+273.16
input['tavg'] = input['tavg']+273.16
return input
def Kelvin2cel(input):
input['tmax'] = input['tmax']-273.16
input['tmin'] = input['tmin']-273.16
input['tavg'] = input['tavg']-273.16
return input
from matplotlib import rc
rc('font', family='serif') # alternative: 'Times New Roman'
##########################################################################
# Calculate penpan modelled PE
##########################################################################
def chunk_st_ed(npts):
if npts%2 == 0: st = npts/2; ed= npts/2-1
else: st = npts/2; ed = npts/2
return st, ed
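# e.g. chunk_st_ed(7) -> (3, 3) and chunk_st_ed(30) -> (15, 14): the number of
# samples to trim from each end of a series after a centered npts-point running
# mean, so smoothed and raw series can be compared over equal lengths.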
def Permute_each_month_station(input):
input_rand = []
# loop over the 12 calendar months
for mon in xrange(1, 13):
# pull all the data within this month into a DataFrame
idx_mon = np.where((dyears >= styr) & (dyears <= edyr) & (dmonths == mon))[0] # input.loc[input.index.month==mon]
df = input.iloc[idx_mon]
# retrieve all the index in the original order
days = df.index
def shuffle(df):
"each shuffle will produce a new dataframe with original order"
# resample all the data by permutation
df = df.sample(frac=1) # other method: np.random.permutation()
# store the original time steps (does not seem very useful)
# df['dates'] = df.index
# reset the time index with original order
df.index = days
# df = df.reset_index(drop=True) this is to remove the original index
return df
input_rand.append(shuffle(df))
# [shuffle(df) for i in xrange(10)]
del df
input_rand = pd.concat(input_rand).sort_index()
# scale the mean to the original monthly mean
# For temperature convert to kelvin
input = cel2Kelvin(input)
input_rand = cel2Kelvin(input_rand)
month_mean_obs = input.resample('MS', how='mean')
month_mean_exp = input_rand.resample('MS', how='mean')
ratio = (month_mean_obs/month_mean_exp).reindex(dates, method='ffill')
input_rand = input_rand * ratio
# input = Kelvin2cel(input)
input_rand = Kelvin2cel(input_rand)
return input_rand.to_dict(orient='series')
def Sample_each_week_station(input):
input_rand = []
# select each day of year
for d in xrange(1, 367):
# For each single day, sample an ensemble with a 7-day window across multiple years
wdays = np.arange(d-3, d+4)
constant = ones((7)) * 366
mask1 = constant * (wdays>366)
mask2 = constant * (wdays<1)
wdays_crop = wdays - mask1 + mask2
idx_doy = np.where((doys == wdays[3]))[0]
days = input.iloc[idx_doy].index # retrieve all the index in the original order
N = len(days) # number of day of year across all years, especially for 366
idx_doy_wind = [np.where((doys == wdays_crop[i]))[0] for i in xrange(0, 7)]
idx_doy_wind = list(itertools.chain.from_iterable(idx_doy_wind))
df = input.iloc[idx_doy_wind]
def shuffle(df):
"each shuffle will produce a new dataframe with original order"
# resample all the data by permutation
df = df.sample(n=N)
# reset the time index with original order
df.index = days
return df
input_rand.append(shuffle(df))
del df
input_rand = pd.concat(input_rand).sort_index()
# scale the mean to the original monthly mean
# For temperature convert to kelvin
input = cel2Kelvin(input)
input_rand = cel2Kelvin(input_rand)
month_mean_obs = input.resample('MS', how='mean')
month_mean_exp = input_rand.resample('MS', how='mean')
ratio = (month_mean_obs/month_mean_exp).reindex(dates, method='ffill')
input_rand = input_rand * ratio
# input = Kelvin2cel(input)
input_rand = Kelvin2cel(input_rand)
# plt.plot(input_rand['wind'])
# plt.plot(input['wind'])
return input_rand.to_dict(orient='series')
def Sample_station_window(input, wind):
input_rand = []
for d in xrange(1, 367):
# For each single day, sample an ensemble with a 7-day window across multiple years
wdays = np.arange(d-wind/2, d+wind/2+1)
constant = ones((wind)) * 366
mask1 = constant * (wdays>366)
mask2 = constant * (wdays<1)
wdays_crop = wdays - mask1 + mask2
idx_doy = np.where((doys == wdays[wind/2]))[0]
days = input.iloc[idx_doy].index # retrieve all the index in the original order
N = len(days) # number of day of year across all years, especially for 366
idx_doy_wind = [np.where((doys == wdays_crop[i]))[0] for i in xrange(0, wind)]
idx_doy_wind = list(itertools.chain.from_iterable(idx_doy_wind))
df = input.iloc[idx_doy_wind]
def shuffle(df):
"each shuffle will produce a new dataframe with original order"
# resample all the data by permutation
df = df.sample(n=N)
# reset the time index with original order
df.index = days
return df
input_rand.append(shuffle(df))
del df
input_rand = pd.concat(input_rand).sort_index()
# scale the mean to the original monthly mean
# For temperature convert to kelvin
input = cel2Kelvin(input)
input_rand = cel2Kelvin(input_rand)
# month_mean_obs = input.resample('MS', how='mean')
# month_mean_exp = input_rand.resample('MS', how='mean')
# ratio = (month_mean_obs/month_mean_exp).reindex(dates, method='ffill')
mean_obs = input.resample('%sD' %wind, how='mean')
mean_exp = input_rand.resample('%sD' %wind, how='mean')
ratio = (mean_obs/mean_exp).reindex(dates, method='ffill')
input_rand = input_rand * ratio
# input = Kelvin2cel(input)
input_rand = Kelvin2cel(input_rand)
return input_rand.to_dict(orient='series')
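# Minimal usage sketch (hypothetical single-station frame); the returned dict
# holds one pd.Series per variable, in the same form the Calculate_Ep_* drivers
# below pass on as INPUT:
# df = pd.DataFrame({'tavg': np.random.rand(tstep)*30,
# 'tmax': np.random.rand(tstep)*35,
# 'tmin': np.random.rand(tstep)*25}, index=dates)
# INPUT = Sample_station_window(df, 7) # shuffle within 7-day DOY windows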
def Remove_interannual_variability(input):
# For temperature convert to kelvin
input = cel2Kelvin(input)
ann_mean_obs = input.resample('AS', how='mean')
clim_obs = ann_mean_obs.mean()
ratio = (clim_obs/ann_mean_obs).reindex(dates, method='ffill')
input_niav = input * ratio
# input = Kelvin2cel(input)
input_niav = Kelvin2cel(input_niav)
return input_niav.to_dict(orient='series')
def Remove_shortterm_variability(input, vars, npts):
# For temperature convert to kelvin
output = input.copy()
output = cel2Kelvin(output)
def moving_average(y, npts):
return np.convolve(y, np.ones(npts)/npts, mode='same')
def running_mean(y, npts):
return pd.rolling_mean(y, npts, center=True) # [npts-1:]
for var in vars:
# output[var] = moving_average(output[var], npts)
output[var] = running_mean(output[var], npts)
input_smooth = Kelvin2cel(output)
return input_smooth.to_dict(orient='series')
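# The two smoothers above differ at the edges: np.convolve(..., mode='same')
# zero-pads, biasing the first/last npts/2 points low, while the centered
# pd.rolling_mean leaves NaNs there. Quick check on hypothetical data:
# y = ones(100)
# np.convolve(y, np.ones(7)/7, mode='same')[:3] # -> [~0.57, ~0.71, ~0.86]
# pd.rolling_mean(pd.Series(y), 7, center=True)[:3] # -> NaN, NaN, NaN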
def Remove_variable_shortterm_variability(input, var, npts):
# For temperature convert to kelvin
output = input.copy()
output = cel2Kelvin(output)
def moving_average(y, npts):
return pd.rolling_mean(y, npts)[npts-1:]
output[var] = moving_average(output[var], npts)
input_smooth = Kelvin2cel(output)
return input_smooth.to_dict(orient='series')
def Calculate_Ep_daily():
PENPAN = []
for ibasin in xrange(0, 10):
data = scipy.io.loadmat('%s/%s_AP.mat' %(datadir, ibasin+1))
for istation in good_stations[ibasin]:
print ibasin, istation
index = np.where(geoinfo[:, 0]==data['station_name'][0, istation])[0]
# Read all the necessary input into a dataframe
input = {vars_penpan[i]: Gapfill(data[v][0, istation][0:tstep].flatten()) for i, v in enumerate(vars_penpan[:-2])}
input = pd.DataFrame.from_dict(input)
input.index = dates
# Remove inter-annual variability
# INPUT = Remove_interannual_variability(input)
npts = 15
# INPUT = Remove_shortterm_variability(input, vars_penpan[:-2], npts)
# Radiation
# INPUT = Remove_variable_shortterm_variability(input, 'sun', npts)
# ea
# INPUT = Remove_variable_shortterm_variability(input, 'ea', npts)
# wind
# INPUT = Remove_variable_shortterm_variability(input, 'wind', npts)
# tair
INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg'], npts)
INPUT['doy'] = doys
INPUT['lat'] = geoinfo[index, 1]
INPUT['elev'] = geoinfo[index, 3]
### Calculate Epan
res = Data(INPUT, 'sunhours')
PENPAN.append(res.penpan)
pe_model = array(PENPAN)
# pe_model.dump('%s/pe_mod_penpan_removeiav_good_stations' %(workspace))
# pe_model.dump('%s/pe_mod_penpan_removeshort15_good_stations' %(workspace))
# pe_model.dump('%s/pe_mod_penpan_removeshort7_good_stations' %(workspace))
# pe_model.dump('%s/pe_mod_penpan_removeshort15_sun_good_stations' %(workspace))
# pe_model.dump('%s/pe_mod_penpan_removeshort15_ea_good_stations' %(workspace))
# pe_model.dump('%s/pe_mod_penpan_removeshort15_wind_good_stations' %(workspace))
# pe_model.dump('%s/pe_mod_penpan_removeshort15_tair_good_stations' %(workspace))
return
# Calculate_Ep_daily()
# exit()
def Calculate_Ep_daily_smoothwindow():
"only take tair as example"
for npts in (7, 15, 30):
PENPAN = []
for ibasin in xrange(0, 10):
data = scipy.io.loadmat('%s/%s_AP.mat' %(datadir, ibasin+1))
for istation in good_stations[ibasin]:
print ibasin, istation
index = np.where(geoinfo[:, 0]==data['station_name'][0, istation])[0]
# Read all the necessary input into a dataframe
input = {vars_penpan[i]: Gapfill(data[v][0, istation][0:tstep].flatten()) for i, v in enumerate(vars_penpan[:-2])}
input = pd.DataFrame.from_dict(input)
input.index = dates
# tair
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg'], npts)
# wind
# INPUT = Remove_shortterm_variability(input, ['wind'], npts)
# humidity
INPUT = Remove_shortterm_variability(input, ['ea'], npts)
# sun
# INPUT = Remove_shortterm_variability(input, [], npts)
INPUT['doy'] = doys
INPUT['lat'] = geoinfo[index, 1]
INPUT['elev'] = geoinfo[index, 3]
### Calculate Epan
res = Data(INPUT, 'sunhours') #, npts)
PENPAN.append(res.penpan)
pe_model = array(PENPAN)
# pe_model.dump('%s/pe_mod_penpan_removeshort%s_tair_good_stations' %(workspace, npts))
# pe_model.dump('%s/pe_mod_penpan_removeshort%s_rnet_good_stations' %(workspace, npts))
# pe_model.dump('%s/pe_mod_penpan_removeshort%s_wind_good_stations' %(workspace, npts))
# pe_model.dump('%s/pe_mod_penpan_removeshort%s_ea_good_stations' %(workspace, npts))
return
# Calculate_Ep_daily_smoothwindow()
# exit()
def Calculate_Ep_daily_smoothvariable():
"for one period, four variables"
npts = 7
PENPAN = []
for ibasin in xrange(0, 10):
data = scipy.io.loadmat('%s/%s_AP.mat' %(datadir, ibasin+1))
for istation in good_stations[ibasin]:
print ibasin, istation
index = np.where(geoinfo[:, 0]==data['station_name'][0, istation])[0]
# Read all the necessary input into a dataframe
input = {vars_penpan[i]: Gapfill(data[v][0, istation][0:tstep].flatten()) for i, v in enumerate(vars_penpan[:-2])}
input = pd.DataFrame.from_dict(input)
input.index = dates
# tair
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg'], npts)
# wind
# INPUT = Remove_shortterm_variability(input, ['wind'], npts)
# humidity
# INPUT = Remove_shortterm_variability(input, ['ea'], npts)
# sun
INPUT = Remove_shortterm_variability(input, [], npts)
INPUT['doy'] = doys
INPUT['lat'] = geoinfo[index, 1]
INPUT['elev'] = geoinfo[index, 3]
### Calculate Epan
# res = Data(INPUT, 'sunhours')
res = Data(INPUT, 'sunhours', npts) # for rnet
PENPAN.append(res.penpan)
pe_model = array(PENPAN)
pe_model.dump('%s/pe_mod_penpan_removeshort%s_rnet_good_stations' %(workspace, npts))
return
# Calculate_Ep_daily_smoothvariable()
# exit()
def Calculate_Ep_daily_smooth_test():
for npts in (7, 15, 31, 61):
PENPAN = []
for ibasin in xrange(0, 10):
data = scipy.io.loadmat('%s/%s_AP.mat' %(datadir, ibasin+1))
for istation in good_stations[ibasin]:
print ibasin, istation
index = np.where(geoinfo[:, 0]==data['station_name'][0, istation])[0]
# Read all the necessary input into a dataframe
input = {vars_penpan[i]: Gapfill(data[v][0, istation][0:tstep].flatten()) for i, v in enumerate(vars_penpan[:-2])}
input = pd.DataFrame.from_dict(input)
input.index = dates
# Remove inter-annual variability
# INPUT = Remove_interannual_variability(input)
# vars_penpan = ['tavg', 'tmax', 'tmin', 'p', 'ea', 'wind', 'sun', 'lat', 'elev'] # 'tc'
# INPUT = Remove_shortterm_variability(input, vars_penpan[:-2], npts)
# tair
INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg'], npts)
# tair+wind
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg', 'wind'], npts)
# tair+wind+humidity
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg', 'wind', 'ea'], npts)
# tair+wind+humidity+pressure
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg', 'wind', 'ea', 'p'], npts)
# tair+wind+humidity+pressure+sun
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg', 'wind', 'ea', 'p'], npts)
# tair+sun
# INPUT = Remove_shortterm_variability(input, ['tmax', 'tmin', 'tavg', 'wind', 'ea'], npts)
INPUT['doy'] = doys
INPUT['lat'] = geoinfo[index, 1]
INPUT['elev'] = geoinfo[index, 3]
### Calculate Epan
res = Data(INPUT, 'sunhours') #, npts)
PENPAN.append(res.penpan)
pe_model = array(PENPAN)
return
# Calculate_Ep_daily_smooth_test()
# exit()
def Calculate_Ep_daily_ensemble():
PENPAN = []
for ibasin in xrange(0, 10):
data = scipy.io.loadmat('%s/%s_AP.mat' %(datadir, ibasin+1))
for istation in good_stations[ibasin]:
print ibasin, istation
index = np.where(geoinfo[:, 0]==data['station_name'][0, istation])[0]
# Read all the necessary input into a dataframe
input = {vars_penpan[i]: Gapfill(data[v][0, istation][0:tstep].flatten()) for i, v in enumerate(vars_penpan[:-2])}
input = pd.DataFrame.from_dict(input)
input.index = dates
# Run the permutation program with multi-ensemble
penpan_ens = []
for i in xrange(10): # ensemble size = 10
# Monthly permutation
# INPUT = Permute_each_month_station(input)
# Weekly permutation
INPUT = Sample_each_week_station(input)
INPUT['doy'] = doys
INPUT['lat'] = geoinfo[index, 1]
INPUT['elev'] = geoinfo[index, 3]
### Calculate Epan
res = Data(INPUT, 'sunhours')
penpan_ens.append(res.penpan)
# Collect all the ensembles
PENPAN.append(penpan_ens)
del penpan_ens
pe_model = array(PENPAN)
# pe_model.dump('%s/pe_mod_penpan_monthpermute_ens10_good_stations' %(workspace))
pe_model.dump('%s/pe_mod_penpan_weeksample_ens10_good_stations' %(workspace))
return
# Calculate_Ep_daily_ensemble()
# exit()
def Calculate_Ep_daily_samplewindow():
for window in (7, 31, 91):
PENPAN = []
for ibasin in xrange(0, 10):
data = scipy.io.loadmat('%s/%s_AP.mat' %(datadir, ibasin+1))
for istation in good_stations[ibasin]:
print ibasin, istation
index = np.where(geoinfo[:, 0]==data['station_name'][0, istation])[0]
# Read all the necessary input into a dataframe
input = {vars_penpan[i]: Gapfill(data[v][0, istation][0:tstep].flatten()) for i, v in enumerate(vars_penpan[:-2])}
input = pd.DataFrame.from_dict(input)
input.index = dates
# Run the permutation program with multi-ensemble
penpan_ens = []
for i in xrange(10): # ensemble size = 10
# sample within a DOY window of the given width
INPUT = Sample_station_window(input, window)
INPUT['doy'] = doys
INPUT['lat'] = geoinfo[index, 1]
INPUT['elev'] = geoinfo[index, 3]
### Calculate Epan
res = Data(INPUT, 'sunhours')
penpan_ens.append(res.penpan)
# Collect all the ensembles
PENPAN.append(penpan_ens)
del penpan_ens
pe_model = array(PENPAN)
pe_model.dump('%s/pe_mod_penpan_sample%sd_ens10_good_stations' %(workspace, window))
return
# Calculate_Ep_daily_samplewindow()
# exit()
def Coherence_obs_permute():
"Compare the observed pan with the modelled pan"
obs = load('%s/pe_mod_penpan_good_stations' %(workspace))
# permute = load('%s/pe_mod_penpan_monthpermute_ens10_good_stations' %(workspace))
permute = load('%s/pe_mod_penpan_weeksample_ens10_good_stations' %(workspace))
cohere = []
for ist in xrange(0, 228):
print ist
cohere.append(array([FFT.Coherence(obs[ist, :], permute[ist, i, :], sampling_frequency, 'linear')[1] for i in xrange(10)])) #.reshape(1, 5, nf))
cohere = array(cohere)
# cohere.dump('%s/coherence_penpan_monthpermute_ens10_good_stations' %(workspace))
cohere.dump('%s/coherence_penpan_weeksample_ens10_good_stations' %(workspace))
return
# Coherence_obs_permute()
# exit()
def Coherence_obs_remove_variability_variable():
"Compare the effects of removing shortterm variability in each variable"
obs = load('%s/pe_mod_penpan_good_stations' %(workspace))
# exp = load('%s/pe_mod_penpan_removeiav_good_stations' %(workspace))
npt = 7
st, ed = chunk_st_ed(npt)
vars = ['tair', 'ea', 'rnet', 'wind']
for var in vars[1:]:
exp = load('%s/pe_mod_penpan_removeshort%s_%s_good_stations' %(workspace, npt, var))
cohere = []
for ist in xrange(0, 228):
print ist
# cohere.append(FFT.Coherence(obs[ist, npt:-npt], exp[ist, npt:-npt], sampling_frequency, 'linear')[1]) # for removeshort
cohere.append(FFT.Coherence(obs[ist, st:-ed], exp[ist, st:-ed], sampling_frequency, 'linear')[1]) # for removeshort
cohere = array(cohere)
cohere.dump('%s/coherence_penpan_removeshort%sd_%s_good_stations' %(workspace, npt, var))
return
# Coherence_obs_remove_variability_variable()
# exit()
def Coherence_obs_remove_variability_window():
"Test how the window of moving averaging affect the results"
obs = load('%s/pe_mod_penpan_good_stations' %(workspace))
for npt in (7, 15, 30):
st, ed = chunk_st_ed(npt)
# load one experiment to compare against (tair is active here; swap in the
# wind/rnet/ea lines, and the matching dump below, to test the other variables)
# exp = load('%s/pe_mod_penpan_removeiav_good_stations' %(workspace))
exp = load('%s/pe_mod_penpan_removeshort%s_tair_good_stations' %(workspace, npt))
# exp = load('%s/pe_mod_penpan_removeshort%s_wind_good_stations' %(workspace, npt))
# exp = load('%s/pe_mod_penpan_removeshort%s_rnet_good_stations' %(workspace, npt))
# exp = load('%s/pe_mod_penpan_removeshort%s_ea_good_stations' %(workspace, npt))
cohere = []
for ist in xrange(0, 228):
print ist
cohere.append(FFT.Coherence(obs[ist, st:-ed], exp[ist, st:-ed], sampling_frequency, 'linear')[1]) # for removeshort
cohere = array(cohere)
cohere.dump('%s/coherence_penpan_removeshort%sd_tair_good_stations' %(workspace, npt))
# cohere.dump('%s/coherence_penpan_removeshort%sd_wind_good_stations' %(workspace, npt))
# cohere.dump('%s/coherence_penpan_removeshort%sd_rnet_good_stations' %(workspace, npt))
# cohere.dump('%s/coherence_penpan_removeshort%sd_ea_good_stations' %(workspace, npt))
# for npt in (7, 31, 91):
# exp = load('%s/pe_mod_penpan_sample%sd_ens10_good_stations' %(workspace, npt))
# cohere = []
# for ist in xrange(0, 228):
# print ist
# cohere.append(array([FFT.Coherence(obs[ist, :], exp[ist, i, :], sampling_frequency, 'linear')[1] for i in xrange(10)]))
# cohere = array(cohere)
# cohere.dump('%s/coherence_penpan_sample%sd_good_stations' %(workspace, npt))
return
# Coherence_obs_remove_variability_window()
# exit()
def Plot_PSD_obs_remove_variability_window():
"Compare the observed pan with the modelled pan"
freq = Coherence_Frequency()
fig, ax = plt.subplots(figsize=(8, 4))
data = []
for npt in (7, 15, 30):
# exp = load('%s/pe_mod_penpan_removeiav_good_stations' %(workspace))
exp = load('%s/pe_mod_penpan_removeshort%s_tair_good_stations' %(workspace, npt))
psd = []
for ist in xrange(0, 228):
print ist
psd.append(FFT.Power_Spectrum(exp[ist, npt-1:], sampling_frequency, 'linear')[1])
# psd.append(FFT.Power_Spectrum(obs[ist, npt:-npt], sampling_frequency, 'linear')[1])
psd = array(psd)
data.append(mean(psd, axis=0))
data = array(data)
Plotting.CoherenceWindowPlot(ax, data, sampling_frequency, freq)
plt.show()
return
# Plot_PSD_obs_remove_variability_window()
# exit()
def Plot_crossspectrum_obs_remove_variability_window():
"Compare the observed pan with the modelled pan"
freq = Coherence_Frequency()
obs = load('%s/pe_mod_penpan_good_stations' %(workspace))
fig, ax = plt.subplots(figsize=(8, 4))
data = []
for npt in (7, 15, 30):
# exp = load('%s/pe_mod_penpan_removeiav_good_stations' %(workspace))
exp = load('%s/pe_mod_penpan_removeshort%s_tair_good_stations' %(workspace, npt))
cross = []
for ist in xrange(0, 228):
print ist
cross.append(FFT.CrossPowerSpectrum(obs[ist, npt-1:], exp[ist, npt-1:], sampling_frequency, 'linear')[1])
# cross.append(FFT.CrossPowerSpectrum(obs[ist, npt:-npt], exp[ist, npt:-npt], sampling_frequency, 'linear')[1])
cross = array(cross)
data.append(mean(cross, axis=0))
data = array(data)
Plotting.CoherenceWindowPlot(ax, data, sampling_frequency, freq)
plt.show()
return
# Plot_crossspectrum_obs_remove_variability_window()
# exit()
def Plot_Coherence_Average():
fig = plt.figure(figsize=(9, 5))
freq = Coherence_Frequency()
# cohere = load('%s/coherence_penpan_monthpermute_ens10_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_removeiav_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_removeshort15_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_removeshort7_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_removeshort15_sun_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_removeshort15_ea_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_removeshort15_wind_good_stations' %(workspace))
cohere = load('%s/coherence_penpan_removeshort15_tair_good_stations' %(workspace))
# cohere = load('%s/coherence_penpan_weeksample_ens10_good_stations' %(workspace))
# for all stations average
ax = fig.add_subplot(1, 1, 1)
# plot all ensemble: found they can be averaged
# Plotting.CoherenceEnsemblePlot(ax, mean(cohere, axis=0), sampling_frequency, freq, 'Average')
# plot ensemble mean and then all aridity
# cohere_avg = mean(cohere, axis=1)
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
Plotting.CoherenceAridityPlot(ax, cohere, index_DI, sampling_frequency, freq, '', '') # for ensemble: cohere_avg
# ax.legend(loc=2, fontsize=15)
plt.show()
# savefig('%s/coh_penpan_removeiav_good_stations.tif' %(figdir), dpi=200)
# savefig('%s/coh_penpan_removeshort15_good_stations.tif' %(figdir), dpi=200)
# savefig('%s/coh_penpan_removeshort7_good_stations.tif' %(figdir), dpi=200)
# savefig('%s/coh_penpan_weeksample_ens10_good_stations.tif' %(figdir), dpi=200)
return
# Plot_Coherence_Average()
# exit()
def Plot_Coherence_multiple():
freq = Coherence_Frequency()
rnet = load('%s/coherence_penpan_removeshort30d_rnet_good_stations' %(workspace))
wind = load('%s/coherence_penpan_removeshort30d_wind_good_stations' %(workspace))
tair = load('%s/coherence_penpan_removeshort30d_tair_good_stations' %(workspace))
ea = load('%s/coherence_penpan_removeshort30d_ea_good_stations' %(workspace))
all = load('%s/coherence_penpan_removeshort15_good_stations' %(workspace))
sample = load('%s/coherence_penpan_weeksample_ens10_good_stations' %(workspace))
# plot ensemble mean and then all aridity
sample_avg = mean(sample, axis=1)
vars = [tair, rnet, ea, wind, all, sample_avg]
labels = ('a', 'b', 'c', 'd', 'e', 'f')
varnames = ['Tair', 'Solar', r'$e_a$', 'Wind', 'All', 'Randomized']
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
fig, axes = plt.subplots(3, 2, figsize=(11, 10))
for i, var in enumerate(vars):
Plotting.CoherenceAridityPlot(axes[i/2, i%2], var, index_DI, sampling_frequency, freq, labels[i], varnames[i]) # for ensemble: cohere_avg
if i == 5:
axes[i/2, i%2].legend(loc=2, fontsize=13)
fig.tight_layout()
plt.show()
# savefig('%s/coh_penpan_removeshort15_multi_good_stations.tif' %(figdir), dpi=200)
return
# Plot_Coherence_multiple()
# exit()
def Plot_Coherence_multiwindow():
freq = Coherence_Frequency()
fig, ax = plt.subplots(2, 2, figsize=(9, 6), sharey=True, sharex=True)
varnames = ['tair', 'rnet', 'wind', 'ea']
nums = ['(a) ', '(b) ', '(c) ', '(d) ']
labels = [r'$T_a$', r'$R_n$', r'$u_2$', r'$e_a$']
for i in range(0,4):
vars = [load('%s/coherence_penpan_removeshort%sd_%s_good_stations' %(workspace, npts, varnames[i])) for npts in (7, 15, 30)]
coh = array([mean(var, axis=0) for var in vars])
# vars = [load('%s/coherence_penpan_sample%sd_good_stations' %(workspace, npt)) for npt in (7, 31, 91)]
# coh = array([mean(mean(var, axis=1), axis=0) for var in vars])
Plotting.CoherenceWindowPlot(ax[i/2, i%2], 1-coh, nums[i]+labels[i], sampling_frequency, freq)
ax[1,1].legend(loc='upper right', frameon=False, fontsize=14)
fig.tight_layout()
plt.show()
# savefig('%s/figS1_coh_penpan_removeshort7-30_4var_good_stations.tif' %(figdir), dpi=300)
return
# Plot_Coherence_multiwindow()
# exit()
def Plot_Coherence_samplewindow():
freq = Coherence_Frequency()
names = ['(a) Sampling window', '(b) Dryness']
fig, axes = plt.subplots(1, 2, figsize=(10.5, 3.5))
# The first figure
vars = [load('%s/coherence_penpan_sample%sd_good_stations' %(workspace, npt)) for npt in (91, 31, 7)]
coh = array([mean(mean(var, axis=1), axis=0) for var in vars])
i = 0
Plotting.CoherenceWindow2Plot(axes[i], coh, sampling_frequency, freq, names[i])
axes[i].legend(loc=2, fontsize=14)
sample = load('%s/coherence_penpan_sample7d_good_stations' %(workspace))
# plot ensemble mean and then all aridity
sample_avg = mean(sample, axis=1)
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
# for i, var in enumerate(vars):
i = 1
Plotting.CoherenceAridityPlot(axes[i], sample_avg, index_DI, sampling_frequency, freq, names[i], '') # for ensemble: cohere_avg
axes[i].legend(loc=2, fontsize=12)
fig.tight_layout()
plt.show()
# savefig('%s/coh_penpan_sample_window_dryness_good_stations.tif' %(figdir), dpi=300)
return
# Plot_Coherence_samplewindow()
# exit()
def Plot_Coherence_loss_samplewindow():
freq = Coherence_Frequency()
freq_ts = sampling_frequency/freq
names = ['(a) Sampling window', '(b) Dryness']
fig, axes = plt.subplots(1, 2, figsize=(10.5, 3.5))
# The first figure
vars = [load('%s/coherence_penpan_sample%sd_good_stations' %(workspace, npt)) for npt in (91, 31, 7)]
coh = array([mean(mean(var, axis=1), axis=0) for var in vars])
coh_point = array([interp1d(freq_ts[:], 1-coh)(day)[()] for day in [7, 30, 90, 120, 180, 365]])
i = 0
Plotting.CoherenceWindowPointPlot(axes[i], coh_point, names[i])
axes[i].legend(loc=3, frameon=False, fontsize=14)
# The second figure
sample = load('%s/coherence_penpan_sample7d_good_stations' %(workspace))
sample_avg = mean(sample, axis=1) # plot ensemble mean and then all aridity
sample_point = array([interp1d(freq_ts[:], 1-sample_avg)(day)[()] for day in [7, 30, 90, 120, 180, 365]])
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
i = 1
Plotting.CoherenceWindowAridityPointPlot(axes[i], sample_point, index_DI, names[i]) # for ensemble: cohere_avg
axes[i].legend(loc=3, frameon=False, fontsize=12)
fig.tight_layout()
plt.show()
# savefig('%s/fig9_coh_penpan_sample_window_dryness_point_good_stations.tif' %(figdir), dpi=300)
return
# Plot_Coherence_loss_samplewindow()
# exit()
def Plot_Coherence_loss_samplewindow_extra():
freq = Coherence_Frequency()
freq_ts = sampling_frequency/freq
names = ['(a) Sampling window', '(b) Dryness (7d window)', '(c) Dryness (30d window)', '(d) Dryness (90d window)']
fig, axes = plt.subplots(2, 2, figsize=(10.5, 6.5))
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
# # The first figure
vars = [load('%s/coherence_penpan_sample%sd_good_stations' %(workspace, npt)) for npt in (91, 31, 7)]
coh = array([mean(mean(var, axis=1), axis=0) for var in vars])
coh_point = array([interp1d(freq_ts[:], 1-coh)(day)[()] for day in [7, 30, 90, 120, 180, 365]])
i = 0
Plotting.CoherenceWindowPointPlot(axes[0,0], coh_point, names[i])
axes[0,0].legend(loc=3, frameon=False, fontsize=14)
# The second figure
sample = load('%s/coherence_penpan_sample7d_good_stations' %(workspace))
sample_avg = mean(sample, axis=1) # plot ensemble mean and then all aridity
sample_point = array([interp1d(freq_ts[:], 1-sample_avg)(day)[()] for day in [7, 30, 90, 120, 180, 365]])
i = 1
Plotting.CoherenceWindowAridityPointPlot(axes[0,1], sample_point, index_DI, names[i]) # for ensemble: cohere_avg
# The 3rd figure
sample = load('%s/coherence_penpan_sample31d_good_stations' %(workspace))
sample_avg = mean(sample, axis=1) # plot ensemble mean and then all aridity
sample_30 = array([interp1d(freq_ts[:], 1-sample_avg)(day)[()] for day in [7, 30, 90, 120, 180, 365]])
i = 2
Plotting.CoherenceWindowAridityPointPlot(axes[1,0], sample_30, index_DI, names[i])
# The 4th figure
sample = load('%s/coherence_penpan_sample91d_good_stations' %(workspace))
sample_avg = mean(sample, axis=1) # plot ensemble mean and then all aridity
sample_90 = array([interp1d(freq_ts[:], 1-sample_avg)(day)[()] for day in [7, 30, 90, 120, 180, 365]])
i = 3
Plotting.CoherenceWindowAridityPointPlot(axes[1,1], sample_90, index_DI, names[i]) # for ensemble: cohere_avg
axes[1,1].legend(loc=3, frameon=False, fontsize=12)
fig.tight_layout()
plt.show()
# savefig('%s/fig10_coh_penpan_sample_window_dryness_point_good_stations_all.tif' %(figdir), dpi=300)
return
# Plot_Coherence_loss_samplewindow_extra()
# exit()
def Plot_Coherence_grid_test():
freq = Coherence_Frequency()
rnet = load('%s/coherence_penpan_removeshort30d_rnet_good_stations' %(workspace))
wind = load('%s/coherence_penpan_removeshort30d_wind_good_stations' %(workspace))
tair = load('%s/coherence_penpan_removeshort30d_tair_good_stations' %(workspace))
ea = load('%s/coherence_penpan_removeshort30d_ea_good_stations' %(workspace))
vars = [tair, rnet, wind, ea]
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
matrix = np.zeros((4, 2, 3))
freq_ts = sampling_frequency/freq
for iv, var in enumerate(vars):
res = array([[interp1d(freq_ts[:], var[istation, :])(day)[()] for day in [7, 15, 30]] for istation in xrange(0, var.shape[0])])
wet = vstack((res[index_DI[0], :], res[index_DI[1], :]))
dry = vstack((res[index_DI[2], :], res[index_DI[3], :]))
matrix[iv, 0, :] = mean(wet, axis=0)
matrix[iv, 1, :] = mean(dry, axis=0)
# rearrange the matrix into table
imshow_data = np.zeros((4, 6))
for id in range(0,2):
for it in range(0,3):
for iv in range(0,4):
row = id*2 + iv/2
col = it*2 + iv%2
print row, col, matrix[iv, id, it]
imshow_data[row, col] = 1 - matrix[iv, id, it]
plt.imshow(imshow_data, cmap='hot_r', interpolation='nearest')
plt.show()
# savefig('%s/coh_penpan_removeshort15_multi_good_stations.tif' %(figdir), dpi=200)
return
# Plot_Coherence_grid_test()
# exit()
def Plot_Coherence_grid_average(fig, ax, npt, title):
"try my best to extract the information"
rnet = load('%s/coherence_penpan_removeshort%sd_rnet_good_stations' %(workspace, npt))
wind = load('%s/coherence_penpan_removeshort%sd_wind_good_stations' %(workspace, npt))
tair = load('%s/coherence_penpan_removeshort%sd_tair_good_stations' %(workspace, npt))
ea = load('%s/coherence_penpan_removeshort%sd_ea_good_stations' %(workspace, npt))
vars = [tair, rnet, wind, ea]
varnames = [r'$T_a$', r'$R_n$', r'$u_2$', r'$e_a$']
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
# make a 3D matrix for summary table
matrix = np.zeros((4, 2, 2))
freq = Coherence_Frequency()
freq_ts = sampling_frequency/freq
for iv, var in enumerate(vars):
index_week = (freq_ts>=2) & (freq_ts<=7)
index_month = (freq_ts>7) & (freq_ts<=30)
res = vstack((mean(var[:, index_week], axis=1), mean(var[:, index_month], axis=1)))
wet = hstack((res[:, index_DI[0]], res[:, index_DI[1]]))
dry = hstack((res[:, index_DI[2]], res[:, index_DI[3]]))
matrix[iv, 0, :] = mean(wet, axis=1)
matrix[iv, 1, :] = mean(dry, axis=1)
# rearrange the matrix into table
imshow_data = np.zeros((4, 4)); imshow_label = np.empty((4, 4), dtype=int)
for id in range(0,2):
for it in range(0,2):
for iv in range(0,4):
row = id*2 + iv/2
col = it*2 + iv%2
print row, col
imshow_data[row, col] = 1 - matrix[iv, id, it] # This is the influence 1-MSC
imshow_label[row, col] = iv
im = ax.imshow(imshow_data, vmax=0.45, vmin=0.0, cmap='YlOrRd', interpolation='nearest')
# colorbar
if npt == 30:
# set up the axis
cax = fig.add_axes([0.91, 0.12, 0.02, 0.78])
cb = fig.colorbar(im, cax) # adjust the size
# cb = ax.colorbar(im) #, fraction=0.046, pad=0.04) # magic number!!!!!
cb.ax.tick_params(labelsize=14) # change the colorbar fontsize
# Text portion
ind_array = np.arange(0, 4, 1)
x, y = meshgrid(ind_array, ind_array)
for xloc, yloc in zip(x.flatten(), y.flatten()):
ax.text(xloc, yloc, varnames[imshow_label[yloc, xloc]], va='center', ha='center', fontsize=20)
# Two separate lines
ax.plot([1.5, 1.5], [-0.5, 3.5], c='black', linewidth=2)
ax.plot([-0.5, 3.5],[1.5, 1.5], c='black', linewidth=2)
# x y label
fig.subplots_adjust(bottom=0.12)
ax.set_xticks((0.5, 2.5))
ax.set_yticks((0.5, 2.5))
ax.set_xticklabels(['Weekly cycle\n(2-7d)', 'Monthly cycle\n(7-30d)'], fontsize=16)
ax.set_yticklabels(["Wet" "\n" r"($\phi$<4)", "Dry" "\n" r"($\phi$>4)"], fontsize=16) # treat this as special string
ax.set_title(title, fontsize=16)
# savefig('%s/coh_grid_average_scale_climate_4var_removeshort30.tif' %(figdir), dpi=300)
return
def Plot_Coherence_grid_average_multiple():
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
Plot_Coherence_grid_average(fig, ax[0], 7, '(a) Window = 7d')
Plot_Coherence_grid_average(fig, ax[1], 30, '(b) Window = 30d')
plt.show()
# savefig('%s/coh_grid_average_scale_climate_4var_removeshort7-30.tif' %(figdir), dpi=300)
return
# Plot_Coherence_grid_average_multiple()
# exit()
def Plot_Coherence_grid_average_update(fig, ax, npt, title):
"try my best to extract the information"
rnet = load('%s/coherence_penpan_removeshort%sd_rnet_good_stations' %(workspace, npt))
wind = load('%s/coherence_penpan_removeshort%sd_wind_good_stations' %(workspace, npt))
tair = load('%s/coherence_penpan_removeshort%sd_tair_good_stations' %(workspace, npt))
ea = load('%s/coherence_penpan_removeshort%sd_ea_good_stations' %(workspace, npt))
vars = [tair, rnet, wind, ea]
varnames = [r'$T_a$', r'$R_n$', r'$u_2$', r'$e_a$']
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
# make a 3D matrix for summary table
matrix = np.zeros((4, 2, 2))
freq = Coherence_Frequency()
freq_ts = sampling_frequency/freq
for iv, var in enumerate(vars):
index_week = (freq_ts>=2) & (freq_ts<=7)
index_month = (freq_ts>7) & (freq_ts<=30)
res = vstack((mean(var[:, index_week], axis=1), mean(var[:, index_month], axis=1)))
wet = hstack((res[:, index_DI[0]], res[:, index_DI[1]]))
dry = hstack((res[:, index_DI[2]], res[:, index_DI[3]]))
matrix[iv, 0, :] = mean(wet, axis=1)
matrix[iv, 1, :] = mean(dry, axis=1)
# rearrange the matrix into table
imshow_data = np.zeros((4, 4)); imshow_label = np.empty((4, 4), dtype=int)
for id in range(0,2):
for it in range(0,2):
for iv in range(0,4):
row = id*2 + iv/2
col = it*2 + iv%2
print row, col
imshow_data[row, col] = 1 - matrix[iv, id, it] # This is the influence 1-MSC
imshow_label[row, col] = iv
im = ax.imshow(imshow_data, vmax=0.45, vmin=0.0, cmap='YlOrRd', interpolation='nearest')
# colorbar
# set up the axis
# cax = fig.add_axes([0.91, 0.12, 0.02, 0.78])
# cb = fig.colorbar(im, cax) # adjust the size
cb = plt.colorbar(im, ax=ax, fraction=0.046, pad=0.04) # magic number!!!!!
cb.ax.tick_params(labelsize=14) # change the colorbar fontsize
# Text portion
ind_array = np.arange(0, 4, 1)
x, y = meshgrid(ind_array, ind_array)
for xloc, yloc in zip(x.flatten(), y.flatten()):
ax.text(xloc, yloc, varnames[imshow_label[yloc, xloc]], va='center', ha='center', fontsize=20)
# Two separate lines
ax.plot([1.5, 1.5], [-0.5, 3.5], c='black', linewidth=2)
ax.plot([-0.5, 3.5],[1.5, 1.5], c='black', linewidth=2)
# x y label
fig.subplots_adjust(bottom=0.12)
ax.set_xticks((0.5, 2.5))
ax.set_yticks((0.5, 2.5))
ax.set_xticklabels(['Weekly cycle\n(2-7d)', 'Monthly cycle\n(7-30d)'], fontsize=14)
if npt==7:
ax.set_yticklabels(["Wet" "\n" r"($\phi$<4)", "Dry" "\n" r"($\phi$>4)"], fontsize=16) # treat this as special string
else:
ax.set_yticklabels(["", ""], fontsize=16) # treat this as special string
ax.set_title(title, fontsize=16)
# savefig('%s/coh_grid_average_scale_climate_4var_removeshort30.tif' %(figdir), dpi=300)
return
def Coherence_grid_average(npt):
"try my best to extract the information"
rnet = load('%s/coherence_penpan_removeshort%sd_rnet_good_stations' %(workspace, npt))
wind = load('%s/coherence_penpan_removeshort%sd_wind_good_stations' %(workspace, npt))
tair = load('%s/coherence_penpan_removeshort%sd_tair_good_stations' %(workspace, npt))
ea = load('%s/coherence_penpan_removeshort%sd_ea_good_stations' %(workspace, npt))
vars = [tair, rnet, wind, ea]
arid = []
for ibasin in xrange(0, 10):
arid.append(load('%s/aridity_station_%s' %(workspace, basinlongs[ibasin])))
arid = array(list(itertools.chain(*arid)))
index_DI = msc_groupby_DI(arid)
# make a 3D matrix for summary table
matrix = np.zeros((4, 2, 2))
freq = Coherence_Frequency()
freq_ts = sampling_frequency/freq
for iv, var in enumerate(vars):
index_week = (freq_ts>=2) & (freq_ts<=7)
index_month = (freq_ts>7) & (freq_ts<=30)
res = vstack((mean(var[:, index_week], axis=1), mean(var[:, index_month], axis=1)))
wet = hstack((res[:, index_DI[0]], res[:, index_DI[1]]))
dry = hstack((res[:, index_DI[2]], res[:, index_DI[3]]))
matrix[iv, 0, :] = mean(wet, axis=1)
matrix[iv, 1, :] = mean(dry, axis=1)
# rearrange the matrix into table
imshow_data = np.zeros((4, 4)); imshow_label = np.empty((4, 4), dtype=int)
for id in range(0,2):
for it in range(0,2):
for iv in range(0,4):
row = id*2 + iv/2
col = it*2 + iv%2
print row, col
imshow_data[row, col] = 1 - matrix[iv, id, it] # This is the influence 1-MSC
imshow_label[row, col] = iv
return imshow_data, imshow_label
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
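# MidpointNormalize pins a chosen data value to the middle of the colormap, so
# a diverging map like 'bwr' stays neutral at zero even when vmin/vmax are
# asymmetric, e.g.:
# MidpointNormalize(vmin=-0.1, midpoint=0, vmax=0.4)(0.0) # -> 0.5 (colormap midpoint)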
def Plot_difference(fig, ax, label, data1, data2, title):
varnames = [r'$T_a$', r'$R_n$', r'$u_2$', r'$e_a$']
# my_cmap = cm.YlOrRd
# my_cmap.set_under('white')
norm = MidpointNormalize(vmax=0.4, midpoint=0, vmin=-0.1)
im = ax.imshow(data2-data1, cmap='bwr', norm=norm, interpolation='nearest')
# colorbar
# set up the axis
cb = plt.colorbar(im, ax=ax, fraction=0.046, pad=0.04) # magic number!!!!!
cb.ax.tick_params(labelsize=14) # change the colorbar fontsize
# Text portion
ind_array = np.arange(0, 4, 1)
x, y = meshgrid(ind_array, ind_array)
for xloc, yloc in zip(x.flatten(), y.flatten()):
ax.text(xloc, yloc, varnames[label[yloc, xloc]], va='center', ha='center', fontsize=20)
# Two separate lines
ax.plot([1.5, 1.5], [-0.5, 3.5], c='black', linewidth=2)
ax.plot([-0.5, 3.5],[1.5, 1.5], c='black', linewidth=2)
# x y label
fig.subplots_adjust(bottom=0.12)
ax.set_xticks((0.5, 2.5))
ax.set_yticks((0.5, 2.5))
ax.set_xticklabels(['Weekly cycle\n(2-7d)', 'Monthly cycle\n(7-30d)'], fontsize=14)
ax.set_yticklabels(["", ""], fontsize=16) # treat this as special string
ax.set_title(title, fontsize=16)
return
def Plot_Coherence_grid_average_multiple_update():
fig, ax = plt.subplots(1, 3, figsize=(12, 4))
Plot_Coherence_grid_average_update(fig, ax[0], 7, '(a) Window = 7d')
Plot_Coherence_grid_average_update(fig, ax[1], 30, '(b) Window = 30d')
data7, labels = Coherence_grid_average(7)
data30, labels = Coherence_grid_average(30)
Plot_difference(fig, ax[2], labels, data7, data30, '(c) 30d - 7d')
fig.tight_layout()
plt.show()
# savefig('%s/Fig9_coh_grid_average_scale_climate_4var_diff_removeshort7-30.tif' %(figdir), dpi=300)
return
# Plot_Coherence_grid_average_multiple_update()
# exit()
def Print_Coherence_Average():
cohere = []
freq = Coherence_Frequency()
freq_ts = sampling_frequency/freq
for ibasin in xrange(0, 10):
cohere_basin = load('%s/coherence_obs_5model_good_station_%s' %(workspace, basinlongs[ibasin]))
cohere.append(cohere_basin)
# for all stations average
res = [interp1d(freq_ts[:], mean(vstack(cohere), axis=0))(day)[()] for day in [250]]
print res
return
# Print_Coherence_Average()
# exit()
| 34.362613 | 146 | 0.689345 | 6,877 | 45,771 | 4.404101 | 0.084194 | 0.039225 | 0.056856 | 0.027074 | 0.833757 | 0.789547 | 0.761185 | 0.732492 | 0.699607 | 0.643642 | 0 | 0.031975 | 0.141793 | 45,771 | 1,331 | 147 | 34.38843 | 0.73906 | 0.266129 | 0 | 0.660526 | 0 | 0 | 0.139728 | 0.074809 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.018421 | null | null | 0.021053 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4b976326693bd6b5616d027c084ce393d9a07aa | 33 | py | Python | lib/hachoir/wx/resource/__init__.py | 0x20Man/Watcher3 | 4656b42bc5879a3741bb95f534b7c6612a25264d | [
"Apache-2.0"
] | 320 | 2017-03-28T23:33:45.000Z | 2022-02-17T08:45:01.000Z | lib/hachoir/wx/resource/__init__.py | 0x20Man/Watcher3 | 4656b42bc5879a3741bb95f534b7c6612a25264d | [
"Apache-2.0"
] | 300 | 2017-03-28T19:22:54.000Z | 2021-12-01T01:11:55.000Z | lib/hachoir/wx/resource/__init__.py | 0x20Man/Watcher3 | 4656b42bc5879a3741bb95f534b7c6612a25264d | [
"Apache-2.0"
] | 90 | 2017-03-29T16:12:43.000Z | 2022-03-01T06:23:48.000Z | from .resource import * # noqa
| 16.5 | 32 | 0.666667 | 4 | 33 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242424 | 33 | 1 | 33 | 33 | 0.88 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
480a9cfe196c5b11bae0bfcd5a756e5a236997fa | 32,966 | py | Python | tests/api/v1/endpoints/test_privacy_request_endpoints.py | nathanawmk/fidesops | 1ab840206a78e60673aebd5838ba567095512a58 | [
"Apache-2.0"
] | null | null | null | tests/api/v1/endpoints/test_privacy_request_endpoints.py | nathanawmk/fidesops | 1ab840206a78e60673aebd5838ba567095512a58 | [
"Apache-2.0"
] | null | null | null | tests/api/v1/endpoints/test_privacy_request_endpoints.py | nathanawmk/fidesops | 1ab840206a78e60673aebd5838ba567095512a58 | [
"Apache-2.0"
] | null | null | null | import json
from datetime import datetime
from typing import List
from unittest import mock
from sqlalchemy import (
column,
table,
select,
)
from fastapi_pagination import Params
import pytest
from starlette.testclient import TestClient
from fidesops.api.v1.urn_registry import (
PRIVACY_REQUESTS,
V1_URL_PREFIX,
REQUEST_PREVIEW,
)
from fidesops.api.v1.scope_registry import (
PRIVACY_REQUEST_CREATE,
STORAGE_CREATE_OR_UPDATE,
PRIVACY_REQUEST_READ,
)
from fidesops.db.session import (
get_db_engine,
get_db_session,
)
from fidesops.models.client import ClientDetail
from fidesops.models.privacy_request import PrivacyRequest
from fidesops.models.policy import DataCategory
from fidesops.schemas.dataset import DryRunDatasetResponse
from fidesops.util.cache import get_identity_cache_key
page_size = Params().size
def stringify_date(log_date: datetime) -> str:
return log_date.strftime("%Y-%m-%dT%H:%M:%S.%f+00:00")
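# e.g. stringify_date(datetime(2021, 8, 30, 16, 9, 37, 359000)) returns
# "2021-08-30T16:09:37.359000+00:00", matching the timestamp format the API
# serializes into its JSON responses.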
class TestCreatePrivacyRequest:
@pytest.fixture(scope="function")
def url(self, oauth_client: ClientDetail, policy) -> str:
return V1_URL_PREFIX + PRIVACY_REQUESTS
def test_privacy_request_unauthenticated(self, api_client: TestClient, url):
resp = api_client.post(url)
assert resp.status_code == 401
def test_privacy_request_wrong_scopes(
self, api_client: TestClient, url, generate_auth_header
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
resp = api_client.post(url, json={}, headers=auth_header)
assert resp.status_code == 403
@mock.patch("fidesops.task.graph_task.run_access_request")
def test_create_privacy_request(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [{"email": "test@example.com"}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
assert run_access_request_mock.called
@mock.patch("fidesops.task.graph_task.run_access_request")
def test_create_privacy_request_limit_exceeded(
self,
_,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
payload = []
for i in range(0, 51):
payload.append(
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [{"email": "ftest{i}@example.com"}],
},
)
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
response = api_client.post(url, headers=auth_header, json=payload)
assert 422 == response.status_code
assert (
json.loads(response.text)["detail"][0]["msg"]
== "ensure this value has at most 50 items"
)
@mock.patch("fidesops.models.privacy_request.PrivacyRequest.start_processing")
def test_create_privacy_request_starts_processing(
self,
start_processing_mock,
url,
api_client: TestClient,
db,
generate_auth_header,
policy,
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [{"email": "test@example.com"}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
assert start_processing_mock.called
response_data = resp.json()["succeeded"]
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
@mock.patch("fidesops.task.graph_task.run_access_request")
def test_create_privacy_request_with_external_id(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
external_id = "ext_some-uuid-here-1234"
data = [
{
"external_id": external_id,
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [{"email": "test@example.com"}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(
V1_URL_PREFIX + PRIVACY_REQUESTS, json=data, headers=auth_header
)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
assert response_data[0]["external_id"] == external_id
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
assert pr.external_id == external_id
pr.delete(db=db)
assert run_access_request_mock.called
@mock.patch("fidesops.task.graph_task.run_access_request")
def test_create_privacy_request_caches_identity(
self,
run_access_request_mock,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
cache,
):
identity = {"email": "test@example.com"}
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [identity],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
key = get_identity_cache_key(
privacy_request_id=pr.id,
identity_attribute=list(identity.keys())[0],
)
assert cache.get(key) == list(identity.values())[0]
pr.delete(db=db)
assert run_access_request_mock.called
def test_create_privacy_request_no_identities(
self,
url,
api_client: TestClient,
generate_auth_header,
policy,
):
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 0
response_data = resp.json()["failed"]
assert len(response_data) == 1
@pytest.mark.integration
def test_create_and_process_access_request(
self,
postgres_example_test_dataset_config,
url,
db,
api_client: TestClient,
generate_auth_header,
policy,
):
customer_email = "customer-1@example.com"
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": policy.key,
"identities": [{"email": customer_email}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
results = pr.get_results()
assert len(results.keys()) == 11
for key in results.keys():
assert results[key] is not None
assert results[key] != {}
result_key_prefix = f"EN_{pr.id}__access_request__postgres_example_test_dataset:"
customer_key = result_key_prefix + "customer"
assert results[customer_key][0]["email"] == customer_email
visit_key = result_key_prefix + "visit"
assert results[visit_key][0]["email"] == customer_email
pr.delete(db=db)
@pytest.mark.integration_erasure
def test_create_and_process_erasure_request_specific_category(
self,
postgres_example_test_dataset_config,
url,
db,
api_client: TestClient,
generate_auth_header,
erasure_policy,
):
customer_email = "customer-1@example.com"
customer_id = 1
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": erasure_policy.key,
"identities": [{"email": customer_email}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
example_postgres_uri = (
"postgresql://postgres:postgres@postgres_example/postgres_example"
)
engine = get_db_engine(database_uri=example_postgres_uri)
SessionLocal = get_db_session(engine=engine)
integration_db = SessionLocal()
stmt = select(
column("id"),
column("name"),
).select_from(table("customer"))
res = integration_db.execute(stmt).all()
customer_found = False
for row in res:
if customer_id in row:
customer_found = True
# Check that the `name` field is `None`
assert row[1] is None
assert customer_found
@pytest.mark.integration_erasure
def test_create_and_process_erasure_request_generic_category(
self,
postgres_example_test_dataset_config,
url,
db,
api_client: TestClient,
generate_auth_header,
erasure_policy,
):
# It's safe to change this here since the `erasure_policy` fixture is scoped
# at function level
target = erasure_policy.rules[0].targets[0]
target.data_category = DataCategory("user.provided.identifiable.contact").value
target.save(db=db)
email = "customer-2@example.com"
customer_id = 2
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": erasure_policy.key,
"identities": [{"email": email}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
example_postgres_uri = (
"postgresql://postgres:postgres@postgres_example/postgres_example"
)
engine = get_db_engine(database_uri=example_postgres_uri)
SessionLocal = get_db_session(engine=engine)
integration_db = SessionLocal()
stmt = select(
column("id"),
column("email"),
).select_from(table("customer"))
res = integration_db.execute(stmt).all()
customer_found = False
for row in res:
if customer_id in row:
customer_found = True
# Check that the `email` field is `None` and that its data category
# ("user.provided.identifiable.contact.email") has been erased by the parent
# category ("user.provided.identifiable.contact")
assert row[1] is None
else:
# There are two other rows, and they should not have been erased
assert row[1] in ["customer-1@example.com", "jane@example.com"]
assert customer_found
@pytest.mark.integration_erasure
def test_create_and_process_erasure_request_with_table_joins(
self,
postgres_example_test_dataset_config,
url,
db,
api_client: TestClient,
generate_auth_header,
erasure_policy,
):
# It's safe to change this here since the `erasure_policy` fixture is scoped
# at function level
target = erasure_policy.rules[0].targets[0]
target.data_category = DataCategory(
"user.provided.identifiable.financial"
).value
target.save(db=db)
customer_email = "customer-1@example.com"
customer_id = 1
data = [
{
"requested_at": "2021-08-30T16:09:37.359Z",
"policy_key": erasure_policy.key,
"identities": [{"email": customer_email}],
}
]
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_CREATE])
resp = api_client.post(url, json=data, headers=auth_header)
assert resp.status_code == 200
response_data = resp.json()["succeeded"]
assert len(response_data) == 1
pr = PrivacyRequest.get(db=db, id=response_data[0]["id"])
pr.delete(db=db)
example_postgres_uri = (
"postgresql://postgres:postgres@postgres_example/postgres_example"
)
engine = get_db_engine(database_uri=example_postgres_uri)
SessionLocal = get_db_session(engine=engine)
integration_db = SessionLocal()
stmt = select(
column("customer_id"),
column("id"),
column("ccn"),
column("code"),
column("name"),
).select_from(table("payment_card"))
res = integration_db.execute(stmt).all()
card_found = False
for row in res:
if row[0] == customer_id:
card_found = True
assert row[2] is None
assert row[3] is None
assert row[4] is None
assert card_found
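# The integration tests above are gated by pytest markers; assuming the markers
# are registered in this repo's pytest configuration, a typical selective run is:
# pytest tests/api/v1/endpoints/test_privacy_request_endpoints.py -m integration_erasure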
class TestGetPrivacyRequests:
@pytest.fixture(scope="function")
def url(self, oauth_client: ClientDetail) -> str:
return V1_URL_PREFIX + PRIVACY_REQUESTS
def test_get_privacy_requests_unauthenticated(self, api_client: TestClient, url):
response = api_client.get(url, headers={})
assert 401 == response.status_code
def test_get_privacy_requests_wrong_scope(
self, api_client: TestClient, generate_auth_header, url
):
auth_header = generate_auth_header(scopes=[STORAGE_CREATE_OR_UPDATE])
response = api_client.get(url, headers=auth_header)
assert 403 == response.status_code
def test_conflicting_query_params(
self, api_client: TestClient, generate_auth_header, url
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?completed_lt=2021-01-01&errored_gt=2021-01-02",
headers=auth_header,
)
assert 400 == response.status_code
def test_get_privacy_requests_by_id(
self,
api_client: TestClient,
url,
generate_auth_header,
privacy_request,
postgres_execution_log,
mongo_execution_log,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?id={privacy_request.id}", headers=auth_header
)
assert 200 == response.status_code
expected_resp = {
"items": [
{
"id": privacy_request.id,
"created_at": stringify_date(privacy_request.created_at),
"started_processing_at": stringify_date(
privacy_request.started_processing_at
),
"finished_processing_at": None,
"status": privacy_request.status.value,
"external_id": privacy_request.external_id,
}
],
"total": 1,
"page": 1,
"size": page_size,
}
resp = response.json()
assert resp == expected_resp
def test_filter_privacy_requests_by_status(
self,
api_client: TestClient,
url,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?status=complete", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == succeeded_privacy_request.id
response = api_client.get(url + f"?status=error", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == failed_privacy_request.id
def test_filter_privacy_requests_by_external_id(
self,
db,
api_client,
url,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?external_id={succeeded_privacy_request.id}", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
privacy_request.external_id = "test_external_id_1"
privacy_request.save(db)
response = api_client.get(
url + f"?external_id=test_external_id_1", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == privacy_request.id
def test_filter_privacy_requests_by_created(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?created_lt=2019-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
response = api_client.get(url + f"?created_gt=2019-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 3
assert resp["items"][0]["id"] == privacy_request.id
assert resp["items"][1]["id"] == succeeded_privacy_request.id
assert resp["items"][2]["id"] == failed_privacy_request.id
def test_filter_privacy_requests_by_started(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?started_lt=2021-05-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 2
assert resp["items"][0]["id"] == privacy_request.id
assert resp["items"][1]["id"] == failed_privacy_request.id
response = api_client.get(url + f"?started_gt=2021-05-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == succeeded_privacy_request.id
def test_filter_privacy_requests_by_completed(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url + f"?completed_lt=2021-10-01", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
response = api_client.get(
url + f"?completed_gt=2021-10-01", headers=auth_header
)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == succeeded_privacy_request.id
def test_filter_privacy_requests_by_errored(
self,
api_client: TestClient,
generate_auth_header,
privacy_request,
succeeded_privacy_request,
failed_privacy_request,
url,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?errored_lt=2021-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 0
response = api_client.get(url + f"?errored_gt=2021-01-01", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert len(resp["items"]) == 1
assert resp["items"][0]["id"] == failed_privacy_request.id
def test_verbose_privacy_requests(
self,
api_client: TestClient,
generate_auth_header,
privacy_request: PrivacyRequest,
postgres_execution_log,
second_postgres_execution_log,
mongo_execution_log,
url,
):
"""Test privacy requests endpoint with verbose query param to show execution logs"""
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(url + f"?verbose=True", headers=auth_header)
assert 200 == response.status_code
resp = response.json()
assert (
postgres_execution_log.updated_at < second_postgres_execution_log.updated_at
)
expected_resp = {
"items": [
{
"id": privacy_request.id,
"created_at": stringify_date(privacy_request.created_at),
"started_processing_at": stringify_date(
privacy_request.started_processing_at
),
"finished_processing_at": None,
"status": privacy_request.status.value,
"external_id": privacy_request.external_id,
"results": {
"my-mongo-db": [
{
"collection_name": "orders",
"fields_affected": [
{
"path": "orders.name",
"field_name": "name",
"data_categories": [
"user.provided.identifiable.contact.name"
],
}
],
"message": None,
"action_type": "access",
"status": "in_processing",
"updated_at": stringify_date(
mongo_execution_log.updated_at
),
}
],
"my-postgres-db": [
{
"collection_name": "user",
"fields_affected": [
{
"path": "user.email",
"field_name": "email",
"data_categories": [
"user.provided.identifiable.contact.email"
],
}
],
"message": None,
"action_type": "access",
"status": "pending",
"updated_at": stringify_date(
postgres_execution_log.updated_at
),
},
{
"collection_name": "address",
"fields_affected": [
{
"path": "address.street",
"field_name": "street",
"data_categories": [
"user.provided.identifiable.contact.street"
],
},
{
"path": "address.city",
"field_name": "city",
"data_categories": [
"user.provided.identifiable.contact.city"
],
},
],
"message": "Database timed out.",
"action_type": "access",
"status": "error",
"updated_at": stringify_date(
second_postgres_execution_log.updated_at
),
},
],
},
},
],
"total": 1,
"page": 1,
"size": page_size,
}
assert resp == expected_resp
class TestGetExecutionLogs:
@pytest.fixture(scope="function")
def url(self, db, privacy_request):
return V1_URL_PREFIX + PRIVACY_REQUESTS + f"/{privacy_request.id}/log"
def test_get_execution_logs_unauthenticated(
self, api_client: TestClient, privacy_request, url
):
response = api_client.get(url + "/", headers={})
assert 401 == response.status_code
def test_get_execution_logs_wrong_scope(
self, api_client: TestClient, generate_auth_header, url
):
auth_header = generate_auth_header(scopes=[STORAGE_CREATE_OR_UPDATE])
response = api_client.get(url, headers=auth_header)
assert 403 == response.status_code
def test_get_execution_logs_invalid_privacy_request_id(
self, api_client: TestClient, generate_auth_header
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
V1_URL_PREFIX + PRIVACY_REQUESTS + f"/invalid_privacy_request_id/log",
headers=auth_header,
)
assert 404 == response.status_code
def test_get_execution_logs(
self,
api_client: TestClient,
generate_auth_header,
url,
postgres_execution_log,
mongo_execution_log,
second_postgres_execution_log,
):
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.get(
url,
headers=auth_header,
)
assert 200 == response.status_code
resp = response.json()
expected_resp = {
"items": [
{
"collection_name": "user",
"fields_affected": [
{
"path": "user.email",
"field_name": "email",
"data_categories": [
"user.provided.identifiable.contact.email"
],
}
],
"message": None,
"action_type": "access",
"status": "pending",
"updated_at": stringify_date(postgres_execution_log.updated_at),
"dataset_name": "my-postgres-db",
},
{
"collection_name": "orders",
"fields_affected": [
{
"path": "orders.name",
"field_name": "name",
"data_categories": [
"user.provided.identifiable.contact.name"
],
}
],
"message": None,
"action_type": "access",
"status": "in_processing",
"updated_at": stringify_date(mongo_execution_log.updated_at),
"dataset_name": "my-mongo-db",
},
{
"collection_name": "address",
"fields_affected": [
{
"path": "address.street",
"field_name": "street",
"data_categories": [
"user.provided.identifiable.contact.street"
],
},
{
"path": "address.city",
"field_name": "city",
"data_categories": [
"user.provided.identifiable.contact.city"
],
},
],
"message": "Database timed out.",
"action_type": "access",
"status": "error",
"updated_at": stringify_date(
second_postgres_execution_log.updated_at
),
"dataset_name": "my-postgres-db",
},
],
"total": 3,
"page": 1,
"size": page_size,
}
assert resp == expected_resp
class TestRequestPreview:
@pytest.fixture(scope="function")
def url(self, db, privacy_request):
return V1_URL_PREFIX + REQUEST_PREVIEW
def test_request_preview(
self,
dataset_config_preview,
api_client: TestClient,
url,
generate_auth_header,
) -> None:
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
data = [dataset_config_preview.fides_key]
response = api_client.put(url, headers=auth_header, json=data)
assert response.status_code == 200
response_body: List[DryRunDatasetResponse] = json.loads(response.text)
assert (
next(
dry_run["query"]
for dry_run in response_body
if dry_run["collectionAddress"]["dataset"] == "postgres"
and dry_run["collectionAddress"]["collection"] == "subscriptions"
)
== "SELECT email,id FROM subscriptions WHERE email = ?"
)
def test_request_preview_all(
self,
dataset_config_preview,
api_client: TestClient,
url,
generate_auth_header,
) -> None:
auth_header = generate_auth_header(scopes=[PRIVACY_REQUEST_READ])
response = api_client.put(url, headers=auth_header)
assert response.status_code == 200
response_body: List[DryRunDatasetResponse] = json.loads(response.text)
assert (
next(
dry_run["query"]
for dry_run in response_body
if dry_run["collectionAddress"]["dataset"] == "postgres"
and dry_run["collectionAddress"]["collection"] == "subscriptions"
)
== "SELECT email,id FROM subscriptions WHERE email = ?"
)
| 36.226374 | 92 | 0.544895 | 3,262 | 32,966 | 5.216125 | 0.087983 | 0.064649 | 0.05501 | 0.040552 | 0.834793 | 0.795592 | 0.769556 | 0.757214 | 0.739289 | 0.721481 | 0 | 0.021366 | 0.358278 | 32,966 | 909 | 93 | 36.266227 | 0.782935 | 0.016987 | 0 | 0.678788 | 0 | 0 | 0.12571 | 0.053655 | 0 | 0 | 0 | 0 | 0.115152 | 1 | 0.041212 | false | 0 | 0.019394 | 0.006061 | 0.071515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
480ec3917010a1744d09ab7f53ce32a734f03500 | 118 | py | Python | rl/spaces/__init__.py | taylor1355/gym-agents | 3ef5aa0d09b82a7ad63358222f5dae3839d6ca04 | [
"MIT"
] | null | null | null | rl/spaces/__init__.py | taylor1355/gym-agents | 3ef5aa0d09b82a7ad63358222f5dae3839d6ca04 | [
"MIT"
] | null | null | null | rl/spaces/__init__.py | taylor1355/gym-agents | 3ef5aa0d09b82a7ad63358222f5dae3839d6ca04 | [
"MIT"
] | null | null | null | from rl.spaces.utils import is_discrete
from rl.spaces.utils import cardinality
from rl.spaces.utils import enumerate
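# Note: `enumerate` imported here shadows the Python built-in of the same name
# within the rl.spaces namespace (and in any module that star-imports it).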
| 29.5 | 39 | 0.847458 | 19 | 118 | 5.210526 | 0.473684 | 0.181818 | 0.363636 | 0.515152 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 118 | 3 | 40 | 39.333333 | 0.933962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
483a43ee05fa7b94bdda74e47cb328c7dc35876e | 41 | py | Python | arcpyext/toolbox/__init__.py | PeterReyne/arcpyext | 9307115da8f0b6a30e2ca741fb6a7d09e54fd0f3 | [
"BSD-3-Clause"
] | 11 | 2015-05-01T04:08:30.000Z | 2019-09-21T05:00:58.000Z | arcpyext/toolbox/__init__.py | PeterReyne/arcpyext | 9307115da8f0b6a30e2ca741fb6a7d09e54fd0f3 | [
"BSD-3-Clause"
] | 14 | 2015-06-23T02:46:44.000Z | 2019-10-11T00:46:11.000Z | arcpyext/toolbox/__init__.py | PeterReyne/arcpyext | 9307115da8f0b6a30e2ca741fb6a7d09e54fd0f3 | [
"BSD-3-Clause"
] | 9 | 2015-02-27T05:25:42.000Z | 2020-01-19T05:43:14.000Z | from .PythonToolbox import PythonToolbox
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4858e5e86d172f70574af5e603520489269b24e6 | 58,042 | py | Python | remodet_repository_LEE/Projects/Det_CATDOG/DetRelease_Net.py | UrwLee/Remo_experience | a59d5b9d6d009524672e415c77d056bc9dd88c72 | [
"MIT"
] | null | null | null | remodet_repository_LEE/Projects/Det_CATDOG/DetRelease_Net.py | UrwLee/Remo_experience | a59d5b9d6d009524672e415c77d056bc9dd88c72 | [
"MIT"
] | null | null | null | remodet_repository_LEE/Projects/Det_CATDOG/DetRelease_Net.py | UrwLee/Remo_experience | a59d5b9d6d009524672e415c77d056bc9dd88c72 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
import sys
import math
sys.dont_write_bytecode = True
import caffe
from caffe import layers as L
from caffe import params as P
from caffe.proto import caffe_pb2
sys.path.append('../')
from PyLib.LayerParam.MultiBoxLossLayerParam import *
from PyLib.NetLib.ConvBNLayer import *
from PyLib.NetLib.InceptionLayer import *
from PyLib.NetLib.MultiScaleLayer import *
from PyLib.NetLib.VggNet import VGG16_BaseNet_ChangeChannel
from PyLib.NetLib.YoloNet import YoloNetPart
from AddC6 import *
from TPDUtils import *
from DetectorHeader import *
from DetNet_Param import *
from DetRelease_Data import *
from DetRelease_General import *
def Deconv(net,from_layer,num_output,group,kernel_size,stride,lr_mult,decay_mult,use_bn,use_scale,use_relu,add_str = "",deconv_name = "_Upsample"):
deconv_param = {
'num_output': num_output,
'kernel_size': kernel_size,
'pad': 0,
'stride': stride,
'weight_filler': dict(type='gaussian', std=0.01),
'bias_filler': dict(type='constant', value=0.0),
'bias_term': True,
'group': group,
}
kwargs_deconv = {
'param': [dict(lr_mult=lr_mult, decay_mult=decay_mult)],
'convolution_param': deconv_param
}
out_layer = from_layer + deconv_name
net[out_layer] = L.Deconvolution(net[from_layer + add_str], **kwargs_deconv)
base_conv_name = out_layer
from_layer = out_layer
# parameters for batchnorm layer.
bn_kwargs = {
'param': [dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0)],
'eps': 0.001,
}
sb_kwargs = {
'bias_term': True,
'param': [dict(lr_mult=lr_mult, decay_mult=0), dict(lr_mult=lr_mult, decay_mult=0)],
'filler': dict(type='constant', value=1.0),
'bias_filler': dict(type='constant', value=0.2),
}
if use_bn:
bn_name = '{}_bn'.format(base_conv_name)
net[bn_name] = L.BatchNorm(net[from_layer], in_place=True, **bn_kwargs)
from_layer = bn_name
if use_scale:
sb_name = '{}_scale'.format(base_conv_name)
net[sb_name] = L.Scale(net[from_layer], in_place=True, **sb_kwargs)
from_layer = sb_name
if use_relu:
relu_name = '{}_relu'.format(base_conv_name)
net[relu_name] = L.ReLU(net[from_layer], in_place=True)
from_layer = relu_name
return out_layer
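# Usage sketch for Deconv (a hypothetical call; the layer name and hyperparameters
# are illustrative, not taken from this file): learn a 2x upsampling of "conv5_5"
# into 128 channels with no BN/scale/ReLU.
#   up = Deconv(net, "conv5_5", num_output=128, group=1, kernel_size=2, stride=2,
#               lr_mult=1.0, decay_mult=1.0, use_bn=False, use_scale=False,
#               use_relu=False)
# The returned layer name ("conv5_5_Upsample") can then feed an Eltwise fusion.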
def MultiScaleEltLayer(net,layers = [],kernels =[], strides = [],out_layer = "",num_channels = 128,lr=1.0,decay=1.0,add_str = "",use_bn = True,flag_withparamname=False):
assert len(layers) == len(kernels) == len(strides)
feat_layers = []
for i in range(len(layers)):
f_layer = layers[i]
o_layer = f_layer + "_adapfeat" + out_layer[-1]
k = kernels[i]
ConvBNUnitLayer(net, f_layer + add_str, o_layer, use_bn=use_bn, use_relu=False,
num_output=num_channels, kernel_size=k, pad=(k-1)/2, stride=strides[i], use_scale=True, leaky=False, lr_mult=lr,
decay_mult=decay,flag_withparamname=flag_withparamname,pose_string=add_str)
feat_layers.append(net[o_layer + add_str])
net[out_layer + add_str] = L.Eltwise(*feat_layers, eltwise_param=dict(operation=P.Eltwise.SUM))
relu_name = out_layer + "_relu" + add_str
net[relu_name] = L.ReLU(net[out_layer + add_str], in_place=True)
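# Usage sketch for MultiScaleEltLayer (illustrative values, mirroring the calls
# further below): adapt two feature maps to a common channel count, fuse them by
# elementwise SUM, then apply ReLU.
#   MultiScaleEltLayer(net, layers=["conv3_3", "conv4_5"], kernels=[3, 3],
#                      strides=[1, 1], out_layer="featuremap1", num_channels=128)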
def DetRelease_FirstBodyPartPoseNet(train=True):
##Step1: Create Data for Body_Part Detection of 16:9, 9:16 and Pose Estimation
##Step2: Create BaseNet for three subnets until conv5_5
##Step3: Create Conv6 for Body_Part Detection for Detection subnets(16:9 and 9:16)
##Step4: Create featuremap1,featuremap2,featuremap3 for Detection subnet_16:9
##Step5: Create featuremap1,featuremap2,featuremap3 for Detection subnet_9:16
##Step6: Create Header and Body Loss for subnet_16:9
##Step7: Create Header and Part Loss for subnet_16:9
##Step8: Create Header and Body Loss for subnet_9:16
##Step9: Create Header and Part Loss for subnet_9:16
##Step10:Create Pose Estimation convf and stage loss
net = caffe.NetSpec()
##Step1: Create Data for Body_Part Detection of 16:9, 9:16 and Pose Estimation
net = get_DAPDataLayer(net, train=train, batchsize=batch_size,data_name = "data",label_name = "label",flag_169=flag_169_global)
if train:
net = get_poseDataLayer(net, train=train, batch_size=batch_size,data_name="data_pose", label_name="label_pose")
net.vec_mask, net.heat_mask, net.vec_temp, net.heat_temp = \
L.Slice(net["label_pose"], ntop=4, slice_param=dict(slice_point=[34, 52, 86], axis=1))
net.vec_label = L.Eltwise(net.vec_mask, net.vec_temp, eltwise_param=dict(operation=P.Eltwise.PROD))
net.heat_label = L.Eltwise(net.heat_mask, net.heat_temp, eltwise_param=dict(operation=P.Eltwise.PROD))
##Step2: Create BaseNet for three subnets until conv5_5
use_bn = False
channels = ((32,), (64,), (128, 64, 128), (192, 96, 192, 96, 192), (256, 128, 256, 128, 256))
strides = (True, True, True, False, False)
kernels = ((3,), (3,), (3, 1, 3), (3, 1, 3, 1, 3), (3, 1, 3, 1, 3))
pool_last = (False,False,False,True,True)
net = VGG16_BaseNet_ChangeChannel(net, from_layer="data", channels=channels, strides=strides,
kernels=kernels,freeze_layers=[], pool_last=pool_last,flag_withparamname=True,add_string='',
use_bn=use_bn,lr_mult=lr_conv1_conv5,decay_mult=1.0,use_global_stats=None)
if train:
pool_last = (False, False, False, True, False)
net = VGG16_BaseNet_ChangeChannel(net, from_layer="data_pose", channels=channels, strides=strides,
kernels=kernels, freeze_layers=[], pool_last=pool_last, flag_withparamname=True,
add_string='_pose', use_bn=use_bn, lr_mult=lr_conv1_conv5, decay_mult=1.0,
use_global_stats=None)
##Step3: Create Conv6 for Body_Part Detection for Detection subnets(16:9 and 9:16)
conv6_output = Conv6_Param.get('conv6_output',[])
conv6_kernal_size = Conv6_Param.get('conv6_kernal_size',[])
from_layer = "pool5"
net = addconv6(net, from_layer=from_layer, use_bn=use_bn, conv6_output=conv6_output, \
conv6_kernal_size=conv6_kernal_size, pre_name="conv6", start_pool=False, lr_mult=lr_conv6_adap,
decay_mult=1, n_group=1, flag_withparamname=False)
##Step4:Create featuremap1,featuremap2,featuremap3 for Detection subnet_16:9
#layers = ["conv3_3", "conv4_5"]
#kernels = [3, 3]
#strides = [1, 1]
#out_layer = "featuremap1"
#num_channels = 128
#add_str = ""
#MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
# num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
# flag_withparamname=False)
#layers = ["conv4_5", "conv5_5"]
#kernels = [3, 3]
#strides = [2, 1]
#out_layer = "featuremap2"
#num_channels = 128
#add_str = ""
#MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
# num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
# flag_withparamname=False)
#layers = ["conv5_5", "conv6_5"]
#kernels = [3, 3]
#strides = [2, 1]
#out_layer = "featuremap3"
#num_channels = 128
#add_str = ""
#MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
# num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
# flag_withparamname=False)
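# The commented-out eltwise multi-scale fusion above is retained for reference;
# the detection feature maps are instead produced by the top-down modules (tdm) below.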
add_str = ""
net['featuremap3'] = L.Concat(net['conv6_5'])
net,topdown16 = tdm(net,'conv5_5','featuremap3',2,freeze = False)
net,topdown8 = tdm(net,'conv4_5','featuremap2',1,freeze = False)
##Step6:Create Header and Body Loss for subnet_16:9
data_layer = "data"
gt_label = "label"
# Both the 16:9 and 9:16 configurations use the same 368x368 input resolution.
net_width = 368
net_height = 368
ssd_Param_1 = get_ssd_Param_1(flag_169=flag_169_global,bboxloss_loc_weight = bboxloss_loc_weight_body,bboxloss_conf_weight=bboxloss_conf_weight_body)
mbox_1_layers = SsdDetectorHeaders(net, \
net_width=net_width, net_height=net_height, data_layer=data_layer, \
from_layers=ssd_Param_1.get('feature_layers', []), \
num_classes=ssd_Param_1.get("num_classes", 2), \
boxsizes=ssd_Param_1.get("anchor_boxsizes", []), \
aspect_ratios=ssd_Param_1.get("anchor_aspect_ratios", []), \
prior_variance=ssd_Param_1.get("anchor_prior_variance", [0.1, 0.1, 0.2, 0.2]), \
flip=ssd_Param_1.get("anchor_flip", True), \
clip=ssd_Param_1.get("anchor_clip", True), \
normalizations=ssd_Param_1.get("interlayers_normalizations", []), \
use_batchnorm=ssd_Param_1.get("interlayers_use_batchnorm", True), \
inter_layer_channels=ssd_Param_1.get("interlayers_channels_kernels", []), \
use_focus_loss=ssd_Param_1.get("bboxloss_using_focus_loss", False), \
use_dense_boxes=ssd_Param_1.get('bboxloss_use_dense_boxes', False), \
stage=1, lr_mult=lr_inter_loss,flag_withparamname=False,add_str=add_str, AnchorFixed = AnchorFixed)
##Step7:Create Header and Part Loss for subnet_16:9
ssd_Param_2 = get_ssd_Param_2(flag_169=flag_169_global, bboxloss_loc_weight=bboxloss_loc_weight_part,bboxloss_conf_weight=bboxloss_conf_weight_part)
mbox_2_layers = SsdDetectorHeaders(net, \
net_width=net_width, net_height=net_height, data_layer=data_layer, \
from_layers=ssd_Param_2.get('feature_layers', []), \
num_classes=ssd_Param_2.get("num_classes", 2), \
boxsizes=ssd_Param_2.get("anchor_boxsizes", []), \
aspect_ratios=ssd_Param_2.get("anchor_aspect_ratios", []), \
prior_variance=ssd_Param_2.get("anchor_prior_variance", [0.1, 0.1, 0.2, 0.2]), \
flip=ssd_Param_2.get("anchor_flip", True), \
clip=ssd_Param_2.get("anchor_clip", True), \
normalizations=ssd_Param_2.get("interlayers_normalizations", []), \
use_batchnorm=ssd_Param_2.get("interlayers_use_batchnorm", True), \
inter_layer_channels=ssd_Param_2.get("interlayers_channels_kernels", []), \
use_focus_loss=ssd_Param_2.get("bboxloss_using_focus_loss", False), \
use_dense_boxes=ssd_Param_2.get('bboxloss_use_dense_boxes', False), \
stage=2, lr_mult=lr_inter_loss,flag_withparamname=False, add_str=add_str)
if train:
loss_param = get_loss_param(normalization=ssd_Param_1.get("bboxloss_normalization", P.Loss.VALID))
mbox_1_layers.append(net[gt_label])
bboxloss_param = {
'gt_labels': ssd_Param_1.get('gt_labels', []),
'target_labels': ssd_Param_1.get('target_labels', []),
'num_classes': ssd_Param_1.get("num_classes", 2),
'alias_id': ssd_Param_1.get("alias_id", 0),
'loc_loss_type': ssd_Param_1.get("bboxloss_loc_loss_type", P.MultiBoxLoss.SMOOTH_L1),
'conf_loss_type': ssd_Param_1.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX),
'loc_weight': ssd_Param_1.get("bboxloss_loc_weight", 1),
'conf_weight': ssd_Param_1.get("bboxloss_conf_weight", 1),
'overlap_threshold': ssd_Param_1.get("bboxloss_overlap_threshold", 0.5),
'neg_overlap': ssd_Param_1.get("bboxloss_neg_overlap", 0.5),
'size_threshold': ssd_Param_1.get("bboxloss_size_threshold", 0.0001),
'do_neg_mining': ssd_Param_1.get("bboxloss_do_neg_mining", True),
'neg_pos_ratio': ssd_Param_1.get("bboxloss_neg_pos_ratio", 3),
'using_focus_loss': ssd_Param_1.get("bboxloss_using_focus_loss", False),
'gama': ssd_Param_1.get("bboxloss_focus_gama", 2),
'use_difficult_gt': ssd_Param_1.get("bboxloss_use_difficult_gt", False),
'code_type': ssd_Param_1.get("bboxloss_code_type", P.PriorBox.CENTER_SIZE),
'flag_noperson':ssd_Param_1.get("flag_noperson", False),
'match_type': P.MultiBoxLoss.PER_PREDICTION,
'share_location': True,
'use_prior_for_matching': True,
'background_label_id': 0,
'encode_variance_in_target': False,
'map_object_to_agnostic': False,
'matchtype_anchorgt':ssd_Param_1.get("matchtype_anchorgt", "REMOVELARGMARGIN"),
'margin_ratio':ssd_Param_1.get("margin_ratio", 0.25),
'sigma_angtdist':ssd_Param_1.get("sigma_angtdist", 0.1),
}
if body_loss_type == "BBoxLoss":
net["mbox_1_loss"] = L.BBoxLoss(*mbox_1_layers, bbox_loss_param=bboxloss_param, \
loss_param=loss_param, include=dict(phase=caffe_pb2.Phase.Value('TRAIN')), \
propagate_down=[True, True, False, False])
else:
net["mbox_1_loss"] = L.BBoxLossWTIOUCKCOVER(*mbox_1_layers, bbox_loss_param=bboxloss_param, \
loss_param=loss_param, include=dict(phase=caffe_pb2.Phase.Value('TRAIN')), \
propagate_down=[True, True, False, False])
loss_param = get_loss_param(normalization=ssd_Param_2.get("bboxloss_normalization", P.Loss.VALID))
mbox_2_layers.append(net[gt_label])
bboxloss_param = {
'gt_labels': ssd_Param_2.get('gt_labels', []),
'target_labels': ssd_Param_2.get('target_labels', []),
'num_classes': ssd_Param_2.get("num_classes", 2),
'alias_id': ssd_Param_2.get("alias_id", 0),
'loc_loss_type': ssd_Param_2.get("bboxloss_loc_loss_type", P.MultiBoxLoss.SMOOTH_L1),
'conf_loss_type': ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.LOGISTIC),
'loc_weight': ssd_Param_2.get("bboxloss_loc_weight", 1),
'conf_weight': ssd_Param_2.get("bboxloss_conf_weight", 1),
'overlap_threshold': ssd_Param_2.get("bboxloss_overlap_threshold", 0.5),
'neg_overlap': ssd_Param_2.get("bboxloss_neg_overlap", 0.5),
'size_threshold': ssd_Param_2.get("bboxloss_size_threshold", 0.0001),
'do_neg_mining': ssd_Param_2.get("bboxloss_do_neg_mining", True),
'neg_pos_ratio': ssd_Param_2.get("bboxloss_neg_pos_ratio", 3),
'using_focus_loss': ssd_Param_2.get("bboxloss_using_focus_loss", False),
'gama': ssd_Param_2.get("bboxloss_focus_gama", 2),
'use_difficult_gt': ssd_Param_2.get("bboxloss_use_difficult_gt", False),
'code_type': ssd_Param_2.get("bboxloss_code_type", P.PriorBox.CENTER_SIZE),
'use_prior_for_matching': True,
'encode_variance_in_target': False,
'flag_noperson': ssd_Param_2.get('flag_noperson', False),
}
net["mbox_2_loss"] = L.DenseBBoxLoss(*mbox_2_layers, dense_bbox_loss_param=bboxloss_param, \
loss_param=loss_param,
include=dict(phase=caffe_pb2.Phase.Value('TRAIN')), \
propagate_down=[True, True, False, False])
##Step10:Create Pose Estimation convf and stage loss
from_layer = "conv5_5"
add_str = "_pose"
num_output = 128
group = 1
kernel_size = 2
stride = 2
use_bn = False
use_scale = False
use_relu = False
out_layer1 = Deconv(net, from_layer, num_output, group, kernel_size, stride, lr_pose, 1.0, use_bn, use_scale,use_relu,add_str)
from_layer = "conv4_5"
out_layer2 = from_layer + "_adap"
kernel_size = 3
ConvBNUnitLayer(net, from_layer + add_str, out_layer2, use_bn=use_bn, use_relu=False,
num_output=num_output, kernel_size=kernel_size, pad=(kernel_size - 1) / 2, stride=1, use_scale=use_scale,
leaky=False, lr_mult=lr_pose,decay_mult=1.0)
feat_layers = []
feat_layers.append(net[out_layer1])
feat_layers.append(net[out_layer2])
out_layer = "convf"
net[out_layer] = L.Eltwise(*feat_layers, eltwise_param=dict(operation=P.Eltwise.SUM))
# relu_name = out_layer + "_relu"
# net[relu_name] = L.ReLU(net[out_layer], in_place=True)
use_stage = 3
use_3_layers = 5
use_1_layers = 0
n_channel = 64
kernel_size = 3
baselayer = "convf"
flag_output_sigmoid = False
for stage in range(use_stage):
if stage == 0:
from_layer = baselayer
else:
from_layer = "concat_stage{}".format(stage)
outlayer = "concat_stage{}".format(stage + 1)
if stage == use_stage - 1:
short_cut = False
else:
short_cut = True
net = mPose_StageX_Train(net, from_layer=from_layer, out_layer=outlayer, stage=stage + 1,
mask_vec="vec_mask", mask_heat="heat_mask", \
label_vec="vec_label", label_heat="heat_label", \
use_3_layers=use_3_layers, use_1_layers=use_1_layers, short_cut=short_cut, \
base_layer=baselayer, lr=lr_pose, decay=1.0, num_channels=n_channel,
kernel_size=kernel_size, flag_sigmoid=flag_output_sigmoid,loss_weight=0.1)
else:
if ssd_Param_1.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.SOFTMAX:
reshape_name = "mbox_1_conf_reshape" + add_str
net[reshape_name] = L.Reshape(mbox_1_layers[1], \
shape=dict(dim=[0, -1, ssd_Param_1.get("num_classes", 2)]))
softmax_name = "mbox_1_conf_softmax" + add_str
net[softmax_name] = L.Softmax(net[reshape_name], axis=2)
flatten_name = "mbox_1_conf_flatten" + add_str
net[flatten_name] = L.Flatten(net[softmax_name], axis=1)
mbox_1_layers[1] = net[flatten_name]
elif ssd_Param_1.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.LOGISTIC:
sigmoid_name = "mbox_1_conf_sigmoid" + add_str
net[sigmoid_name] = L.Sigmoid(mbox_1_layers[1])
mbox_1_layers[1] = net[sigmoid_name]
else:
raise ValueError("Unknown conf loss type.")
det_out_param = {
'num_classes': ssd_Param_1.get("num_classes", 2),
'target_labels': ssd_Param_1.get('detout_target_labels', []),
'alias_id': ssd_Param_1.get("alias_id", 0),
'conf_threshold': ssd_Param_1.get("detout_conf_threshold", 0.01),
'nms_threshold': ssd_Param_1.get("detout_nms_threshold", 0.45),
'size_threshold': ssd_Param_1.get("detout_size_threshold", 0.0001),
'top_k': ssd_Param_1.get("detout_top_k", 30),
'share_location': True,
'code_type': P.PriorBox.CENTER_SIZE,
'background_label_id': 0,
'variance_encoded_in_target': False,
}
net["detection_out_1" + add_str] = L.DetOut(*mbox_1_layers, \
detection_output_param=det_out_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
##Step7:Create Part Header and sigmoid conf for subnet_16:9
if ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.SOFTMAX:
reshape_name = "mbox_2_conf_reshape" + add_str
net[reshape_name] = L.Reshape(mbox_2_layers[1], \
shape=dict(dim=[0, -1, ssd_Param_2.get("num_classes", 2)]))
softmax_name = "mbox_2_conf_softmax" + add_str
net[softmax_name] = L.Softmax(net[reshape_name], axis=2)
flatten_name = "mbox_2_conf_flatten" + add_str
net[flatten_name] = L.Flatten(net[softmax_name], axis=1)
mbox_2_layers[1] = net[flatten_name]
elif ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.LOGISTIC:
sigmoid_name = "mbox_2_conf_sigmoid" + add_str
net[sigmoid_name] = L.Sigmoid(mbox_2_layers[1])
mbox_2_layers[1] = net[sigmoid_name]
else:
raise ValueError("Unknown conf loss type.")
det_out_param = {
'num_classes': ssd_Param_2.get("num_classes", 2),
'target_labels': ssd_Param_2.get('detout_target_labels', []),
'alias_id': ssd_Param_2.get("alias_id", 0),
'conf_threshold': ssd_Param_2.get("detout_conf_threshold", 0.01),
'nms_threshold': ssd_Param_2.get("detout_nms_threshold", 0.45),
'size_threshold': ssd_Param_2.get("detout_size_threshold", 0.0001),
'top_k': ssd_Param_2.get("detout_top_k", 30),
'share_location': True,
'code_type': P.PriorBox.CENTER_SIZE,
'background_label_id': 0,
'variance_encoded_in_target': False,
}
net["detection_out_2" + add_str] = L.DenseDetOut(*mbox_2_layers, \
detection_output_param=det_out_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
##Step8:Create evaluation part for subnet_16:9
eval_Param = get_eval_Param([0, 1, 3])
det_eval_param = {
'gt_labels': eval_Param.get('eval_gt_labels', []),
'num_classes': eval_Param.get("eval_num_classes", 2),
'evaluate_difficult_gt': eval_Param.get("eval_difficult_gt", False),
'boxsize_threshold': eval_Param.get("eval_boxsize_threshold", [0, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25]),
'iou_threshold': eval_Param.get("eval_iou_threshold", [0.9, 0.75, 0.5]),
'background_label_id': 0,
}
det_out_layers = []
det_out_layers.append(net['detection_out_1' + add_str])
det_out_layers.append(net['detection_out_2' + add_str])
name = 'det_out' + add_str
net[name] = L.Concat(*det_out_layers, axis=2)
net["det_accu" + add_str] = L.DetEval(net[name], net[gt_label], \
detection_evaluate_param=det_eval_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
return net
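# Sketch: serializing the assembled NetSpec to a prototxt file. The output path
# below is an assumption for illustration, not something defined by this project.
#   net = DetRelease_FirstBodyPartPoseNet(train=True)
#   with open("train.prototxt", "w") as f:
#       f.write(str(net.to_proto()))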
def DetRelease_SecondPartAllNet(train=True):
net = caffe.NetSpec()
if train:
##Step1: Create Data for Body_Part Detection of 16:9, 9:16 and Pose Estimation
net = get_DAPDataLayer(net, train=train, batchsize=batch_size,data_name = "data",label_name = "label",flag_169=flag_169_global)
net = get_MinihandDataLayer(net, train=train, data_name="data_minihand", label_name="label_minihand", flag_169=flag_169_global)
else:
net = get_DAPDataLayer(net, train=train, batchsize=batch_size, data_name="data", label_name="label",flag_169=flag_169_global)
##Step2: Create BaseNet for three subnets until conv5_5
lr_mult = 0.0
decay_mult = 1.0
use_bn = False
channels = ((32,), (64,), (128, 64, 128), (192, 96, 192, 96, 192), (256, 128, 256, 128, 256))
strides = (True, True, True, False, False)
kernels = ((3,), (3,), (3, 1, 3), (3, 1, 3, 1, 3), (3, 1, 3, 1, 3))
pool_last = (False,False,False,True,True)
net = VGG16_BaseNet_ChangeChannel(net, from_layer="data", channels=channels, strides=strides,
kernels=kernels,freeze_layers=[], pool_last=pool_last,flag_withparamname=True,add_string='',
use_bn=use_bn,lr_mult=lr_conv1_conv5,decay_mult=1.0,use_global_stats=None)
if train:
channels = ((32,), (64,), (128, 64, 128), (192, 96, 192, 96, 192))
strides = (True, True, True, False)
kernels = ((3,), (3,), (3, 1, 3), (3, 1, 3, 1, 3))
pool_last = (False, False, False, False)
net = VGG16_BaseNet_ChangeChannel(net, from_layer="data_minihand", channels=channels, strides=strides,
kernels=kernels, freeze_layers=[], pool_last=pool_last, flag_withparamname=True,
add_string='_minihand',use_bn=use_bn, lr_mult=lr_conv1_conv5, decay_mult=1.0, use_global_stats=None)
##Step3: Create Conv6 for Body_Part Detection for Detection subnets(16:9 and 9:16)
conv6_output = Conv6_Param.get('conv6_output',[])
conv6_kernal_size = Conv6_Param.get('conv6_kernal_size',[])
from_layer = "pool5"
net = addconv6(net, from_layer=from_layer, use_bn=use_bn, conv6_output=conv6_output, \
conv6_kernal_size=conv6_kernal_size, pre_name="conv6", start_pool=False, lr_mult=lr_conv6_adap,
decay_mult=1, n_group=1, flag_withparamname=False)
##Step4:Create featuremap1,featuremap2,featuremap3 for Detection subnet_16:9
layers = ["conv3_3", "conv4_5"]
kernels = [3, 3]
strides = [1, 1]
out_layer = "featuremap1"
num_channels = 128
add_str = ""
MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
flag_withparamname=True)
layers = ["conv4_5", "conv5_5"]
kernels = [3, 3]
strides = [2, 1]
out_layer = "featuremap2"
num_channels = 128
add_str = ""
MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
flag_withparamname=True)
layers = ["conv5_5", "conv6_5"]
kernels = [3, 3]
strides = [2, 1]
out_layer = "featuremap3"
num_channels = 128
add_str = ""
MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
flag_withparamname=True)
##Step6:Create Header and Body Loss for subnet_16:9
add_str = ""
data_layer = "data" + add_str
# Both the 16:9 and 9:16 configurations use the same 512x288 input resolution.
net_width = 512
net_height = 288
##Step7:Create Header and Part Loss for subnet_16:9
ssd_Param_2 = get_ssd_Param_2(flag_169=flag_169_global, bboxloss_loc_weight=bboxloss_loc_weight_part, bboxloss_conf_weight=bboxloss_conf_weight_part)
mbox_2_layers = SsdDetectorHeaders(net, \
net_width=net_width, net_height=net_height, data_layer=data_layer, \
from_layers=ssd_Param_2.get('feature_layers', []), \
num_classes=ssd_Param_2.get("num_classes", 2), \
boxsizes=ssd_Param_2.get("anchor_boxsizes", []), \
aspect_ratios=ssd_Param_2.get("anchor_aspect_ratios", []), \
prior_variance=ssd_Param_2.get("anchor_prior_variance", [0.1, 0.1, 0.2, 0.2]), \
flip=ssd_Param_2.get("anchor_flip", True), \
clip=ssd_Param_2.get("anchor_clip", True), \
normalizations=ssd_Param_2.get("interlayers_normalizations", []), \
use_batchnorm=ssd_Param_2.get("interlayers_use_batchnorm", True), \
inter_layer_channels=ssd_Param_2.get("interlayers_channels_kernels", []), \
use_focus_loss=ssd_Param_2.get("bboxloss_using_focus_loss", False), \
use_dense_boxes=ssd_Param_2.get('bboxloss_use_dense_boxes', False), \
stage=2,flag_withparamname=False,add_str=add_str,lr_mult=lr_inter_loss)
use_bn = False
init_xavier = False
if train:
add_str = "_minihand"
else:
add_str = ""
from_layer = "conv1" + add_str
out_layer = 'conv2_hand'
ConvBNUnitLayer(net, from_layer, out_layer, use_bn=use_bn, use_relu=False,
num_output=64, kernel_size=3, pad=1, stride=2, use_scale=False, leaky=False, lr_mult=1,
decay_mult=1, init_xavier=init_xavier)
from_layer = "conv4_5"
Deconv(net, from_layer, num_output=64, group=1, kernel_size=2, stride=2, lr_mult=1.0, decay_mult=1.0,
use_bn=use_bn, use_scale=use_bn, use_relu=False, add_str=add_str,deconv_name="_miniUpsample")
out_layer = "mini_multiscale"
net[out_layer] = L.Eltwise(net["conv2_hand"], net["conv4_5" + "_miniUpsample"],
eltwise_param=dict(operation=P.Eltwise.SUM))
from_layer = out_layer
out_layer = from_layer + "_relu"
net[out_layer] = L.ReLU(net[from_layer], in_place=True)
data_layer = "data" + add_str
ssd_Param_3 = get_ssd_Param_3(flag_169_global, bboxloss_loc_weight=bboxloss_loc_weight_part, bboxloss_conf_weight=bboxloss_conf_weight_part)
mbox_3_layers = SsdDetectorHeaders(net, \
net_width=net_width, net_height=net_height, data_layer=data_layer, \
from_layers=ssd_Param_3.get('feature_layers', []), \
num_classes=ssd_Param_3.get("num_classes", 2), \
boxsizes=ssd_Param_3.get("anchor_boxsizes", []), \
aspect_ratios=ssd_Param_3.get("anchor_aspect_ratios", []), \
prior_variance=ssd_Param_3.get("anchor_prior_variance",
[0.1, 0.1, 0.2, 0.2]), \
flip=ssd_Param_3.get("anchor_flip", True), \
clip=ssd_Param_3.get("anchor_clip", True), \
normalizations=ssd_Param_3.get("interlayers_normalizations", []), \
use_batchnorm=ssd_Param_3.get("interlayers_use_batchnorm", True), \
inter_layer_channels=ssd_Param_3.get("interlayers_channels_kernels", []), \
use_focus_loss=ssd_Param_3.get("bboxloss_using_focus_loss", False), \
use_dense_boxes=ssd_Param_3.get('bboxloss_use_dense_boxes', False), \
stage=3,lr_mult=lr_inter_loss)
if train:
gt_label = "label"
loss_param = get_loss_param(normalization=ssd_Param_2.get("bboxloss_normalization", P.Loss.VALID))
mbox_2_layers.append(net[gt_label])
bboxloss_param = {
'gt_labels': ssd_Param_2.get('gt_labels', []),
'target_labels': ssd_Param_2.get('target_labels', []),
'num_classes': ssd_Param_2.get("num_classes", 2),
'alias_id': ssd_Param_2.get("alias_id", 0),
'loc_loss_type': ssd_Param_2.get("bboxloss_loc_loss_type", P.MultiBoxLoss.SMOOTH_L1),
'conf_loss_type': ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.LOGISTIC),
'loc_weight': ssd_Param_2.get("bboxloss_loc_weight", 1),
'conf_weight': ssd_Param_2.get("bboxloss_conf_weight", 1),
'overlap_threshold': ssd_Param_2.get("bboxloss_overlap_threshold", 0.5),
'neg_overlap': ssd_Param_2.get("bboxloss_neg_overlap", 0.5),
'size_threshold': ssd_Param_2.get("bboxloss_size_threshold", 0.0001),
'do_neg_mining': ssd_Param_2.get("bboxloss_do_neg_mining", True),
'neg_pos_ratio': ssd_Param_2.get("bboxloss_neg_pos_ratio", 3),
'using_focus_loss': ssd_Param_2.get("bboxloss_using_focus_loss", False),
'gama': ssd_Param_2.get("bboxloss_focus_gama", 2),
'use_difficult_gt': ssd_Param_2.get("bboxloss_use_difficult_gt", False),
'code_type': ssd_Param_2.get("bboxloss_code_type", P.PriorBox.CENTER_SIZE),
'use_prior_for_matching': True,
'encode_variance_in_target': False,
'flag_noperson': ssd_Param_2.get('flag_noperson', False),
}
net["mbox_2_loss"] = L.DenseBBoxLoss(*mbox_2_layers, dense_bbox_loss_param=bboxloss_param, \
loss_param=loss_param,
include=dict(phase=caffe_pb2.Phase.Value('TRAIN')), \
propagate_down=[True, True, False, False])
gt_label = "label_minihand"
loss_param = get_loss_param(normalization=ssd_Param_3.get("bboxloss_normalization", P.Loss.VALID))
mbox_3_layers.append(net[gt_label])
bboxloss_param = {
'gt_labels': ssd_Param_3.get('gt_labels', []),
'target_labels': ssd_Param_3.get('target_labels', []),
'num_classes': ssd_Param_3.get("num_classes", 2),
'alias_id': ssd_Param_3.get("alias_id", 0),
'loc_loss_type': ssd_Param_3.get("bboxloss_loc_loss_type", P.MultiBoxLoss.SMOOTH_L1),
'conf_loss_type': ssd_Param_3.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX),
'loc_weight': ssd_Param_3.get("bboxloss_loc_weight", 1),
'conf_weight': ssd_Param_3.get("bboxloss_conf_weight", 1),
'overlap_threshold': ssd_Param_3.get("bboxloss_overlap_threshold", 0.5),
'neg_overlap': ssd_Param_3.get("bboxloss_neg_overlap", 0.5),
'size_threshold': ssd_Param_3.get("bboxloss_size_threshold", 0.0001),
'do_neg_mining': ssd_Param_3.get("bboxloss_do_neg_mining", True),
'neg_pos_ratio': ssd_Param_3.get("bboxloss_neg_pos_ratio", 3),
'using_focus_loss': ssd_Param_3.get("bboxloss_using_focus_loss", False),
'gama': ssd_Param_3.get("bboxloss_focus_gama", 2),
'use_difficult_gt': ssd_Param_3.get("bboxloss_use_difficult_gt", False),
'code_type': ssd_Param_3.get("bboxloss_code_type", P.PriorBox.CENTER_SIZE),
'flag_noperson': ssd_Param_3.get('flag_noperson', False),
'match_type': P.MultiBoxLoss.PER_PREDICTION,
'share_location': True,
'use_prior_for_matching': True,
'background_label_id': 0,
'encode_variance_in_target': False,
'map_object_to_agnostic': False,
}
net["mbox_3_loss"] = L.BBoxLoss(*mbox_3_layers, bbox_loss_param=bboxloss_param, \
loss_param=loss_param,
include=dict(phase=caffe_pb2.Phase.Value('TRAIN')), \
propagate_down=[True, True, False, False])
else:
if ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.SOFTMAX:
reshape_name = "mbox_2_conf_reshape" + add_str
net[reshape_name] = L.Reshape(mbox_2_layers[1], \
shape=dict(dim=[0, -1, ssd_Param_2.get("num_classes", 2)]))
softmax_name = "mbox_2_conf_softmax" + add_str
net[softmax_name] = L.Softmax(net[reshape_name], axis=2)
flatten_name = "mbox_2_conf_flatten" + add_str
net[flatten_name] = L.Flatten(net[softmax_name], axis=1)
mbox_2_layers[1] = net[flatten_name]
elif ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.LOGISTIC:
sigmoid_name = "mbox_2_conf_sigmoid" + add_str
net[sigmoid_name] = L.Sigmoid(mbox_2_layers[1])
mbox_2_layers[1] = net[sigmoid_name]
else:
raise ValueError("Unknown conf loss type.")
det_out_param = {
'num_classes': ssd_Param_2.get("num_classes", 2),
'target_labels': ssd_Param_2.get('detout_target_labels', []),
'alias_id': ssd_Param_2.get("alias_id", 0),
'conf_threshold': ssd_Param_2.get("detout_conf_threshold", 0.01),
'nms_threshold': ssd_Param_2.get("detout_nms_threshold", 0.45),
'size_threshold': ssd_Param_2.get("detout_size_threshold", 0.0001),
'top_k': ssd_Param_2.get("detout_top_k", 30),
'share_location': True,
'code_type': P.PriorBox.CENTER_SIZE,
'background_label_id': 0,
'variance_encoded_in_target': False,
}
net["detection_out_2"] = L.DenseDetOut(*mbox_2_layers, \
detection_output_param=det_out_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
if ssd_Param_3.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.SOFTMAX:
reshape_name = "mbox_3_conf_reshape" + add_str
net[reshape_name] = L.Reshape(mbox_3_layers[1], \
shape=dict(dim=[0, -1, ssd_Param_3.get("num_classes", 2)]))
softmax_name = "mbox_3_conf_softmax" + add_str
net[softmax_name] = L.Softmax(net[reshape_name], axis=2)
flatten_name = "mbox_3_conf_flatten" + add_str
net[flatten_name] = L.Flatten(net[softmax_name], axis=1)
mbox_3_layers[1] = net[flatten_name]
elif ssd_Param_3.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.LOGISTIC:
sigmoid_name = "mbox_3_conf_sigmoid" + add_str
net[sigmoid_name] = L.Sigmoid(mbox_3_layers[1])
mbox_3_layers[1] = net[sigmoid_name]
else:
raise ValueError("Unknown conf loss type.")
det_out_param = {
'num_classes': ssd_Param_3.get("num_classes", 2),
'target_labels': ssd_Param_3.get('detout_target_labels', []),
'alias_id': ssd_Param_3.get("alias_id", 0),
'conf_threshold': ssd_Param_3.get("detout_conf_threshold", 0.01),
'nms_threshold': ssd_Param_3.get("detout_nms_threshold", 0.45),
'size_threshold': ssd_Param_3.get("detout_size_threshold", 0.0001),
'top_k': ssd_Param_3.get("detout_top_k", 30),
'share_location': True,
'code_type': P.PriorBox.CENTER_SIZE,
'background_label_id': 0,
'variance_encoded_in_target': False,
}
net["detection_out_3"] = L.DenseDetOut(*mbox_3_layers, \
detection_output_param=det_out_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
##Step8:Create evaluation part for subnet_16:9
eval_Param = get_eval_Param([1, 3])
det_eval_param = {
'gt_labels': eval_Param.get('eval_gt_labels', []),
'num_classes': eval_Param.get("eval_num_classes", 2),
'evaluate_difficult_gt': eval_Param.get("eval_difficult_gt", False),
'boxsize_threshold': eval_Param.get("eval_boxsize_threshold", [0, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25]),
'iou_threshold': eval_Param.get("eval_iou_threshold", [0.9, 0.75, 0.5]),
'background_label_id': 0,
}
det_out_layers = []
det_out_layers.append(net['detection_out_2' + add_str])
det_out_layers.append(net['detection_out_3' + add_str])
name = 'det_out' + add_str
net[name] = L.Concat(*det_out_layers, axis=2)
net["det_accu" + add_str] = L.DetEval(net[name], net["label"], \
detection_evaluate_param=det_eval_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
return net
def DetRelease_SecondPartAllNetMiniHandFace(train=True):
net = caffe.NetSpec()
##Step1: Create Data for Body_Part Detection of 16:9, 9:16 and Pose Estimation
if train:
net = get_DAPDataLayer(net, train=train, batchsize=batch_size, data_name="data_pd", label_name="label_pd",
flag_169=flag_169_global)
net = get_MinihandDataLayer(net, train=train, data_name="data_minihand", label_name="label_minihand", flag_169=flag_169_global)
data = []
data.append(net["data_minihand"])
data.append(net["data_pd"])
net["data"] = L.Concat(*data, axis=0)
label = []
label.append(net["label_minihand"])
label.append(net["label_pd"])
net["label"] = L.Concat(*label, axis=2)
else:
net = get_DAPDataLayer(net, train=train, batchsize=batch_size, data_name="data", label_name="label",flag_169=flag_169_global)
##Step2: Create BaseNet for three subnets until conv5_5
use_bn = False
channels = ((32,), (64,), (128, 64, 128), (192, 96, 192, 96, 192), (256, 128, 256, 128, 256))
strides = (True, True, True, False, False)
kernels = ((3,), (3,), (3, 1, 3), (3, 1, 3, 1, 3), (3, 1, 3, 1, 3))
pool_last = (False,False,False,True,True)
net = VGG16_BaseNet_ChangeChannel(net, from_layer="data", channels=channels, strides=strides,
kernels=kernels,freeze_layers=[], pool_last=pool_last,flag_withparamname=True,add_string='',
use_bn=use_bn,lr_mult=lr_conv1_conv5,decay_mult=1.0,use_global_stats=None)
##Step3: Create Conv6 for Body_Part Detection for Detection subnets(16:9 and 9:16)
conv6_output = Conv6_Param.get('conv6_output',[])
conv6_kernal_size = Conv6_Param.get('conv6_kernal_size',[])
from_layer = "pool5"
net = addconv6(net, from_layer=from_layer, use_bn=use_bn, conv6_output=conv6_output, \
conv6_kernal_size=conv6_kernal_size, pre_name="conv6", start_pool=False, lr_mult=lr_conv6_adap,
decay_mult=1, n_group=1, flag_withparamname=False)
##Step4:Create featuremap1,featuremap2,featuremap3 for Detection subnet_16:9
layers = ["conv3_3", "conv4_5"]
kernels = [3, 3]
strides = [1, 1]
out_layer = "featuremap1"
num_channels = 128
add_str = ""
MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
flag_withparamname=True)
layers = ["conv4_5", "conv5_5"]
kernels = [3, 3]
strides = [2, 1]
out_layer = "featuremap2"
num_channels = 128
add_str = ""
MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
flag_withparamname=True)
layers = ["conv5_5", "conv6_5"]
kernels = [3, 3]
strides = [2, 1]
out_layer = "featuremap3"
num_channels = 128
add_str = ""
MultiScaleEltLayer(net, layers=layers, kernels=kernels, strides=strides, out_layer=out_layer,
num_channels=num_channels, lr=lr_conv6_adap, decay=1.0, use_bn=use_bn, add_str=add_str,
flag_withparamname=True)
use_bn = False
init_xavier = False
from_layer = "conv1"
out_layer = 'conv2_mini'
ConvBNUnitLayer(net, from_layer, out_layer, use_bn=use_bn, use_relu=False,
num_output=64, kernel_size=3, pad=1, stride=2, use_scale=False, leaky=False, lr_mult=1,
decay_mult=1, init_xavier=init_xavier)
from_layer = "conv4_5"
Deconv(net, from_layer, num_output=64, group=1, kernel_size=2, stride=2, lr_mult=1.0, decay_mult=1.0,
use_bn=use_bn, use_scale=use_bn, use_relu=False, add_str="", deconv_name="_miniUpsample")
out_layer = "mini_multiscale"
net[out_layer] = L.Eltwise(net["conv2_mini"], net["conv4_5" + "_miniUpsample"],
eltwise_param=dict(operation=P.Eltwise.SUM))
from_layer = out_layer
out_layer = from_layer + "_relu"
net[out_layer] = L.ReLU(net[from_layer], in_place=True)
##Step6:Create Header and Body Loss for subnet_16:9
data_layer = "data"
gt_label = "label"
# Both the 16:9 and 9:16 configurations use the same 512x288 input resolution.
net_width = 512
net_height = 288
##Step7:Create Header and Part Loss for subnet_16:9
ssd_Param_2 = get_ssd_Param_4(flag_169=flag_169_global, bboxloss_loc_weight=bboxloss_loc_weight_part, bboxloss_conf_weight=bboxloss_conf_weight_part)
mbox_2_layers = SsdDetectorHeaders(net, \
net_width=net_width, net_height=net_height, data_layer=data_layer, \
from_layers=ssd_Param_2.get('feature_layers', []), \
num_classes=ssd_Param_2.get("num_classes", 2), \
boxsizes=ssd_Param_2.get("anchor_boxsizes", []), \
aspect_ratios=ssd_Param_2.get("anchor_aspect_ratios", []), \
prior_variance=ssd_Param_2.get("anchor_prior_variance", [0.1, 0.1, 0.2, 0.2]), \
flip=ssd_Param_2.get("anchor_flip", True), \
clip=ssd_Param_2.get("anchor_clip", True), \
normalizations=ssd_Param_2.get("interlayers_normalizations", []), \
use_batchnorm=ssd_Param_2.get("interlayers_use_batchnorm", True), \
inter_layer_channels=ssd_Param_2.get("interlayers_channels_kernels", []), \
use_focus_loss=ssd_Param_2.get("bboxloss_using_focus_loss", False), \
use_dense_boxes=ssd_Param_2.get('bboxloss_use_dense_boxes', False), \
stage=2,flag_withparamname=False,add_str=add_str,lr_mult=lr_inter_loss)
if train:
loss_param = get_loss_param(normalization=ssd_Param_2.get("bboxloss_normalization", P.Loss.VALID))
mbox_2_layers.append(net[gt_label])
bboxloss_param = {
'gt_labels': ssd_Param_2.get('gt_labels', []),
'target_labels': ssd_Param_2.get('target_labels', []),
'num_classes': ssd_Param_2.get("num_classes", 2),
'alias_id': ssd_Param_2.get("alias_id", 0),
'loc_loss_type': ssd_Param_2.get("bboxloss_loc_loss_type", P.MultiBoxLoss.SMOOTH_L1),
'conf_loss_type': ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.LOGISTIC),
'loc_weight': ssd_Param_2.get("bboxloss_loc_weight", 1),
'conf_weight': ssd_Param_2.get("bboxloss_conf_weight", 1),
'overlap_threshold': ssd_Param_2.get("bboxloss_overlap_threshold", 0.5),
'neg_overlap': ssd_Param_2.get("bboxloss_neg_overlap", 0.5),
'size_threshold': ssd_Param_2.get("bboxloss_size_threshold", 0.0001),
'do_neg_mining': ssd_Param_2.get("bboxloss_do_neg_mining", True),
'neg_pos_ratio': ssd_Param_2.get("bboxloss_neg_pos_ratio", 3),
'using_focus_loss': ssd_Param_2.get("bboxloss_using_focus_loss", False),
'gama': ssd_Param_2.get("bboxloss_focus_gama", 2),
'use_difficult_gt': ssd_Param_2.get("bboxloss_use_difficult_gt", False),
'code_type': ssd_Param_2.get("bboxloss_code_type", P.PriorBox.CENTER_SIZE),
'use_prior_for_matching': True,
'encode_variance_in_target': False,
'flag_noperson': ssd_Param_2.get('flag_noperson', False),
}
net["mbox_2_loss" + add_str] = L.DenseBBoxLoss(*mbox_2_layers, dense_bbox_loss_param=bboxloss_param, \
loss_param=loss_param,
include=dict(phase=caffe_pb2.Phase.Value('TRAIN')), \
propagate_down=[True, True, False, False])
else:
if ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.SOFTMAX:
reshape_name = "mbox_2_conf_reshape" + add_str
net[reshape_name] = L.Reshape(mbox_2_layers[1], \
shape=dict(dim=[0, -1, ssd_Param_2.get("num_classes", 2)]))
softmax_name = "mbox_2_conf_softmax" + add_str
net[softmax_name] = L.Softmax(net[reshape_name], axis=2)
flatten_name = "mbox_2_conf_flatten" + add_str
net[flatten_name] = L.Flatten(net[softmax_name], axis=1)
mbox_2_layers[1] = net[flatten_name]
elif ssd_Param_2.get("bboxloss_conf_loss_type", P.MultiBoxLoss.SOFTMAX) == P.MultiBoxLoss.LOGISTIC:
sigmoid_name = "mbox_2_conf_sigmoid" + add_str
net[sigmoid_name] = L.Sigmoid(mbox_2_layers[1])
mbox_2_layers[1] = net[sigmoid_name]
else:
raise ValueError("Unknown conf loss type.")
det_out_param = {
'num_classes': ssd_Param_2.get("num_classes", 2),
'target_labels': ssd_Param_2.get('detout_target_labels', []),
'alias_id': ssd_Param_2.get("alias_id", 0),
'conf_threshold': ssd_Param_2.get("detout_conf_threshold", 0.01),
'nms_threshold': ssd_Param_2.get("detout_nms_threshold", 0.45),
'size_threshold': ssd_Param_2.get("detout_size_threshold", 0.0001),
'top_k': ssd_Param_2.get("detout_top_k", 30),
'share_location': True,
'code_type': P.PriorBox.CENTER_SIZE,
'background_label_id': 0,
'variance_encoded_in_target': False,
}
net["detection_out_2"] = L.DenseDetOut(*mbox_2_layers, \
detection_output_param=det_out_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
##Step8:Create evaluation part for subnet_16:9
eval_Param = get_eval_Param([1, 3])
det_eval_param = {
'gt_labels': eval_Param.get('eval_gt_labels', []),
'num_classes': eval_Param.get("eval_num_classes", 2),
'evaluate_difficult_gt': eval_Param.get("eval_difficult_gt", False),
'boxsize_threshold': eval_Param.get("eval_boxsize_threshold", [0, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25]),
'iou_threshold': eval_Param.get("eval_iou_threshold", [0.9, 0.75, 0.5]),
'background_label_id': 0,
}
net["det_accu"] = L.DetEval(net["detection_out_2"], net[gt_label], \
detection_evaluate_param=det_eval_param, \
include=dict(phase=caffe_pb2.Phase.Value('TEST')))
return net
def mPose_StageX_Train(net, from_layer="concat_stage1", out_layer="concat_stage2", stage=1, \
mask_vec="vec_mask", mask_heat="heat_mask",label_vec="vec_label", label_heat="heat_label", \
use_3_layers=5, use_1_layers=2, short_cut=True,base_layer="convf", lr=1.0, decay=1.0,num_channels = 128,flag_sigmoid = False,
kernel_size=3,addstrs = '',flag_change_layer=False,flag_hasoutput=True,flag_hasloss=True,id_layer_until=0, relu_layer_until = False,loss_weight=1.0):
kwargs = {'param': [dict(lr_mult=lr, decay_mult=decay), dict(lr_mult=2*lr, decay_mult=0)],
'weight_filler': dict(type='gaussian', std=0.01),
'bias_filler': dict(type='constant', value=0)}
assert from_layer in net.keys()
from1_layer = from_layer
from2_layer = from_layer
if use_1_layers > 0:
numlayers = use_3_layers + 1
else:
numlayers = use_3_layers
for layer in range(1, numlayers):
# vec
if layer == numlayers - 1 and flag_change_layer:
num_channels = 64
conv_vec = "stage{}_conv{}_vec".format(stage,layer) + addstrs
net[conv_vec] = L.Convolution(net[from1_layer], num_output=num_channels, pad=(kernel_size-1)/2, kernel_size=kernel_size, **kwargs)
# heat
conv_heat = "stage{}_conv{}_heat".format(stage,layer) + addstrs
net[conv_heat] = L.Convolution(net[from2_layer], num_output=num_channels, pad=(kernel_size-1)/2, kernel_size=kernel_size, **kwargs)
if layer == id_layer_until:
if relu_layer_until:
relu_vec = "stage{}_relu{}_vec".format(stage, layer)
net[relu_vec] = L.ReLU(net[conv_vec], in_place=True)
relu_heat = "stage{}_relu{}_heat".format(stage,layer)
net[relu_heat] = L.ReLU(net[conv_heat], in_place=True)
return net
else:
return net
else:
relu_vec = "stage{}_relu{}_vec".format(stage, layer)
net[relu_vec] = L.ReLU(net[conv_vec], in_place=True)
from1_layer = relu_vec
relu_heat = "stage{}_relu{}_heat".format(stage, layer)
net[relu_heat] = L.ReLU(net[conv_heat], in_place=True)
from2_layer = relu_heat
if flag_hasoutput:
if use_1_layers > 0:
for layer in range(1, use_1_layers):
# vec
conv_vec = "stage{}_conv{}_vec".format(stage,use_3_layers+layer) + addstrs
net[conv_vec] = L.Convolution(net[from1_layer], num_output=num_channels, pad=0, kernel_size=1, **kwargs)
relu_vec = "stage{}_relu{}_vec".format(stage,use_3_layers+layer) + addstrs
net[relu_vec] = L.ReLU(net[conv_vec], in_place=True)
from1_layer = relu_vec
# heat
conv_heat = "stage{}_conv{}_heat".format(stage,use_3_layers+layer) + addstrs
net[conv_heat] = L.Convolution(net[from2_layer], num_output=num_channels, pad=0, kernel_size=1, **kwargs)
relu_heat = "stage{}_relu{}_heat".format(stage,use_3_layers+layer) + addstrs
net[relu_heat] = L.ReLU(net[conv_heat], in_place=True)
from2_layer = relu_heat
# output
conv_vec = "stage{}_conv{}_vec".format(stage,use_3_layers+use_1_layers) + addstrs
net[conv_vec] = L.Convolution(net[from1_layer], num_output=34, pad=0, kernel_size=1, **kwargs)
conv_heat = "stage{}_conv{}_heat".format(stage,use_3_layers+use_1_layers) + addstrs
net[conv_heat] = L.Convolution(net[from2_layer], num_output=18, pad=0, kernel_size=1, **kwargs)
else:
# output by 3x3
if flag_change_layer:
kernel_size = 3
conv_vec = "stage{}_conv{}_vec".format(stage,use_3_layers) + addstrs
net[conv_vec] = L.Convolution(net[from1_layer], num_output=34, pad=(kernel_size-1)/2, kernel_size=kernel_size, **kwargs)
if flag_sigmoid:
conv_vec_sig = conv_vec + "_sig"
net[conv_vec_sig] = L.Sigmoid(net[conv_vec])
conv_vec = conv_vec_sig
conv_heat = "stage{}_conv{}_heat".format(stage,use_3_layers) + addstrs
net[conv_heat] = L.Convolution(net[from2_layer], num_output=18, pad=(kernel_size-1)/2, kernel_size=kernel_size, **kwargs)
if flag_sigmoid:
conv_heat_sig = conv_heat + "_sig"
net[conv_heat_sig] = L.Sigmoid(net[conv_heat])
conv_heat = conv_heat_sig
if flag_hasloss:
weight_vec = "weight_stage{}_vec".format(stage)
weight_heat = "weight_stage{}_heat".format(stage)
loss_vec = "loss_stage{}_vec".format(stage)
loss_heat = "loss_stage{}_heat".format(stage)
net[weight_vec] = L.Eltwise(net[conv_vec], net[mask_vec], eltwise_param=dict(operation=P.Eltwise.PROD))
net[loss_vec] = L.EuclideanLoss(net[weight_vec], net[label_vec], loss_weight=loss_weight)
net[weight_heat] = L.Eltwise(net[conv_heat], net[mask_heat], eltwise_param=dict(operation=P.Eltwise.PROD))
net[loss_heat] = L.EuclideanLoss(net[weight_heat], net[label_heat], loss_weight=loss_weight)
# Feature concatenation: fuse this stage's vec/heat outputs with the base features for the next stage
if short_cut:
fea_layers = []
fea_layers.append(net[conv_vec])
fea_layers.append(net[conv_heat])
assert base_layer in net.keys()
fea_layers.append(net[base_layer])
net[out_layer] = L.Concat(*fea_layers, axis=1)
return net
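# Hedged aside: pad=(kernel_size-1)//2 above is the usual "same" padding for odd
# kernels at stride 1; a minimal self-check of the conv output-size formula:
def _same_padding_check(in_size=46, kernel_size=7):
    pad = (kernel_size - 1) // 2
    out_size = in_size + 2 * pad - kernel_size + 1  # stride-1 conv output size
    assert out_size == in_size
    return out_size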
| 58.687563 | 172 | 0.60675 | 7,577 | 58,042 | 4.276363 | 0.046324 | 0.053824 | 0.035553 | 0.046664 | 0.866829 | 0.846244 | 0.832603 | 0.802728 | 0.780137 | 0.748164 | 0 | 0.037921 | 0.275335 | 58,042 | 988 | 173 | 58.746964 | 0.732436 | 0.056304 | 0 | 0.604167 | 0 | 0 | 0.164005 | 0.050723 | 0 | 0 | 0 | 0 | 0.003472 | 1 | 0.006944 | false | 0 | 0.021991 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6fab30be8d757ba1e34ae8f3aed2c0679f0092cf | 12,054 | py | Python | agents/models.py | anindex/deepRL-projects | bed03d1f985c8340fc75f715028b632bdce40641 | [
"MIT"
] | null | null | null | agents/models.py | anindex/deepRL-projects | bed03d1f985c8340fc75f715028b632bdce40641 | [
"MIT"
] | null | null | null | agents/models.py | anindex/deepRL-projects | bed03d1f985c8340fc75f715028b632bdce40641 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
def hidden_init(layer):
    # nn.Linear stores weight as (out_features, in_features), so size()[0] reads
    # out_features; this mirrors the common DDPG reference code, which uses it as
    # the fan-in term of the (-1/sqrt(fan_in), 1/sqrt(fan_in)) init range.
    fan_in = layer.weight.data.size()[0]
    lim = 1. / np.sqrt(fan_in)
    return (-lim, lim)
class FCQNetwork(nn.Module):
"""Fully connected DNN Q function which outputs array of action values"""
def __init__(self, state_size, action_size, seed, fc1_units=64, fc2_units=64):
"""Initialize parameters and build model.
Params
======
state_size (int): Dimension of each state
action_size (int): Dimension of each action
seed (int): Random seed
fc1_units (int): Number of nodes in first hidden layer
fc2_units (int): Number of nodes in second hidden layer
"""
super(FCQNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size, fc1_units)
        self.fc2 = nn.Linear(fc1_units, fc2_units)  # second hidden layer; this Q-network takes no action input
self.fc3 = nn.Linear(fc2_units, action_size)
def forward(self, state):
"""Build a network that maps state -> action values."""
x = F.relu(self.fc1(state))
x = F.relu(self.fc2(x))
return self.fc3(x)
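# A minimal smoke test for FCQNetwork; the 4-dim state / 2-action sizes are
# illustrative assumptions, not values taken from this project.
def _demo_fcqnetwork():
    q_net = FCQNetwork(state_size=4, action_size=2, seed=0)
    states = torch.randn(8, 4)      # batch of 8 states
    q_values = q_net(states)        # shape (8, 2): one value per discrete action
    return q_values.argmax(dim=1)   # greedy action per state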
class CNNQNetwork(nn.Module):
"""CNN Q Function, which outputs array of action values"""
def __init__(self, state_size, action_size, seed, conv1_filters=16, conv2_filters=16, conv3_filters=16, fc1_units=200, fc2_units=200):
"""Initialize parameters and build model.
Params
======
state_size (list): Shape of each state image, e.g [3, 28, 28]
action_size (int): Dimension of each action
seed (int): Random seed
conv1_filters (int): Number of filters for first CNN layer
conv2_filters (int): Number of filters for second CNN layer
conv3_filters (int): Number of filters for third CNN layer
fc1_units (int): Number of nodes in first FC layer
fc2_units (int): Number of nodes in second FC layer
"""
super(CNNQNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
self.conv1 = nn.Conv2d(state_size[0], conv1_filters, 3, padding=1)
self.conv2 = nn.Conv2d(conv1_filters, conv2_filters, 3, padding=1)
self.conv3 = nn.Conv2d(conv2_filters, conv3_filters, 3, padding=1)
        self.fc1 = nn.Linear(conv3_filters*state_size[1]*state_size[2], fc1_units)  # flattened conv features feed the first fc layer
self.drop = nn.Dropout(p=0.4)
self.fc2 = nn.Linear(fc1_units, fc2_units)
self.fc3 = nn.Linear(fc2_units, action_size)
def forward(self, state):
"""Build a network that maps state -> action values."""
x = F.relu(self.conv1(state))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.drop(x)
x = F.relu(self.fc2(x))
return self.fc3(x)
class FCCritic(nn.Module):
"""Fully connected DNN Critics Q Model."""
def __init__(self, state_size, action_size, seed, fc1_units=256, fc2_units=256, fc3_units=128):
"""Initialize parameters and build model.
Params
======
state_size (int): Dimension of each state
action_size (int): Dimension of each action
seed (int): Random seed
            fc1_units (int): Number of nodes in first hidden layer
            fc2_units (int): Number of nodes in second hidden layer
            fc3_units (int): Number of nodes in third hidden layer
        """
super(FCCritic, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size, fc1_units)
self.fc2 = nn.Linear(fc1_units + action_size, fc2_units) # action input from second fc layer
self.fc3 = nn.Linear(fc2_units, fc3_units)
self.fc4 = nn.Linear(fc3_units, 1)
self.reset_parameters()
def reset_parameters(self):
self.fc1.weight.data.uniform_(*hidden_init(self.fc1))
self.fc2.weight.data.uniform_(*hidden_init(self.fc2))
self.fc3.weight.data.uniform_(*hidden_init(self.fc3))
self.fc4.weight.data.uniform_(-3e-3, 3e-3)
def forward(self, state, action):
"""Build a network that maps state -> action values."""
xs = F.leaky_relu(self.fc1(state))
x = torch.cat((xs, action), dim=1)
x = F.leaky_relu(self.fc2(x))
x = F.leaky_relu(self.fc3(x))
return self.fc4(x)
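# A minimal sketch showing that FCCritic scores (state, action) pairs; the
# sizes below are illustrative assumptions.
def _demo_fccritic():
    critic = FCCritic(state_size=24, action_size=4, seed=0)
    states = torch.randn(16, 24)
    actions = torch.rand(16, 4) * 2 - 1  # continuous actions in [-1, 1]
    q_values = critic(states, actions)   # shape (16, 1): one Q-value per pair
    return q_values.shape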
class CNNCritic(nn.Module):
"""CNN Critics Q Model. Implement based on DDPG paper 2016"""
def __init__(self, state_size, action_size, seed, conv1_filters=32, conv2_filters=32, conv3_filters=32, fc1_units=256, fc2_units=256):
"""Initialize parameters and build model.
Params
======
state_size (list): Shape of each state image, e.g [3, 28, 28]
action_size (int): Dimension of each action
seed (int): Random seed
conv1_filters (int): Number of filters for first CNN layer
conv2_filters (int): Number of filters for second CNN layer
conv3_filters (int): Number of filters for third CNN layer
fc1_units (int): Number of nodes in first FC layer
fc2_units (int): Number of nodes in second FC layer
"""
super(CNNCritic, self).__init__()
self.seed = torch.manual_seed(seed)
self.conv1 = nn.Conv2d(state_size[0], conv1_filters, 3, padding=1)
self.conv2 = nn.Conv2d(conv1_filters, conv2_filters, 3, padding=1)
self.conv3 = nn.Conv2d(conv2_filters, conv3_filters, 3, padding=1)
self.fc1 = nn.Linear(conv3_filters*state_size[1]*state_size[2] + action_size, fc1_units) # action input from first fc layer
self.drop = nn.Dropout(p=0.4)
self.fc2 = nn.Linear(fc1_units, fc2_units)
self.fc3 = nn.Linear(fc2_units, 1)
def forward(self, state, action):
"""Build a network that maps state -> action values."""
xs = F.relu(self.conv1(state))
xs = F.relu(self.conv2(xs))
xs = F.relu(self.conv3(xs))
        xs = xs.view(xs.size(0), -1)  # flatten; fix: the original read x.size(0) before x was defined
x = torch.cat((xs, action), dim=1)
x = F.relu(self.fc1(x))
x = self.drop(x)
x = F.relu(self.fc2(x))
return self.fc3(x)
class FCPolicy(nn.Module):
"""Actor (Policy) Model."""
def __init__(self, state_size, action_size, seed, fc_units=256):
"""Initialize parameters and build model.
Params
======
state_size (int): Dimension of each state
action_size (int): Dimension of each action
seed (int): Random seed
            fc_units (int): Number of nodes in the hidden layer
"""
super(FCPolicy, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size, fc_units)
self.fc2 = nn.Linear(fc_units, action_size)
self.reset_parameters()
def reset_parameters(self):
self.fc1.weight.data.uniform_(*hidden_init(self.fc1))
self.fc2.weight.data.uniform_(-3e-3, 3e-3)
def forward(self, state):
"""Build an actor (policy) network that maps states -> actions."""
x = F.relu(self.fc1(state))
        return torch.tanh(self.fc2(x))  # torch.tanh; F.tanh is deprecated in recent PyTorch
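# A minimal sketch: the tanh head keeps every action component in (-1, 1), the
# usual convention for DDPG-style continuous control; sizes are assumptions.
def _demo_fcpolicy():
    policy = FCPolicy(state_size=24, action_size=4, seed=0)
    actions = policy(torch.randn(16, 24))
    assert actions.min() > -1 and actions.max() < 1
    return actions.shape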
class CNNPolicy(nn.Module):
"""Actor (Policy) Model."""
def __init__(self, state_size, action_size, seed, conv1_filters=32, conv2_filters=32, conv3_filters=32, fc1_units=200, fc2_units=200):
"""Initialize parameters and build model.
Params
======
state_size (list): Shape of each state image, e.g [3, 28, 28]
action_size (int): Dimension of each action
seed (int): Random seed
conv1_filters (int): Number of filters for first CNN layer
conv2_filters (int): Number of filters for second CNN layer
conv3_filters (int): Number of filters for third CNN layer
fc1_units (int): Number of nodes in first FC layer
fc2_units (int): Number of nodes in second FC layer
"""
super(CNNPolicy, self).__init__()
self.seed = torch.manual_seed(seed)
self.conv1 = nn.Conv2d(state_size[0], conv1_filters, 3, padding=1)
self.conv2 = nn.Conv2d(conv1_filters, conv2_filters, 3, padding=1)
self.conv3 = nn.Conv2d(conv2_filters, conv3_filters, 3, padding=1)
        self.fc1 = nn.Linear(conv3_filters*state_size[1]*state_size[2], fc1_units)  # flattened conv features feed the first fc layer
self.drop = nn.Dropout(p=0.4)
self.fc2 = nn.Linear(fc1_units, fc2_units)
self.fc3 = nn.Linear(fc2_units, action_size)
def forward(self, state):
"""Build an actor (policy) network that maps states -> actions."""
x = F.relu(self.conv1(state))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.drop(x)
x = F.relu(self.fc2(x))
return F.softmax(self.fc3(x), dim=1)
class MAFCPolicy(nn.Module):
    r"""A simple deterministic policy network with batch norms
    Args:
        state_size (int): dimension of the observations given to the network
        action_size (int): dimension of the actions to be computed by the network
        seed (int): random seed
    """
    def __init__(self, state_size, action_size, seed, fc1_units=256, fc2_units=128):
super(MAFCPolicy, self).__init__()
self.seed = torch.manual_seed(seed)
self.bn_input = nn.BatchNorm1d(state_size)
self.fc1 = nn.Linear(state_size, fc1_units)
self.bn_fc1 = nn.BatchNorm1d(fc1_units)
self.fc2 = nn.Linear(fc1_units, fc2_units)
self.bn_fc2 = nn.BatchNorm1d(fc2_units)
self.fc3 = nn.Linear(fc2_units, action_size)
self.reset_parameters()
    def reset_parameters(self):
self.fc1.weight.data.uniform_(*hidden_init(self.fc1))
self.fc2.weight.data.uniform_(*hidden_init(self.fc2))
self.fc3.weight.data.uniform_(-3e-3, 3e-3)
    def forward(self, state):
r"""Forward pass for this deterministic policy
Args:
state (torch.tensor): observation used to decide the action
"""
x = self.bn_input(state)
x = F.relu(self.bn_fc1(self.fc1(x)))
x = F.relu(self.bn_fc2(self.fc2(x)))
        return torch.tanh(self.fc3(x))  # torch.tanh; F.tanh is deprecated in recent PyTorch
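# Hedged note: the BatchNorm1d layers expect a batch dimension and, in training
# mode, a batch size > 1; switch to eval mode for single-state inference, as in
# this illustrative sketch (sizes are assumptions).
def _demo_mafcpolicy():
    policy = MAFCPolicy(state_size=24, action_size=2, seed=0).eval()
    action = policy(torch.randn(1, 24))  # eval mode tolerates batch size 1
    return action.shape                  # torch.Size([1, 2]), values in (-1, 1)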
class MAFCCritic(nn.Module):
    r"""A simple critic Q-network with batch norm to be used for the centralized critics
    Args:
        joint_state_size (int): size of the augmented state representation [o1,o2,...,on]
        joint_action_size (int): size of the augmented action representation [a1,a2,...,an]
    """
    def __init__(self, joint_state_size, joint_action_size, seed, fc1_units=128, fc2_units=128):
super(MAFCCritic, self).__init__()
self.seed = torch.manual_seed(seed)
self.bn_input = nn.BatchNorm1d(joint_state_size)
self.fc1 = nn.Linear(joint_state_size, fc1_units)
self.fc2 = nn.Linear(fc1_units + joint_action_size, fc2_units)
self.fc3 = nn.Linear(fc2_units, 1)
self.reset_parameters()
    def reset_parameters(self):
self.fc1.weight.data.uniform_(*hidden_init(self.fc1))
self.fc2.weight.data.uniform_(*hidden_init(self.fc2))
self.fc3.weight.data.uniform_(-3e-3, 3e-3)
    def forward(self, joint_states, joint_actions):
r"""Forward pass for this critic at a given (x=[o1,...,an],a=[a1...an]) pair
Args:
joint_states (torch.tensor): augmented observation [o1,o2,...,on]
joint_actions (torch.tensor): augmented action [a1,a2,...,an]
"""
xs = self.bn_input(joint_states)
xs = F.relu(self.fc1(xs))
x = torch.cat([xs, joint_actions], dim=1)
x = F.relu(self.fc2(x))
return self.fc3(x) | 38.883871 | 138 | 0.617969 | 1,716 | 12,054 | 4.178322 | 0.095571 | 0.038912 | 0.027615 | 0.025105 | 0.813389 | 0.775732 | 0.755788 | 0.755788 | 0.755788 | 0.739749 | 0 | 0.04 | 0.26373 | 12,054 | 310 | 139 | 38.883871 | 0.767887 | 0.325867 | 0 | 0.557692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134615 | false | 0 | 0.025641 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6fc3e5508b4f0802cc3a81044d7f2b7c9afff5d9 | 2,613 | py | Python | forge_api_client/projects.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | 1 | 2019-07-02T08:32:22.000Z | 2019-07-02T08:32:22.000Z | forge_api_client/projects.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | null | null | null | forge_api_client/projects.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | 2 | 2019-07-04T05:13:42.000Z | 2020-05-09T22:15:05.000Z | from .utils import get_request, post_request, authorized
class Projects:
@authorized
def getProjects(self, hub_id):
url = self.api_url + '/project/v1/hubs/%s/projects' % hub_id
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token)
}
return get_request(url, headers)
@authorized
def getProject(self, hub_id, project_id):
url = self.api_url + '/project/v1/hubs/%s/projects/%s' % (hub_id, project_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token)
}
return get_request(url, headers)
@authorized
def getProjectHub(self, hub_id, project_id):
url = self.api_url + '/project/v1/hubs/%s/projects/%s/hub' % (hub_id, project_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token)
}
return get_request(url, headers)
@authorized
def getTopFolders(self, hub_id, project_id):
url = self.api_url + '/project/v1/hubs/%s/projects/%s/topFolders' % (hub_id, project_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token)
}
return get_request(url, headers)
@authorized
def getDownload(self, project_id, download_id):
url = self.api_url + '/data/v1/projects/%s/downloads/%s' % (project_id, download_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token)
}
return get_request(url, headers)
@authorized
def getJobs(self, project_id, job_id):
url = self.api_url + '/data/v1/projects/%s/jobs/%s' % (project_id, job_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token)
}
return get_request(url, headers)
@authorized
def postDownload(self, project_id, body):
url = self.api_url + '/data/v1/projects/%s/downloads' % (project_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token),
'Content-Type': 'application/vnd.api+json'
}
data = body
return post_request(url, data, headers)
@authorized
def postStorage(self, project_id, body):
url = self.api_url + '/data/v1/projects/%s/storage' % (project_id)
headers = {
'Authorization': '%s %s' % (self.token_type, self.access_token),
'Content-Type': 'application/vnd.api+json'
}
data = body
return post_request(url, data, headers)
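# Projects is written as a mixin: it reads api_url, token_type and access_token
# from self. A minimal hedged sketch of a host class (the host class and the
# Forge base URL below are illustrative, not part of this package):
class _DemoForgeClient(Projects):
    def __init__(self, api_url, token_type, access_token):
        self.api_url = api_url
        self.token_type = token_type
        self.access_token = access_token
# client = _DemoForgeClient('https://developer.api.autodesk.com', 'Bearer', '<access_token>')
# projects = client.getProjects('<hub_id>')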
| 26.663265 | 96 | 0.590892 | 313 | 2,613 | 4.741214 | 0.14377 | 0.084906 | 0.053908 | 0.070081 | 0.816712 | 0.816712 | 0.816712 | 0.816712 | 0.816712 | 0.764151 | 0 | 0.004215 | 0.273632 | 2,613 | 97 | 97 | 26.938144 | 0.777661 | 0 | 0 | 0.580645 | 0 | 0 | 0.180253 | 0.115959 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.016129 | 0 | 0.290323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6fcf9642fc5fe4ee0c1be8619cab8c162f326921 | 32 | py | Python | src/devstacks_devops_terraform/__init__.py | devstacks-dev/devops-terraform | 440527c9b54a7709d57275983445efcbfa2f287f | [
"MIT"
] | null | null | null | src/devstacks_devops_terraform/__init__.py | devstacks-dev/devops-terraform | 440527c9b54a7709d57275983445efcbfa2f287f | [
"MIT"
] | null | null | null | src/devstacks_devops_terraform/__init__.py | devstacks-dev/devops-terraform | 440527c9b54a7709d57275983445efcbfa2f287f | [
"MIT"
] | null | null | null | from .generator import greetings | 32 | 32 | 0.875 | 4 | 32 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d22d20c963b14b58d56a550692f47d82d0155e65 | 85 | py | Python | myapi/tasks/example.py | trinanda/cookiecutter-flask-restful | 126ebe8168c4a4c95eaf2bad81e6027ba2fe6b72 | [
"MIT"
] | null | null | null | myapi/tasks/example.py | trinanda/cookiecutter-flask-restful | 126ebe8168c4a4c95eaf2bad81e6027ba2fe6b72 | [
"MIT"
] | 1 | 2019-03-09T00:17:04.000Z | 2019-03-09T00:17:04.000Z | myapi/tasks/example.py | trinanda/cookiecutter-flask-restful | 126ebe8168c4a4c95eaf2bad81e6027ba2fe6b72 | [
"MIT"
] | null | null | null | from myapi.extensions import celery
@celery.task
def dummy_task():
return "OK"
| 12.142857 | 35 | 0.729412 | 12 | 85 | 5.083333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 85 | 6 | 36 | 14.166667 | 0.871429 | 0 | 0 | 0 | 0 | 0 | 0.023529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d233dec1b30f4a8e85e6d54ea505355d0cdde812 | 35 | py | Python | rankedlist/__init__.py | xingrongtech/RankedList | f2ffa427f689a2095752238feda49336c4b9fe2c | [
"MIT"
] | null | null | null | rankedlist/__init__.py | xingrongtech/RankedList | f2ffa427f689a2095752238feda49336c4b9fe2c | [
"MIT"
] | null | null | null | rankedlist/__init__.py | xingrongtech/RankedList | f2ffa427f689a2095752238feda49336c4b9fe2c | [
"MIT"
] | null | null | null | from .rankedlist import RankedList
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2587a986995ed912f1eed154d44a6518d16042d | 44 | py | Python | apps/oozie/src/oozie/views/__init__.py | digideskio/hortonworks-sandbox | dd8e95c91faee3daa094707baeb94c3953b41efa | [
"Apache-2.0"
] | 19 | 2015-05-01T19:59:03.000Z | 2021-12-09T08:03:16.000Z | apps/oozie/src/oozie/views/__init__.py | digideskio/hortonworks-sandbox | dd8e95c91faee3daa094707baeb94c3953b41efa | [
"Apache-2.0"
] | 1 | 2018-01-03T15:26:49.000Z | 2018-01-03T15:26:49.000Z | apps/oozie/src/oozie/views/__init__.py | hortonworks/hortonworks-sandbox | dd8e95c91faee3daa094707baeb94c3953b41efa | [
"Apache-2.0"
] | 30 | 2015-03-25T19:40:07.000Z | 2021-05-28T22:59:26.000Z | from dashboard import *
from editor import * | 22 | 23 | 0.795455 | 6 | 44 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 2 | 24 | 22 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d25dc18cacc12af632c66ef562099866d2949cba | 44 | py | Python | src/erwin/t1_map/__init__.py | lamyj/erwin | a2a7c945827a54c1e89dbedb31c82e34363bf7d1 | [
"MIT"
] | 2 | 2021-11-09T10:57:52.000Z | 2022-02-18T09:55:42.000Z | src/erwin/t1_map/__init__.py | lamyj/erwin | a2a7c945827a54c1e89dbedb31c82e34363bf7d1 | [
"MIT"
] | null | null | null | src/erwin/t1_map/__init__.py | lamyj/erwin | a2a7c945827a54c1e89dbedb31c82e34363bf7d1 | [
"MIT"
] | null | null | null | """ T₁/R₁ mapping
"""
from .vfa import VFA
| 8.8 | 20 | 0.613636 | 7 | 44 | 3.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.204545 | 44 | 4 | 21 | 11 | 0.714286 | 0.295455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d27af1dbe5a3ae87ff10c25d86fd49e8563d8e45 | 77 | py | Python | deliravision/torch/models/gans/pixel_da/__init__.py | delira-dev/vision_torch | d944aa67d319bd63a2add5cb89e8308413943de6 | [
"BSD-2-Clause"
] | 4 | 2019-08-03T09:56:50.000Z | 2019-09-05T09:32:06.000Z | deliravision/torch/models/gans/pixel_da/__init__.py | delira-dev/vision_torch | d944aa67d319bd63a2add5cb89e8308413943de6 | [
"BSD-2-Clause"
] | 23 | 2019-08-03T14:16:47.000Z | 2019-10-22T10:15:10.000Z | deliravision/torch/models/gans/pixel_da/__init__.py | delira-dev/vision_torch | d944aa67d319bd63a2add5cb89e8308413943de6 | [
"BSD-2-Clause"
] | null | null | null | from deliravision.models.gans.pixel_da.pixel_da import PixelDomainAdaptation
| 38.5 | 76 | 0.896104 | 10 | 77 | 6.7 | 0.8 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 77 | 1 | 77 | 77 | 0.917808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96351faa75c583ab0a7fda7f037b07c89953a0ab | 48 | py | Python | LibSerial7/__init__.py | jaemilton/LibSerial7 | f483cd7474cd494281e384d2e74d65652ca581d6 | [
"MIT"
] | null | null | null | LibSerial7/__init__.py | jaemilton/LibSerial7 | f483cd7474cd494281e384d2e74d65652ca581d6 | [
"MIT"
] | null | null | null | LibSerial7/__init__.py | jaemilton/LibSerial7 | f483cd7474cd494281e384d2e74d65652ca581d6 | [
"MIT"
] | null | null | null | from LibSerial7.LibSerial7 import Serial, Uart
| 16 | 46 | 0.833333 | 6 | 48 | 6.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.125 | 48 | 2 | 47 | 24 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9653cf8a45a231964f80b1ed559a8192aeae2d9c | 96 | py | Python | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/22/40/84/e3c3548f23bbe01f30ab9e9cf756b205c886a9ecd1ffe0b6d8d16f3412 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
968d4f71dbcb3a79e231f5cb9f7d498d449d09d8 | 124 | py | Python | python/testData/deprecation/deprecatedImport.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/deprecation/deprecatedImport.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/deprecation/deprecatedImport.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | import <warning descr="the deprecated module is deprecated; use a non-deprecated module instead">deprecatedModule</warning>
| 62 | 123 | 0.822581 | 16 | 124 | 6.375 | 0.75 | 0.313725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 124 | 1 | 124 | 124 | 0.910714 | 0 | 0 | 0 | 0 | 0 | 0.580645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 1 | null | null | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
969d634b194cdf37c7a28cd729f819bd809a449d | 81,940 | py | Python | sdk/python/pulumi_alicloud/cms/outputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/cms/outputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/cms/outputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'AlarmEscalationsCritical',
'AlarmEscalationsInfo',
'AlarmEscalationsWarn',
'GroupMetricRuleEscalations',
'GroupMetricRuleEscalationsCritical',
'GroupMetricRuleEscalationsInfo',
'GroupMetricRuleEscalationsWarn',
'MetricRuleTemplateAlertTemplate',
'MetricRuleTemplateAlertTemplateEscalations',
'MetricRuleTemplateAlertTemplateEscalationsCritical',
'MetricRuleTemplateAlertTemplateEscalationsInfo',
'MetricRuleTemplateAlertTemplateEscalationsWarn',
'MonitorGroupInstancesInstance',
'SiteMonitorIspCity',
'GetAlarmContactGroupsGroupResult',
'GetAlarmContactsContactResult',
'GetGroupMetricRulesRuleResult',
'GetGroupMetricRulesRuleEscalationResult',
'GetGroupMetricRulesRuleEscalationCriticalResult',
'GetGroupMetricRulesRuleEscalationInfoResult',
'GetGroupMetricRulesRuleEscalationWarnResult',
'GetMetricRuleTemplatesTemplateResult',
'GetMetricRuleTemplatesTemplateAlertTemplateResult',
'GetMetricRuleTemplatesTemplateAlertTemplateEscalationResult',
'GetMetricRuleTemplatesTemplateAlertTemplateEscalationCriticalResult',
'GetMetricRuleTemplatesTemplateAlertTemplateEscalationInfoResult',
'GetMetricRuleTemplatesTemplateAlertTemplateEscalationWarnResult',
'GetMonitorGroupInstancesInstanceResult',
'GetMonitorGroupInstancesInstanceInstanceResult',
'GetMonitorGroupsGroupResult',
]
@pulumi.output_type
class AlarmEscalationsCritical(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in AlarmEscalationsCritical. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
AlarmEscalationsCritical.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
AlarmEscalationsCritical.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[int] = None):
"""
:param str comparison_operator: Critical level alarm comparison operator. Valid values: ["<=", "<", ">", ">=", "==", "!="]. Default to "==".
:param str statistics: Critical level alarm statistics method. It must be consistent with that defined for metrics. Valid values: ["Average", "Minimum", "Maximum", "Value", "ErrorCodeMaximum", "Sum", "Count"]. Default to "Average".
:param str threshold: Critical level alarm threshold value, which must be a numeric value currently.
:param int times: Critical level alarm retry times. Default to 3.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
Critical level alarm comparison operator. Valid values: ["<=", "<", ">", ">=", "==", "!="]. Default to "==".
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
Critical level alarm statistics method. It must be consistent with that defined for metrics. Valid values: ["Average", "Minimum", "Maximum", "Value", "ErrorCodeMaximum", "Sum", "Count"]. Default to "Average".
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
Critical level alarm threshold value, which must be a numeric value currently.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[int]:
"""
Critical level alarm retry times. Default to 3.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class AlarmEscalationsInfo(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in AlarmEscalationsInfo. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
AlarmEscalationsInfo.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
AlarmEscalationsInfo.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[int] = None):
"""
        :param str comparison_operator: Info level alarm comparison operator. Valid values: ["<=", "<", ">", ">=", "==", "!="]. Default to "==".
        :param str statistics: Info level alarm statistics method. It must be consistent with that defined for metrics. Valid values: ["Average", "Minimum", "Maximum", "Value", "ErrorCodeMaximum", "Sum", "Count"]. Default to "Average".
        :param str threshold: Info level alarm threshold value, which must be a numeric value currently.
        :param int times: Info level alarm retry times. Default to 3.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
        Info level alarm comparison operator. Valid values: ["<=", "<", ">", ">=", "==", "!="]. Default to "==".
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
        Info level alarm statistics method. It must be consistent with that defined for metrics. Valid values: ["Average", "Minimum", "Maximum", "Value", "ErrorCodeMaximum", "Sum", "Count"]. Default to "Average".
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
        Info level alarm threshold value, which must be a numeric value currently.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[int]:
"""
        Info level alarm retry times. Default to 3.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class AlarmEscalationsWarn(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in AlarmEscalationsWarn. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
AlarmEscalationsWarn.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
AlarmEscalationsWarn.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[int] = None):
"""
        :param str comparison_operator: Warn level alarm comparison operator. Valid values: ["<=", "<", ">", ">=", "==", "!="]. Default to "==".
        :param str statistics: Warn level alarm statistics method. It must be consistent with that defined for metrics. Valid values: ["Average", "Minimum", "Maximum", "Value", "ErrorCodeMaximum", "Sum", "Count"]. Default to "Average".
        :param str threshold: Warn level alarm threshold value, which must be a numeric value currently.
        :param int times: Warn level alarm retry times. Default to 3.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
        Warn level alarm comparison operator. Valid values: ["<=", "<", ">", ">=", "==", "!="]. Default to "==".
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
        Warn level alarm statistics method. It must be consistent with that defined for metrics. Valid values: ["Average", "Minimum", "Maximum", "Value", "ErrorCodeMaximum", "Sum", "Count"]. Default to "Average".
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
        Warn level alarm threshold value, which must be a numeric value currently.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[int]:
"""
        Warn level alarm retry times. Default to 3.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GroupMetricRuleEscalations(dict):
def __init__(__self__, *,
critical: Optional['outputs.GroupMetricRuleEscalationsCritical'] = None,
info: Optional['outputs.GroupMetricRuleEscalationsInfo'] = None,
warn: Optional['outputs.GroupMetricRuleEscalationsWarn'] = None):
"""
:param 'GroupMetricRuleEscalationsCriticalArgs' critical: The critical level.
:param 'GroupMetricRuleEscalationsInfoArgs' info: The info level.
:param 'GroupMetricRuleEscalationsWarnArgs' warn: The warn level.
"""
if critical is not None:
pulumi.set(__self__, "critical", critical)
if info is not None:
pulumi.set(__self__, "info", info)
if warn is not None:
pulumi.set(__self__, "warn", warn)
@property
@pulumi.getter
def critical(self) -> Optional['outputs.GroupMetricRuleEscalationsCritical']:
"""
The critical level.
"""
return pulumi.get(self, "critical")
@property
@pulumi.getter
def info(self) -> Optional['outputs.GroupMetricRuleEscalationsInfo']:
"""
The info level.
"""
return pulumi.get(self, "info")
@property
@pulumi.getter
def warn(self) -> Optional['outputs.GroupMetricRuleEscalationsWarn']:
"""
The warn level.
"""
return pulumi.get(self, "warn")
@pulumi.output_type
class GroupMetricRuleEscalationsCritical(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in GroupMetricRuleEscalationsCritical. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
GroupMetricRuleEscalationsCritical.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
GroupMetricRuleEscalationsCritical.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[int] = None):
"""
        :param str comparison_operator: The comparison operator of the threshold for critical-level alerts.
        :param str statistics: The statistical aggregation method for critical-level alerts.
        :param str threshold: The threshold for critical-level alerts.
        :param int times: The consecutive number of times for which the metric value is measured before a critical-level alert is triggered.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
        The comparison operator of the threshold for critical-level alerts.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
        The statistical aggregation method for critical-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
        The threshold for critical-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[int]:
"""
        The consecutive number of times for which the metric value is measured before a critical-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GroupMetricRuleEscalationsInfo(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in GroupMetricRuleEscalationsInfo. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
GroupMetricRuleEscalationsInfo.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
GroupMetricRuleEscalationsInfo.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[int] = None):
"""
        :param str comparison_operator: The comparison operator of the threshold for info-level alerts.
        :param str statistics: The statistical aggregation method for info-level alerts.
        :param str threshold: The threshold for info-level alerts.
        :param int times: The consecutive number of times for which the metric value is measured before an info-level alert is triggered.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
        The comparison operator of the threshold for info-level alerts.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
        The statistical aggregation method for info-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
        The threshold for info-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[int]:
"""
        The consecutive number of times for which the metric value is measured before an info-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GroupMetricRuleEscalationsWarn(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in GroupMetricRuleEscalationsWarn. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
GroupMetricRuleEscalationsWarn.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
GroupMetricRuleEscalationsWarn.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[int] = None):
"""
:param str comparison_operator: The comparison operator of the threshold for warn-level alerts.
:param str statistics: The statistical aggregation method for warn-level alerts.
:param str threshold: The threshold for warn-level alerts.
:param int times: The consecutive number of times for which the metric value is measured before a warn-level alert is triggered.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
The comparison operator of the threshold for warn-level alerts.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
The statistical aggregation method for warn-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
The threshold for warn-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[int]:
"""
The consecutive number of times for which the metric value is measured before a warn-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class MetricRuleTemplateAlertTemplate(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "metricName":
suggest = "metric_name"
elif key == "ruleName":
suggest = "rule_name"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MetricRuleTemplateAlertTemplate. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MetricRuleTemplateAlertTemplate.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
MetricRuleTemplateAlertTemplate.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
category: str,
metric_name: str,
namespace: str,
rule_name: str,
escalations: Optional['outputs.MetricRuleTemplateAlertTemplateEscalations'] = None,
webhook: Optional[str] = None):
"""
:param str category: The abbreviation of the service name. Valid values: `ecs`, `rds`, `ads`, `slb`, `vpc`, `apigateway`, `cdn`, `cs`, `dcdn`, `ddos`, `eip`, `elasticsearch`, `emr`, `ess`, `hbase`, `iot_edge`, `kvstore_sharding`, `kvstore_splitrw`, `kvstore_standard`, `memcache`, `mns`, `mongodb`, `mongodb_cluster`, `mongodb_sharding`, `mq_topic`, `ocs`, `opensearch`, `oss`, `polardb`, `petadata`, `scdn`, `sharebandwidthpackages`, `sls`, `vpn`.
:param str metric_name: The name of the metric.
:param str namespace: The namespace of the service.
:param str rule_name: The name of the alert rule.
:param 'MetricRuleTemplateAlertTemplateEscalationsArgs' escalations: The information about the trigger condition based on the alert level. See the following `Block escalations`.
:param str webhook: The callback URL to which a POST request is sent when an alert is triggered based on the alert rule.
"""
pulumi.set(__self__, "category", category)
pulumi.set(__self__, "metric_name", metric_name)
pulumi.set(__self__, "namespace", namespace)
pulumi.set(__self__, "rule_name", rule_name)
if escalations is not None:
pulumi.set(__self__, "escalations", escalations)
if webhook is not None:
pulumi.set(__self__, "webhook", webhook)
@property
@pulumi.getter
def category(self) -> str:
"""
The abbreviation of the service name. Valid values: `ecs`, `rds`, `ads`, `slb`, `vpc`, `apigateway`, `cdn`, `cs`, `dcdn`, `ddos`, `eip`, `elasticsearch`, `emr`, `ess`, `hbase`, `iot_edge`, `kvstore_sharding`, `kvstore_splitrw`, `kvstore_standard`, `memcache`, `mns`, `mongodb`, `mongodb_cluster`, `mongodb_sharding`, `mq_topic`, `ocs`, `opensearch`, `oss`, `polardb`, `petadata`, `scdn`, `sharebandwidthpackages`, `sls`, `vpn`.
"""
return pulumi.get(self, "category")
@property
@pulumi.getter(name="metricName")
def metric_name(self) -> str:
"""
The name of the metric.
"""
return pulumi.get(self, "metric_name")
@property
@pulumi.getter
def namespace(self) -> str:
"""
The namespace of the service.
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="ruleName")
def rule_name(self) -> str:
"""
The name of the alert rule.
"""
return pulumi.get(self, "rule_name")
@property
@pulumi.getter
def escalations(self) -> Optional['outputs.MetricRuleTemplateAlertTemplateEscalations']:
"""
The information about the trigger condition based on the alert level. See the following `Block escalations`.
"""
return pulumi.get(self, "escalations")
@property
@pulumi.getter
def webhook(self) -> Optional[str]:
"""
The callback URL to which a POST request is sent when an alert is triggered based on the alert rule.
"""
return pulumi.get(self, "webhook")
@pulumi.output_type
class MetricRuleTemplateAlertTemplateEscalations(dict):
def __init__(__self__, *,
critical: Optional['outputs.MetricRuleTemplateAlertTemplateEscalationsCritical'] = None,
info: Optional['outputs.MetricRuleTemplateAlertTemplateEscalationsInfo'] = None,
warn: Optional['outputs.MetricRuleTemplateAlertTemplateEscalationsWarn'] = None):
"""
:param 'MetricRuleTemplateAlertTemplateEscalationsCriticalArgs' critical: The condition for triggering critical-level alerts. See the following `Block critical`.
:param 'MetricRuleTemplateAlertTemplateEscalationsInfoArgs' info: The condition for triggering info-level alerts. See the following `Block info`.
:param 'MetricRuleTemplateAlertTemplateEscalationsWarnArgs' warn: The condition for triggering warn-level alerts. See the following `Block warn`.
"""
if critical is not None:
pulumi.set(__self__, "critical", critical)
if info is not None:
pulumi.set(__self__, "info", info)
if warn is not None:
pulumi.set(__self__, "warn", warn)
@property
@pulumi.getter
def critical(self) -> Optional['outputs.MetricRuleTemplateAlertTemplateEscalationsCritical']:
"""
The condition for triggering critical-level alerts. See the following `Block critical`.
"""
return pulumi.get(self, "critical")
@property
@pulumi.getter
def info(self) -> Optional['outputs.MetricRuleTemplateAlertTemplateEscalationsInfo']:
"""
The condition for triggering info-level alerts. See the following `Block info`.
"""
return pulumi.get(self, "info")
@property
@pulumi.getter
def warn(self) -> Optional['outputs.MetricRuleTemplateAlertTemplateEscalationsWarn']:
"""
The condition for triggering warn-level alerts. See the following `Block warn`.
"""
return pulumi.get(self, "warn")
@pulumi.output_type
class MetricRuleTemplateAlertTemplateEscalationsCritical(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MetricRuleTemplateAlertTemplateEscalationsCritical. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MetricRuleTemplateAlertTemplateEscalationsCritical.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
MetricRuleTemplateAlertTemplateEscalationsCritical.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[str] = None):
"""
:param str comparison_operator: The comparison operator of the threshold for critical-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
:param str statistics: The statistical aggregation method for critical-level alerts.
:param str threshold: The threshold for critical-level alerts.
:param str times: The consecutive number of times for which the metric value is measured before a critical-level alert is triggered.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
The comparison operator of the threshold for critical-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
The statistical aggregation method for critical-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
The threshold for critical-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[str]:
"""
The consecutive number of times for which the metric value is measured before a critical-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class MetricRuleTemplateAlertTemplateEscalationsInfo(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MetricRuleTemplateAlertTemplateEscalationsInfo. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MetricRuleTemplateAlertTemplateEscalationsInfo.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
MetricRuleTemplateAlertTemplateEscalationsInfo.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[str] = None):
"""
        :param str comparison_operator: The comparison operator of the threshold for info-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
        :param str statistics: The statistical aggregation method for info-level alerts.
        :param str threshold: The threshold for info-level alerts.
        :param str times: The consecutive number of times for which the metric value is measured before an info-level alert is triggered.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
        The comparison operator of the threshold for info-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
        The statistical aggregation method for info-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
        The threshold for info-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[str]:
"""
        The consecutive number of times for which the metric value is measured before an info-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class MetricRuleTemplateAlertTemplateEscalationsWarn(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "comparisonOperator":
suggest = "comparison_operator"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MetricRuleTemplateAlertTemplateEscalationsWarn. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MetricRuleTemplateAlertTemplateEscalationsWarn.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
MetricRuleTemplateAlertTemplateEscalationsWarn.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
comparison_operator: Optional[str] = None,
statistics: Optional[str] = None,
threshold: Optional[str] = None,
times: Optional[str] = None):
"""
        :param str comparison_operator: The comparison operator of the threshold for warn-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
        :param str statistics: The statistical aggregation method for warn-level alerts.
        :param str threshold: The threshold for warn-level alerts.
        :param str times: The consecutive number of times for which the metric value is measured before a warn-level alert is triggered.
"""
if comparison_operator is not None:
pulumi.set(__self__, "comparison_operator", comparison_operator)
if statistics is not None:
pulumi.set(__self__, "statistics", statistics)
if threshold is not None:
pulumi.set(__self__, "threshold", threshold)
if times is not None:
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> Optional[str]:
"""
        The comparison operator of the threshold for warn-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> Optional[str]:
"""
        The statistical aggregation method for warn-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> Optional[str]:
"""
        The threshold for warn-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> Optional[str]:
"""
        The consecutive number of times for which the metric value is measured before a warn-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class MonitorGroupInstancesInstance(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "instanceId":
suggest = "instance_id"
elif key == "instanceName":
suggest = "instance_name"
elif key == "regionId":
suggest = "region_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MonitorGroupInstancesInstance. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MonitorGroupInstancesInstance.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
MonitorGroupInstancesInstance.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
category: str,
instance_id: str,
instance_name: str,
region_id: str):
"""
:param str category: The category of instance.
:param str instance_id: The id of instance.
:param str instance_name: The name of instance.
:param str region_id: The region id of instance.
"""
pulumi.set(__self__, "category", category)
pulumi.set(__self__, "instance_id", instance_id)
pulumi.set(__self__, "instance_name", instance_name)
pulumi.set(__self__, "region_id", region_id)
@property
@pulumi.getter
def category(self) -> str:
"""
The category of instance.
"""
return pulumi.get(self, "category")
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> str:
"""
The id of instance.
"""
return pulumi.get(self, "instance_id")
@property
@pulumi.getter(name="instanceName")
def instance_name(self) -> str:
"""
The name of instance.
"""
return pulumi.get(self, "instance_name")
@property
@pulumi.getter(name="regionId")
def region_id(self) -> str:
"""
The region ID of the instance.
"""
return pulumi.get(self, "region_id")
@pulumi.output_type
class SiteMonitorIspCity(dict):
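# An ISP/city code pair identifying one site-monitor detection point (both fields are code strings).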
def __init__(__self__, *,
city: str,
isp: str):
pulumi.set(__self__, "city", city)
pulumi.set(__self__, "isp", isp)
@property
@pulumi.getter
def city(self) -> str:
return pulumi.get(self, "city")
@property
@pulumi.getter
def isp(self) -> str:
return pulumi.get(self, "isp")
@pulumi.output_type
class GetAlarmContactGroupsGroupResult(dict):
def __init__(__self__, *,
alarm_contact_group_name: str,
contacts: Sequence[str],
describe: str,
enable_subscribed: bool,
id: str):
"""
:param str alarm_contact_group_name: The name of the Alarm Contact Group.
:param Sequence[str] contacts: The alarm contacts in the alarm group.
:param str describe: The description of the Alarm Group.
:param bool enable_subscribed: Indicates whether the alarm group subscribes to weekly reports.
:param str id: The ID of the CMS.
"""
pulumi.set(__self__, "alarm_contact_group_name", alarm_contact_group_name)
pulumi.set(__self__, "contacts", contacts)
pulumi.set(__self__, "describe", describe)
pulumi.set(__self__, "enable_subscribed", enable_subscribed)
pulumi.set(__self__, "id", id)
@property
@pulumi.getter(name="alarmContactGroupName")
def alarm_contact_group_name(self) -> str:
"""
The name of the Alarm Contact Group.
"""
return pulumi.get(self, "alarm_contact_group_name")
@property
@pulumi.getter
def contacts(self) -> Sequence[str]:
"""
The alarm contacts in the alarm group.
"""
return pulumi.get(self, "contacts")
@property
@pulumi.getter
def describe(self) -> str:
"""
The description of the Alarm Group.
"""
return pulumi.get(self, "describe")
@property
@pulumi.getter(name="enableSubscribed")
def enable_subscribed(self) -> bool:
"""
Indicates whether the alarm group subscribes to weekly reports.
"""
return pulumi.get(self, "enable_subscribed")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the CMS.
"""
return pulumi.get(self, "id")
@pulumi.output_type
class GetAlarmContactsContactResult(dict):
def __init__(__self__, *,
alarm_contact_name: str,
channels_aliim: str,
channels_ding_web_hook: str,
channels_mail: str,
channels_sms: str,
channels_state_aliim: str,
channels_state_ding_web_hook: str,
channels_state_mail: str,
channels_status_sms: str,
contact_groups: Sequence[str],
describe: str,
id: str,
lang: str):
"""
:param str alarm_contact_name: The name of the alarm contact.
:param str channels_aliim: The TradeManager ID of the alarm contact.
:param str channels_ding_web_hook: The webhook URL of the DingTalk chatbot.
:param str channels_mail: The email address of the alarm contact.
:param str channels_sms: The phone number of the alarm contact.
:param str channels_state_aliim: Indicates whether the TradeManager ID is valid.
:param str channels_state_ding_web_hook: Indicates whether the DingTalk chatbot is normal.
:param str channels_state_mail: The status of the email address.
:param str channels_status_sms: The status of the phone number.
:param Sequence[str] contact_groups: The alert groups to which the alarm contact is added.
:param str describe: The description of the alarm contact.
:param str id: The ID of the alarm contact.
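:param str lang: The language type of the alarm.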
"""
pulumi.set(__self__, "alarm_contact_name", alarm_contact_name)
pulumi.set(__self__, "channels_aliim", channels_aliim)
pulumi.set(__self__, "channels_ding_web_hook", channels_ding_web_hook)
pulumi.set(__self__, "channels_mail", channels_mail)
pulumi.set(__self__, "channels_sms", channels_sms)
pulumi.set(__self__, "channels_state_aliim", channels_state_aliim)
pulumi.set(__self__, "channels_state_ding_web_hook", channels_state_ding_web_hook)
pulumi.set(__self__, "channels_state_mail", channels_state_mail)
pulumi.set(__self__, "channels_status_sms", channels_status_sms)
pulumi.set(__self__, "contact_groups", contact_groups)
pulumi.set(__self__, "describe", describe)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "lang", lang)
@property
@pulumi.getter(name="alarmContactName")
def alarm_contact_name(self) -> str:
"""
The name of the alarm contact.
"""
return pulumi.get(self, "alarm_contact_name")
@property
@pulumi.getter(name="channelsAliim")
def channels_aliim(self) -> str:
"""
The TradeManager ID of the alarm contact.
"""
return pulumi.get(self, "channels_aliim")
@property
@pulumi.getter(name="channelsDingWebHook")
def channels_ding_web_hook(self) -> str:
"""
The webhook URL of the DingTalk chatbot.
"""
return pulumi.get(self, "channels_ding_web_hook")
@property
@pulumi.getter(name="channelsMail")
def channels_mail(self) -> str:
"""
The email address of the alarm contact.
"""
return pulumi.get(self, "channels_mail")
@property
@pulumi.getter(name="channelsSms")
def channels_sms(self) -> str:
"""
The phone number of the alarm contact.
"""
return pulumi.get(self, "channels_sms")
@property
@pulumi.getter(name="channelsStateAliim")
def channels_state_aliim(self) -> str:
"""
Indicates whether the TradeManager ID is valid.
"""
return pulumi.get(self, "channels_state_aliim")
@property
@pulumi.getter(name="channelsStateDingWebHook")
def channels_state_ding_web_hook(self) -> str:
"""
Indicates whether the DingTalk chatbot is normal.
"""
return pulumi.get(self, "channels_state_ding_web_hook")
@property
@pulumi.getter(name="channelsStateMail")
def channels_state_mail(self) -> str:
"""
The status of the email address.
"""
return pulumi.get(self, "channels_state_mail")
@property
@pulumi.getter(name="channelsStatusSms")
def channels_status_sms(self) -> str:
"""
The status of the phone number.
"""
return pulumi.get(self, "channels_status_sms")
@property
@pulumi.getter(name="contactGroups")
def contact_groups(self) -> Sequence[str]:
"""
The alert groups to which the alarm contact is added.
"""
return pulumi.get(self, "contact_groups")
@property
@pulumi.getter
def describe(self) -> str:
"""
The description of the alarm contact.
"""
return pulumi.get(self, "describe")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the alarm contact.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def lang(self) -> str:
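"""
The language type of the alarm.
"""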
return pulumi.get(self, "lang")
@pulumi.output_type
class GetGroupMetricRulesRuleResult(dict):
def __init__(__self__, *,
contact_groups: str,
dimensions: str,
effective_interval: str,
email_subject: str,
enable_state: bool,
escalations: Sequence['outputs.GetGroupMetricRulesRuleEscalationResult'],
group_id: str,
group_metric_rule_name: str,
id: str,
metric_name: str,
namespace: str,
no_effective_interval: str,
period: int,
resources: str,
rule_id: str,
silence_time: int,
source_type: str,
status: str,
webhook: str):
"""
:param str contact_groups: Alarm contact group.
:param str dimensions: The dimensions that specify the resources to be associated with the alert rule.
:param str effective_interval: The time period during which the alert rule is effective.
:param str email_subject: The subject of the alert notification email.
:param bool enable_state: Indicates whether the alert rule is enabled.
:param Sequence['GetGroupMetricRulesRuleEscalationArgs'] escalations: Alarm level.
:param str group_id: The ID of the application group.
:param str group_metric_rule_name: The name of the alert rule.
:param str id: The ID of the Group Metric Rule.
:param str metric_name: The name of the metric.
:param str namespace: The namespace of the service.
:param str no_effective_interval: The time period during which the alert rule is ineffective.
:param int period: The aggregation period of the monitoring data. Unit: seconds. The value is an integral multiple of 60. Default value: `300`.
:param str resources: The resources that are associated with the alert rule.
:param str rule_id: The ID of the alert rule.
:param int silence_time: The mute period during which new alerts are not reported even if the alert trigger conditions are met. Unit: seconds. Default value: `86400`, which is equivalent to one day.
:param str source_type: The type of the alert rule. The value is fixed to METRIC, indicating an alert rule for time series metrics.
:param str status: The status of the Group Metric Rule.
:param str webhook: The callback URL.
"""
pulumi.set(__self__, "contact_groups", contact_groups)
pulumi.set(__self__, "dimensions", dimensions)
pulumi.set(__self__, "effective_interval", effective_interval)
pulumi.set(__self__, "email_subject", email_subject)
pulumi.set(__self__, "enable_state", enable_state)
pulumi.set(__self__, "escalations", escalations)
pulumi.set(__self__, "group_id", group_id)
pulumi.set(__self__, "group_metric_rule_name", group_metric_rule_name)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "metric_name", metric_name)
pulumi.set(__self__, "namespace", namespace)
pulumi.set(__self__, "no_effective_interval", no_effective_interval)
pulumi.set(__self__, "period", period)
pulumi.set(__self__, "resources", resources)
pulumi.set(__self__, "rule_id", rule_id)
pulumi.set(__self__, "silence_time", silence_time)
pulumi.set(__self__, "source_type", source_type)
pulumi.set(__self__, "status", status)
pulumi.set(__self__, "webhook", webhook)
@property
@pulumi.getter(name="contactGroups")
def contact_groups(self) -> str:
"""
Alarm contact group.
"""
return pulumi.get(self, "contact_groups")
@property
@pulumi.getter
def dimensions(self) -> str:
"""
The dimensions that specify the resources to be associated with the alert rule.
"""
return pulumi.get(self, "dimensions")
@property
@pulumi.getter(name="effectiveInterval")
def effective_interval(self) -> str:
"""
The time period during which the alert rule is effective.
"""
return pulumi.get(self, "effective_interval")
@property
@pulumi.getter(name="emailSubject")
def email_subject(self) -> str:
"""
The subject of the alert notification email.
"""
return pulumi.get(self, "email_subject")
@property
@pulumi.getter(name="enableState")
def enable_state(self) -> bool:
"""
Indicates whether the alert rule is enabled.
"""
return pulumi.get(self, "enable_state")
@property
@pulumi.getter
def escalations(self) -> Sequence['outputs.GetGroupMetricRulesRuleEscalationResult']:
"""
Alarm level.
"""
return pulumi.get(self, "escalations")
@property
@pulumi.getter(name="groupId")
def group_id(self) -> str:
"""
The ID of the application group.
"""
return pulumi.get(self, "group_id")
@property
@pulumi.getter(name="groupMetricRuleName")
def group_metric_rule_name(self) -> str:
"""
The name of the alert rule.
"""
return pulumi.get(self, "group_metric_rule_name")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the Group Metric Rule.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="metricName")
def metric_name(self) -> str:
"""
The name of the metric.
"""
return pulumi.get(self, "metric_name")
@property
@pulumi.getter
def namespace(self) -> str:
"""
The namespace of the service.
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="noEffectiveInterval")
def no_effective_interval(self) -> str:
"""
The time period during which the alert rule is ineffective.
"""
return pulumi.get(self, "no_effective_interval")
@property
@pulumi.getter
def period(self) -> int:
"""
The aggregation period of the monitoring data. Unit: seconds. The value is an integral multiple of 60. Default value: `300`.
"""
return pulumi.get(self, "period")
@property
@pulumi.getter
def resources(self) -> str:
"""
The resources that are associated with the alert rule.
"""
return pulumi.get(self, "resources")
@property
@pulumi.getter(name="ruleId")
def rule_id(self) -> str:
"""
The ID of the alert rule.
"""
return pulumi.get(self, "rule_id")
@property
@pulumi.getter(name="silenceTime")
def silence_time(self) -> int:
"""
The mute period during which new alerts are not reported even if the alert trigger conditions are met. Unit: seconds. Default value: `86400`, which is equivalent to one day.
"""
return pulumi.get(self, "silence_time")
@property
@pulumi.getter(name="sourceType")
def source_type(self) -> str:
"""
The type of the alert rule. The value is fixed to METRIC, indicating an alert rule for time series metrics.
"""
return pulumi.get(self, "source_type")
@property
@pulumi.getter
def status(self) -> str:
"""
The status of the Group Metric Rule.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def webhook(self) -> str:
"""
The callback URL.
"""
return pulumi.get(self, "webhook")
@pulumi.output_type
class GetGroupMetricRulesRuleEscalationResult(dict):
def __init__(__self__, *,
criticals: Sequence['outputs.GetGroupMetricRulesRuleEscalationCriticalResult'],
infos: Sequence['outputs.GetGroupMetricRulesRuleEscalationInfoResult'],
warns: Sequence['outputs.GetGroupMetricRulesRuleEscalationWarnResult']):
"""
:param Sequence['GetGroupMetricRulesRuleEscalationCriticalArgs'] criticals: The critical level.
:param Sequence['GetGroupMetricRulesRuleEscalationInfoArgs'] infos: The info level.
:param Sequence['GetGroupMetricRulesRuleEscalationWarnArgs'] warns: The warn level.
"""
pulumi.set(__self__, "criticals", criticals)
pulumi.set(__self__, "infos", infos)
pulumi.set(__self__, "warns", warns)
@property
@pulumi.getter
def criticals(self) -> Sequence['outputs.GetGroupMetricRulesRuleEscalationCriticalResult']:
"""
The critical level.
"""
return pulumi.get(self, "criticals")
@property
@pulumi.getter
def infos(self) -> Sequence['outputs.GetGroupMetricRulesRuleEscalationInfoResult']:
"""
The info level.
"""
return pulumi.get(self, "infos")
@property
@pulumi.getter
def warns(self) -> Sequence['outputs.GetGroupMetricRulesRuleEscalationWarnResult']:
"""
The warn level.
"""
return pulumi.get(self, "warns")
@pulumi.output_type
class GetGroupMetricRulesRuleEscalationCriticalResult(dict):
def __init__(__self__, *,
comparison_operator: str,
statistics: str,
threshold: str,
times: int):
"""
:param str comparison_operator: The comparison operator of the threshold for critical-level alerts.
:param str statistics: The statistical aggregation method for critical-level alerts.
:param str threshold: The threshold for critical-level alerts.
:param int times: The consecutive number of times for which the metric value is measured before a critical-level alert is triggered.
"""
pulumi.set(__self__, "comparison_operator", comparison_operator)
pulumi.set(__self__, "statistics", statistics)
pulumi.set(__self__, "threshold", threshold)
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> str:
"""
The comparison operator of the threshold for critical-level alerts.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> str:
"""
The statistical aggregation method for critical-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> str:
"""
The threshold for critical-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> int:
"""
The consecutive number of times for which the metric value is measured before a critical-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GetGroupMetricRulesRuleEscalationInfoResult(dict):
def __init__(__self__, *,
comparison_operator: str,
statistics: str,
threshold: str,
times: int):
"""
:param str comparison_operator: The comparison operator of the threshold for info-level alerts.
:param str statistics: The statistical aggregation method for info-level alerts.
:param str threshold: The threshold for info-level alerts.
:param int times: The consecutive number of times for which the metric value is measured before an info-level alert is triggered.
"""
pulumi.set(__self__, "comparison_operator", comparison_operator)
pulumi.set(__self__, "statistics", statistics)
pulumi.set(__self__, "threshold", threshold)
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> str:
"""
The comparison operator of the threshold for info-level alerts.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> str:
"""
The statistical aggregation method for info-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> str:
"""
The threshold for info-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> int:
"""
The consecutive number of times for which the metric value is measured before an info-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GetGroupMetricRulesRuleEscalationWarnResult(dict):
def __init__(__self__, *,
comparison_operator: str,
statistics: str,
threshold: str,
times: int):
"""
:param str comparison_operator: The comparison operator of the threshold for warn-level alerts.
:param str statistics: The statistical aggregation method for warn-level alerts.
:param str threshold: The threshold for warn-level alerts.
:param int times: The consecutive number of times for which the metric value is measured before a warn-level alert is triggered.
"""
pulumi.set(__self__, "comparison_operator", comparison_operator)
pulumi.set(__self__, "statistics", statistics)
pulumi.set(__self__, "threshold", threshold)
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> str:
"""
The comparison operator of the threshold for warn-level alerts.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> str:
"""
The statistical aggregation method for warn-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> str:
"""
The threshold for warn-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> int:
"""
The consecutive number of times for which the metric value is measured before a warn-level alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GetMetricRuleTemplatesTemplateResult(dict):
def __init__(__self__, *,
alert_templates: Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateResult'],
description: str,
group_id: str,
id: str,
metric_rule_template_name: str,
rest_version: str,
template_id: str):
"""
:param Sequence['GetMetricRuleTemplatesTemplateAlertTemplateArgs'] alert_templates: The details of alert rules that are generated based on the alert template.
:param str description: The description of the alert template.
:param str group_id: The ID of the application group.
:param str id: The ID of the Metric Rule Template.
:param str metric_rule_template_name: The name of the alert template.
:param str rest_version: The version of the alert template.
:param str template_id: The ID of the alert template.
"""
pulumi.set(__self__, "alert_templates", alert_templates)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "group_id", group_id)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "metric_rule_template_name", metric_rule_template_name)
pulumi.set(__self__, "rest_version", rest_version)
pulumi.set(__self__, "template_id", template_id)
@property
@pulumi.getter(name="alertTemplates")
def alert_templates(self) -> Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateResult']:
"""
The details of alert rules that are generated based on the alert template.
"""
return pulumi.get(self, "alert_templates")
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the alert template.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="groupId")
def group_id(self) -> str:
"""
The ID of the application group.
"""
return pulumi.get(self, "group_id")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the Metric Rule Template.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="metricRuleTemplateName")
def metric_rule_template_name(self) -> str:
"""
The name of the alert template.
"""
return pulumi.get(self, "metric_rule_template_name")
@property
@pulumi.getter(name="restVersion")
def rest_version(self) -> str:
"""
The version of the alert template.
"""
return pulumi.get(self, "rest_version")
@property
@pulumi.getter(name="templateId")
def template_id(self) -> str:
"""
The ID of the alert template.
"""
return pulumi.get(self, "template_id")
@pulumi.output_type
class GetMetricRuleTemplatesTemplateAlertTemplateResult(dict):
def __init__(__self__, *,
category: str,
escalations: Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationResult'],
metric_name: str,
namespace: str,
rule_name: str,
selector: str,
webhook: str):
"""
:param str category: The abbreviation of the service name. Valid values: `ecs`, `rds`, `ads`, `slb`, `vpc`, `apigateway`, `cdn`, `cs`, `dcdn`, `ddos`, `eip`, `elasticsearch`, `emr`, `ess`, `hbase`, `iot_edge`, `kvstore_sharding`, `kvstore_splitrw`, `kvstore_standard`, `memcache`, `mns`, `mongodb`, `mongodb_cluster`, `mongodb_sharding`, `mq_topic`, `ocs`, `opensearch`, `oss`, `polardb`, `petadata`, `scdn`, `sharebandwidthpackages`, `sls`, `vpn`.
:param Sequence['GetMetricRuleTemplatesTemplateAlertTemplateEscalationArgs'] escalations: The information about the trigger condition based on the alert level.
:param str metric_name: The name of the metric.
:param str namespace: The namespace of the service.
:param str rule_name: The name of the alert rule.
:param str webhook: The callback URL to which a POST request is sent when an alert is triggered based on the alert rule.
"""
pulumi.set(__self__, "category", category)
pulumi.set(__self__, "escalations", escalations)
pulumi.set(__self__, "metric_name", metric_name)
pulumi.set(__self__, "namespace", namespace)
pulumi.set(__self__, "rule_name", rule_name)
pulumi.set(__self__, "selector", selector)
pulumi.set(__self__, "webhook", webhook)
@property
@pulumi.getter
def category(self) -> str:
"""
The abbreviation of the service name. Valid values: `ecs`, `rds`, `ads`, `slb`, `vpc`, `apigateway`, `cdn`, `cs`, `dcdn`, `ddos`, `eip`, `elasticsearch`, `emr`, `ess`, `hbase`, `iot_edge`, `kvstore_sharding`, `kvstore_splitrw`, `kvstore_standard`, `memcache`, `mns`, `mongodb`, `mongodb_cluster`, `mongodb_sharding`, `mq_topic`, `ocs`, `opensearch`, `oss`, `polardb`, `petadata`, `scdn`, `sharebandwidthpackages`, `sls`, `vpn`.
"""
return pulumi.get(self, "category")
@property
@pulumi.getter
def escalations(self) -> Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationResult']:
"""
The information about the trigger condition based on the alert level.
"""
return pulumi.get(self, "escalations")
@property
@pulumi.getter(name="metricName")
def metric_name(self) -> str:
"""
The name of the metric.
"""
return pulumi.get(self, "metric_name")
@property
@pulumi.getter
def namespace(self) -> str:
"""
The namespace of the service.
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="ruleName")
def rule_name(self) -> str:
"""
The name of the alert rule.
"""
return pulumi.get(self, "rule_name")
@property
@pulumi.getter
def selector(self) -> str:
return pulumi.get(self, "selector")
@property
@pulumi.getter
def webhook(self) -> str:
"""
The callback URL to which a POST request is sent when an alert is triggered based on the alert rule.
"""
return pulumi.get(self, "webhook")
@pulumi.output_type
class GetMetricRuleTemplatesTemplateAlertTemplateEscalationResult(dict):
def __init__(__self__, *,
criticals: Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationCriticalResult'],
infos: Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationInfoResult'],
warns: Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationWarnResult']):
"""
:param Sequence['GetMetricRuleTemplatesTemplateAlertTemplateEscalationCriticalArgs'] criticals: The condition for triggering critical-level alerts.
:param Sequence['GetMetricRuleTemplatesTemplateAlertTemplateEscalationInfoArgs'] infos: The condition for triggering info-level alerts.
:param Sequence['GetMetricRuleTemplatesTemplateAlertTemplateEscalationWarnArgs'] warns: The condition for triggering warn-level alerts.
"""
pulumi.set(__self__, "criticals", criticals)
pulumi.set(__self__, "infos", infos)
pulumi.set(__self__, "warns", warns)
@property
@pulumi.getter
def criticals(self) -> Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationCriticalResult']:
"""
The condition for triggering critical-level alerts.
"""
return pulumi.get(self, "criticals")
@property
@pulumi.getter
def infos(self) -> Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationInfoResult']:
"""
The condition for triggering info-level alerts.
"""
return pulumi.get(self, "infos")
@property
@pulumi.getter
def warns(self) -> Sequence['outputs.GetMetricRuleTemplatesTemplateAlertTemplateEscalationWarnResult']:
"""
The condition for triggering warn-level alerts.
"""
return pulumi.get(self, "warns")
@pulumi.output_type
class GetMetricRuleTemplatesTemplateAlertTemplateEscalationCriticalResult(dict):
def __init__(__self__, *,
comparison_operator: str,
statistics: str,
threshold: str,
times: str):
"""
:param str comparison_operator: The comparison operator of the threshold for critical-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
:param str statistics: The statistical aggregation method for critical-level alerts.
:param str threshold: The threshold for critical-level alerts.
:param str times: The consecutive number of times for which the metric value is measured before a critical-level
alert is triggered.
"""
pulumi.set(__self__, "comparison_operator", comparison_operator)
pulumi.set(__self__, "statistics", statistics)
pulumi.set(__self__, "threshold", threshold)
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> str:
"""
The comparison operator of the threshold for critical-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> str:
"""
The statistical aggregation method for critical-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> str:
"""
The threshold for critical-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> str:
"""
The consecutive number of times for which the metric value is measured before a critical-level
alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GetMetricRuleTemplatesTemplateAlertTemplateEscalationInfoResult(dict):
def __init__(__self__, *,
comparison_operator: str,
statistics: str,
threshold: str,
times: str):
"""
:param str comparison_operator: The comparison operator of the threshold for info-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
:param str statistics: The statistical aggregation method for info-level alerts.
:param str threshold: The threshold for info-level alerts.
:param str times: The consecutive number of times for which the metric value is measured before an info-level
alert is triggered.
"""
pulumi.set(__self__, "comparison_operator", comparison_operator)
pulumi.set(__self__, "statistics", statistics)
pulumi.set(__self__, "threshold", threshold)
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> str:
"""
The comparison operator of the threshold for info-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> str:
"""
The statistical aggregation method for info-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> str:
"""
The threshold for info-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> str:
"""
The consecutive number of times for which the metric value is measured before an info-level
alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GetMetricRuleTemplatesTemplateAlertTemplateEscalationWarnResult(dict):
def __init__(__self__, *,
comparison_operator: str,
statistics: str,
threshold: str,
times: str):
"""
:param str comparison_operator: The comparison operator of the threshold for warn-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
:param str statistics: The statistical aggregation method for warn-level alerts.
:param str threshold: The threshold for warn-level alerts.
:param str times: The consecutive number of times for which the metric value is measured before a warn-level
alert is triggered.
"""
pulumi.set(__self__, "comparison_operator", comparison_operator)
pulumi.set(__self__, "statistics", statistics)
pulumi.set(__self__, "threshold", threshold)
pulumi.set(__self__, "times", times)
@property
@pulumi.getter(name="comparisonOperator")
def comparison_operator(self) -> str:
"""
The comparison operator of the threshold for warn-level alerts. Valid values: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanOrEqualToThreshold`, `LessThanThreshold`, `NotEqualToThreshold`, `GreaterThanYesterday`, `LessThanYesterday`, `GreaterThanLastWeek`, `LessThanLastWeek`, `GreaterThanLastPeriod`, `LessThanLastPeriod`.
"""
return pulumi.get(self, "comparison_operator")
@property
@pulumi.getter
def statistics(self) -> str:
"""
The statistical aggregation method for warn-level alerts.
"""
return pulumi.get(self, "statistics")
@property
@pulumi.getter
def threshold(self) -> str:
"""
The threshold for warn-level alerts.
"""
return pulumi.get(self, "threshold")
@property
@pulumi.getter
def times(self) -> str:
"""
The consecutive number of times for which the metric value is measured before a warn-level
alert is triggered.
"""
return pulumi.get(self, "times")
@pulumi.output_type
class GetMonitorGroupInstancesInstanceResult(dict):
def __init__(__self__, *,
instances: Sequence['outputs.GetMonitorGroupInstancesInstanceInstanceResult']):
pulumi.set(__self__, "instances", instances)
@property
@pulumi.getter
def instances(self) -> Sequence['outputs.GetMonitorGroupInstancesInstanceInstanceResult']:
return pulumi.get(self, "instances")
@pulumi.output_type
class GetMonitorGroupInstancesInstanceInstanceResult(dict):
def __init__(__self__, *,
category: str,
instance_id: str,
instance_name: str,
region_id: str):
pulumi.set(__self__, "category", category)
pulumi.set(__self__, "instance_id", instance_id)
pulumi.set(__self__, "instance_name", instance_name)
pulumi.set(__self__, "region_id", region_id)
@property
@pulumi.getter
def category(self) -> str:
return pulumi.get(self, "category")
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> str:
return pulumi.get(self, "instance_id")
@property
@pulumi.getter(name="instanceName")
def instance_name(self) -> str:
return pulumi.get(self, "instance_name")
@property
@pulumi.getter(name="regionId")
def region_id(self) -> str:
return pulumi.get(self, "region_id")
@pulumi.output_type
class GetMonitorGroupsGroupResult(dict):
def __init__(__self__, *,
bind_url: str,
contact_groups: Sequence[str],
dynamic_tag_rule_id: str,
gmt_create: int,
gmt_modified: int,
group_id: str,
id: str,
monitor_group_name: str,
service_id: str,
tags: Mapping[str, Any],
template_ids: Sequence[str],
type: str):
"""
:param str bind_url: The URL of the Kubernetes cluster from which the application group is synchronized.
:param Sequence[str] contact_groups: The list of alert groups that receive alert notifications for the application group.
:param str dynamic_tag_rule_id: The ID of the tag rule.
:param int gmt_create: The time when the application group was created.
:param int gmt_modified: The time when the application group was modified.
:param str group_id: The ID of the application group.
:param str id: The ID of the Monitor Group.
:param str monitor_group_name: The name of the application group.
:param str service_id: The ID of the Alibaba Cloud service.
:param Mapping[str, Any] tags: A map of tags assigned to the Cms Monitor Group.
:param Sequence[str] template_ids: The alert templates applied to the application group.
:param str type: The type of the application group.
"""
pulumi.set(__self__, "bind_url", bind_url)
pulumi.set(__self__, "contact_groups", contact_groups)
pulumi.set(__self__, "dynamic_tag_rule_id", dynamic_tag_rule_id)
pulumi.set(__self__, "gmt_create", gmt_create)
pulumi.set(__self__, "gmt_modified", gmt_modified)
pulumi.set(__self__, "group_id", group_id)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "monitor_group_name", monitor_group_name)
pulumi.set(__self__, "service_id", service_id)
pulumi.set(__self__, "tags", tags)
pulumi.set(__self__, "template_ids", template_ids)
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="bindUrl")
def bind_url(self) -> str:
"""
The URL of the Kubernetes cluster from which the application group is synchronized.
"""
return pulumi.get(self, "bind_url")
@property
@pulumi.getter(name="contactGroups")
def contact_groups(self) -> Sequence[str]:
"""
The list of alert groups that receive alert notifications for the application group.
"""
return pulumi.get(self, "contact_groups")
@property
@pulumi.getter(name="dynamicTagRuleId")
def dynamic_tag_rule_id(self) -> str:
"""
The ID of the tag rule.
"""
return pulumi.get(self, "dynamic_tag_rule_id")
@property
@pulumi.getter(name="gmtCreate")
def gmt_create(self) -> int:
"""
The time when the application group was created.
"""
return pulumi.get(self, "gmt_create")
@property
@pulumi.getter(name="gmtModified")
def gmt_modified(self) -> int:
"""
The time when the application group was modified.
"""
return pulumi.get(self, "gmt_modified")
@property
@pulumi.getter(name="groupId")
def group_id(self) -> str:
"""
The ID of the application group.
"""
return pulumi.get(self, "group_id")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the Monitor Group.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="monitorGroupName")
def monitor_group_name(self) -> str:
"""
The name of the application group.
"""
return pulumi.get(self, "monitor_group_name")
@property
@pulumi.getter(name="serviceId")
def service_id(self) -> str:
"""
The ID of the Alibaba Cloud service.
"""
return pulumi.get(self, "service_id")
@property
@pulumi.getter
def tags(self) -> Mapping[str, Any]:
"""
A map of tags assigned to the Cms Monitor Group.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="templateIds")
def template_ids(self) -> Sequence[str]:
"""
The alert templates applied to the application group.
"""
return pulumi.get(self, "template_ids")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the application group.
"""
return pulumi.get(self, "type")
| 38.505639 | 456 | 0.640896 | 8,421 | 81,940 | 6.045719 | 0.045482 | 0.022412 | 0.038813 | 0.056726 | 0.801418 | 0.766239 | 0.743513 | 0.69374 | 0.67235 | 0.654004 | 0 | 0.000443 | 0.256322 | 81,940 | 2,127 | 457 | 38.523742 | 0.835026 | 0.318123 | 0 | 0.733982 | 1 | 0.008921 | 0.173375 | 0.0723 | 0 | 0 | 0 | 0 | 0 | 1 | 0.174371 | false | 0 | 0.004866 | 0.007299 | 0.344688 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7383605c0462c57365d6f53de4e69dc1d56d2a98 | 3,669 | py | Python | scivision_plankton_models/model.py | acocac/scivision-plankton-torch | 9d382b7ed95bd051fcf1cd90d82966202b26d659 | [
"BSD-3-Clause"
] | null | null | null | scivision_plankton_models/model.py | acocac/scivision-plankton-torch | 9d382b7ed95bd051fcf1cd90d82966202b26d659 | [
"BSD-3-Clause"
] | null | null | null | scivision_plankton_models/model.py | acocac/scivision-plankton-torch | 9d382b7ed95bd051fcf1cd90d82966202b26d659 | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
import torch
import torchvision
import pickle
class resnet50_label1:
def __init__(self):
# preload the pretrained model
model = torchvision.models.resnet50(pretrained=True)
num_ftrs = model.fc.in_features
target_classes = 2
model.fc = torch.nn.Linear(num_ftrs, target_classes)
# replace the default weights with the fine-tuned model
model.load_state_dict(torch.load('/output/models/resnet50/resnet50_label1_001.pth', map_location=torch.device('cpu')))  # path to your fine-tuned weights
# initialise the model in evaluation mode
self.pretrained_model = model
self.pretrained_model.eval()
def predict(self, image: np.ndarray) -> np.ndarray:
batch = torch.tensor(image)
y = self.pretrained_model(batch)
return y
class resnet50_label2:
def __init__(self):
# preload the pretrained model
model = torchvision.models.resnet50(pretrained=True)
num_ftrs = model.fc.in_features
target_classes = 3
model.fc = torch.nn.Linear(num_ftrs, target_classes)
# replace the default weights with the fine-tuned model
model.load_state_dict(torch.load('/output/models/resnet50/resnet50_label2_001.pth', map_location=torch.device('cpu')))  # path to your fine-tuned weights
# initialise the model in evaluation mode
self.pretrained_model = model
self.pretrained_model.eval()
def predict(self, image: np.ndarray) -> np.ndarray:
batch = torch.tensor(image)
y = self.pretrained_model(batch)
return y
class resnet50_label3:
def __init__(self):
# preload the pretrained model
model = torchvision.models.resnet50(pretrained=True)
num_ftrs = model.fc.in_features
target_classes = 39
model.fc = torch.nn.Linear(num_ftrs, target_classes)
# replace the default weights with the fine-tuned model
model.load_state_dict(torch.load('/output/models/resnet50/resnet50_label3_001.pth', map_location=torch.device('cpu')))  # path to your fine-tuned weights
# initialise the model in evaluation mode
self.pretrained_model = model
self.pretrained_model.eval()
def predict(self, image: np.ndarray) -> np.ndarray:
batch = torch.tensor(image)
y = self.pretrained_model(batch)
return y
class randomforest_label1:
def __init__(self):
with open("/output/models/randomforest/rf-label1.pkl", 'rb') as f:
rf_model = pickle.load(f)
self.pretrained_model = rf_model
# def features(self, image: np.ndarray) -> np.ndarray:
# self.img_features = 'test'
def predict(self, image: np.ndarray) -> np.ndarray:
y = self.pretrained_model.predict(image)
return y
class randomforest_label2:
def __init__(self):
with open("/output/models/randomforest/rf-label2.pkl", 'rb') as f:
rf_model = pickle.load(f)
self.pretrained_model = rf_model
# def features(self, image: np.ndarray) -> np.ndarray:
# self.img_features = 'test'
def predict(self, image: np.ndarray) -> np.ndarray:
y = self.pretrained_model.predict(image)
return y
class randomforest_label3:
def __init__(self):
with open("/output/models/randomforest/rf-label3.pkl", 'rb') as f:
rf_model = pickle.load(f)
self.pretrained_model = rf_model
# def features(self, image: np.ndarray) -> np.ndarray:
# self.img_features = 'test'
def predict(self, image: np.ndarray) -> np.ndarray:
y = self.pretrained_model.predict(image)
return y
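# --- Editor's illustrative sketch (not part of the original module) ---
# `predict` forwards the array to the network unchanged, so callers are
# assumed to pass an already preprocessed float32 batch of shape
# (N, 3, H, W); the shape and dummy data below are assumptions.
def _example_resnet_prediction():
    model = resnet50_label1()  # loads the fine-tuned weights listed above
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy batch
    with torch.no_grad():  # inference only, no gradients needed
        logits = model.predict(batch)  # shape (1, 2): one score per class
    return logits.argmax(dim=1)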
if __name__ == "__main__":
pass
| 34.613208 | 151 | 0.663123 | 476 | 3,669 | 4.918067 | 0.159664 | 0.115335 | 0.121743 | 0.069201 | 0.927381 | 0.927381 | 0.927381 | 0.927381 | 0.927381 | 0.927381 | 0 | 0.017475 | 0.235759 | 3,669 | 106 | 152 | 34.613208 | 0.817404 | 0.179613 | 0 | 0.695652 | 0 | 0 | 0.095922 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0.014493 | 0.057971 | 0 | 0.405797 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7387850975897cf1ca8c2626ab811840531d9e95 | 37 | py | Python | GmailWrapper_JE/je_gmail/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | 2 | 2020-12-30T06:37:10.000Z | 2020-12-30T07:27:45.000Z | GmailWrapper_JE/je_gmail/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | GmailWrapper_JE/je_gmail/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | from je_gmail.core import GmailCore
| 18.5 | 36 | 0.837838 | 6 | 37 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73986838dc338f097b909f6c861e928a3573dbd4 | 229 | py | Python | Daraja/mpesa_integration/keys.py | DelivceNdegwa/Daraja- | 6d46ae3b57c3fe3e91877578db7b975d2b0fd1a3 | [
"MIT"
] | null | null | null | Daraja/mpesa_integration/keys.py | DelivceNdegwa/Daraja- | 6d46ae3b57c3fe3e91877578db7b975d2b0fd1a3 | [
"MIT"
] | null | null | null | Daraja/mpesa_integration/keys.py | DelivceNdegwa/Daraja- | 6d46ae3b57c3fe3e91877578db7b975d2b0fd1a3 | [
"MIT"
] | null | null | null | shortcode = "174379"
lipa_na_mpesa_pass_key = "bfb279f9aa9bdbcf158e97dd71a467cd2e0c893059b10f78e6b72ada1ed2c91"
phone_number = "254711994966"
consumer_key = "aGd3WKTtLpL9AsvGkfi6MaLNqE5H1mCA"
consumer_secret = "JBqhnOTuxftnTe2E"
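# --- Editor's illustrative sketch (assumption: these are Safaricom Daraja
# sandbox credentials; 174379 is the public sandbox shortcode). Daraja's
# Lipa na M-Pesa STK-push password is base64(shortcode + pass key + timestamp):
def generate_stk_password(timestamp: str) -> str:
    import base64
    raw = shortcode + lipa_na_mpesa_pass_key + timestamp
    return base64.b64encode(raw.encode()).decode()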
| 38.166667 | 90 | 0.868996 | 17 | 229 | 11.294118 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.271028 | 0.065502 | 229 | 5 | 91 | 45.8 | 0.626168 | 0 | 0 | 0 | 0 | 0 | 0.563319 | 0.414847 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
73b5506c8d4f24e33676da44a510e9e0c0a8215c | 8,947 | py | Python | testsuite/test_resolver.py | ronaldoussoren/objc_asyncio | 89e573fd4c95592515ea8c8a4abfeebdd261fde2 | [
"MIT-0"
] | 2 | 2021-02-20T22:10:54.000Z | 2021-03-26T21:45:06.000Z | testsuite/test_resolver.py | ronaldoussoren/objc_asyncio | 89e573fd4c95592515ea8c8a4abfeebdd261fde2 | [
"MIT-0"
] | null | null | null | testsuite/test_resolver.py | ronaldoussoren/objc_asyncio | 89e573fd4c95592515ea8c8a4abfeebdd261fde2 | [
"MIT-0"
] | null | null | null | import socket
import unittest
import unittest.mock
from . import utils
class TestResolver(utils.TestCase):
def test_getaddrinfo(self):
for dom, port in (("blog.ronaldoussoren.net", 80), ("www.python.org", "https")):
with self.subTest(dom=dom, family="*"):
infos = self.loop.run_until_complete(self.loop.getaddrinfo(dom, port))
self.assertEqual(set(infos), set(socket.getaddrinfo(dom, port)))
for dom, port in (("blog.ronaldoussoren.net", 80), ("www.python.org", "https")):
with self.subTest(dom=dom, family="IPv4"):
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(dom, port, family=socket.AF_INET)
)
self.assertEqual(
set(infos),
set(socket.getaddrinfo(dom, port, family=socket.AF_INET)),
)
for dom, port in (("blog.ronaldoussoren.net", 80), ("www.python.org", "https")):
with self.subTest(dom=dom, proto="STREAM"):
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(dom, port, proto=socket.SOCK_STREAM)
)
self.assertEqual(
set(infos),
set(socket.getaddrinfo(dom, port, proto=socket.SOCK_STREAM)),
)
def test_getaddrinfo_debug(self):
with utils.captured_log() as stream:
self.loop.set_debug(True)
for dom, port in (
("blog.ronaldoussoren.net", 80),
("www.python.org", "https"),
):
with self.subTest(dom=dom, family="*"):
stream.seek(0)
stream.truncate()
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(dom, port)
)
self.assertEqual(set(infos), set(socket.getaddrinfo(dom, port)))
contents = stream.getvalue()
self.assertIn("Get address info", contents)
self.assertIn("Getting address info", contents)
self.assertNotIn("family=", contents)
self.assertNotIn("type=", contents)
self.assertNotIn("proto=", contents)
self.assertNotIn("flags=", contents)
self.assertIn("DEBUG", contents)
for dom, port in (
("blog.ronaldoussoren.net", 80),
("www.python.org", "https"),
):
with self.subTest(dom=dom, family="IPv4"):
stream.seek(0)
stream.truncate()
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(dom, port, family=socket.AF_INET)
)
self.assertEqual(
set(infos),
set(socket.getaddrinfo(dom, port, family=socket.AF_INET)),
)
contents = stream.getvalue()
self.assertIn("family=", contents)
self.assertNotIn("type=", contents)
self.assertNotIn("proto=", contents)
self.assertNotIn("flags=", contents)
for dom, port in (
("blog.ronaldoussoren.net", 80),
("www.python.org", "https"),
):
with self.subTest(dom=dom, type="STREAM"):
stream.seek(0)
stream.truncate()
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(dom, port, type=socket.SOCK_STREAM)
)
self.assertEqual(
set(infos),
set(socket.getaddrinfo(dom, port, type=socket.SOCK_STREAM)),
)
contents = stream.getvalue()
self.assertNotIn("family=", contents)
self.assertIn("type=", contents)
self.assertNotIn("proto=", contents)
self.assertNotIn("flags=", contents)
for dom, port in (
("blog.ronaldoussoren.net", 80),
("www.python.org", "https"),
):
with self.subTest(dom=dom, proto="TCP"):
stream.seek(0)
stream.truncate()
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(
dom, port, type=socket.SOCK_STREAM, proto=socket.IPPROTO_TCP
)
)
self.assertEqual(
set(infos),
set(
socket.getaddrinfo(
dom,
port,
type=socket.SOCK_STREAM,
proto=socket.IPPROTO_TCP,
)
),
)
contents = stream.getvalue()
self.assertNotIn("family=", contents)
self.assertIn("type=", contents)
self.assertIn("proto=", contents)
self.assertNotIn("flags=", contents)
for dom, port in (
("blog.ronaldoussoren.net", 80),
("www.python.org", "https"),
):
with self.subTest(dom=dom, flags="AI_CANONNAME"):
stream.seek(0)
stream.truncate()
infos = self.loop.run_until_complete(
self.loop.getaddrinfo(dom, port, flags=socket.AI_CANONNAME)
)
self.assertEqual(
set(infos),
set(socket.getaddrinfo(dom, port, flags=socket.AI_CANONNAME)),
)
contents = stream.getvalue()
self.assertNotIn("family=", contents)
self.assertNotIn("type=", contents)
self.assertNotIn("proto=", contents)
self.assertIn("flags=", contents)
with self.subTest("Resolving error"):
stream.seek(0)
stream.truncate()
awaitable = self.loop.getaddrinfo("nosuchhost.python.org", 443)
with self.assertRaises(socket.error):
self.loop.run_until_complete(awaitable)
contents = stream.getvalue()
self.assertIn("Getting address info", contents)
self.assertIn("failed in", contents)
self.assertIn("DEBUG", contents)
# Check that slow queries get logged at INFO level by (crudely)
# mocking a slow clock.
with self.subTest("Slow resolver"):
with unittest.mock.patch(
"objc_asyncio.PyObjCEventLoop.time", side_effect=list(range(1000))
):
stream.seek(0)
stream.truncate()
awaitable = self.loop.getaddrinfo("www.python.org", 443)
self.loop.run_until_complete(awaitable)
contents = stream.getvalue()
self.assertIn("INFO", contents)
with self.subTest("Slow resolver"):
with unittest.mock.patch(
"objc_asyncio.PyObjCEventLoop.time", side_effect=list(range(1000))
):
stream.seek(0)
stream.truncate()
awaitable = self.loop.getaddrinfo("nosuchhost.python.org", 443)
with self.assertRaises(socket.error):
self.loop.run_until_complete(awaitable)
contents = stream.getvalue()
self.assertIn("INFO", contents)
def test_getaddrinfo_no_such_addr(self):
awaitable = self.loop.getaddrinfo("nosuchhost.python.org", 443)
with self.assertRaises(socket.error):
self.loop.run_until_complete(awaitable)
self.loop.set_debug(True)
awaitable = self.loop.getaddrinfo("nosuchhost.python.org", 443)
with self.assertRaises(socket.error):
self.loop.run_until_complete(awaitable)
def test_getnameinfo(self):
infos = socket.getaddrinfo(
"blog.ronaldoussoren.net", 80, proto=socket.SOCK_STREAM
)
self.assertNotEqual(infos, [])
for flags in (0, socket.NI_NOFQDN):
for info in infos:
result = self.loop.run_until_complete(
self.loop.getnameinfo(info[-1], flags)
)
self.assertEqual(result, socket.getnameinfo(info[-1], flags))
| 41.613953 | 88 | 0.483067 | 791 | 8,947 | 5.388116 | 0.128951 | 0.056312 | 0.067574 | 0.052557 | 0.86321 | 0.813937 | 0.813937 | 0.77405 | 0.768888 | 0.757156 | 0 | 0.010221 | 0.409523 | 8,947 | 214 | 89 | 41.808411 | 0.796517 | 0.009277 | 0 | 0.659459 | 0 | 0 | 0.090735 | 0.040289 | 0 | 0 | 0 | 0 | 0.227027 | 1 | 0.021622 | false | 0 | 0.021622 | 0 | 0.048649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fb50de40d27619dedc16d39d5566afd759239db8 | 26,107 | py | Python | morepath/tests/test_directive.py | hugovk/morepath | 5596f9ce43ee4e5cd73eaa2ab9ef37825f88ae28 | [
"BSD-3-Clause"
] | null | null | null | morepath/tests/test_directive.py | hugovk/morepath | 5596f9ce43ee4e5cd73eaa2ab9ef37825f88ae28 | [
"BSD-3-Clause"
] | null | null | null | morepath/tests/test_directive.py | hugovk/morepath | 5596f9ce43ee4e5cd73eaa2ab9ef37825f88ae28 | [
"BSD-3-Clause"
] | null | null | null | import dectate
from .fixtures import (
basic,
nested,
abbr,
mapply_bug,
method,
conflict,
noconverter,
)
from dectate import ConflictError, DirectiveReportError
from morepath.error import LinkError
from morepath.view import render_html
from morepath.converter import Converter
import morepath
import reg
import pytest
from webtest import TestApp as Client
def test_basic():
c = Client(basic.app())
response = c.get("/foo")
assert response.body == b"The view for model: foo"
response = c.get("/foo/link")
assert response.body == b"http://localhost/foo"
def test_basic_json():
c = Client(basic.app())
response = c.get("/foo/json")
assert response.body == b'{"id":"foo"}'
def test_basic_root():
c = Client(basic.app())
response = c.get("/")
assert response.body == b"The root: ROOT"
# + is to make sure we get the view, not the sub-model as
# the model is greedy
response = c.get("/+link")
assert response.body == b"http://localhost/"
def test_nested():
c = Client(nested.outer_app())
response = c.get("/inner/foo")
assert response.body == b"The view for model: foo"
response = c.get("/inner/foo/link")
assert response.body == b"http://localhost/inner/foo"
def test_abbr():
c = Client(abbr.app())
response = c.get("/foo")
assert response.body == b"Default view: foo"
response = c.get("/foo/edit")
assert response.body == b"Edit view: foo"
def test_scanned_static_method():
c = Client(method.app())
response = c.get("/static")
assert response.body == b"Static Method"
root = method.Root()
assert isinstance(root.static_method(), method.StaticMethod)
def test_scanned_no_converter():
with pytest.raises(DirectiveReportError):
noconverter.app.commit()
def test_scanned_conflict():
with pytest.raises(ConflictError):
conflict.app.commit()
def test_basic_scenario():
class app(morepath.App):
pass
@app.path(path="")
class Root(object):
def __init__(self):
self.value = "ROOT"
class Model(object):
def __init__(self, id):
self.id = id
@app.path(model=Model, path="{id}")
def get_model(id):
return Model(id)
@app.view(model=Model)
def default(self, request):
return "The view for model: %s" % self.id
@app.view(model=Model, name="link")
def link(self, request):
return request.link(self)
@app.view(model=Model, name="json", render=morepath.render_json)
def json(self, request):
return {"id": self.id}
@app.view(model=Root)
def root_default(self, request):
return "The root: %s" % self.value
@app.view(model=Root, name="link")
def root_link(self, request):
return request.link(self)
c = Client(app())
response = c.get("/foo")
assert response.body == b"The view for model: foo"
response = c.get("/foo/link")
assert response.body == b"http://localhost/foo"
response = c.get("/foo/json")
assert response.body == b'{"id":"foo"}'
response = c.get("/")
assert response.body == b"The root: ROOT"
# + is to make sure we get the view, not the sub-model
response = c.get("/+link")
assert response.body == b"http://localhost/"
def test_link_to_unknown_model():
class app(morepath.App):
pass
@app.path(path="")
class Root(object):
def __init__(self):
self.value = "ROOT"
class Model(object):
def __init__(self, id):
self.id = id
@app.view(model=Root)
def root_link(self, request):
try:
return request.link(Model("foo"))
except LinkError:
return "Link error"
@app.view(model=Root, name="default")
def root_link_with_default(self, request):
try:
return request.link(Model("foo"), default="hey")
except LinkError:
return "Link Error"
c = Client(app())
response = c.get("/")
assert response.body == b"Link error"
response = c.get("/default")
assert response.body == b"Link Error"
def test_link_to_none():
class app(morepath.App):
pass
@app.path(path="")
class Root(object):
def __init__(self):
self.value = "ROOT"
class Model(object):
def __init__(self, id):
self.id = id
@app.view(model=Root)
def root_link(self, request):
return str(request.link(None) is None)
@app.view(model=Root, name="default")
def root_link_with_default(self, request):
return request.link(None, default="unknown")
c = Client(app())
response = c.get("/")
assert response.body == b"True"
response = c.get("/default")
assert response.body == b"unknown"
def test_link_with_parameters():
class app(morepath.App):
pass
@app.path(path="")
class Root(object):
def __init__(self):
self.value = "ROOT"
class Model(object):
def __init__(self, id, param):
self.id = id
self.param = param
@app.path(model=Model, path="{id}")
def get_model(id, param=0):
assert isinstance(param, int)
return Model(id, param)
@app.view(model=Model)
def default(self, request):
return "The view for model: %s %s" % (self.id, self.param)
@app.view(model=Model, name="link")
def link(self, request):
return request.link(self)
c = Client(app())
response = c.get("/foo")
assert response.body == b"The view for model: foo 0"
response = c.get("/foo/link")
assert response.body == b"http://localhost/foo?param=0"
response = c.get("/foo?param=1")
assert response.body == b"The view for model: foo 1"
response = c.get("/foo/link?param=1")
assert response.body == b"http://localhost/foo?param=1"
def test_root_link_with_parameters():
class app(morepath.App):
pass
@app.path(path="")
class Root(object):
def __init__(self, param=0):
assert isinstance(param, int)
self.param = param
@app.view(model=Root)
def default(self, request):
return "The view for root: %s" % self.param
@app.view(model=Root, name="link")
def link(self, request):
return request.link(self)
c = Client(app())
response = c.get("/")
assert response.body == b"The view for root: 0"
response = c.get("/link")
assert response.body == b"http://localhost/?param=0"
response = c.get("/?param=1")
assert response.body == b"The view for root: 1"
response = c.get("/link?param=1")
assert response.body == b"http://localhost/?param=1"
def test_link_with_prefix():
class app(morepath.App):
pass
@app.path(path="")
class Root(object):
pass
@app.view(model=Root, name="link")
def link(self, request):
return request.link(self)
@app.link_prefix()
def link_prefix(request):
return request.headers["TESTPREFIX"]
c = Client(app())
# we don't strip or normalize the prefix, so a trailing slash on the prefix
# leads to a double slash in the generated link
response = c.get("/link", headers={"TESTPREFIX": "http://testhost/"})
assert response.body == b"http://testhost//"
response = c.get("/link", headers={"TESTPREFIX": "http://testhost"})
assert response.body == b"http://testhost/"
def test_link_with_prefix_app_arg():
class App(morepath.App):
pass
@App.path(path="")
class Root(object):
pass
@App.view(model=Root, name="link")
def link(self, request):
return request.link(self)
@App.link_prefix()
def link_prefix(app, request):
assert isinstance(app, App)
return request.headers["TESTPREFIX"]
c = Client(App())
# we don't strip or normalize the prefix, so a trailing slash on the prefix
# leads to a double slash in the generated link
response = c.get("/link", headers={"TESTPREFIX": "http://testhost/"})
assert response.body == b"http://testhost//"
response = c.get("/link", headers={"TESTPREFIX": "http://testhost"})
assert response.body == b"http://testhost/"
def test_link_prefix_cache():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    @app.view(model=Root, name="link")
    def link(self, request):
        request.link(self)  # make an extra call before returning
        return request.link(self)

    @app.link_prefix()
    def link_prefix(request):
        if not hasattr(request, "callnumber"):
            request.callnumber = 1
        else:
            request.callnumber += 1
        return str(request.callnumber)

    c = Client(app())

    response = c.get("/link")
    assert response.body == b"1/"


def test_link_with_invalid_prefix():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    @app.view(model=Root, name="link")
    def link(self, request):
        return request.link(self)

    @app.link_prefix()
    def link_prefix(request):
        return None

    c = Client(app())

    with pytest.raises(TypeError):
        c.get("/link")
def test_external_link_prefix():
    class App(morepath.App):
        pass

    class ExternalApp(morepath.App):
        pass

    class InternalDoc(object):
        pass

    class ExternalDoc(object):
        pass

    @App.path(model=InternalDoc, path="")
    def internal_path(request):
        return InternalDoc()

    @ExternalApp.path(model=ExternalDoc, path="external")
    def external_path(request):
        return ExternalDoc()

    @App.defer_links(model=ExternalDoc)
    def defer_external_links(app, obj):
        return ExternalApp()

    @ExternalApp.link_prefix()
    def prefix_external_link(request):
        return "example.org"

    @App.json(model=InternalDoc)
    def main_view(self, request):
        return {
            "internal_link": request.link(InternalDoc()),
            "external_link_def": request.link(ExternalDoc()),
            "external_link_expl": request.link(ExternalDoc(), app=ExternalApp()),
        }

    assert Client(App()).get("/").json == {
        "external_link_expl": "example.org/external",
        "external_link_def": "example.org/external",
        "internal_link": "http://localhost/",
    }
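
# Path variables and query parameters can also be derived implicitly from the
# factory signature: an argument matching a path variable fills it in, and any
# leftover argument becomes a query parameter, with its default (if any) used
# when the parameter is absent.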
def test_implicit_variables():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    class Model(object):
        def __init__(self, id):
            self.id = id

    @app.path(model=Model, path="{id}")
    def get_model(id):
        return Model(id)

    @app.view(model=Model)
    def default(self, request):
        return "The view for model: %s" % self.id

    @app.view(model=Model, name="link")
    def link(self, request):
        return request.link(self)

    c = Client(app())

    response = c.get("/foo/link")
    assert response.body == b"http://localhost/foo"


def test_implicit_parameters():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    class Model(object):
        def __init__(self, id):
            self.id = id

    @app.path(model=Model, path="foo")
    def get_model(id):
        return Model(id)

    @app.view(model=Model)
    def default(self, request):
        return "The view for model: %s" % self.id

    @app.view(model=Model, name="link")
    def link(self, request):
        return request.link(self)

    c = Client(app())

    response = c.get("/foo")
    assert response.body == b"The view for model: None"

    response = c.get("/foo?id=bar")
    assert response.body == b"The view for model: bar"

    response = c.get("/foo/link")
    assert response.body == b"http://localhost/foo"

    response = c.get("/foo/link?id=bar")
    assert response.body == b"http://localhost/foo?id=bar"


def test_implicit_parameters_default():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    class Model(object):
        def __init__(self, id):
            self.id = id

    @app.path(model=Model, path="foo")
    def get_model(id="default"):
        return Model(id)

    @app.view(model=Model)
    def default(self, request):
        return "The view for model: %s" % self.id

    @app.view(model=Model, name="link")
    def link(self, request):
        return request.link(self)

    c = Client(app())

    response = c.get("/foo")
    assert response.body == b"The view for model: default"

    response = c.get("/foo?id=bar")
    assert response.body == b"The view for model: bar"

    response = c.get("/foo/link")
    assert response.body == b"http://localhost/foo?id=default"

    response = c.get("/foo/link?id=bar")
    assert response.body == b"http://localhost/foo?id=bar"
def test_simple_root():
    class app(morepath.App):
        pass

    class Hello(object):
        pass

    hello = Hello()

    @app.path(model=Hello, path="")
    def hello_model():
        return hello

    @app.view(model=Hello)
    def hello_view(self, request):
        return "hello"

    c = Client(app())

    response = c.get("/")
    assert response.body == b"hello"


def test_json_directive():
    class app(morepath.App):
        pass

    @app.path(path="{id}")
    class Model(object):
        def __init__(self, id):
            self.id = id

    @app.json(model=Model)
    def json(self, request):
        return {"id": self.id}

    c = Client(app())

    response = c.get("/foo")
    assert response.body == b'{"id": "foo"}'


def test_redirect():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        def __init__(self):
            pass

    @app.view(model=Root, render=render_html)
    def default(self, request):
        return morepath.redirect("/")

    c = Client(app())

    c.get("/", status=302)
def test_root_conflict():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    @app.path(path="")
    class Something(object):
        pass

    with pytest.raises(ConflictError):
        app.commit()


def test_root_conflict2():
    class app(morepath.App):
        pass

    @app.path(path="")
    class Root(object):
        pass

    @app.path(path="/")
    class Something(object):
        pass

    with pytest.raises(ConflictError):
        app.commit()


def test_root_no_conflict_different_apps():
    class app_a(morepath.App):
        pass

    class app_b(morepath.App):
        pass

    @app_a.path(path="")
    class Root(object):
        pass

    @app_b.path(path="")
    class Something(object):
        pass

    dectate.commit(app_a, app_b)


def test_model_conflict():
    class app(morepath.App):
        pass

    class A(object):
        pass

    @app.path(model=A, path="a")
    def get_a():
        return A()

    @app.path(model=A, path="a")
    def get_a_again():
        return A()

    with pytest.raises(ConflictError):
        app.commit()


def test_path_conflict():
    class app(morepath.App):
        pass

    class A(object):
        pass

    class B(object):
        pass

    @app.path(model=A, path="a")
    def get_a():
        return A()

    @app.path(model=B, path="a")
    def get_b():
        return B()

    with pytest.raises(ConflictError):
        app.commit()


def test_path_conflict_with_variable():
    class app(morepath.App):
        pass

    class A(object):
        pass

    class B(object):
        pass

    @app.path(model=A, path="a/{id}")
    def get_a(id):
        return A()

    @app.path(model=B, path="a/{id2}")
    def get_b(id):
        return B()

    with pytest.raises(ConflictError):
        app.commit()


def test_path_conflict_with_variable_different_converters():
    class app(morepath.App):
        pass

    class A(object):
        pass

    class B(object):
        pass

    @app.path(model=A, path="a/{id}", converters=Converter(decode=int))
    def get_a(id):
        return A()

    @app.path(model=B, path="a/{id}")
    def get_b(id):
        return B()

    with pytest.raises(ConflictError):
        app.commit()


def test_model_no_conflict_different_apps():
    class app_a(morepath.App):
        pass

    class app_b(morepath.App):
        pass

    class A(object):
        pass

    @app_a.path(model=A, path="a")
    def get_a():
        return A()

    @app_b.path(model=A, path="a")
    def get_a_again():
        return A()

    dectate.commit(app_a, app_b)
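
# Views conflict only when model, name and all predicates coincide; a different
# name, request_method or app avoids the conflict. The json and html directives
# register views too, so they conflict with a plain view of the same name.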
def test_view_conflict():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.view(model=Model, name="a")
    def a_view(self, request):
        pass

    @app.view(model=Model, name="a")
    def a1_view(self, request):
        pass

    with pytest.raises(ConflictError):
        app.commit()


def test_view_no_conflict_different_names():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.view(model=Model, name="a")
    def a_view(self, request):
        pass

    @app.view(model=Model, name="b")
    def b_view(self, request):
        pass

    app.commit()


def test_view_no_conflict_different_predicates():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.view(model=Model, name="a", request_method="GET")
    def a_view(self, request):
        pass

    @app.view(model=Model, name="a", request_method="POST")
    def b_view(self, request):
        pass

    app.commit()


def test_view_no_conflict_different_apps():
    class app_a(morepath.App):
        pass

    class app_b(morepath.App):
        pass

    class Model(object):
        pass

    @app_a.view(model=Model, name="a")
    def a_view(self, request):
        pass

    @app_b.view(model=Model, name="a")
    def a1_view(self, request):
        pass

    dectate.commit(app_a, app_b)


def test_view_conflict_with_json():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.view(model=Model, name="a")
    def a_view(self, request):
        pass

    @app.json(model=Model, name="a")
    def a1_view(self, request):
        pass

    with pytest.raises(ConflictError):
        app.commit()


def test_view_conflict_with_html():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.view(model=Model, name="a")
    def a_view(self, request):
        pass

    @app.html(model=Model, name="a")
    def a1_view(self, request):
        pass

    with pytest.raises(ConflictError):
        app.commit()
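
# dispatch_method declares a method that dispatches on one of its arguments;
# App.method registers an implementation per argument class, and the original
# method body serves as the fallback for unregistered classes.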
def test_function():
    class App(morepath.App):
        @morepath.dispatch_method("a")
        def func(self, a):
            return "default"

    class A(object):
        pass

    @App.method(App.func, a=A)
    def a_func(app, a):
        return "A"

    app = App()

    assert app.func(A()) == "A"
    assert app.func(None) == "default"


def test_method():
    class App(morepath.App):
        @morepath.dispatch_method("a")
        def func(self, a):
            return "default"

    class A(object):
        pass

    @App.method(App.func, a=A)
    def a_func(app, a):
        assert isinstance(app, App)
        return "A"

    app = App()

    assert app.func(A()) == "A"
    assert app.func(None) == "default"


def test_function_conflict():
    class app(morepath.App):
        @morepath.dispatch_method("a")
        def func(self, a):
            pass

    class A(object):
        pass

    @app.method(app.func, a=A)
    def a_func(app, a):
        pass

    @app.method(app.func, a=A)
    def a1_func(app, a):
        pass

    with pytest.raises(ConflictError):
        app.commit()


def test_function_no_conflict_different_apps():
    class base(morepath.App):
        @morepath.dispatch_method("a")
        def func(self, a):
            pass

    class app_a(base):
        pass

    class app_b(base):
        pass

    class A(object):
        pass

    @app_a.method(base.func, a=A)
    def a_func(app, a):
        pass

    @app_b.method(base.func, a=A)
    def a1_func(app, a):
        pass

    dectate.commit(app_a, app_b)
def test_run_app_with_context_without_it():
    class app(morepath.App):
        def __init__(self, mount_id):
            self.mount_id = mount_id

    with pytest.raises(TypeError):
        app()


def test_mapply_bug():
    c = Client(mapply_bug.app())

    response = c.get("/")
    assert response.body == b"the root"
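
# Using app.view(...) as a context manager abbreviates a group of view
# registrations that share keyword arguments; each inner @view(...) call only
# supplies the arguments that differ.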
def test_abbr_imperative():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.path(path="/", model=Model)
    def get_model():
        return Model()

    with app.view(model=Model) as view:

        @view()
        def default(self, request):
            return "Default view"

        @view(name="edit")
        def edit(self, request):
            return "Edit view"

    c = Client(app())

    response = c.get("/")
    assert response.body == b"Default view"

    response = c.get("/edit")
    assert response.body == b"Edit view"


def test_abbr_exception():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.path(path="/", model=Model)
    def get_model():
        return Model()

    try:
        with app.view(model=Model) as view:

            @view()
            def default(self, request):
                return "Default view"

            1 / 0

            @view(name="edit")
            def edit(self, request):
                return "Edit view"

    except ZeroDivisionError:
        pass

    c = Client(app())

    response = c.get("/")
    assert response.body == b"Default view"

    # an exception happened halfway, so this one is never registered
    c.get("/edit", status=404)
def test_abbr_imperative2():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.path(path="/", model=Model)
    def get_model():
        return Model()

    with app.view(model=Model) as view:

        @view()
        def default(self, request):
            return "Default view"

        @view(name="edit")
        def edit(self, request):
            return "Edit view"

    c = Client(app())

    response = c.get("/")
    assert response.body == b"Default view"

    response = c.get("/edit")
    assert response.body == b"Edit view"


def test_abbr_nested():
    class app(morepath.App):
        pass

    class Model(object):
        pass

    @app.path(path="/", model=Model)
    def get_model():
        return Model()

    with app.view(model=Model) as view:

        @view()
        def default(self, request):
            return "Default"

        with view(name="extra") as view:

            @view()
            def get(self, request):
                return "Get"

            @view(request_method="POST")
            def post(self, request):
                return "Post"

    c = Client(app())

    response = c.get("/")
    assert response.body == b"Default"

    response = c.get("/extra")
    assert response.body == b"Get"

    response = c.post("/extra")
    assert response.body == b"Post"
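
# Plain dispatch_method dispatches on the class of the argument *value*;
# wrapping the argument name in reg.match_class instead dispatches on a class
# object passed as the argument, as the second test below shows.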
def test_function_directive():
    class app(morepath.App):
        @morepath.dispatch_method("o")
        def mygeneric(self, o):
            return "The object: %s" % o

    class Foo(object):
        def __init__(self, value):
            self.value = value

        def __repr__(self):
            return "<Foo with value: %s>" % self.value

    @app.method(app.mygeneric, o=Foo)
    def mygeneric_for_foo(app, o):
        return "The foo object: %s" % o

    a = app()

    assert a.mygeneric("blah") == "The object: blah"
    assert a.mygeneric(Foo(1)) == "The foo object: <Foo with value: 1>"


def test_classgeneric_function_directive():
    class app(morepath.App):
        @morepath.dispatch_method(reg.match_class("o"))
        def mygeneric(self, o):
            return "The object"

    class Foo(object):
        pass

    @app.method(app.mygeneric, o=Foo)
    def mygeneric_for_foo(app, o):
        return "The foo object"

    a = app()

    assert a.mygeneric(object) == "The object"
    assert a.mygeneric(Foo) == "The foo object"
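
# View registration also works for functions wrapped in staticmethod or
# classmethod; the next two tests assert that both variants behave the same
# for the registered view, and that a classmethod can be registered after the
# fact by applying the directive to the bound method.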
def test_staticmethod():
    class App(morepath.App):
        pass

    @App.path("/")
    class Root(object):
        pass

    class A(object):
        @staticmethod
        @App.view(model=Root)
        def root_default(self, request):
            assert isinstance(self, Root)
            return "Hello world"

    c = Client(App())

    response = c.get("/")
    assert response.body == b"Hello world"


def test_classmethod_equivalent_to_staticmethod():
    class App(morepath.App):
        pass

    @App.path("/")
    class Root(object):
        pass

    class A(object):
        @classmethod
        @App.view(model=Root)
        def root_default(self, request):
            assert isinstance(self, Root)
            return "Hello world"

    c = Client(App())

    response = c.get("/")
    assert response.body == b"Hello world"


def test_classmethod_bound_outside():
    class App(morepath.App):
        pass

    @App.path("/")
    class Root(object):
        pass

    class A(object):
        @classmethod
        def root_default(cls, self, request):
            assert isinstance(self, Root)
            return "Hello world"

    App.view(model=Root)(A.root_default)

    c = Client(App())

    response = c.get("/")
    assert response.body == b"Hello world"


def test_instantiation_before_config():
    class App(morepath.App):
        pass

    # Typically, instantiating App would be done later, after the
    # decorators. Since this use case has been found in the wild, we
    # might as well make sure it works:
    app = App()

    @App.path(path="")
    class Hello(object):
        pass

    @App.view(model=Hello)
    def hello_view(self, request):
        return "hello"

    c = Client(app)

    response = c.get("/")
    assert response.body == b"hello"
| 21.105093 | 79 | 0.587084 | 3,385 | 26,107 | 4.423338 | 0.060857 | 0.02805 | 0.067321 | 0.071061 | 0.788419 | 0.749549 | 0.734389 | 0.721766 | 0.680825 | 0.647432 | 0 | 0.001901 | 0.274524 | 26,107 | 1,236 | 80 | 21.122168 | 0.788648 | 0.023174 | 0 | 0.71816 | 0 | 0 | 0.088509 | 0 | 0 | 0 | 0 | 0 | 0.086085 | 1 | 0.194575 | false | 0.127358 | 0.011792 | 0.079009 | 0.431604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
fb7810c86a4c1abe39e465a8de48615bc8895b39 | 147 | py | Python | order.py | saykent/gitwork | 5dc0734137b617428f5a8ee25ceb826b9c5cd2b4 | [
"Apache-2.0"
] | null | null | null | order.py | saykent/gitwork | 5dc0734137b617428f5a8ee25ceb826b9c5cd2b4 | [
"Apache-2.0"
] | null | null | null | order.py | saykent/gitwork | 5dc0734137b617428f5a8ee25ceb826b9c5cd2b4 | [
"Apache-2.0"
] | null | null | null | def order_list():
    pass


def order_details():
    pass


def add_order():
    pass


def update_order():
    pass


def delete_order():
    pass
| 8.647059 | 20 | 0.619048 | 20 | 147 | 4.3 | 0.4 | 0.325581 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.278912 | 147 | 16 | 21 | 9.1875 | 0.811321 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
fb9640883de6f5ac8c553518167ea8c4e733e172 | 376 | py | Python | Week 1 In-out, int and bool/Task08-09.py | retverd/python_hse | cb9bfb092c1cf68ae0c53b9919ca24a71a8cbf88 | [
"MIT"
] | null | null | null | Week 1 In-out, int and bool/Task08-09.py | retverd/python_hse | cb9bfb092c1cf68ae0c53b9919ca24a71a8cbf88 | [
"MIT"
] | null | null | null | Week 1 In-out, int and bool/Task08-09.py | retverd/python_hse | cb9bfb092c1cf68ae0c53b9919ca24a71a8cbf88 | [
"MIT"
] | null | null | null | # #8 60 squirrels had 38746298762973632324233242 nuts and decided to divide
# them equally. How many nuts did each squirrel get? Submit the answer as an
# integer.
# #9 The clock showed midnight. 38746298762973632324233242 minutes have passed.
# How many whole hours went by? Submit the answer as an integer.
# Both tasks reduce to the same integer division by 60 (60 squirrels,
# 60 minutes per hour), so one expression yields both answers:
print(38746298762973632324233242 // 60)
| 47 | 117 | 0.816489 | 45 | 376 | 6.822222 | 0.711111 | 0.058632 | 0.09772 | 0.162866 | 0.260586 | 0.260586 | 0.260586 | 0 | 0 | 0 | 0 | 0.258462 | 0.135638 | 376 | 7 | 118 | 53.714286 | 0.686154 | 0.859043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fba1a8f2931bbbd48e8964dee3d1819709bf50ec | 85 | py | Python | sympycore/algebras/__init__.py | radovankavicky/pymaclab | 21da758f64ed0b62969c9289576f677e977cfd98 | [
"Apache-2.0"
] | 96 | 2015-01-25T05:59:56.000Z | 2021-12-29T14:05:22.000Z | sympycore/algebras/__init__.py | 1zinnur9/pymaclab | 21da758f64ed0b62969c9289576f677e977cfd98 | [
"Apache-2.0"
] | 3 | 2015-12-17T19:25:46.000Z | 2018-06-19T07:05:20.000Z | sympycore/algebras/__init__.py | 1zinnur9/pymaclab | 21da758f64ed0b62969c9289576f677e977cfd98 | [
"Apache-2.0"
] | 36 | 2016-01-31T15:22:01.000Z | 2021-03-29T07:03:07.000Z |
from .groups import Group, AdditiveGroup, AdditiveAbelianGroup, MultiplicativeGroup
| 28.333333 | 83 | 0.858824 | 7 | 85 | 10.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094118 | 85 | 2 | 84 | 42.5 | 0.948052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |