# File: MIND_dataset.py | repo: whonor/NNR | license: MIT
from MIND_corpus import MIND_Corpus
import time
from config import Config
import torch.utils.data as data
from numpy.random import randint
from torch.utils.data import DataLoader

class MIND_Train_Dataset(data.Dataset):
    def __init__(self, corpus: MIND_Corpus):
        self.negative_sample_num = corpus.negative_sample_num
        self.news_category = corpus.news_category
        self.news_subCategory = corpus.news_subCategory
        self.news_title_text = corpus.news_title_text
        self.news_title_mask = corpus.news_title_mask
        self.news_title_entity = corpus.news_title_entity
        self.news_abstract_text = corpus.news_abstract_text
        self.news_abstract_mask = corpus.news_abstract_mask
        self.news_abstract_entity = corpus.news_abstract_entity
        self.user_history_graph = corpus.train_user_history_graph
        self.user_history_category_mask = corpus.train_user_history_category_mask
        self.user_history_category_indices = corpus.train_user_history_category_indices
        self.train_behaviors = corpus.train_behaviors
        self.train_samples = [[0 for _ in range(1 + self.negative_sample_num)] for __ in range(len(self.train_behaviors))]
        self.num = len(self.train_behaviors)

    def negative_sampling(self, rank=None):
        print('\n%sBegin negative sampling, training sample num : %d' % ('' if rank is None else ('rank ' + str(rank) + ' : '), self.num))
        start_time = time.time()
        for i, train_behavior in enumerate(self.train_behaviors):
            self.train_samples[i][0] = train_behavior[3]
            negative_samples = train_behavior[4]
            news_num = len(negative_samples)
            if news_num <= self.negative_sample_num:
                # Fewer negatives than requested: reuse them cyclically.
                for j in range(self.negative_sample_num):
                    self.train_samples[i][j + 1] = negative_samples[j % news_num]
            else:
                # Enough negatives: rejection-sample without replacement.
                used_negative_samples = set()
                for j in range(self.negative_sample_num):
                    while True:
                        k = randint(0, news_num)
                        if k not in used_negative_samples:
                            self.train_samples[i][j + 1] = negative_samples[k]
                            used_negative_samples.add(k)
                            break
        end_time = time.time()
        print('%sEnd negative sampling, used time : %.3fs' % ('' if rank is None else ('rank ' + str(rank) + ' : '), end_time - start_time))
    # user_ID : [1]
    # user_category : [max_history_num]
    # user_subCategory : [max_history_num]
    # user_title_text : [max_history_num, max_title_length]
    # user_title_mask : [max_history_num, max_title_length]
    # user_title_entity : [max_history_num, max_title_length]
    # user_abstract_text : [max_history_num, max_abstract_length]
    # user_abstract_mask : [max_history_num, max_abstract_length]
    # user_abstract_entity : [max_history_num, max_abstract_length]
    # user_history_mask : [max_history_num]
    # user_history_graph : [max_history_num, max_history_num]
    # user_history_category_mask : [category_num + 1]
    # user_history_category_indices : [max_history_num]
    # news_category : [1 + negative_sample_num]
    # news_subCategory : [1 + negative_sample_num]
    # news_title_text : [1 + negative_sample_num, max_title_length]
    # news_title_mask : [1 + negative_sample_num, max_title_length]
    # news_title_entity : [1 + negative_sample_num, max_title_length]
    # news_abstract_text : [1 + negative_sample_num, max_abstract_length]
    # news_abstract_mask : [1 + negative_sample_num, max_abstract_length]
    # news_abstract_entity : [1 + negative_sample_num, max_abstract_length]
    def __getitem__(self, index):
        train_behavior = self.train_behaviors[index]
        history_index = train_behavior[1]
        sample_index = self.train_samples[index]
        behavior_index = train_behavior[5]
        return train_behavior[0], self.news_category[history_index], self.news_subCategory[history_index], self.news_title_text[history_index], self.news_title_mask[history_index], self.news_title_entity[history_index], self.news_abstract_text[history_index], self.news_abstract_mask[history_index], self.news_abstract_entity[history_index], train_behavior[2], self.user_history_graph[behavior_index], self.user_history_category_mask[behavior_index], self.user_history_category_indices[behavior_index], \
            self.news_category[sample_index], self.news_subCategory[sample_index], self.news_title_text[sample_index], self.news_title_mask[sample_index], self.news_title_entity[sample_index], self.news_abstract_text[sample_index], self.news_abstract_mask[sample_index], self.news_abstract_entity[sample_index]

    def __len__(self):
        return self.num

class MIND_DevTest_Dataset(data.Dataset):
    def __init__(self, corpus: MIND_Corpus, mode: str):
        assert mode in ['dev', 'test'], 'mode must be chosen from \'dev\' or \'test\''
        self.news_category = corpus.news_category
        self.news_subCategory = corpus.news_subCategory
        self.news_title_text = corpus.news_title_text
        self.news_title_mask = corpus.news_title_mask
        self.news_title_entity = corpus.news_title_entity
        self.news_abstract_text = corpus.news_abstract_text
        self.news_abstract_mask = corpus.news_abstract_mask
        self.news_abstract_entity = corpus.news_abstract_entity
        self.user_history_graph = corpus.dev_user_history_graph if mode == 'dev' else corpus.test_user_history_graph
        self.user_history_category_mask = corpus.dev_user_history_category_mask if mode == 'dev' else corpus.test_user_history_category_mask
        self.user_history_category_indices = corpus.dev_user_history_category_indices if mode == 'dev' else corpus.test_user_history_category_indices
        self.behaviors = corpus.dev_behaviors if mode == 'dev' else corpus.test_behaviors
        self.num = len(self.behaviors)

    # user_ID : [1]
    # user_category : [max_history_num]
    # user_subCategory : [max_history_num]
    # user_title_text : [max_history_num, max_title_length]
    # user_title_mask : [max_history_num, max_title_length]
    # user_title_entity : [max_history_num, max_title_length]
    # user_abstract_text : [max_history_num, max_abstract_length]
    # user_abstract_mask : [max_history_num, max_abstract_length]
    # user_abstract_entity : [max_history_num, max_abstract_length]
    # user_history_mask : [max_history_num]
    # user_history_graph : [max_history_num, max_history_num]
    # user_history_category_mask : [category_num + 1]
    # user_history_category_indices : [max_history_num]
    # candidate_news_category : [1]
    # candidate_news_subCategory : [1]
    # candidate_news_title_text : [max_title_length]
    # candidate_news_title_mask : [max_title_length]
    # candidate_news_title_entity : [max_title_length]
    # candidate_news_abstract_text : [max_abstract_length]
    # candidate_news_abstract_mask : [max_abstract_length]
    # candidate_news_abstract_entity : [max_abstract_length]
    def __getitem__(self, index):
        behavior = self.behaviors[index]
        history_index = behavior[1]
        candidate_news_index = behavior[3]
        behavior_index = behavior[4]
        return behavior[0], self.news_category[history_index], self.news_subCategory[history_index], self.news_title_text[history_index], self.news_title_mask[history_index], self.news_title_entity[history_index], self.news_abstract_text[history_index], self.news_abstract_mask[history_index], self.news_abstract_entity[history_index], behavior[2], self.user_history_graph[behavior_index], self.user_history_category_mask[behavior_index], self.user_history_category_indices[behavior_index], \
            self.news_category[candidate_news_index], self.news_subCategory[candidate_news_index], self.news_title_text[candidate_news_index], self.news_title_mask[candidate_news_index], self.news_title_entity[candidate_news_index], self.news_abstract_text[candidate_news_index], self.news_abstract_mask[candidate_news_index], self.news_abstract_entity[candidate_news_index]

    def __len__(self):
        return self.num

if __name__ == '__main__':
    start_time = time.time()
    config = Config()
    mind_corpus = MIND_Corpus(config)
    print('user_num :', len(mind_corpus.user_ID_dict))
    print('news_num :', len(mind_corpus.news_title_text))
    print('average title word num :', mind_corpus.title_word_num / mind_corpus.news_num)
    print('average abstract word num :', mind_corpus.abstract_word_num / mind_corpus.news_num)
    mind_train_dataset = MIND_Train_Dataset(mind_corpus)
    mind_dev_dataset = MIND_DevTest_Dataset(mind_corpus, 'dev')
    mind_test_dataset = MIND_DevTest_Dataset(mind_corpus, 'test')
    mind_train_dataset.negative_sampling()
    end_time = time.time()
    print('load time : %.3fs' % (end_time - start_time))

    print('MIND_Train_Dataset :', len(mind_train_dataset))
    train_dataloader = DataLoader(mind_train_dataset, batch_size=config.batch_size, shuffle=True, num_workers=config.batch_size // 16)
    for (user_ID, user_category, user_subCategory, user_title_text, user_title_mask, user_title_entity, user_abstract_text, user_abstract_mask, user_abstract_entity, user_history_mask, user_history_graph, user_history_category_mask, user_history_category_indices,
         news_category, news_subCategory, news_title_text, news_title_mask, news_title_entity, news_abstract_text, news_abstract_mask, news_abstract_entity) in train_dataloader:
        print('user_ID', user_ID.size(), user_ID.dtype)
        print('user_category', user_category.size(), user_category.dtype)
        print('user_subCategory', user_subCategory.size(), user_subCategory.dtype)
        print('user_title_text', user_title_text.size(), user_title_text.dtype)
        print('user_title_mask', user_title_mask.size(), user_title_mask.dtype)
        print('user_title_entity', user_title_entity.size(), user_title_entity.dtype)
        print('user_abstract_text', user_abstract_text.size(), user_abstract_text.dtype)
        print('user_abstract_mask', user_abstract_mask.size(), user_abstract_mask.dtype)
        print('user_abstract_entity', user_abstract_entity.size(), user_abstract_entity.dtype)
        print('user_history_mask', user_history_mask.size(), user_history_mask.dtype)
        print('user_history_graph', user_history_graph.size(), user_history_graph.dtype)
        print('user_history_category_mask', user_history_category_mask.size(), user_history_category_mask.dtype)
        print('user_history_category_indices', user_history_category_indices.size(), user_history_category_indices.dtype)
        print('news_category', news_category.size(), news_category.dtype)
        print('news_subCategory', news_subCategory.size(), news_subCategory.dtype)
        print('news_title_text', news_title_text.size(), news_title_text.dtype)
        print('news_title_mask', news_title_mask.size(), news_title_mask.dtype)
        print('news_title_entity', news_title_entity.size(), news_title_entity.dtype)
        print('news_abstract_text', news_abstract_text.size(), news_abstract_text.dtype)
        print('news_abstract_mask', news_abstract_mask.size(), news_abstract_mask.dtype)
        print('news_abstract_entity', news_abstract_entity.size(), news_abstract_entity.dtype)
        break

    print('MIND_Dev_Dataset :', len(mind_dev_dataset))
    dev_dataloader = DataLoader(mind_dev_dataset, batch_size=config.batch_size, shuffle=False, num_workers=config.batch_size // 16)
    for (user_ID, user_category, user_subCategory, user_title_text, user_title_mask, user_title_entity, user_abstract_text, user_abstract_mask, user_abstract_entity, user_history_mask, user_history_graph, user_history_category_mask, user_history_category_indices,
         news_category, news_subCategory, news_title_text, news_title_mask, news_title_entity, news_abstract_text, news_abstract_mask, news_abstract_entity) in dev_dataloader:
        print('user_ID', user_ID.size(), user_ID.dtype)
        print('user_category', user_category.size(), user_category.dtype)
        print('user_subCategory', user_subCategory.size(), user_subCategory.dtype)
        print('user_title_text', user_title_text.size(), user_title_text.dtype)
        print('user_title_mask', user_title_mask.size(), user_title_mask.dtype)
        print('user_title_entity', user_title_entity.size(), user_title_entity.dtype)
        print('user_abstract_text', user_abstract_text.size(), user_abstract_text.dtype)
        print('user_abstract_mask', user_abstract_mask.size(), user_abstract_mask.dtype)
        print('user_abstract_entity', user_abstract_entity.size(), user_abstract_entity.dtype)
        print('user_history_mask', user_history_mask.size(), user_history_mask.dtype)
        print('user_history_graph', user_history_graph.size(), user_history_graph.dtype)
        print('user_history_category_mask', user_history_category_mask.size(), user_history_category_mask.dtype)
        print('user_history_category_indices', user_history_category_indices.size(), user_history_category_indices.dtype)
        print('news_category', news_category.size(), news_category.dtype)
        print('news_subCategory', news_subCategory.size(), news_subCategory.dtype)
        print('news_title_text', news_title_text.size(), news_title_text.dtype)
        print('news_title_mask', news_title_mask.size(), news_title_mask.dtype)
        print('news_title_entity', news_title_entity.size(), news_title_entity.dtype)
        print('news_abstract_text', news_abstract_text.size(), news_abstract_text.dtype)
        print('news_abstract_mask', news_abstract_mask.size(), news_abstract_mask.dtype)
        print('news_abstract_entity', news_abstract_entity.size(), news_abstract_entity.dtype)
        break
    print(len(mind_corpus.dev_indices))

    print('MIND_Test_Dataset :', len(mind_test_dataset))
    test_dataloader = DataLoader(mind_test_dataset, batch_size=config.batch_size, shuffle=False, num_workers=config.batch_size // 16)
    for (user_ID, user_category, user_subCategory, user_title_text, user_title_mask, user_title_entity, user_abstract_text, user_abstract_mask, user_abstract_entity, user_history_mask, user_history_graph, user_history_category_mask, user_history_category_indices,
         news_category, news_subCategory, news_title_text, news_title_mask, news_title_entity, news_abstract_text, news_abstract_mask, news_abstract_entity) in test_dataloader:
        print('user_ID', user_ID.size(), user_ID.dtype)
        print('user_category', user_category.size(), user_category.dtype)
        print('user_subCategory', user_subCategory.size(), user_subCategory.dtype)
        print('user_title_text', user_title_text.size(), user_title_text.dtype)
        print('user_title_mask', user_title_mask.size(), user_title_mask.dtype)
        print('user_title_entity', user_title_entity.size(), user_title_entity.dtype)
        print('user_abstract_text', user_abstract_text.size(), user_abstract_text.dtype)
        print('user_abstract_mask', user_abstract_mask.size(), user_abstract_mask.dtype)
        print('user_abstract_entity', user_abstract_entity.size(), user_abstract_entity.dtype)
        print('user_history_mask', user_history_mask.size(), user_history_mask.dtype)
        print('user_history_graph', user_history_graph.size(), user_history_graph.dtype)
        print('user_history_category_mask', user_history_category_mask.size(), user_history_category_mask.dtype)
        print('user_history_category_indices', user_history_category_indices.size(), user_history_category_indices.dtype)
        print('news_category', news_category.size(), news_category.dtype)
        print('news_subCategory', news_subCategory.size(), news_subCategory.dtype)
        print('news_title_text', news_title_text.size(), news_title_text.dtype)
        print('news_title_mask', news_title_mask.size(), news_title_mask.dtype)
        print('news_title_entity', news_title_entity.size(), news_title_entity.dtype)
        print('news_abstract_text', news_abstract_text.size(), news_abstract_text.dtype)
        print('news_abstract_mask', news_abstract_mask.size(), news_abstract_mask.dtype)
        print('news_abstract_entity', news_abstract_entity.size(), news_abstract_entity.dtype)
        break
    print(len(mind_corpus.test_indices))
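The cyclic-reuse / rejection-sampling scheme inside `negative_sampling` above can be reproduced in isolation. This is an illustrative sketch, not part of the original module: the name `sample_negatives` is invented, and the stdlib `random.randrange(n)` stands in for `numpy.random.randint(0, n)`, which draws the same uniform integer.

```python
import random

def sample_negatives(negatives, k):
    """Pick k items from `negatives`: cycle with repetition when the pool
    is too small, otherwise rejection-sample without replacement
    (mirrors the two branches of MIND_Train_Dataset.negative_sampling)."""
    n = len(negatives)
    if n <= k:
        # Pool smaller than the request: reuse items round-robin,
        # exactly as the j % news_num indexing does above.
        return [negatives[j % n] for j in range(k)]
    used = set()
    out = []
    while len(out) < k:
        i = random.randrange(n)  # uniform index in [0, n)
        if i not in used:
            used.add(i)
            out.append(negatives[i])
    return out

print(sample_negatives([10, 20, 30], 5))  # [10, 20, 30, 10, 20]
```

The rejection loop is cheap here because `k` (the negative sample count) is small relative to the candidate pool; for `k` close to `n`, a shuffle-and-slice would be the safer choice.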
# File: tests/model/test_group_preserving_fragment.py | repo: KyleVaughn/mocmg | license: MIT
"""Test the group_preserving_fragment function."""
import os
import sys
from unittest import TestCase
import gmsh
import mocmg
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from testing_utils import captured_output
# Honestly, just use gmsh.fltk.run() to debug.
ref_out = [
    "INFO : mocmg.model.group_preserving_fragment - Fragmenting 2 entities",
    "INFO : mocmg.model.group_preserving_fragment - Synchronizing model",
]
bad_overwrite = [
    "ERROR : mocmg.model.group_preserving_fragment - Material to be overwritten"
    + " is not in the physical groups of the entities being fragmented."
]

# 2 intersecting rectangles
groups_2_rectangles = {
    "Group 1": [1, 2],
    "Group 2": [2, 3],
    "All": [1, 2, 3],
}
groups_2_rectangles_w_materials = {
    "Material_1": [1],
    "Material_2": [2, 3],
    "All": [1, 2, 3],
}
centroids_2_rectangles = {
    1: (0.83333333, 0.83333333, 0.0),
    2: (1.5, 1.5, 0.0),
    3: (2.16666666, 2.16666666, 0.0),
}
areas_2_rectangles = {
    1: 3.0,
    2: 1.0,
    3: 3.0,
}

# 2 intersecting rectangles, 1 other rectangle off to the side.
groups_3_rectangles_1_not_in_frag = {
    "Group 1": [4, 5],
    "Group 2": [5, 6],
    "Group 3": [3],
    "All": [3, 4, 5, 6],
}
centroids_3_rectangles_1_not_in_frag = {
    3: (5.0, 5.0, 0.0),
    4: (0.83333333, 0.83333333, 0.0),
    5: (1.5, 1.5, 0.0),
    6: (2.16666666, 2.16666666, 0.0),
}
areas_3_rectangles_1_not_in_frag = {
    3: 4.0,
    4: 3.0,
    5: 1.0,
    6: 3.0,
}

class TestGroupPreservingFragment(TestCase):
    """Test the group_preserving_fragment function."""

    def test_2_rectangles(self):
        """Test a 2 rectangle case."""
        ref_groups = groups_2_rectangles
        ref_centroids = centroids_2_rectangles
        ref_areas = areas_2_rectangles
        # Setup
        with captured_output() as (out, err):
            mocmg.initialize()
            gmsh.initialize()
            gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.addRectangle(1.0, 1.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.synchronize()
            gmsh.model.addPhysicalGroup(2, [1])
            gmsh.model.addPhysicalGroup(2, [2])
            gmsh.model.addPhysicalGroup(2, [1, 2])
            gmsh.model.setPhysicalName(2, 1, "Group 1")
            gmsh.model.setPhysicalName(2, 2, "Group 2")
            gmsh.model.setPhysicalName(2, 3, "All")
            mocmg.model.group_preserving_fragment([(2, 1)], [(2, 2)])
        out, err = out.getvalue().splitlines(), err.getvalue().splitlines()
        out = [line.split(None, 1)[1] for line in out]
        err = [line.split(None, 1)[1] for line in err]  # strip times
        self.assertEqual(out, ref_out)
        self.assertEqual(err, [])
        # Get info
        group_nums = gmsh.model.getPhysicalGroups()
        names = [gmsh.model.getPhysicalName(*grp) for grp in group_nums]
        ref_names = list(ref_groups.keys())
        # Check correct group names/entities
        for i, name in enumerate(names):
            self.assertEqual(name, ref_names[i])
            index = names.index(name)
            group_ents = list(gmsh.model.getEntitiesForPhysicalGroup(*group_nums[index]))
            ref_group_ents = ref_groups[name]
            self.assertEqual(group_ents, ref_group_ents)
        # Check correct area/centroid
        for ent in gmsh.model.getEntities(2):
            tag = ent[1]
            mass = gmsh.model.occ.getMass(2, tag)
            self.assertAlmostEqual(ref_areas[tag], mass, places=5)
            x, y, z = gmsh.model.occ.getCenterOfMass(2, tag)
            centroid = (x, y, z)
            for i in range(3):
                self.assertAlmostEqual(centroid[i], ref_centroids[tag][i])
        # Clean up
        gmsh.clear()
        gmsh.finalize()
    def test_2_rectangles_no_group(self):
        """Test for a 2 rectangle case, where 1 has no groups."""
        ref_groups = groups_2_rectangles
        ref_centroids = centroids_2_rectangles
        ref_areas = areas_2_rectangles
        # Setup
        with captured_output() as (out, err):
            mocmg.initialize()
            gmsh.initialize()
            gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.addRectangle(1.0, 1.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.synchronize()
            gmsh.model.addPhysicalGroup(2, [1])
            gmsh.model.setPhysicalName(2, 1, "Group 1")
            mocmg.model.group_preserving_fragment([(2, 1)], [(2, 2)])
        out, err = out.getvalue().splitlines(), err.getvalue().splitlines()
        out = [line.split(None, 1)[1] for line in out]
        err = [line.split(None, 1)[1] for line in err]  # strip times
        self.assertEqual(out, ref_out)
        self.assertEqual(err, [])
        # Get info
        group_nums = gmsh.model.getPhysicalGroups()
        names = [gmsh.model.getPhysicalName(*grp) for grp in group_nums]
        ref_names = list(ref_groups.keys())
        # Check correct group names/entities
        for i, name in enumerate(names):
            self.assertEqual(name, ref_names[i])
            index = names.index(name)
            group_ents = list(gmsh.model.getEntitiesForPhysicalGroup(*group_nums[index]))
            ref_group_ents = ref_groups[name]
            self.assertEqual(group_ents, ref_group_ents)
        # Check correct area/centroid
        for ent in gmsh.model.getEntities(2):
            tag = ent[1]
            mass = gmsh.model.occ.getMass(2, tag)
            self.assertAlmostEqual(ref_areas[tag], mass, places=5)
            x, y, z = gmsh.model.occ.getCenterOfMass(2, tag)
            centroid = (x, y, z)
            for i in range(3):
                self.assertAlmostEqual(centroid[i], ref_centroids[tag][i])
        # Clean up
        gmsh.clear()
        gmsh.finalize()
    def test_3_rectangles_1_not_in_frag(self):
        """Test 2 rectangles in frag, one off to the side."""
        ref_groups = groups_3_rectangles_1_not_in_frag
        ref_centroids = centroids_3_rectangles_1_not_in_frag
        ref_areas = areas_3_rectangles_1_not_in_frag
        # Setup
        with captured_output() as (out, err):
            mocmg.initialize()
            gmsh.initialize()
            gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.addRectangle(1.0, 1.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.addRectangle(4.0, 4.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.synchronize()
            gmsh.model.addPhysicalGroup(2, [1])
            gmsh.model.addPhysicalGroup(2, [2])
            gmsh.model.addPhysicalGroup(2, [3])
            gmsh.model.addPhysicalGroup(2, [1, 2, 3])
            gmsh.model.setPhysicalName(2, 1, "Group 1")
            gmsh.model.setPhysicalName(2, 2, "Group 2")
            gmsh.model.setPhysicalName(2, 3, "Group 3")
            gmsh.model.setPhysicalName(2, 4, "All")
            mocmg.model.group_preserving_fragment([(2, 1)], [(2, 2)])
        out, err = out.getvalue().splitlines(), err.getvalue().splitlines()
        out = [line.split(None, 1)[1] for line in out]
        err = [line.split(None, 1)[1] for line in err]  # strip times
        self.assertEqual(out, ref_out)
        self.assertEqual(err, [])
        # Get info
        group_nums = gmsh.model.getPhysicalGroups()
        names = [gmsh.model.getPhysicalName(*grp) for grp in group_nums]
        ref_names = list(ref_groups.keys())
        # Check correct group names/entities
        for i, name in enumerate(names):
            self.assertEqual(name, ref_names[i])
            index = names.index(name)
            group_ents = list(gmsh.model.getEntitiesForPhysicalGroup(*group_nums[index]))
            ref_group_ents = ref_groups[name]
            self.assertEqual(group_ents, ref_group_ents)
        # Check correct area/centroid
        for ent in gmsh.model.getEntities(2):
            tag = ent[1]
            mass = gmsh.model.occ.getMass(2, tag)
            self.assertAlmostEqual(ref_areas[tag], mass, places=5)
            x, y, z = gmsh.model.occ.getCenterOfMass(2, tag)
            centroid = (x, y, z)
            for i in range(3):
                self.assertAlmostEqual(centroid[i], ref_centroids[tag][i])
        # Clean up
        gmsh.clear()
        gmsh.finalize()
    def test_2_rectangles_with_overwrite(self):
        """Test a 2 rectangle case, overwriting a material."""
        ref_groups = groups_2_rectangles_w_materials
        ref_centroids = centroids_2_rectangles
        ref_areas = areas_2_rectangles
        # Setup
        with captured_output() as (out, err):
            mocmg.initialize()
            gmsh.initialize()
            gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.addRectangle(1.0, 1.0, 0.0, 2.0, 2.0)
            gmsh.model.occ.synchronize()
            gmsh.model.addPhysicalGroup(2, [1])
            gmsh.model.addPhysicalGroup(2, [2])
            gmsh.model.addPhysicalGroup(2, [1, 2])
            gmsh.model.setPhysicalName(2, 1, "Material_1")
            gmsh.model.setPhysicalName(2, 2, "Material_2")
            gmsh.model.setPhysicalName(2, 3, "All")
            mocmg.model.group_preserving_fragment(
                [(2, 1)], [(2, 2)], overwrite_material="Material_1"
            )
        out, err = out.getvalue().splitlines(), err.getvalue().splitlines()
        out = [line.split(None, 1)[1] for line in out]
        err = [line.split(None, 1)[1] for line in err]  # strip times
        self.assertEqual(out, ref_out)
        self.assertEqual(err, [])
        # Get info
        group_nums = gmsh.model.getPhysicalGroups()
        names = [gmsh.model.getPhysicalName(*grp) for grp in group_nums]
        ref_names = list(ref_groups.keys())
        # Check correct group names/entities
        for i, name in enumerate(names):
            self.assertEqual(name, ref_names[i])
            index = names.index(name)
            group_ents = list(gmsh.model.getEntitiesForPhysicalGroup(*group_nums[index]))
            ref_group_ents = ref_groups[name]
            self.assertEqual(group_ents, ref_group_ents)
        # Check correct area/centroid
        for ent in gmsh.model.getEntities(2):
            tag = ent[1]
            mass = gmsh.model.occ.getMass(2, tag)
            self.assertAlmostEqual(ref_areas[tag], mass, places=5)
            x, y, z = gmsh.model.occ.getCenterOfMass(2, tag)
            centroid = (x, y, z)
            for i in range(3):
                self.assertAlmostEqual(centroid[i], ref_centroids[tag][i])
        # Clean up
        gmsh.clear()
        gmsh.finalize()
    def test_2_rectangles_with_bad_overwrite(self):
        """Test a 2 rectangle case, overwriting a material that doesn't exist."""
        with self.assertRaises(SystemExit):
            with captured_output() as (out, err):
                mocmg.initialize()
                gmsh.initialize()
                gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 2.0)
                gmsh.model.occ.addRectangle(1.0, 1.0, 0.0, 2.0, 2.0)
                gmsh.model.occ.synchronize()
                gmsh.model.addPhysicalGroup(2, [1])
                gmsh.model.addPhysicalGroup(2, [2])
                gmsh.model.addPhysicalGroup(2, [1, 2])
                gmsh.model.setPhysicalName(2, 1, "Material_1")
                gmsh.model.setPhysicalName(2, 2, "Material_2")
                gmsh.model.setPhysicalName(2, 3, "All")
                mocmg.model.group_preserving_fragment(
                    [(2, 1)], [(2, 2)], overwrite_material="BAD_MAT"
                )
        gmsh.clear()
        gmsh.finalize()
        out, err = out.getvalue().splitlines(), err.getvalue().splitlines()
        out = [line.split(None, 1)[1] for line in out]
        err = [line.split(None, 1)[1] for line in [err[0]]]  # strip times
        self.assertEqual(out, ref_out)
        self.assertEqual(err, bad_overwrite)
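The tests above rely on a `captured_output` helper imported from `testing_utils`. A minimal version of such a helper can be written with a context manager that swaps the process-wide stream objects; this is an illustrative sketch of the pattern, not the project's actual implementation.

```python
import sys
from contextlib import contextmanager
from io import StringIO

@contextmanager
def captured_output():
    """Temporarily replace sys.stdout/sys.stderr with StringIO buffers
    so a test can assert on everything printed inside the block."""
    new_out, new_err = StringIO(), StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = new_out, new_err
        yield new_out, new_err
    finally:
        # Always restore the real streams, even if the block raises.
        sys.stdout, sys.stderr = old_out, old_err

with captured_output() as (out, err):
    print("hello")
print(repr(out.getvalue()))  # 'hello\n'
```

Restoring the streams in a `finally` clause is what lets the bad-overwrite test above keep capturing output even though the code under test raises `SystemExit`.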
# File: zpl2/__init__.py | repo: twam/pyzpl2 | license: MIT
from .zpl2 import *
# --- app/main/errors.py (repo: prisconapoli/m3rcury, license: MIT) ---
from flask import render_template
from . import main


@main.app_errorhandler(400)
def bad_request(e):
    return render_template('400.html'), 400


@main.app_errorhandler(404)
def page_not_found(e):
    return render_template('404.html'), 404


@main.app_errorhandler(500)
def internal_server_error(e):
    return render_template('500.html'), 500
# --- gym_sokoban/envs/__init__.py (repo: TinkTheBoush/gym-sokoban, license: MIT) ---
from gym_sokoban.envs.sokoban_env import SokobanEnv, ACTION_LOOKUP, CHANGE_COORDINATES
from gym_sokoban.envs import room_utils
from gym_sokoban.envs.sokoban_env_variations import *
# --- polliwog/plane/test_coordinate_planes.py (repo: algrs/polliwog, license: BSD-2-Clause) ---
import numpy as np
import vg

from .coordinate_planes import coordinate_planes


def test_constants():
    np.testing.assert_array_equal(coordinate_planes.xy.normal, vg.basis.z)
    np.testing.assert_array_equal(coordinate_planes.xz.normal, vg.basis.y)
    np.testing.assert_array_equal(coordinate_planes.yz.normal, vg.basis.x)
# --- code/sample_4-3-16.py (repo: KoyanagiHitoshi/AtCoder-Python-Introduction, license: MIT) ---
x = ["a", "b", "b", "c", "c", "c"]
print(x.count("c"))
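As an added companion sketch (not part of the original AtCoder sample), `list.count` tallies each distinct value, so the sample's `x.count("c")` returns 3:

```python
x = ["a", "b", "b", "c", "c", "c"]

# Count each distinct element once, in sorted order
for v in sorted(set(x)):
    print(v, x.count(v))
# a 1
# b 2
# c 3
```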
# --- terrascript/influxdb/__init__.py (repo: GarnerCorp/python-terrascript, license: BSD-2-Clause) ---
"""2019-05-28 10:49:50"""
# --- yolox/ops/pytorch/convex/__init__.py (repo: DDGRCF/YOLOX_OBB, license: Apache-2.0) ---
from .convex_wrapper import convex_sort
# --- tests/test_sum.py (repo: braunmagrin/python-docker-base, license: MIT) ---
import pytest
import fix_tests_pypath
import main


def test_sum():
    assert main.sum(0, 0) == 0
    assert main.sum(0, 1) == 1
    assert main.sum(1, 0) == 1
    assert main.sum(1, 1) == 2
# --- tests/test_scripts.py (repo: rhanka/addok, license: WTFPL) ---
from addok.helpers import scripts
def test_manual_scan(factory):
    factory(name="rue de la monnaie", city="Vitry")
    factory(name="La monnaye", city="Saint-Loup-Cammas")
    street1 = factory(name="rue de la monnaie", city="Paris", importance=1)
    street2 = factory(name="rue de la monnaie", city="Condom", importance=0.9)
    results = scripts.manual_scan(keys=['w|monnaie', 'w|rue', 'w|de'],
                                  args=[2])
    assert results == ['d|{}'.format(street1['_id']).encode(),
                       'd|{}'.format(street2['_id']).encode()]


def test_manual_scan_with_filter(factory):
    vitry = factory(name="Vitry", type="city")
    factory(name="La monnaye", city="Saint-Loup-Cammas")
    street1 = factory(name="rue de la monnaie", city="Paris", importance=1)
    street2 = factory(name="rue de la monnaie", city="Condom", importance=0.9)
    results = scripts.manual_scan(keys=['w|rue', 'w|de', 'f|type|street'],
                                  args=[2])
    assert results == ['d|{}'.format(street1['_id']).encode(),
                       'd|{}'.format(street2['_id']).encode()]
    results = scripts.manual_scan(keys=['w|rue', 'w|de', 'f|type|whatever'],
                                  args=[2])
    assert results == []
    results = scripts.manual_scan(keys=['w|vitry', 'f|type|city'], args=[2])
    assert results == ['d|{}'.format(vitry['_id']).encode()]


def test_zinter(factory):
    docs = (
        factory(name="rue de la monnaie", city="Vitry"),
        factory(name="La monnaye", city="Saint-Loup-Cammas"),
        factory(name="rue de la monnaie", city="Paris", importance=1),
        factory(name="rue de la monnaie", city="Condom", importance=0.9),
    )
    results = scripts.zinter(keys=['w|monnaie', 'w|rue', 'w|de'],
                             args=['tmp', 2])
    assert results == ['d|{}'.format(docs[2]['_id']).encode(),
                       'd|{}'.format(docs[3]['_id']).encode()]
    results = scripts.zinter(keys=['w|monnaie', 'w|rue', 'w|de'],
                             args=['tmp', 3])
    assert results == ['d|{}'.format(docs[2]['_id']).encode(),
                       'd|{}'.format(docs[3]['_id']).encode(),
                       'd|{}'.format(docs[0]['_id']).encode()]
# --- tests/test_main.py (repo: dustlang/homu, license: MIT) ---
import unittest
from unittest.mock import patch, Mock, MagicMock, call

from homu.main import sha_or_blank, force, parse_commands, \
    get_words


class TestMain(unittest.TestCase):
    def call_parse_commands(self, cfg={}, body='', username='user', repo_cfg={},
                            state=None, my_username='my_user', db=None,
                            states=[], realtime=False, sha=''):
        return parse_commands(cfg, body, username, repo_cfg, state, my_username,
                              db, states, realtime=realtime, sha=sha)

    def test_get_words_no_username(self):
        self.assertEqual(get_words("Hi, I'm a test message.", ''), [])

    def test_get_words_incorrect_username(self):
        self.assertEqual(get_words("@user I'm a message", 'username'), [])

    def test_get_words_correct_username(self):
        self.assertEqual(get_words("@user I'm a message", 'user'),
                         ['@user', "I'm", 'a', 'message'])

    def test_sha_or_blank_return_sha(self):
        self.assertEqual(sha_or_blank('f5d42200481'), 'f5d42200481')

    def test_sha_or_blank_return_blank(self):
        self.assertEqual(sha_or_blank('f5d@12'), '')

    @patch('homu.main.get_words', return_value=["@bot", "are", "you", "still", "there?"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.still_here')
    def test_parse_commands_still_here_realtime(self, mock_still_here, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state, realtime=True))
        mock_still_here.assert_called_once_with(state)

    @patch('homu.main.get_words', return_value=["@bot", "are", "you", "still", "there?"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.still_here')
    def test_parse_commands_still_here_not_realtime(self, mock_still_here, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state))
        assert not mock_still_here.called, 'still_here was called and should never be.'

    @patch('homu.main.get_words', return_value=["r+"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.review_approved')
    def test_parse_commands_review_approved_verified(self, mock_review_approved, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_review_approved.assert_called_once_with(state, False, 'user', 'user', 'my_user', 'abc123', [])

    @patch('homu.main.get_words', return_value=["r+"])
    @patch('homu.main.verify_auth', return_value=False)
    @patch('homu.main.PullReqState')
    @patch('homu.action.review_approved')
    def test_parse_commands_review_approved_not_verified(self, mock_review_approved, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state, sha='abc123'))
        assert not mock_review_approved.called, 'mock_review_approved was called and should never be.'

    @patch('homu.main.get_words', return_value=["r=user2"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.review_approved')
    def test_parse_commands_review_approved_verified_different_approver(self, mock_review_approved, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_review_approved.assert_called_once_with(state, False, 'user2', 'user', 'my_user', 'abc123', [])

    @patch('homu.main.get_words', return_value=["r-"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.review_rejected')
    def test_parse_commands_review_rejected(self, mock_review_rejected, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_review_rejected.assert_called_once_with(state, False)

    @patch('homu.main.get_words', return_value=["p=1"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.set_priority')
    def test_parse_commands_set_priority(self, mock_set_priority, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_set_priority.assert_called_once_with(state, False, '1', {})

    @patch('homu.main.get_words', return_value=["delegate=user2"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.delegate_to')
    def test_parse_commands_delegate_to(self, mock_delegate_to, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_delegate_to.assert_called_once_with(state, False, 'user2')

    @patch('homu.main.get_words', return_value=["delegate-"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.delegate_negative')
    def test_parse_commands_delegate_negative(self, mock_delegate_negative, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_delegate_negative.assert_called_once_with(state)

    @patch('homu.main.get_words', return_value=["delegate+"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.delegate_positive')
    def test_parse_commands_delegate_positive(self, mock_delegate_positive, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        state.num = 2
        state.get_repo().pull_request(state.num).user.login = 'delegate'
        self.assertTrue(self.call_parse_commands(state=state, sha='abc123'))
        mock_delegate_positive.assert_called_once_with(state, 'delegate', False)

    @patch('homu.main.get_words', return_value=["retry"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.retry')
    def test_parse_commands_retry_realtime(self, mock_retry, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_retry.assert_called_once_with(state)

    @patch('homu.main.get_words', return_value=["retry"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.retry')
    def test_parse_commands_retry_not_realtime(self, mock_retry, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state, sha='abc123'))
        assert not mock_retry.called, 'retry was called and should never be.'

    @patch('homu.main.get_words', return_value=["try"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action._try')
    def test_parse_commands_try_realtime(self, mock_try, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_try.assert_called_once_with(state, 'try')

    @patch('homu.main.get_words', return_value=["try"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action._try')
    def test_parse_commands_try_not_realtime(self, mock_try, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state, sha='abc123'))
        assert not mock_try.called, '_try was called and should never be.'

    @patch('homu.main.get_words', return_value=["rollup"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.rollup')
    def test_parse_commands_rollup(self, mock_rollup, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_rollup.assert_called_once_with(state, 'rollup')

    @patch('homu.main.get_words', return_value=["clean"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.clean')
    def test_parse_commands_clean_realtime(self, mock_clean, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_clean.assert_called_once_with(state)

    @patch('homu.main.get_words', return_value=["clean"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.clean')
    def test_parse_commands_clean_not_realtime(self, mock_clean, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state, sha='abc123'))
        assert not mock_clean.called, 'clean was called and should never be.'

    @patch('homu.main.get_words', return_value=["hello?"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.hello_or_ping')
    def test_parse_commands_hello_or_ping_realtime(self, mock_hello_or_ping, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_hello_or_ping.assert_called_once_with(state)

    @patch('homu.main.get_words', return_value=["hello?"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.hello_or_ping')
    def test_parse_commands_hello_or_ping_not_realtime(self, mock_hello_or_ping, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertFalse(self.call_parse_commands(state=state, sha='abc123'))
        assert not mock_hello_or_ping.called, 'hello_or_ping was called and should never be.'

    @patch('homu.main.get_words', return_value=["treeclosed=1"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.set_treeclosed')
    def test_parse_commands_set_treeclosed(self, mock_set_treeclosed, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_set_treeclosed.assert_called_once_with(state, '1')

    @patch('homu.main.get_words', return_value=["treeclosed-"])
    @patch('homu.main.verify_auth', return_value=True)
    @patch('homu.main.PullReqState')
    @patch('homu.action.treeclosed_negative')
    def test_parse_commands_treeclosed_negative(self, mock_treeclosed_negative, MockPullReqState, mock_auth, mock_words):
        state = MockPullReqState()
        self.assertTrue(self.call_parse_commands(state=state, realtime=True, sha='abc123'))
        mock_treeclosed_negative.assert_called_once_with(state)


if __name__ == '__main__':
    unittest.main()
# --- tests/conftest.py (repo: allrod5/extra-trees, license: MIT) ---
import pytest
from sklearn import datasets


@pytest.fixture(scope='session')
def circles():
    return datasets.make_circles()
# --- fractalis/utils.py (repo: jeffque/games-turtle, license: Unlicense) ---
#!/bin/env python3
from math import cos, pi


def size_inscrit_polygon(size, n):
    return size * (2 - 2 * cos(2 * pi / n)) ** 0.5
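The expression above is the law of cosines applied to two radii of length `size` separated by the central angle `2*pi/n`. Since `2 - 2*cos(t) == 4*sin(t/2)**2`, it reduces to the standard side length `2 * size * sin(pi / n)` of a regular n-gon inscribed in a circle of radius `size`. A quick numerical check of that identity (an added illustration, not part of the original module):

```python
from math import cos, sin, pi

def size_inscrit_polygon(size, n):
    # Chord between two points on a circle of radius `size`,
    # separated by the central angle 2*pi/n (law of cosines).
    return size * (2 - 2 * cos(2 * pi / n)) ** 0.5

# The closed form 2*R*sin(pi/n) agrees for several polygons.
for n in (3, 4, 6, 12):
    assert abs(size_inscrit_polygon(1.0, n) - 2 * sin(pi / n)) < 1e-12
```

For instance, a regular hexagon inscribed in the unit circle has side length exactly 1.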
4edec392b64401c002d5f8f988c019a2853e95fa | 260 | py | Python | cases/listNested.py | minakoyang/YY_python2.7_interpreter_in_CPP | e949f4bbd27752e6dbfef0a887d9567345d512f4 | [
"MIT"
] | 1 | 2019-04-30T16:27:19.000Z | 2019-04-30T16:27:19.000Z | cases/listNested.py | minakoyang/YY_python2.7_interpreter_in_CPP | e949f4bbd27752e6dbfef0a887d9567345d512f4 | [
"MIT"
] | null | null | null | cases/listNested.py | minakoyang/YY_python2.7_interpreter_in_CPP | e949f4bbd27752e6dbfef0a887d9567345d512f4 | [
"MIT"
] | null | null | null | a = ["Hello", [2.00, 4, 4 + 5], 2 * 4.9, "World" * 3]
print a
print a[1][2]
print a[1][::]
b = ["Hello", [2.00, 4, 4 + 5], [[2 * 4.9], [1, 2, 3] + [4, 5, 6]], "World" * 3]
print b
print b[1][0]
print b[2][1]
print b[2][0][0]
print b[2][::]
print b[2][0][::]
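These cases target the repo's Python 2.7 interpreter, where `print` is a statement. The same nested indexing and slicing behaves identically under CPython 3 with the print function (an added aside, not one of the original test cases):

```python
a = ["Hello", [2.00, 4, 4 + 5], 2 * 4.9, "World" * 3]

# Nested indexing: a[1] is [2.0, 4, 9], so a[1][2] is 9
print(a[1][2])    # 9
# A full slice copies the inner list
print(a[1][::])   # [2.0, 4, 9]

b = ["Hello", [2.00, 4, 4 + 5], [[2 * 4.9], [1, 2, 3] + [4, 5, 6]], "World" * 3]
# b[2] nests one level deeper: [[9.8], [1, 2, 3, 4, 5, 6]]
print(b[2][0][0])  # 9.8
print(b[2][1])     # [1, 2, 3, 4, 5, 6]
```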
# --- tests/mask/test_mask.py (repo: valentingol/transformers_tf, license: MIT) ---
import os
import sys

import pytest
import tensorflow as tf

myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../../')

from tests.tests_utils import tf_equal
from transformer.mask.mask import get_padding_mask, get_future_mask, get_masks


@pytest.fixture
def input_seq():
    return tf.constant([[6, 20, 87, 68, 14, 87, 11, 36, 0, 0],
                        [57, 18, 37, 9, 41, 47, 25, 31, 60, 0],
                        [82, 64, 3, 77, 69, 34, 76, 14, 46, 70],
                        [51, 21, 19, 61, 83, 0, 0, 0, 0, 0]])


@pytest.fixture
def output_seq():
    return tf.constant([[62, 83, 41, 77, 53, 2, 45, 16, 75, 19],
                        [73, 43, 5, 61, 37, 84, 52, 42, 0, 0],
                        [76, 19, 52, 43, 6, 22, 52, 0, 0, 0],
                        [85, 34, 15, 26, 74, 63, 8, 13, 43, 0],
                        [64, 84, 0, 0, 0, 0, 0, 0, 0, 0]])


def test_get_padding_mask(input_seq):
    padding_mask = get_padding_mask(input_seq)
    expected_mask = tf.constant([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
                                 [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
                                 [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                                 [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]])
    expected_mask = tf.reshape(expected_mask, (4, 1, 1, 10))
    assert padding_mask.shape == tf.TensorShape((4, 1, 1, 10)), (
        "padding_mask does not have the expected shape"
    )
    assert tf_equal(padding_mask, expected_mask), (
        "padding mask does not match the expected values"
    )


def test_get_future_mask(output_seq):
    future_mask = get_future_mask(output_seq)
    expected_mask = tf.constant([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                                 [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
                                 [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                                 [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                                 [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                                 [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
                                 [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
                                 [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
                                 [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
                                 [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
    expected_mask = tf.reshape(expected_mask, (1, 1, 10, 10))
    assert future_mask.shape == tf.TensorShape((1, 1, 10, 10)), (
        "future_mask does not have the expected shape"
    )
    assert tf_equal(future_mask, expected_mask), (
        "future mask does not match the expected values"
    )


def test_get_masks(input_seq, output_seq):
    in_pad_mask, out_mask = get_masks(input_seq, output_seq)
    expected_in_pad_mask = tf.constant([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
                                        [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
                                        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                                        [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]])
    expected_in_pad_mask = tf.reshape(expected_in_pad_mask, (4, 1, 1, 10))
    assert in_pad_mask.shape == tf.TensorShape((4, 1, 1, 10)), (
        "in_pad_mask does not have the expected shape"
    )
    assert tf_equal(in_pad_mask, expected_in_pad_mask), (
        "input padding mask does not match the expected values"
    )

    expected_out_mask = tf.constant([
        [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],
        [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]],
        [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]],
        [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
         [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]],
        [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]
    ])
    expected_out_mask = tf.reshape(expected_out_mask, (5, 1, 10, 10))
    assert out_mask.shape == tf.TensorShape((5, 1, 10, 10)), (
        "out_mask does not have the expected shape"
    )
    assert tf_equal(out_mask, expected_out_mask), (
        "output mask does not match the expected values"
    )
# --- test/test_utils.py (repo: Hasenpfote/fpq, license: MIT) ---
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from unittest import TestCase
import numpy as np
import sys
sys.path.append('../')
from fpq.utils import *
class TestUtils(TestCase):
def test_get_max_component_indices(self):
arr = np.array([1., 2., 3., 4.])
actual = get_max_component_indices(arr)
expected = (3,)
self.assertTrue(isinstance(actual, tuple))
self.assertTrue(np.array_equal(actual, expected))
arr = np.array([[1., 2., 3., 4.],
[3., 2., 1., 0.]])
actual = get_max_component_indices(arr)
expected = (np.array([0, 1]), np.array([3, 0]))
self.assertTrue(isinstance(actual, tuple))
self.assertTrue(np.array_equal(actual, expected))
def test_remove_max_component(self):
arr = np.array([1., 2., 3., 4.])
actual = remove_component(arr, indices=(3,))
expected = np.array([1., 2., 3.])
self.assertTrue(isinstance(actual, np.ndarray))
self.assertTrue(np.array_equal(actual, expected))
arr = np.array([[1., 2., 3., 4.],
[3., 2., 1., 0.]])
actual = remove_component(arr, indices=(np.array([0, 1]), np.array([3, 0])))
expected = np.array([[1., 2., 3.],
[2., 1., 0.]])
self.assertTrue(isinstance(actual, np.ndarray))
self.assertTrue(np.array_equal(actual, expected))
def test_remap(self):
src_min, src_max = np.float64(0.), np.float64(10.)
dst_min, dst_max = np.float64(0.), np.float64(1.)
src_val = np.float64(10.)
dst_val = remap(src_val, src_min, src_max, dst_min, dst_max)
self.assertTrue(isinstance(dst_val, np.float64))
self.assertTrue(dst_min <= dst_val <= dst_max)
src_min, src_max = np.float64(-10.), np.float64(10.)
dst_min, dst_max = np.float64(-1.), np.float64(1.)
src_val = np.float64(-10.)
dst_val = remap(src_val, src_min, src_max, dst_min, dst_max)
self.assertTrue(isinstance(dst_val, np.float64))
self.assertTrue(dst_min <= dst_val <= dst_max)
src_min, src_max = np.float64(0.), np.float64(10.)
dst_min, dst_max = np.float64(0.), np.float64(1.)
src_val = np.array([0., 2.5, 5., 7.5, 10.], dtype=np.float64)
dst_val = remap(src_val, src_min, src_max, dst_min, dst_max)
self.assertTrue(isinstance(dst_val, np.ndarray))
self.assertTrue(np.all(dst_val >= dst_min) and np.all(dst_val <= dst_max))
src_min, src_max = np.float64(-10.), np.float64(10.)
dst_min, dst_max = np.float64(-1.), np.float64(1.)
src_val = np.array([-10, -7.5, -5., -2.5, 0., 2.5, 5., 7.5, 10.], dtype=np.float64)
dst_val = remap(src_val, src_min, src_max, dst_min, dst_max)
self.assertTrue(isinstance(dst_val, np.ndarray))
self.assertTrue(np.all(dst_val >= dst_min) and np.all(dst_val <= dst_max))
# File: dictionary/models/m2m.py (repo: ankitgc1/django-sozluk-master, license: BSD-3-Clause)
from django.db import models
class TopicFollowing(models.Model):
topic = models.ForeignKey("Topic", on_delete=models.CASCADE)
author = models.ForeignKey("Author", on_delete=models.CASCADE)
read_at = models.DateTimeField(auto_now_add=True)
date_created = models.DateTimeField(auto_now_add=True)
class EntryFavorites(models.Model):
author = models.ForeignKey("Author", on_delete=models.CASCADE)
entry = models.ForeignKey("Entry", on_delete=models.CASCADE)
date_created = models.DateTimeField(auto_now_add=True)
class UpvotedEntries(models.Model):
author = models.ForeignKey("Author", on_delete=models.CASCADE)
entry = models.ForeignKey("Entry", on_delete=models.CASCADE)
date_created = models.DateTimeField(auto_now_add=True)
class DownvotedEntries(models.Model):
author = models.ForeignKey("Author", on_delete=models.CASCADE)
entry = models.ForeignKey("Entry", on_delete=models.CASCADE)
date_created = models.DateTimeField(auto_now_add=True)
# File: inflector/__init__.py (repo: Jacobe2169/Python-Inflector, license: MIT-0)
# coding=utf-8
from .rules.english import English
from .rules.french import French
from .rules.spanish import Spanish
from .inflector import Inflector

# File: openmdao/core/tests/test_reconf_connections.py (repo: Subraiz/OpenMDAO, license: Apache-2.0)
"""
Tests connections with Reconfigurable Model Execution.
Tests for absolute and promoted connections, for different nonlinear solvers.
"""
# FIXME: with NonlinearRunOnce, run_model() fails with a ValueError (P.O.)
# FIXME: with the Newton solver and NLBGS, variable sizes are not updated (P.O.)
import numpy as np
import unittest
from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent
from openmdao.solvers.linear.direct import DirectSolver
from openmdao.solvers.nonlinear.newton import NewtonSolver
from openmdao.solvers.nonlinear.nonlinear_block_gs import NonlinearBlockGS
from openmdao.utils.assert_utils import assert_rel_error
class ReconfComp1(ExplicitComponent):
def initialize(self):
self.size = 1
self.counter = 0
def reconfigure(self):
self.counter += 1
if self.counter % 2 == 0:
self.size += 1
flag = True
else:
flag = False
return flag
def setup(self):
self.add_input('x', val=1.0)
self.add_output('y', val=np.zeros(self.size))
# All derivatives are defined.
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 2 * inputs['x']
def compute_partials(self, inputs, jacobian):
jacobian['y', 'x'] = 2 * np.ones((self.size, 1))
class ReconfComp2(ReconfComp1):
"""The size of the y input changes the same as way as in ReconfComp"""
def setup(self):
self.add_input('y', val=np.zeros(self.size))
self.add_output('f', val=np.zeros(self.size))
# All derivatives are defined.
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['f'] = 2 * inputs['y']
def compute_partials(self, inputs, jacobian):
jacobian['f', 'y'] = 2 * np.ones((self.size, 1))
class TestReconfConnections(unittest.TestCase):
@unittest.expectedFailure
def test_promoted_connections(self):
p = Problem()
p.model = model = Group()
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'], promotes_outputs=['y'])
model.add_subsystem('c3', ReconfComp2(), promotes_inputs=['y'],
promotes_outputs=['f'])
p.setup()
p['x'] = 3.
self.assertEqual(len(p['y']), 1)
# First run the model once; counter = 1, size of y = 1
p.run_model()
totals = p.compute_totals(wrt=['x'], of=['y'])
assert_rel_error(self, p['x'], 3.0)
assert_rel_error(self, p['y'], 6.0)
assert_rel_error(self, totals['y', 'x'], [[2.0]])
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model() # Fails with ValueError
self.assertEqual(len(p['y']), 2)
@unittest.expectedFailure
def test_abs_connections(self):
p = Problem()
p.model = model = Group()
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'])
model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
model.connect('c2.y', 'c3.y')
p.setup()
p['x'] = 3.
self.assertEqual(len(p['c2.y']), 1)
self.assertEqual(len(p['c3.y']), 1)
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model()
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model() # Fails with ValueError
self.assertEqual(len(p['c2.y']), 2)
self.assertEqual(len(p['c3.y']), 2)
@unittest.expectedFailure
def test_reconf_comp_connections_newton_solver(self):
p = Problem()
p.model = model = Group()
model.linear_solver = DirectSolver()
model.nonlinear_solver = NewtonSolver(solve_subsystems=False)
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'])
model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
model.connect('c2.y', 'c3.y')
p.setup()
p['x'] = 3.
# First run the model once; counter = 1, size of y = 1
p.run_model()
self.assertEqual(len(p['c2.y']), 1)
self.assertEqual(len(p['c3.y']), 1)
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model()
self.assertEqual(len(p['c2.y']), 2)
self.assertEqual(len(p['c3.y']), 2)
assert_rel_error(self, p['c3.y'], [6., 6.])
@unittest.expectedFailure
def test_reconf_comp_connections_nlbgs_solver(self):
p = Problem()
p.model = model = Group()
model.linear_solver = DirectSolver()
model.nonlinear_solver = NonlinearBlockGS()
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'])
model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
model.connect('c2.y', 'c3.y')
p.setup()
p['x'] = 3.
# First run the model once; counter = 1, size of y = 1
p.run_model()
self.assertEqual(len(p['c2.y']), 1)
self.assertEqual(len(p['c3.y']), 1)
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model()
self.assertEqual(len(p['c2.y']), 2)
self.assertEqual(len(p['c3.y']), 2)
assert_rel_error(self, p['c3.y'], [6., 6.])
@unittest.expectedFailure
def test_promoted_connections_newton_solver(self):
p = Problem()
p.model = model = Group()
model.linear_solver = DirectSolver()
model.nonlinear_solver = NewtonSolver(solve_subsystems=False)
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'], promotes_outputs=['y'])
model.add_subsystem('c3', ReconfComp2(), promotes_inputs=['y'], promotes_outputs=['f'])
p.setup()
p['x'] = 3.
# First run the model once; counter = 1, size of y = 1
p.run_model()
self.assertEqual(len(p['y']), 1)
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model()
self.assertEqual(len(p['y']), 2)
assert_rel_error(self, p['y'], [6., 6.])
@unittest.expectedFailure
def test_promoted_connections_nlbgs_solver(self):
p = Problem()
p.model = model = Group()
model.linear_solver = DirectSolver()
model.nonlinear_solver = NonlinearBlockGS()
model.nonlinear_solver.options['reraise_child_analysiserror'] = True
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'], promotes_outputs=['y'])
model.add_subsystem('c3', ReconfComp2(), promotes_inputs=['y'], promotes_outputs=['f'])
p.setup()
p['x'] = 3.
# First run the model once; counter = 1, size of y = 1
p.run_model()
self.assertEqual(len(p['y']), 1)
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model()
self.assertEqual(len(p['y']), 2)
assert_rel_error(self, p['y'], [6., 6.])
def test_reconf_comp_not_connected(self):
p = Problem()
p.model = model = Group()
model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
model.add_subsystem('c2', ReconfComp1(), promotes_inputs=['x'])
model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
# c2.y not connected to c3.y
p.setup()
p['x'] = 3.
# First run the model once; counter = 1, size of y = 1
p.run_model()
self.assertEqual(len(p['c2.y']), 1)
self.assertEqual(len(p['c3.y']), 1)
# Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
p.run_model()
self.assertEqual(len(p['c3.y']), 2)
self.assertEqual(len(p['c2.y']), 2)
if __name__ == '__main__':
unittest.main()
# File: core/utils/__init__.py (repo: cleiveliu/django-template, license: MIT)
import uuid
def uid_generator() -> str:
return uuid.uuid1().hex
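A quick sanity check of the helper above (this check is not part of the original file): `uuid.uuid1().hex` is always a 32-character lowercase hex string, with the dashes of the canonical UUID form stripped.

```python
import uuid

def uid_generator() -> str:
    return uuid.uuid1().hex

# uuid1().hex drops the dashes, leaving 32 lowercase hex digits
uid = uid_generator()
assert len(uid) == 32
assert all(c in "0123456789abcdef" for c in uid)
```

Note that `uuid1` embeds the host MAC address and a timestamp; where uids must be unpredictable, `uuid.uuid4()` is usually the better choice.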
# File: tests/openapi/test_responses_buckets.py (repo: vincentfretin/kinto, license: Apache-2.0)
from bravado_core.response import validate_response
from .support import MINIMALIST_BUCKET, OpenAPITest
class OpenAPIBucketResponsesTest(OpenAPITest):
def test_get_bucket_200(self):
response = self.app.get("/buckets/b1", headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].get_bucket
schema = self.spec.deref(op.op_spec["responses"]["200"])
validate_response(schema, op, response)
def test_post_bucket_200(self):
response = self.app.post_json("/buckets", self.bucket, headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].create_bucket
schema = self.spec.deref(op.op_spec["responses"]["200"])
validate_response(schema, op, response)
def test_post_bucket_201(self):
response = self.app.post_json(
"/buckets", MINIMALIST_BUCKET, headers=self.headers, status=201
)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].create_bucket
schema = self.spec.deref(op.op_spec["responses"]["201"])
validate_response(schema, op, response)
def test_put_bucket_200(self):
response = self.app.put("/buckets/b1", headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].update_bucket
schema = self.spec.deref(op.op_spec["responses"]["200"])
validate_response(schema, op, response)
def test_put_bucket_201(self):
response = self.app.put("/buckets/b2", headers=self.headers, status=201)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].update_bucket
schema = self.spec.deref(op.op_spec["responses"]["201"])
validate_response(schema, op, response)
def test_delete_bucket_200(self):
response = self.app.delete("/buckets/b1", headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].delete_bucket
schema = self.spec.deref(op.op_spec["responses"]["200"])
validate_response(schema, op, response)
def test_get_buckets_200(self):
response = self.app.get("/buckets", headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].get_buckets
schema = self.spec.deref(op.op_spec["responses"]["200"])
validate_response(schema, op, response)
def test_delete_buckets_200(self):
response = self.app.delete("/buckets", headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources["Buckets"].delete_buckets
schema = self.spec.deref(op.op_spec["responses"]["200"])
validate_response(schema, op, response)
# File: app/modules/wifi/__init__.py (repo: bytecode-tech/my-tank, license: MIT)
from .wifi import Network, WifiNetwork, Wifi
from .wifi_controller import wifi_controller

# File: utils_collection/file.py (repo: mzolfaghari/crossmodal-contrastive-learning, license: Apache-2.0)
from pathlib import Path
def get_folder_size(d):
return sum(f.stat().st_size for f in Path(d).glob('**/*') if f.is_file())
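A small demonstration of the helper above (the temporary files are illustrative, not from the repo): the `glob('**/*')` pattern recurses into subdirectories, and `is_file()` skips the directory entries themselves.

```python
import tempfile
from pathlib import Path

def get_folder_size(d):
    return sum(f.stat().st_size for f in Path(d).glob('**/*') if f.is_file())

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "a.txt").write_text("hello")   # 5 bytes
    sub = Path(d) / "sub"
    sub.mkdir()
    (sub / "b.txt").write_text("world!")      # 6 bytes, in a subdirectory
    total = get_folder_size(d)                # recursive sum: 5 + 6

assert total == 11
```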
# File: linux-locale/tests/conftest.py (repo: cheretbe/ansible-playbooks, license: MIT)
def pytest_addoption(parser):
parser.addoption("--default-lang", action="store", default=None)
parser.addoption("--default-lc", action="store", default=None)
# File: glue/clients/tests/test_util.py (repo: yuvallanger/glue, license: BSD-3-Clause)
import numpy as np
from numpy.testing import assert_allclose
from ..util import fast_limits
def test_fast_limits_nans():
x = np.zeros((10, 10)) * np.nan
assert_allclose(fast_limits(x, 0, 1), [0, 1])
def test_single_value():
x = np.array([1])
assert_allclose(fast_limits(x, 5., 95.), [1, 1])
# File: src/agc_optims/optim/__init__.py (repo: Skyy93/agc_optims, license: MIT)
from .adam_agc import Adam_AGC
from .adamw_agc import AdamW_AGC
from .rmsprop_agc import RMSprop_AGC
from .sgd_agc import SGD_AGC

# File: src/interfaces/website/serializers/__init__.py (repo: cruz-f/protrend, license: MIT)
from .organism import (OrganismSerializer,
OrganismsSerializer)
from .regulator import (RegulatorsSerializer, RegulatorSerializer)
# File: ndreg/__init__.py (repo: neurodata/ndreg, license: Apache-2.0)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from .ndreg import *
from . import preprocessor
from . import util
from . import plotter

# File: palindrome/__init__.py (repo: khanstark/palindrome, license: MIT)
from palindrome.palin import palin
# File: PySyntext/resources/__init__.py (repo: YenanZ/PySyntext, license: MIT)
## init
from resources.eng_words import eng_words
from resources.toxic_words import toxic_words
# File: plugins/zoom/icon_zoom/triggers/__init__.py (repo: lukaszlaszuk/insightconnect-plugins, license: MIT)
# GENERATED BY KOMAND SDK - DO NOT EDIT
from .user_activity_event.trigger import UserActivityEvent
# File: build/lib/minotaur-manticore-maze/class_structure.py (repo: smidem/minotaur-manticore-maze, license: MIT)
class LoadMap():
def __init__(self, map_file):
pass
class Navigation():
def __init__(self, current_loc):
pass
def move(self, direction):
pass
class Boundary():
def __init__(self, coordinate):
pass
def find_boundary(self):
pass
class Item():
def __init__(self, coordinate):
pass
def find_item(self):
pass
class Lighting():
def __init__(self, torch):
pass
def torch_check(self):
pass
class Battle():
def __init__(self, health, armor, attack):
pass
class Scene():
def enter(self):
pass
class Entrance(Scene):
def enter(self):
pass
class Death(Scene):
def enter(self):
pass
class Minotaur(Scene):
def enter(self):
pass
class Manticore(Scene):
def enter(self):
pass
class WayOut(Scene):
def enter(self):
pass
class HiddenDoor(Scene):
def enter(self):
pass
class Progress():
def bar(self, secs, prefix):
pass
# File: mit/6.006/r01/problem1Dd.py (repo: abrantesasf/algoritmos, license: MIT)
problemList = [8, 1, 2, 3, 6, 5, 4, 5, 4, 3]
| 22.5 | 44 | 0.466667 | 11 | 45 | 1.909091 | 0.727273 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.30303 | 0.266667 | 45 | 1 | 45 | 45 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
25ee7f7bb5c54c8cdde2e7ef065c71ab5f8280f9 | 19 | py | Python | roomBuilder/__init__.py | bartoszpogoda/academic-py-painter-assistant | 2a2dff55e1d7b631f28d6492c5553a8e7ac5abc2 | [
"MIT"
] | null | null | null | roomBuilder/__init__.py | bartoszpogoda/academic-py-painter-assistant | 2a2dff55e1d7b631f28d6492c5553a8e7ac5abc2 | [
"MIT"
] | null | null | null | roomBuilder/__init__.py | bartoszpogoda/academic-py-painter-assistant | 2a2dff55e1d7b631f28d6492c5553a8e7ac5abc2 | [
"MIT"
] | null | null | null | from . import room
| 9.5 | 18 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d32018a9c02eead9ab9c6ca9ef5c95e068264c3f | 109 | py | Python | test/test_basic.py | DahlitzFlorian/personal-bookshelf | 201a24532c2a74633e449097b061fffb0bc19a64 | [
"Apache-2.0"
] | 1 | 2019-03-18T07:01:12.000Z | 2019-03-18T07:01:12.000Z | test/test_basic.py | DahlitzFlorian/personal-bookshelf | 201a24532c2a74633e449097b061fffb0bc19a64 | [
"Apache-2.0"
] | 5 | 2019-02-08T18:51:50.000Z | 2019-03-22T21:39:20.000Z | test/test_basic.py | DahlitzFlorian/personal-bookshelf | 201a24532c2a74633e449097b061fffb0bc19a64 | [
"Apache-2.0"
] | null | null | null | """Basic test to get started."""
def test_basic():
"""Test basic."""
assert (1, 2, 3) == (1, 2, 3)
| 15.571429 | 33 | 0.504587 | 17 | 109 | 3.176471 | 0.588235 | 0.333333 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 0.247706 | 109 | 6 | 34 | 18.166667 | 0.585366 | 0.348624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d33d7ab05ac8da2de898060c6b5120199ab85bb2 | 33 | py | Python | spoopy/tools/bob/__init__.py | rodrigobressan/PADify | 362db2b3a33793ac53f938e89f90a6ecdf778e89 | [
"MIT"
] | 12 | 2019-11-26T07:44:08.000Z | 2021-03-03T09:51:43.000Z | spoopy/tools/bob/__init__.py | rodrigobressan/PADify | 362db2b3a33793ac53f938e89f90a6ecdf778e89 | [
"MIT"
] | 13 | 2020-01-28T22:09:41.000Z | 2022-03-11T23:43:37.000Z | spoopy/tools/bob/__init__.py | rodrigobressan/PADify | 362db2b3a33793ac53f938e89f90a6ecdf778e89 | [
"MIT"
] | 5 | 2020-01-02T09:52:42.000Z | 2022-02-21T15:45:23.000Z | from tools.bob import bob_sample
| 16.5 | 32 | 0.848485 | 6 | 33 | 4.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d34a34f436fe1788342e4837672651f39671d779 | 24 | py | Python | Pycom/__init__.py | ccubed/CommanderSnek | 0d0878e476d77b7f25cb0417ab797fed0ff65115 | [
"MIT"
] | 1 | 2017-10-18T12:36:04.000Z | 2017-10-18T12:36:04.000Z | Pycom/__init__.py | ccubed/CommanderSnek | 0d0878e476d77b7f25cb0417ab797fed0ff65115 | [
"MIT"
] | null | null | null | Pycom/__init__.py | ccubed/CommanderSnek | 0d0878e476d77b7f25cb0417ab797fed0ff65115 | [
"MIT"
] | null | null | null | from .Pycom import Pycom | 24 | 24 | 0.833333 | 4 | 24 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3522ebc72eec46d94f4d5d0aaa574b3225dfae8 | 4,696 | py | Python | UnitTests/CrackerTests.py | MrBhendel/Cirrus | 23ce19991e7c12efaebb3412cf4bb2ce438ceb36 | [
"MIT"
] | 5 | 2017-08-03T13:18:38.000Z | 2019-03-02T03:25:19.000Z | UnitTests/CrackerTests.py | MrBhendel/Cirrus | 23ce19991e7c12efaebb3412cf4bb2ce438ceb36 | [
"MIT"
] | null | null | null | UnitTests/CrackerTests.py | MrBhendel/Cirrus | 23ce19991e7c12efaebb3412cf4bb2ce438ceb36 | [
"MIT"
] | 2 | 2017-07-10T00:42:27.000Z | 2019-04-22T19:51:54.000Z | import unittest
import os
import sys
sys.path.append('../')
from Cracker import *
from ModulusList import ModulusListImpl
class QuietInformer(StatusInformer):
def InformUserOfSuccessfulCrack(self, ip1, ip2):
pass
class CrackerTests(unittest.TestCase):
def test_cracker_should_work_with_one_value(self):
theList = ModulusListImpl()
theList.add('127.0.0.1', 24, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertFalse(os.path.isfile('127.0.0.1.key'))
def test_cracker_should_work_with_no_common_factors(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 3, 3)
theList.add('192.168.1.2', 5, 3)
theList.add('192.168.1.3', 11, 3)
theList.add('192.168.1.4', 13, 3)
theList.add('192.168.1.5', 17, 3)
theList.add('192.168.1.6', 23, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertFalse(os.path.isfile('192.168.1.1.key'))
self.assertFalse(os.path.isfile('192.168.1.2.key'))
self.assertFalse(os.path.isfile('192.168.1.3.key'))
self.assertFalse(os.path.isfile('192.168.1.4.key'))
self.assertFalse(os.path.isfile('192.168.1.5.key'))
self.assertFalse(os.path.isfile('192.168.1.6.key'))
def test_cracker_should_work_with_two_values_with_common_factor(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 2*3, 3)
theList.add('192.168.1.2', 5*3, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertTrue(os.path.isfile('192.168.1.1.key'))
self.assertTrue(os.path.isfile('192.168.1.2.key'))
os.remove('192.168.1.1.key')
os.remove('192.168.1.2.key')
def test_cracker_should_work_with_three_values_with_three_common_factors(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 2*3, 3)
theList.add('192.168.1.2', 5*3, 3)
theList.add('192.168.1.3', 7*3, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertTrue(os.path.isfile('192.168.1.1.key'))
self.assertTrue(os.path.isfile('192.168.1.2.key'))
self.assertTrue(os.path.isfile('192.168.1.3.key'))
os.remove('192.168.1.1.key')
os.remove('192.168.1.2.key')
os.remove('192.168.1.3.key')
def test_cracker_should_work_with_three_values_with_two_common_factors_1(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 2*3, 3)
theList.add('192.168.1.2', 2*5, 3)
theList.add('192.168.1.3', 7*11, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertTrue(os.path.isfile('192.168.1.1.key'))
self.assertTrue(os.path.isfile('192.168.1.2.key'))
self.assertFalse(os.path.isfile('192.168.1.3.key'))
os.remove('192.168.1.1.key')
os.remove('192.168.1.2.key')
def test_cracker_should_work_with_three_values_with_two_common_factors_2(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 2*3, 3)
theList.add('192.168.1.2', 7*5, 3)
theList.add('192.168.1.3', 2*11, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertTrue(os.path.isfile('192.168.1.1.key'))
self.assertFalse(os.path.isfile('192.168.1.2.key'))
self.assertTrue(os.path.isfile('192.168.1.3.key'))
os.remove('192.168.1.1.key')
os.remove('192.168.1.3.key')
def test_cracker_should_work_with_three_values_with_two_common_factors_3(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 7*3, 3)
theList.add('192.168.1.2', 2*5, 3)
theList.add('192.168.1.3', 2*11, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertFalse(os.path.isfile('192.168.1.1.key'))
self.assertTrue(os.path.isfile('192.168.1.2.key'))
self.assertTrue(os.path.isfile('192.168.1.3.key'))
os.remove('192.168.1.2.key')
os.remove('192.168.1.3.key')
def test_cracker_should_work_with_multiple_common_factors(self):
theList = ModulusListImpl()
theList.add('192.168.1.1', 2*3, 3)
theList.add('192.168.1.2', 2*5, 3)
theList.add('192.168.1.3', 7*11, 3)
theList.add('192.168.1.4', 2*13, 3)
theList.add('192.168.1.5', 7*17, 3)
theList.add('192.168.1.6', 23, 3)
sut = CrackerImpl(theList, QuietInformer())
sut.CrackAndWriteCertificates()
self.assertTrue(os.path.isfile('192.168.1.1.key'))
self.assertTrue(os.path.isfile('192.168.1.2.key'))
self.assertTrue(os.path.isfile('192.168.1.3.key'))
self.assertTrue(os.path.isfile('192.168.1.4.key'))
self.assertTrue(os.path.isfile('192.168.1.5.key'))
self.assertFalse(os.path.isfile('192.168.1.6.key'))
os.remove('192.168.1.1.key')
os.remove('192.168.1.2.key')
os.remove('192.168.1.3.key')
os.remove('192.168.1.4.key')
os.remove('192.168.1.5.key')
if __name__ == '__main__':
unittest.main()
| 36.403101 | 80 | 0.70592 | 811 | 4,696 | 3.988903 | 0.078915 | 0.126121 | 0.147141 | 0.128594 | 0.908192 | 0.895518 | 0.881298 | 0.86306 | 0.832148 | 0.780835 | 0 | 0.152036 | 0.095187 | 4,696 | 128 | 81 | 36.6875 | 0.60932 | 0 | 0 | 0.666667 | 0 | 0 | 0.202087 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 1 | 0.078947 | false | 0.008772 | 0.04386 | 0 | 0.140351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d35e977c9ce1f29747514dc2584ad5d5c2f12d68 | 156 | py | Python | reth_buffer/reth_buffer/sampler/__init__.py | sosp2021/Reth | 10c032f44a25049355ebdd97a2cb3299e8c3fb82 | [
"MIT"
] | null | null | null | reth_buffer/reth_buffer/sampler/__init__.py | sosp2021/Reth | 10c032f44a25049355ebdd97a2cb3299e8c3fb82 | [
"MIT"
] | 1 | 2021-08-10T02:58:58.000Z | 2021-08-10T02:58:58.000Z | reth_buffer/reth_buffer/sampler/__init__.py | sosp2021/reth | 10c032f44a25049355ebdd97a2cb3299e8c3fb82 | [
"MIT"
] | null | null | null | from .base_sampler import BaseSampler
from .fifo_sampler import FIFOSampler
from .per_sampler import PERSampler
from .uniform_sampler import UniformSampler
| 31.2 | 43 | 0.871795 | 20 | 156 | 6.6 | 0.55 | 0.393939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 156 | 4 | 44 | 39 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d36bbcbba79b38602e45031df3c3dec711903455 | 105 | py | Python | h0rton/h0_inference/__init__.py | jiwoncpark/h0rton | 2541885d70d090fdb777339cfb77a3a9f3e7996d | [
"MIT"
] | 4 | 2020-12-02T02:18:08.000Z | 2021-11-25T21:56:33.000Z | h0rton/h0_inference/__init__.py | jiwoncpark/h0rton | 2541885d70d090fdb777339cfb77a3a9f3e7996d | [
"MIT"
] | 25 | 2019-10-17T08:18:38.000Z | 2020-12-26T09:38:05.000Z | h0rton/h0_inference/__init__.py | jiwoncpark/h0rton | 2541885d70d090fdb777339cfb77a3a9f3e7996d | [
"MIT"
] | 1 | 2020-12-03T02:14:12.000Z | 2020-12-03T02:14:12.000Z | from .h0_posterior import H0Posterior
from .gaussian_bnn_posterior import *
from .plotting_utils import * | 35 | 37 | 0.847619 | 14 | 105 | 6.071429 | 0.642857 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.104762 | 105 | 3 | 38 | 35 | 0.882979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3af96c4a997caaf150b76a68aac862d150d83f4 | 40 | py | Python | Cython/runTest.py | Abesuden/sandbox | af092b9ec841b3fef6066d660a9d6d999e8eaa1f | [
"MIT"
] | null | null | null | Cython/runTest.py | Abesuden/sandbox | af092b9ec841b3fef6066d660a9d6d999e8eaa1f | [
"MIT"
] | null | null | null | Cython/runTest.py | Abesuden/sandbox | af092b9ec841b3fef6066d660a9d6d999e8eaa1f | [
"MIT"
] | null | null | null | import testOne
print(testOne.add(2, 4)) | 13.333333 | 24 | 0.75 | 7 | 40 | 4.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.1 | 40 | 3 | 24 | 13.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
6cc3ba477b553c127c606569a9062fae21b04442 | 788 | py | Python | pyexlatex/table/__init__.py | whoopnip/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | 4 | 2020-06-08T07:17:12.000Z | 2021-11-04T21:39:52.000Z | pyexlatex/table/__init__.py | nickderobertis/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | 24 | 2020-02-17T17:20:44.000Z | 2021-12-20T00:10:19.000Z | pyexlatex/table/__init__.py | nickderobertis/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | null | null | null | from pyexlatex.table.models.panels import Panel
from pyexlatex.table.models.data.table import DataTable
from pyexlatex.table.models.labels.table import LabelCollection, LabelTable
from pyexlatex.table.models.labels.label import Label
from pyexlatex.table.models.table.table import Table
from pyexlatex.table.models.texgen.items import Tabular
from pyexlatex.table.models.data.valuestable import ValuesTable
from pyexlatex.table.models.texgen.alignment import ColumnAlignment, ColumnsAlignment
from pyexlatex.table.models.texgen.lines import TopRule, MidRule, BottomRule, TableLineSegment
from pyexlatex.table.models.labels.multicolumnlabel import MultiColumnLabel
from pyexlatex.models.format.breaks import TableLineBreak
from pyexlatex.table.models.texgen.tabularstar import TabularStar
| 60.615385 | 94 | 0.86802 | 99 | 788 | 6.909091 | 0.30303 | 0.22807 | 0.289474 | 0.385965 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067259 | 788 | 12 | 95 | 65.666667 | 0.930612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9f36b4d78d89011fc247efa1fe7ed17069412899 | 45 | py | Python | Warp_Module/occlusion_mapper/__init__.py | ClementPinard/direct-warper | be46410202c8cd9efb982b5dc4c1eb954ab45b10 | [
"MIT"
] | 2 | 2021-05-24T06:27:40.000Z | 2021-06-11T02:39:59.000Z | Warp_Module/occlusion_mapper/__init__.py | ClementPinard/direct-warper | be46410202c8cd9efb982b5dc4c1eb954ab45b10 | [
"MIT"
] | null | null | null | Warp_Module/occlusion_mapper/__init__.py | ClementPinard/direct-warper | be46410202c8cd9efb982b5dc4c1eb954ab45b10 | [
"MIT"
] | null | null | null | from .occlusion_mapper import OcclusionMapper | 45 | 45 | 0.911111 | 5 | 45 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 45 | 1 | 45 | 45 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9f6ee57424492742dde125ec733b3d2caab6b6c2 | 2,846 | py | Python | tests/integration/test_hooks.py | AuHau/giTrack | 802ee23513d60b2379f0f5968e595288d5b6c31d | [
"MIT"
] | 5 | 2019-02-19T10:56:56.000Z | 2020-11-28T11:37:45.000Z | tests/integration/test_hooks.py | AuHau/giTrack | 802ee23513d60b2379f0f5968e595288d5b6c31d | [
"MIT"
] | 63 | 2019-01-21T21:44:28.000Z | 2022-03-21T14:01:11.000Z | tests/integration/test_hooks.py | AuHau/giTrack | 802ee23513d60b2379f0f5968e595288d5b6c31d | [
"MIT"
] | 2 | 2019-01-04T19:31:52.000Z | 2020-12-10T21:40:09.000Z | from unittest import mock
from .helpers import ProviderForTesting
class TestHooks:
def test_basic(self, cmd, mocker, commit):
result, _ = cmd('start', git_inited=True)
assert result.exit_code == 0
mocker.spy(ProviderForTesting, 'stop')
mocker.spy(ProviderForTesting, 'start')
commit('Some message')
result, _ = cmd('hooks post-commit')
assert result.exit_code == 0
ProviderForTesting.stop.assert_called_once_with(mock.ANY, 'Some message', force=False, task=None)
ProviderForTesting.start.assert_called_once_with(mock.ANY)
def test_ignored_non_running_repos(self, cmd, mocker, commit):
result, _ = cmd('init --no-hook', inited=False, git_inited=True)
assert result.exit_code == 0
mocker.spy(ProviderForTesting, 'stop')
mocker.spy(ProviderForTesting, 'start')
commit('Some message')
result, _ = cmd('hooks post-commit')
assert result.exit_code == 0
assert ProviderForTesting.stop.call_count == 0
assert ProviderForTesting.start.call_count == 0
def test_task_static(self, cmd, mocker, commit):
result, _ = cmd('start', git_inited=True)
assert result.exit_code == 0
mocker.spy(ProviderForTesting, 'stop')
mocker.spy(ProviderForTesting, 'start')
commit('Some message')
result, _ = cmd('hooks post-commit', config='task_static.config')
assert result.exit_code == 0
ProviderForTesting.stop.assert_called_once_with(mock.ANY, 'Some message', force=False, task='some task name')
ProviderForTesting.start.assert_called_once_with(mock.ANY)
def test_task_dynamic_branch(self, cmd, mocker, commit):
result, _ = cmd('start', git_inited=True)
assert result.exit_code == 0
mocker.spy(ProviderForTesting, 'stop')
mocker.spy(ProviderForTesting, 'start')
commit('Some message', branch='#123_Some_brunch')
result, _ = cmd('hooks post-commit', config='task_dynamic_branch.config')
assert result.exit_code == 0
ProviderForTesting.stop.assert_called_once_with(mock.ANY, 'Some message', force=False, task=123)
ProviderForTesting.start.assert_called_once_with(mock.ANY)
def test_task_dynamic_commit(self, cmd, mocker, commit):
result, _ = cmd('start', git_inited=True)
assert result.exit_code == 0
mocker.spy(ProviderForTesting, 'stop')
mocker.spy(ProviderForTesting, 'start')
commit('#321 Some message')
result, _ = cmd('hooks post-commit', config='task_dynamic_commit.config')
assert result.exit_code == 0
ProviderForTesting.stop.assert_called_once_with(mock.ANY, '#321 Some message', force=False, task=321)
ProviderForTesting.start.assert_called_once_with(mock.ANY)
| 36.961039 | 117 | 0.676037 | 339 | 2,846 | 5.466077 | 0.162242 | 0.04857 | 0.086346 | 0.107933 | 0.839719 | 0.826228 | 0.811117 | 0.811117 | 0.75823 | 0.728548 | 0 | 0.012021 | 0.210822 | 2,846 | 76 | 118 | 37.447368 | 0.813001 | 0 | 0 | 0.622642 | 0 | 0 | 0.134223 | 0.018271 | 0 | 0 | 0 | 0 | 0.377358 | 1 | 0.09434 | false | 0 | 0.037736 | 0 | 0.150943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f9d924c1f381cfe098ad9e83745c7a8c2108d91 | 3,126 | py | Python | admin/utils.py | dev-easyshares/mighty | a6cf473fb8cfbf5b92db68c7b068fc8ae2911b8b | [
"MIT"
] | null | null | null | admin/utils.py | dev-easyshares/mighty | a6cf473fb8cfbf5b92db68c7b068fc8ae2911b8b | [
"MIT"
] | 1 | 2022-03-12T00:57:37.000Z | 2022-03-12T00:57:37.000Z | admin/utils.py | dev-easyshares/mighty | a6cf473fb8cfbf5b92db68c7b068fc8ae2911b8b | [
"MIT"
] | null | null | null | from django.core.exceptions import FieldDoesNotExist
from django.db import models, router
from django.db.models.constants import LOOKUP_SEP
from django.db.models.deletion import Collector
from django.forms.utils import pretty_name
from django.urls import NoReverseMatch, reverse
from django.utils import formats, timezone
from django.utils.html import format_html
from django.utils.text import capfirst
from django.utils.translation import ngettext, override as translation_override
from django.contrib.admin.utils import NestedObjects, quote
def get_disabled_objects(objs, request, admin_site):
try:
obj = objs[0]
except IndexError:
return [], {}, set(), []
else:
using = router.db_for_write(obj._meta.model)
collector = NestedObjects(using=using)
collector.collect(objs)
perms_needed = set()
def format_callback(obj):
model = obj.__class__
has_admin = model in admin_site._registry
opts = obj._meta
no_edit_link = '%s: %s' % (capfirst(opts.verbose_name), obj)
if has_admin:
if not admin_site._registry[model].has_delete_permission(request, obj):
perms_needed.add(opts.verbose_name)
try:
admin_url = reverse('%s:%s_%s_disable' % (admin_site.name, opts.app_label, opts.model_name), None, (quote(obj.pk),))
except NoReverseMatch:
return no_edit_link
return format_html('{}: <a href="{}">{}</a>', capfirst(opts.verbose_name), admin_url, obj)
else:
return no_edit_link
to_delete = collector.nested(format_callback)
protected = [format_callback(obj) for obj in collector.protected]
model_count = {model._meta.verbose_name_plural: len(objs) for model, objs in collector.model_objs.items()}
return to_delete, model_count, perms_needed, protected
def get_enabled_objects(objs, request, admin_site):
try:
obj = objs[0]
except IndexError:
return [], {}, set(), []
else:
using = router.db_for_write(obj._meta.model)
collector = NestedObjects(using=using)
collector.collect(objs)
perms_needed = set()
def format_callback(obj):
model = obj.__class__
has_admin = model in admin_site._registry
opts = obj._meta
no_edit_link = '%s: %s' % (capfirst(opts.verbose_name), obj)
if has_admin:
if not admin_site._registry[model].has_delete_permission(request, obj):
perms_needed.add(opts.verbose_name)
try:
admin_url = reverse('%s:%s_%s_enable' % (admin_site.name, opts.app_label, opts.model_name), None, (quote(obj.pk),))
except NoReverseMatch:
return no_edit_link
return format_html('{}: <a href="{}">{}</a>', capfirst(opts.verbose_name), admin_url, obj)
else:
return no_edit_link
to_delete = collector.nested(format_callback)
protected = [format_callback(obj) for obj in collector.protected]
model_count = {model._meta.verbose_name_plural: len(objs) for model, objs in collector.model_objs.items()}
return to_delete, model_count, perms_needed, protected | 43.416667 | 132 | 0.671785 | 403 | 3,126 | 4.952854 | 0.223325 | 0.0501 | 0.03006 | 0.046092 | 0.774549 | 0.774549 | 0.774549 | 0.774549 | 0.774549 | 0.774549 | 0 | 0.000824 | 0.223608 | 3,126 | 72 | 133 | 43.416667 | 0.82159 | 0 | 0 | 0.794118 | 0 | 0 | 0.028462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.147059 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9fc5d84c689c7b4c060f38eb0b64266cc4b6314d | 36 | py | Python | wsbtrading/maths/__init__.py | bordumb/wsbtrading | 32cadab1d9e2f4d37e7d028cc30f4cd0e924be92 | [
"MIT"
] | 14 | 2021-01-25T00:01:39.000Z | 2021-08-12T09:20:39.000Z | wsbtrading/maths/__init__.py | bordumb/wsbtrading | 32cadab1d9e2f4d37e7d028cc30f4cd0e924be92 | [
"MIT"
] | 15 | 2021-01-24T20:18:13.000Z | 2021-02-04T21:54:27.000Z | wsbtrading/maths/__init__.py | bordumb/wsbtrading | 32cadab1d9e2f4d37e7d028cc30f4cd0e924be92 | [
"MIT"
] | 3 | 2021-01-27T14:03:02.000Z | 2021-08-29T04:13:26.000Z | from wsbtrading.maths.maths import * | 36 | 36 | 0.833333 | 5 | 36 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c7d237d6f6fb7182ae2462a3d06aad66534cf2f | 159 | py | Python | python/13_regex_and_parsing/13_detecthtmltags,attributesandattributevalues.py | jaimiles23/hacker_rank | 0580eac82e5d0989afabb5c2e66faf09713f891b | [
"Apache-2.0"
] | null | null | null | python/13_regex_and_parsing/13_detecthtmltags,attributesandattributevalues.py | jaimiles23/hacker_rank | 0580eac82e5d0989afabb5c2e66faf09713f891b | [
"Apache-2.0"
] | null | null | null | python/13_regex_and_parsing/13_detecthtmltags,attributesandattributevalues.py | jaimiles23/hacker_rank | 0580eac82e5d0989afabb5c2e66faf09713f891b | [
"Apache-2.0"
] | 3 | 2021-09-22T11:06:58.000Z | 2022-01-25T09:29:24.000Z | Solution to [Detect HTML Tags, Attributes and Attribute Values](https://www.hackerrank.com/challenges/detect-html-tags-attributes-and-attribute-values/problem) | 159 | 159 | 0.830189 | 22 | 159 | 6 | 0.681818 | 0.151515 | 0.212121 | 0.363636 | 0.636364 | 0.636364 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0.050314 | 159 | 1 | 159 | 159 | 0.874172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c85e7bc54c19a1d8e8d7147a5a3be5f835242ec | 22 | py | Python | agoodle/__init__.py | lossyrob/agoodle | 3d6d1d983832acbe3a5e0f2fa78d86c7518560c1 | [
"MIT"
] | 1 | 2015-01-09T16:08:48.000Z | 2015-01-09T16:08:48.000Z | agoodle/__init__.py | lossyrob/agoodle | 3d6d1d983832acbe3a5e0f2fa78d86c7518560c1 | [
"MIT"
] | null | null | null | agoodle/__init__.py | lossyrob/agoodle | 3d6d1d983832acbe3a5e0f2fa78d86c7518560c1 | [
"MIT"
] | null | null | null | from agoodle import *
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ca62f5531bbac84ed75ec040535d400c6c6b1b7 | 12,098 | py | Python | tests/cases/resources/tests/concept.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | null | null | null | tests/cases/resources/tests/concept.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | null | null | null | tests/cases/resources/tests/concept.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | 1 | 2020-01-16T15:26:37.000Z | 2020-01-16T15:26:37.000Z | import json
from django.test.utils import override_settings
from avocado.models import DataConcept, DataConceptField, DataField, \
DataCategory
from avocado.events.models import Log
from .base import BaseTestCase
class ConceptResourceTestCase(BaseTestCase):
def setUp(self):
super(ConceptResourceTestCase, self).setUp()
self.name_field = DataField.objects.get_by_natural_key(
'tests', 'title', 'name')
self.salary_field = DataField.objects.get_by_natural_key(
'tests', 'title', 'salary')
self.boss_field = DataField.objects.get_by_natural_key(
'tests', 'title', 'boss')
c1 = DataConcept(name='Title', published=True)
c1.save()
DataConceptField(concept=c1, field=self.name_field, order=1).save()
DataConceptField(concept=c1, field=self.salary_field, order=2).save()
DataConceptField(concept=c1, field=self.boss_field, order=3).save()
c2 = DataConcept(name='Salary')
c2.save()
DataConceptField(concept=c2, field=self.salary_field, order=1).save()
DataConceptField(concept=c2, field=self.boss_field, order=2).save()
c3 = DataConcept(name='Name', published=True)
c3.save()
DataConceptField(concept=c3, field=self.name_field, order=1).save()
def test_get_all(self):
response = self.client.get('/api/concepts/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 2)
def test_get_all_category_sort(self):
# Create some temporary concepts and categories
cat1 = DataCategory(name='Category1', order=1.0, published=True)
cat1.save()
c1 = DataConcept(name='B', published=True, category=cat1)
c1.save()
field1 = DataConceptField(concept=c1, field=self.name_field, order=1)
field1.save()
c2 = DataConcept(name='C', published=True, category=cat1)
c2.save()
field2 = DataConceptField(concept=c2, field=self.name_field, order=1)
field2.save()
c3 = DataConcept(name='A', published=True, category=cat1)
c3.save()
field3 = DataConceptField(concept=c3, field=self.name_field, order=1)
field3.save()
# Check that category ordering is happening by default
response = self.client.get('/api/concepts/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 5)
names = [concept.get('name', '') for concept in
json.loads(response.content)]
self.assertEqual(names, ['Title', 'Name', 'B', 'C', 'A'])
# Reverse the ordering of the categories
response = self.client.get('/api/concepts/',
{'order': 'desc'},
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 5)
names = [concept.get('name', '') for concept in
json.loads(response.content)]
self.assertEqual(names, ['B', 'C', 'A', 'Title', 'Name'])
# Order by concept name in addition to category
response = self.client.get('/api/concepts/',
{'sort': 'name'},
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 5)
names = [concept.get('name', '') for concept in
json.loads(response.content)]
self.assertEqual(names, ['Name', 'Title', 'A', 'B', 'C'])
# Reverse the name and category sorting
response = self.client.get('/api/concepts/',
{'sort': 'name', 'order': 'desc'},
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 5)
names = [concept.get('name', '') for concept in
json.loads(response.content)]
self.assertEqual(names, ['C', 'B', 'A', 'Title', 'Name'])
c1.delete()
c2.delete()
c3.delete()
field1.delete()
field2.delete()
field3.delete()
        cat1.delete()

    def test_get_all_name_sort(self):
        response = self.client.get('/api/concepts/',
                                   {'sort': 'name'},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 2)

        names = [concept.get('name', '') for concept in
                 json.loads(response.content)]
        self.assertEqual(names, ['Name', 'Title'])

        response = self.client.get('/api/concepts/',
                                   {'sort': 'name', 'order': 'desc'},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 2)

        names = [concept.get('name', '') for concept in
                 json.loads(response.content)]
        self.assertEqual(names, ['Title', 'Name'])

    def test_get_all_limit(self):
        # Name and title are both published but with the limit param set below
        # we should only get one back.
        response = self.client.get('/api/concepts/',
                                   {'limit': 1},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 1)

    @override_settings(SERRANO_CHECK_ORPHANED_FIELDS=True)
    def test_get_all_orphan(self):
        # Orphan one of the fields we are about to embed in the concepts we
        # are about to retrieve.
        DataField.objects.filter(pk=self.salary_field.pk) \
            .update(field_name='XXX')

        response = self.client.get('/api/concepts/', {'embed': True},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 1)

        # If we aren't embedding the fields, then none of the concepts
        # should be filtered out.
        response = self.client.get('/api/concepts/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 2)

    @override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
    def test_get_all_orphan_check_off(self):
        # Orphan one of the fields we are about to embed in the concepts we
        # are about to retrieve.
        DataField.objects.filter(pk=self.salary_field.pk) \
            .update(field_name='XXX')

        response = self.client.get('/api/concepts/', {'embed': True},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 2)

        # If we aren't embedding the fields, then none of the concepts
        # should be filtered out.
        response = self.client.get('/api/concepts/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 2)

    def test_get_one(self):
        response = self.client.get('/api/concepts/999/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 404)

        response = self.client.get('/api/concepts/3/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertTrue(json.loads(response.content))
        self.assertTrue(Log.objects.filter(event='read', object_id=3).exists())

    @override_settings(SERRANO_CHECK_ORPHANED_FIELDS=True)
    def test_get_one_orphan(self):
        # Orphan one of the fields on the concept before we retrieve it
        DataField.objects.filter(pk=self.salary_field.pk) \
            .update(field_name='XXX')

        response = self.client.get('/api/concepts/1/', {'embed': True},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 500)

        # If we aren't embedding the fields, there should not be a server error
        response = self.client.get('/api/concepts/1/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)

    @override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
    def test_get_one_orphan_check_off(self):
        # Orphan one of the fields on the concept before we retrieve it
        DataField.objects.filter(pk=self.salary_field.pk) \
            .update(field_name='XXX')

        response = self.client.get('/api/concepts/1/', {'embed': True},
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)

        # If we aren't embedding the fields, there should not be a server error
        response = self.client.get('/api/concepts/1/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)

    def test_get_privileged(self):
        # Superuser sees everything
        self.client.login(username='root', password='password')

        response = self.client.get('/api/concepts/?unpublished=1',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(len(json.loads(response.content)), 3)

        response = self.client.get('/api/concepts/2/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertTrue(json.loads(response.content))


class ConceptFieldResourceTestCase(BaseTestCase):
    def setUp(self):
        super(ConceptFieldResourceTestCase, self).setUp()

        self.name_field = DataField.objects.get_by_natural_key(
            'tests', 'title', 'name')
        self.salary_field = DataField.objects.get_by_natural_key(
            'tests', 'title', 'salary')
        self.boss_field = DataField.objects.get_by_natural_key(
            'tests', 'title', 'boss')

        c1 = DataConcept(name='Title', published=True)
        c1.save()
        DataConceptField(concept=c1, field=self.name_field, order=1).save()
        DataConceptField(concept=c1, field=self.salary_field, order=2).save()
        DataConceptField(concept=c1, field=self.boss_field, order=3).save()

    def test_get(self):
        response = self.client.get('/api/concepts/1/fields/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(json.loads(response.content)), 3)

    def test_get_orphan(self):
        # Orphan the data field linked to the concept we are about to read
        # the fields for.
        DataField.objects.filter(pk=self.salary_field.pk) \
            .update(field_name="XXX")

        response = self.client.get('/api/concepts/1/fields/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 500)

    @override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
    def test_get_orphan_check_off(self):
        # Orphan the data field linked to the concept we are about to read
        # the fields for.
        DataField.objects.filter(pk=self.salary_field.pk) \
            .update(field_name="XXX")

        response = self.client.get('/api/concepts/1/fields/',
                                   HTTP_ACCEPT='application/json')
        self.assertEqual(response.status_code, 200)
| 44.477941 | 79 | 0.604315 | 1,375 | 12,098 | 5.204364 | 0.114909 | 0.088038 | 0.057854 | 0.067496 | 0.825601 | 0.807015 | 0.769145 | 0.756568 | 0.750419 | 0.717859 | 0 | 0.017528 | 0.273764 | 12,098 | 271 | 80 | 44.642066 | 0.79695 | 0.092495 | 0 | 0.69802 | 0 | 0 | 0.098138 | 0.008855 | 0 | 0 | 0 | 0 | 0.222772 | 1 | 0.074257 | false | 0.004951 | 0.024752 | 0 | 0.108911 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e288cef4e75e4f3e1026640a792b6373a11757a8 | 45 | py | Python | performance/driver/classes/summarize/__init__.py | mesosphere/dcos-perf-test-driver | 8fba87cb6c6f64690c0b5bef5c7d9f2aa0fba06b | [
"Apache-2.0"
] | 2 | 2018-02-27T18:21:21.000Z | 2018-03-16T12:12:12.000Z | performance/driver/classes/summarize/__init__.py | mesosphere/dcos-perf-test-driver | 8fba87cb6c6f64690c0b5bef5c7d9f2aa0fba06b | [
"Apache-2.0"
] | 1 | 2018-06-25T07:14:41.000Z | 2018-06-25T07:14:41.000Z | performance/driver/classes/summarize/__init__.py | mesosphere/dcos-perf-test-driver | 8fba87cb6c6f64690c0b5bef5c7d9f2aa0fba06b | [
"Apache-2.0"
] | 1 | 2020-06-25T10:37:21.000Z | 2020-06-25T10:37:21.000Z | from .percentile import PercentileSummarizer
| 22.5 | 44 | 0.888889 | 4 | 45 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c58b0266d095278283e880204c7e718bb4cc0bf | 26 | py | Python | Flask_app/__init__.py | Napchat/Flask_app | ccc84d51938cf5a213703d5f9c16f4825b29c3f9 | [
"Apache-2.0"
] | null | null | null | Flask_app/__init__.py | Napchat/Flask_app | ccc84d51938cf5a213703d5f9c16f4825b29c3f9 | [
"Apache-2.0"
] | null | null | null | Flask_app/__init__.py | Napchat/Flask_app | ccc84d51938cf5a213703d5f9c16f4825b29c3f9 | [
"Apache-2.0"
] | null | null | null | from .Flask_app import app | 26 | 26 | 0.846154 | 5 | 26 | 4.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2bda20c1f35c5ef20329091f0de26c875ffd635 | 3,835 | py | Python | test/test_check.py | aequitas/concourse-ftp-resource | 435faa8f101d273d6d360ec568b8c83abac325a8 | [
"MIT"
] | 6 | 2017-04-02T15:46:03.000Z | 2021-12-19T17:33:41.000Z | test/test_check.py | aequitas/concourse-ftp-resource | 435faa8f101d273d6d360ec568b8c83abac325a8 | [
"MIT"
] | 12 | 2016-11-07T21:32:27.000Z | 2019-10-29T08:20:27.000Z | test/test_check.py | aequitas/concourse-ftp-resource | 435faa8f101d273d6d360ec568b8c83abac325a8 | [
"MIT"
] | 6 | 2016-08-26T16:41:10.000Z | 2020-04-16T15:54:48.000Z | from conftest import cmd, make_files
def test_check_one_file(ftp_root, ftp_server):
"""Test if one uploaded file returns one version."""
make_files(ftp_root, ['filename-0.0.0.tgz'])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source)
assert result == [{"version": "0.0.0"}]
def test_semver(ftp_root, ftp_server):
"""Test if semver versions don't break."""
make_files(ftp_root, ['filename-1.0.0-rc.1.tgz'])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source)
assert result == [{"version": "1.0.0-rc.1"}]
def test_check_multiple_files(ftp_root, ftp_server):
"""Test if multiple uploaded file return more versions."""
make_files(ftp_root, ['filename-0.0.0.tgz', 'filename-0.0.1.tgz'])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source)
assert result == [{"version": "0.0.1"}], 'should only return most recent version'
def test_check_passing_version(ftp_root, ftp_server):
"""Test when a version is passed only new versions are returned."""
make_files(ftp_root, [
'filename-0.0.0.tgz', 'filename-0.0.1.tgz', 'filename-0.0.2.tgz', 'filename-0.0.3.tgz'
])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source, version={"version": "0.0.1"})
assert {"version": "0.0.2"} in result, 'new version should be in result'
assert {"version": "0.0.3"} in result, 'new version should be in result'
assert {"version": "0.0.1"} in result, 'current version should be in result'
assert {"version": "0.0.0"} not in result, 'older version should not be in result'
def test_check_no_new_version(ftp_root, ftp_server):
"""When passing a version and no newer files are found return requested version."""
make_files(ftp_root, ['filename-0.0.0.tgz', 'filename-0.0.1.tgz'])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source, version={"version": "0.0.1"})
assert {"version": "0.0.1"} in result, 'current version should be in result'
def test_check_missing_version(ftp_root, ftp_server):
"""When passing a version that is no longer valid newer versions should be returned."""
make_files(ftp_root, ['filename-0.0.2.tgz', 'filename-0.0.3.tgz'])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source, version={"version": "0.0.1"})
assert {"version": "0.0.0"} not in result, 'older version should not be in result'
assert {"version": "0.0.1"} not in result, 'current version should not be in result'
assert {"version": "0.0.2"} in result, 'new version should be in result'
assert {"version": "0.0.3"} in result, 'new version should be in result'
def test_check_requested_version_missing(ftp_root, ftp_server):
"""Test when the requested version is no longer valid it is not returned."""
make_files(ftp_root, [
'filename-0.0.0.tgz', 'filename-0.0.2.tgz', 'filename-0.0.3.tgz'
])
source = {
"uri": ftp_server,
"regex": "(?P<file>filename-(?P<version>.*).tgz)"
}
result = cmd('check', source, version={"version": "0.0.1"})
assert {"version": "0.0.2"} in result, 'new version should be in result'
assert {"version": "0.0.3"} in result, 'new version should be in result'
assert {"version": "0.0.1"} not in result, 'current version should not be in result'
assert {"version": "0.0.0"} not in result, 'older version should not be in result'
| 32.5 | 94 | 0.621121 | 578 | 3,835 | 4.022491 | 0.115917 | 0.037849 | 0.073548 | 0.083871 | 0.821505 | 0.803441 | 0.749247 | 0.749247 | 0.728602 | 0.692473 | 0 | 0.034922 | 0.201043 | 3,835 | 117 | 95 | 32.777778 | 0.72389 | 0.111864 | 0 | 0.671429 | 0 | 0 | 0.403561 | 0.085757 | 0 | 0 | 0 | 0 | 0.228571 | 1 | 0.1 | false | 0.014286 | 0.014286 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2ecf2e772c3f202ff55ec90a7db542325e0552c | 26 | py | Python | pyytdl/__init__.py | JohannesKnopp/PyYtDl | 522db6067f6695be45638478a75e6a0708cb85ad | [
"MIT"
] | null | null | null | pyytdl/__init__.py | JohannesKnopp/PyYtDl | 522db6067f6695be45638478a75e6a0708cb85ad | [
"MIT"
] | null | null | null | pyytdl/__init__.py | JohannesKnopp/PyYtDl | 522db6067f6695be45638478a75e6a0708cb85ad | [
"MIT"
] | null | null | null | from .pyytdl import PyYtDl | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e2f6b4b985970808cae44ff2664fdb4300dc4804 | 158 | py | Python | Python/String/LowerCasefold.py | piovezan/SOpt | a5ec90796b7bdf98f0675457fc4bb99c8695bc40 | [
"MIT"
] | null | null | null | Python/String/LowerCasefold.py | piovezan/SOpt | a5ec90796b7bdf98f0675457fc4bb99c8695bc40 | [
"MIT"
] | null | null | null | Python/String/LowerCasefold.py | piovezan/SOpt | a5ec90796b7bdf98f0675457fc4bb99c8695bc40 | [
"MIT"
] | null | null | null | print("FORMATAÇÃO".lower())
print("FORMATAÇÃO".casefold())
print("der Fluß".lower())
print("der Fluß".casefold())
#https://pt.stackoverflow.com/q/544727/101
| 22.571429 | 42 | 0.71519 | 21 | 158 | 5.380952 | 0.619048 | 0.265487 | 0.212389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 0.050633 | 158 | 6 | 43 | 26.333333 | 0.693333 | 0.259494 | 0 | 0 | 0 | 0 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e2fda034326ae3fb56fd0eae33a4bfb7795b18bf | 44 | py | Python | offlineslides/__init__.py | alexisfcote/offline_slides | f7b37265093042fd0aa1e68e0d50d971e4ea86d9 | [
"MIT"
] | 6 | 2018-10-02T19:54:20.000Z | 2019-09-02T09:40:46.000Z | offlineslides/__init__.py | alexisfcote/offline_slides | f7b37265093042fd0aa1e68e0d50d971e4ea86d9 | [
"MIT"
] | 2 | 2019-02-07T14:27:32.000Z | 2019-02-07T14:35:43.000Z | offlineslides/__init__.py | alexisfcote/offline_slides | f7b37265093042fd0aa1e68e0d50d971e4ea86d9 | [
"MIT"
] | 1 | 2019-02-06T16:56:34.000Z | 2019-02-06T16:56:34.000Z | from .offlineslides import export_to_offline | 44 | 44 | 0.909091 | 6 | 44 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a71871cf996f4668d4a9ab4baf959ef45af213b | 41 | py | Python | pynetatest1.py | crknipe/pyneta | a14a5733443803b4f81fdf327c83d1f24b0a3692 | [
"Apache-2.0"
] | null | null | null | pynetatest1.py | crknipe/pyneta | a14a5733443803b4f81fdf327c83d1f24b0a3692 | [
"Apache-2.0"
] | null | null | null | pynetatest1.py | crknipe/pyneta | a14a5733443803b4f81fdf327c83d1f24b0a3692 | [
"Apache-2.0"
] | null | null | null | print("Hello World")
print("Hello back")
| 13.666667 | 20 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 2 | 21 | 20.5 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0.512195 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1a7e6c0eabafd600f2697a3068db15c37af4a78f | 29 | py | Python | bumpv/client/versioning/__init__.py | kylie-a/bumpversion | 13a150daa02f29e7dd74b5240c54c7929ec176b8 | [
"MIT"
] | 1 | 2021-05-24T00:17:46.000Z | 2021-05-24T00:17:46.000Z | bumpv/client/versioning/__init__.py | kylie-a/bumpversion | 13a150daa02f29e7dd74b5240c54c7929ec176b8 | [
"MIT"
] | 41 | 2021-03-24T22:50:09.000Z | 2021-12-17T12:15:13.000Z | bumpv/client/versioning/__init__.py | kylie-a/bumpversion | 13a150daa02f29e7dd74b5240c54c7929ec176b8 | [
"MIT"
] | 1 | 2019-11-24T15:36:19.000Z | 2019-11-24T15:36:19.000Z | from .version import Version
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a90c257006aa6274657e035f10d596456f23aaa | 29 | py | Python | telebot/__init__.py | ntk148v/telegram-bot | 8dbcd73de766cf41026a4bc7a30c85588e3377d3 | [
"Apache-2.0"
] | 2 | 2017-12-21T01:04:25.000Z | 2020-02-07T11:00:09.000Z | telebot/__init__.py | ntk148v/telegram-bot | 8dbcd73de766cf41026a4bc7a30c85588e3377d3 | [
"Apache-2.0"
] | 1 | 2018-01-15T01:44:14.000Z | 2018-01-15T02:30:26.000Z | telebot/__init__.py | ntk148v/telegram-bot | 8dbcd73de766cf41026a4bc7a30c85588e3377d3 | [
"Apache-2.0"
] | 1 | 2017-12-21T10:12:25.000Z | 2017-12-21T10:12:25.000Z | from telebot.run import main
| 14.5 | 28 | 0.827586 | 5 | 29 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1aafa32797433ff85c78d21105a6493d7674624b | 24 | py | Python | core/__init__.py | iomintz/Chiaki-Nanami | f59dc04e34f320689dc4232ecd1b82ecd73fba04 | [
"MIT"
] | 1 | 2018-07-15T21:40:43.000Z | 2018-07-15T21:40:43.000Z | core/__init__.py | bmintz/Chiaki-Nanami | f59dc04e34f320689dc4232ecd1b82ecd73fba04 | [
"MIT"
] | null | null | null | core/__init__.py | bmintz/Chiaki-Nanami | f59dc04e34f320689dc4232ecd1b82ecd73fba04 | [
"MIT"
] | null | null | null | from .bot import Chiaki
| 12 | 23 | 0.791667 | 4 | 24 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
646267a48d70bdeae7028105d2380e42363f7e17 | 36 | py | Python | pytest_jupyter_kernel/__init__.py | yitzchak/pytest-jupyter-kernel | 07743c3a9d7429655bd121568e94eae40c09ea91 | [
"MIT"
] | null | null | null | pytest_jupyter_kernel/__init__.py | yitzchak/pytest-jupyter-kernel | 07743c3a9d7429655bd121568e94eae40c09ea91 | [
"MIT"
] | null | null | null | pytest_jupyter_kernel/__init__.py | yitzchak/pytest-jupyter-kernel | 07743c3a9d7429655bd121568e94eae40c09ea91 | [
"MIT"
] | null | null | null | from .fixture import jupyter_kernel
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
648bed99684380581cc0b369b5da6f63f7c012e9 | 39 | py | Python | examples/phobos/tests/test_std_utf.py | kinke/autowrap | 2f042df3f292aa39b1da0b9607fbe3424f56ff4a | [
"BSD-3-Clause"
] | 47 | 2019-07-16T10:38:07.000Z | 2022-03-30T16:34:24.000Z | examples/phobos/tests/test_std_utf.py | kinke/autowrap | 2f042df3f292aa39b1da0b9607fbe3424f56ff4a | [
"BSD-3-Clause"
] | 199 | 2019-06-17T23:24:40.000Z | 2021-06-16T16:41:36.000Z | examples/phobos/tests/test_std_utf.py | kinke/autowrap | 2f042df3f292aa39b1da0b9607fbe3424f56ff4a | [
"BSD-3-Clause"
] | 7 | 2019-09-13T18:03:49.000Z | 2022-01-17T03:53:00.000Z | def test_import():
    import std_utf
| 9.75 | 18 | 0.692308 | 6 | 39 | 4.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 39 | 3 | 19 | 13 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
649536b2015bbd3fbf1a8da26ff11097e89e2843 | 23 | py | Python | pygurobi/__init__.py | AndrewBMartin/pygurobi | 231c61252781135ac8e5cc1ba4a828d69d1840f3 | [
"MIT"
] | 20 | 2016-08-05T00:06:16.000Z | 2021-01-05T08:12:18.000Z | pygurobi/__init__.py | AndrewBMartin/pygurobi | 231c61252781135ac8e5cc1ba4a828d69d1840f3 | [
"MIT"
] | 2 | 2018-02-13T15:44:36.000Z | 2018-05-27T19:54:04.000Z | pygurobi/__init__.py | AndrewBMartin/pygurobi | 231c61252781135ac8e5cc1ba4a828d69d1840f3 | [
"MIT"
] | 6 | 2017-08-23T20:40:13.000Z | 2021-08-09T14:47:50.000Z | from pygurobi import *
| 11.5 | 22 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b38b0620bd532f1b370cc369edb4eb5451d6f665 | 1,314 | py | Python | testing/integration/test_function_name.py | symonk/pytest-validate | 384fef57919f0bf8d25d65f9ab64b407a95e6ebe | [
"MIT"
] | 1 | 2020-08-12T11:10:43.000Z | 2020-08-12T11:10:43.000Z | testing/integration/test_function_name.py | symonk/pytest-infrastructure | 384fef57919f0bf8d25d65f9ab64b407a95e6ebe | [
"MIT"
] | 23 | 2020-03-29T18:25:54.000Z | 2020-11-14T15:20:24.000Z | testing/integration/test_function_name.py | symonk/pytest-infrastructure | 384fef57919f0bf8d25d65f9ab64b407a95e6ebe | [
"MIT"
] | null | null | null | from _pytest.config import ExitCode
def test_custom_name_is_ok(testdir):
    path = testdir.makepyfile(
        """
        from infrastructure import infrastructure

        @infrastructure(name="Bazinga!")
        def some_function():
            pass

        def test_something():
            pass
        """
    )
    result = testdir.runpytest(f"--infra-module={path}")
    assert result.ret == ExitCode.OK
    result.stdout.fnmatch_lines(["*Bazinga!*"])


def test_name_none(testdir):
    path = testdir.makepyfile(
        """
        from infrastructure import infrastructure

        @infrastructure(name=None)
        def some_function():
            pass

        def test_something():
            pass
        """
    )
    result = testdir.runpytest(f"--infra-module={path}")
    assert result.ret == ExitCode.OK
    result.stdout.fnmatch_lines(["*some_function*"])


def test_custom_name_empty_str(testdir):
    path = testdir.makepyfile(
        """
        from infrastructure import infrastructure

        @infrastructure(name=" ")
        def some_function():
            pass

        def test_something():
            pass
        """
    )
    result = testdir.runpytest(f"--infra-module={path}")
    assert result.ret == ExitCode.OK
    result.stdout.fnmatch_lines(["*some_function*"])
| 23.464286 | 56 | 0.597412 | 130 | 1,314 | 5.869231 | 0.253846 | 0.055046 | 0.070773 | 0.110092 | 0.857143 | 0.857143 | 0.857143 | 0.857143 | 0.857143 | 0.857143 | 0 | 0 | 0.285388 | 1,314 | 55 | 57 | 23.890909 | 0.812567 | 0 | 0 | 0.578947 | 0 | 0 | 0.142462 | 0.087137 | 0 | 0 | 0 | 0 | 0.157895 | 1 | 0.157895 | false | 0 | 0.052632 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b3c45f0c5f1ece5f6e6c1f11ef51b3469a783032 | 85 | py | Python | src/pbi/__init__.py | redkitedata/pbi-tools | 9712ae7751fbaf4036d4adb491849d57972c66be | [
"MIT"
] | null | null | null | src/pbi/__init__.py | redkitedata/pbi-tools | 9712ae7751fbaf4036d4adb491849d57972c66be | [
"MIT"
] | 6 | 2021-12-01T11:12:29.000Z | 2022-01-07T10:20:09.000Z | src/pbi/__init__.py | redkitedata/pbi-tools | 9712ae7751fbaf4036d4adb491849d57972c66be | [
"MIT"
] | 2 | 2021-11-04T16:37:50.000Z | 2021-11-08T21:53:43.000Z | from . import api
from . import tools
from . import deploy
from .token import Token
| 14.166667 | 24 | 0.752941 | 13 | 85 | 4.923077 | 0.461538 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 85 | 5 | 25 | 17 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b3e2225f46ce76b2ed781d62685e009d308fcb01 | 6,723 | py | Python | test/test_integration.py | sensiblecodeio/csv-to-cantabular-metadata-2021 | 32bad86f5acaec76af6538bdeff89fc78575c8dd | [
"Apache-2.0"
] | null | null | null | test/test_integration.py | sensiblecodeio/csv-to-cantabular-metadata-2021 | 32bad86f5acaec76af6538bdeff89fc78575c8dd | [
"Apache-2.0"
] | null | null | null | test/test_integration.py | sensiblecodeio/csv-to-cantabular-metadata-2021 | 32bad86f5acaec76af6538bdeff89fc78575c8dd | [
"Apache-2.0"
] | null | null | null | import json
import unittest.mock
import unittest
import pathlib
import os
from datetime import date
import ons_csv_to_ctb_json_main

FILENAME_TABLES = 'cantabm_v9-3-0_unknown-metadata-version_tables-md_19700101-1.json'
FILENAME_DATASET = 'cantabm_v9-3-0_unknown-metadata-version_dataset-md_19700101-1.json'
FILENAME_SERVICE = 'cantabm_v9-3-0_unknown-metadata-version_service-md_19700101-1.json'
FILENAME_TABLES_NO_GEO = 't_cantabm_v9-3-0_no-geo_tables-md_19700101-2.json'
FILENAME_DATASET_NO_GEO = 't_cantabm_v9-3-0_no-geo_dataset-md_19700101-2.json'
FILENAME_SERVICE_NO_GEO = 't_cantabm_v9-3-0_no-geo_service-md_19700101-2.json'


class TestIntegration(unittest.TestCase):
    def test_directory_validity(self):
        """Check that a sensible error is raised if the input/output directory is invalid."""
        file_dir = pathlib.Path(__file__).parent.resolve()
        input_dir = os.path.join(file_dir, 'testdata')
        output_dir = os.path.join(file_dir, 'out')
        invalid_dir = os.path.join(file_dir, 'invalid')

        expected_error = f'{invalid_dir} does not exist or is not a dir'
        with unittest.mock.patch('sys.argv', ['test', '-i', invalid_dir, '-o', output_dir]):
            with self.assertRaisesRegex(ValueError, expected_error):
                ons_csv_to_ctb_json_main.main()

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', invalid_dir]):
            with self.assertRaisesRegex(ValueError, expected_error):
                ons_csv_to_ctb_json_main.main()

        expected_error = f'{__file__} does not exist or is not a dir'
        with unittest.mock.patch('sys.argv', ['test', '-i', __file__, '-o', output_dir]):
            with self.assertRaisesRegex(ValueError, expected_error):
                ons_csv_to_ctb_json_main.main()

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', __file__]):
            with self.assertRaisesRegex(ValueError, expected_error):
                ons_csv_to_ctb_json_main.main()

    def test_metadata_master_version(self):
        """Check that a SystemExit is raised if the metadata master version is invalid."""
        file_dir = pathlib.Path(__file__).parent.resolve()
        input_dir = os.path.join(file_dir, 'testdata')
        output_dir = os.path.join(file_dir, 'out')

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', output_dir,
                                              '-m', 'a/../b']):
            with self.assertRaises(SystemExit):
                ons_csv_to_ctb_json_main.main()

    def test_build_number(self):
        """Check that a SystemExit is raised if the build number is invalid."""
        file_dir = pathlib.Path(__file__).parent.resolve()
        input_dir = os.path.join(file_dir, 'testdata')
        output_dir = os.path.join(file_dir, 'out')

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', output_dir,
                                              '-b', 'a']):
            with self.assertRaises(SystemExit):
                ons_csv_to_ctb_json_main.main()

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', output_dir,
                                              '-b', '-1']):
            with self.assertRaises(SystemExit):
                ons_csv_to_ctb_json_main.main()

    @unittest.mock.patch('ons_csv_to_ctb_json_main.date')
    def test_generated_json(self, mock_date):
        """Generate JSON from source CSV and compare it with expected values."""
        mock_date.today.return_value = date(1970, 1, 1)
        mock_date.side_effect = lambda *args, **kw: date(*args, **kw)

        file_dir = pathlib.Path(__file__).parent.resolve()
        input_dir = os.path.join(file_dir, 'testdata')
        output_dir = os.path.join(file_dir, 'out')
        geo_dir = os.path.join(input_dir, 'geography/geography.csv')

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', output_dir, '-g', geo_dir]):
            ons_csv_to_ctb_json_main.main()

        with open(os.path.join(output_dir, FILENAME_SERVICE)) as f:
            service_metadata = json.load(f)
        with open(os.path.join(file_dir, 'expected/service-metadata.json')) as f:
            expected_service_metadata = json.load(f)
        self.assertEqual(service_metadata, expected_service_metadata)

        with open(os.path.join(output_dir, FILENAME_DATASET)) as f:
            dataset_metadata = json.load(f)
        with open(os.path.join(file_dir, 'expected/dataset-metadata.json')) as f:
            expected_dataset_metadata = json.load(f)
        self.assertEqual(dataset_metadata, expected_dataset_metadata)

        with open(os.path.join(output_dir, FILENAME_TABLES)) as f:
            table_metadata = json.load(f)
        with open(os.path.join(file_dir, 'expected/table-metadata.json')) as f:
            expected_table_metadata = json.load(f)
        self.assertEqual(table_metadata, expected_table_metadata)

    @unittest.mock.patch('ons_csv_to_ctb_json_main.date')
    def test_no_geography_file(self, mock_date):
        """Generate JSON from source CSV without a geography file and compare it with expected values."""
        mock_date.today.return_value = date(1970, 1, 1)
        mock_date.side_effect = lambda *args, **kw: date(*args, **kw)

        file_dir = pathlib.Path(__file__).parent.resolve()
        input_dir = os.path.join(file_dir, 'testdata')
        output_dir = os.path.join(file_dir, 'out')

        with unittest.mock.patch('sys.argv', ['test', '-i', input_dir, '-o', output_dir,
                                              '-m', 'no-geo', '-b', '2', '-p', 't']):
            ons_csv_to_ctb_json_main.main()

        with open(os.path.join(output_dir, FILENAME_SERVICE_NO_GEO)) as f:
            service_metadata = json.load(f)
        with open(os.path.join(file_dir, 'expected/service-metadata.json')) as f:
            expected_service_metadata = json.load(f)
        self.assertEqual(service_metadata, expected_service_metadata)

        with open(os.path.join(output_dir, FILENAME_DATASET_NO_GEO)) as f:
            dataset_metadata = json.load(f)
        with open(os.path.join(file_dir, 'expected/dataset-metadata-no-geo.json')) as f:
            expected_dataset_metadata = json.load(f)
        self.assertEqual(dataset_metadata, expected_dataset_metadata)

        with open(os.path.join(output_dir, FILENAME_TABLES_NO_GEO)) as f:
            table_metadata = json.load(f)
        with open(os.path.join(file_dir, 'expected/table-metadata.json')) as f:
            expected_table_metadata = json.load(f)
        self.assertEqual(table_metadata, expected_table_metadata)
| 52.116279 | 105 | 0.644355 | 915 | 6,723 | 4.443716 | 0.114754 | 0.035416 | 0.059026 | 0.058534 | 0.859321 | 0.831038 | 0.821446 | 0.797098 | 0.797098 | 0.759223 | 0 | 0.016634 | 0.230998 | 6,723 | 128 | 106 | 52.523438 | 0.769826 | 0.052953 | 0 | 0.61165 | 0 | 0 | 0.14698 | 0.096199 | 0 | 0 | 0 | 0 | 0.126214 | 1 | 0.048544 | false | 0 | 0.067961 | 0 | 0.126214 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3737c797b7973e82cc4b2acbaf2d174d0040e57f | 2,422 | py | Python | epytope/Data/pssms/smm/mat/A_02_17_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smm/mat/A_02_17_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smm/mat/A_02_17_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | A_02_17_10 = {0: {'A': -0.013, 'C': 0.01, 'E': 0.013, 'D': 0.003, 'G': 0.013, 'F': -0.036, 'I': 0.034, 'H': -0.015, 'K': -0.002, 'M': -0.031, 'L': -0.021, 'N': 0.0, 'Q': 0.014, 'P': 0.0, 'S': -0.004, 'R': 0.008, 'T': 0.011, 'W': 0.004, 'V': 0.017, 'Y': -0.005}, 1: {'A': -0.0, 'C': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': -0.0, 'N': 0.0, 'Q': -0.0, 'P': 0.0, 'S': -0.0, 'R': 0.0, 'T': -0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0}, 2: {'A': -0.362, 'C': -0.082, 'E': -0.088, 'D': 0.032, 'G': 0.275, 'F': 0.22, 'I': 0.12, 'H': 0.0, 'K': 0.444, 'M': 0.052, 'L': -0.184, 'N': -0.131, 'Q': 0.0, 'P': 0.0, 'S': -0.441, 'R': 0.0, 'T': 0.272, 'W': 0.0, 'V': -0.217, 'Y': 0.09}, 3: {'A': 0.035, 'C': 0.05, 'E': 0.057, 'D': 0.052, 'G': -0.03, 'F': 0.101, 'I': -0.337, 'H': -0.094, 'K': 0.088, 'M': -0.06, 'L': 0.062, 'N': -0.206, 'Q': 0.0, 'P': 0.103, 'S': -0.139, 'R': 0.127, 'T': 0.144, 'W': -0.015, 'V': 0.061, 'Y': 0.0}, 4: {'A': -0.094, 'C': 0.013, 'E': -0.097, 'D': -0.138, 'G': 0.027, 'F': -0.009, 'I': 0.318, 'H': 0.326, 'K': 0.053, 'M': -0.018, 'L': 0.368, 'N': -0.141, 'Q': -0.029, 'P': -0.479, 'S': -0.28, 'R': -0.145, 'T': 0.202, 'W': 0.014, 'V': 0.133, 'Y': -0.022}, 5: {'A': 0.0, 'C': 0.0, 'E': -0.0, 'D': -0.0, 'G': 0.0, 'F': -0.0, 'I': 0.0, 'H': 0.0, 'K': -0.0, 'M': -0.0, 'L': 0.0, 'N': -0.0, 'Q': 0.0, 'P': -0.0, 'S': -0.0, 'R': 0.0, 'T': -0.0, 'W': 0.0, 'V': 0.0, 'Y': -0.0}, 6: {'A': 0.01, 'C': -0.057, 'E': 0.054, 'D': -0.002, 'G': -0.008, 'F': 0.029, 'I': 0.082, 'H': -0.003, 'K': 0.001, 'M': -0.061, 'L': 0.077, 'N': -0.045, 'Q': 0.015, 'P': 0.007, 'S': -0.055, 'R': -0.004, 'T': -0.012, 'W': 0.0, 'V': -0.028, 'Y': 0.0}, 7: {'A': -0.025, 'C': 0.034, 'E': 0.0, 'D': 0.004, 'G': -0.013, 'F': -0.085, 'I': 0.044, 'H': 0.007, 'K': 0.005, 'M': 0.045, 'L': 0.064, 'N': -0.011, 'Q': -0.016, 'P': 0.017, 'S': 0.034, 'R': -0.003, 'T': -0.045, 'W': -0.068, 'V': 0.009, 'Y': 0.002}, 8: {'A': 
-0.061, 'C': 0.485, 'E': 0.264, 'D': 0.0, 'G': -0.152, 'F': -0.229, 'I': 0.286, 'H': 0.0, 'K': -0.224, 'M': -0.455, 'L': -0.158, 'N': 0.0, 'Q': -0.193, 'P': -0.025, 'S': -0.154, 'R': -0.348, 'T': 0.002, 'W': 0.515, 'V': 0.221, 'Y': 0.226}, 9: {'A': 0.526, 'C': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.155, 'F': -0.628, 'I': 0.053, 'H': 0.0, 'K': 0.0, 'M': -0.058, 'L': -0.638, 'N': 0.0, 'Q': 0.0, 'P': 0.3, 'S': 0.0, 'R': 0.0, 'T': 0.333, 'W': 0.0, 'V': -0.044, 'Y': 0.0}, -1: {'con': 2.83174}} | 2,422 | 2,422 | 0.366226 | 618 | 2,422 | 1.430421 | 0.211974 | 0.149321 | 0.016968 | 0.022624 | 0.2681 | 0.184389 | 0.184389 | 0.184389 | 0.14819 | 0.14819 | 0 | 0.338645 | 0.170933 | 2,422 | 1 | 2,422 | 2,422 | 0.101594 | 0 | 0 | 0 | 0 | 0 | 0.08378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3764899e7acb55aa747b1cb1f51cef86daf0b012 | 38 | py | Python | naive_bayes_plus/__init__.py | Kalafinaian/python-naive_bayes_plus | 74c0d400589f72bf10f89f0b5f66ceac10f61d37 | [
"MIT"
] | 1 | 2019-06-03T10:29:52.000Z | 2019-06-03T10:29:52.000Z | naive_bayes_plus/__init__.py | Kalafinaian/python-naive_bayes_plus | 74c0d400589f72bf10f89f0b5f66ceac10f61d37 | [
"MIT"
] | null | null | null | naive_bayes_plus/__init__.py | Kalafinaian/python-naive_bayes_plus | 74c0d400589f72bf10f89f0b5f66ceac10f61d37 | [
"MIT"
] | null | null | null | from . naive_bayes_plus_code import *
| 19 | 37 | 0.815789 | 6 | 38 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8089994330b1d474c3431a352900f479e7326f77 | 6,792 | py | Python | gamd/langevin/total_boost_integrators.py | MiaoLab20/GaMD-OpenMM | 22c641b0a684cdd5c756f47aa6a64d8f962d65fc | [
"MIT"
] | 14 | 2021-05-28T21:09:41.000Z | 2022-01-25T08:47:51.000Z | gamd/langevin/total_boost_integrators.py | pablo-arantes/GaMD-OpenMM | 5cf53b1525f0b25f2a07d0fc29fa77d3e39455aa | [
"MIT"
] | 5 | 2021-04-12T15:15:28.000Z | 2021-04-12T16:18:45.000Z | gamd/langevin/total_boost_integrators.py | pablo-arantes/GaMD-OpenMM | 5cf53b1525f0b25f2a07d0fc29fa77d3e39455aa | [
"MIT"
] | 6 | 2021-09-07T10:25:19.000Z | 2021-11-07T17:57:51.000Z | """
gamd.py: Implements the GaMD integration method.
Portions copyright (c) 2020 University of Kansas
Authors: Matthew Copeland, Yinglong Miao
Contributors: Lane Votapka
"""
from __future__ import absolute_import
__author__ = "Matthew Copeland"
__version__ = "1.0"
from simtk import unit as unit
from abc import ABC
from gamd.langevin.base_integrator import GroupBoostIntegrator
from ..stage_integrator import BoostType
from ..stage_integrator import BoostMethod
from ..stage_integrator import ComputeType
class TotalBoostIntegrator(GroupBoostIntegrator, ABC):
def __init__(self, dt, ntcmdprep, ntcmd, ntebprep, nteb, nstlim, ntave,
sigma0, collision_rate,
temperature, restart_filename):
"""
Parameters
----------
:param dt: The Amount of time between each time step.
:param ntcmdprep: The number of conventional MD steps for system equilibration.
:param ntcmd: The total number of conventional MD steps (including ntcmdprep). (must be a multiple of ntave)
:param ntebprep: The number of GaMD pre-equilibration steps.
:param nteb: The number of GaMD equilibration steps (including ntebprep). (must be a multiple of ntave)
:param nstlim: The total number of simulation steps.
:param ntave: The number of steps used to smooth the average and sigma of potential energy (corresponds to a
running average window size).
:param sigma0: The upper limit of the standard deviation of the potential boost that allows for
accurate reweighting.
:param collision_rate: Collision rate (gamma) compatible with 1/picoseconds, default: 1.0/unit.picoseconds
:param temperature: "Bath" temperature value compatible with units.kelvin, default: 298.15*unit.kelvin
:param restart_filename: The file name of the restart file. (default=None indicates new simulation.)
"""
group_dict = {}
super(TotalBoostIntegrator, self).__init__(group_dict,
BoostType.TOTAL,
BoostMethod.TOTAL,
dt, ntcmdprep, ntcmd,
ntebprep, nteb, nstlim,
ntave, collision_rate, temperature,
restart_filename)
self.addGlobalVariable("sigma0_" + BoostType.TOTAL.value, sigma0)
class LowerBoundIntegrator(TotalBoostIntegrator):
def __init__(self, dt=2.0 * unit.femtoseconds, ntcmdprep=200000, ntcmd=1000000, ntebprep=200000, nteb=1000000,
nstlim=3000000, ntave=50000, sigma0=6.0 * unit.kilocalories_per_mole,
collision_rate=1.0 / unit.picoseconds, temperature=298.15 * unit.kelvin, restart_filename=None):
"""
Parameters
----------
:param dt: The Amount of time between each time step.
:param ntcmdprep: The number of conventional MD steps for system equilibration.
:param ntcmd: The total number of conventional MD steps (including ntcmdprep). (must be a multiple of ntave)
:param ntebprep: The number of GaMD pre-equilibration steps.
:param nteb: The number of GaMD equilibration steps (including ntebprep). (must be a multiple of ntave)
:param nstlim: The total number of simulation steps.
:param ntave: The number of steps used to smooth the average and sigma of potential energy (corresponds to a
running average window size).
:param sigma0: The upper limit of the standard deviation of the potential boost that allows for
accurate reweighting.
:param collision_rate: Collision rate (gamma) compatible with 1/picoseconds, default: 1.0/unit.picoseconds
:param temperature: "Bath" temperature value compatible with units.kelvin, default: 298.15*unit.kelvin
:param restart_filename: The file name of the restart file. (default=None indicates new simulation.)
"""
super(LowerBoundIntegrator, self).__init__(dt, ntcmdprep, ntcmd, ntebprep, nteb, nstlim, ntave, sigma0,
collision_rate, temperature, restart_filename)
def _calculate_threshold_energy_and_effective_harmonic_constant(
self, compute_type):
super()._lower_bound_calculate_threshold_energy_and_effective_harmonic_constant(
compute_type)
class UpperBoundIntegrator(TotalBoostIntegrator):
def __init__(self, dt=2.0 * unit.femtoseconds, ntcmdprep=200000,
ntcmd=1000000, ntebprep=200000, nteb=1000000,
nstlim=3000000, ntave=50000, sigma0=6.0 * unit.kilocalories_per_mole,
collision_rate=1.0 / unit.picoseconds,
temperature=298.15 * unit.kelvin, restart_filename=None):
"""
Parameters
----------
:param dt: The Amount of time between each time step.
:param ntcmdprep: The number of conventional MD steps for system equilibration.
:param ntcmd: The total number of conventional MD steps (including ntcmdprep). (must be a multiple of ntave)
:param ntebprep: The number of GaMD pre-equilibration steps.
:param nteb: The number of GaMD equilibration steps (including ntebprep). (must be a multiple of ntave)
:param nstlim: The total number of simulation steps.
:param ntave: The number of steps used to smooth the average and sigma of potential energy (corresponds to a
running average window size).
:param sigma0: The upper limit of the standard deviation of the potential boost that allows for
accurate reweighting.
:param collision_rate: Collision rate (gamma) compatible with 1/picoseconds, default: 1.0/unit.picoseconds
:param temperature: "Bath" temperature value compatible with units.kelvin, default: 298.15*unit.kelvin
:param restart_filename: The file name of the restart file. (default=None indicates new simulation.)
"""
super(UpperBoundIntegrator, self).__init__(dt, ntcmdprep, ntcmd, ntebprep, nteb, nstlim, ntave, sigma0,
collision_rate, temperature, restart_filename)
def _calculate_threshold_energy_and_effective_harmonic_constant(
self, compute_type):
super()._upper_bound_calculate_threshold_energy_and_effective_harmonic_constant(
compute_type)
| 57.07563 | 120 | 0.64679 | 755 | 6,792 | 5.683444 | 0.189404 | 0.033559 | 0.030762 | 0.030762 | 0.838732 | 0.829643 | 0.829643 | 0.820555 | 0.820555 | 0.820555 | 0 | 0.028536 | 0.287986 | 6,792 | 118 | 121 | 57.559322 | 0.858768 | 0.523999 | 0 | 0.227273 | 0 | 0 | 0.009223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113636 | false | 0 | 0.159091 | 0 | 0.340909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8091041793aa10b11d3f081673123d9026f10fd5 | 201 | py | Python | tests/conftest.py | florianeinfalt/nukecontexts | 6fb78affcc9acbe0dc403b01ec48a8d2cbb2840b | [
"Apache-2.0"
] | 12 | 2017-03-16T11:01:14.000Z | 2021-02-13T16:33:53.000Z | tests/conftest.py | florianeinfalt/nukecontexts | 6fb78affcc9acbe0dc403b01ec48a8d2cbb2840b | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | florianeinfalt/nukecontexts | 6fb78affcc9acbe0dc403b01ec48a8d2cbb2840b | [
"Apache-2.0"
] | null | null | null | import nuke
import pytest
@pytest.fixture(scope='session')
def nuke():
import nuke
return nuke
@pytest.fixture(scope='session')
def node(nuke):
return nuke.nodes.Write(name='test_write')
| 16.75 | 46 | 0.716418 | 28 | 201 | 5.107143 | 0.464286 | 0.13986 | 0.251748 | 0.34965 | 0.391608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149254 | 201 | 11 | 47 | 18.272727 | 0.836257 | 0 | 0 | 0.444444 | 0 | 0 | 0.119403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.333333 | 0.111111 | 0.777778 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
80a1bf2465841590580e62b2f38a85463455eb95 | 241 | py | Python | hardware_usage_notifier/comparators/comparator.py | ovidiupw/HardwareUsageNotifier | b5f600fa66c1ede1a2337c4a39fc6ec8a209dcf5 | [
"MIT"
] | null | null | null | hardware_usage_notifier/comparators/comparator.py | ovidiupw/HardwareUsageNotifier | b5f600fa66c1ede1a2337c4a39fc6ec8a209dcf5 | [
"MIT"
] | null | null | null | hardware_usage_notifier/comparators/comparator.py | ovidiupw/HardwareUsageNotifier | b5f600fa66c1ede1a2337c4a39fc6ec8a209dcf5 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
class Comparator(ABC):
def __init__(self, reference_value):
super().__init__()
self.reference_value = reference_value
@abstractmethod
def compare(self, value):
pass
| 18.538462 | 46 | 0.672199 | 26 | 241 | 5.807692 | 0.538462 | 0.278146 | 0.225166 | 0.291391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244813 | 241 | 12 | 47 | 20.083333 | 0.82967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.125 | 0.125 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
03bf2cc9d2e6d90f9b85836b21dee91d8804d41d | 37 | py | Python | model/__init__.py | m-nobinur/d-markov-chatbot | 8ae6b9276765805edfeb0436a56b255ac2b895fe | [
"Apache-2.0"
] | null | null | null | model/__init__.py | m-nobinur/d-markov-chatbot | 8ae6b9276765805edfeb0436a56b255ac2b895fe | [
"Apache-2.0"
] | null | null | null | model/__init__.py | m-nobinur/d-markov-chatbot | 8ae6b9276765805edfeb0436a56b255ac2b895fe | [
"Apache-2.0"
] | null | null | null | from .markov_chain import MarkovModel | 37 | 37 | 0.891892 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
03d499b75f45194b8602a21a4341321a50781fcb | 30 | py | Python | clrnet/models/heads/__init__.py | Turoad/CLRNet | 51e082db12973943bddefd76fd0d431fcb3350ff | [
"Apache-2.0"
] | 18 | 2022-03-16T07:29:19.000Z | 2022-03-31T07:05:37.000Z | clrnet/models/heads/__init__.py | Turoad/CLRNet | 51e082db12973943bddefd76fd0d431fcb3350ff | [
"Apache-2.0"
] | 1 | 2022-03-16T11:47:13.000Z | 2022-03-17T10:15:25.000Z | clrnet/models/heads/__init__.py | Turoad/CLRNet | 51e082db12973943bddefd76fd0d431fcb3350ff | [
"Apache-2.0"
] | 1 | 2022-03-25T14:49:58.000Z | 2022-03-25T14:49:58.000Z | from .clr_head import CLRHead
| 15 | 29 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
03e79cb09c8db085c5f2ba648b6f73f2081a3de7 | 45 | py | Python | postfilter/__init__.py | jbudis/dante | 90177c33825d5f9ce3fba5463092fbcf20b72fe2 | [
"Apache-2.0"
] | 4 | 2018-09-28T14:50:47.000Z | 2021-08-09T12:46:12.000Z | postfilter/__init__.py | jbudis/dante | 90177c33825d5f9ce3fba5463092fbcf20b72fe2 | [
"Apache-2.0"
] | 6 | 2019-01-02T13:08:31.000Z | 2021-03-25T21:45:40.000Z | postfilter/__init__.py | jbudis/dante | 90177c33825d5f9ce3fba5463092fbcf20b72fe2 | [
"Apache-2.0"
] | 1 | 2017-12-12T10:38:26.000Z | 2017-12-12T10:38:26.000Z | from postfilter.Postfilter import Postfilter
| 22.5 | 44 | 0.888889 | 5 | 45 | 8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
206c2d73b4d388de183121722ffd8cfebe46af57 | 2,748 | py | Python | backend/tests/functional/order/test_insert.py | willrp/willorders-ws | de0757d8888dab41095c93500a6a88c813755530 | [
"MIT"
] | null | null | null | backend/tests/functional/order/test_insert.py | willrp/willorders-ws | de0757d8888dab41095c93500a6a88c813755530 | [
"MIT"
] | null | null | null | backend/tests/functional/order/test_insert.py | willrp/willorders-ws | de0757d8888dab41095c93500a6a88c813755530 | [
"MIT"
] | null | null | null | import requests
from uuid import uuid4
from backend.model import Order, Product, OrderProduct
from backend.util.response.error import ErrorSchema
from backend.util.slug import uuid_to_slug
def test_insert(domain_url, db_perm_session, token_session, prod_list):
assert len(db_perm_session.query(Order).all()) == 0
assert len(db_perm_session.query(Product).all()) == 0
assert len(db_perm_session.query(OrderProduct).all()) == 0
user_slug = uuid_to_slug(uuid4())
prod_id_list = [p.meta["id"] for p in prod_list]
item_list = [{"item_id": prod_id, "amount": 2} for prod_id in prod_id_list]
response = token_session.put(
domain_url + "/api/order/insert",
json={"user_slug": user_slug, "item_list": item_list}
)
data = response.json()
assert data == {}
assert response.status_code == 201
assert len(db_perm_session.query(Order).all()) == 1
assert len(db_perm_session.query(Product).all()) == 5
assert len(db_perm_session.query(OrderProduct).all()) == 5
response = token_session.put(
domain_url + "/api/order/insert",
json={"user_slug": user_slug, "item_list": item_list}
)
data = response.json()
assert data == {}
assert response.status_code == 201
assert len(db_perm_session.query(Order).all()) == 2
assert len(db_perm_session.query(Product).all()) == 5
assert len(db_perm_session.query(OrderProduct).all()) == 10
def test_insert_not_registered(domain_url, db_perm_session, token_session):
assert len(db_perm_session.query(Order).all()) == 0
assert len(db_perm_session.query(Product).all()) == 0
assert len(db_perm_session.query(OrderProduct).all()) == 0
user_slug = uuid_to_slug(uuid4())
bad_item_list = [{"item_id": str(uuid4()), "amount": 2} for i in range(3)]
response = token_session.put(
domain_url + "/api/order/insert",
json={"user_slug": user_slug, "item_list": bad_item_list}
)
data = response.json()
ErrorSchema().load(data)
assert response.status_code == 400
assert data["error"].find("not registered") != -1
assert len(db_perm_session.query(Order).all()) == 0
assert len(db_perm_session.query(Product).all()) == 0
assert len(db_perm_session.query(OrderProduct).all()) == 0
def test_insert_unauthorized(domain_url, prod_list):
user_slug = uuid_to_slug(uuid4())
prod_id_list = [p.meta["id"] for p in prod_list]
item_list = [{"item_id": prod_id, "amount": 2} for prod_id in prod_id_list]
response = requests.put(
domain_url + "/api/order/insert",
json={"user_slug": user_slug, "item_list": item_list}
)
data = response.json()
ErrorSchema().load(data)
assert response.status_code == 401
| 34.78481 | 79 | 0.680495 | 400 | 2,748 | 4.4075 | 0.1575 | 0.057856 | 0.125355 | 0.127623 | 0.804311 | 0.804311 | 0.804311 | 0.764606 | 0.764606 | 0.764606 | 0 | 0.016829 | 0.178312 | 2,748 | 78 | 80 | 35.230769 | 0.76395 | 0 | 0 | 0.633333 | 0 | 0 | 0.073508 | 0 | 0 | 0 | 0 | 0 | 0.366667 | 1 | 0.05 | false | 0 | 0.083333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20aa3b03862b097b1abf2937cc7a50df9c14a985 | 96 | py | Python | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/plugins/do_example.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/plugins/do_example.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/plugins/do_example.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/d0/0e/90/d76d75687ba93ee8b892ba2d4d602830d4b58add76cc554f9bb79857f0 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20ae2d161087537be7432384036c9810d6fc42f4 | 373 | py | Python | toontown/building/DistributedBrutalCFOElevator.py | MasterLoopyBM/Toontown | ebed7fc3f2ef06a529cf02eda7ab46361aceef9d | [
"MIT"
] | 1 | 2020-02-07T18:15:12.000Z | 2020-02-07T18:15:12.000Z | toontown/building/DistributedBrutalCFOElevator.py | journeyfan/toontown-journey | 7a4db507e5c1c38a014fc65588086d9655aaa5b4 | [
"MIT"
] | null | null | null | toontown/building/DistributedBrutalCFOElevator.py | journeyfan/toontown-journey | 7a4db507e5c1c38a014fc65588086d9655aaa5b4 | [
"MIT"
] | 2 | 2020-09-26T20:37:18.000Z | 2020-11-15T20:55:33.000Z | from toontown.building.DistributedCFOElevator import DistributedCFOElevator
from toontown.toonbase import TTLocalizer
class DistributedBrutalCFOElevator(DistributedCFOElevator):
notify = directNotify.newCategory('DistributedBrutalCFOElevator')
def setupElevator(self):
pass
def getDestName(self):
return TTLocalizer.ElevatorBrutalCashBotBoss | 31.083333 | 75 | 0.812332 | 28 | 373 | 10.821429 | 0.678571 | 0.079208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136729 | 373 | 12 | 76 | 31.083333 | 0.940994 | 0 | 0 | 0 | 0 | 0 | 0.074866 | 0.074866 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.125 | 0.25 | 0.125 | 0.875 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
20dd757907a2dae7048c81851f3774ad385ce007 | 83 | py | Python | code/environment/__init__.py | OnlinePredictor/AdaptiveOnlineTimeSeriesPrediction | 712c461c20bec21e7f7569c94942441aaee0e37c | [
"Apache-2.0"
] | 1 | 2020-10-08T10:32:59.000Z | 2020-10-08T10:32:59.000Z | code/environment/__init__.py | OnlinePredictor/AdaptiveOnlineTimeSeriesPrediction | 712c461c20bec21e7f7569c94942441aaee0e37c | [
"Apache-2.0"
] | null | null | null | code/environment/__init__.py | OnlinePredictor/AdaptiveOnlineTimeSeriesPrediction | 712c461c20bec21e7f7569c94942441aaee0e37c | [
"Apache-2.0"
] | null | null | null | # utils init file
import environment.RealCore
import environment.RealExperiment | 20.75 | 33 | 0.831325 | 9 | 83 | 7.666667 | 0.777778 | 0.492754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13253 | 83 | 4 | 33 | 20.75 | 0.958333 | 0.180723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
455029e2e5a0e7ba56285c7fec4c1e32d1e9bcc1 | 82 | py | Python | hnews.py | HyphenGroup/hive | e3cae481f4c25ce8956ac19179fb540e01ef7422 | [
"MIT"
] | 6 | 2021-03-18T20:44:20.000Z | 2021-09-17T19:21:31.000Z | hnews.py | HyphenGroup/hive | e3cae481f4c25ce8956ac19179fb540e01ef7422 | [
"MIT"
] | 7 | 2021-05-03T06:13:30.000Z | 2021-11-08T01:18:47.000Z | hnews.py | HyphenGroup/hive | e3cae481f4c25ce8956ac19179fb540e01ef7422 | [
"MIT"
] | 6 | 2021-03-18T20:44:19.000Z | 2021-09-29T22:46:34.000Z | def hnews():
print('This feature isnt available yet. Please check back soon.') | 41 | 69 | 0.719512 | 12 | 82 | 4.916667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 82 | 2 | 69 | 41 | 0.867647 | 0 | 0 | 0 | 0 | 0 | 0.674699 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
45546910d18d0ff80f4d2e31c03df8bb8659d024 | 82 | py | Python | flusstools/lidartools/__init__.py | Ecohydraulics/flusstools | ab356788846dee089af146e924822dfafd096828 | [
"BSD-3-Clause"
] | 1 | 2020-11-13T09:22:12.000Z | 2020-11-13T09:22:12.000Z | flusstools/lidartools/__init__.py | Ecohydraulics/flusstools | ab356788846dee089af146e924822dfafd096828 | [
"BSD-3-Clause"
] | null | null | null | flusstools/lidartools/__init__.py | Ecohydraulics/flusstools | ab356788846dee089af146e924822dfafd096828 | [
"BSD-3-Clause"
] | null | null | null | from .laspy_main import *
from .lastools_core import *
from .hdf_analyst import *
| 20.5 | 28 | 0.780488 | 12 | 82 | 5.083333 | 0.666667 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 82 | 3 | 29 | 27.333333 | 0.871429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
456220c247b1f9d1659b884a6fde622534cb6060 | 74 | py | Python | src/tests/import_parent.py | vschweitzer/rad_rework | d1c164914827ff5f44d22911b92953f6f4e151b6 | [
"MIT"
] | null | null | null | src/tests/import_parent.py | vschweitzer/rad_rework | d1c164914827ff5f44d22911b92953f6f4e151b6 | [
"MIT"
] | null | null | null | src/tests/import_parent.py | vschweitzer/rad_rework | d1c164914827ff5f44d22911b92953f6f4e151b6 | [
"MIT"
] | null | null | null | import sys
import os
sys.path.insert(1, os.path.join(sys.path[0], ".."))
| 14.8 | 51 | 0.662162 | 14 | 74 | 3.5 | 0.571429 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.108108 | 74 | 4 | 52 | 18.5 | 0.712121 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
456d1f7664f042e0b2e5211cccb651464da9e8be | 25 | py | Python | CloudTrim/__init__.py | CodeWithAlvin/CloudTrim | 6e5e18c68a6fcbcfcace05da757c9bddb7b80e6f | [
"MIT"
] | 3 | 2021-11-06T22:18:48.000Z | 2021-11-07T12:46:51.000Z | build/lib/CloudTrim/__init__.py | CodeWithAlvin/CloudTrim | 6e5e18c68a6fcbcfcace05da757c9bddb7b80e6f | [
"MIT"
] | null | null | null | build/lib/CloudTrim/__init__.py | CodeWithAlvin/CloudTrim | 6e5e18c68a6fcbcfcace05da757c9bddb7b80e6f | [
"MIT"
] | null | null | null | from .CloudTrim import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
45bafc0d738dc468a612d3091143c5cd19e01ba7 | 43 | py | Python | apps/cms/__init__.py | kaixiang1992/bbs | b195bd287267d1facf2253c79727ed8bd4d0ad32 | [
"MIT"
] | null | null | null | apps/cms/__init__.py | kaixiang1992/bbs | b195bd287267d1facf2253c79727ed8bd4d0ad32 | [
"MIT"
] | 5 | 2021-03-19T02:01:00.000Z | 2022-03-11T23:52:57.000Z | apps/cms/__init__.py | kaixiang1992/bbs | b195bd287267d1facf2253c79727ed8bd4d0ad32 | [
"MIT"
] | null | null | null | from .views import bp
import apps.cms.hooks | 21.5 | 21 | 0.813953 | 8 | 43 | 4.375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 43 | 2 | 22 | 21.5 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b30ddf423e4450a0e60f000fc546f9eb44fe21ab | 46 | py | Python | src/test/resources/fichier.test.1.py | t3ctechnologies/Anchel_Core | 6c3f9c5787d181453dbebfdeb98aa8e5669a85ee | [
"MIT"
] | null | null | null | src/test/resources/fichier.test.1.py | t3ctechnologies/Anchel_Core | 6c3f9c5787d181453dbebfdeb98aa8e5669a85ee | [
"MIT"
] | null | null | null | src/test/resources/fichier.test.1.py | t3ctechnologies/Anchel_Core | 6c3f9c5787d181453dbebfdeb98aa8e5669a85ee | [
"MIT"
] | null | null | null | #!/usr/bin/python
import urllib2
import sys
| 7.666667 | 17 | 0.73913 | 7 | 46 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.152174 | 46 | 5 | 18 | 9.2 | 0.846154 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b343d92bf73159561b45df0c1661dc8d5d6a0e6c | 37 | py | Python | 6 kyu/Thinkful Object Drills Puzzlebox.py | mwk0408/codewars_solutions | 9b4f502b5f159e68024d494e19a96a226acad5e5 | [
"MIT"
] | 6 | 2020-09-03T09:32:25.000Z | 2020-12-07T04:10:01.000Z | 6 kyu/Thinkful Object Drills Puzzlebox.py | mwk0408/codewars_solutions | 9b4f502b5f159e68024d494e19a96a226acad5e5 | [
"MIT"
] | 1 | 2021-12-13T15:30:21.000Z | 2021-12-13T15:30:21.000Z | 6 kyu/Thinkful Object Drills Puzzlebox.py | mwk0408/codewars_solutions | 9b4f502b5f159e68024d494e19a96a226acad5e5 | [
"MIT"
] | null | null | null | def answer(puzzlebox):
return 42 | 18.5 | 22 | 0.702703 | 5 | 37 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.216216 | 37 | 2 | 23 | 18.5 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b36fc546965d2f174209856244fa881ff24fbcc1 | 106 | py | Python | cleanly/__init__.py | russelljk/cleanly | 81be2979786defb40d1475d3bc9816b63e2d471d | [
"Zlib"
] | 1 | 2020-10-21T17:28:59.000Z | 2020-10-21T17:28:59.000Z | cleanly/__init__.py | russelljk/cleanly | 81be2979786defb40d1475d3bc9816b63e2d471d | [
"Zlib"
] | null | null | null | cleanly/__init__.py | russelljk/cleanly | 81be2979786defb40d1475d3bc9816b63e2d471d | [
"Zlib"
] | null | null | null | from cleanly.backend.html5lib_backend import cleanup_html, sanitize_html, sanitizer_manager, run_sanitizer | 106 | 106 | 0.896226 | 14 | 106 | 6.428571 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.056604 | 106 | 1 | 106 | 106 | 0.89 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b3728097217e8a412ade4720035fe72e3dac7424 | 5,169 | py | Python | finishedgames/catalogsources/migrations/0001_initial.py | Kartones/finished-games | 9b1f86aee3ea26be50b666887e3bdecad2c8f757 | [
"Unlicense"
] | 7 | 2019-01-23T20:09:00.000Z | 2021-12-19T17:50:48.000Z | finishedgames/catalogsources/migrations/0001_initial.py | Kartones/finished-games | 9b1f86aee3ea26be50b666887e3bdecad2c8f757 | [
"Unlicense"
] | 2 | 2019-08-11T11:16:00.000Z | 2019-09-04T00:07:04.000Z | finishedgames/catalogsources/migrations/0001_initial.py | Kartones/finished-games | 9b1f86aee3ea26be50b666887e3bdecad2c8f757 | [
"Unlicense"
] | 2 | 2019-01-23T20:09:05.000Z | 2020-09-06T10:43:25.000Z | # Generated by Django 2.1.7 on 2019-02-26 23:23
import django.core.validators
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
("core", "0002_usergame_no_longer_owned"),
]
operations = [
migrations.CreateModel(
name="FetchedGame",
fields=[
("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
("name", models.CharField(db_index=True, max_length=200, unique=True, verbose_name="Name")),
(
"publish_date",
models.IntegerField(
validators=[
django.core.validators.MinValueValidator(1970),
django.core.validators.MaxValueValidator(3000),
],
verbose_name="Year first published",
),
),
("dlc_or_expansion", models.BooleanField(default=False, verbose_name="DLC/Expansion")),
("source_id", models.CharField(db_index=True, max_length=50, verbose_name="Source identifier")),
(
"last_modified_date",
models.DateTimeField(
blank=True, db_index=True, default=None, null=True, verbose_name="Last data modification"
),
),
(
"source_game_id",
models.CharField(db_index=True, max_length=50, verbose_name="Source game identifier"),
),
("source_url", models.CharField(max_length=255, verbose_name="Resource source URI")),
(
"change_hash",
models.CharField(max_length=32, verbose_name="Marker to detect data changes after fetch"),
),
("hidden", models.BooleanField(db_index=True, default=False, verbose_name="Item hidden")),
(
"fg_game_id",
models.ForeignKey(
blank=True,
default=None,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="core.Game",
),
),
(
"parent_game",
models.ForeignKey(
blank=True,
default=None,
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="catalogsources.FetchedGame",
),
),
("platforms", models.ManyToManyField(to="core.Platform")),
],
options={"abstract": False},
),
migrations.CreateModel(
name="FetchedPlatform",
fields=[
("id", models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
("name", models.CharField(db_index=True, max_length=100, unique=True, verbose_name="Name")),
("shortname", models.CharField(default=None, max_length=40, unique=True, verbose_name="Shortname")),
(
"publish_date",
models.IntegerField(
validators=[
django.core.validators.MinValueValidator(1970),
django.core.validators.MaxValueValidator(3000),
],
verbose_name="Year published",
),
),
("source_id", models.CharField(db_index=True, max_length=50, verbose_name="Source identifier")),
(
"last_modified_date",
models.DateTimeField(
blank=True, db_index=True, default=None, null=True, verbose_name="Last data modification"
),
),
(
"source_platform_id",
models.CharField(db_index=True, max_length=50, verbose_name="Source platform identifier"),
),
("source_url", models.CharField(max_length=255, verbose_name="Resource source URI")),
(
"change_hash",
models.CharField(max_length=32, verbose_name="Marker to detect data changes after fetch"),
),
("hidden", models.BooleanField(db_index=True, default=False, verbose_name="Item hidden")),
(
"fg_platform",
models.ForeignKey(
blank=True,
default=None,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="core.Platform",
),
),
],
options={"abstract": False},
),
]
| 42.719008 | 116 | 0.474947 | 423 | 5,169 | 5.624113 | 0.252955 | 0.092476 | 0.046238 | 0.055486 | 0.75578 | 0.734763 | 0.708701 | 0.708701 | 0.708701 | 0.708701 | 0 | 0.020456 | 0.423099 | 5,169 | 120 | 117 | 43.075 | 0.777331 | 0.008706 | 0 | 0.637168 | 1 | 0 | 0.139399 | 0.010738 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.026549 | 0 | 0.061947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
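The `publish_date` fields in the migration above constrain the year with `MinValueValidator(1970)` and `MaxValueValidator(3000)`. A dependency-free sketch of the check those Django validators perform at model validation time (the `validate_year` helper is hypothetical, and Django raises `ValidationError` rather than `ValueError`):

```python
def validate_year(value, lo=1970, hi=3000):
    # Mirror of what MinValueValidator/MaxValueValidator enforce on
    # publish_date: reject values outside the inclusive [lo, hi] range.
    if not lo <= value <= hi:
        raise ValueError(f"{value} is outside [{lo}, {hi}]")
    return value
```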
64179924f4ea232800afe7c9640d932e7afc15f0 | 157 | py | Python | indexer/__init__.py | entn-at/AGAIN-VC | dbf94bf55882f897c312c7760cd892c51c93c9ab | [
"MIT"
] | 78 | 2020-10-24T02:55:59.000Z | 2022-03-08T03:09:13.000Z | indexer/__init__.py | entn-at/AGAIN-VC | dbf94bf55882f897c312c7760cd892c51c93c9ab | [
"MIT"
] | 15 | 2020-11-03T18:34:15.000Z | 2022-03-26T19:47:59.000Z | indexer/__init__.py | entn-at/AGAIN-VC | dbf94bf55882f897c312c7760cd892c51c93c9ab | [
"MIT"
] | 16 | 2020-11-09T21:17:53.000Z | 2022-03-17T04:07:26.000Z | import importlib
def get_indexer(config):
Indexer = importlib.import_module(f'.{config.indexer_name}', package=__package__).Indexer
return Indexer() | 31.4 | 93 | 0.770701 | 19 | 157 | 6 | 0.578947 | 0.22807 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11465 | 157 | 5 | 94 | 31.4 | 0.820144 | 0 | 0 | 0 | 0 | 0 | 0.139241 | 0.139241 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
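`get_indexer` above resolves the concrete `Indexer` class at runtime from `config.indexer_name` via `importlib`, so new indexers can be added as modules without touching the dispatch code. A self-contained sketch of the same dynamic-lookup pattern, demonstrated against the standard library since the package-local modules are not available here (the `load_class` helper is hypothetical):

```python
import importlib

def load_class(module_name, class_name):
    # Import a module by name at runtime and pull a class off it,
    # mirroring importlib.import_module(...).Indexer in get_indexer.
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Resolve a stdlib class dynamically instead of a package-local Indexer.
OrderedDict = load_class("collections", "OrderedDict")
instance = OrderedDict(a=1)
```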
ff295950e0e7b1976003341f7eced8b2cf3850e0 | 242 | py | Python | app/controllers/login.py | FelixWolf/mpbb | c18e8995f9903b7a0907542af391f6f2be4e711a | [
"Unlicense"
] | null | null | null | app/controllers/login.py | FelixWolf/mpbb | c18e8995f9903b7a0907542af391f6f2be4e711a | [
"Unlicense"
] | null | null | null | app/controllers/login.py | FelixWolf/mpbb | c18e8995f9903b7a0907542af391f6f2be4e711a | [
"Unlicense"
] | null | null | null | from app import app, render_template
from flask import request
@app.route('/login')
def showLogin():
return render_template("login.htm")
@app.route('/login', methods=['POST'])
def doLogin():
return render_template("login_sent.htm")
| 22 | 44 | 0.72314 | 33 | 242 | 5.181818 | 0.515152 | 0.245614 | 0.152047 | 0.292398 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123967 | 242 | 10 | 45 | 24.2 | 0.806604 | 0 | 0 | 0 | 0 | 0 | 0.161157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
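`doLogin` above renders a template without reading the submitted form, even though `request` is imported. For reference, a stdlib-only WSGI sketch of the same GET/POST split on `/login`, showing where the posted fields would be read (the `username` field and response strings are hypothetical; in the Flask app this would be `request.form.get(...)`):

```python
from urllib.parse import parse_qs

def login_app(environ, start_response):
    # Dispatch on path and method, like the two Flask routes above.
    if environ.get("PATH_INFO") != "/login":
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    if environ.get("REQUEST_METHOD") == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(size).decode()
        user = parse_qs(body).get("username", [""])[0]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return ["login sent for {}".format(user).encode()]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"login form"]
```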
ff58a9cf376bb048d0ceb5ce59359b187453ca59 | 39 | py | Python | simpleflow/job/__init__.py | nstott/simpleflow | 483602deb745a09b59ad6e24052dd5096c54fad2 | [
"MIT"
] | 69 | 2015-02-24T00:49:40.000Z | 2022-02-05T02:35:04.000Z | simpleflow/job/__init__.py | nstott/simpleflow | 483602deb745a09b59ad6e24052dd5096c54fad2 | [
"MIT"
] | 295 | 2015-02-06T11:02:00.000Z | 2022-03-21T11:01:34.000Z | simpleflow/job/__init__.py | nstott/simpleflow | 483602deb745a09b59ad6e24052dd5096c54fad2 | [
"MIT"
] | 27 | 2015-08-31T22:14:42.000Z | 2022-02-08T07:25:01.000Z | from .k8s import KubernetesJob # noqa
| 19.5 | 38 | 0.769231 | 5 | 39 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.179487 | 39 | 1 | 39 | 39 | 0.90625 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ffadb55a649a69a18f431752c07f8b8d0e378827 | 21,639 | py | Python | reviewboard/scmtools/tests/test_repository.py | pombredanne/reviewboard | 15f1d7236ec7a5cb4778ebfeb8b45d13a46ac71d | [
"MIT"
] | null | null | null | reviewboard/scmtools/tests/test_repository.py | pombredanne/reviewboard | 15f1d7236ec7a5cb4778ebfeb8b45d13a46ac71d | [
"MIT"
] | null | null | null | reviewboard/scmtools/tests/test_repository.py | pombredanne/reviewboard | 15f1d7236ec7a5cb4778ebfeb8b45d13a46ac71d | [
"MIT"
] | null | null | null | import os
import kgb
from django.contrib.auth.models import AnonymousUser
from django.core.cache import cache
from django.core.exceptions import ValidationError
from djblets.testing.decorators import add_fixtures
from reviewboard.scmtools.core import FileLookupContext
from reviewboard.scmtools.models import Repository, Tool
from reviewboard.scmtools.signals import (checked_file_exists,
checking_file_exists,
fetched_file, fetching_file)
from reviewboard.testing.testcase import TestCase
class RepositoryTests(kgb.SpyAgency, TestCase):
"""Unit tests for Repository operations."""
fixtures = ['test_scmtools']
def setUp(self):
super(RepositoryTests, self).setUp()
self.local_repo_path = os.path.join(os.path.dirname(__file__),
'..', 'testdata', 'git_repo')
self.repository = Repository.objects.create(
name='Git test repo',
path=self.local_repo_path,
tool=Tool.objects.get(name='Git'))
def test_archive(self):
"""Testing Repository.archive"""
repository1 = self.repository
repository1.archive()
self.assertTrue(repository1.name.startswith('ar:Git test repo:'))
self.assertTrue(repository1.archived)
self.assertFalse(repository1.public)
self.assertIsNotNone(repository1.archived_timestamp)
repository2 = Repository.objects.get(pk=repository1.pk)
self.assertEqual(repository2.name,
repository1.name)
self.assertEqual(repository2.archived,
repository1.archived)
self.assertEqual(repository2.public,
repository1.public)
self.assertEqual(repository2.archived_timestamp,
repository1.archived_timestamp)
def test_archive_no_save(self):
"""Testing Repository.archive with save=False"""
repository1 = self.repository
repository1.archive(save=False)
self.assertTrue(repository1.name.startswith('ar:Git test repo:'))
self.assertTrue(repository1.archived)
self.assertFalse(repository1.public)
self.assertIsNotNone(repository1.archived_timestamp)
repository2 = Repository.objects.get(pk=repository1.pk)
self.assertNotEqual(repository2.name,
repository1.name)
self.assertNotEqual(repository2.archived,
repository1.archived)
self.assertNotEqual(repository2.public,
repository1.public)
self.assertNotEqual(repository2.archived_timestamp,
repository1.archived_timestamp)
def test_clean_without_conflict(self):
"""Testing Repository.clean without name/path conflicts"""
with self.assertNumQueries(1):
self.repository.clean()
def test_clean_with_name_conflict(self):
"""Testing Repository.clean with name conflict"""
repository = Repository(name=self.repository.name,
path='path/to/repo.git',
tool=self.repository.tool)
with self.assertRaises(ValidationError) as ctx:
with self.assertNumQueries(1):
repository.clean()
self.assertEqual(ctx.exception.message_dict, {
'name': ['A repository with this name already exists'],
})
def test_clean_with_path_conflict(self):
"""Testing Repository.clean with path conflict"""
repository = Repository(name='New test repo',
path=self.repository.path,
tool=self.repository.tool)
with self.assertRaises(ValidationError) as ctx:
with self.assertNumQueries(1):
repository.clean()
self.assertEqual(ctx.exception.message_dict, {
'path': ['A repository with this path already exists'],
})
def test_clean_with_name_and_path_conflict(self):
"""Testing Repository.clean with name and path conflict"""
repository = Repository(name=self.repository.name,
path=self.repository.path,
tool=self.repository.tool)
with self.assertRaises(ValidationError) as ctx:
with self.assertNumQueries(1):
repository.clean()
self.assertEqual(ctx.exception.message_dict, {
'name': ['A repository with this name already exists'],
'path': ['A repository with this path already exists'],
})
def test_clean_with_path_conflict_with_archived(self):
"""Testing Repository.clean with archived repositories ignored for
path conflict
"""
orig_repository = self.repository
orig_repository.archive()
repository = Repository(name='New test repo',
path=orig_repository.path,
tool=orig_repository.tool)
with self.assertNumQueries(1):
repository.clean()
def test_get_file_with_context(self):
"""Testing Repository.get_file with context="""
repository = self.repository
scmtool_cls = repository.scmtool_class
self.spy_on(scmtool_cls.get_file,
owner=scmtool_cls,
op=kgb.SpyOpReturn(b'data'))
context = FileLookupContext(base_commit_id='def456')
repository.get_file(path='readme',
revision='abc123',
context=context)
self.assertSpyCalledWith(
scmtool_cls.get_file,
'readme',
revision='abc123',
base_commit_id='def456',
context=context)
def test_get_file_caching(self):
"""Testing Repository.get_file caches result"""
path = 'readme'
revision = 'abc123'
base_commit_id = 'def456'
repository = self.repository
scmtool_cls = repository.scmtool_class
self.spy_on(scmtool_cls.get_file,
owner=scmtool_cls,
op=kgb.SpyOpReturn(b'file data'))
        # Two requests to the same path/revision should result in only
        # one call.
data1 = repository.get_file(path=path,
revision=revision)
data2 = repository.get_file(path=path,
revision=revision,
context=FileLookupContext())
self.assertIsInstance(data1, bytes)
self.assertIsInstance(data2, bytes)
self.assertEqual(data1, b'file data')
self.assertEqual(data1, data2)
self.assertSpyCallCount(scmtool_cls.get_file, 1)
self.assertSpyLastCalledWith(scmtool_cls.get_file,
path,
revision=revision,
base_commit_id=None)
# A base commit ID should result in a new call.
data3 = repository.get_file(path=path,
revision=revision,
base_commit_id=base_commit_id)
self.assertEqual(data3, data1)
self.assertSpyCallCount(scmtool_cls.get_file, 2)
self.assertSpyLastCalledWith(scmtool_cls.get_file,
path,
revision=revision,
base_commit_id=base_commit_id)
# Another fetch with the same base_commit_id will use the cached
# version, even if specified in a FileLookupContext.
context = FileLookupContext(base_commit_id=base_commit_id)
data4 = repository.get_file(path=path,
revision=revision,
context=context)
self.assertEqual(data4, data1)
self.assertSpyCallCount(scmtool_cls.get_file, 2)
def test_get_file_signals(self):
"""Testing Repository.get_file emits signals"""
def on_fetching_file(**kwargs):
pass
def on_fetched_file(**kwargs):
pass
repository = self.repository
fetching_file.connect(on_fetching_file, sender=repository)
fetched_file.connect(on_fetched_file, sender=repository)
self.spy_on(on_fetching_file)
self.spy_on(on_fetched_file)
path = 'readme'
revision = 'e965047'
base_commit_id = 'def456'
request = self.create_http_request()
context = FileLookupContext(request=request,
base_commit_id=base_commit_id)
repository.get_file(path=path,
revision=revision,
context=context)
self.assertSpyCalledWith(
on_fetching_file,
sender=repository,
path=path,
revision=revision,
base_commit_id=base_commit_id,
request=request,
context=context)
self.assertSpyCalledWith(
on_fetched_file,
sender=repository,
path=path,
revision=revision,
base_commit_id=base_commit_id,
request=request,
context=context,
data=b'Hello\n')
def test_get_file_exists_caching_when_exists(self):
"""Testing Repository.get_file_exists caches result when exists"""
path = 'readme'
revision = 'e965047'
base_commit_id = 'def456'
repository = self.repository
scmtool_cls = repository.scmtool_class
self.spy_on(scmtool_cls.file_exists,
owner=scmtool_cls,
op=kgb.SpyOpReturn(True))
        # Two requests to the same path/revision should result in only
        # one call.
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision))
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision,
context=FileLookupContext()))
self.assertSpyCallCount(scmtool_cls.file_exists, 1)
self.assertSpyLastCalledWith(scmtool_cls.file_exists,
path,
revision=revision,
base_commit_id=None)
# A base commit ID should result in a new call.
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision,
base_commit_id=base_commit_id))
self.assertSpyCallCount(scmtool_cls.file_exists, 2)
self.assertSpyLastCalledWith(scmtool_cls.file_exists,
path,
revision=revision,
base_commit_id=base_commit_id)
# Another check with the same base_commit_id will use the cached
# version, even if specified in a FileLookupContext.
context = FileLookupContext(base_commit_id=base_commit_id)
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision,
context=context))
self.assertSpyCallCount(scmtool_cls.file_exists, 2)
def test_get_file_exists_caching_when_not_exists(self):
"""Testing Repository.get_file_exists doesn't cache result when the
file does not exist
"""
path = 'readme'
revision = '12345'
base_commit_id = 'def456'
repository = self.repository
scmtool_cls = repository.scmtool_class
self.spy_on(scmtool_cls.file_exists,
owner=scmtool_cls,
op=kgb.SpyOpReturn(False))
context = FileLookupContext(base_commit_id=base_commit_id)
self.assertFalse(repository.get_file_exists(
path=path,
revision=revision))
self.assertFalse(repository.get_file_exists(
path=path,
revision=revision,
context=FileLookupContext()))
self.assertFalse(repository.get_file_exists(
path=path,
revision=revision,
base_commit_id=base_commit_id))
self.assertFalse(repository.get_file_exists(
path=path,
revision=revision,
context=context))
self.assertSpyCallCount(scmtool_cls.file_exists, 4)
self.assertSpyCalledWith(
scmtool_cls.file_exists.calls[0],
path,
revision=revision,
base_commit_id=None)
self.assertSpyCalledWith(
scmtool_cls.file_exists.calls[1],
path,
revision=revision,
base_commit_id=None)
self.assertSpyCalledWith(
scmtool_cls.file_exists.calls[2],
path,
revision=revision,
base_commit_id=base_commit_id)
self.assertSpyCalledWith(
scmtool_cls.file_exists.calls[3],
path,
revision=revision,
base_commit_id=base_commit_id,
context=context)
def test_get_file_exists_caching_with_fetched_file(self):
"""Testing Repository.get_file_exists uses get_file's cached result"""
path = 'readme'
revision = 'abc123'
base_commit_id = 'def456'
repository = self.repository
scmtool_cls = repository.scmtool_class
self.spy_on(scmtool_cls.get_file,
owner=scmtool_cls,
op=kgb.SpyOpReturn(b'file data'))
self.spy_on(scmtool_cls.file_exists,
owner=scmtool_cls,
op=kgb.SpyOpReturn(True))
        # These requests to the same path/revision should result in only
        # one call.
repository.get_file(path=path,
revision=revision)
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision))
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision,
context=FileLookupContext()))
self.assertSpyCallCount(scmtool_cls.get_file, 1)
self.assertSpyNotCalled(scmtool_cls.file_exists)
# A base commit ID should result in a new call, which should then
# persist for file checks.
repository.get_file(path=path,
revision=revision,
base_commit_id=base_commit_id)
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision,
base_commit_id=base_commit_id))
self.assertTrue(repository.get_file_exists(
path=path,
revision=revision,
context=FileLookupContext(base_commit_id=base_commit_id)))
self.assertSpyCallCount(scmtool_cls.get_file, 2)
self.assertSpyNotCalled(scmtool_cls.file_exists)
def test_get_file_exists_signals(self):
"""Testing Repository.get_file_exists emits signals"""
def on_checking(**kwargs):
pass
def on_checked(**kwargs):
pass
repository = self.repository
checking_file_exists.connect(on_checking, sender=repository)
checked_file_exists.connect(on_checked, sender=repository)
self.spy_on(on_checking)
self.spy_on(on_checked)
path = 'readme'
revision = 'e965047'
base_commit_id = 'def456'
request = self.create_http_request()
context = FileLookupContext(request=request,
base_commit_id=base_commit_id)
repository.get_file_exists(path=path,
revision=revision,
context=context)
self.assertSpyCalledWith(
on_checking,
sender=repository,
path=path,
revision=revision,
base_commit_id=base_commit_id,
request=request,
context=context)
self.assertSpyCalledWith(
on_checked,
sender=repository,
path=path,
revision=revision,
base_commit_id=base_commit_id,
request=request,
context=context)
def test_repository_name_with_255_characters(self):
"""Testing Repository.name with 255 characters"""
repository = self.create_repository(name='t' * 255)
self.assertEqual(len(repository.name), 255)
def test_is_accessible_by_with_public(self):
"""Testing Repository.is_accessible_by with public repository"""
user = self.create_user()
repository = self.create_repository()
self.assertTrue(repository.is_accessible_by(user))
self.assertTrue(repository.is_accessible_by(AnonymousUser()))
def test_is_accessible_by_with_public_and_hidden(self):
"""Testing Repository.is_accessible_by with public hidden repository"""
user = self.create_user()
repository = self.create_repository(visible=False)
self.assertTrue(repository.is_accessible_by(user))
self.assertTrue(repository.is_accessible_by(AnonymousUser()))
def test_is_accessible_by_with_private_and_not_member(self):
"""Testing Repository.is_accessible_by with private repository and
user not a member
"""
user = self.create_user()
repository = self.create_repository(public=False)
self.assertFalse(repository.is_accessible_by(user))
self.assertFalse(repository.is_accessible_by(AnonymousUser()))
def test_is_accessible_by_with_private_and_member(self):
"""Testing Repository.is_accessible_by with private repository and
user is a member
"""
user = self.create_user()
repository = self.create_repository(public=False)
repository.users.add(user)
self.assertTrue(repository.is_accessible_by(user))
def test_is_accessible_by_with_private_and_member_by_group(self):
"""Testing Repository.is_accessible_by with private repository and
user is a member by group
"""
user = self.create_user()
group = self.create_review_group(invite_only=True)
group.users.add(user)
repository = self.create_repository(public=False)
repository.review_groups.add(group)
self.assertTrue(repository.is_accessible_by(user))
def test_is_accessible_by_with_private_and_superuser(self):
"""Testing Repository.is_accessible_by with private repository and
user is a superuser
"""
user = self.create_user(is_superuser=True)
repository = self.create_repository(public=False)
self.assertTrue(repository.is_accessible_by(user))
def test_is_accessible_by_with_private_hidden_not_member(self):
"""Testing Repository.is_accessible_by with private hidden
repository and user not a member
"""
user = self.create_user()
repository = self.create_repository(public=False,
visible=False)
self.assertFalse(repository.is_accessible_by(user))
def test_is_accessible_by_with_private_hidden_and_member(self):
"""Testing Repository.is_accessible_by with private hidden
repository and user is a member
"""
user = self.create_user()
repository = self.create_repository(public=False,
visible=False)
repository.users.add(user)
self.assertTrue(repository.is_accessible_by(user))
def test_is_accessible_by_with_private_hidden_and_member_by_group(self):
"""Testing Repository.is_accessible_by with private hidden
repository and user is a member
"""
user = self.create_user()
group = self.create_review_group(invite_only=True)
group.users.add(user)
repository = self.create_repository(public=False,
visible=False)
repository.review_groups.add(group)
self.assertTrue(repository.is_accessible_by(user))
def test_is_accessible_by_with_private_hidden_and_superuser(self):
"""Testing Repository.is_accessible_by with private hidden
repository and superuser
"""
user = self.create_user(is_superuser=True)
repository = self.create_repository(public=False,
visible=False)
self.assertTrue(repository.is_accessible_by(user))
@add_fixtures(['test_users', 'test_site'])
def test_is_accessible_by_with_local_site_accessible(self):
"""Testing Repository.is_accessible_by with Local Site accessible by
user
"""
user = self.create_user()
repository = self.create_repository(with_local_site=True)
repository.local_site.users.add(user)
self.assertTrue(repository.is_accessible_by(user))
@add_fixtures(['test_users', 'test_site'])
def test_is_accessible_by_with_local_site_not_accessible(self):
"""Testing Repository.is_accessible_by with Local Site not accessible
by user
"""
user = self.create_user()
repository = self.create_repository(with_local_site=True)
self.assertFalse(repository.is_accessible_by(user))
self.assertFalse(repository.is_accessible_by(AnonymousUser()))
| 36.614213 | 79 | 0.612782 | 2,225 | 21,639 | 5.711461 | 0.085843 | 0.04328 | 0.051936 | 0.05288 | 0.836874 | 0.778565 | 0.749213 | 0.702628 | 0.655099 | 0.635112 | 0 | 0.008973 | 0.309857 | 21,639 | 590 | 80 | 36.676271 | 0.841971 | 0.110356 | 0 | 0.719512 | 0 | 0 | 0.028053 | 0 | 0 | 0 | 0 | 0 | 0.209756 | 1 | 0.078049 | false | 0.009756 | 0.02439 | 0 | 0.107317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
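The caching tests above assert that `get_file` hits the SCMTool backend once per distinct `(path, revision, base_commit_id)` combination and serves repeats from cache. A minimal sketch of that memoization behavior, detached from Review Board's actual implementation (`CachingRepo` and its counter are hypothetical stand-ins):

```python
class CachingRepo:
    # Cache fetched file contents keyed by path/revision/base commit id,
    # mirroring the behavior the get_file caching tests verify.
    def __init__(self, fetch):
        self._fetch = fetch  # backend callable, e.g. scmtool.get_file
        self._cache = {}
        self.backend_calls = 0

    def get_file(self, path, revision, base_commit_id=None):
        key = (path, revision, base_commit_id)
        if key not in self._cache:
            self.backend_calls += 1
            self._cache[key] = self._fetch(path, revision, base_commit_id)
        return self._cache[key]

repo = CachingRepo(lambda path, rev, base: b"file data")
repo.get_file("readme", "abc123")            # backend call 1
repo.get_file("readme", "abc123")            # served from cache
repo.get_file("readme", "abc123", "def456")  # new base commit: call 2
```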
4420df6dd261e9842fd58614488abad094728f9f | 3,732 | py | Python | _unittests/ut_filehelper/test_backup_file.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 18 | 2015-11-10T08:09:23.000Z | 2022-02-16T11:46:45.000Z | _unittests/ut_filehelper/test_backup_file.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 321 | 2015-06-14T21:34:28.000Z | 2021-11-28T17:10:03.000Z | _unittests/ut_filehelper/test_backup_file.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 10 | 2015-06-20T01:35:00.000Z | 2022-01-19T15:54:32.000Z | """
@brief test log(time=2s)
@author Xavier Dupre
"""
import sys
import os
import unittest
from pyquickhelper.loghelper import fLOG
from pyquickhelper.pycode import get_temp_folder, ExtTestCase
from pyquickhelper.filehelper import EncryptedBackup, FileTreeNode, TransferAPIFile
from pyquickhelper.filehelper.transfer_api import MockTransferAPI
class TestBackupFiles(ExtTestCase):
def test_backup(self):
fLOG(
__file__,
self._testMethodName,
OutputPrint=__name__ == "__main__")
try:
import Crypto as skip_
algo = "AES"
except ImportError:
algo = "fernet"
temp = get_temp_folder(__file__, "temp_backup_files")
root = os.path.normpath(os.path.join(temp, ".."))
fLOG(root)
api = MockTransferAPI()
ft = FileTreeNode(root, filter=".*[.]py", repository=False)
enc = EncryptedBackup(
key=b"unit" * 8,
file_tree_node=ft,
transfer_api=api,
file_status=os.path.join(temp, "status.txt"),
file_map=os.path.join(temp, "mapping.txt"),
root_local=os.path.join(temp, "..", ".."),
threshold_size=2000,
fLOG=fLOG,
algo=algo)
done, issue = enc.start_transfering()
self.assertNotEmpty(done)
self.assertEmpty(issue)
for k, v in sorted(enc.Mapping.items()):
fLOG(k, len(v.pieces), v)
enc.load_mapping()
outfile = os.path.join(temp, "backed_test_backup_file.py")
fpth = "ut_filehelper\\test_backup_file.py"
if not sys.platform.startswith("win"):
fpth = fpth.replace("\\", "/")
enc.retrieve(fpth, filename=outfile)
with open(outfile, "r") as f:
c2 = f.read()
with open(__file__.replace(".pyc", ".py"), "r") as f:
c1 = f.read()
self.assertEqual(c1, c2)
def test_backup_file(self):
fLOG(
__file__,
self._testMethodName,
OutputPrint=__name__ == "__main__")
try:
import Crypto as skip__
algo = "AES"
except ImportError:
algo = "fernet"
temp = get_temp_folder(__file__, "temp_backup_files_file")
root = os.path.normpath(os.path.join(temp, ".."))
fLOG(root)
api = TransferAPIFile(os.path.join(temp, "backup"))
ft = FileTreeNode(root, filter=".*[.]py", repository=False)
enc = EncryptedBackup(
key=b"unit" * 8,
file_tree_node=ft,
transfer_api=api,
file_status=os.path.join(temp, "status.txt"),
file_map=os.path.join(temp, "mapping.txt"),
root_local=os.path.join(temp, "..", ".."),
threshold_size=2000,
fLOG=fLOG,
algo=algo)
done, issue = enc.start_transfering()
self.assertNotEmpty(done)
self.assertEmpty(issue)
for k, v in sorted(enc.Mapping.items()):
fLOG(k, len(v.pieces), v)
enc.load_mapping()
outfile = os.path.join(temp, "backed_test_backup_file.py")
fpth = "ut_filehelper\\test_backup_file.py"
if not sys.platform.startswith("win"):
fpth = fpth.replace("\\", "/")
s = enc.retrieve(fpth, filename=outfile)
with open(outfile, "r") as f:
c2 = f.read()
with open(__file__.replace(".pyc", ".py"), "r") as f:
c1 = f.read()
self.assertEqual(c1, c2)
detemp = os.path.join(temp, "retrieved")
s = enc.retrieve_all(detemp, regex=".*[.]py")
self.assertNotEmpty(s)
if __name__ == "__main__":
unittest.main()
| 29.856 | 83 | 0.566452 | 423 | 3,732 | 4.763593 | 0.267139 | 0.041687 | 0.059553 | 0.083375 | 0.750372 | 0.750372 | 0.750372 | 0.750372 | 0.750372 | 0.750372 | 0 | 0.007291 | 0.301715 | 3,732 | 124 | 84 | 30.096774 | 0.765925 | 0.014469 | 0 | 0.757895 | 0 | 0 | 0.089646 | 0.038692 | 0 | 0 | 0 | 0 | 0.073684 | 1 | 0.021053 | false | 0 | 0.115789 | 0 | 0.147368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
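The test above passes `threshold_size=2000` to `EncryptedBackup` and later iterates `len(v.pieces)` in the mapping, i.e. large files are stored as multiple pieces. A sketch of the chunking idea only; the real class also encrypts each piece, and this fixed-size scheme is an assumption about how the threshold is applied:

```python
def split_into_pieces(data, threshold):
    # Break content into fixed-size chunks no larger than `threshold`,
    # the role threshold_size plays for EncryptedBackup above.
    return [data[i:i + threshold] for i in range(0, len(data), threshold)]

pieces = split_into_pieces(b"x" * 4500, 2000)  # 2000 + 2000 + 500 bytes
```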
442a0594b296560a4108e38219e460ececa696f0 | 189 | py | Python | core/api/controllers/__init__.py | zilohumberto/EcobiciStatistics | fbd6c6ef9806cb55385f8cbc1acc9ebbc32a9868 | [
"MIT"
] | null | null | null | core/api/controllers/__init__.py | zilohumberto/EcobiciStatistics | fbd6c6ef9806cb55385f8cbc1acc9ebbc32a9868 | [
"MIT"
] | null | null | null | core/api/controllers/__init__.py | zilohumberto/EcobiciStatistics | fbd6c6ef9806cb55385f8cbc1acc9ebbc32a9868 | [
"MIT"
] | null | null | null | from core.api.controllers.controller import Controller
from core.api.controllers.trip_controller import TripController
from core.api.controllers.station_controller import StationController
| 47.25 | 69 | 0.888889 | 23 | 189 | 7.217391 | 0.434783 | 0.144578 | 0.198795 | 0.39759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063492 | 189 | 3 | 70 | 63 | 0.937853 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
445bac4b0093adecdcffc85d6d7c054854b56b27 | 3,689 | py | Python | pkgs/conf-pkg/src/genie/libs/conf/te/iosxe/te.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 94 | 2018-04-30T20:29:15.000Z | 2022-03-29T13:40:31.000Z | pkgs/conf-pkg/src/genie/libs/conf/te/iosxe/te.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 67 | 2018-12-06T21:08:09.000Z | 2022-03-29T18:00:46.000Z | pkgs/conf-pkg/src/genie/libs/conf/te/iosxe/te.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 49 | 2018-06-29T18:59:03.000Z | 2022-03-10T02:07:59.000Z |
from abc import ABC
import warnings
from genie.conf.base.attributes import AttributesHelper
from genie.conf.base.cli import CliConfigBuilder
from genie.conf.base.config import CliConfig
import re
class Te(ABC):
class DeviceAttributes(ABC):
def build_config(self, links=None, apply=True, attributes=None, unconfig=False, **kwargs):
'''Device build config'''
assert not apply
attributes = AttributesHelper(self, attributes)
configurations = CliConfigBuilder(unconfig=unconfig)
if attributes.iswildcard:
# iosxe : mpls traffic-eng tunnels
configurations.append_line('mpls traffic-eng tunnels', \
unconfig_cmd = 'default mpls traffic-eng tunnels')
if attributes.value('advertise_expnull'):
configurations.append_line('mpls traffic-eng signalling advertise explicit-null')
# Add per-interface config
for sub, attributes2 in attributes.mapping_values('interface_attr', keys=self.interfaces, sort=True):
configurations.append_block(sub.build_config(apply=False, attributes=attributes2, unconfig=unconfig, **kwargs))
return CliConfig(device=self.device, unconfig=unconfig,
cli_config=configurations)
def build_unconfig(self, links=None, apply=True, attributes=None, **kwargs):
return self.build_config(links=links, apply=apply, attributes=attributes, unconfig=True, **kwargs)
class InterfaceAttributes(ABC):
def build_config(self, apply=True, attributes=None, unconfig=False, **kwargs):
'''Interface build config'''
assert not apply
attributes = AttributesHelper(self, attributes)
configurations = CliConfigBuilder(unconfig=unconfig)
with configurations.submode_context(attributes.format('interface {interface_name}', force=True)):
if attributes.iswildcard:
# iosxe : mpls traffic-eng tunnels
configurations.append_line('mpls traffic-eng tunnels', \
unconfig_cmd = 'default mpls traffic-eng tunnels')
return str(configurations)
def build_unconfig(self, apply=True, attributes=None, **kwargs):
return self.build_config(apply=apply, attributes=attributes, unconfig=True, **kwargs)
class Srlg(ABC):
class DeviceAttributes(ABC):
def build_config(self, apply=True, attributes=None, unconfig=False):
assert not apply
attributes = AttributesHelper(self, attributes)
configurations = CliConfigBuilder(unconfig=unconfig)
# TODO
pass
return CliConfig(device=self.device, unconfig=unconfig,
cli_config=configurations)
def build_unconfig(self, apply=True, attributes=None):
return self.build_config(apply=apply, attributes=attributes, unconfig=True)
class InterfaceAttributes(ABC):
def build_config(self, apply=True, attributes=None, unconfig=False):
attributes = AttributesHelper(self, attributes)
configurations = CliConfigBuilder(unconfig=unconfig)
# TODO
pass
return str(configurations)
def build_unconfig(self, apply=True, attributes=None):
return self.build_config(apply=apply, attributes=attributes, unconfig=True)
| 40.538462 | 127 | 0.619409 | 349 | 3,689 | 6.472779 | 0.212034 | 0.053564 | 0.067286 | 0.081452 | 0.760956 | 0.760956 | 0.744135 | 0.714918 | 0.658256 | 0.633466 | 0 | 0.000773 | 0.298997 | 3,689 | 90 | 128 | 40.988889 | 0.872776 | 0.039035 | 0 | 0.648148 | 0 | 0 | 0.062358 | 0 | 0 | 0 | 0 | 0.011111 | 0.055556 | 1 | 0.148148 | false | 0.037037 | 0.111111 | 0.074074 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
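In the module above, `build_unconfig` reuses `build_config` with `unconfig=True`, and `append_line` accepts an explicit `unconfig_cmd` (`default mpls traffic-eng tunnels`) for lines whose negation is not a plain `no` prefix. A toy sketch of that builder pattern, not Genie's actual `CliConfigBuilder`:

```python
class ToyConfigBuilder:
    # Collect CLI lines; in unconfig mode emit the negated form, using
    # an explicit unconfig_cmd when given, else a conventional "no" prefix.
    def __init__(self, unconfig=False):
        self.unconfig = unconfig
        self.lines = []

    def append_line(self, line, unconfig_cmd=None):
        if self.unconfig:
            self.lines.append(unconfig_cmd or "no " + line)
        else:
            self.lines.append(line)

cfg = ToyConfigBuilder()
cfg.append_line("mpls traffic-eng tunnels",
                unconfig_cmd="default mpls traffic-eng tunnels")
uncfg = ToyConfigBuilder(unconfig=True)
uncfg.append_line("mpls traffic-eng tunnels",
                  unconfig_cmd="default mpls traffic-eng tunnels")
```

Sharing one code path for config and unconfig keeps the two in sync, which is why the device and interface classes above only override the defaults rather than duplicating command lists.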
446ace6149fe23f84a748f6d58679650f139de83 | 125 | py | Python | straxen/analyses/__init__.py | jhowl01/straxen | 60f414b910620bdb3a281177cd8a761927e8d50a | [
"BSD-3-Clause"
] | null | null | null | straxen/analyses/__init__.py | jhowl01/straxen | 60f414b910620bdb3a281177cd8a761927e8d50a | [
"BSD-3-Clause"
] | 1 | 2021-03-30T14:19:48.000Z | 2021-03-30T14:39:05.000Z | straxen/analyses/__init__.py | jhowl01/straxen | 60f414b910620bdb3a281177cd8a761927e8d50a | [
"BSD-3-Clause"
] | 1 | 2021-03-23T19:12:04.000Z | 2021-03-23T19:12:04.000Z | from . import quick_checks
from . import records_matrix
from . import waveform_plot
from . import holoviews_waveform_display

# tests/test_transforms.py from open-contracting/oc4idskit (BSD-3-Clause)
import copy
import json
import pytest
from oc4idskit import transforms
from tests import read
@pytest.mark.vcr()
def test_initial_transform_state():
releases = json.loads(read("release-package_additional-contact-points.json"))[
"releases"
]
transform_state = transforms.InitialTransformState(releases, "1")
assert len(transform_state.compiled_releases) == 1
assert len(transform_state.releases_by_ocid["ocds-213czf-1"]) == 2
@pytest.mark.vcr()
def test_run_all():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "1", "roles": ["publicAuthority"]}],
}
]
output = transforms.run_transforms({}, releases, "1")
assert output["parties"] == releases[0]["parties"]
@pytest.mark.vcr()
def test_run_all_release_package():
releases_package_1 = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "1", "roles": ["publicAuthority"]}],
}
]
releases_package_2 = [
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "2", "name": "a", "roles": ["publicAuthority"]}],
}
]
release_packages = [
{"uri": "example.com", "releases": releases_package_1},
{"uri": "example.com", "releases": releases_package_2},
]
output = transforms.run_transforms({}, release_packages, "1")
assert len(output["contractingProcesses"]) == 2
assert len(output["parties"]) == 2
assert output["contractingProcesses"][0]["releases"] == [
{"url": "example.com#1", "date": "2001-02-03T04:05:06Z", "tag": "planning"}
]
@pytest.mark.vcr()
def test_public_authority_role():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{"id": "1", "name": "a", "roles": ["publicAuthority"]},
{"id": "2", "name": "b", "roles": ["publicAuthority"]},
],
}
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert output["parties"] == releases[0]["parties"]
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{"id": "1", "roles": ["publicAuthority"]},
{"id": "2", "roles": ["buyer"]},
],
}
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 1
@pytest.mark.vcr()
def test_duplicate_public_authority_role():
# Match on identifier
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "a",
"roles": ["publicAuthority"],
"identifier": {"id": "a", "scheme": "b"},
}
],
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "a",
"roles": ["publicAuthority"],
"identifier": {"id": "a", "scheme": "b"},
}
],
},
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 1
assert output["parties"][0]["id"] == "b-a"
# No match on identifier
releases[0]["parties"][0]["identifier"]["id"] = "b"
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 2
assert output["parties"][0]["id"] == "b-b"
assert output["parties"][1]["id"] == "b-a"
# Match on name
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "a", "name": "org 1", "roles": ["publicAuthority"]}],
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "a", "name": "org 1", "roles": ["publicAuthority"]}],
},
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 1
assert output["parties"][0]["id"] == "1"
# No match on name
releases[0]["parties"][0]["name"] = "org 2"
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 2
    # Generated auto-incremented party ids
assert output["parties"][0]["id"] == "1"
assert output["parties"][1]["id"] == "2"
# Match on name but different address
releases[0]["parties"][0]["name"] = "org 1"
releases[0]["parties"][0]["address"] = {"streetAddress": "1 the street"}
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 2
assert output["parties"][0]["id"] == "1"
assert output["parties"][1]["id"] == "2"
# Match on name and address
releases[1]["parties"][0]["address"] = {"streetAddress": "1 the street"}
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 1
assert output["parties"][0]["id"] == "1"
    # all roles are kept if the parties match
releases[1]["parties"][0]["roles"].append("some role")
releases[1]["parties"][0]["roles"].append("some other role")
output = transforms._run_transforms(
releases, "1", transforms=[transforms.public_authority_role]
)
assert len(output["parties"]) == 1
assert len(output["parties"][0]["roles"]) == 3
assert output["parties"][0]["id"] == "1"
@pytest.mark.vcr()
def test_buyer_role():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "1", "roles": ["buyer"]}],
}
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.buyer_role]
)
assert "publicAuthority" in output["parties"][0]["roles"]
assert "buyer" in output["parties"][0]["roles"]
@pytest.mark.vcr()
def test_sector():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"project": {
"sector": {
"scheme": "COFOG",
"description": "Road transportation",
"id": "04.5.1",
}
}
},
}
]
output = transforms._run_transforms(releases, "1", transforms=[transforms.sector])
assert output["sector"] == ["COFOG-04.5.1"]
# 2 contracting processes but same sector
releases.append(
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"project": {
"sector": {
"scheme": "COFOG",
"description": "Road transportation",
"id": "04.5.1",
}
}
},
}
)
output = transforms._run_transforms(releases, "1", transforms=[transforms.sector])
assert output["sector"] == ["COFOG-04.5.1"]
    # 2 contracting processes but different sector
releases[1]["planning"]["project"]["sector"]["id"] = "2"
output = transforms._run_transforms(releases, "1", transforms=[transforms.sector])
    assert set(output["sector"]) == {"COFOG-04.5.1", "COFOG-2"}
@pytest.mark.vcr()
def test_additional_classifications():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"project": {"additionalClassifications": [{"scheme": "a", "id": "1"}]}
},
}
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.additional_classifications]
)
assert output["additionalClassifications"] == [{"scheme": "a", "id": "1"}]
# same classification
releases.append(
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"project": {"additionalClassifications": [{"scheme": "a", "id": "1"}]}
},
}
)
output = transforms._run_transforms(
releases, "1", transforms=[transforms.additional_classifications]
)
assert output["additionalClassifications"] == [{"scheme": "a", "id": "1"}]
# new classification
releases.append(
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"project": {"additionalClassifications": [{"scheme": "a", "id": "2"}]}
},
}
)
output = transforms._run_transforms(
releases, "1", transforms=[transforms.additional_classifications]
)
assert output["additionalClassifications"] == [
{"scheme": "a", "id": "1"},
{"scheme": "a", "id": "2"},
]
@pytest.mark.vcr()
def test_title():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"title": "a title"}},
}
]
output = transforms._run_transforms(releases, "1", transforms=[transforms.title])
assert output["title"] == releases[0]["planning"]["project"]["title"]
    # clashing titles give a warning and no output
releases.append(
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"title": "b title"}},
}
)
output = transforms._run_transforms(releases, "1", transforms=[transforms.title])
assert "title" not in output
@pytest.mark.vcr()
def test_title_from_tender():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"title": "a title"},
}
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.title, transforms.title_from_tender]
)
assert output["title"] == releases[0]["tender"]["title"]
releases.append(
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"title": "b title"},
}
)
output = transforms._run_transforms(
releases, "1", transforms=[transforms.title, transforms.title_from_tender]
)
assert output["title"] == "<ocds-213czf-1> a title\n<ocds-213czf-2> b title\n"
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"title": "a title"}},
            "tender": {"title": "an unused title"},
}
]
output = transforms._run_transforms(
releases, "1", transforms=[transforms.title, transforms.title_from_tender]
)
assert output["title"] == releases[0]["planning"]["project"]["title"]
@pytest.mark.vcr()
def test_contracting_process_setup_releases():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"title": "a title"},
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"title": "a title"},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.contracting_process_setup]
)
expected = """
{
"id": "1",
"contractingProcesses": [
{
"id": "ocds-213czf-1",
"summary": {
"ocid": "ocds-213czf-1"
},
"embeddedReleases": [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"title": "a title"
}
}
]
},
{
"id": "ocds-213czf-2",
"summary": {
"ocid": "ocds-213czf-2"
},
"embeddedReleases": [
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"title": "a title"
}
}
]
}
]
}
"""
assert output == json.loads(expected)
@pytest.mark.vcr()
def test_contracting_process_setup_release_packages():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"title": "a title"},
},
{
"ocid": "ocds-213czf-2",
"id": "2",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"title": "a title"},
},
]
release_packages = [{"uri": "example.com", "releases": releases}]
output = transforms._run_transforms(
copy.deepcopy(release_packages),
"1",
transforms=[transforms.contracting_process_setup],
)
expected = """
{
"id": "1",
"contractingProcesses": [
{
"id": "ocds-213czf-1",
"summary": {
"ocid": "ocds-213czf-1"
},
"releases": [
{
"url": "example.com#1",
"date": "2001-02-03T04:05:06Z",
"tag": "planning"
}
]
},
{
"id": "ocds-213czf-2",
"summary": {
"ocid": "ocds-213czf-2"
},
"releases": [
{
"url": "example.com#2",
"date": "2001-02-03T04:05:06Z",
"tag": "planning"
}
]
}
]
}
"""
assert output == json.loads(expected)
@pytest.mark.vcr()
def test_procuring_entity():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"procuringEntity": {"id": "1"}},
"parties": [{"id": "1", "roles": ["procuringEntity"]}],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.procuring_entity],
)
assert output["parties"] == releases[0]["parties"]
assert (
output["contractingProcesses"][0]["summary"]["tender"] == releases[0]["tender"]
)
    # with identifier, no duplicate party id
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "1",
"roles": ["procuringEntity"],
"identifier": {"id": "a", "scheme": "a"},
}
],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.procuring_entity],
)
assert output["parties"][0]["id"] == "1"
assert (
output["contractingProcesses"][0]["summary"]["tender"]["procuringEntity"]["id"]
== "1"
)
# with identifier and duplicate party id
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "1",
"roles": ["procuringEntity"],
"identifier": {"id": "a", "scheme": "a"},
}
],
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "1",
"roles": ["procuringEntity"],
"identifier": {"id": "a", "scheme": "a"},
}
],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.procuring_entity],
)
assert output["parties"][0]["id"] == "a-a"
assert (
output["contractingProcesses"][0]["summary"]["tender"]["procuringEntity"]["id"]
== "a-a"
)
    # with generated id
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "1", "name": "org1", "roles": ["procuringEntity"]}],
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "1", "name": "org2", "roles": ["procuringEntity"]}],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.procuring_entity],
)
assert output["parties"][0]["id"] == "1"
assert output["parties"][1]["id"] == "2"
assert (
output["contractingProcesses"][0]["summary"]["tender"]["procuringEntity"]["id"]
== "1"
)
assert (
output["contractingProcesses"][1]["summary"]["tender"]["procuringEntity"]["id"]
== "2"
)
@pytest.mark.vcr()
def test_administrative_entity():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [{"id": "1", "name": "a", "roles": ["administrativeEntity"]}],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[
transforms.contracting_process_setup,
transforms.administrative_entity,
],
)
assert output["parties"] == releases[0]["parties"]
assert output["contractingProcesses"][0]["summary"]["tender"][
"administrativeEntity"
] == {"id": "1", "name": "a"}
@pytest.mark.vcr()
def test_multiple_administrative_entity_in_process():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{"id": "1", "name": "a", "roles": ["administrativeEntity"]},
{"id": "2", "name": "b", "roles": ["administrativeEntity"]},
],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[
transforms.contracting_process_setup,
transforms.administrative_entity,
],
)
assert output["parties"] == releases[0]["parties"]
    # tender is not created as there are multiple administrative entities
assert "tender" not in output["contractingProcesses"][0]["summary"]
@pytest.mark.vcr()
def test_contract_status_pre_award():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"status": "pending"}],
"awards": [{"status": "pending"}],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-3",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"status": "active"}],
"awards": [{"status": "pending"}],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-4",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"awards": [{"status": "pending", "date": "3000-01-01"}],
"tender": {"id": 1, "awardPeriod": {"startDate": "3000-01-01"}},
},
{
"ocid": "ocds-213czf-5",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"awards": [{"date": "3000-01-01"}],
"tender": {"id": 1, "awardPeriod": {"startDate": "2000-01-01"}},
},
{
"ocid": "ocds-213czf-6",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"awards": [{"date": "2000-01-01"}],
"tender": {"id": 1, "awardPeriod": {"startDate": "3000-01-01"}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_status],
)
assert output["contractingProcesses"][0]["summary"]["status"] == "pre-award"
assert output["contractingProcesses"][1]["summary"]["status"] == "pre-award"
assert output["contractingProcesses"][2]["summary"]["status"] != "pre-award"
assert output["contractingProcesses"][3]["summary"]["status"] == "pre-award"
    # Currently no status is set at all, as these fit no status criteria
assert output["contractingProcesses"][4]["summary"].get("status") != "pre-award"
assert output["contractingProcesses"][5]["summary"].get("status") != "pre-award"
@pytest.mark.vcr()
def test_contract_status_active():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"status": "active"}, {"status": "pending"}],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"period": {"startDate": "2000-01-01"}}],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-3",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [
{"period": {"startDate": "2000-01-01", "endDate": "2000-01-01"}}
],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-4",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [
{"period": {"startDate": "2000-01-01", "endDate": "2000-01-01"}}
],
"awards": [
{"contractPeriod": {"startDate": "2000-01-01", "endDate": "3000-01-01"}}
],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-5",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [
{"period": {"startDate": "2000-01-01", "endDate": "2000-01-01"}}
],
"tender": {
"id": 1,
"contractPeriod": {"startDate": "2000-01-01", "endDate": "3000-01-01"},
},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_status],
)
assert output["contractingProcesses"][0]["summary"]["status"] == "active"
assert output["contractingProcesses"][1]["summary"]["status"] == "active"
assert output["contractingProcesses"][2]["summary"]["status"] != "active"
assert output["contractingProcesses"][3]["summary"]["status"] == "active"
assert output["contractingProcesses"][4]["summary"]["status"] == "active"
@pytest.mark.vcr()
def test_contract_status_closed():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"status": "cancelled"},
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"awards": [{"status": "cancelled"}],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-3",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"status": "cancelled"}],
"tender": {"id": 1},
},
{
"ocid": "ocds-213czf-4",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"period": {"endDate": "2000-01-01"}}],
"awards": [{"contractPeriod": {"endDate": "2000-01-01"}}],
"tender": {"id": 1, "contractPeriod": {"endDate": "2000-01-01"}},
},
{
"ocid": "ocds-213czf-5",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"period": {"endDate": "3000-01-01"}}],
"awards": [{"contractPeriod": {"endDate": "2000-01-01"}}],
"tender": {"id": 1, "contractPeriod": {"endDate": "2000-01-01"}},
},
{
"ocid": "ocds-213czf-6",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"period": {"endDate": "2000-01-01"}}],
"awards": [{"contractPeriod": {"endDate": "3000-01-01"}}],
"tender": {"id": 1, "contractPeriod": {"endDate": "2000-01-01"}},
},
{
"ocid": "ocds-213czf-7",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"period": {"endDate": "2000-01-01"}}],
"awards": [{"contractPeriod": {"endDate": "2000-01-01"}}],
"tender": {"id": 1, "contractPeriod": {"endDate": "3000-01-01"}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_status],
)
assert output["contractingProcesses"][0]["summary"]["status"] == "closed"
assert output["contractingProcesses"][1]["summary"]["status"] == "closed"
assert output["contractingProcesses"][2]["summary"]["status"] == "closed"
assert output["contractingProcesses"][3]["summary"]["status"] == "closed"
assert output["contractingProcesses"][4]["summary"]["status"] != "closed"
assert output["contractingProcesses"][5]["summary"]["status"] != "closed"
assert output["contractingProcesses"][6]["summary"]["status"] != "closed"
@pytest.mark.vcr()
def test_procurement_process():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"procurementMethod": "method",
"procurementMethodDetails": "details",
},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[
transforms.contracting_process_setup,
transforms.procurement_process,
],
)
assert (
output["contractingProcesses"][0]["summary"]["tender"] == releases[0]["tender"]
)
@pytest.mark.vcr()
def test_number_of_tenderers():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"numberOfTenderers": 123},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[
transforms.contracting_process_setup,
transforms.number_of_tenderers,
],
)
assert (
output["contractingProcesses"][0]["summary"]["tender"]["numberOfTenderers"]
== 123
)
@pytest.mark.vcr()
def test_location():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"locations": [{"description": "Mars"}]}},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.location]
)
assert output["locations"] == [{"description": "Mars"}]
@pytest.mark.vcr()
def test_location_multiple_releases():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"project": {
"locations": [{"description": "Mars"}, {"description": "Jupiter"}]
}
},
},
{
"ocid": "ocds-213czf-2",
"id": "2",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"locations": [{"description": "Earth"}]}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.location]
)
assert output["locations"] == [
{"description": "Mars"},
{"description": "Jupiter"},
{"description": "Earth"},
]
@pytest.mark.vcr()
def test_location_from_item_location():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"items": [
{
"id": "item1",
"deliveryLocation": {
"geometry": {
"type": "Point",
"coordinates": [51.751944, -1.257778],
},
"uri": "http://www.geonames.org/2640729/oxford.html",
},
}
],
},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.location, transforms.location_from_items],
)
assert output["locations"] == [
releases[0]["tender"]["items"][0]["deliveryLocation"]
]
@pytest.mark.vcr()
def test_location_from_delivery_address():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"items": [
{
"id": "item2",
"deliveryAddress": {
"postalCode": "OX1 1BX",
"countryName": "United Kingdom",
"streetAddress": "Town Hall, St Aldate's",
"region": "Oxfordshire",
"locality": "Oxford",
},
}
],
},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.location, transforms.location_from_items],
)
assert output["locations"] == [
{"address": releases[0]["tender"]["items"][0]["deliveryAddress"]}
]
@pytest.mark.vcr()
def test_location_multiple():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"items": [
{
"id": "item1",
"deliveryLocation": {
"geometry": {
"type": "Point",
"coordinates": [51.751944, -1.257778],
},
"uri": "http://www.geonames.org/2640729/oxford.html",
},
"deliveryAddress": {
"postalCode": "OX1 1BX",
"countryName": "United Kingdom",
"streetAddress": "Town Hall, St Aldate's",
"region": "Oxfordshire",
"locality": "Oxford",
},
}
],
},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.location, transforms.location_from_items],
)
assert output["locations"] == [
releases[0]["tender"]["items"][0]["deliveryLocation"],
{"address": releases[0]["tender"]["items"][0]["deliveryAddress"]},
]
@pytest.mark.vcr()
def test_location_not_inferred():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"items": [
{
"id": "item1",
"deliveryLocation": {
"geometry": {
"type": "Point",
"coordinates": [51.751944, -1.257778],
},
"uri": "http://www.geonames.org/2640729/oxford.html",
},
"deliveryAddress": {
"postalCode": "OX1 1BX",
"countryName": "United Kingdom",
"streetAddress": "Town Hall, St Aldate's",
"region": "Oxfordshire",
"locality": "Oxford",
},
}
],
},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.location]
)
assert "locations" not in output
@pytest.mark.vcr()
def test_budget():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"budget": {"amount": {"amount": "1000", "currency": "USD"}}},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.budget]
)
assert output["budget"]["amount"] == releases[0]["planning"]["budget"]["amount"]
@pytest.mark.vcr()
def test_budget_multiple():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"budget": {"amount": {"amount": "1000", "currency": "USD"}}},
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T06:07:08Z",
"planning": {"budget": {"amount": {"amount": "1234", "currency": "USD"}}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.budget]
)
total = float(releases[0]["planning"]["budget"]["amount"]["amount"]) + float(
releases[1]["planning"]["budget"]["amount"]["amount"]
)
assert output["budget"]["amount"]["amount"] == total
assert (
output["budget"]["amount"]["currency"]
== releases[0]["planning"]["budget"]["amount"]["currency"]
)
@pytest.mark.vcr()
def test_budget_fail():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"budget": {"amount": {"amount": "999", "currency": "USD"}}},
},
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T06:07:08Z",
"planning": {"budget": {"amount": {"amount": "6464", "currency": "EUR"}}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.budget]
)
    # Different currencies cannot be totalled
assert "budget" not in output
@pytest.mark.vcr()
def test_budget_approval():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"documents": [
{
"id": "doc1",
"documentType": "projectPlan",
"title": "A Document",
},
{
"id": "doc2",
"documentType": "budgetApproval",
"title": "Another Document",
},
]
},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.budget_approval]
)
assert output["documents"] == [releases[0]["planning"]["documents"][1]]
    # duplicate document id in a different process; new doc ids are auto-incremented.
releases.append(
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"documents": [
{
"id": "doc2",
"documentType": "budgetApproval",
"title": "Another Another Document",
}
]
},
}
)
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.budget_approval]
)
assert len(output["documents"]) == 2
assert output["documents"][0]["id"] == "1"
assert output["documents"][1]["id"] == "2"
@pytest.mark.vcr()
def test_purpose_one():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"rationale": "We were hungry."},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.purpose]
)
assert output["purpose"] == releases[0]["planning"]["rationale"]
@pytest.mark.vcr()
def test_purpose_multiple():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"rationale": "We were hungry."},
},
{
"ocid": "ocds-213czf-2",
"id": "2",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"rationale": "There are never enough post-its."},
},
]
rationales = "<ocds-213czf-1> We were hungry.\n<ocds-213czf-2> There are never enough post-its.\n"
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.purpose]
)
assert output["purpose"] == rationales
@pytest.mark.vcr()
def test_needs_assessment():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"documents": [
{
"id": "doc1",
"documentType": "needsAssessment",
"title": "A Document",
},
{
"id": "doc2",
"documentType": "budgetApproval",
"title": "Another Document",
},
]
},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.purpose_needs_assessment]
)
assert output["documents"] == [releases[0]["planning"]["documents"][0]]
@pytest.mark.vcr()
def test_description_one():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"description": "A project description"}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.description]
)
assert output["description"] == releases[0]["planning"]["project"]["description"]
@pytest.mark.vcr()
def test_description_multiple():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"description": "A project description"}},
},
{
"ocid": "ocds-213czf-2",
"id": "2",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"description": "A project description"}},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.description]
)
assert output["description"] == releases[0]["planning"]["project"]["description"]
    # contradicting descriptions
releases[0]["planning"]["project"]["description"] = "another description"
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.description]
)
assert "description" not in output
@pytest.mark.vcr()
def test_description_tender():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"description": "A project description"},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.description, transforms.description_tender],
)
assert output["description"] == releases[0]["tender"]["description"]
releases.append(
{
"ocid": "ocds-213czf-2",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"description": "A new project description"},
}
)
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.description, transforms.description_tender],
)
assert (
output["description"]
== "<ocds-213czf-1> A project description\n<ocds-213czf-2> A new project description\n"
)
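# The concatenated description asserted above can be sketched as a
# hypothetical helper (not the library's implementation): when tender
# descriptions come from releases with different ocids, emit one
# "<ocid> description" line per release, each newline-terminated.
def concat_tender_descriptions(releases):
    return "".join(
        f"<{r['ocid']}> {r['tender']['description']}\n" for r in releases
    )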
@pytest.mark.vcr()
def test_description_not_tender():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {"project": {"description": "Another project description"}},
"tender": {"description": "A project description"},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases),
"1",
transforms=[transforms.description, transforms.description_tender],
)
assert output["description"] == releases[0]["planning"]["project"]["description"]
@pytest.mark.vcr()
def test_environmental_impact():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"documents": [
{
"id": "doc1",
"documentType": "environmentalImpact",
"title": "A Document",
},
{
"id": "doc2",
"documentType": "budgetApproval",
"title": "Another Document",
},
]
},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.environmental_impact]
)
assert output["documents"] == [releases[0]["planning"]["documents"][0]]
@pytest.mark.vcr()
def test_land_and_settlement_impact():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"documents": [
{
"id": "doc1",
"documentType": "environmentalImpact",
"title": "A Document",
},
{
"id": "doc2",
"documentType": "landAndSettlementImpact",
"title": "Another Document",
},
]
},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.land_and_settlement_impact]
)
assert output["documents"] == [releases[0]["planning"]["documents"][1]]
@pytest.mark.vcr()
def test_project_scope():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"planning": {
"documents": [
{
"id": "doc1",
"documentType": "projectScope",
"title": "A Document",
},
{
"id": "doc2",
"documentType": "budgetApproval",
"title": "Another Document",
},
]
},
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.project_scope]
)
assert output["documents"] == [releases[0]["planning"]["documents"][0]]
@pytest.mark.vcr()
def test_project_scope_summary():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {
"description": "c",
"items": [{"description": "Some item"}],
"milestones": [
{"description": "Some milestone"},
{"description": "Another milestone"},
],
},
}
]
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.project_scope_summary,
],
)
assert "items" in output["contractingProcesses"][0]["summary"]["tender"]
assert "milestones" in output["contractingProcesses"][0]["summary"]["tender"]
assert (
output["contractingProcesses"][0]["summary"]["tender"]["items"]
== releases[0]["tender"]["items"]
)
assert (
output["contractingProcesses"][0]["summary"]["tender"]["milestones"]
== releases[0]["tender"]["milestones"]
)
@pytest.mark.vcr()
def test_funders_budget():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "GB-LAC-E09000003-557",
"name": "London Borough of Barnet - Transport Services",
"details": "This is just a test.",
},
{
"id": "GB-GOV-23",
"name": "Department for Transport",
"details": "This is also a test.",
},
],
"planning": {
"budget": {
"id": "1",
"description": "Multi-source budget, see budget breakdown for details.",
"amount": {"amount": 300000, "currency": "GBP"},
"budgetBreakdown": [
{
"sourceParty": {
"id": "GB-LAC-E09000003-557",
"name": "London Borough of Barnet - Transport Services",
},
"period": {
"startDate": "2016-01-01T00:00:00Z",
"endDate": "2016-12-31T23:59:59Z",
},
"id": "001",
"description": "Budget contribution from the local government",
"amount": {"amount": 150000, "currency": "GBP"},
},
{
"sourceParty": {
"id": "GB-GOV-23",
"name": "Department for Transport",
},
"period": {
"startDate": "2016-01-01T00:00:00Z",
"endDate": "2016-12-31T23:59:59Z",
},
"id": "002",
"description": "Budget contribution from the national government",
"amount": {"amount": 150000, "currency": "GBP"},
},
],
}
},
}
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.funding_sources]
)
assert output["parties"][0]["id"] == "GB-LAC-E09000003-557"
assert output["parties"][0]["details"] == "This is just a test."
assert "funder" in output["parties"][0]["roles"]
@pytest.mark.vcr()
def test_funders():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{
"id": "GB-LAC-E09000003-557",
"name": "London Borough of Barnet - Transport Services",
"details": "This is just a test.",
"roles": ["funder"],
},
{
"id": "GB-GOV-23",
"name": "Department for Transport",
"details": "This is also a test.",
"roles": ["funder"],
},
],
}
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.funding_sources]
)
assert output["parties"][0]["id"] == "GB-LAC-E09000003-557"
assert output["parties"][0]["details"] == "This is just a test."
assert "funder" in output["parties"][0]["roles"]
@pytest.mark.vcr()
def test_cost_estimate():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"tender": {"status": "planning", "value": {"amount": 1}},
},
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2100-02-03T04:05:06Z",
"tender": {"status": "planning", "value": {"amount": 10}},
},
]
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.cost_estimate],
)
assert output["contractingProcesses"][0]["summary"]["tender"]["costEstimate"] == {
"amount": 10
}
# swap the release dates so the release with amount 1 becomes the latest
releases[0]["date"], releases[1]["date"] = releases[1]["date"], releases[0]["date"]
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.cost_estimate],
)
assert output["contractingProcesses"][0]["summary"]["tender"]["costEstimate"] == {
"amount": 1
}
releases.append(
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2200-02-03T04:05:06Z",
"tender": {"status": "active", "value": {"amount": 100}},
}
)
# the last release is not a planning release, so it is ignored
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.cost_estimate],
)
assert output["contractingProcesses"][0]["summary"]["tender"]["costEstimate"] == {
"amount": 1
}
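# The cost-estimate behaviour exercised above can be sketched as a small
# standalone helper (hypothetical, not the library's implementation):
# take tender.value from the most recent release whose tender status is
# "planning"; releases in any other status are ignored.
def latest_planning_value(releases):
    planning = [
        r for r in releases
        if r.get("tender", {}).get("status") == "planning"
    ]
    if not planning:
        return None
    # ISO 8601 UTC timestamps compare correctly as plain strings.
    return max(planning, key=lambda r: r["date"])["tender"]["value"]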
@pytest.mark.vcr()
def test_contract_title():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"title": "a"}],
"awards": [{"title": "b"}],
"tender": {"title": "c"},
}
]
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_title],
)
assert output["contractingProcesses"][0]["summary"]["title"] == "a"
# with a second contract, the tender title is used instead
releases[0]["contracts"].append({"title": "a"})
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_title],
)
assert output["contractingProcesses"][0]["summary"]["title"] == "c"
# with no contracts, the award title is used
releases[0].pop("contracts")
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_title],
)
assert output["contractingProcesses"][0]["summary"]["title"] == "b"
@pytest.mark.vcr()
def test_supplier():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"parties": [
{"id": "a", "name": "A", "roles": ["supplier"]},
{"id": "b", "name": "B", "roles": ["supplier"]},
],
}
]
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.suppliers],
)
assert output["parties"] == releases[0]["parties"]
assert output["contractingProcesses"][0]["summary"]["suppliers"] == [
{"id": "a", "name": "A"},
{"id": "b", "name": "B"},
]
@pytest.mark.vcr()
def test_contract_value():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"awards": [
{"value": {"amount": 10, "currency": "USD"}},
{"value": {"amount": 10, "currency": "USD"}},
{"value": {"amount": 10, "currency": "USD"}},
],
}
]
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_price],
)
assert output["contractingProcesses"][0]["summary"]["contractValue"] == {
"amount": 30,
"currency": "USD",
}
# change one award's currency, so the award values can no longer be summed
releases[0]["awards"][1]["value"]["currency"] = "CAD"
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_price],
)
assert "contractValue" not in output["contractingProcesses"][0]["summary"]
@pytest.mark.vcr()
def test_contracting_process_description():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [{"description": "a", "items": [{"description": "item_a"}]}],
"awards": [{"description": "b", "items": [{"description": "item_b"}]}],
"tender": {"description": "c", "items": [{"description": "item_c"}]},
}
]
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert output["contractingProcesses"][0]["summary"]["description"] == "a"
# with no contract description, the single contract item's description is used instead
releases[0]["contracts"][0].pop("description")
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert output["contractingProcesses"][0]["summary"]["description"] == "item_a"
# with no contracts we use awards
releases[0].pop("contracts")
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert output["contractingProcesses"][0]["summary"]["description"] == "b"
# with no award description, the award item's description is used
releases[0]["awards"][0].pop("description")
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert output["contractingProcesses"][0]["summary"]["description"] == "item_b"
# with a second award item there is no single description, so nothing is populated
releases[0]["awards"][0]["items"].append({"description": "item_b"})
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert "description" not in output["contractingProcesses"][0]["summary"]
# with a second award, the tender description is used
releases[0]["awards"].append({"description": "b"})
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert output["contractingProcesses"][0]["summary"]["description"] == "c"
# with no tender description, the tender item's description is used
releases[0]["tender"].pop("description")
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert output["contractingProcesses"][0]["summary"]["description"] == "item_c"
# with a second tender item there is no single viable description
releases[0]["tender"]["items"].append({"description": "item_c"})
output = transforms._run_transforms(
releases,
"1",
transforms=[
transforms.contracting_process_setup,
transforms.contract_process_description,
],
)
assert "description" not in output["contractingProcesses"][0]["summary"]
@pytest.mark.vcr()
def test_contracting_period():
releases = [
{
"ocid": "ocds-213czf-4",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"awards": [
{
"contractPeriod": {
"startDate": "2000-01-01",
"endDate": "3000-02-01",
}
},
{
"contractPeriod": {
"startDate": "1999-01-01",
"endDate": "3000-01-01",
}
},
],
"tender": {
"contractPeriod": {"startDate": "2100-01-01", "endDate": "2200-01-01"}
},
}
]
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_period],
)
assert output["contractingProcesses"][0]["summary"]["contractPeriod"] == {
"startDate": "1999-01-01",
"endDate": "3000-02-01",
}
# remove awards so we get tender contract period
releases[0].pop("awards")
output = transforms._run_transforms(
releases,
"1",
transforms=[transforms.contracting_process_setup, transforms.contract_period],
)
assert (
output["contractingProcesses"][0]["summary"]["contractPeriod"]
== releases[0]["tender"]["contractPeriod"]
)
@pytest.mark.vcr()
def test_final_audit():
releases = [
{
"ocid": "ocds-213czf-1",
"id": "1",
"tag": "planning",
"date": "2001-02-03T04:05:06Z",
"contracts": [
{
"implementation": {
"documents": [
{
"id": "doc1",
"documentType": "finalAudit",
"title": "A Document",
},
{
"id": "doc2",
"documentType": "budgetApproval",
"title": "Another Document",
},
]
},
},
{
"implementation": {
"documents": [
{
"id": "doc3",
"documentType": "finalAudit",
"title": "B Document",
},
{
"id": "doc4",
"documentType": "projectScope",
"title": "Yet another Document",
},
]
},
},
],
},
]
output = transforms._run_transforms(
copy.deepcopy(releases), "1", transforms=[transforms.final_audit]
)
assert output["documents"] == [
releases[0]["contracts"][0]["implementation"]["documents"][0],
releases[0]["contracts"][1]["implementation"]["documents"][0],
]
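# The document transforms above (purpose_needs_assessment,
# environmental_impact, land_and_settlement_impact, project_scope,
# final_audit) all assert the same shape of behaviour, which can be
# sketched as a hypothetical helper (not the library's implementation):
# keep only the documents whose documentType matches a given code.
def filter_documents_by_type(documents, document_type):
    return [d for d in documents if d.get("documentType") == document_type]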
| 29.445397 | 102 | 0.462384 | 5,472 | 65,251 | 5.427997 | 0.064327 | 0.014039 | 0.046192 | 0.037977 | 0.864656 | 0.840213 | 0.788364 | 0.760387 | 0.742105 | 0.722207 | 0 | 0.067326 | 0.36355 | 65,251 | 2,215 | 103 | 29.458691 | 0.647885 | 0.019555 | 0 | 0.584826 | 0 | 0.001581 | 0.279735 | 0.007116 | 0 | 0 | 0 | 0 | 0.069547 | 1 | 0.026344 | false | 0 | 0.002634 | 0 | 0.028978 | 0.000527 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9295c8a5ebb1aa4a29130723e5b43d3d9b3a72ed | 37 | py | Python | gran/dataset/__init__.py | longland-m/wikigen | 459ba7bf9d3ca9584de65388cc9b9a15fa16a69f | [
"MIT"
] | null | null | null | gran/dataset/__init__.py | longland-m/wikigen | 459ba7bf9d3ca9584de65388cc9b9a15fa16a69f | [
"MIT"
] | 2 | 2021-08-25T16:04:29.000Z | 2022-02-10T01:50:44.000Z | gran/dataset/__init__.py | longland-m/wikigen | 459ba7bf9d3ca9584de65388cc9b9a15fa16a69f | [
"MIT"
] | null | null | null | from gran.dataset.gran_data import *
| 18.5 | 36 | 0.810811 | 6 | 37 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
929b53bc415421bdc46c61d6eab8181aa1977938 | 143 | py | Python | profile_filter.py | LUMC/fastq-filter | e752e43daa9a263008e5352a3f7eccb58ccb6fe8 | [
"MIT"
] | null | null | null | profile_filter.py | LUMC/fastq-filter | e752e43daa9a263008e5352a3f7eccb58ccb6fe8 | [
"MIT"
] | 5 | 2021-11-05T15:10:26.000Z | 2021-12-29T08:03:49.000Z | profile_filter.py | LUMC/fastq-filter | e752e43daa9a263008e5352a3f7eccb58ccb6fe8 | [
"MIT"
] | 1 | 2021-09-13T09:39:53.000Z | 2021-09-13T09:39:53.000Z | import cProfile
import sys
from fastq_filter import filter_fastq
cProfile.run(f"filter_fastq('{sys.argv[1]}', '{sys.argv[2]}', '/dev/null')") | 23.833333 | 76 | 0.727273 | 23 | 143 | 4.391304 | 0.565217 | 0.217822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0.083916 | 143 | 6 | 76 | 23.833333 | 0.755725 | 0 | 0 | 0 | 0 | 0 | 0.409722 | 0.201389 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2be40403019aa929813f7d896f13a8849e1e5078 | 38 | py | Python | simple.py | ngb2zf/cs3240-labdemo | c26a1d6eafbc14734b199246d5f5dd5c143e445f | [
"MIT"
] | null | null | null | simple.py | ngb2zf/cs3240-labdemo | c26a1d6eafbc14734b199246d5f5dd5c143e445f | [
"MIT"
] | null | null | null | simple.py | ngb2zf/cs3240-labdemo | c26a1d6eafbc14734b199246d5f5dd5c143e445f | [
"MIT"
] | null | null | null | __author__ = 'ngb2zf'
print(210+250) | 9.5 | 21 | 0.710526 | 5 | 38 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 0.131579 | 38 | 4 | 22 | 9.5 | 0.484848 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
920a1c5276faab745e6f8dd556d61175adcb3719 | 38 | py | Python | example/example/datatables/__init__.py | KnightConan/sspdatatables | 1179a11358734e5e472e5eee703e8d34fa49e9bf | [
"MIT"
] | 4 | 2018-11-23T16:17:38.000Z | 2018-11-26T16:08:49.000Z | example/example/datatables/__init__.py | zhiwei2017/sspdatatables | 1179a11358734e5e472e5eee703e8d34fa49e9bf | [
"MIT"
] | 8 | 2018-11-26T16:38:55.000Z | 2019-01-18T15:13:12.000Z | example/example/datatables/__init__.py | KnightConan/sspdatatables | 1179a11358734e5e472e5eee703e8d34fa49e9bf | [
"MIT"
] | null | null | null | from .datatables import BookDataTables | 38 | 38 | 0.894737 | 4 | 38 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |