hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
85e7812e3adb9a9392fe81b3aef0d8fe535aff0f | 135 | py | Python | pics/gallery/admin.py | i-k-i/pics | ceaab81aa3110de4686b632a531297c4d7cdb691 | [
"MIT"
] | null | null | null | pics/gallery/admin.py | i-k-i/pics | ceaab81aa3110de4686b632a531297c4d7cdb691 | [
"MIT"
] | 10 | 2020-06-06T00:28:20.000Z | 2022-02-10T09:01:40.000Z | pics/gallery/admin.py | i-k-i/pics | ceaab81aa3110de4686b632a531297c4d7cdb691 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Picture, Note
admin.site.register((Picture, Note))
# Register your models here.
| 19.285714 | 36 | 0.777778 | 19 | 135 | 5.526316 | 0.631579 | 0.209524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 135 | 6 | 37 | 22.5 | 0.897436 | 0.192593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c80f106209903bc0495b5f4c58f5abcbad6e1004 | 9,916 | py | Python | ppipe/ee_ls.py | rfernand387/Planet-GEE-Pipeline-CLI | 18da56148f94fee9b48cefe64b8ab91ae173cddd | [
"Apache-2.0"
] | 42 | 2017-05-05T21:22:51.000Z | 2022-03-08T09:54:17.000Z | ppipe/ee_ls.py | rfernand387/Planet-GEE-Pipeline-CLI | 18da56148f94fee9b48cefe64b8ab91ae173cddd | [
"Apache-2.0"
] | 18 | 2017-06-20T20:15:49.000Z | 2021-04-25T15:13:44.000Z | ppipe/ee_ls.py | rfernand387/Planet-GEE-Pipeline-CLI | 18da56148f94fee9b48cefe64b8ab91ae173cddd | [
"Apache-2.0"
] | 12 | 2017-06-20T15:07:20.000Z | 2021-07-07T16:57:21.000Z | from __future__ import print_function
__copyright__ = """
Copyright 2019 Samapriya Roy
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
__license__ = "Apache 2.0"
import ee
import subprocess
import csv
import os
## Initialize Earth Engine
ee.Initialize()
suffixes = ["B", "KB", "MB", "GB", "TB", "PB"]
def humansize(nbytes):
i = 0
while nbytes >= 1024 and i < len(suffixes) - 1:
nbytes /= 1024.0
i += 1
f = ("%.2f" % nbytes).rstrip("0").rstrip(".")
return "%s %s" % (f, suffixes[i])
def object_sz(object):
sz = humansize(object.get("system:asset_size").getInfo())
return sz
def collect_sz(object):
sz = humansize(
object.reduceColumns(ee.Reducer.sum(), ["system:asset_size"])
.get("sum")
.getInfo()
)
return sz
def lst(location, typ, items=None, output=None):
if items is None and typ == "print":
assets_list = ee.data.getList(params={"id": location})
for things in assets_list:
header = things["type"]
tail = things["id"]
try:
if header == "ImageCollection":
collc = ee.ImageCollection(tail)
ast = collc.size().getInfo()
sz = collect_sz(collc)
print(
"Image Collection: "
+ str(tail)
+ " has "
+ str(ast)
+ " images a total of "
+ str(sz)
)
elif header == "Image":
collc = ee.Image(tail)
sz = object_sz(collc)
print("Image: " + str(tail) + " is " + str(sz))
elif header == "Table":
collc = ee.FeatureCollection(tail)
sz = object_sz(collc)
print("Table: " + str(tail) + " is " + str(sz))
elif (
header == "Folder"
): ##Folders are not added to the list but are printed
folder_assets = ee.data.getList(params={"id": tail})
print("Folder: " + str(tail) + " has " + str(len(folder_assets)))
except Exception as e:
print(e)
elif items > 0 and typ == "print":
assets_list = ee.data.getList(params={"id": location})
if int(len(assets_list)) > int(items):
subset = assets_list[: int(items)]
for things in subset:
header = things["type"]
tail = things["id"]
try:
if header == "ImageCollection":
collc = ee.ImageCollection(tail)
ast = collc.size().getInfo()
sz = collect_sz(collc)
print(
"Image Collection: "
+ str(tail)
+ " has "
+ str(ast)
+ " images a total of "
+ str(sz)
)
elif header == "Image":
collc = ee.Image(tail)
sz = object_sz(collc)
print("Image: " + str(tail) + " is " + str(sz))
elif header == "Table":
collc = ee.FeatureCollection(tail)
sz = object_sz(collc)
print("Table: " + str(tail) + " is " + str(sz))
elif (
header == "Folder"
): ##Folders are not added to the list but are printed
folder_assets = ee.data.getList(params={"id": tail})
print("Folder: " + str(tail) + " has " + str(len(folder_assets)))
except Exception as e:
print(e)
elif items is None and typ == "report":
with open(output, "w", newline="") as csvfile:
writer = csv.DictWriter(
csvfile,
fieldnames=["type", "path", "No of Assets", "size"],
delimiter=",",
)
writer.writeheader()
assets_list = ee.data.getList(params={"id": location})
for things in assets_list:
header = things["type"]
tail = things["id"]
try:
if header == "ImageCollection":
collc = ee.ImageCollection(tail)
ast = collc.size().getInfo()
sz = collect_sz(collc)
print(
"Image Collection: "
+ str(tail)
+ " has "
+ str(ast)
+ " images a total of "
+ str(sz)
)
with open(output, "a") as csvfile:
writer = csv.writer(csvfile, delimiter=",", lineterminator="\n")
writer.writerow([header, tail, ast, str(sz)])
csvfile.close()
elif header == "Image":
collc = ee.Image(tail)
sz = object_sz(collc)
print("Image: " + str(tail) + " is " + str(sz))
with open(output, "a") as csvfile:
writer = csv.writer(csvfile, delimiter=",", lineterminator="\n")
writer.writerow([header, tail, 1, str(sz)])
csvfile.close()
elif header == "Table":
collc = ee.FeatureCollection(tail)
sz = object_sz(collc)
print("Table: " + str(tail) + " is " + str(sz))
with open(output, "a") as csvfile:
writer = csv.writer(csvfile, delimiter=",", lineterminator="\n")
writer.writerow([header, tail, 1, str(sz)])
csvfile.close()
elif (
header == "Folder"
): ##Folders are not added to the list but are printed
folder_assets = ee.data.getList(params={"id": tail})
print("Folder: " + str(tail) + " has " + str(len(folder_assets)))
except Exception as e:
print(e)
elif items > 0 and typ == "report":
with open(output, "w", newline="") as csvfile:
writer = csv.DictWriter(
csvfile,
fieldnames=["type", "path", "No of Assets", "size"],
delimiter=",",
)
writer.writeheader()
assets_list = ee.data.getList(params={"id": location})
if int(len(assets_list)) > int(items):
subset = assets_list[: int(items)]
for things in subset:
header = things["type"]
tail = things["id"]
try:
if header == "ImageCollection":
collc = ee.ImageCollection(tail)
ast = collc.size().getInfo()
sz = collect_sz(collc)
print(
"Image Collection: "
+ str(tail)
+ " has "
+ str(ast)
+ " images a total of "
+ str(sz)
)
with open(output, "a") as csvfile:
writer = csv.writer(
csvfile, delimiter=",", lineterminator="\n"
)
writer.writerow([header, tail, ast, str(sz)])
csvfile.close()
elif header == "Image":
collc = ee.Image(tail)
sz = object_sz(collc)
print("Image: " + str(tail) + " is " + str(sz))
with open(output, "a") as csvfile:
writer = csv.writer(
csvfile, delimiter=",", lineterminator="\n"
)
writer.writerow([header, tail, 1, str(sz)])
csvfile.close()
elif header == "Table":
collc = ee.FeatureCollection(tail)
sz = object_sz(collc)
print("Table: " + str(tail) + " is " + str(sz))
with open(output, "a") as csvfile:
writer = csv.writer(
csvfile, delimiter=",", lineterminator="\n"
)
writer.writerow([header, tail, 1, str(sz)])
csvfile.close()
elif (
header == "Folder"
): ##Folders are not added to the list but are printed
folder_assets = ee.data.getList(params={"id": tail})
print("Folder: " + str(tail) + " has " + str(len(folder_assets)))
except Exception as e:
print(e)
# lst(location="users/samapriya/Belem", typ="report",items=0,output=r"C:\planet_demo\rep.csv")
| 41.145228 | 94 | 0.427088 | 911 | 9,916 | 4.596048 | 0.183315 | 0.04299 | 0.034392 | 0.030571 | 0.782422 | 0.76642 | 0.76642 | 0.76642 | 0.76642 | 0.76642 | 0 | 0.005431 | 0.461476 | 9,916 | 240 | 95 | 41.316667 | 0.778652 | 0.030658 | 0 | 0.775229 | 0 | 0 | 0.128841 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018349 | false | 0 | 0.022936 | 0 | 0.055046 | 0.105505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c82412e081089076ff0dfab5ec8fe1d1e73afece | 133 | py | Python | Codewars/8kyu/will-there-be-enough-space/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/8kyu/will-there-be-enough-space/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/8kyu/will-there-be-enough-space/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.6.0
test.describe('Example Tests')
test.assert_equals(enough(10, 5, 5), 0)
test.assert_equals(enough(100, 60, 50), 10)
| 22.166667 | 43 | 0.699248 | 24 | 133 | 3.791667 | 0.666667 | 0.10989 | 0.351648 | 0.483516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144068 | 0.112782 | 133 | 5 | 44 | 26.6 | 0.627119 | 0.105263 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c849e2a9387f4e5ff4905e8018485c8c44780b6b | 102 | py | Python | stattools/ensemble/__init__.py | artemmavrin/SLTools | 04525b5d6777be3ccdc6ad44e4cbfe24a8875933 | [
"MIT"
] | 2 | 2018-07-10T22:16:23.000Z | 2019-10-08T00:12:44.000Z | stattools/ensemble/__init__.py | artemmavrin/SLTools | 04525b5d6777be3ccdc6ad44e4cbfe24a8875933 | [
"MIT"
] | null | null | null | stattools/ensemble/__init__.py | artemmavrin/SLTools | 04525b5d6777be3ccdc6ad44e4cbfe24a8875933 | [
"MIT"
] | 4 | 2019-05-17T23:06:07.000Z | 2021-03-22T14:04:24.000Z | """Ensemble methods."""
from .bagging import BaggingClassifier
from .bagging import BaggingRegressor
| 20.4 | 38 | 0.803922 | 10 | 102 | 8.2 | 0.7 | 0.268293 | 0.414634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107843 | 102 | 4 | 39 | 25.5 | 0.901099 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c0933194ec466b9778437f767913ba2b23f51a4d | 591 | py | Python | ABC/079/c.py | fumiyanll23/AtCoder | 362ca9fcacb5415c1458bc8dee5326ba2cc70b65 | [
"MIT"
] | null | null | null | ABC/079/c.py | fumiyanll23/AtCoder | 362ca9fcacb5415c1458bc8dee5326ba2cc70b65 | [
"MIT"
] | null | null | null | ABC/079/c.py | fumiyanll23/AtCoder | 362ca9fcacb5415c1458bc8dee5326ba2cc70b65 | [
"MIT"
] | null | null | null | def main():
# input
A, B, C, D = list(map(int, input()))
# compute
# output
if A+B+C+D == 7:
print(f'{A}+{B}+{C}+{D}=7')
elif A-B+C+D == 7:
print(f'{A}-{B}+{C}+{D}=7')
elif A+B-C+D == 7:
print(f'{A}+{B}-{C}+{D}=7')
elif A+B+C-D == 7:
print(f'{A}+{B}+{C}-{D}=7')
elif A-B-C+D == 7:
print(f'{A}-{B}-{C}+{D}=7')
elif A+B-C-D == 7:
print(f'{A}+{B}-{C}-{D}=7')
elif A-B+C-D == 7:
print(f'{A}-{B}+{C}-{D}=7')
else:
print(f'{A}-{B}-{C}-{D}=7')
if __name__ == '__main__':
main()
| 21.107143 | 40 | 0.370558 | 116 | 591 | 1.818966 | 0.163793 | 0.151659 | 0.227488 | 0.303318 | 0.696682 | 0.696682 | 0.696682 | 0.64455 | 0.64455 | 0.64455 | 0 | 0.036232 | 0.299492 | 591 | 27 | 41 | 21.888889 | 0.47343 | 0.033841 | 0 | 0 | 0 | 0 | 0.253968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | true | 0 | 0 | 0 | 0.05 | 0.4 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9b1181f1774b1ed6beb8d20cf4976642ba407b00 | 134 | py | Python | faculty_distributed/__init__.py | facultyai/faculty-distributed | d5b770da603e3d5fe13c1b0c33e86dd9af2d6b8d | [
"Apache-2.0"
] | 5 | 2019-05-15T12:45:25.000Z | 2020-10-11T15:21:25.000Z | faculty_distributed/__init__.py | facultyai/faculty-distributed | d5b770da603e3d5fe13c1b0c33e86dd9af2d6b8d | [
"Apache-2.0"
] | 7 | 2019-05-10T11:25:29.000Z | 2020-02-12T18:29:24.000Z | faculty_distributed/__init__.py | facultyai/faculty-distributed | d5b770da603e3d5fe13c1b0c33e86dd9af2d6b8d | [
"Apache-2.0"
] | 1 | 2021-04-04T14:29:06.000Z | 2021-04-04T14:29:06.000Z | from .manager import FacultyJobExecutor
from .utils import job_name_to_job_id
__all__ = ["FacultyJobExecutor", "job_name_to_job_id"]
| 26.8 | 54 | 0.828358 | 19 | 134 | 5.210526 | 0.526316 | 0.141414 | 0.181818 | 0.242424 | 0.282828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097015 | 134 | 4 | 55 | 33.5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.268657 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b22a2b8cb328f6bd47c419f4b4b4c4a2dcbcccd | 2,840 | py | Python | tests/plugins/mockserver/test_tracing_enabled.py | itrofimow/yandex-taxi-testsuite | bea758af35ae19db929f4b2b99d2a2917ff4c147 | [
"MIT"
] | null | null | null | tests/plugins/mockserver/test_tracing_enabled.py | itrofimow/yandex-taxi-testsuite | bea758af35ae19db929f4b2b99d2a2917ff4c147 | [
"MIT"
] | null | null | null | tests/plugins/mockserver/test_tracing_enabled.py | itrofimow/yandex-taxi-testsuite | bea758af35ae19db929f4b2b99d2a2917ff4c147 | [
"MIT"
] | null | null | null | import aiohttp.web
import pytest
from testsuite.mockserver import server # pylint: disable=protected-access
# pylint: disable=invalid-name
async def test_mockserver_responds_with_handler_to_current_test(
mockserver, create_service_client,
):
@mockserver.handler('/arbitrary/path')
def _handler(request):
return aiohttp.web.Response(text='arbitrary text', status=200)
client = create_service_client(
mockserver.base_url,
headers={mockserver.trace_id_header: mockserver.trace_id},
)
response = await client.post('arbitrary/path')
assert response.status_code == 200
assert response.text == 'arbitrary text'
async def test_mockserver_responds_with_json_handler_to_current_test(
mockserver, create_service_client,
):
@mockserver.json_handler('/arbitrary/path')
def _json_handler(request):
return {'arbitrary_key': True}
client = create_service_client(
mockserver.base_url,
headers={mockserver.trace_id_header: mockserver.trace_id},
)
response = await client.post('arbitrary/path')
assert response.status_code == 200
assert response.json() == {'arbitrary_key': True}
async def test_mockserver_skips_handler_and_responds_500_to_other_test(
mockserver, create_service_client,
):
@mockserver.handler('/arbitrary/path')
def _handler(request):
return aiohttp.web.Response(text='arbitrary text', status=200)
client = create_service_client(
mockserver.base_url,
headers={mockserver.trace_id_header: server.generate_trace_id()},
)
response = await client.post('arbitrary/path')
assert response.status_code == 500
assert response.text == server.REQUEST_FROM_ANOTHER_TEST_ERROR
async def test_mockserver_skips_json_handler_and_responds_500_to_other_test(
mockserver, create_service_client,
):
@mockserver.json_handler('/arbitrary/path')
def _json_handler(request):
return {'arbitrary_key': True}
client = create_service_client(
mockserver.base_url,
headers={mockserver.trace_id_header: server.generate_trace_id()},
)
response = await client.post('arbitrary/path')
assert response.status_code == 500
assert response.text == server.REQUEST_FROM_ANOTHER_TEST_ERROR
@pytest.mark.parametrize(
'http_headers',
[
{}, # no trace_id in http headers
{server.DEFAULT_TRACE_ID_HEADER: ''},
{server.DEFAULT_TRACE_ID_HEADER: 'id_without_testsuite-_prefix'},
],
)
async def test_mockserver_responds_500_on_unhandled_request_from_other_sources(
mockserver, http_headers, create_service_client,
):
client = create_service_client(mockserver.base_url, headers=http_headers)
response = await client.post('arbitrary/path')
assert response.status_code == 500
| 30.869565 | 79 | 0.730634 | 337 | 2,840 | 5.804154 | 0.189911 | 0.039366 | 0.097137 | 0.133436 | 0.833333 | 0.763804 | 0.729039 | 0.729039 | 0.703988 | 0.687628 | 0 | 0.012799 | 0.174648 | 2,840 | 91 | 80 | 31.208791 | 0.821672 | 0.031338 | 0 | 0.652174 | 0 | 0 | 0.091372 | 0.010193 | 0 | 0 | 0 | 0 | 0.130435 | 1 | 0.057971 | false | 0 | 0.043478 | 0.057971 | 0.15942 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7b2433d8718703fe92a1890b225e8566fea5c9e9 | 34 | py | Python | examples/docs_snippets_crag/docs_snippets_crag/concepts/solids_pipelines/dynamic_pipeline/sample/program.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 4,606 | 2018-06-21T17:45:20.000Z | 2022-03-31T23:39:42.000Z | examples/docs_snippets_crag/docs_snippets_crag/concepts/solids_pipelines/dynamic_pipeline/sample/program.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 6,221 | 2018-06-12T04:36:01.000Z | 2022-03-31T21:43:05.000Z | examples/docs_snippets_crag/docs_snippets_crag/concepts/solids_pipelines/dynamic_pipeline/sample/program.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 619 | 2018-08-22T22:43:09.000Z | 2022-03-31T22:48:06.000Z | def calculate():
return "yes"
| 11.333333 | 16 | 0.617647 | 4 | 34 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 34 | 2 | 17 | 17 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
9e5957c6a467306f2065db95b6865cbdda4a16d5 | 681 | py | Python | app/controllers/index_controller.py | dhruvshah1996/Project3 | d87ad37f6cf2de0d3402c71d21b25258946aad69 | [
"MIT"
] | null | null | null | app/controllers/index_controller.py | dhruvshah1996/Project3 | d87ad37f6cf2de0d3402c71d21b25258946aad69 | [
"MIT"
] | null | null | null | app/controllers/index_controller.py | dhruvshah1996/Project3 | d87ad37f6cf2de0d3402c71d21b25258946aad69 | [
"MIT"
] | null | null | null | from app.controllers.controller import ControllerBase
from flask import render_template
class IndexController(ControllerBase):
@staticmethod
def get():
return render_template('index.html')
class pylintController(ControllerBase):
@staticmethod
def get():
return render_template('pylint.html')
class AAAtestingController(ControllerBase):
@staticmethod
def get():
return render_template('AAA.html')
class OOPController(ControllerBase):
@staticmethod
def get():
return render_template('OOPs.html')
class SOLIDController(ControllerBase):
@staticmethod
def get():
return render_template('SOLID.html')
| 23.482759 | 53 | 0.71953 | 66 | 681 | 7.333333 | 0.378788 | 0.173554 | 0.299587 | 0.330579 | 0.53719 | 0.53719 | 0.53719 | 0 | 0 | 0 | 0 | 0 | 0.189427 | 681 | 28 | 54 | 24.321429 | 0.876812 | 0 | 0 | 0.454545 | 0 | 0 | 0.070588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | true | 0 | 0.090909 | 0.227273 | 0.772727 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9e6df1b085fd8146d6edae666e6954eff8adaaaf | 729 | py | Python | Fastir_Collector/hooks/hook-cachedns.py | Unam3dd/Train-2018-2020 | afb6ae70fe338cbe55a21b74648d91996b818fa2 | [
"MIT"
] | 4 | 2021-04-23T15:39:17.000Z | 2021-12-27T22:53:24.000Z | Fastir_Collector/hooks/hook-cachedns.py | Unam3dd/Train-2018-2020 | afb6ae70fe338cbe55a21b74648d91996b818fa2 | [
"MIT"
] | null | null | null | Fastir_Collector/hooks/hook-cachedns.py | Unam3dd/Train-2018-2020 | afb6ae70fe338cbe55a21b74648d91996b818fa2 | [
"MIT"
] | 2 | 2021-04-19T08:28:54.000Z | 2022-01-19T13:23:29.000Z | import sys
if sys.maxsize > 2 ** 32:
datas = [("./../_x64/boost_python-vc120-gd-1_55.dll", ''),
("./../_x64/boost_python-vc120-gd-1_55.lib", ''),
("./../_x64/boost_python-vc120-mt-gd-1_55.lib", ''),
("./../_x64/msvcp120d.dll", ''),
("./../_x64/msvcr120d.dll", ''),
("./../memory/dnscache_x64.pyd", ''),
]
else:
datas = [("./../_x86/boost_python-vc120-gd-1_55.dll", ''),
("./../_x86/boost_python-vc120-gd-1_55.lib", ''),
("./../_x86/boost_python-vc120-mt-gd-1_55.lib", ''),
("./../_x86/msvcp120d.dll", ''),
("./../_x86/msvcr120d.dll", ''),
("./../memory/dnscache_x64.pyd", ''),
]
| 33.136364 | 65 | 0.449931 | 80 | 729 | 3.8 | 0.3 | 0.217105 | 0.315789 | 0.236842 | 0.776316 | 0.736842 | 0.526316 | 0.171053 | 0 | 0 | 0 | 0.138889 | 0.259259 | 729 | 21 | 66 | 34.714286 | 0.424074 | 0 | 0 | 0.117647 | 0 | 0 | 0.540466 | 0.540466 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9eb99be89f8c3fc8b0f17ca4c26f85a61803c227 | 96 | py | Python | venv/lib/python3.8/site-packages/rope/refactor/topackage.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/rope/refactor/topackage.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/rope/refactor/topackage.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/d8/83/47/d86c3973cf9be39d7cc2a59a4cf385ca8e19160ff32f3839d0e0cb11a9 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7b5e3e5f6e35636614d06c88e79635190dcbf6cd | 62 | py | Python | tensor2struct/datasets/__init__.py | DreamerDeo/tensor2struct-public | 48e41b7faf041189c17dff8445d9e2b4d709e753 | [
"MIT"
] | null | null | null | tensor2struct/datasets/__init__.py | DreamerDeo/tensor2struct-public | 48e41b7faf041189c17dff8445d9e2b4d709e753 | [
"MIT"
] | null | null | null | tensor2struct/datasets/__init__.py | DreamerDeo/tensor2struct-public | 48e41b7faf041189c17dff8445d9e2b4d709e753 | [
"MIT"
] | null | null | null | from . import overnight
from . import spider
from . import ssp | 20.666667 | 23 | 0.774194 | 9 | 62 | 5.333333 | 0.555556 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 62 | 3 | 24 | 20.666667 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7babb10f1c816ae63ebada6885d6192577b9fe17 | 25 | py | Python | __init__.py | JeffHeard/terrapyn | 8d883b6c8a729e972c12d6f0c3b2a34ea86dcd88 | [
"Apache-2.0"
] | 1 | 2016-10-27T14:58:04.000Z | 2016-10-27T14:58:04.000Z | __init__.py | JeffHeard/terrapyn | 8d883b6c8a729e972c12d6f0c3b2a34ea86dcd88 | [
"Apache-2.0"
] | null | null | null | __init__.py | JeffHeard/terrapyn | 8d883b6c8a729e972c12d6f0c3b2a34ea86dcd88 | [
"Apache-2.0"
] | null | null | null | import ows
import geocms
| 8.333333 | 13 | 0.84 | 4 | 25 | 5.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 2 | 14 | 12.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c889f8dd9250f3812967c26638d3f001e902c3ad | 41 | py | Python | taf/testlib/linux/maa/__init__.py | stepanandr/taf | 75cb85861f8e9703bab7dc6195f3926b8394e3d0 | [
"Apache-2.0"
] | 10 | 2016-12-16T00:05:58.000Z | 2018-10-30T17:48:25.000Z | taf/testlib/linux/maa/__init__.py | stepanandr/taf | 75cb85861f8e9703bab7dc6195f3926b8394e3d0 | [
"Apache-2.0"
] | 40 | 2017-01-04T23:07:05.000Z | 2018-04-16T19:52:02.000Z | taf/testlib/linux/maa/__init__.py | stepanandr/taf | 75cb85861f8e9703bab7dc6195f3926b8394e3d0 | [
"Apache-2.0"
] | 23 | 2016-12-30T05:03:53.000Z | 2020-04-01T08:40:24.000Z | from .maa import MatchActionAcceleration
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c88b5c5af758bae12b7c4e100640ba8f5a404572 | 132 | py | Python | ChatbotCreator/__init__.py | SamambaMan/chatbot-creator | 9a266d5b6f2100540aff64d72cd785b6069ad122 | [
"MIT"
] | 2 | 2021-04-20T23:12:35.000Z | 2021-04-23T16:05:39.000Z | ChatbotCreator/__init__.py | SamambaMan/chatbot-creator | 9a266d5b6f2100540aff64d72cd785b6069ad122 | [
"MIT"
] | null | null | null | ChatbotCreator/__init__.py | SamambaMan/chatbot-creator | 9a266d5b6f2100540aff64d72cd785b6069ad122 | [
"MIT"
] | 1 | 2021-04-21T00:25:15.000Z | 2021-04-21T00:25:15.000Z | from ChatbotCreator.dbot import CreateDiscordBot
from ChatbotCreator.run import Run
from ChatbotCreator.cc import ChatbotCreator
| 33 | 49 | 0.863636 | 15 | 132 | 7.6 | 0.466667 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 132 | 3 | 50 | 44 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8c4bda63f77889233fa80787aa027cc09f9897e | 57 | py | Python | python/ga4gh/wes/__init__.py | junjun-zhang/workflow-execution-service-schemas | 120cd3b265e531855c4ae48046bf596a3e83329f | [
"Apache-2.0"
] | null | null | null | python/ga4gh/wes/__init__.py | junjun-zhang/workflow-execution-service-schemas | 120cd3b265e531855c4ae48046bf596a3e83329f | [
"Apache-2.0"
] | null | null | null | python/ga4gh/wes/__init__.py | junjun-zhang/workflow-execution-service-schemas | 120cd3b265e531855c4ae48046bf596a3e83329f | [
"Apache-2.0"
] | 1 | 2018-05-30T20:53:36.000Z | 2018-05-30T20:53:36.000Z | import client
import server
assert server
assert client
| 9.5 | 13 | 0.842105 | 8 | 57 | 6 | 0.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 57 | 5 | 14 | 11.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c8f0a3a5f223c29f35ad538b05c7a54e802177c7 | 45 | py | Python | sqljob/__init__.py | kota7/sqljob | 32c88f637ae25ace23bdb6a9be09d90d878f8ccb | [
"MIT"
] | null | null | null | sqljob/__init__.py | kota7/sqljob | 32c88f637ae25ace23bdb6a9be09d90d878f8ccb | [
"MIT"
] | null | null | null | sqljob/__init__.py | kota7/sqljob | 32c88f637ae25ace23bdb6a9be09d90d878f8ccb | [
"MIT"
] | null | null | null | from .sqljob import sqljob, SqlJob, Connector | 45 | 45 | 0.822222 | 6 | 45 | 6.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 1 | 45 | 45 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8f76a05fe0a774394c6f6e98dca9fc237e5b68d | 23 | py | Python | HSVideo/__init__.py | jwbrooks0/johnspythonlibrary2 | 10ca519276d8c32da0fbd41a597f75c0c98a8736 | [
"MIT"
] | null | null | null | HSVideo/__init__.py | jwbrooks0/johnspythonlibrary2 | 10ca519276d8c32da0fbd41a597f75c0c98a8736 | [
"MIT"
] | null | null | null | HSVideo/__init__.py | jwbrooks0/johnspythonlibrary2 | 10ca519276d8c32da0fbd41a597f75c0c98a8736 | [
"MIT"
] | null | null | null | from ._hsvideo import * | 23 | 23 | 0.782609 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8fd3bf5ad7becd53168fbf06ca72ade6da3ec43 | 1,599 | py | Python | students/k3342/laboratory_works/Demin_Danil/laboratory_work_2-3/django-vue-auth/django_project/django_app/migrations/0003_auto_20200814_2201.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 10 | 2020-03-20T09:06:12.000Z | 2021-07-27T13:06:02.000Z | students/k3342/laboratory_works/Demin_Danil/laboratory_work_2-3/django-vue-auth/django_project/django_app/migrations/0003_auto_20200814_2201.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 134 | 2020-03-23T09:47:48.000Z | 2022-03-12T01:05:19.000Z | students/k3342/laboratory_works/Demin_Danil/laboratory_work_2-3/django-vue-auth/django_project/django_app/migrations/0003_auto_20200814_2201.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 71 | 2020-03-20T12:45:56.000Z | 2021-10-31T19:22:25.000Z | # Generated by Django 3.0.8 on 2020-08-14 22:01
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('django_app', '0002_auto_20200705_1507'),
]
operations = [
migrations.AddField(
model_name='breed',
name='avg_productivity',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='breed',
name='avg_weight',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='building',
name='number',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='cage',
name='number',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='cage',
name='row',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='chicken',
name='age',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='chicken',
name='productivity',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='chicken',
name='weight',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='worker',
name='salary',
field=models.IntegerField(default=0),
),
]
| 27.101695 | 50 | 0.52783 | 140 | 1,599 | 5.921429 | 0.314286 | 0.195416 | 0.249698 | 0.293124 | 0.780458 | 0.743064 | 0.743064 | 0.681544 | 0.681544 | 0.681544 | 0 | 0.03876 | 0.354597 | 1,599 | 58 | 51 | 27.568966 | 0.764535 | 0.028143 | 0 | 0.692308 | 1 | 0 | 0.099227 | 0.01482 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019231 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cdaffca0ec8cc082ee712c4000dfdc3fb6bcdb05 | 19 | py | Python | lltk/corpus/tcp/__init__.py | literarylab/lltk | 0e516d7fa0978c1a3bd2cb7636f0089772e515ec | [
"MIT"
] | 5 | 2021-03-15T21:05:06.000Z | 2022-03-04T10:52:16.000Z | lltk/corpus/tcp/__init__.py | literarylab/lltk | 0e516d7fa0978c1a3bd2cb7636f0089772e515ec | [
"MIT"
] | 1 | 2021-05-04T17:01:47.000Z | 2021-05-10T15:14:55.000Z | lltk/corpus/tcp/__init__.py | literarylab/lltk | 0e516d7fa0978c1a3bd2cb7636f0089772e515ec | [
"MIT"
] | null | null | null | from .tcp import *
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b5319c21e7738c887dd6d54c2e970224bd9fdf04 | 98 | py | Python | pydocxs3upload/exceptions.py | jhubert/pydocx-s3-images | e7c96b257c67db8043822292550c193db410a4e6 | [
"Apache-2.0"
] | 2 | 2016-04-17T02:45:33.000Z | 2019-07-26T09:26:41.000Z | pydocxs3upload/exceptions.py | jhubert/pydocx-s3-images | e7c96b257c67db8043822292550c193db410a4e6 | [
"Apache-2.0"
] | 3 | 2015-07-17T20:04:53.000Z | 2015-07-18T19:24:40.000Z | pydocxs3upload/exceptions.py | jhubert/pydocx-s3-images | e7c96b257c67db8043822292550c193db410a4e6 | [
"Apache-2.0"
] | null | null | null | from __future__ import (
absolute_import,
)
class ImageUploadException(Exception):
pass
| 12.25 | 38 | 0.744898 | 9 | 98 | 7.555556 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193878 | 98 | 7 | 39 | 14 | 0.860759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b5a7eccebecfc2d7bd617f2ccc94c4e890ce2263 | 93 | py | Python | config.py | mayukh18/covidexplore | 7181c845e789c4d2eb2477dc4f6c6ea1f761761b | [
"MIT"
] | 5 | 2020-03-31T19:30:11.000Z | 2020-05-12T10:37:12.000Z | config.py | mayukh18/covidexplore | 7181c845e789c4d2eb2477dc4f6c6ea1f761761b | [
"MIT"
] | 1 | 2020-04-02T23:02:07.000Z | 2020-04-02T23:02:07.000Z | config.py | mayukh18/covidexplore | 7181c845e789c4d2eb2477dc4f6c6ea1f761761b | [
"MIT"
] | 3 | 2020-05-12T05:41:52.000Z | 2020-07-26T03:54:49.000Z | # Dates
CASES_DATA_UPDATE_DATE = 'April 23, 2020'
CLIMATE_DATA_UPDATE_DATE = 'April 23, 2020' | 31 | 43 | 0.784946 | 15 | 93 | 4.466667 | 0.6 | 0.298507 | 0.41791 | 0.567164 | 0.746269 | 0.746269 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 0.11828 | 93 | 3 | 43 | 31 | 0.670732 | 0.053763 | 0 | 0 | 0 | 0 | 0.321839 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a982398c3388704ba31d1f9546c7f4643b40b852 | 6,932 | py | Python | hubcheck/pageobjects/widgets/members_profile_form.py | codedsk/hubcheck | 2ff506eb56ba00f035300862f8848e4168452a17 | [
"MIT"
] | 1 | 2016-02-13T13:42:23.000Z | 2016-02-13T13:42:23.000Z | hubcheck/pageobjects/widgets/members_profile_form.py | codedsk/hubcheck | 2ff506eb56ba00f035300862f8848e4168452a17 | [
"MIT"
] | null | null | null | hubcheck/pageobjects/widgets/members_profile_form.py | codedsk/hubcheck | 2ff506eb56ba00f035300862f8848e4168452a17 | [
"MIT"
] | null | null | null | from hubcheck.pageobjects.basepagewidget import BasePageWidget
class MembersProfileForm1(BasePageWidget):
def __init__(self, owner, locatordict={}):
super(MembersProfileForm1,self).__init__(owner,locatordict)
# load hub's classes
MembersProfileForm_Locators = self.load_class('MembersProfileForm_Locators')
MembersProfileBiography = self.load_class('MembersProfileBiography')
MembersProfileCitizenship = self.load_class('MembersProfileCitizenship')
MembersProfileEmail = self.load_class('MembersProfileEmail')
MembersProfileEmployment = self.load_class('MembersProfileEmployment')
MembersProfileGender = self.load_class('MembersProfileGender')
MembersProfileInterests = self.load_class('MembersProfileInterests')
MembersProfileMailPreference = self.load_class('MembersProfileMailPreference')
MembersProfileName = self.load_class('MembersProfileName')
MembersProfileOrganization = self.load_class('MembersProfileOrganization')
MembersProfilePassword = self.load_class('MembersProfilePassword')
MembersProfileResidence = self.load_class('MembersProfileResidence')
MembersProfileWebsite = self.load_class('MembersProfileWebsite')
MembersProfileTelephone = self.load_class('MembersProfileTelephone')
# update this object's locator
self.locators.update(MembersProfileForm_Locators.locators)
# update the locators with those from the owner
self.update_locators_from_owner()
# setup page object's components
self.biography = MembersProfileBiography(self,{'base':'biography'})
self.citizenship = MembersProfileCitizenship(self,{'base':'citizenship'})
self.email = MembersProfileEmail(self,{'base':'email'})
self.employment = MembersProfileEmployment(self,{'base':'employment'})
self.gender = MembersProfileGender(self,{'base':'gender'})
self.interests = MembersProfileInterests(self,{'base':'interests'})
self.mailpreference = MembersProfileMailPreference(self,{'base':'mailpreference'})
self.name = MembersProfileName(self,{'base':'name'})
self.organization = MembersProfileOrganization(self,{'base':'organization'})
self.password = MembersProfilePassword(self,{'base':'password'})
self.residence = MembersProfileResidence(self,{'base':'residence'})
self.website = MembersProfileWebsite(self,{'base':'website'})
self.telephone = MembersProfileTelephone(self,{'base':'telephone'})
# update the component's locators with this objects overrides
self._updateLocators()
class MembersProfileForm2(BasePageWidget):
def __init__(self, owner, locatordict=None):
super(MembersProfileForm2,self).__init__(owner,locatordict)
# load hub's classes
MembersProfileForm_Locators = self.load_class('MembersProfileForm_Locators')
MembersProfileBiography = self.load_class('MembersProfileBiography')
MembersProfileEmail = self.load_class('MembersProfileEmail')
MembersProfileName = self.load_class('MembersProfileName')
MembersProfilePassword = self.load_class('MembersProfilePassword')
# update this object's locator
self.locators.update(MembersProfileForm_Locators.locators)
# update the locators with those from the owner
self.update_locators_from_owner()
# setup page object's components
self.biography = MembersProfileBiography(self,{'base':'biography'})
self.email = MembersProfileEmail(self,{'base':'email'})
self.name = MembersProfileName(self,{'base':'name'})
self.password = MembersProfilePassword(self,{'base':'password'})
# update the component's locators with this objects overrides
self._updateLocators()
class MembersProfileForm3(BasePageWidget):
def __init__(self, owner, locatordict={}):
super(MembersProfileForm3,self).__init__(owner,locatordict)
# load hub's classes
MembersProfileForm_Locators = self.load_class('MembersProfileForm_Locators')
MembersProfileBiography = self.load_class('MembersProfileBiography')
MembersProfileEmail = self.load_class('MembersProfileEmail')
MembersProfileEmployment = self.load_class('MembersProfileEmployment')
MembersProfileInterests = self.load_class('MembersProfileInterests')
MembersProfileMailPreference = self.load_class('MembersProfileMailPreference')
MembersProfileName = self.load_class('MembersProfileName')
MembersProfileOrganization = self.load_class('MembersProfileOrganization')
MembersProfileWebsite = self.load_class('MembersProfileWebsite')
MembersProfileTelephone = self.load_class('MembersProfileTelephone')
# update this object's locator
self.locators.update(MembersProfileForm_Locators.locators)
# update the locators with those from the owner
self.update_locators_from_owner()
# setup page object's components
self.biography = MembersProfileBiography(self,{'base':'biography'})
self.email = MembersProfileEmail(self,{'base':'email'})
self.employment = MembersProfileEmployment(self,{'base':'employment'})
self.interests = MembersProfileInterests(self,{'base':'interests'})
self.mailpreference = MembersProfileMailPreference(self,{'base':'mailpreference'})
self.name = MembersProfileName(self,{'base':'name'})
self.organization = MembersProfileOrganization(self,{'base':'organization'})
self.website = MembersProfileWebsite(self,{'base':'website'})
self.telephone = MembersProfileTelephone(self,{'base':'telephone'})
# update the component's locators with this objects overrides
self._updateLocators()
class MembersProfileForm_Locators_Base(object):
"""locators for MembersProfileForm object"""
locators = {
'base' : "css=#profile",
'biography' : "css=.profile-bio",
'citizenship' : "css=.profile-countryorigin",
'email' : "css=.profile-email",
'employment' : "css=.profile-orgtype",
'gender' : "css=.profile-sex",
'interests' : "css=.profile-interests",
'mailpreference' : "css=.profile-optin",
'name' : "css=.profile-name",
'organization' : "css=.profile-org",
'password' : "css=.profile-password",
'residence' : "css=.profile-countryresident",
'website' : "css=.profile-web",
'telephone' : "css=.profile-phone",
}
| 53.323077 | 90 | 0.67513 | 553 | 6,932 | 8.325497 | 0.135624 | 0.050391 | 0.081885 | 0.01629 | 0.798653 | 0.763249 | 0.734361 | 0.706125 | 0.706125 | 0.706125 | 0 | 0.001108 | 0.218552 | 6,932 | 129 | 91 | 53.736434 | 0.848809 | 0.08569 | 0 | 0.659341 | 0 | 0 | 0.216208 | 0.099557 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032967 | false | 0.054945 | 0.010989 | 0 | 0.098901 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a9834508a52115f665f75846901ea4cb8df774e5 | 183 | py | Python | A-Byte-of-Python/18_5_list_comprehension.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | A-Byte-of-Python/18_5_list_comprehension.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | A-Byte-of-Python/18_5_list_comprehension.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | list_one = [2, 3, 4]
list_two = [2 * i for i in list_one if i > 2]
print(list_two)
print()
list_one = [2, 3, 4]
list_two = [2 * i for i in list_one if i > 1]
print(list_two)
print()
| 18.3 | 45 | 0.622951 | 42 | 183 | 2.52381 | 0.285714 | 0.264151 | 0.150943 | 0.169811 | 0.660377 | 0.660377 | 0.660377 | 0.660377 | 0.660377 | 0.660377 | 0 | 0.070423 | 0.224044 | 183 | 9 | 46 | 20.333333 | 0.676056 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a99e3827965cc9149cc3166a80d1d4563b0641dc | 144 | py | Python | helloWorld.py | jakekali/learning-python | 6d390883fa32b43a39fbed7ea4345095ea35ff05 | [
"MIT"
] | null | null | null | helloWorld.py | jakekali/learning-python | 6d390883fa32b43a39fbed7ea4345095ea35ff05 | [
"MIT"
] | null | null | null | helloWorld.py | jakekali/learning-python | 6d390883fa32b43a39fbed7ea4345095ea35ff05 | [
"MIT"
] | null | null | null | x = "Hello " + "World"
print(x)
if(x == "Hello World"):
print("X is equal to hello World")
else:
print("X is not equal to Hello World") | 20.571429 | 42 | 0.604167 | 25 | 144 | 3.48 | 0.4 | 0.45977 | 0.252874 | 0.367816 | 0.390805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229167 | 144 | 7 | 42 | 20.571429 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0.524138 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8d19910405d7a6c1c4c2a464035cf616bd61f887 | 26,030 | py | Python | Model&Data/CADA-VAE/main.py | LiangjunFeng/Generative-Any-Shot-Learning | 693c4ab92f2eb04cc453c870782710a982f98e80 | [
"Apache-2.0"
] | null | null | null | Model&Data/CADA-VAE/main.py | LiangjunFeng/Generative-Any-Shot-Learning | 693c4ab92f2eb04cc453c870782710a982f98e80 | [
"Apache-2.0"
] | null | null | null | Model&Data/CADA-VAE/main.py | LiangjunFeng/Generative-Any-Shot-Learning | 693c4ab92f2eb04cc453c870782710a982f98e80 | [
"Apache-2.0"
] | null | null | null | # execute
# generalized ZSL
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA1 --few_train False --num_shots 0 --generalized True > awa1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset SUN --few_train False --num_shots 0 --generalized True > sun.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset CUB --few_train False --num_shots 0 --generalized True > cub.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 0 --generalized True > awa2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized True > flo.log 2>&1 &
# naive feature
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 0 --generalized True --image_embedding res101_naive > awa2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset SUN --few_train False --num_shots 0 --generalized True --image_embedding res101_naive > sun.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset CUB --few_train False --num_shots 0 --generalized True --image_embedding res101_naive > cub.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized True --image_embedding res101_naive > flo.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset aPY --few_train False --num_shots 0 --generalized True --image_embedding res101_naive > apy.log 2>&1 &
# finetue feature
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 0 --generalized True --image_embedding res101_finetune > awa2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset SUN --few_train False --num_shots 0 --generalized True --image_embedding res101_finetune > sun.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset CUB --few_train False --num_shots 0 --generalized True --image_embedding res101_finetune > cub.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized True --image_embedding res101_finetune > flo.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset aPY --few_train False --num_shots 0 --generalized True --image_embedding res101_finetune > apy.log 2>&1 &
# reg feature
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized True --image_embedding res101_reg > flo.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset CUB --few_train False --num_shots 0 --generalized True --image_embedding res101_reg > cub.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset SUN --few_train False --num_shots 0 --generalized True --image_embedding res101_reg > sun.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 0 --generalized True --image_embedding res101_reg > awa2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset aPY --few_train False --num_shots 0 --generalized True --image_embedding res101_reg > apy.log 2>&1 &
# few shot
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train False --num_shots 1 --generalized True --image_embedding res101_reg > flo0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train False --num_shots 5 --generalized True --image_embedding res101_reg > flo1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train False --num_shots 10 --generalized True --image_embedding res101_reg > flo2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train False --num_shots 20 --generalized True --image_embedding res101_reg > flo3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train True --num_shots 1 --generalized True --image_embedding res101_naive > flo0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train True --num_shots 5 --generalized True --image_embedding res101_naive > flo1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train True --num_shots 10 --generalized True --image_embedding res101_naive > flo2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train True --num_shots 20 --generalized True --image_embedding res101_naive > flo3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset CUB --few_train False --num_shots 1 --generalized True --image_embedding res101_reg > cub0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset CUB --few_train False --num_shots 5 --generalized True --image_embedding res101_reg > cub1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset CUB --few_train False --num_shots 10 --generalized True --image_embedding res101_reg > cub2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset CUB --few_train False --num_shots 20 --generalized True --image_embedding res101_reg > cub3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset CUB --few_train True --num_shots 1 --generalized True --image_embedding res101_naive > cub0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset CUB --few_train True --num_shots 5 --generalized True --image_embedding res101_naive > cub1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset CUB --few_train True --num_shots 10 --generalized True --image_embedding res101_naive > cub2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset CUB --few_train True --num_shots 20 --generalized True --image_embedding res101_naive > cub3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset SUN --few_train False --num_shots 1 --generalized True --image_embedding res101_reg > sun0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset SUN --few_train False --num_shots 5 --generalized True --image_embedding res101_reg > sun1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset SUN --few_train False --num_shots 10 --generalized True --image_embedding res101_reg > sun2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset SUN --few_train True --num_shots 1 --generalized True --image_embedding res101 > sun0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset SUN --few_train True --num_shots 5 --generalized True --image_embedding res101 > sun1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset SUN --few_train True --num_shots 10 --generalized True --image_embedding res101 > sun2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 1 --generalized True --image_embedding res101_naive > awa20.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 5 --generalized True --image_embedding res101_naive > awa21.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 10 --generalized True --image_embedding res101_naive > awa22.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset AWA2 --few_train False --num_shots 20 --generalized True --image_embedding res101_naive > awa23.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA2 --few_train True --num_shots 1 --generalized True --image_embedding res101_naive > awa20.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset AWA2 --few_train True --num_shots 5 --generalized True --image_embedding res101_naive > awa21.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset AWA2 --few_train True --num_shots 10 --generalized True --image_embedding res101_naive > awa22.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset AWA2 --few_train True --num_shots 20 --generalized True --image_embedding res101_naive > awa23.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA1 --few_train False --num_shots 1 --generalized True --image_embedding res101 > awa10.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset AWA1 --few_train False --num_shots 5 --generalized True --image_embedding res101 > awa11.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset AWA1 --few_train False --num_shots 10 --generalized True --image_embedding res101 > awa12.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset AWA1 --few_train False --num_shots 20 --generalized True --image_embedding res101 > awa13.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset AWA1 --few_train True --num_shots 1 --generalized True --image_embedding res101 > awa10.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset AWA1 --few_train True --num_shots 5 --generalized True --image_embedding res101 > awa11.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset AWA1 --few_train True --num_shots 10 --generalized True --image_embedding res101 > awa12.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset AWA1 --few_train True --num_shots 20 --generalized True --image_embedding res101 > awa13.log 2>&1 &
# reg feature + att
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized False --image_embedding res101_reg --class_embedding att > flo0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized False --image_embedding res101_reg --class_embedding att_naive > flo1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized False --image_embedding res101_reg --class_embedding att_GRU > flo2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train False --num_shots 0 --generalized False --image_embedding res101_reg --class_embedding att_GRU_biased > flo3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train True --num_shots 1 --generalized True --image_embedding res101_naive --class_embedding att_GRU_biased > flo0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train True --num_shots 5 --generalized True --image_embedding res101_naive --class_embedding att_GRU_biased > flo1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train True --num_shots 10 --generalized True --image_embedding res101_naive --class_embedding att_GRU_biased > flo2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train True --num_shots 20 --generalized True --image_embedding res101_naive --class_embedding att_GRU_biased > flo3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train True --num_shots 1 --generalized True --image_embedding res101_naive --class_embedding att > flo4.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train True --num_shots 5 --generalized True --image_embedding res101_naive --class_embedding att > flo5.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train True --num_shots 10 --generalized True --image_embedding res101_naive --class_embedding att > flo6.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train True --num_shots 20 --generalized True --image_embedding res101_naive --class_embedding att > flo7.log 2>&1 &
# few shot + class
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train False --num_shots 1 --generalized False --image_embedding res101_reg --class_embedding att_GRU_biased > flo0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train False --num_shots 5 --generalized False --image_embedding res101_reg --class_embedding att_GRU_biased > flo1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train False --num_shots 10 --generalized False --image_embedding res101_reg --class_embedding att_GRU_biased > flo2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train False --num_shots 20 --generalized False --image_embedding res101_reg --class_embedding att_GRU_biased > flo3.log 2>&1 &
# CUDA_VISIBLE_DEVICES=0 nohup python -u main.py --dataset FLO --few_train True --num_shots 1 --generalized False --image_embedding res101_naive --class_embedding att_GRU_biased > flo0.log 2>&1 &
# CUDA_VISIBLE_DEVICES=1 nohup python -u main.py --dataset FLO --few_train True --num_shots 5 --generalized False --image_embedding res101_naive --class_embedding att_GRU_biased > flo1.log 2>&1 &
# CUDA_VISIBLE_DEVICES=2 nohup python -u main.py --dataset FLO --few_train True --num_shots 10 --generalized False --image_embedding res101_naive --class_embedding att_GRU_biased > flo2.log 2>&1 &
# CUDA_VISIBLE_DEVICES=3 nohup python -u main.py --dataset FLO --few_train True --num_shots 20 --generalized False --image_embedding res101_naive --class_embedding att_GRU_biased > flo3.log 2>&1 &
from vaemodel import Model
import torch
import argparse
import warnings
import numpy as np
warnings.filterwarnings("ignore")
def str2bool(v):
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='FLO', help='FLO')
parser.add_argument('--few_train', default = False, type = str2bool, help='use few train samples')
parser.add_argument('--num_shots', type=int, default=5, help='the number of shots, if few_train, then num_shots is for train classes, else for test classes')
parser.add_argument('--generalized', default=False, type = str2bool, help='enable generalized zero-shot learning')
parser.add_argument('--image_embedding', default='res101', help='res101')
parser.add_argument('--class_embedding', default='att', help='att')
args = parser.parse_args()
########################################
# the basic hyperparameters
########################################
hyperparameters = {
    'num_shots': 0,
    'device': 'cuda',
    'model_specifics': {
        'cross_reconstruction': True,
        'name': 'CADA',
        'distance': 'wasserstein',
        'warmup': {
            'beta': {'factor': 0.25, 'end_epoch': 93, 'start_epoch': 0},
            'cross_reconstruction': {'factor': 2.37, 'end_epoch': 75, 'start_epoch': 21},
            'distance': {'factor': 8.13, 'end_epoch': 22, 'start_epoch': 6},
        },
    },
    'lr_gen_model': 0.00015,
    'generalized': True,
    'batch_size': 50,
    'xyu_samples_per_class': {
        'SUN': (200, 0, 400, 0),
        'aPY': (200, 0, 400, 0),
        'CUB': (200, 0, 400, 0),
        'AWA2': (200, 0, 400, 0),
        'FLO': (200, 0, 400, 0),
        'AWA1': (200, 0, 400, 0),
    },
    'epochs': 100,
    'loss': 'l1',
    'auxiliary_data_source': 'attributes',
    'lr_cls': 0.001,
    'dataset': 'CUB',
    'hidden_size_rule': {
        'resnet_features': (1560, 1660),
        'attributes': (1450, 665),
        'sentences': (1450, 665),
    },
    'latent_size': 64,
}
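# Hedged sketch (an assumption, not code from this script): 'warmup' factors
# like the ones configured above are typically annealed linearly between
# start_epoch and end_epoch in CADA-VAE-style training.  warmup_weight() is
# illustrative only; the real schedule lives inside vaemodel.Model.
def warmup_weight(epoch, factor, start_epoch, end_epoch):
    """Linearly scale a loss weight from 0 at start_epoch to factor at end_epoch."""
    if epoch <= start_epoch:
        return 0.0
    if epoch >= end_epoch:
        return factor
    return factor * (epoch - start_epoch) / (end_epoch - start_epoch)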
# The training epochs for the final classifier (for early stopping),
# as determined on the validation split.
cls_train_steps = [
    {'dataset': 'SUN', 'num_shots': 0, 'generalized': True, 'cls_train_steps': 21},
    {'dataset': 'SUN', 'num_shots': 0, 'generalized': False, 'cls_train_steps': 30},
    {'dataset': 'SUN', 'num_shots': 1, 'generalized': True, 'cls_train_steps': 22},
    {'dataset': 'SUN', 'num_shots': 1, 'generalized': False, 'cls_train_steps': 96},
    {'dataset': 'SUN', 'num_shots': 5, 'generalized': True, 'cls_train_steps': 29},
    {'dataset': 'SUN', 'num_shots': 5, 'generalized': False, 'cls_train_steps': 78},
    {'dataset': 'SUN', 'num_shots': 2, 'generalized': True, 'cls_train_steps': 29},
    {'dataset': 'SUN', 'num_shots': 2, 'generalized': False, 'cls_train_steps': 61},
    {'dataset': 'SUN', 'num_shots': 10, 'generalized': True, 'cls_train_steps': 79},
    {'dataset': 'SUN', 'num_shots': 10, 'generalized': False, 'cls_train_steps': 94},
    {'dataset': 'SUN', 'num_shots': 20, 'generalized': True, 'cls_train_steps': 79},
    {'dataset': 'SUN', 'num_shots': 20, 'generalized': False, 'cls_train_steps': 94},
    {'dataset': 'AWA1', 'num_shots': 0, 'generalized': True, 'cls_train_steps': 33},
    {'dataset': 'AWA1', 'num_shots': 0, 'generalized': False, 'cls_train_steps': 100},
    {'dataset': 'AWA1', 'num_shots': 1, 'generalized': True, 'cls_train_steps': 40},
    {'dataset': 'AWA1', 'num_shots': 1, 'generalized': False, 'cls_train_steps': 81},
    {'dataset': 'AWA1', 'num_shots': 5, 'generalized': True, 'cls_train_steps': 89},
    {'dataset': 'AWA1', 'num_shots': 5, 'generalized': False, 'cls_train_steps': 62},
    {'dataset': 'AWA1', 'num_shots': 2, 'generalized': True, 'cls_train_steps': 56},
    {'dataset': 'AWA1', 'num_shots': 2, 'generalized': False, 'cls_train_steps': 59},
    {'dataset': 'AWA1', 'num_shots': 10, 'generalized': True, 'cls_train_steps': 100},
    {'dataset': 'AWA1', 'num_shots': 10, 'generalized': False, 'cls_train_steps': 50},
    {'dataset': 'AWA1', 'num_shots': 20, 'generalized': True, 'cls_train_steps': 100},
    {'dataset': 'AWA1', 'num_shots': 20, 'generalized': False, 'cls_train_steps': 50},
    {'dataset': 'CUB', 'num_shots': 0, 'generalized': True, 'cls_train_steps': 100},
    {'dataset': 'CUB', 'num_shots': 0, 'generalized': False, 'cls_train_steps': 100},
    {'dataset': 'CUB', 'num_shots': 1, 'generalized': True, 'cls_train_steps': 34},
    {'dataset': 'CUB', 'num_shots': 1, 'generalized': False, 'cls_train_steps': 46},
    {'dataset': 'CUB', 'num_shots': 5, 'generalized': True, 'cls_train_steps': 64},
    {'dataset': 'CUB', 'num_shots': 5, 'generalized': False, 'cls_train_steps': 73},
    {'dataset': 'CUB', 'num_shots': 2, 'generalized': True, 'cls_train_steps': 39},
    {'dataset': 'CUB', 'num_shots': 2, 'generalized': False, 'cls_train_steps': 31},
    {'dataset': 'CUB', 'num_shots': 10, 'generalized': True, 'cls_train_steps': 85},
    {'dataset': 'CUB', 'num_shots': 10, 'generalized': False, 'cls_train_steps': 67},
    {'dataset': 'CUB', 'num_shots': 20, 'generalized': True, 'cls_train_steps': 85},
    {'dataset': 'CUB', 'num_shots': 20, 'generalized': False, 'cls_train_steps': 67},
    {'dataset': 'AWA2', 'num_shots': 0, 'generalized': True, 'cls_train_steps': 29},
    {'dataset': 'AWA2', 'num_shots': 0, 'generalized': False, 'cls_train_steps': 39},
    {'dataset': 'AWA2', 'num_shots': 1, 'generalized': True, 'cls_train_steps': 44},
    {'dataset': 'AWA2', 'num_shots': 1, 'generalized': False, 'cls_train_steps': 96},
    {'dataset': 'AWA2', 'num_shots': 5, 'generalized': True, 'cls_train_steps': 99},
    {'dataset': 'AWA2', 'num_shots': 5, 'generalized': False, 'cls_train_steps': 100},
    {'dataset': 'AWA2', 'num_shots': 2, 'generalized': True, 'cls_train_steps': 69},
    {'dataset': 'AWA2', 'num_shots': 2, 'generalized': False, 'cls_train_steps': 79},
    {'dataset': 'AWA2', 'num_shots': 10, 'generalized': True, 'cls_train_steps': 86},
    {'dataset': 'AWA2', 'num_shots': 10, 'generalized': False, 'cls_train_steps': 78},
    {'dataset': 'AWA2', 'num_shots': 20, 'generalized': True, 'cls_train_steps': 86},
    {'dataset': 'AWA2', 'num_shots': 20, 'generalized': False, 'cls_train_steps': 78},
    {'dataset': 'aPY', 'num_shots': 0, 'generalized': True, 'cls_train_steps': 23},
    {'dataset': 'aPY', 'num_shots': 0, 'generalized': False, 'cls_train_steps': 22},
    {'dataset': 'aPY', 'num_shots': 1, 'generalized': True, 'cls_train_steps': 34},
    {'dataset': 'aPY', 'num_shots': 1, 'generalized': False, 'cls_train_steps': 46},
    {'dataset': 'aPY', 'num_shots': 5, 'generalized': True, 'cls_train_steps': 64},
    {'dataset': 'aPY', 'num_shots': 5, 'generalized': False, 'cls_train_steps': 73},
    {'dataset': 'aPY', 'num_shots': 2, 'generalized': True, 'cls_train_steps': 39},
    {'dataset': 'aPY', 'num_shots': 2, 'generalized': False, 'cls_train_steps': 31},
    {'dataset': 'aPY', 'num_shots': 10, 'generalized': True, 'cls_train_steps': 85},
    {'dataset': 'aPY', 'num_shots': 10, 'generalized': False, 'cls_train_steps': 67},
    {'dataset': 'aPY', 'num_shots': 20, 'generalized': True, 'cls_train_steps': 85},
    {'dataset': 'aPY', 'num_shots': 20, 'generalized': False, 'cls_train_steps': 67},
    {'dataset': 'FLO', 'num_shots': 0, 'generalized': True, 'cls_train_steps': 23},
    {'dataset': 'FLO', 'num_shots': 0, 'generalized': False, 'cls_train_steps': 100},
    {'dataset': 'FLO', 'num_shots': 1, 'generalized': True, 'cls_train_steps': 34},
    {'dataset': 'FLO', 'num_shots': 1, 'generalized': False, 'cls_train_steps': 46},
    {'dataset': 'FLO', 'num_shots': 5, 'generalized': True, 'cls_train_steps': 64},
    {'dataset': 'FLO', 'num_shots': 5, 'generalized': False, 'cls_train_steps': 73},
    {'dataset': 'FLO', 'num_shots': 2, 'generalized': True, 'cls_train_steps': 39},
    {'dataset': 'FLO', 'num_shots': 2, 'generalized': False, 'cls_train_steps': 31},
    {'dataset': 'FLO', 'num_shots': 10, 'generalized': True, 'cls_train_steps': 85},
    {'dataset': 'FLO', 'num_shots': 10, 'generalized': False, 'cls_train_steps': 67},
    {'dataset': 'FLO', 'num_shots': 20, 'generalized': True, 'cls_train_steps': 85},
    {'dataset': 'FLO', 'num_shots': 20, 'generalized': False, 'cls_train_steps': 67},
]
##################################
# change some hyperparameters here
##################################
hyperparameters['dataset'] = args.dataset
hyperparameters['num_shots'] = args.num_shots
hyperparameters['generalized'] = args.generalized
hyperparameters['few_train'] = args.few_train
hyperparameters['image_embedding'] = args.image_embedding
hyperparameters['class_embedding'] = args.class_embedding
hyperparameters['cls_train_steps'] = [
    x['cls_train_steps'] for x in cls_train_steps
    if all([hyperparameters['dataset'] == x['dataset'],
            hyperparameters['num_shots'] == x['num_shots'],
            hyperparameters['generalized'] == x['generalized']])
][0]
print('***')
print(hyperparameters['cls_train_steps'])
if hyperparameters['generalized']:
    if hyperparameters['few_train'] or hyperparameters['num_shots'] == 0:
        hyperparameters['samples_per_class'] = {
            'CUB': (200, 0, 400, 0), 'SUN': (200, 0, 400, 0),
            'aPY': (200, 0, 400, 0), 'AWA1': (200, 0, 400, 0),
            'AWA2': (200, 0, 400, 0), 'FLO': (200, 0, 400, 0),
        }
    else:
        hyperparameters['samples_per_class'] = {
            'CUB': (200, 0, 200, 200), 'SUN': (200, 0, 200, 200),
            'aPY': (200, 0, 200, 200), 'AWA1': (200, 0, 200, 200),
            'AWA2': (200, 0, 200, 200), 'FLO': (200, 0, 200, 200),
        }
else:
    if hyperparameters['few_train'] or hyperparameters['num_shots'] == 0:
        hyperparameters['samples_per_class'] = {
            'CUB': (0, 0, 200, 0), 'SUN': (0, 0, 200, 0),
            'aPY': (0, 0, 200, 0), 'AWA1': (0, 0, 200, 0),
            'AWA2': (0, 0, 200, 0), 'FLO': (0, 0, 200, 0),
        }
    else:
        hyperparameters['samples_per_class'] = {
            'CUB': (0, 0, 200, 200), 'SUN': (0, 0, 200, 200),
            'aPY': (0, 0, 200, 200), 'AWA1': (0, 0, 200, 200),
            'AWA2': (0, 0, 200, 200), 'FLO': (0, 0, 200, 200),
        }
model = Model(hyperparameters)
model.to(hyperparameters['device'])
"""
########################################
### load model where u left
########################################
saved_state = torch.load('./saved_models/CADA_trained.pth.tar')
model.load_state_dict(saved_state['state_dict'])
for d in model.all_data_sources_without_duplicates:
model.encoder[d].load_state_dict(saved_state['encoder'][d])
model.decoder[d].load_state_dict(saved_state['decoder'][d])
########################################
"""
losses = model.train_vae()
u,s,h,history = model.train_classifier()
if model.DATASET == "AWA2":
syn_feature, syn_label = model.generate_syn_feature()
np.save("./cadavae_feat.npy", syn_feature.data.cpu().numpy())
np.save("./cadavae_label.npy", syn_label.data.cpu().numpy())
print(syn_feature.data.cpu().numpy().shape, syn_label.data.cpu().numpy().shape)
if hyperparameters['generalized']:
    acc = [hi[2] for hi in history]
else:
    acc = [hi[1] for hi in history]
print(acc[-1])
state = {
    'state_dict': model.state_dict(),
    'hyperparameters': hyperparameters,
    'encoder': {},
    'decoder': {},
}
for d in model.all_data_sources:
    state['encoder'][d] = model.encoder[d].state_dict()
    state['decoder'][d] = model.decoder[d].state_dict()
torch.save(state, 'CADA_trained.pth.tar')
print('>> saved')
# court_scraper/platforms/wiscourts/__init__.py (DiPierro/court-scraper, ISC)
from .site import WiscourtSite
# nose2_kflag/tests/scenario/doctests/pymodule.py (stefanholek/nose2-kflag, BSD-2-Clause)
"""
>>> print('foo')
foo
"""


def func_foo():
    """
    >>> print('foo')
    foo
    """


def func_bar():
    """
    >>> print('bar')
    bar
    """


def func_baz():
    """
    >>> print('baz')
    baz
    """
# bills_service/bills_service/routes/__init__.py (reubinoff/bills, MIT)
from . import users
9392ab80e4bb630a8a2e1da2c6f9e5b211336f1e | 13,467 | py | Python | main/migrations/0001_initial.py | AminAliH47/PicoStyle | 768daccc6f28f08aa848318d633af1a19544e499 | [
"Apache-2.0"
] | 19 | 2022-02-16T20:00:08.000Z | 2022-03-08T17:38:59.000Z | main/migrations/0001_initial.py | AminAliH47/PicoStyle | 768daccc6f28f08aa848318d633af1a19544e499 | [
"Apache-2.0"
] | 3 | 2022-02-16T20:59:11.000Z | 2022-02-23T20:40:12.000Z | main/migrations/0001_initial.py | AminAliH47/PicoStyle | 768daccc6f28f08aa848318d633af1a19544e499 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.9 on 2022-02-12 08:44
import ckeditor.fields
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='BrandsSlider',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('page', models.CharField(choices=[('All', 'All'), ('Women', 'Women'), ('Men', 'Men'), ('Raw material', 'Raw material'), ('Life style', 'Life style')], help_text='Which home page to display?', max_length=15)),
                ('title_en', models.CharField(blank=True, max_length=15, verbose_name='Title')),
                ('title_ru', models.CharField(blank=True, max_length=15, verbose_name='Title')),
                ('title_it', models.CharField(blank=True, max_length=15, verbose_name='Title')),
                ('link', models.CharField(max_length=120)),
                ('description_en', models.TextField(blank=True, max_length=100, null=True, verbose_name='Description')),
                ('description_ru', models.TextField(blank=True, max_length=100, null=True, verbose_name='Description')),
                ('description_it', models.TextField(blank=True, max_length=100, null=True, verbose_name='Description')),
                ('button_en', models.CharField(blank=True, max_length=15, null=True, verbose_name='Button')),
                ('button_ru', models.CharField(blank=True, max_length=15, null=True, verbose_name='Button')),
                ('button_it', models.CharField(blank=True, max_length=15, null=True, verbose_name='Button')),
                ('button_link', models.CharField(blank=True, max_length=150, null=True)),
                ('cover', models.BooleanField(default=True, help_text='Have dark cover?')),
                ('image', models.ImageField(upload_to='')),
            ],
            options={
                'verbose_name': 'slide',
                'verbose_name_plural': '03. Brands slider',
            },
        ),
        migrations.CreateModel(
            name='MainCategories',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('page', models.CharField(choices=[('All', 'All'), ('Women', 'Women'), ('Men', 'Men'), ('Raw material', 'Raw material'), ('Life style', 'Life style')], help_text='Which home page to display?', max_length=15)),
                ('title_en', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('title_ru', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('title_it', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('image', models.ImageField(upload_to='main/categories')),
                ('link', models.CharField(max_length=100)),
            ],
            options={
                'verbose_name': 'category',
                'verbose_name_plural': '07. Main categories',
            },
        ),
        migrations.CreateModel(
            name='MainNavbar',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('page', models.CharField(choices=[('All', 'All'), ('Women', 'Women'), ('Men', 'Men'), ('Raw material', 'Raw material'), ('Life style', 'Life style')], help_text='Which home page to display?', max_length=15)),
                ('title_en', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('title_ru', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('title_it', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('slug', models.CharField(max_length=150)),
            ],
            options={
                'verbose_name': 'item',
                'verbose_name_plural': '04. Main navbar',
            },
        ),
        migrations.CreateModel(
            name='MainSlider',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('page', models.CharField(choices=[('All', 'All'), ('Women', 'Women'), ('Men', 'Men'), ('Raw material', 'Raw material'), ('Life style', 'Life style')], help_text='Which home page to display?', max_length=15)),
                ('title_en', models.CharField(blank=True, max_length=15, verbose_name='Title')),
                ('title_ru', models.CharField(blank=True, max_length=15, verbose_name='Title')),
                ('title_it', models.CharField(blank=True, max_length=15, verbose_name='Title')),
                ('link', models.CharField(max_length=120)),
                ('description_en', models.TextField(blank=True, max_length=120, null=True, verbose_name='Description')),
                ('description_ru', models.TextField(blank=True, max_length=120, null=True, verbose_name='Description')),
                ('description_it', models.TextField(blank=True, max_length=120, null=True, verbose_name='Description')),
                ('button_en', models.CharField(blank=True, max_length=15, null=True, verbose_name='Button')),
                ('button_ru', models.CharField(blank=True, max_length=15, null=True, verbose_name='Button')),
                ('button_it', models.CharField(blank=True, max_length=15, null=True, verbose_name='Button')),
                ('button_link', models.CharField(blank=True, max_length=150, null=True)),
                ('cover', models.BooleanField(default=True, help_text='Have dark cover?')),
                ('image', models.ImageField(upload_to='')),
            ],
            options={
                'verbose_name': 'slide',
                'verbose_name_plural': '02. Main slider',
            },
        ),
        migrations.CreateModel(
            name='Retailers',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('gender', models.CharField(choices=[('MS.', 'MS.'), ('MrS.', 'MrS.'), ('Mr.', 'Mr.')], max_length=4)),
                ('first_name', models.CharField(max_length=50)),
                ('last_name', models.CharField(max_length=50)),
                ('email', models.EmailField(max_length=254)),
                ('phone', models.CharField(max_length=11, validators=[django.core.validators.RegexValidator(message='Your entered phone number is not valid', regex='^[ 0-9]+$')])),
                ('address', models.TextField()),
                ('zip_code', models.CharField(max_length=20)),
                ('country', models.CharField(max_length=50)),
                ('city', models.CharField(max_length=50)),
                ('experience', models.CharField(choices=[('Yes', 'Yes'), ('No', 'No')], max_length=4)),
                ('experience_info', models.JSONField()),
                ('have_store', models.CharField(choices=[('Yes', 'Yes'), ('No', 'No')], max_length=4, null=True)),
                ('center_town', models.CharField(choices=[('Yes', 'Yes'), ('No', 'No')], max_length=4, null=True)),
            ],
            options={
                'verbose_name': 'retailer',
                'verbose_name_plural': '08. Retailers',
            },
        ),
        migrations.CreateModel(
            name='SiteSetting',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('site_title', models.CharField(max_length=100)),
                ('logo', models.ImageField(upload_to='main/logo')),
                ('favicon', models.ImageField(upload_to='main/favicon')),
            ],
            options={
                'verbose_name': 'setting',
                'verbose_name_plural': '01. Site setting',
            },
        ),
        migrations.CreateModel(
            name='SizeGuide',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title_en', models.CharField(max_length=100, verbose_name='title')),
                ('title_ru', models.CharField(max_length=100, verbose_name='title')),
                ('title_it', models.CharField(max_length=100, verbose_name='title')),
                ('link', models.CharField(max_length=120)),
                ('image', models.ImageField(upload_to='main/size-guide')),
            ],
            options={
                'verbose_name': 'item',
                'verbose_name_plural': '12. Size guide',
            },
        ),
        migrations.CreateModel(
            name='SocialMedia',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title_en', models.CharField(max_length=50, null=True, verbose_name='title')),
                ('title_ru', models.CharField(max_length=50, null=True, verbose_name='title')),
                ('title_it', models.CharField(max_length=50, null=True, verbose_name='title')),
                ('link', models.CharField(max_length=120)),
                ('icon_code', models.CharField(choices=[('linkedin', 'linkedin'), ('youtube', 'youtube'), ('facebook', 'facebook'), ('instagram', 'instagram'), ('telegram', 'telegram')], max_length=15)),
            ],
            options={
                'verbose_name': 'social media',
                'verbose_name_plural': '06. Social media',
            },
        ),
        migrations.CreateModel(
            name='SpecialProjects',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title_en', models.CharField(max_length=100, verbose_name='title')),
                ('title_ru', models.CharField(max_length=100, verbose_name='title')),
                ('title_it', models.CharField(max_length=100, verbose_name='title')),
                ('link', models.CharField(max_length=120)),
                ('image', models.ImageField(upload_to='main/special-project')),
            ],
            options={
                'verbose_name': 'item',
                'verbose_name_plural': '10. Special projects',
            },
        ),
        migrations.CreateModel(
            name='StoreAgent',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name_en', models.CharField(max_length=120, null=True, verbose_name='Name')),
                ('name_ru', models.CharField(max_length=120, null=True, verbose_name='Name')),
                ('name_it', models.CharField(max_length=120, null=True, verbose_name='Name')),
                ('country', models.CharField(max_length=120)),
                ('country_code', models.CharField(max_length=4)),
                ('description_en', ckeditor.fields.RichTextField(max_length=450, null=True, verbose_name='Description')),
                ('description_ru', ckeditor.fields.RichTextField(max_length=450, null=True, verbose_name='Description')),
                ('description_it', ckeditor.fields.RichTextField(max_length=450, null=True, verbose_name='Description')),
                ('image', models.ImageField(upload_to='main/store-and-agent')),
                ('active', models.BooleanField(default=True)),
            ],
            options={
                'verbose_name': 'store',
                'verbose_name_plural': '09. Store and agent',
            },
        ),
        migrations.CreateModel(
            name='SubNavbar',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('page', models.CharField(choices=[('All', 'All'), ('Women', 'Women'), ('Men', 'Men'), ('Raw material', 'Raw material'), ('Life style', 'Life style')], help_text='Which home page to display?', max_length=15)),
                ('title_en', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('title_ru', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('title_it', models.CharField(max_length=50, null=True, verbose_name='Title')),
                ('slug', models.CharField(max_length=150)),
            ],
            options={
                'verbose_name': 'item',
                'verbose_name_plural': '05. Sub navbar',
            },
        ),
        migrations.CreateModel(
            name='WorkWithUs',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title_en', models.CharField(max_length=100, verbose_name='title')),
                ('title_ru', models.CharField(max_length=100, verbose_name='title')),
                ('title_it', models.CharField(max_length=100, verbose_name='title')),
                ('link', models.CharField(max_length=120)),
                ('image', models.ImageField(upload_to='main/work-with-us')),
            ],
            options={
                'verbose_name': 'item',
                'verbose_name_plural': '11. Work with us',
            },
        ),
    ]
# datek_jaipur/domain/errors/player_created.py (DAtek/datek-jaipur, MIT)
from datek_jaipur.errors import EventValidationError
class PlayerCreatedValidationError(EventValidationError):
    pass


class InvalidNameError(PlayerCreatedValidationError):
    pass
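# Illustrative, self-contained sketch of how a hierarchy like the one above is
# meant to be consumed: catching the shared base class covers every derived
# validation error.  The _Demo* classes below are local stand-ins (so the
# example runs on its own), not part of datek_jaipur.
class _DemoValidationError(Exception):
    pass


class _DemoInvalidName(_DemoValidationError):
    pass


def _classify(error):
    # Raising the most specific error and catching the base still works,
    # because exception handling matches on the whole class hierarchy.
    try:
        raise error
    except _DemoValidationError as exc:
        return type(exc).__name__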
# test_numfile.py (bzaczynski/numfile, MIT)
import types
from pathlib import Path
from unittest.mock import call
import pytest
from pytest_mock import MockerFixture
from numfile import NumberedFile, open_all, open_latest, open_next
class TestNumberedFileOf:
    def test_should_accept_str_or_path(self):
        assert NumberedFile.of("file") == NumberedFile.of(Path("file"))

    def test_should_return_numbered_file(self):
        assert isinstance(NumberedFile.of("/path/to/file"), NumberedFile)

    def test_should_parse_no_path(self):
        file = NumberedFile.of("file")
        assert file.parent == Path(".")

    def test_should_parse_relative_path(self):
        file = NumberedFile.of("path/to/file")
        assert file.parent == Path("path/to/")

    def test_should_parse_absolute_path(self):
        file = NumberedFile.of("/path/to/file")
        assert file.parent == Path("/path/to")

    def test_should_parse_plain_name(self):
        file = NumberedFile.of("/path/to/file")
        assert file.parent == Path("/path/to")
        assert file.name == "file"
        assert file.number is None
        assert file.suffix == ""

    def test_should_parse_simple_suffix(self):
        file = NumberedFile.of("/path/to/file.txt")
        assert file.parent == Path("/path/to")
        assert file.name == "file"
        assert file.number is None
        assert file.suffix == ".txt"

    def test_should_parse_complex_suffix(self):
        file = NumberedFile.of("/path/to/file.txt.tar.gz")
        assert file.parent == Path("/path/to")
        assert file.name == "file"
        assert file.number is None
        assert file.suffix == ".txt.tar.gz"

    def test_should_parse_number(self):
        file = NumberedFile.of("/path/to/file-42")
        assert file.parent == Path("/path/to")
        assert file.name == "file"
        assert file.number == 42
        assert file.suffix == ""

    def test_should_parse_number_and_simple_suffix(self):
        file = NumberedFile.of("/path/to/file-42.txt")
        assert file.parent == Path("/path/to")
        assert file.name == "file"
        assert file.number == 42
        assert file.suffix == ".txt"

    def test_should_parse_number_and_complex_suffix(self):
        file = NumberedFile.of("/path/to/file-42.txt.tar.gz")
        assert file.parent == Path("/path/to")
        assert file.name == "file"
        assert file.number == 42
        assert file.suffix == ".txt.tar.gz"


class TestCurrentPath:
    @pytest.mark.parametrize(
        "path",
        [
            "/path/to/file",
            "/path/to/file.txt",
            "/path/to/file.txt.tar.gz",
            "/path/to/file-42",
            "/path/to/file-42.txt",
            "/path/to/file-42.txt.tar.gz",
        ],
    )
    def test_should_return_path_instance(self, path: str):
        file = NumberedFile.of(path)
        assert isinstance(file.path, Path)

    @pytest.mark.parametrize(
        "path",
        [
            "/path/to/file",
            "/path/to/file.txt",
            "/path/to/file.txt.tar.gz",
        ],
    )
    def test_should_parse_path_without_number(self, path: str):
        file = NumberedFile.of(path)
        assert file.path == Path(path)

    @pytest.mark.parametrize(
        "path",
        [
            "/path/to/file-42",
            "/path/to/file-42.txt",
            "/path/to/file-42.txt.tar.gz",
        ],
    )
    def test_should_parse_path_with_number(self, path: str):
        file = NumberedFile.of(path)
        assert file.path == Path(path)


class TestNext:
    def test_should_return_numbered_file(self, fake_files):
        file = NumberedFile.of("/path/to/file")
        assert isinstance(file.next, NumberedFile)

    def test_no_latest_and_no_number_given(
        self, mocker: MockerFixture, fake_files
    ):
        mocker.patch.object(Path, "exists", side_effect=[True, False])
        file = NumberedFile.of("/path/to/no-such-file")
        assert file.next == NumberedFile.of("/path/to/no-such-file-1")

    def test_no_latest_but_number_given(
        self, mocker: MockerFixture, fake_files
    ):
        mocker.patch.object(Path, "exists", side_effect=[True, False])
        file = NumberedFile.of("/path/to/no-such-file-42")
        assert file.next == NumberedFile.of("/path/to/no-such-file-42")

    def test_latest_without_number_and_no_number_given(
        self, mocker: MockerFixture, fake_files
    ):
        mocker.patch.object(Path, "exists", side_effect=[True, True])
        file = NumberedFile.of("/path/to/foobar")
        assert file.next == NumberedFile.of("/path/to/foobar-2")

    def test_latest_without_number_but_number_given(
        self, mocker: MockerFixture, fake_files
    ):
        mocker.patch.object(Path, "exists", side_effect=[True, True])
        file = NumberedFile.of("/path/to/foobar-42")
        assert file.next == NumberedFile.of("/path/to/foobar-2")

    def test_latest_with_number_but_no_number_given(
        self, mocker: MockerFixture, fake_files
    ):
        mocker.patch.object(Path, "exists", side_effect=[True, True])
        file = NumberedFile.of("/path/to/foo")
        assert file.next == NumberedFile.of("/path/to/foo-3")

    def test_latest_with_number_and_number_given(
        self, mocker: MockerFixture, fake_files
    ):
        mocker.patch.object(Path, "exists", side_effect=[True, True])
        file = NumberedFile.of("/path/to/foo-42")
        assert file.next == NumberedFile.of("/path/to/foo-3")


class TestLatest:
    def test_should_return_numbered_file(self, fake_files):
        file = NumberedFile.of("/path/to/file")
        assert isinstance(file.latest, NumberedFile)

    def test_should_return_self_if_no_such_file(self, fake_files):
        file = NumberedFile.of("/path/to/no-such-file")
        assert file.latest is file

    def test_should_return_file_with_highest_number(self, fake_files):
        file = NumberedFile.of("/path/to/ipsum")
        assert file.latest.path == Path("/path/to/ipsum-12")

    def test_should_ignore_given_number(self, fake_files):
        file = NumberedFile.of("/path/to/ipsum-42")
        assert file.latest.path == Path("/path/to/ipsum-12")
@pytest.fixture
def fake_files(mocker: MockerFixture) -> None:
    mocker.patch.object(Path, "exists", return_value=True)
    mocker.patch.object(
        Path,
        "iterdir",
        return_value=[
            # Unrelated files
            Path("/path/to/__init__.py"),
            Path("/path/to/main.py"),
            Path("/path/to/utils.py"),
            # Same name but different suffixes
            Path("/path/to/foo"),
            Path("/path/to/foo.md"),
            Path("/path/to/foo.txt"),
            Path("/path/to/foo.txt.tar.gz"),
            # Name prefix shared with other files
            Path("/path/to/foobar"),
            Path("/path/to/foobar.md"),
            Path("/path/to/foobar.txt"),
            Path("/path/to/foobar.txt.tar.gz"),
            # Plain with numbers
            Path("/path/to/file-1"),
            Path("/path/to/file-2"),
            Path("/path/to/file-3"),
            Path("/path/to/file-4"),
            # Plain starting with no number
            Path("/path/to/lorem"),
            Path("/path/to/lorem-2"),
            Path("/path/to/lorem-3"),
            # Gaps
            Path("/path/to/ipsum-3"),
            Path("/path/to/ipsum-8"),
            Path("/path/to/ipsum-9"),
            Path("/path/to/ipsum-12"),
            # Numbers
            Path("/path/to/foo-1"),
            Path("/path/to/foo-2"),
            Path("/path/to/foo-1.md"),
            Path("/path/to/foo-2.md"),
            Path("/path/to/foo-3.md"),
            Path("/path/to/foo-1.txt"),
            Path("/path/to/foo-2.txt"),
            Path("/path/to/foo-3.txt"),
            Path("/path/to/foo-4.txt"),
            Path("/path/to/foo-1.txt.tar.gz"),
            Path("/path/to/foo-2.txt.tar.gz"),
        ],
    )


class TestSiblings:
    def test_should_return_generator_of_numbered_files(self, fake_files):
        file = NumberedFile.of("/path/to/file")
        siblings = file.siblings
        assert isinstance(siblings, types.GeneratorType)
        assert isinstance(next(siblings), NumberedFile)

    def test_should_be_empty_when_no_parent_dir(self, fake_files):
        file = NumberedFile.of("/no/such/path")
        assert list(file.siblings) == []

    def test_should_be_empty_when_no_such_file(self, fake_files):
        file = NumberedFile.of("/path/to/no-such-file")
        assert list(file.siblings) == []

    def test_should_retain_parent(self, fake_files):
        file = NumberedFile.of("/path/to/file")
        sibling = next(file.siblings)
        assert sibling.parent == Path("/path/to")

    @pytest.mark.parametrize(
        "path, expected_siblings",
        [
            (
                "/path/to/foo",
                {
                    Path("/path/to/foo"),
                    Path("/path/to/foo-1"),
                    Path("/path/to/foo-2"),
                },
            ),
            (
                "/path/to/foo.txt",
                {
                    Path("/path/to/foo.txt"),
                    Path("/path/to/foo-1.txt"),
                    Path("/path/to/foo-2.txt"),
                    Path("/path/to/foo-3.txt"),
                    Path("/path/to/foo-4.txt"),
                },
            ),
            (
                "/path/to/foo.txt.tar.gz",
                {
                    Path("/path/to/foo.txt.tar.gz"),
                    Path("/path/to/foo-1.txt.tar.gz"),
                    Path("/path/to/foo-2.txt.tar.gz"),
                },
            ),
            (
                "/path/to/foo.md",
                {
                    Path("/path/to/foo.md"),
                    Path("/path/to/foo-1.md"),
                    Path("/path/to/foo-2.md"),
                    Path("/path/to/foo-3.md"),
                },
            ),
            (
                "/path/to/foobar",
{
Path("/path/to/foobar"),
},
),
(
"/path/to/foobar.md",
{
Path("/path/to/foobar.md"),
},
),
(
"/path/to/foobar.txt",
{
Path("/path/to/foobar.txt"),
},
),
(
"/path/to/foobar.txt.tar.gz",
{
Path("/path/to/foobar.txt.tar.gz"),
},
),
(
"/path/to/file",
{
Path("/path/to/file-1"),
Path("/path/to/file-2"),
Path("/path/to/file-3"),
Path("/path/to/file-4"),
},
),
(
"/path/to/lorem",
{
Path("/path/to/lorem"),
Path("/path/to/lorem-2"),
Path("/path/to/lorem-3"),
},
),
(
"/path/to/ipsum",
{
Path("/path/to/ipsum-3"),
Path("/path/to/ipsum-8"),
Path("/path/to/ipsum-9"),
Path("/path/to/ipsum-12"),
},
),
(
"/path/to/main.py",
{
Path("/path/to/main.py"),
},
),
],
)
def test_should_filter_files_by_name_and_suffix(
self, path, expected_siblings, fake_files
):
file = NumberedFile.of(path)
siblings = set(sibling.path for sibling in file.siblings)
assert siblings == expected_siblings
@pytest.mark.parametrize(
"path, expected_siblings",
[
(
"/path/to/foo-42",
{
Path("/path/to/foo"),
Path("/path/to/foo-1"),
Path("/path/to/foo-2"),
},
),
(
"/path/to/foo-42.txt",
{
Path("/path/to/foo.txt"),
Path("/path/to/foo-1.txt"),
Path("/path/to/foo-2.txt"),
Path("/path/to/foo-3.txt"),
Path("/path/to/foo-4.txt"),
},
),
(
"/path/to/foo-42.txt.tar.gz",
{
Path("/path/to/foo.txt.tar.gz"),
Path("/path/to/foo-1.txt.tar.gz"),
Path("/path/to/foo-2.txt.tar.gz"),
},
),
],
)
def test_should_ignore_given_number(
self, path, expected_siblings, fake_files
):
file = NumberedFile.of(path)
siblings = set(sibling.path for sibling in file.siblings)
assert siblings == expected_siblings
class TestSiblingsAscending:
def test_should_return_list_of_numbered_files(self, fake_files):
file = NumberedFile.of("/path/to/file")
siblings_ascending = file.siblings_ascending
assert isinstance(siblings_ascending, list)
assert isinstance(siblings_ascending[0], NumberedFile)
def test_should_be_empty_when_no_parent_dir(self, fake_files):
file = NumberedFile.of("/no/such/path")
assert file.siblings_ascending == []
def test_should_be_empty_when_no_such_file(self, fake_files):
file = NumberedFile.of("/path/to/no-such-file")
assert file.siblings_ascending == []
@pytest.mark.parametrize(
"path, expected_siblings",
[
(
"/path/to/foo",
[
Path("/path/to/foo"),
Path("/path/to/foo-1"),
Path("/path/to/foo-2"),
],
),
(
"/path/to/ipsum",
[
Path("/path/to/ipsum-3"),
Path("/path/to/ipsum-8"),
Path("/path/to/ipsum-9"),
Path("/path/to/ipsum-12"),
],
),
],
)
def test_should_have_increasing_number(
self, path: str, expected_siblings: list, fake_files
):
file = NumberedFile.of(path)
siblings_ascending = [
sibling.path for sibling in file.siblings_ascending
]
assert siblings_ascending == expected_siblings
class TestSiblingsDescending:
def test_should_return_list_of_numbered_files(self, fake_files):
file = NumberedFile.of("/path/to/file")
siblings_descending = file.siblings_descending
assert isinstance(siblings_descending, list)
assert isinstance(siblings_descending[0], NumberedFile)
def test_should_be_empty_when_no_parent_dir(self, fake_files):
file = NumberedFile.of("/no/such/path")
assert file.siblings_descending == []
def test_should_be_empty_when_no_such_file(self, fake_files):
file = NumberedFile.of("/path/to/no-such-file")
assert file.siblings_descending == []
@pytest.mark.parametrize(
"path, expected_siblings",
[
(
"/path/to/foo",
[
Path("/path/to/foo-2"),
Path("/path/to/foo-1"),
Path("/path/to/foo"),
],
),
(
"/path/to/ipsum",
[
Path("/path/to/ipsum-12"),
Path("/path/to/ipsum-9"),
Path("/path/to/ipsum-8"),
Path("/path/to/ipsum-3"),
],
),
],
)
def test_should_have_decreasing_number(
self, path: str, expected_siblings: list, fake_files
):
file = NumberedFile.of(path)
siblings_descending = [
sibling.path for sibling in file.siblings_descending
]
assert siblings_descending == expected_siblings
class TestOpenAll:
def test_should_open_ascending_siblings(
self, mocker: MockerFixture, fake_files
):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
expected_calls = [
call(Path("/path/to/foo.txt"), mode="r", encoding="utf-8"),
call(Path("/path/to/foo-1.txt"), mode="r", encoding="utf-8"),
call(Path("/path/to/foo-2.txt"), mode="r", encoding="utf-8"),
call(Path("/path/to/foo-3.txt"), mode="r", encoding="utf-8"),
call(Path("/path/to/foo-4.txt"), mode="r", encoding="utf-8"),
]
for i, file in enumerate(open_all("/path/to/foo.txt", mode="r")):
assert expected_calls[i] == mock_open.call_args
def test_should_close_all_files_automatically(
self, mocker: MockerFixture, fake_files
):
mocker.patch("builtins.open", mocker.mock_open(read_data="fake data"))
files = []
for file in open_all("/path/to/foo.txt", mode="r"):
assert file.read() == "fake data"
files.append(file)
for file in files:
file.close.assert_called()
def test_should_open_in_read_mode_by_default(
self, mocker: MockerFixture, fake_files
):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
        for _ in open_all("/path/to/foo.txt"):
assert mock_open.call_args.kwargs["mode"] == "r"
class TestOpenLatest:
def test_read_no_such_file(self, mocker: MockerFixture, fake_files):
mocker.patch.object(Path, "exists", side_effect=[False])
with pytest.raises(FileNotFoundError):
open_latest("/path/to/no-such-file", mode="r")
def test_write_no_such_file(self, mocker: MockerFixture, fake_files):
mocker.patch("builtins.open", mocker.mock_open())
mocker.patch.object(Path, "exists", side_effect=[False])
open_latest("/path/to/no-such-file", mode="w")
def test_read_existing_file(self, mocker: MockerFixture, fake_files):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
mocker.patch.object(Path, "exists", side_effect=[True])
open_latest("/path/to/file", mode="r")
mock_open.assert_called_once_with(
Path("/path/to/file-4"), mode="r", encoding="utf-8"
)
def test_write_existing_file(self, mocker: MockerFixture, fake_files):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
mocker.patch.object(Path, "exists", side_effect=[True])
open_latest("/path/to/file", mode="w")
mock_open.assert_called_once_with(
Path("/path/to/file-4"), mode="w", encoding="utf-8"
)
def test_should_open_in_append_mode_by_default(
self, mocker: MockerFixture, fake_files
):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
        open_latest("/path/to/foo.txt")
        assert mock_open.call_args.kwargs["mode"] == "a"
class TestOpenNext:
def test_read_no_such_file(self, mocker: MockerFixture, fake_files):
mocker.patch.object(Path, "exists", return_value=False)
with pytest.raises(FileNotFoundError):
open_next("/path/to/no-such-file", mode="r")
def test_write_no_such_file(self, mocker: MockerFixture, fake_files):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
mocker.patch.object(Path, "exists", return_value=False)
open_next("/path/to/no-such-file", mode="w")
mock_open.assert_called_once_with(
Path("/path/to/no-such-file-1"), mode="w", encoding="utf-8"
)
def test_read_existing_file(self, mocker: MockerFixture, fake_files):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
mocker.patch.object(Path, "exists", return_value=True)
open_next("/path/to/file", mode="r")
mock_open.assert_called_once_with(
Path("/path/to/file-5"), mode="r", encoding="utf-8"
)
def test_write_existing_file(self, mocker: MockerFixture, fake_files):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
mocker.patch.object(Path, "exists", return_value=True)
open_next("/path/to/file", mode="w")
mock_open.assert_called_once_with(
Path("/path/to/file-5"), mode="w", encoding="utf-8"
)
def test_should_open_in_write_mode_by_default(
self, mocker: MockerFixture, fake_files
):
mock_open = mocker.mock_open()
mocker.patch("builtins.open", mock_open)
        open_next("/path/to/foo.txt")
        assert mock_open.call_args.kwargs["mode"] == "w"
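The suite above fully specifies the public surface it exercises: `NumberedFile.of`, the `path`/`parent` attributes, `next`, `latest`, the `siblings*` properties, and the module-level helpers `open_all`, `open_latest`, and `open_next`. As a reading aid, here is a minimal sketch of the value type those assertions imply. It is inferred from the tests, not taken from the real implementation, and the filesystem-dependent properties (`next`, `latest`, `siblings`) are omitted:

```python
import re
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class NumberedFile:
    """Illustrative stand-in for the class under test (an assumption, not the real code)."""

    path: Path

    @classmethod
    def of(cls, path) -> "NumberedFile":
        # Accept strings or Path objects, as the tests do
        return cls(Path(path))

    @property
    def parent(self) -> Path:
        return self.path.parent

    @property
    def number(self) -> int:
        # "/path/to/foo-3.txt" -> 3; unnumbered files count as 0
        stem = self.path.name.split(".")[0]
        match = re.search(r"-(\d+)$", stem)
        return int(match.group(1)) if match else 0
```

Because the dataclass is frozen and compares by `path`, `NumberedFile.of(x) == NumberedFile.of(x)` holds, which is what the equality assertions in the `next`/`latest` tests rely on.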
# File: dlmb/utils/__init__.py (repo: Jonathan-Andrews/dlmb, license: MIT)
from .function_helpers import *
from .data_helpers import *
# File: mysite/polls/tests.py (repo: z-gora/django-tutorial, license: MIT)
import datetime
from django.test import TestCase
from django.utils import timezone
from django.urls import reverse
from .models import Question
# Create your tests here.
class QuestionModelTests(TestCase):
def test_was_published_recently_with_future_question(self):
time = timezone.now() + datetime.timedelta(days=30)
future_question = Question(pub_date=time)
self.assertIs(future_question.was_published_recently(), False)
def test_was_published_recently_with_recent_question(self):
time = timezone.now() + datetime.timedelta(minutes=-10)
future_question = Question(pub_date=time)
self.assertIs(future_question.was_published_recently(), True)
def test_was_published_recently_with_old_question(self):
time = timezone.now() + datetime.timedelta(days=-30)
future_question = Question(pub_date=time)
self.assertIs(future_question.was_published_recently(), False)
def create_question(question_text, days):
time = timezone.now() + datetime.timedelta(days=days)
return Question.objects.create(question_text=question_text, pub_date=time)
class QuestionIndexViewTest(TestCase):
def test_no_questions(self):
# No questions in the database
response = self.client.get(reverse("polls:index"))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "No polls are available")
self.assertQuerysetEqual(response.context['latest_question_list'], [])
def test_past_question(self):
# Questions with pub_date in the past should be displayed
create_question(question_text="Past question", days=-30)
response = self.client.get(reverse("polls:index"))
self.assertContains(response, "Past question")
self.assertQuerysetEqual(
response.context['latest_question_list'],
['<Question: Past question>']
)
def test_future_question(self):
# Questions with pub_date in the future should not be displayed
create_question(question_text="Future question", days=30)
response = self.client.get(reverse("polls:index"))
self.assertContains(response, "No polls are available")
self.assertQuerysetEqual(response.context['latest_question_list'], [])
def test_future_and_past_question(self):
# ONLY past questions are displayed
create_question(question_text="Past question", days=-30)
create_question(question_text="Future question", days=30)
response = self.client.get(reverse("polls:index"))
self.assertContains(response, "Past question")
self.assertQuerysetEqual(
response.context['latest_question_list'],
['<Question: Past question>']
)
    def test_two_past_questions(self):
        # Multiple past questions are displayed together; the second definition
        # previously reused the name test_future_and_past_question and shadowed it
create_question(question_text="Past question 1", days=-20)
create_question(question_text="Past question 2", days=-30)
response = self.client.get(reverse("polls:index"))
self.assertContains(response, "Past question 1")
self.assertContains(response, "Past question 2")
self.assertQuerysetEqual(
response.context['latest_question_list'],
['<Question: Past question 1>', '<Question: Past question 2>']
)
class QuestionDetailViewTest(TestCase):
def test_future_question(self):
        future_question = create_question(question_text="Future question", days=30)
url = reverse('polls:detail', args=(future_question.id,))
response = self.client.get(url)
self.assertEqual(response.status_code,404)
    def test_past_question(self):
        past_question = create_question(question_text="Past question", days=-30)
        url = reverse('polls:detail', args=(past_question.id,))
        response = self.client.get(url)
        self.assertContains(response, past_question.question_text)
class QuestionResultsViewTest(TestCase):
def test_future_question(self):
        future_question = create_question(question_text="Future question", days=30)
url = reverse('polls:results', args=(future_question.id,))
response = self.client.get(url)
self.assertEqual(response.status_code,404)
    def test_past_question(self):
        past_question = create_question(question_text="Past question", days=-30)
        url = reverse('polls:results', args=(past_question.id,))
        response = self.client.get(url)
        self.assertContains(response, past_question.question_text)
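The three `QuestionModelTests` cases pin down the contract of `Question.was_published_recently`: false for future dates, true within the last day, false for anything older. The model itself is not part of this file, so the helper below is only an illustrative pure-Python restatement of that contract (the real method lives on the Django model):

```python
import datetime


def was_published_recently(pub_date: datetime.datetime,
                           now: datetime.datetime) -> bool:
    # "Recent" means within the last 24 hours and not in the future
    return now - datetime.timedelta(days=1) <= pub_date <= now
```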
# File: octicons16px/telescope.py (repo: andrewp-as-is/octicons16px.py, license: Unlicense)
OCTICON_TELESCOPE = """
<svg class="octicon octicon-telescope" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M14.184 1.143a1.75 1.75 0 00-2.502-.57L.912 7.916a1.75 1.75 0 00-.53 2.32l.447.775a1.75 1.75 0 002.275.702l11.745-5.656a1.75 1.75 0 00.757-2.451l-1.422-2.464zm-1.657.669a.25.25 0 01.358.081l1.422 2.464a.25.25 0 01-.108.35l-2.016.97-1.505-2.605 1.85-1.26zM9.436 3.92l1.391 2.41-5.42 2.61-.942-1.63 4.97-3.39zM3.222 8.157l-1.466 1a.25.25 0 00-.075.33l.447.775a.25.25 0 00.325.1l1.598-.769-.83-1.436zm6.253 2.306a.75.75 0 00-.944-.252l-1.809.87a.75.75 0 00-.293.253L4.38 14.326a.75.75 0 101.238.848l1.881-2.75v2.826a.75.75 0 001.5 0v-2.826l1.881 2.75a.75.75 0 001.238-.848l-2.644-3.863z"></path></svg>
"""
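`OCTICON_TELESCOPE` is just an inline SVG string, so typical use is plain string interpolation into markup. A hypothetical example follows; the `icon_button` helper is not part of the package, and a shortened stand-in replaces the full SVG constant:

```python
# Stand-in for the real OCTICON_TELESCOPE constant (shortened for illustration)
OCTICON_TELESCOPE = '<svg class="octicon octicon-telescope">...</svg>'


def icon_button(icon_svg: str, label: str) -> str:
    # Inline the SVG next to a text label inside a button element
    return f'<button type="button">{icon_svg}<span>{label}</span></button>'


html = icon_button(OCTICON_TELESCOPE, "Explore")
```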
# File: cloud_userInfo.py (repo: Christopherfwz/YourGuide, license: MIT)
# coding: utf-8
# import datetime
import leancloud
import re
from django.core.wsgi import get_wsgi_application
from leancloud import Engine
from leancloud import LeanEngineError
import datetime
import requests
import json
cloud_userInfo = Engine(get_wsgi_application())
@cloud_userInfo.define
def getUserInfo(**params):
try:
id = params.get('id')
except:
raise LeanEngineError(501, 'Invalid argument.')
    # Look up the author's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(id)
user.fetch()
avatar = user.get("avatar")
nickname = user.get("nickname")
city = user.get("city")
level = user.get("level")
intro = user.get("introduction")
is_guide = user.get("is_guide")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
    # Query this user's travel notes
TravelNote = leancloud.Object.extend("TravelNote")
query = TravelNote.query
query.equal_to("author", user)
query_list = query.find()
moments = []
for j in query_list:
pics = []
content = j.get("content")
        if content is not None:
            # picUrls_pre = re.findall('!\[.*?\]\(.*?\)', str(i.get("content")))  # regex for Markdown images
            picUrls_pre = re.findall('<img.*?>', content)  # regex that extracts HTML <img> tags
else:
picUrls_pre = []
# for url in picUrls_pre:
# print url
# c = re.compile('\]\(.*?\)', re.S)
# v = c.findall(url)[0]
# pics.append(v[2:-1])
        # Read the URLs out of the src attributes and collect them in pics
for group in picUrls_pre:
match_obj = re.search('src="(.*?)"', group)
picUrls = match_obj.groups()
            pics.extend(picUrls)  # keep every extracted URL, not just the last one
if len(pics) == 0:
pics.append("http://lc-vqwqjioq.cn-n1.lcfile.com/72a3304b67086be0c5bd.jpg")
        # Look up the author's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(j.get("author").id)
user.fetch()
avatar = user.get("avatar")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
        # Count the comments on this note
Comment = leancloud.Object.extend("Comment")
query = Comment.query
query.equal_to("travelNote", j)
query_list = query.find()
commentNum = len(query_list)
moments.append({
"id": str(j.id),
"image": pics[0],
"title": j.get("title"),
"nickname": user.get("nickname"),
"avatar": avatar_url,
"favNum": j.get("like"),
"replyNum": commentNum,
"price": j.get("spend"),
"date": j.get("createdAt").strftime("%Y-%m-%d %H:%M:%S"),
})
favorites = []
TravelNoteFav = leancloud.Object.extend("TravelNoteFav")
query = TravelNoteFav.query
query.equal_to("favUser", user)
query_list = query.find()
for j in query_list:
pics = []
travelNote = TravelNote.create_without_data(j.get("TravelNote").id)
travelNote.fetch()
content = travelNote.get("content")
        if content is not None:
            # picUrls_pre = re.findall('!\[.*?\]\(.*?\)', str(i.get("content")))  # regex for Markdown images
            picUrls_pre = re.findall('<img.*?>', content)  # regex that extracts HTML <img> tags
else:
picUrls_pre = []
# for url in picUrls_pre:
# print url
# c = re.compile('\]\(.*?\)', re.S)
# v = c.findall(url)[0]
# pics.append(v[2:-1])
        # Read the URLs out of the src attributes and collect them in pics
for group in picUrls_pre:
match_obj = re.search('src="(.*?)"', group)
picUrls = match_obj.groups()
            pics.extend(picUrls)  # keep every extracted URL, not just the last one
if len(pics) == 0:
pics.append("http://lc-vqwqjioq.cn-n1.lcfile.com/72a3304b67086be0c5bd.jpg")
        # Look up the author's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(travelNote.get("author").id)
user.fetch()
avatar = user.get("avatar")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
        # Count the comments on this note
Comment = leancloud.Object.extend("Comment")
query = Comment.query
query.equal_to("travelNote", travelNote)
query_list = query.find()
commentNum = len(query_list)
favorites.append({
"id": str(travelNote.id),
"image": pics[0],
"title": travelNote.get("title"),
"nickname": user.get("nickname"),
"avatar": avatar_url,
"favNum": travelNote.get("like"),
"replyNum": commentNum,
"price": travelNote.get("spend"),
"date": travelNote.get("createdAt").strftime("%Y-%m-%d %H:%M:%S"),
})
result = {
"avatar": avatar_url,
"nickname": nickname,
"city": city,
"level": level,
"intro": intro,
"is_guide": is_guide,
"moments": moments,
"favorites": favorites,
}
return result
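`getUserInfo` (and the other cloud functions below) repeat the same two-step image extraction: find `<img ...>` tags with a regex, then pull out each `src` attribute. Factored into a standalone helper for clarity — this is a refactoring sketch, not part of the deployed module:

```python
import re


def extract_image_urls(content):
    """Collect the src URL of every <img> tag in an HTML fragment."""
    pics = []
    for tag in re.findall('<img.*?>', content or ""):
        match_obj = re.search('src="(.*?)"', tag)
        if match_obj:
            pics.extend(match_obj.groups())
    return pics
```

Accumulating with `extend` (rather than rebinding `pics` on every loop iteration) ensures every image URL is kept, not just the one from the last tag.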
@cloud_userInfo.define
def getReceivedLike(**params):
current_user = cloud_userInfo.current.user
if not current_user:
raise LeanEngineError("401", "Unauthorized")
try:
page = params.get('page')
except:
raise LeanEngineError(501, 'Invalid argument.')
array = []
    # Query my travel notes
TravelNote = leancloud.Object.extend("TravelNote")
query = TravelNote.query
query.equal_to("author", current_user)
query_list = query.find()
    # j iterates over my travel notes
for j in query_list:
        # This note's image, title and publish time
title = j.get("title")
publishTime = j.get("createdAt")
pics = []
content = j.get("content")
        if content is not None:
            # picUrls_pre = re.findall('!\[.*?\]\(.*?\)', str(i.get("content")))  # regex for Markdown images
            picUrls_pre = re.findall('<img.*?>', content)  # regex that extracts HTML <img> tags
else:
picUrls_pre = []
# for url in picUrls_pre:
# print url
# c = re.compile('\]\(.*?\)', re.S)
# v = c.findall(url)[0]
# pics.append(v[2:-1])
        # Read the URLs out of the src attributes and collect them in pics
for group in picUrls_pre:
match_obj = re.search('src="(.*?)"', group)
picUrls = match_obj.groups()
            pics.extend(picUrls)  # keep every extracted URL, not just the last one
if len(pics) == 0:
pics.append("http://lc-vqwqjioq.cn-n1.lcfile.com/72a3304b67086be0c5bd.jpg")
        # Query who liked this travel note
TravelNoteLike = leancloud.Object.extend("TravelNoteLike")
query = TravelNoteLike.query
query.equal_to("TravelNote", j)
query_list1 = query.find()
        # k iterates over the like records for this note
for k in query_list1:
            # Look up the liker's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(k.get("likeUser").id)
user.fetch()
avatar = user.get("avatar")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
            array.append({
                "id": str(k.id),  # id of the like
                "image": pics[0],  # image of your note (or of the note you commented on)
                "title": title,  # title of that note
                "nickname": user.get("nickname"),  # name of the person who liked it
                "avatar": avatar_url,  # avatar of the person who liked it
                "time": k.get("createdAt").strftime("%Y-%m-%d %H:%M:%S"),  # when the like was made
                "publishTime": publishTime.strftime("%Y-%m-%d %H:%M:%S"),  # publish time of that note
                "type": 0,  # 0 = liked a travel note, 1 = liked a comment
                "comment": None
            })
    # Query my comments
Comment = leancloud.Object.extend("Comment")
query = Comment.query
query.equal_to("comment_user", current_user)
query_list = query.find()
    # j iterates over my comments
for j in query_list:
        # The content of my comment
content = j.get("content")
        # The travel note I commented on
travelNote = TravelNote.create_without_data(j.get("TravelNote").id)
travelNote.fetch()
        # That note's image, title and publish time
title = travelNote.get("title")
publishTime = travelNote.get("createdAt")
pics = []
content = travelNote.get("content")
        if content is not None:
            # picUrls_pre = re.findall('!\[.*?\]\(.*?\)', str(i.get("content")))  # regex for Markdown images
            picUrls_pre = re.findall('<img.*?>', content)  # regex that extracts HTML <img> tags
else:
picUrls_pre = []
# for url in picUrls_pre:
# print url
# c = re.compile('\]\(.*?\)', re.S)
# v = c.findall(url)[0]
# pics.append(v[2:-1])
        # Read the URLs out of the src attributes and collect them in pics
for group in picUrls_pre:
match_obj = re.search('src="(.*?)"', group)
picUrls = match_obj.groups()
            pics.extend(picUrls)  # keep every extracted URL, not just the last one
if len(pics) == 0:
pics.append("http://lc-vqwqjioq.cn-n1.lcfile.com/72a3304b67086be0c5bd.jpg")
        # Query who liked this comment
CommentLike = leancloud.Object.extend("CommentLike")
query = CommentLike.query
query.equal_to("comment", j)
query_list1 = query.find()
        # k iterates over the like records for this comment
for k in query_list1:
            # Look up the liker's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(k.get("likeUser").id)
user.fetch()
avatar = user.get("avatar")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
            array.append({
                "id": str(k.id),  # id of the like
                "image": pics[0],  # image of your note (or of the note you commented on)
                "title": title,  # title of that note
                "nickname": user.get("nickname"),  # name of the person who liked it
                "avatar": avatar_url,  # avatar of the person who liked it
                "time": k.get("createdAt").strftime("%Y-%m-%d %H:%M:%S"),  # when the like was made
                "publishTime": publishTime.strftime("%Y-%m-%d %H:%M:%S"),  # publish time of that note
                "type": 1,  # 0 = liked a travel note, 1 = liked a comment
                "comment": content
            })
result = {
"next": -1,
"array": array,
}
return result
@cloud_userInfo.define
def getReceivedComment(**params):
current_user = cloud_userInfo.current.user
if not current_user:
raise LeanEngineError("401", "Unauthorized")
try:
page = params.get('page')
except:
raise LeanEngineError(501, 'Invalid argument.')
array = []
    # Query my travel notes
TravelNote = leancloud.Object.extend("TravelNote")
query = TravelNote.query
query.equal_to("author", current_user)
query_list = query.find()
    # j iterates over my travel notes
for j in query_list:
        # This note's image, title and publish time
title = j.get("title")
publishTime = j.get("createdAt")
pics = []
content = j.get("content")
        if content is not None:
            # picUrls_pre = re.findall('!\[.*?\]\(.*?\)', str(i.get("content")))  # regex for Markdown images
            picUrls_pre = re.findall('<img.*?>', content)  # regex that extracts HTML <img> tags
else:
picUrls_pre = []
# for url in picUrls_pre:
# print url
# c = re.compile('\]\(.*?\)', re.S)
# v = c.findall(url)[0]
# pics.append(v[2:-1])
        # Read the URLs out of the src attributes and collect them in pics
for group in picUrls_pre:
match_obj = re.search('src="(.*?)"', group)
picUrls = match_obj.groups()
            pics.extend(picUrls)  # keep every extracted URL, not just the last one
if len(pics) == 0:
pics.append("http://lc-vqwqjioq.cn-n1.lcfile.com/72a3304b67086be0c5bd.jpg")
        # Query who commented on this travel note
Comment = leancloud.Object.extend("Comment")
query = Comment.query
query.equal_to("TravelNote", j)
query_list1 = query.find()
        # k iterates over the comment records for this note
for k in query_list1:
            # Look up the commenter's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(k.get("comment_user").id)
user.fetch()
avatar = user.get("avatar")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
            array.append({
                "id": str(k.id),  # id of the comment
                "image": pics[0],  # image of your note (or of the note you commented on)
                "title": title,  # title of that note
                "nickname": user.get("nickname"),  # name of the commenter
                "avatar": avatar_url,  # avatar of the commenter
                "time": k.get("createdAt").strftime("%Y-%m-%d %H:%M:%S"),  # when the comment was made
                "publishTime": publishTime.strftime("%Y-%m-%d %H:%M:%S"),  # publish time of that note
                "comment": k.get("content")
            })
result = {
"next": -1,
"array": array,
}
return result
@cloud_userInfo.define
def getGuideInfo(**params):
try:
        id = params.get('id')  # the user's _User objectId, not the Guide table's id
except:
raise LeanEngineError(501, 'Invalid argument.')
    # Look up the user by id
User = leancloud.Object.extend("_User")
user = User.create_without_data(id)
user.fetch()
    # Look up the guide record for this user
Guide = leancloud.Object.extend("Guide")
query = Guide.query
query.equal_to("user", user)
query_list = query.find()
    # i iterates over the matching guide records
for i in query_list:
        guide_id = i.id  # guide id
labels = []
features = i.get("features")
sightseeings = []
about = i.get("about")
        # Query the guide's tags by guide id
GuideNeedTagMap = leancloud.Object.extend("GuideNeedTagMap")
query = GuideNeedTagMap.query
query.equal_to("guide", i)
query_list = query.find()
        # j iterates over the guide-tag map entries
for j in query_list:
GuideNeedTag = leancloud.Object.extend("GuideNeedTag")
guideNeedTag = GuideNeedTag.create_without_data(j.get("guideNeedTag").id)
guideNeedTag.fetch()
labels.append(guideNeedTag.get("name"))
        # Query the attractions this guide knows well
GuideAttractionMap = leancloud.Object.extend("GuideAttractionMap")
query = GuideAttractionMap.query
query.equal_to("guide", i)
query_list = query.find()
        # j iterates over the guide-attraction map entries
for j in query_list:
Attraction = leancloud.Object.extend("Attraction")
attraction = Attraction.create_without_data(j.get("attraction").id)
attraction.fetch()
sightseeings.append(attraction.get("title"))
        # Query this guide's related travel notes
TravelNote = leancloud.Object.extend("TravelNote")
query = TravelNote.query
query.equal_to("guide", i)
query_list = query.find()
travel_notes = []
for j in query_list:
pics = []
content = j.get("content")
            if content is not None:
                # picUrls_pre = re.findall('!\[.*?\]\(.*?\)', str(i.get("content")))  # regex for Markdown images
                picUrls_pre = re.findall('<img.*?>', content)  # regex that extracts HTML <img> tags
else:
picUrls_pre = []
# for url in picUrls_pre:
# print url
# c = re.compile('\]\(.*?\)', re.S)
# v = c.findall(url)[0]
# pics.append(v[2:-1])
            # Read the URLs out of the src attributes and collect them in pics
for group in picUrls_pre:
match_obj = re.search('src="(.*?)"', group)
picUrls = match_obj.groups()
                pics.extend(picUrls)  # keep every extracted URL, not just the last one
if len(pics) == 0:
pics.append("http://lc-vqwqjioq.cn-n1.lcfile.com/72a3304b67086be0c5bd.jpg")
            # Look up the author's info
User = leancloud.Object.extend("_User")
user = User.create_without_data(j.get("author").id)
user.fetch()
avatar = user.get("avatar")
if avatar:
avatar_url = avatar.url
else:
avatar_url = None
            # Count the comments on this note
Comment = leancloud.Object.extend("Comment")
query = Comment.query
query.equal_to("travelNote", j)
query_list = query.find()
commentNum = len(query_list)
travel_notes.append({
"id": str(j.id),
"image": pics[0],
"title": j.get("title"),
"nickname": user.get("username"),
"avatar": avatar_url,
"favNum": j.get("like"),
"replyNum": commentNum,
"price": j.get("spend"),
"date": [j.get("startDate").strftime("%Y-%m-%d %H:%M:%S"), j.get("endDate").strftime("%Y-%m-%d %H:%M:%S")]
})
result = {
"labels": labels,
"features": features,
"sightseeings": sightseeings,
"about": about,
"travel_notes": travel_notes
}
return result
@cloud_userInfo.define
def editUserInfo(**params):
current_user = cloud_userInfo.current.user
if not current_user:
raise LeanEngineError("401", "Unauthorized")
try:
filed = params.get('filed')
data = params.get('data')
except:
raise LeanEngineError(501, 'Invalid argument.')
try:
current_user.set(filed, data)
current_user.save()
except:
return {
"status": -1,
            "message": 'Field does not exist or you lack permission to modify it',
}
return {
"status": 0,
}
@cloud_userInfo.define
def getFullInfo(**params):
current_user = cloud_userInfo.current.user
if not current_user:
raise LeanEngineError("401", "Unauthorized")
phone = current_user.get("mobilePhoneNumber")
nickname = current_user.get("nickname")
introduction = current_user.get("introduction")
user = {
"phone": phone,
"nickname": nickname,
"introduction": introduction,
}
is_guide = current_user.get("is_guide")
guide = {}
    # Look up the guide record for this user
Guide = leancloud.Object.extend("Guide")
query = Guide.query
query.equal_to("user", current_user)
query_list = query.find()
    # i iterates over the matching guide records
for i in query_list:
        guide_id = i.id  # guide id
introduction = i.get("about")
max_num = i.get("max_num")
price = i.get("price")
city = i.get("area")
sightseeings = []
sightseeing_names = []
features = i.get("features")
# 根据导游id查询导游熟悉景点
GuideAttractionMap = leancloud.Object.extend("GuideAttractionMap")
query = GuideAttractionMap.query
query.equal_to("guide", i)
query_list = query.find()
# j是导游熟悉景点map的条目
for j in query_list:
Attraction = leancloud.Object.extend("Attraction")
attraction = Attraction.create_without_data(j.get("attraction").id)
attraction.fetch()
sightseeings.append(attraction.id)
sightseeing_names.append(attraction.get("title"))
guide = {
"is_open": is_guide,
"introduction": introduction,
"max_num": max_num,
"price": price,
"city": city,
"sightseeings": sightseeings,
"sightseeing_names": sightseeing_names,
"features": features
}
break
result = {
"user": user,
"guide": guide
}
return result
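Both cloud functions above issue one query or `fetch()` per related object (for example, one `Comment` query per travel note), an N+1 pattern. When the related rows can be fetched in one batch, the per-note grouping reduces to a plain counting pass; the sketch below shows that grouping with the stdlib only, and the record shape (`dicts` with a `"travelNote"` key) is an illustrative assumption.

```python
from collections import Counter

def count_comments_by_note(comments):
    """Group pre-fetched comment records by their travel-note id.

    `comments` is assumed to be an iterable of dicts with a "travelNote" key;
    this mirrors the per-note commentNum lookup above with a single pass
    instead of one query per note.
    """
    return Counter(c["travelNote"] for c in comments)

counts = count_comments_by_note([
    {"travelNote": "n1"}, {"travelNote": "n1"}, {"travelNote": "n2"},
])
# counts["n1"] == 2, counts["n2"] == 1, and a missing note id counts as 0
```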
f5587844b5ba1dcfdbc30a62bd524ab8b0478223 | 1,104 | py | Python | part3/ex05-exercise/answer01a.py | abasu1007/intro-to-python | f6978816c3020860b74219a1bfe191aea8e4e75f | [
"CC-BY-4.0"
] | 13 | 2015-05-11T06:20:24.000Z | 2017-04-13T19:47:54.000Z | part3/ex05-exercise/answer01a.py | PythonWorkshop/intro-to-python | 9f8ad86e42e40d059877fd234fb160602e8907ac | [
"CC-BY-4.0"
] | null | null | null | part3/ex05-exercise/answer01a.py | PythonWorkshop/intro-to-python | 9f8ad86e42e40d059877fd234fb160602e8907ac | [
"CC-BY-4.0"
] | 10 | 2016-04-16T19:28:22.000Z | 2018-06-15T14:56:57.000Z | # Script that wishes happy birthday to Wolfe+585, Senior
# http://en.wikipedia.org/wiki/Wolfe+585,_Senior
def happy_birthday(name):
    print("Happy Birthday, dear " + name + "!")
# This guy is a real-life test case for name fields
happy_birthday("Adolph Blaine Charles David Earl Frederick Gerald Hubert Irvin John Kenneth Lloyd Martin Nero Oliver Paul Quincy Randolph Sherman Thomas Uncas Victor William Xerxes Yancy Zeus Wolfeschlegelsteinhausenbergerdorffvoralternwarengewissenhaftschaferswessenschafewarenwohlgepflegeundsorgfaltigkeitbeschutzenvonangreifendurchihrraubgierigfeindewelchevoralternzwolftausendjahresvorandieerscheinenwanderersteerdemenschderraumschiffgebrauchlichtalsseinursprungvonkraftgestartseinlangefahrthinzwischensternartigraumaufdersuchenachdiesternwelchegehabtbewohnbarplanetenkreisedrehensichundwohinderneurassevonverstandigmenschlichkeitkonntefortplanzenundsicherfreuenanlebenslanglichfreudeundruhemitnichteinfurchtvorangreifenvonandererintelligentgeschopfsvonhinzwischensternartigraum, Senior")
f577921775e8fab37d96397ea2b596f1ccc9b103 | 121 | py | Python | wingline/pingpong.py | HappyEinara/wingline | 08d67ad9f58c869c385f954def6af5fa92e968ff | [
"MIT"
] | null | null | null | wingline/pingpong.py | HappyEinara/wingline | 08d67ad9f58c869c385f954def6af5fa92e968ff | [
"MIT"
] | null | null | null | wingline/pingpong.py | HappyEinara/wingline | 08d67ad9f58c869c385f954def6af5fa92e968ff | [
"MIT"
] | null | null | null | """A basic placeholder to stand in for initial tests."""
def ping() -> str:
    """Return "pong"."""
    return "pong"
f58bb1e8b1d66ce85d31cc31bed1c82466bca700 | 42 | py | Python | mlxgtools/ml/__init__.py | gdtydm/mlxgtools | 542eada8837f69a35b5694a926e90dabcc1d4323 | [
"Apache-2.0"
] | null | null | null | mlxgtools/ml/__init__.py | gdtydm/mlxgtools | 542eada8837f69a35b5694a926e90dabcc1d4323 | [
"Apache-2.0"
] | null | null | null | mlxgtools/ml/__init__.py | gdtydm/mlxgtools | 542eada8837f69a35b5694a926e90dabcc1d4323 | [
"Apache-2.0"
] | null | null | null | from .cv import PurgedGroupTimeSeriesSplit
f59ac154c5980ef8f4572e7767e9ae22addf9fa3 | 33 | py | Python | public/testpython.py | mekimeki/MiddleBack | e371c67bf99f503621eb4091b4d6123ae87b9bbe | [
"MIT"
] | null | null | null | public/testpython.py | mekimeki/MiddleBack | e371c67bf99f503621eb4091b4d6123ae87b9bbe | [
"MIT"
] | null | null | null | public/testpython.py | mekimeki/MiddleBack | e371c67bf99f503621eb4091b4d6123ae87b9bbe | [
"MIT"
] | null | null | null | import sys
print("hello world")
1952e766a9c556ec4701f1dd037dcadd2c7d3cfb | 14,772 | py | Python | pytorch/utils/vcl.py | goldfarbDave/vcl | 24fb33a1dcadfa6c6cf5e9e9838b64f4fd23143a | [
"Apache-2.0"
] | null | null | null | pytorch/utils/vcl.py | goldfarbDave/vcl | 24fb33a1dcadfa6c6cf5e9e9838b64f4fd23143a | [
"Apache-2.0"
] | null | null | null | pytorch/utils/vcl.py | goldfarbDave/vcl | 24fb33a1dcadfa6c6cf5e9e9838b64f4fd23143a | [
"Apache-2.0"
] | null | null | null | import numpy as np
from utils.mobile_net_v2_vanilla import mobilenetv2_vanilla
from utils.mobile_net_v2_bayesian import mobilenetv2_bayesian
import utils.test as test
from utils.multihead_models import Vanilla_NN, Vanilla_CNN, MFVI_NN, MFVI_CNN
from . import flags
import utils.GAN as GAN
import torch
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
try:
    from torchviz import make_dot, make_dot_from_trace
except ImportError:
    print("Torchviz was not found.")
def run_vcl(hidden_size, no_epochs, data_gen, coreset_method, coreset_size=0, batch_size=None, single_head=True, gan_bol=False, is_toy=False, use_lrt=False):
    in_dim, out_dim = data_gen.get_dims()
    x_coresets, y_coresets = [], []
    x_testsets, y_testsets = [], []
    x_trainsets, y_trainsets = [], []
    gans = []
    all_acc = np.array([])

    for task_id in range(data_gen.max_iter):
        print('Current task: ' + str(task_id))
        x_train, y_train, x_test, y_test = data_gen.next_task()
        x_testsets.append(x_test)
        y_testsets.append(y_test)
        x_trainsets.append(x_train)
        y_trainsets.append(y_train)

        # Set the readout head to train
        head = 0 if single_head else task_id
        bsize = x_train.shape[0] if (batch_size is None) else batch_size

        # Train network with maximum likelihood to initialize the first model
        if task_id == 0:
            print_graph_bol = False  # set to True if you want to see the graph
            if is_toy:
                ml_model = Vanilla_NN(in_dim, hidden_size, out_dim, x_train.shape[0], learning_rate=0.005)
            else:
                ml_model = Vanilla_NN(in_dim, hidden_size, out_dim, x_train.shape[0])
            # train for the first task
            ml_model.train(x_train, y_train, task_id, no_epochs, bsize)
            # updated weights of the network after SGD on task 1 -- these are the means of the
            # posterior distribution of weights after task 1 ==> the new prior for task 2
            mf_weights = ml_model.get_weights()
            # use these weights to initialise the weights of the new task model
            mf_model = MFVI_NN(in_dim, hidden_size, out_dim, x_train.shape[0], single_head=single_head, prev_means=mf_weights, LRT=use_lrt)

        if not gan_bol:
            if coreset_size > 0:
                x_coresets, y_coresets, x_train, y_train = coreset_method(x_coresets, y_coresets, x_train, y_train, coreset_size)
            gans = None
            if print_graph_bol:
                # Just if you want to see the computational graph
                output_tensor = mf_model._KL_term()  # mf_model.get_loss(torch.Tensor(x_train).to(device), torch.Tensor(y_train).to(device), task_id), params=params)
                print_graph(mf_model, output_tensor)
                print_graph_bol = False

        if gan_bol:
            gan_i = GAN.VGR(task_id)
            gan_i.train(x_train, y_train)
            gans.append(gan_i)

        mf_model.train(x_train, y_train, head, no_epochs, bsize)
        mf_model.update_prior()

        # Save weights before test (and last-minute training on the coreset)
        mf_model.save_weights()

        acc = test.get_scores(mf_model, x_trainsets, y_trainsets, x_testsets, y_testsets, no_epochs, single_head, x_coresets, y_coresets, batch_size, False, gans, is_toy=is_toy)
        all_acc = test.concatenate_results(acc, all_acc)

        mf_model.load_weights()
        mf_model.clean_copy_weights()
        if not single_head:
            mf_model.create_head()

    return all_acc
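The per-task cycle in `run_vcl` — train on the new task, then promote the learned posterior to the prior for the next task (`mf_model.train(...)` followed by `mf_model.update_prior()`) — can be sketched in isolation. The scalar "belief" and the averaging `update` below are toy stand-ins for illustration only, not the mean-field variational update that `MFVI_NN` actually performs.

```python
def continual_loop(tasks, update):
    """Toy VCL-style loop: each task's posterior becomes the next task's prior.

    `update` stands in for variational training: it maps (prior, data) to a
    new posterior. Returns the list of posteriors, one per task.
    """
    prior = 0.0  # toy scalar belief; MFVI_NN tracks per-weight means/variances
    posteriors = []
    for data in tasks:
        posterior = update(prior, data)   # analogous to mf_model.train(...)
        prior = posterior                 # analogous to mf_model.update_prior()
        posteriors.append(posterior)
    return posteriors

# With a running-average "update" and three tasks of data 2, 4, 6:
result = continual_loop([2.0, 4.0, 6.0], lambda p, d: 0.5 * (p + d))
# result == [1.0, 2.5, 4.25]: each task's estimate carries the previous prior
```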
def run_vcl_cnn(input_dims, hidden_size, output_dims, no_epochs, data_gen, coreset_method, coreset_size=0, batch_size=None, single_head=True, gan_bol=False, is_toy=False, use_lrt=False, is_cifar=False):
    in_dim, out_dim = data_gen.get_dims()
    x_coresets, y_coresets = [], []
    x_testsets, y_testsets = [], []
    x_trainsets, y_trainsets = [], []
    gans = []
    all_acc = np.array([])

    for task_id in range(data_gen.max_iter):
        print('Current task: ' + str(task_id))
        x_train, y_train, x_test, y_test = data_gen.next_task()
        x_testsets.append(x_test)
        y_testsets.append(y_test)
        x_trainsets.append(x_train)
        y_trainsets.append(y_train)

        # Set the readout head to train
        head = 0 if single_head else task_id
        bsize = x_train.shape[0] if (batch_size is None) else batch_size

        # Train network with maximum likelihood to initialize the first model
        if task_id == 0:
            print_graph_bol = False  # set to True if you want to see the graph
            if is_toy:
                ml_model = Vanilla_CNN(input_dims, hidden_size, output_dims, x_train.shape[0], learning_rate=0.005, is_cifar=is_cifar)
            else:
                ml_model = Vanilla_CNN(input_dims, hidden_size, output_dims, x_train.shape[0], is_cifar=is_cifar)
            # train for the first task
            ml_model.train(x_train, y_train, task_id, no_epochs, bsize)
            # updated weights of the network after SGD on task 1 -- these are the means of the
            # posterior distribution of weights after task 1 ==> the new prior for task 2
            mf_weights = ml_model.get_weights()
            # use these weights to initialise the weights of the new task model
            if is_cifar:
                mf_model = MFVI_CNN(input_dims, hidden_size, output_dims, x_train.shape[0], single_head=single_head, prev_means=mf_weights, LRT=use_lrt, is_cifar=is_cifar, learning_rate=0.01)
            else:
                mf_model = MFVI_CNN(input_dims, hidden_size, output_dims, x_train.shape[0], single_head=single_head, prev_means=mf_weights, LRT=use_lrt, is_cifar=is_cifar)

        if not gan_bol:
            if coreset_size > 0:
                x_coresets, y_coresets, x_train, y_train = coreset_method(x_coresets, y_coresets, x_train, y_train, coreset_size)
            gans = None
            if print_graph_bol:
                # Just if you want to see the computational graph
                output_tensor = mf_model._KL_term()  # mf_model.get_loss(torch.Tensor(x_train).to(device), torch.Tensor(y_train).to(device), task_id), params=params)
                print_graph(mf_model, output_tensor)
                print_graph_bol = False

        if gan_bol:
            gan_i = GAN.VGR(task_id)
            gan_i.train(x_train, y_train)
            gans.append(gan_i)

        mf_model.train(x_train, y_train, head, no_epochs, bsize)
        mf_model.update_prior()

        # Save weights before test (and last-minute training on the coreset)
        mf_model.save_weights()

        acc = test.get_scores(mf_model, x_trainsets, y_trainsets, x_testsets, y_testsets, no_epochs, single_head, x_coresets, y_coresets, batch_size, False, gans, is_toy=is_toy)
        all_acc = test.concatenate_results(acc, all_acc)

        mf_model.load_weights()
        mf_model.clean_copy_weights()
        if not single_head:
            mf_model.create_head()

    return all_acc
def run_coreset_only(hidden_size, no_epochs, data_gen, coreset_method, coreset_size=0, batch_size=None, single_head=True):
    in_dim, out_dim = data_gen.get_dims()
    x_coresets, y_coresets = [], []
    x_testsets, y_testsets = [], []
    x_trainsets, y_trainsets = [], []
    all_acc = np.array([])

    for task_id in range(data_gen.max_iter):
        x_train, y_train, x_test, y_test = data_gen.next_task()
        x_testsets.append(x_test)
        y_testsets.append(y_test)
        x_trainsets.append(x_train)
        y_trainsets.append(y_train)

        head = 0 if single_head else task_id
        bsize = x_train.shape[0] if (batch_size is None) else batch_size

        if task_id == 0:
            mf_model = MFVI_NN(in_dim, hidden_size, out_dim, x_train.shape[0], single_head=single_head, prev_means=None)

        if coreset_size > 0:
            x_coresets, y_coresets, x_train, y_train = coreset_method(x_coresets, y_coresets, x_train, y_train, coreset_size)

        mf_model.save_weights()

        acc = test.get_scores(mf_model, x_trainsets, y_trainsets, x_testsets, y_testsets, no_epochs, single_head, x_coresets, y_coresets, batch_size, just_vanilla=False)
        all_acc = test.concatenate_results(acc, all_acc)

        mf_model.load_weights()
        mf_model.clean_copy_weights()
        if not single_head:
            mf_model.create_head()

    return all_acc
def run_vcl_cifar(no_epochs, data_gen, coreset_method, coreset_size=0, batch_size=None, single_head=True, gan_bol=False, use_lrt=False, device="cpu"):
    x_coresets, y_coresets = [], []
    x_testsets, y_testsets = [], []
    x_trainsets, y_trainsets = [], []
    gans = []
    all_acc = np.array([])
    in_dim, out_dim = data_gen.get_dims()

    for task_id in range(data_gen.max_iter):
        print('Current task: ' + str(task_id))
        x_train, y_train, x_test, y_test = data_gen.next_task()
        x_testsets.append(x_test)
        y_testsets.append(y_test)
        x_trainsets.append(x_train)
        y_trainsets.append(y_train)

        # Set the readout head to train
        head = 0 if single_head else task_id
        bsize = x_train.shape[0] if (batch_size is None) else batch_size
        cur_acc = 0

        # Train network with maximum likelihood to initialize the first model
        if task_id == 0:
            print_graph_bol = False  # set to True if you want to see the graph
            ml_model = mobilenetv2_vanilla(device=device, num_classes=out_dim)
            ml_model.to(device=device)
            # train for the first task
            ml_model.train(x_train, y_train, task_id, no_epochs, bsize)

            pred_means = []
            pred = torch.argmax(ml_model.prediction_prob(torch.Tensor(x_test).to(device=device), None), dim=1)
            y_labs = torch.Tensor(y_test).type(torch.LongTensor).to(device=device)
            print(pred.shape, y_labs.shape)
            # # Loop over all batches
            # for i in range(len(x_test)//bsize):
            #     start_ind = i*bsize
            #     end_ind = np.min([(i+1)*bsize, len(x_test)])
            #     batch_x_test = torch.Tensor(x_test[start_ind:end_ind, :]).to(device=device)
            #     batch_y_test = torch.Tensor(y_test[start_ind:end_ind]).type(torch.LongTensor).to(device=device)
            #     pred = ml_model.prediction_prob(batch_x_test, head)
            #     # pred_mean = pred.mean(0)
            #     pred_means.extend(list(pred.detach().cpu().numpy()))
            #     # pred_y = torch.argmax(pred_mean, dim=0)
            #     # cur_acc += end_ind - start_ind - (pred_y - batch_y_test).nonzero().shape[0]
            print(sum(pred == y_labs) / len(y_labs))
            # cur_acc = float(cur_acc)
            # cur_acc /= len(x_test)
            # print(cur_acc)
            # acc.append(cur_acc)
            # print("Accuracy is {}".format(cur_acc))

            # updated weights of the network after SGD on task 1 -- these are the means of the
            # posterior distribution of weights after task 1 ==> the new prior for task 2
            mf_weights = ml_model.get_weights_for_bayesian()
            # use these weights to initialise the weights of the new task model
            mf_model = mobilenetv2_bayesian(device=device, num_classes=out_dim, prev_means=mf_weights)
            mf_model.to(device=device)

        if not gan_bol:
            if coreset_size > 0:
                x_coresets, y_coresets, x_train, y_train = coreset_method(x_coresets, y_coresets, x_train, y_train, coreset_size)
            gans = None
            if print_graph_bol:
                # Just if you want to see the computational graph
                output_tensor = mf_model._KL_term()  # mf_model.get_loss(torch.Tensor(x_train).to(device), torch.Tensor(y_train).to(device), task_id), params=params)
                print_graph(mf_model, output_tensor)
                print_graph_bol = False

        if gan_bol:
            gan_i = GAN.VGR(task_id)
            gan_i.train(x_train, y_train)
            gans.append(gan_i)

        mf_model.train(x_train, y_train, head, no_epochs, bsize)

        for ind_test, x_test_ in enumerate(x_testsets):
            print('Task:' + str(ind_test))
            pred = torch.argmax(mf_model.prediction_prob(torch.Tensor(x_test_).to(device=device), head).squeeze(0), dim=1)
            y_labs = torch.Tensor(y_testsets[ind_test]).type(torch.LongTensor).to(device=device)
            print(pred.shape, y_labs.shape)
            # (same commented-out batch-accuracy loop as in the first-task block above)
            print(sum(pred == y_labs) / len(y_labs))
        print(len(x_testsets), len(y_testsets))

        mf_model.update_prior()
        # Save weights before test (and last-minute training on the coreset)
        mf_model.save_weights()
        # acc = test.get_scores(mf_model, x_trainsets, y_trainsets, x_testsets, y_testsets, no_epochs, single_head, x_coresets, y_coresets, batch_size, False, gans)
        # all_acc = test.concatenate_results(acc, all_acc)
        mf_model.load_weights()
        if not single_head:
            mf_model.create_head()

    return None
    # return all_acc
def print_graph(model, output):
    params = dict()
    for i in range(len(model.W_m)):
        params["W_m{}".format(i)] = model.W_m[i]
        params["W_v{}".format(i)] = model.W_v[i]
        params["b_m{}".format(i)] = model.b_m[i]
        params["b_v{}".format(i)] = model.b_v[i]
        # the prior_* keys below were missing their '{}' placeholders, so every
        # iteration overwrote the same entry; the layer index is now in the key
        params["prior_W_m{}".format(i)] = model.prior_W_m[i]
        params["prior_W_v{}".format(i)] = model.prior_W_v[i]
        params["prior_b_m{}".format(i)] = model.prior_b_m[i]
        params["prior_b_v{}".format(i)] = model.prior_b_v[i]
    for i in range(len(model.W_last_m)):
        params["W_last_m{}".format(i)] = model.W_last_m[i]
        params["W_last_v{}".format(i)] = model.W_last_v[i]
        params["b_last_m{}".format(i)] = model.b_last_m[i]
        params["b_last_v{}".format(i)] = model.b_last_v[i]
        params["prior_W_last_m{}".format(i)] = model.prior_W_last_m[i]
        params["prior_W_last_v{}".format(i)] = model.prior_W_last_v[i]
        params["prior_b_last_m{}".format(i)] = model.prior_b_last_m[i]
        params["prior_b_last_v{}".format(i)] = model.prior_b_last_v[i]
    dot = make_dot(output, params=params)
    dot.view()
    return
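A subtle `str.format` pitfall is relevant here: a format string with no `{}` placeholder silently ignores its arguments, so an expression like `"prior_W_m".format(i)` yields the identical key on every loop iteration and each assignment overwrites the last. A two-line demonstration:

```python
# No placeholder: the argument 3 is silently discarded
assert "prior_W_m".format(3) == "prior_W_m"
# With a placeholder the index lands in the key, giving distinct keys per layer
assert "prior_W_m{}".format(3) == "prior_W_m3"
```

Because `format` never raises for unused positional arguments, this class of bug produces no error, only a `params` dict with fewer entries than intended.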
195389e20c88d6e537f3a8199a8cfd489bb3b221 | 65 | py | Python | tests/test_modules/test_modules.py | aaron-parsons/pymalcolm | 4e7ebd6b09382ab7e013278a81097d17873fa5c4 | [
"Apache-2.0"
] | 11 | 2016-10-04T23:11:39.000Z | 2022-01-25T15:44:43.000Z | tests/test_modules/test_modules.py | aaron-parsons/pymalcolm | 4e7ebd6b09382ab7e013278a81097d17873fa5c4 | [
"Apache-2.0"
] | 153 | 2016-06-01T13:31:02.000Z | 2022-03-31T11:17:18.000Z | tests/test_modules/test_modules.py | aaron-parsons/pymalcolm | 4e7ebd6b09382ab7e013278a81097d17873fa5c4 | [
"Apache-2.0"
] | 16 | 2016-06-10T13:45:27.000Z | 2020-10-24T13:45:04.000Z | import unittest
class TestModules(unittest.TestCase):
    pass
1992a2d6ffa2d5b2e779e0d40b406d5b32cecc61 | 138 | py | Python | python/pySimE/space/exp/pykep/example2.py | ProkopHapala/SimpleSimulationEngine | 240f9b7e85b3a6eda7a27dc15fe3f7b8c08774c5 | [
"MIT"
] | 26 | 2016-12-04T04:45:12.000Z | 2022-03-24T09:39:28.000Z | python/pySimE/space/exp/pykep/example2.py | Aki78/FlightAI | 9c5480f2392c9c89b9fee4902db0c4cde5323a6c | [
"MIT"
] | null | null | null | python/pySimE/space/exp/pykep/example2.py | Aki78/FlightAI | 9c5480f2392c9c89b9fee4902db0c4cde5323a6c | [
"MIT"
] | 2 | 2019-02-09T12:31:06.000Z | 2019-04-28T02:24:50.000Z | # -*- coding: utf-8 -*-
from PyKEP import *
#kep_examples.run_example1()
#kep_examples.run_example2()
#kep_examples.run_example3()
199f93f1aee3e3335d1dbc423929e4ff830ad342 | 153 | py | Python | core/__init__.py | macrat/macracoin | a5267c4b3b9f812378af059562676dc5e7cb320e | [
"MIT"
] | null | null | null | core/__init__.py | macrat/macracoin | a5267c4b3b9f812378af059562676dc5e7cb320e | [
"MIT"
] | null | null | null | core/__init__.py | macrat/macracoin | a5267c4b3b9f812378af059562676dc5e7cb320e | [
"MIT"
] | null | null | null | from core.block import Block, mining
from core.chain import Chain
from core.message import Message
from core.user import User
from core.errors import *
5fea82ff28c1aa5396c06b2a0942424de6fe2b58 | 13,437 | py | Python | test/test_application_properties.py | jackdewinter/application_properties | 087696f6e155b98503036d929e0a7fcc17a10996 | [
"MIT"
] | null | null | null | test/test_application_properties.py | jackdewinter/application_properties | 087696f6e155b98503036d929e0a7fcc17a10996 | [
"MIT"
] | null | null | null | test/test_application_properties.py | jackdewinter/application_properties | 087696f6e155b98503036d929e0a7fcc17a10996 | [
"MIT"
] | null | null | null | """
Tests for the ApplicationProperties class
"""
from application_properties import ApplicationProperties
# pylint: disable=too-many-lines
def test_property_name_separator():
    """
    Test to make sure that the property name separator is as expected.
    """

    # Arrange
    application_properties = ApplicationProperties()
    expected_separator = "."

    # Act
    actual_separator = application_properties.separator

    # Assert
    assert actual_separator == expected_separator


def test_properties_with_object():
    """
    Test to make sure that a default application properties object has no properties.
    """

    # Arrange
    application_properties = ApplicationProperties()
    expected_property_count = 0

    # Act
    actual_property_count = application_properties.number_of_properties

    # Assert
    assert actual_property_count == expected_property_count


def test_properties_with_single_property():
    """
    Test a configuration map with a single property, and how that property looks.
    """

    # Arrange
    application_properties = ApplicationProperties()
    config_map = {"enabled": True}
    expected_property_count = 1

    # Act
    application_properties.load_from_dict(config_map)
    actual_property_count = application_properties.number_of_properties
    found_names = application_properties.property_names

    # Assert
    assert actual_property_count == expected_property_count
    assert len(found_names) == expected_property_count
    assert "enabled" in found_names


def test_properties_with_single_nested_property():
    """
    Test a configuration map with a single nested property, and how that property looks.
    """

    # Arrange
    application_properties = ApplicationProperties()
    config_map = {"feature": {"enabled": True}}
    expected_property_count = 1

    # Act
    application_properties.load_from_dict(config_map)
    actual_property_count = application_properties.number_of_properties
    found_names = application_properties.property_names

    # Assert
    assert actual_property_count == expected_property_count
    assert len(found_names) == expected_property_count
    assert "feature.enabled" in found_names


def test_properties_with_mixed_properties():
    """
    Test a configuration map with properties at different levels, and how those properties look.
    """

    # Arrange
    application_properties = ApplicationProperties()
    config_map = {
        "feature": {"enabled": True},
        "other_feature": {"enabled": False, "other": 1},
    }
    expected_property_count = 3

    # Act
    application_properties.load_from_dict(config_map)
    actual_property_count = application_properties.number_of_properties
    found_names = application_properties.property_names

    # Assert
    assert actual_property_count == expected_property_count
    assert len(found_names) == expected_property_count
    assert "feature.enabled" in found_names
    assert "other_feature.enabled" in found_names
    assert "other_feature.other" in found_names

def test_get_properties_under_at_top_level_partial():
    """
    Test calling the `property_names_under` function specifying only part of the top level.
    """

    # Arrange
    config_map = {
        "feature": {"enabled": True},
        "other_feature": {"enabled": False, "other": 1},
    }
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)

    # Act
    found_names = application_properties.property_names_under("other_feature")

    # Assert
    assert len(found_names) == len(config_map["other_feature"])
    assert "other_feature.enabled" in found_names
    assert "other_feature.other" in found_names


def test_get_properties_under_at_top_level_none():
    """
    Test calling the `property_names_under` function specifying none of the top level.
    """

    # Arrange
    config_map = {
        "feature": {"enabled": True},
        "other_feature": {"enabled": False, "other": 1},
    }
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)

    # Act
    found_names = application_properties.property_names_under("missing_feature")

    # Assert
    assert not found_names
    assert "missing_feature" not in config_map


def test_get_properties_under_at_sub_level():
    """
    Test calling the `property_names_under` function specifying a path below the top level.
    """

    # Arrange
    config_map = {
        "new_top_level": {
            "feature": {"enabled": True},
            "other_feature": {"enabled": False, "other": 1},
        }
    }
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)

    # Act
    found_names = application_properties.property_names_under("new_top_level.feature")

    # Assert
    assert len(found_names) == len(config_map["new_top_level"]["feature"])
    assert "new_top_level.feature.enabled" in found_names

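The dotted names these tests expect come from flattening the nested dictionary with `.` as the separator, and `property_names_under` then behaves as a prefix filter over those names. The sketch below is a stdlib-only illustration of the semantics under test, not the library's actual implementation.

```python
def flatten(config, prefix=""):
    """Flatten a nested dict into dotted property names, as the tests expect."""
    names = []
    for key, value in config.items():
        full = prefix + "." + key if prefix else key
        if isinstance(value, dict):
            names.extend(flatten(value, full))
        else:
            names.append(full)
    return names

def names_under(names, prefix):
    """Prefix filter mirroring the behavior of property_names_under."""
    return [n for n in names if n.startswith(prefix + ".")]

names = flatten({"feature": {"enabled": True},
                 "other_feature": {"enabled": False, "other": 1}})
# names == ["feature.enabled", "other_feature.enabled", "other_feature.other"]
```

Filtering with `prefix + "."` (rather than the bare prefix) is what makes a nonexistent prefix like `"missing_feature"` return an empty list instead of partial matches.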
def test_properties_load_from_non_dictionary():
    """
    Test loading a configuration map that is not a dictionary.
    """

    # Arrange
    application_properties = ApplicationProperties()
    config_map = [{"feature": True}]

    # Act
    raised_exception = None
    try:
        application_properties.load_from_dict(config_map)
        assert False, "Should have raised an exception by now."
    except ValueError as this_exception:
        raised_exception = this_exception

    # Assert
    assert raised_exception, "Expected exception was not raised."
    assert (
        str(raised_exception) == "Specified parameter was not a dictionary."
    ), "Expected message was not present in exception."


def test_properties_load_with_non_string_key():
    """
    Test loading a configuration map that contains a key that is not a string.
    """

    # Arrange
    application_properties = ApplicationProperties()
    config_map = {1: True}

    # Act
    raised_exception = None
    try:
        application_properties.load_from_dict(config_map)
        assert False, "Should have raised an exception by now."
    except ValueError as this_exception:
        raised_exception = this_exception

    # Assert
    assert raised_exception, "Expected exception was not raised."
    assert (
        str(raised_exception)
        == "All keys in the main dictionary and nested dictionaries must be strings."
    ), "Expected message was not present in exception."


def test_properties_load_with_key_containing_dot():
    """
    Test loading a configuration map that contains a key with a '.' character.
    """

    # Arrange
    application_properties = ApplicationProperties()
    config_map = {"my.property": True}

    # Act
    raised_exception = None
    try:
        application_properties.load_from_dict(config_map)
        assert False, "Should have raised an exception by now."
    except ValueError as this_exception:
        raised_exception = this_exception

    # Assert
    assert raised_exception, "Expected exception was not raised."
    assert (
        str(raised_exception)
        == "Keys strings cannot contain the separator character '.'."
    ), "Expected message was not present in exception."

def test_properties_get_generic_with_bad_type():
    """
    Test fetching a configuration value where the generic function is
    used and the type and the default are confused.
    """

    # Arrange
    config_map = {"property": True}
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)

    # Act
    raised_exception = None
    try:
        application_properties.get_property("property", False)
        assert False, "Should have raised an exception by now."
    except ValueError as this_exception:
        raised_exception = this_exception

    # Assert
    assert raised_exception, "Expected exception was not raised."
    assert (
        str(raised_exception)
        == "The property_type argument for 'property' must be a type."
    ), "Expected message was not present in exception."


def test_properties_get_generic_with_required_and_found():
    """
    Test fetching a configuration value where the value is required and present.
    """

    # Arrange
    config_map = {"property": True}
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)
    expected_value = True

    # Act
    actual_value = application_properties.get_property(
        "property", bool, is_required=True
    )

    # Assert
    assert actual_value == expected_value


def test_properties_get_generic_with_required_and_not_found():
    """
    Test fetching a configuration value where the value is required and not present.
    """

    # Arrange
    config_map = {"property": True}
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)

    # Act
    raised_exception = None
    try:
        application_properties.get_property("other_property", bool, is_required=True)
        assert False, "Should have raised an exception by now."
    except ValueError as this_exception:
        raised_exception = this_exception

    # Assert
    assert raised_exception, "Expected exception was not raised."
    assert (
        str(raised_exception)
        == "A value for property 'other_property' must be provided."
    ), "Expected message was not present in exception."


def test_properties_get_generic_with_strict_mode_and_bad_type():
    """
    Test fetching a configuration value where strict mode is on and the type is not correct.
    """

    # Arrange
    config_map = {"property": 1}
    application_properties = ApplicationProperties()
    application_properties.load_from_dict(config_map)

    # Act
    raised_exception = None
    try:
        application_properties.get_property("property", str, strict_mode=True)
        assert False, "Should have raised an exception by now."
    except ValueError as this_exception:
        raised_exception = this_exception

    # Assert
    assert raised_exception, "Expected exception was not raised."
    assert (
        str(raised_exception)
        == "The value for property 'property' must be of type 'str'."
    ), "Expected message was not present in exception."

def test_properties_get_generic_with_global_strict_mode_and_bad_type():
"""
Test fetching a configuration value where global strict mode is on and the type is not correct.
"""
# Arrange
config_map = {"property": 1}
application_properties = ApplicationProperties(strict_mode=True)
application_properties.load_from_dict(config_map)
# Act
raised_exception = None
try:
application_properties.get_property("property", str)
assert False, "Should have raised an exception by now."
except ValueError as this_exception:
raised_exception = this_exception
# Assert
assert application_properties.strict_mode
assert raised_exception, "Expected exception was not raised."
assert (
str(raised_exception)
== "The value for property 'property' must be of type 'str'."
), "Expected message was not present in exception."
def test_properties_get_generic_with_delayed_global_strict_mode_and_bad_type():
"""
Test fetching a configuration value where strict mode is enabled through the delayed mechanism and the type is not correct.
"""
# Arrange
config_map = {"property": 1}
application_properties = ApplicationProperties()
application_properties.load_from_dict(config_map)
application_properties.enable_strict_mode()
# Act
raised_exception = None
try:
application_properties.get_property("property", str)
assert False, "Should have raised an exception by now."
except ValueError as this_exception:
raised_exception = this_exception
# Assert
assert application_properties.strict_mode
assert raised_exception, "Expected exception was not raised."
assert (
str(raised_exception)
== "The value for property 'property' must be of type 'str'."
), "Expected message was not present in exception."
def __sample_string_validation_function(property_value):
"""
Simple string validation that throws an error if not "1" or "2".
"""
if property_value not in ["1", "2"]:
raise ValueError("Value '" + str(property_value) + "' is not '1' or '2'")
def test_properties_get_generic_with_strict_mode_and_bad_validity():
"""
Test fetching a configuration value where strict mode is on and the value is not valid.
"""
# Arrange
config_map = {"property": "3"}
application_properties = ApplicationProperties()
application_properties.load_from_dict(config_map)
# Act
raised_exception = None
try:
application_properties.get_property(
"property",
str,
strict_mode=True,
valid_value_fn=__sample_string_validation_function,
)
assert False, "Should have raised an exception by now."
except ValueError as this_exception:
raised_exception = this_exception
# Assert
assert raised_exception, "Expected exception was not raised."
assert (
str(raised_exception)
== "The value for property 'property' is not valid: Value '3' is not '1' or '2'"
), "Expected message was not present in exception."
# -*- coding: utf-8 -*-
# tests/functional/factories/test_minion_factory.py
# Source repo: cmcmarrow/pytest-salt-factories (Apache-2.0)
"""
tests.functional.factories.test_minion_factory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Functional tests for the salt minion factory
"""
import pytest
def test_hook_basic_config_defaults(testdir):
testdir.makeconftest(
"""
def pytest_saltfactories_minion_configuration_defaults():
return {'zzzz': True}
"""
)
p = testdir.makepyfile(
"""
def test_basic_config_override(request, salt_factories):
minion_config = salt_factories.configure_minion(request, 'minion-1')
assert 'zzzz' in minion_config
"""
)
res = testdir.runpytest("-v")
res.assert_outcomes(passed=1)
def test_keyword_basic_config_defaults(request, salt_factories):
minion_config = salt_factories.configure_minion(
request, "minion-1", config_defaults={"zzzz": True}
)
assert "zzzz" in minion_config
def test_interface_config_defaults(request, salt_factories):
interface = "172.17.0.1"
master_config = salt_factories.configure_master(
request, "master-1", config_defaults={"interface": interface}
)
assert master_config["interface"] != interface
assert master_config["interface"] == "127.0.0.1"
def test_master_config_defaults(request, salt_factories):
master = "172.17.0.1"
minion_config = salt_factories.configure_minion(
request, "minion-1", config_defaults={"master": master}
)
assert minion_config["master"] != master
assert minion_config["master"] == "127.0.0.1"
def test_hook_basic_config_overrides(testdir):
testdir.makeconftest(
"""
def pytest_saltfactories_minion_configuration_overrides():
return {'zzzz': True}
"""
)
p = testdir.makepyfile(
"""
def test_basic_config_override(request, salt_factories):
minion_config = salt_factories.configure_minion(request, 'minion-1')
assert 'zzzz' in minion_config
"""
)
res = testdir.runpytest("-v")
res.assert_outcomes(passed=1)
def test_keyword_basic_config_overrides(request, salt_factories):
minion_config = salt_factories.configure_minion(
request, "minion-1", config_overrides={"zzzz": True}
)
assert "zzzz" in minion_config
def test_interface_config_overrides(request, salt_factories):
interface = "172.17.0.1"
master_config = salt_factories.configure_master(
request, "master-1", config_overrides={"interface": interface}
)
assert master_config["interface"] == interface
assert master_config["interface"] != "127.0.0.1"
def test_master_config_overrides(request, salt_factories):
master = "172.17.0.1"
minion_config = salt_factories.configure_minion(
request, "minion-1", config_overrides={"master": master}
)
assert minion_config["master"] == master
assert minion_config["master"] != "127.0.0.1"
def test_hook_simple_overrides_override_defaults(testdir):
testdir.makeconftest(
"""
def pytest_saltfactories_minion_configuration_defaults():
return {'zzzz': False}
def pytest_saltfactories_minion_configuration_overrides():
return {'zzzz': True}
"""
)
p = testdir.makepyfile(
"""
def test_basic_config_override(request, salt_factories):
minion_config = salt_factories.configure_minion(request, 'minion-1')
assert 'zzzz' in minion_config
assert minion_config['zzzz'] is True
"""
)
res = testdir.runpytest("-v")
res.assert_outcomes(passed=1)
def test_keyword_simple_overrides_override_defaults(request, salt_factories):
minion_config = salt_factories.configure_minion(
request, "minion-1", config_defaults={"zzzz": False}, config_overrides={"zzzz": True}
)
assert "zzzz" in minion_config
assert minion_config["zzzz"] is True
def test_hook_nested_overrides_override_defaults(testdir):
testdir.makeconftest(
"""
def pytest_saltfactories_minion_configuration_defaults():
return {
'zzzz': False,
'user': 'foobar',
'colors': {
'black': True,
'white': False
}
}
def pytest_saltfactories_minion_configuration_overrides():
return {
'colors': {
'white': True,
'grey': False
}
}
"""
)
p = testdir.makepyfile(
"""
def test_basic_config_override(request, salt_factories):
minion_config = salt_factories.configure_minion(request, 'minion-1')
assert 'zzzz' in minion_config
assert minion_config['zzzz'] is False
assert minion_config['colors'] == {
'black': True,
'white': True,
'grey': False
}
"""
)
res = testdir.runpytest("-v")
res.assert_outcomes(passed=1)
def test_keyword_nested_overrides_override_defaults(request, salt_factories):
minion_config = salt_factories.configure_minion(
request,
"minion-1",
config_defaults={
"zzzz": False,
"user": "foobar",
"colors": {"black": True, "white": False},
},
config_overrides={"colors": {"white": True, "grey": False}},
)
assert "zzzz" in minion_config
assert minion_config["zzzz"] is False
assert minion_config["colors"] == {"black": True, "white": True, "grey": False}
def test_nested_overrides_override_defaults(testdir):
testdir.makeconftest(
"""
def pytest_saltfactories_minion_configuration_defaults():
return {
'zzzz': True,
'user': 'foobar',
'colors': {
'black': False,
'white': True,
'blue': False,
}
}
def pytest_saltfactories_minion_configuration_overrides():
return {
'colors': {
'white': False,
'grey': True,
'blue': True,
}
}
"""
)
p = testdir.makepyfile(
"""
def test_basic_config_override(request, salt_factories):
minion_config = salt_factories.configure_minion(
request,
'minion-1',
config_defaults={
'zzzz': False,
'user': 'foobar',
'colors': {
'black': True,
'white': False
}
},
config_overrides={
'colors': {
'white': True,
'grey': False
}
}
)
assert 'zzzz' in minion_config
assert minion_config['zzzz'] is False
assert minion_config['colors'] == {
'black': True,
'white': True,
'grey': False,
'blue': True
}
"""
)
res = testdir.runpytest("-v")
res.assert_outcomes(passed=1)
def test_provide_root_dir(testdir, request, salt_factories):
root_dir = testdir.mkdir("custom-root")
config_defaults = {"root_dir": root_dir}
minion_config = salt_factories.configure_minion(
request, "minion-1", config_defaults=config_defaults
)
assert minion_config["root_dir"] == root_dir
def configure_kwargs_ids(value):
return "configure_kwargs={!r}".format(value)
@pytest.mark.parametrize(
"configure_kwargs",
[{"config_defaults": {"user": "blah"}}, {"config_overrides": {"user": "blah"}}, {}],
ids=configure_kwargs_ids,
)
def test_provide_user(request, salt_factories, configure_kwargs):
minion_config = salt_factories.configure_minion(request, "minion-1", **configure_kwargs)
if not configure_kwargs:
# salt-factories injects the current username
assert minion_config["user"] is not None
assert minion_config["user"] == salt_factories.get_running_username()
else:
# salt-factories does not override the passed user value
assert minion_config["user"] != salt_factories.get_running_username()
assert minion_config["user"] == "blah"
# deuce/tests/test_blockstorage.py
# Source repo: BenjamenMeyer/deuce (Apache-2.0)
import ddt
import hashlib
import falcon
from deuce.util.misc import relative_uri
import json
import uuid
import os
from mock import patch
from deuce.model import Block
from deuce.tests import ControllerTest
@ddt.ddt
class TestBlockStorageController(ControllerTest):
def setUp(self):
super(TestBlockStorageController, self).setUp()
# Create a vault for us to work with
self.vault_name = self.create_vault_id()
self._vault_path = '/v1.0/vaults/{0}'.format(self.vault_name)
self._blocks_path = '{0:}/blocks'.format(self._vault_path)
self._storage_path = '{0:}/storage'.format(self._vault_path)
self._block_storage_path = '{0:}/blocks'.format(self._storage_path)
self._hdrs = {"x-project-id": self.create_project_id()}
self.helper_create_vault(self.vault_name, self._hdrs)
def tearDown(self):
self.helper_delete_vault(self.vault_name, self._hdrs)
super(TestBlockStorageController, self).tearDown()
def test_post_blocks(self):
data = 'abcdef'
block_path = 'http://localhost' + \
self.get_blocks_path(self.vault_name)
storage_path = self.get_storage_blocks_path(self.vault_name)
response = self.simulate_post(storage_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_405)
self.assertIn('x-blocks-location', self.srmock.headers_dict)
self.assertEqual(block_path,
self.srmock.headers_dict['x-blocks-location'])
def test_put_block_nonexistant_block(self):
# No block already in metadata/storage
block_id = self.create_storage_block_id()
block_path = self.get_storage_block_path(self.vault_name, block_id)
response = self.simulate_put(block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_405)
self.assertTrue(self.srmock.headers_dict['x-block-location'])
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(block_id, self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(str(None), self.srmock.headers_dict['x-block-id'])
def test_put_block_existing_block(self):
# block already in metadata/storage
# Generate a block
upload_data = os.urandom(100)
upload_block_id = self.calc_sha1(upload_data)
# Upload it to Deuce in the correct method (via blocks/{sha1})
upload_block_path = self.get_block_path(self.vault_name,
upload_block_id)
# Copy the headers so the shared self._hdrs dict is not mutated
upload_headers = {}
upload_headers.update(self._hdrs)
upload_headers.update({
"Content-Type": "application/octet-stream",
"Content-Length": "100"
})
response = self.simulate_put(upload_block_path,
headers=upload_headers,
body=upload_data)
self.assertEqual(self.srmock.status, falcon.HTTP_201)
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(upload_block_id,
self.srmock.headers_dict['x-block-id'])
# Now try to upload it via the Storage Blocks method
storage_block_id = self.srmock.headers_dict['x-storage-id']
block_path = self.get_storage_block_path(
self.vault_name, storage_block_id)
response = self.simulate_put(block_path,
headers=upload_headers,
body=upload_data)
self.assertEqual(self.srmock.status, falcon.HTTP_405)
self.assertTrue(self.srmock.headers_dict['x-block-location'])
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(storage_block_id,
self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(upload_block_id,
self.srmock.headers_dict['x-block-id'])
def test_put_block_vault_name_is_storage(self):
# Rebuild the vault data
self.vault_name = 'storage'
self._vault_path = '/v1.0/vaults/{0}'.format(self.vault_name)
self._blocks_path = '{0}/blocks'.format(self._vault_path)
self._storage_path = '{0:}/storage'.format(self._vault_path)
self._block_storage_path = '{0:}/blocks'.format(self._storage_path)
self.helper_create_vault(self.vault_name, self._hdrs)
block_id = self.create_storage_block_id()
block_path = 'http://localhost{0}/{1}'.format(self._blocks_path,
block_id)
storage_block_path = self.get_storage_block_path(
self.vault_name, block_id)
response = self.simulate_put(storage_block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_405)
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(block_id, self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(str(None), self.srmock.headers_dict['x-block-id'])
block_location = self.srmock.headers_dict['x-block-location']
self.assertTrue(block_location)
self.assertIn('storage', block_location)
self.assertEqual(block_path, block_location)
def test_list_blocks_bad_vault(self):
block_storage_path = self.get_storage_blocks_path(
self.create_vault_id())
response = self.simulate_get(block_storage_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_404)
def test_list_blocks(self):
response = self.simulate_get(self._block_storage_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_200)
def test_list_blocks_with_limit_marker(self):
# Test with bad marker
block_storage_path = self.get_storage_blocks_path(self.vault_name)
block_marker = self.create_storage_block_id()
marker = 'marker={0:}'.format(block_marker)
response = self.simulate_get(self._block_storage_path,
query_string=marker,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_200)
self.assertEqual(response[0].decode(), '[]')
block_list, block_data = self.helper_create_blocks(num_blocks=40)
storage_list = self.helper_store_blocks(self.vault_name, block_data)
# Test with no marker
response = self.simulate_get(self._block_storage_path,
headers=self._hdrs)
storage_ids = json.loads(response[0].decode())
self.assertEqual(self.srmock.status, falcon.HTTP_200)
# Test valid marker
# Let's list from the second storage_id
query_marker = storage_ids[0]
marker = 'marker={0:}'.format(query_marker)
response = self.simulate_get(self._block_storage_path,
query_string=marker,
headers=self._hdrs)
responses = json.loads(response[0].decode())
for resp in responses:
resp_sha1, resp_uuid = resp.split('_')
self.assertTrue(uuid.UUID(resp_uuid))
self.assertIn(resp, storage_ids)
# Test valid limit
response = self.simulate_get(self._block_storage_path,
query_string='limit=3',
headers=self._hdrs)
self.assertEqual(len(json.loads(response[0].decode())), 3)
# Test valid marker with limit
query_string = 'marker={0:}&limit={1:}'.format(query_marker, '2')
response = self.simulate_get(self._block_storage_path,
query_string=query_string,
headers=self._hdrs)
self.assertEqual(len(json.loads(response[0].decode())), 2)
next_url, querystring = relative_uri(
self.srmock.headers_dict['x-next-batch'])
response = self.simulate_get(next_url,
query_string=querystring,
headers=self._hdrs)
self.assertEqual(len(json.loads(response[0].decode())), 2)
def test_head_block_in_nonexistent_vault(self):
block_id = self.create_storage_block_id()
block_path = self.get_storage_block_path(self.create_vault_id(),
block_id)
response = self.simulate_head(block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_404)
def test_head_block_nonexistent_block(self):
block_id = self.create_storage_block_id()
block_path = self.get_storage_block_path(self.vault_name, block_id)
response = self.simulate_head(block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_404)
def test_head_orphaned_block(self):
block_list, block_data = self.helper_create_blocks(num_blocks=1)
self.assertEqual(len(block_list), 1)
size, data, sha1 = zip(*block_data)
block_path = self.get_block_path(self.vault_name, sha1[0])
upload_headers = {}
upload_headers.update(self._hdrs)
upload_headers.update({
"Content-Type": "application/octet-stream",
"Content-Length": str(size[0]),
})
# Upload the block twice, orphaning the second block
upload_first = self.simulate_put(block_path,
headers=upload_headers,
body=data[0])
self.assertEqual(self.srmock.status, falcon.HTTP_201)
first_storage_id = self.srmock.headers_dict['x-storage-id']
first_block_id = self.srmock.headers_dict['x-block-id']
upload_second = self.simulate_put(block_path,
headers=upload_headers,
body=data[0])
self.assertEqual(self.srmock.status, falcon.HTTP_201)
second_storage_id = self.srmock.headers_dict['x-storage-id']
second_block_id = self.srmock.headers_dict['x-block-id']
# Verify we got the same block id but different storage ids
self.assertEqual(first_block_id, second_block_id)
self.assertNotEqual(first_storage_id, second_storage_id)
# Get the storage id from the orphaned second block
storage_id = second_storage_id
storage_block_path = self.get_storage_block_path(self.vault_name,
storage_id)
# Now try to head the orphaned block
response = self.simulate_head(storage_block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_204)
self.assertIn('x-block-size', self.srmock.headers_dict)
self.assertEqual(str(size[0]),
self.srmock.headers_dict['x-block-size'])
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(storage_id, self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual('None', self.srmock.headers_dict['x-block-id'])
self.assertIn('x-ref-modified', self.srmock.headers_dict)
self.assertEqual('None', self.srmock.headers_dict['x-ref-modified'])
self.assertIn('x-block-reference-count', self.srmock.headers_dict)
self.assertEqual('0',
self.srmock.headers_dict['x-block-reference-count'])
self.assertIn('x-block-orphaned', self.srmock.headers_dict)
self.assertEqual(str(True),
self.srmock.headers_dict['x-block-orphaned'])
def test_head_happy_path(self):
block_list, block_data = self.helper_create_blocks(num_blocks=1)
self.assertEqual(len(block_list), 1)
size, data, sha1 = zip(*block_data)
block_path = self.get_block_path(self.vault_name, sha1[0])
upload_headers = {}
upload_headers.update(self._hdrs)
upload_headers.update({
"Content-Type": "application/octet-stream",
"Content-Length": str(size[0]),
})
upload = self.simulate_put(block_path,
headers=upload_headers,
body=data[0])
self.assertEqual(self.srmock.status, falcon.HTTP_201)
# Get the storage id from the uploaded block
storage_id = self.srmock.headers_dict['x-storage-id']
storage_block_path = self.get_storage_block_path(self.vault_name,
storage_id)
# Now try to head the block
response = self.simulate_head(storage_block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_204)
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(storage_id, self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(sha1[0], self.srmock.headers_dict['x-block-id'])
self.assertIn('x-ref-modified', self.srmock.headers_dict)
self.assertNotEqual('None', self.srmock.headers_dict['x-ref-modified'])
self.assertIn('x-block-reference-count', self.srmock.headers_dict)
self.assertEqual('0',
self.srmock.headers_dict['x-block-reference-count'])
self.assertIn('x-block-size', self.srmock.headers_dict)
self.assertEqual(str(size[0]),
self.srmock.headers_dict['x-block-size'])
self.assertIn('x-block-orphaned', self.srmock.headers_dict)
self.assertEqual(str(False),
self.srmock.headers_dict['x-block-orphaned'])
def test_get_block_invalid_block(self):
block_id = self.create_storage_block_id()
block_path = self.get_storage_block_path(self.vault_name, block_id)
response = self.simulate_get(block_path, headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_404)
def test_get_block_bad_vault(self):
storage_block_id = self.create_storage_block_id()
block_path = self.get_storage_block_path(self.create_vault_id(),
storage_block_id)
response = self.simulate_get(block_path, headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_404)
def test_get_block(self):
block_list, block_data = self.helper_create_blocks(num_blocks=1)
self.assertEqual(len(block_list), 1)
storage_list = self.helper_store_blocks(self.vault_name, block_data)
self.assertEqual(len(storage_list), 1)
block_path = self.get_storage_block_path(
self.vault_name, storage_list[0][1])
response = self.simulate_get(block_path, headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_200)
self.assertIn('x-ref-modified', self.srmock.headers_dict)
self.assertIn('x-block-reference-count', self.srmock.headers_dict)
self.assertEqual(
int(self.srmock.headers_dict['x-block-reference-count']),
0)
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(storage_list[0][1],
self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(block_list[0],
self.srmock.headers_dict['x-block-id'])
z = hashlib.sha1()
z.update(response.read())
self.assertEqual(z.hexdigest(), block_list[0])
def test_get_block_orphaned_block(self):
block_list, block_data = self.helper_create_blocks(num_blocks=1)
self.assertEqual(len(block_list), 1)
size, data, sha1 = zip(*block_data)
upload_block_path = self.get_block_path(self.vault_name, sha1[0])
upload_headers = {}
upload_headers.update(self._hdrs)
upload_headers.update({
"Content-Type": "application/octet-stream",
"Content-Length": str(size[0]),
})
# Upload the block twice, orphaning the second block
upload_first = self.simulate_put(upload_block_path,
headers=upload_headers,
body=data[0])
self.assertEqual(self.srmock.status, falcon.HTTP_201)
first_storage_id = self.srmock.headers_dict['x-storage-id']
first_block_id = self.srmock.headers_dict['x-block-id']
upload_second = self.simulate_put(upload_block_path,
headers=upload_headers,
body=data[0])
self.assertEqual(self.srmock.status, falcon.HTTP_201)
second_storage_id = self.srmock.headers_dict['x-storage-id']
second_block_id = self.srmock.headers_dict['x-block-id']
# Verify we got the same block id but different storage ids
self.assertEqual(first_block_id, second_block_id)
self.assertNotEqual(first_storage_id, second_storage_id)
# Get the storage id from the orphaned second block
storage_id = second_storage_id
block_path = self.get_storage_block_path(self.vault_name,
storage_id)
response = self.simulate_get(block_path, headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_200)
self.assertIn('x-ref-modified', self.srmock.headers_dict)
self.assertIn('x-block-reference-count', self.srmock.headers_dict)
self.assertEqual(
int(self.srmock.headers_dict['x-block-reference-count']),
0)
self.assertIn('x-storage-id', self.srmock.headers_dict)
self.assertEqual(second_storage_id,
self.srmock.headers_dict['x-storage-id'])
self.assertIn('x-block-id', self.srmock.headers_dict)
self.assertEqual(str(None),
self.srmock.headers_dict['x-block-id'])
bindata = response.read()
z = hashlib.sha1()
z.update(bindata)
self.assertEqual(z.hexdigest(), sha1[0])
self.assertIn('content-length', self.srmock.headers_dict)
self.assertEqual(str(size[0]),
self.srmock.headers_dict['content-length'])
self.assertEqual(size[0], len(bindata))
self.assertEqual(data[0], bindata)
def test_delete_storage_non_existent(self):
storage_block_id = self.create_storage_block_id()
storage_block_path = self.get_storage_block_path(self.vault_name,
storage_block_id)
response = self.simulate_delete(storage_block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_404)
def test_delete_storage_block_with_references(self):
# NOTE(TheSriram): Let's just spoof ref-count to get 42 References.
with patch.object(Block, 'get_ref_count', return_value=42):
block_id = self.create_block_id(b'mock')
response = self.simulate_put(self.get_block_path(self.vault_name,
block_id),
headers=self._hdrs,
body=b'mock')
storage_block_id = self.srmock.headers_dict['x-storage-id']
storage_block_path = self.get_storage_block_path(self.vault_name,
storage_block_id)
response = self.simulate_delete(storage_block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_409)
def test_delete_storage_orphaned_block(self):
block_id = self.create_block_id(b'mock')
# NOTE(TheSriram): We put the same block twice, to orphan the second
# block as it will not have a reference in the metadata. But, it will
# nevertheless be present in block storage
response = self.simulate_put(self.get_block_path(self.vault_name,
block_id),
headers=self._hdrs,
body=b'mock')
real_storage_id = self.srmock.headers_dict['x-storage-id']
response = self.simulate_put(self.get_block_path(self.vault_name,
block_id),
headers=self._hdrs,
body=b'mock')
orphaned_storage_id = self.srmock.headers_dict['x-storage-id']
self.assertNotEqual(real_storage_id, orphaned_storage_id)
storage_block_path = self.get_storage_block_path(self.vault_name,
orphaned_storage_id)
response = self.simulate_delete(storage_block_path,
headers=self._hdrs)
self.assertEqual(self.srmock.status, falcon.HTTP_204)
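The tests above lean on a storage-id convention: `x-block-id` is the SHA-1 of the block's content, while `x-storage-id` is unique per upload, in the `<sha1>_<uuid>` form that `test_list_blocks_with_limit_marker` splits apart. A sketch of that format (an illustration of the convention the tests parse, not Deuce's internal code):

```python
import hashlib
import uuid

# Build a storage id in the "<sha1>_<uuid>" shape the listing test splits.
# The sha1 part is content-addressed; the uuid part is unique per upload,
# which is why re-uploading identical data can orphan an earlier copy.
def make_storage_id(block_data):
    block_id = hashlib.sha1(block_data).hexdigest()
    return "{0}_{1}".format(block_id, uuid.uuid4())
```

Two calls with the same bytes share the sha1 prefix but get different uuid suffixes — exactly the equal-block-id / distinct-storage-id situation the orphaned-block tests set up.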
# SO_sim_runner.py
# Source repo: EngTurtle/VISSIM_Routing_Thesis (Apache-2.0)
import win32com.client as com
# recipes/Python/577284_Maclaurinsseriesbinomialseries/recipe-577284.py
# Source repo: tdiprima/code (MIT)
#On the name of ALLAH and may the blessing and peace of Allah
#be upon the Messenger of Allah Mohamed Salla Allahu Aliahi Wassalam.
#Author : Fouad Teniou
#Date : 06/07/10
#version :2.6
"""
maclaurin_binomial is a function to compute(1+x)^m using maclaurin
binomial series and the interval of convergence is -1 < x < 1
(1+x)^m = 1 + mx + m(m-1)x^2/2! + m(m-1)(m-2)x^3/3!...........
note: if m is a nonegative integer the binomial is a polynomial of
degree m and it is valid on -inf < x < +inf,thus, the error function
will not be valid.
"""
from math import *
def error(number):
""" Raises interval of convergence error."""
if number >= 1 or number <= -1 :
raise TypeError,\
"\n<The interval of convergence should be -1 < value < 1 \n"
def maclaurin_binomial(value,m,k):
"""
Compute maclaurin's binomial series approximation for (1+x)^m.
"""
global first_value
first_value = 0.0
error(value)
#attempt to Approximate (1+x)^m for given values
try:
for item in xrange(1,k):
next_value =m*(value**item)/factorial(item)
for i in range(2,item+1):
next_second_value =(m-i+1)
next_value *= next_second_value
first_value += next_value
return first_value + 1
#Catch the TypeError raised if input is not within
#the interval of convergence
except TypeError,exception:
print exception
#Catch the OverflowError if an overflow occurs
except OverflowError:
print '\n<Please enter a lower k value to avoid the Over flow\n '
if __name__ == "__main__":
maclaurin_binomial_1 = maclaurin_binomial(0.777,-0.5,171)
print maclaurin_binomial_1
maclaurin_binomial_2 = maclaurin_binomial(0.37,0.5,171)
print maclaurin_binomial_2
maclaurin_binomial_3 = maclaurin_binomial(0.3,0.717,171)
print maclaurin_binomial_3
########################################################################
#c:python
#
#0.750164116353
#1.17046999107
#1.20697252357
#######################################################################
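As a quick sanity check on the recipe's numbers above, here is a hedged Python 3 re-implementation (same term-by-term recurrence, renamed locals) compared against the closed form (1+x)**m; this is an illustrative sketch, not part of the original recipe:

```python
from math import factorial

def maclaurin_binomial(x, m, k):
    """First k terms of the binomial series for (1 + x)**m, with -1 < x < 1."""
    total = 1.0
    for n in range(1, k):
        term = m * (x ** n) / factorial(n)   # m * x^n / n!
        for i in range(2, n + 1):
            term *= (m - i + 1)              # builds m(m-1)...(m-n+1)
        total += term
    return total

# Inside the interval of convergence the partial sum matches the closed form.
print(maclaurin_binomial(0.777, -0.5, 171))   # ~0.750164, as in the output above
```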
#Version : Python 3.2
#import math
#def maclaurin_binomial(value,m,k):
# """
# Compute maclaurin's binomial series approximation for (1+x)^m.
# """
# global first_value
# first_value = 0.0
#
# #attempt to Approximate (1+x)^m for given values
# try:
#
# for item in range(1,k):
# next_value =m*(value**item)/math.factorial(item)
#
# for i in range(2,item+1):
# next_second_value =(m-i+1)
# next_value *= next_second_value
# first_value += next_value
# return first_value + 1
#
# #Raise TypeError if input is not within
# #the interval of convergence
# except TypeError as exception:
# print (exception)
#
# #Raise OverflowError if an over flow occur
# except OverflowError:
# print ('\n<Please enter a lower k value to avoid the Over flow\n ')
#
#
#if __name__ == "__main__":
# maclaurin_binomial_1 = maclaurin_binomial(0.777,-0.5,171)
# print (maclaurin_binomial_1 )
# maclaurin_binomial_2 = maclaurin_binomial(0.37,0.5,171)
# print (maclaurin_binomial_2)
# maclaurin_binomial_3 = maclaurin_binomial(0.3,0.717,171)
# print (maclaurin_binomial_3)
######################################################################################
#decimal Version Python 3.2
#from math import *
#from decimal import Decimal as D,Context, localcontext
#def error(number):
# """ Raises interval of convergence error."""
# if number >= 1 or number <= -1 :
# raise TypeError("\n<The interval of convergence should be -1 < value < 1 \n")
#def maclaurin_binomial(value,m,k):
# """
# Compute maclaurin's binomial series approximation for (1+x)^m.
# """
# global first_value
# first_value = 0
#
# #attempt to Approximate (1+x)^m for given values
# try:
#
# for item in range(1,k):
# next_value = (m*(value**item))/factorial(item)
# for i in range(2,item+1):
# next_second_value =(m-i+1)
# next_value *= next_second_value
# first_value += next_value
#
# return (first_value) + (1)
#
# #Raise TypeError if input is not within
# #the interval of convergence
# except TypeError as exception:
# print(exception)
#
# #Raise OverflowError if an over flow occur
# except OverflowError:
# print('\n<Please enter a lower k value to avoid the Over flow\n ')
#
#
#if __name__ == "__main__":
#
# with localcontext(Context(prec= 1777)):
# for arg in range(2,-8,-2):
# print(maclaurin_binomial(D("0.777"),arg,171))
| 29.237805 | 86 | 0.594578 | 644 | 4,795 | 4.284161 | 0.192547 | 0.14788 | 0.008699 | 0.052193 | 0.767669 | 0.767669 | 0.767669 | 0.767669 | 0.767669 | 0.767669 | 0 | 0.048203 | 0.251512 | 4,795 | 163 | 87 | 29.417178 | 0.720535 | 0.575182 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.035714 | null | null | 0.178571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
279d065ffbd0a89a506adad41d988db86b27aa1f | 61 | py | Python | book-code/numpy-ml/numpy_ml/neural_nets/models/__init__.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | 3 | 2021-07-07T13:28:01.000Z | 2021-11-12T06:32:49.000Z | book-code/numpy-ml/numpy_ml/neural_nets/models/__init__.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | book-code/numpy-ml/numpy_ml/neural_nets/models/__init__.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | 3 | 2021-11-17T08:46:37.000Z | 2022-03-04T16:35:36.000Z | from .vae import *
from .wgan_gp import *
from .w2v import *
| 15.25 | 22 | 0.704918 | 10 | 61 | 4.2 | 0.6 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.196721 | 61 | 3 | 23 | 20.333333 | 0.836735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27d2dacdbf1d1c931dd312f390681580ff305450 | 1,908 | py | Python | tests/get_str_dict_property_tests.py | agorinenko/power_dict | 01e8bcdfd6a77915e5c971882d5b94646f04cfdb | [
"MIT"
] | 3 | 2019-11-28T11:56:02.000Z | 2020-10-21T10:39:46.000Z | tests/get_str_dict_property_tests.py | agorinenko/power_dict | 01e8bcdfd6a77915e5c971882d5b94646f04cfdb | [
"MIT"
] | 6 | 2020-02-19T08:09:37.000Z | 2020-09-02T15:17:01.000Z | tests/get_str_dict_property_tests.py | agorinenko/power-dict | 01e8bcdfd6a77915e5c971882d5b94646f04cfdb | [
"MIT"
] | null | null | null | import unittest
from power_dict.errors import NoneParameterError
from power_dict.utils import DictUtils


class GetStrDictPropertyTests(unittest.TestCase):
    properties = {
        "property_1": "Hello!",
        "property_1_none": None,
        "property_1_empty": ''
    }

    def test_get_property(self):
        target = DictUtils.get_str_dict_property(self.properties, 'property_1')
        self.assertIsInstance(target, str)
        self.assertEqual(target, "Hello!")

        target = DictUtils.get_str_dict_property(self.properties, 'property_1_none', default_value="Default string")
        self.assertIsInstance(target, str)
        self.assertEqual(target, "Default string")

        target = DictUtils.get_str_dict_property(self.properties, 'property_1_empty', default_value="Default string")
        self.assertIsInstance(target, str)
        self.assertEqual(target, "Default string")

        target = DictUtils.get_str_dict_property(self.properties, 'key_not_found')
        self.assertIsInstance(target, str)
        self.assertEqual(target, '')

    def test_get_required_property(self):
        target = DictUtils.get_required_str_dict_property(self.properties, 'property_1')
        self.assertIsInstance(target, str)
        self.assertEqual(target, "Hello!")

        with self.assertRaises(NoneParameterError):
            DictUtils.get_required_str_dict_property(self.properties, 'property_1_none',
                                                     required_error="Key property_1_none is None")

        with self.assertRaises(NoneParameterError):
            DictUtils.get_required_str_dict_property(self.properties, 'property_1_empty',
                                                     required_error="Key property_1_none is None")

        with self.assertRaises(NoneParameterError):
| 41.478261 | 117 | 0.68501 | 204 | 1,908 | 6.102941 | 0.181373 | 0.079518 | 0.096386 | 0.122088 | 0.825703 | 0.801606 | 0.801606 | 0.761446 | 0.729317 | 0.729317 | 0 | 0.007437 | 0.224843 | 1,908 | 45 | 118 | 42.4 | 0.834348 | 0 | 0 | 0.411765 | 0 | 0 | 0.145178 | 0 | 0 | 0 | 0 | 0 | 0.382353 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
27e35604505557e781c035de45bdd7a16548e089 | 36,449 | py | Python | LibCns_Gas.py | stefdeli/Gas-Swing-Contracts | d32d6545f0a898b440a5e81f455cc28cb095bf6c | [
"MIT"
] | 1 | 2021-08-07T06:00:51.000Z | 2021-08-07T06:00:51.000Z | LibCns_Gas.py | stefdeli/Gas-Swing-Contracts | d32d6545f0a898b440a5e81f455cc28cb095bf6c | [
"MIT"
] | 1 | 2018-10-08T08:14:11.000Z | 2018-10-08T08:17:50.000Z | LibCns_Gas.py | stefdeli/Gas-Swing-Contracts | d32d6545f0a898b440a5e81f455cc28cb095bf6c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Dec 7 17:13:27 2017
@author: delikars
"""
from collections import defaultdict
import numpy as np
import itertools
import gurobipy as gb
import defaults
class expando(object):
    pass
def OuterApprox_Discretization(self):
# Pressure discretization at every gas node
gnodes = self.gdata.gnodes
gndata = self.gdata.gnodedf
prd = defaultdict(list)
for gn in gnodes:
prd[gn] = np.linspace(gndata['PresMin'][gn], gndata['PresMax'][gn], self.gdata.Nfxpp).tolist()
self.gdata.prd = prd
# Create Parameter for Positive and negative part of outer approximation equation
Kpos = defaultdict(lambda: defaultdict(list) )
for pl in self.gdata.pplinepassive:
ns, nr = pl # Pipeline between nodes ns and nr
for vs, vr in list(itertools.product(range(self.gdata.Nfxpp), repeat = 2)):
if prd[ns][vs] > prd[nr][vr]:
pres_s = prd[ns][vs]
pres_r = prd[nr][vr]
K = self.gdata.pplineK[pl]
Kpos[pl][vs, vr] = K/np.sqrt(np.square(pres_s) - np.square(pres_r))
# elif prd[ns][vs] == prd[nr][vr]:
# pres_s = prd[ns][vs]+1e-3
# pres_r = prd[nr][vr]
# K = self.gdata.pplineK[pl]
# Kpos[pl][vs, vr] =1#3 K/np.sqrt(np.square(pres_s) - np.square(pres_r))
self.gdata.Kpos = Kpos
# if bi-directional flow model flow limits from Receiving to Sending end
if self.gdata.flow2dir == True:
Kneg = defaultdict(lambda: defaultdict(list) )
for pl in self.gdata.pplinepassive:
ns, nr = pl
for vs, vr in list(itertools.product(range(self.gdata.Nfxpp), repeat = 2)):
if prd[ns][vs] < prd[nr][vr]:
Kneg[pl][vs, vr] = self.gdata.pplineK[pl]/np.sqrt(np.square(prd[nr][vr]) - np.square(prd[ns][vs]))
self.gdata.Kneg = Kneg
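OuterApprox_Discretization precomputes, for each pair of fixed pressure points, the coefficient K/sqrt(ps0^2 - pr0^2) of a tangent plane to the Weymouth equation q = K*sqrt(ps^2 - pr^2). A small numeric sketch (K and pressures are hypothetical, stdlib math instead of numpy) of why these planes form an outer approximation:

```python
import math

def weymouth_tangent_coeff(K, ps0, pr0):
    """Slope K / sqrt(ps0^2 - pr0^2) of the tangent plane at the fixed
    pressure point (ps0, pr0), mirroring the Kpos entries built above."""
    return K / math.sqrt(ps0 ** 2 - pr0 ** 2)

K = 2.0                # hypothetical Weymouth constant
ps0, pr0 = 60.0, 45.0  # one fixed discretization point (hypothetical pressures)
c = weymouth_tangent_coeff(K, ps0, pr0)

# sqrt(ps^2 - pr^2) is concave for ps > pr >= 0, so each tangent plane
# c * (ps0*ps - pr0*pr) lies on or above the true Weymouth flow; intersecting
# many such planes yields an outer approximation of q <= K*sqrt(ps^2 - pr^2).
for ps, pr in [(60.0, 45.0), (58.0, 40.0), (65.0, 50.0)]:
    tangent = c * (ps0 * ps - pr0 * pr)
    true_flow = K * math.sqrt(ps ** 2 - pr ** 2)
    assert tangent >= true_flow - 1e-9
```

At the discretization point itself the plane is exact (the constant terms cancel), which is why only the bilinear part appears in the constraints above.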
def add_constraint(self,lhs,sign_str,rhs,name):
ALL_ON_LHS=False
ALL_ON_RHS=False
# Only one will be executed
if sign_str=='>=':
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=lhs
cc.rhs=rhs
if ALL_ON_LHS:
cc.expr=self.model.addConstr( -cc.lhs+cc.rhs<=0.0,name=name)
elif ALL_ON_RHS:
cc.expr=self.model.addConstr(0.0<=cc.lhs-cc.rhs,name=name)
else:
cc.expr=self.model.addConstr(cc.lhs>=cc.rhs,name=name)
elif sign_str=='<=':
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=lhs
cc.rhs=rhs
if ALL_ON_LHS:
cc.expr=self.model.addConstr(-cc.rhs+cc.lhs<=0.0,name=name)
elif ALL_ON_RHS:
cc.expr=self.model.addConstr(0.0<=cc.rhs-cc.lhs,name=name)
else:
cc.expr=self.model.addConstr(cc.lhs<=cc.rhs,name=name)
elif sign_str=='==':
if defaults.REMOVE_EQUALITY:
# gEQ
self.constraints[name+'_geq']= expando()
cc=self.constraints[name+'_geq']
cc.lhs=lhs
cc.rhs=rhs
if ALL_ON_LHS:
cc.expr=self.model.addConstr(-cc.lhs+cc.rhs<=0.0,name=name+'_geq')
elif ALL_ON_RHS:
cc.expr=self.model.addConstr(0.0<=cc.lhs-cc.rhs,name=name+'_geq')
else:
cc.expr=self.model.addConstr(cc.lhs>=cc.rhs,name=name+'_geq')
self.constraints[name+'_leq']= expando()
cc=self.constraints[name+'_leq']
cc.lhs=lhs
cc.rhs=rhs
if ALL_ON_LHS:
cc.expr=self.model.addConstr(-cc.rhs+cc.lhs<=0.0,name=name+'_leq')
elif ALL_ON_RHS:
cc.expr=self.model.addConstr(0.0<=cc.rhs-cc.lhs,name=name+'_leq')
else:
cc.expr=self.model.addConstr(cc.lhs<=cc.rhs,name=name+'_leq')
else:
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=lhs
cc.rhs=rhs
if ALL_ON_LHS:
cc.expr=self.model.addConstr(cc.lhs-cc.rhs==0.0,name=name)
elif ALL_ON_RHS:
cc.expr=self.model.addConstr(0.0==cc.lhs-cc.rhs,name=name)
else:
cc.expr=self.model.addConstr(cc.lhs==cc.rhs,name=name)
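add_constraint dispatches on the sign string and, when defaults.REMOVE_EQUALITY is set, splits an equality into a `<=`/`>=` pair. A gurobipy-free sketch of the same dispatch idea, using a numeric feasibility check instead of model.addConstr (names here are hypothetical):

```python
import operator

# The same sign strings add_constraint dispatches on, mapped to comparisons.
_OPS = {'>=': operator.ge, '<=': operator.le}

def check_constraint(lhs, sign_str, rhs, tol=1e-9):
    """Numerically check a constraint; '==' is treated as the <= / >= pair
    that REMOVE_EQUALITY splits an equality into."""
    if sign_str == '==':
        return abs(lhs - rhs) <= tol
    return _OPS[sign_str](lhs, rhs)

assert check_constraint(3.0, '<=', 5.0)
assert check_constraint(5.0, '>=', 5.0)
assert check_constraint(2.0, '==', 2.0)
```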
#==============================================================================
# Gas system constraints
# NB! Gas storage constraints included as limits in variable definitions
#==============================================================================
#==============================================================================
# Day-ahead market
#==============================================================================
def _build_constraints_gasDA(self):
#--- Get Parameters
m = self.model
var = self.variables
gnodes = self.gdata.gnodes
pplines = self.gdata.pplineorder
time = self.gdata.time
gndata = self.gdata.gnodedf
gstorage = self.gdata.gstorage
gwells = self.gdata.wellsinfo.index.tolist()
bigM = self.gdata.bigM
sclim = self.gdata.sclim # Swing contract limits
if self.gdata.GasSlack=='FixInput':
gn=self.gdata.gnodeorder[0]
for t in time:
for k in sclim:
name="PresSlackMax({0},{1},{2})".format(gn,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.pr[gn,k,t]
cc.rhs=self.gdata.gnodedf['PresMax'][gn]
cc.expr = m.addConstr(cc.lhs==cc.rhs,name=name)
elif self.gdata.GasSlack=='FixOutput':
gn=self.gdata.gnodeorder[-1] # Do it for last node
for t in time:
for k in sclim:
name="PresSlackMin({0},{1},{2})".format(gn,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.pr[gn,k,t]
cc.rhs=self.gdata.gnodedf['PresMax'][gn]
cc.expr = m.addConstr(cc.lhs==cc.rhs,name=name)
elif self.gdata.GasSlack == 'ConstantOutput':
gn=self.gdata.gnodeorder[0]
for tpr, t in zip(time, time[1:]):
name="Constant_Slack({},{},{})".format(gn,t,tpr)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs= var.pr[gn,'k0',t]-var.pr[gn,'k0',tpr]
cc.rhs=np.float64(0.0)
cc.expr=m.addConstr(cc.lhs==cc.rhs,name=name)
#--- Define Pressure Limits
# Gas pressure limits
for gn in gnodes:
for t in time:
for k in sclim:
name="PresMax({0},{1},{2})".format(gn,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.pr[gn,k,t]
cc.rhs=self.gdata.gnodedf['PresMax'][gn]
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
name="PresMin({0},{1},{2})".format(gn,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.pr[gn,k,t]
cc.rhs=self.gdata.gnodedf['PresMin'][gn]
cc.expr=m.addConstr(cc.lhs>=cc.rhs,name=name)
# --- Outer Approximation of Weymouth
# Create Points for Outer Approximation
OuterApprox_Discretization(self)
# Gas flow outer approximation
if self.gdata.flow2dir == True:
u = var.u # if bi-directional flow u={0,1}
elif self.gdata.flow2dir == False:
u = dict.fromkeys(self.variables.gflow_sr, 1.0) # uni-directional flow: u fixed to 1, i.e. flow only from sending to receiving end
Kpos=self.gdata.Kpos
prd=self.gdata.prd
for pl in self.gdata.pplineorder:
ns, nr = pl
for t in time:
for k in sclim:
name = "gflow_sr_log({0},{1},{2})".format(pl,k,t).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gflow_sr[pl,k,t]-u[pl,k,t]*bigM
cc.rhs=np.float64(0.0)
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
for vs, vr in Kpos[pl].keys():
name = "gflow_sr_lim({0},{1},{2},{3},{4})".format(pl,k,t,vs,vr).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gflow_sr[pl,k,t]
cc.rhs=Kpos[pl][vs,vr] * prd[ns][vs] * var.pr[ns,k,t] - \
Kpos[pl][vs,vr] * prd[nr][vr] * var.pr[nr,k,t] \
+ bigM * (1.0 - u[pl,k,t])
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
# if bi-directional flow, add flow constraints from the receiving to the sending end
if self.gdata.flow2dir == True:
Kneg=self.gdata.Kneg
prd=self.gdata.prd
for pl in self.gdata.pplineorder:
ns, nr = pl
for t in time:
for k in sclim:
name = "gflow_rs_log({0},{1},{2})".format(pl,k,t).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gflow_rs[pl,k,t]
cc.rhs=(1.0-u[pl,k,t])*bigM
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
for vs, vr in self.gdata.Kneg[pl].keys():
name = "gflow_rs_lim({0},{1},{2},{3},{4})".format(pl,k,t,vs,vr).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gflow_rs[pl,k,t]
cc.rhs= Kneg[pl][vs,vr] * prd[nr][vr] * var.pr[nr,k,t] - \
Kneg[pl][vs,vr] * prd[ns][vs] * var.pr[ns,k,t] +\
bigM * u[pl,k,t]
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
#--- Gas well maximum production
for gw in gwells:
for k in sclim:
for t in time:
name="gprod_max({0},{1},{2})".format(gw,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gprod[gw,k,t]
cc.rhs=self.gdata.wellsinfo['MaxProd'][gw]
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
#--- Compressors
# Compressors - Limit the outlet pressure of compressor to be less than
# full compression (max) and above the inlet pressure (min)
# Compressors
for pl in self.gdata.pplineactive:
ns, nr = pl
for t in time:
for k in sclim:
name = 'compr_max({0},{1},{2})'.format(pl,k,t).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.pr[nr,k,t]
cc.rhs=self.gdata.pplinecr[pl] * var.pr[ns,k,t]
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
name = 'compr_min({0},{1},{2})'.format(pl,k,t).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.pr[ns,k,t]
cc.rhs=var.pr[nr,k,t]
cc.expr=m.addConstr(cc.lhs<=cc.rhs,name=name)
#--- Line Pack Def
# Line-pack constraints
for pl in pplines:
ns, nr = pl
for t in time:
for k in sclim:
name='gflow_sr_io({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs= var.gflow_sr[pl,k,t]
rhs=(var.qin_sr[pl,k,t] + var.qout_sr[pl,k,t]) * 0.5
add_constraint(self,-lhs,'==',-rhs,name=name)
# Separate for debugging purposes
for pl in pplines:
ns, nr = pl
for k in sclim:
for t in time:
name= 'lpack_def({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs= var.lpack[pl,k,t]
rhs= self.gdata.pplinels[pl]*0.5*(var.pr[ns,k,t]+var.pr[nr,k,t])
add_constraint(self,-lhs,'==',-rhs,name=name)
'''
NB! gflow_rs_io = {} # Gas flow (R to S) only if bi-directional flow.
This needs to be included in the line-pack definition constraint
'''
if self.gdata.flow2dir == True:
# Gas flow (R to S) 'decomposition' to IN/OUT
for pl in pplines:
ns, nr = pl
for t in time:
for k in sclim:
name='gflow_rs_io({0},{1},{2})'.format(pl,k,t).replace(" ","")
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gflow_rs[pl,k,t]
cc.rhs=(var.qin_rs[pl,k,t] + var.qout_rs[pl,k,t]) * 0.5
cc.expr=m.addConstr(cc.lhs==cc.rhs,name=name)
for tpr, t in zip(time, time[1:]):
k = 'k0' # Line-pack storage defined for 'central case' k0
name='line_store({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs= var.lpack[pl,k,t]
rhs=var.lpack[pl,k,tpr] + var.qin_sr[pl,k,t] - var.qout_sr[pl,k,t] \
+ var.qin_rs[pl,k,t] - var.qout_rs[pl,k,t]
add_constraint(self,-lhs,'==',-rhs,name=name)
t = time[0]
k= 'k0' # Line-pack storage defined for 'central case' k0
name='line_store({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs= var.lpack[pl,k,t]
rhs= self.gdata.pplinelsini[pl] + var.qin_sr[pl,k,t] - var.qout_sr[pl,k,t] \
+ var.qin_rs[pl,k,t] - var.qout_rs[pl,k,t]
add_constraint(self,-lhs,'==',-rhs,name=name)
elif self.gdata.flow2dir == False:
kappa = 'k0' # Line-pack storage defined for 'central case' k0
for pl in pplines:
ns, nr = pl
for k in sclim: # For every Scenario
# For all time steps (except for t=0)
for tpr, t in zip(time, time[1:]):
name='line_store({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs=var.lpack[pl,k,t]
rhs=var.lpack[pl,kappa,tpr] + var.qin_sr[pl,k,t] - var.qout_sr[pl,k,t]
add_constraint(self,-lhs,'==',-rhs,name=name)
# Time =0 Either Steady State or Transient
t = time[0]
# If pipelines have linepack storage then add initial value
if self.gdata.pplinels[pl] >0 :
name='line_store({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs= var.lpack[pl,k,t]
rhs= self.gdata.pplinelsini[pl] + var.qin_sr[pl,k,t] - var.qout_sr[pl,k,t]
add_constraint(self,-lhs,'==',-rhs,name=name)
# Otherwise system is in steady state and there is no initial linepack
else:
name='line_store({0},{1},{2})'.format(pl,k,t).replace(" ","")
lhs= var.lpack[pl,k,t]
rhs= var.qin_sr[pl,k,t] - var.qout_sr[pl,k,t]
add_constraint(self,-lhs,'==',-rhs,name=name)
#--- Linepack End of Day
# At the end of the optimization horizon the total linepack must be at least its initial value (only for case k0)
# Only need this constraint if linepack parameters are defined
if sum(self.gdata.pplinels.values()) >0:
k = 'k0'
name='lpack_end'
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=gb.quicksum(var.lpack[pl, k, time[-1]] for pl in pplines)
cc.rhs=gb.quicksum(self.gdata.pplinelsini[pl] for pl in pplines)
cc.expr=m.addConstr(cc.lhs>=cc.rhs,name=name)
#--- Gas Storage
for gs in self.gdata.gstorage:
for t in time:
for k in sclim:
name='gsinMax({0},{1},{2})'.format(gs,t,k)
lhs=var.gsin[gs,k,t]
rhs=self.gdata.gstorageinfo['MaxInFlow'][gs]
add_constraint(self,lhs,'<=',rhs,name)
name='gsoutMax({0},{1},{2})'.format(gs,t,k)
lhs=var.gsout[gs,k,t]
rhs=self.gdata.gstorageinfo['MaxOutFlow'][gs]
add_constraint(self,lhs,'<=',rhs,name)
name='gstore_max({0},{1},{2})'.format(gs,t,k)
lhs=var.gstore[gs,k,t]
rhs=self.gdata.gstorageinfo['MaxStore'][gs]
add_constraint(self,lhs,'<=',rhs,name)
name='gstore_min({0},{1},{2})'.format(gs,t,k)
lhs=var.gstore[gs,k,t]
rhs=self.gdata.gstorageinfo['MinStore'][gs]
add_constraint(self,lhs,'>=',rhs,name)
for gs in gstorage:
for k in sclim:
for tpr, t in zip(time, time[1:]):
name='gstor_def({0},{1},{2})'.format(gs,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gstore[gs,k,t]
cc.rhs=var.gstore[gs,k,tpr]+var.gsin[gs,k,t]-var.gsout[gs,k,t]
cc.expr=m.addConstr(cc.lhs==cc.rhs,name=name)
t = time[0]
name='gstor_def({0},{1},{2})'.format(gs,k,t)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gstore[gs,k,t]
cc.rhs=self.gdata.gstorageinfo['IniStore'][gs]+var.gsin[gs,k,t]-var.gsout[gs,k,t]
cc.expr=m.addConstr(cc.lhs==cc.rhs,name=name)
k = 'k0' # Gas storage content defined for 'central case' k0
name='gs_end({0},{1})'.format(gs,k)
self.constraints[name]= expando()
cc=self.constraints[name]
cc.lhs=var.gstore[gs, k, time[-1]]
cc.rhs=self.gdata.gstorageinfo['IniStore'][gs]
cc.expr=m.addConstr(cc.lhs>=cc.rhs,name=name)
#--- Nodal Gas Balance
###########################################################################
# Swing contracts can be defined per gas node - here defined per GFPP
# ... we need the GFPP node mapping and efficiencies
###########################################################################
Pgen = self.gdata.Pgen
PgenSC = self.gdata.PgenSC
RSC = self.gdata.RSC
HR = self.gdata.generatorinfo.HR
# if bi-directional flow add in gas balance flow from Receiving to Sending end
if self.gdata.flow2dir == True:
for k in sclim:
for gn in gnodes:
for t in time:
name='gas_balance({0},{1},{2})'.format(gn,k,t)
lhs=gb.quicksum(var.gprod[gw,k,t] for gw in self.gdata.Map_Gn2Gp[gn]) +\
gb.quicksum(var.qout_sr[pl,k,t] - var.qin_rs[pl,k,t] for pl in self.gdata.nodetoinpplines[gn]) +\
gb.quicksum(var.qout_rs[pl,k,t] - var.qin_sr[pl,k,t] for pl in self.gdata.nodetooutpplines[gn])+\
gb.quicksum(var.gsout[gs,k,t] - var.gsin[gs,k,t] for gs in self.gdata.Map_Gn2Gs[gn])
rhs=self.gdata.gasload[gn][t]+ gb.quicksum((Pgen[gen][t]+ \
PgenSC[gen][t]+RSC[gen,k,t])*self.gdata.generatorinfo.HR[gen] for gen in self.gdata.gfpp if gen in self.gdata.Map_Gn2Eg[gn] )
add_constraint(self,-lhs,'==',-rhs,name=name)
elif self.gdata.flow2dir == False:
for t in time:
for gn in gnodes:
for k in sclim:
name='gas_balance({0},{1},{2})'.format(gn,k,t)
lhs= gb.quicksum(var.gprod[gw,k,t] for gw in self.gdata.Map_Gn2Gp[gn]) \
+ gb.quicksum(var.qout_sr[pl,k,t] for pl in self.gdata.nodetoinpplines[gn])\
- gb.quicksum(var.qin_sr[pl,k,t] for pl in self.gdata.nodetooutpplines[gn]) \
+ gb.quicksum(var.gsout[gs,k,t] - var.gsin[gs,k,t] for gs in self.gdata.Map_Gn2Gs[gn])
rhs= self.gdata.gasload[gn][t] + gb.quicksum((Pgen[gen][t]+ \
PgenSC[gen][t]+RSC[gen,k,t])*HR[gen] for gen in self.gdata.gfpp if gen in self.gdata.Map_Gn2Eg[gn] )
add_constraint(self,-lhs,'==',-rhs,name=name)
m.update()
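The line-pack constraints above say that pipeline flow is the average of inlet and outlet flow, and that stored gas carries over between periods. A tiny pure-Python illustration of that bookkeeping with hypothetical single-pipeline numbers:

```python
def linepack_series(lp_ini, qin, qout):
    """Apply line_store period by period: lpack_t = lpack_{t-1} + qin_t - qout_t."""
    lp, series = lp_ini, []
    for qi, qo in zip(qin, qout):
        lp = lp + qi - qo
        series.append(lp)
    return series

qin = [10.0, 12.0, 8.0]    # hypothetical inlet flows per period
qout = [9.0, 11.0, 10.0]   # hypothetical outlet flows per period
lpack = linepack_series(100.0, qin, qout)
flows = [(qi + qo) * 0.5 for qi, qo in zip(qin, qout)]  # gflow_sr_io definition
print(lpack, flows)
```

Here the final linepack returns to the initial 100, which satisfies the lpack_end requirement that end-of-day linepack be at least its initial value.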
#==============================================================================
# Real-time market
#==============================================================================
def _build_constraints_gasRT(self,dispatchGasDA,dispatchElecRT):
m = self.model
var = self.variables
gnodes = self.gdata.gnodes
pplines = self.gdata.pplineorder
time = self.gdata.time
gndata = self.gdata.gnodedf
gstorage = self.gdata.gstorage
gwells = self.gdata.wellsinfo.index.tolist()
bigM = self.gdata.bigM
scenarios = self.gdata.scenarios
gprod = dispatchGasDA.gprod
qin_sr = dispatchGasDA.qin_sr
qout_sr = dispatchGasDA.qout_sr
if self.gdata.GasSlack=='FixInput':
gn=self.gdata.gnodeorder[0]
for t in time:
for s in scenarios:
name="PresSlackMax({0},{1},{2})".format(gn,s,t)
lhs=var.pr_rt[gn,s,t]
rhs=self.gdata.gnodedf['PresMax'][gn]
add_constraint(self,lhs,'==',rhs,name)
elif self.gdata.GasSlack=='FixOutput':
gn=self.gdata.gnodeorder[-1] # Do it for last node
for t in time:
for s in scenarios:
name="PresSlackMin({0},{1},{2})".format(gn,s,t)
lhs=var.pr_rt[gn,s,t]
rhs=self.gdata.gnodedf['PresMax'][gn]
add_constraint(self,lhs,'==',rhs,name)
elif self.gdata.GasSlack == 'ConstantOutput':
gn=self.gdata.gnodeorder[0]
for tpr, t in zip(time, time[1:]):
for s in scenarios:
name="Constant_Slack({},{},{})".format(gn,t,tpr)
lhs= var.pr_rt[gn,s,t]-var.pr_rt[gn,s,tpr]
rhs=np.float64(0.0)
add_constraint(self,lhs,'==',rhs,name)
#--- Gas pressure limits
for gn in gnodes:
for t in time:
for s in scenarios:
name="PresMax({0},{1},{2})".format(gn,s,t)
lhs=var.pr_rt[gn,s,t]
rhs=self.gdata.gnodedf['PresMax'][gn]
add_constraint(self,lhs,'<=',rhs,name)
name="PresMin({0},{1},{2})".format(gn,s,t)
lhs=var.pr_rt[gn,s,t]
rhs=self.gdata.gnodedf['PresMin'][gn]
add_constraint(self,lhs,'>=',rhs,name)
# for t in time:
# for s in scenarios:
# name='Send_geq_Receive_rt{0}{1}'.format(t,s)
# lhs=var.pr_rt['ng101',s,t]
# rhs=var.pr_rt['ng102',s,t]
# add_constraint(self,lhs,'>=',rhs,name)
#
#--- Outer Approximation
# Gas flow outer approximation
OuterApprox_Discretization(self)
if self.gdata.flow2dir == True:
u = var.u # if bi-directional flow u={0,1}
elif self.gdata.flow2dir == False:
u = dict.fromkeys(self.variables.gflow_sr_rt, 1.0) # uni-directional flow: u fixed to 1, i.e. flow only from sending to receiving end
Kpos=self.gdata.Kpos
prd=self.gdata.prd
for pl in self.gdata.pplineorder:
ns, nr = pl
for t in time:
for s in scenarios:
# name = "gflow_sr_rt_logical_BIG_M({0},{1},{2})".format(pl,s,t).replace(" ","")
# lhs=var.gflow_sr_rt[pl,s,t]-u[pl,s,t]*bigM
# rhs=np.float64(0.0)
# add_constraint(self,lhs,'<=',rhs,name)
for vs, vr in Kpos[pl].keys():
name = "gflow_sr_rt_lim_approx_weymouth({0},{1},{2},{3},{4})".format(pl,s,t,vs,vr).replace(" ","")
lhs=var.gflow_sr_rt[pl,s,t]
rhs=Kpos[pl][vs,vr] * prd[ns][vs] * var.pr_rt[ns,s,t] - \
Kpos[pl][vs,vr] * prd[nr][vr] * var.pr_rt[nr,s,t]
#+ bigM * (1.0 - u[pl,s,t])
add_constraint(self,lhs,'<=',rhs,name)
# if bi-directional flow, add flow constraints from the receiving to the sending end
if self.gdata.flow2dir == True:
Kneg=self.gdata.Kneg
prd=self.gdata.prd
for pl in self.gdata.pplineorder:
ns, nr = pl
for t in time:
for s in scenarios:
name = "gflow_rs_rt_logical_BIG_M({0},{1},{2})".format(pl,s,t).replace(" ","")
lhs=var.gflow_rs_rt[pl,s,t]
rhs=(1.0-u[pl,s,t])*bigM
add_constraint(self,lhs,'<=',rhs,name)
for vs, vr in self.gdata.Kneg[pl].keys():
name = "gflow_rs_rt_lim_approx_weymouth({0},{1},{2},{3},{4})".format(pl,s,t,vs,vr).replace(" ","")
lhs=var.gflow_rs_rt[pl,s,t]
rhs= Kneg[pl][vs,vr] * prd[nr][vr] * var.pr_rt[nr,s,t] - \
Kneg[pl][vs,vr] * prd[ns][vs] * var.pr_rt[ns,s,t] +\
bigM * u[pl,s,t]
add_constraint(self,lhs,'<=',rhs,name)
# Gas well maximum production
for gw in gwells:
for s in scenarios:
for t in time:
name="gprodUp_max({0},{1},{2})".format(gw,s,t)
lhs= var.gprodUp[gw,s,t]
rhs=self.gdata.wellsinfo['MaxProd'][gw]-gprod[gw][t]['k0']
add_constraint(self,lhs,'<=',rhs,name)
name="gprodDn_max({0},{1},{2})".format(gw,s,t)
lhs= var.gprodDn[gw,s,t]
rhs=gprod[gw][t]['k0']
add_constraint(self,lhs,'<=',rhs,name)
# Compressors - Limit the outlet pressure of compressor to be less than
# full compression (max) and above the inlet pressure (min)
# Compression rate definition - Active pipelines
for pl in self.gdata.pplineactive:
ns, nr = pl
for t in time:
for s in scenarios:
name = 'compr_rt_max({0},{1},{2})'.format(pl,s,t).replace(" ","")
lhs= var.pr_rt[nr,s,t] -self.gdata.pplinecr[pl] * var.pr_rt[ns,s,t]
rhs= np.float64(0.0)
add_constraint(self,lhs,'<=',rhs,name)
name = 'compr_rt_min({0},{1},{2})'.format(pl,s,t).replace(" ","")
lhs=var.pr_rt[ns,s,t] -var.pr_rt[nr,s,t]
rhs= np.float64(0.0)
add_constraint(self,lhs,'<=',rhs,name)
# Line-pack constraints
for pl in pplines:
ns, nr = pl
for t in time:
for s in scenarios:
name='gflow_sr_io_rt({0},{1},{2})'.format(pl,s,t).replace(" ","")
lhs=var.gflow_sr_rt[pl,s,t]-(var.qin_sr_rt[pl,s,t] + var.qout_sr_rt[pl,s,t]) * 0.5
rhs=np.float64(0.0)
add_constraint(self,-lhs,'==',-rhs,name)
name = 'lpack_def_rt({0},{1},{2})'.format(pl,s,t).replace(" ","")
lhs=var.lpack_rt[pl,s,t]-self.gdata.pplinels[pl]*0.5*(var.pr_rt[ns,s,t]+var.pr_rt[nr,s,t])
rhs= np.float64(0.0)
add_constraint(self,-lhs,'==',-rhs,name)
'''
NB! gflow_rs_io = {} # Gas flow (R to S) only if bi-directional flow.
This needs to be included in the line-pack definition constraint
'''
for pl in pplines:
ns, nr = pl
for s in scenarios:
for tpr, t in zip(time, time[1:]):
name='line_store_rt({0},{1},{2})'.format(pl,s,t).replace(" ","")
lhs=var.lpack_rt[pl,s,t] -var.lpack_rt[pl,s,tpr] - var.qin_sr_rt[pl,s,t] + var.qout_sr_rt[pl,s,t]
rhs= np.float64(0.0)
add_constraint(self,-lhs,'==',-rhs,name)
t = time[0]
name='line_store_rt({0},{1},{2})'.format(pl,s,t).replace(" ","")
lhs=var.lpack_rt[pl,s,t] - var.qin_sr_rt[pl,s,t] + var.qout_sr_rt[pl,s,t]
rhs= self.gdata.pplinelsini[pl]
add_constraint(self,-lhs,'==',-rhs,name)
# for t in time:
# name='ss{0},{1},{2}'.format(pl,s,t)
# lhs=- var.qin_sr_rt[pl,s,t] + var.qout_sr_rt[pl,s,t]
# rhs=np.float64(0.0)
# add_constraint(self,lhs,'==',rhs,name)
for s in scenarios:
name='lpack_end_max({0})'.format(s)
lhs=gb.quicksum(var.lpack_rt[pl, s, time[-1]] for pl in pplines)
rhs= (1.0+defaults.FINAL_LP_DEV)*gb.quicksum(self.gdata.pplinelsini[pl] for pl in pplines)
add_constraint(self,lhs,'<=',rhs,name)
name='lpack_end_min({0})'.format(s)
lhs=gb.quicksum(var.lpack_rt[pl, s, time[-1]] for pl in pplines)
rhs= (1.0-defaults.FINAL_LP_DEV)*gb.quicksum(self.gdata.pplinelsini[pl] for pl in pplines)
add_constraint(self,rhs,'<=',lhs,name)
#
# Gas Storage
for gs in self.gdata.gstorage:
for t in time:
for s in scenarios:
name='gsinMax_rt({0},{1},{2})'.format(gs,t,s)
lhs=var.gsin_rt[gs,s,t]
rhs=self.gdata.gstorageinfo['MaxInFlow'][gs]
add_constraint(self,lhs,'<=',rhs,name)
name='gsoutMax_rt({0},{1},{2})'.format(gs,t,s)
lhs=var.gsout_rt[gs,s,t]
rhs=self.gdata.gstorageinfo['MaxOutFlow'][gs]
add_constraint(self,lhs,'<=',rhs,name)
name='gstore_max_rt({0},{1},{2})'.format(gs,t,s)
lhs=var.gstore_rt[gs,s,t]
rhs=self.gdata.gstorageinfo['MaxStore'][gs]
add_constraint(self,lhs,'<=',rhs,name)
name='gstore_min_rt({0},{1},{2})'.format(gs,t,s)
lhs=var.gstore_rt[gs,s,t]
rhs=self.gdata.gstorageinfo['MinStore'][gs]
add_constraint(self,lhs,'>=',rhs,name)
for gs in gstorage:
for s in scenarios:
for tpr, t in zip(time, time[1:]):
name='gstor_def_rt({0},{1},{2})'.format(gs,s,t)
lhs=var.gstore_rt[gs,s,t]
rhs= var.gstore_rt[gs,s,tpr]+var.gsin_rt[gs,s,t]-var.gsout_rt[gs,s,t]
add_constraint(self,-lhs,'==',-rhs,name)
t = time[0]
name='gstor_def_rt({0},{1},{2})'.format(gs,s,t)
lhs=var.gstore_rt[gs,s,t]
rhs= self.gdata.gstorageinfo['IniStore'][gs]+var.gsin_rt[gs,s,t]-var.gsout_rt[gs,s,t]
add_constraint(self,-lhs,'==',-rhs,name)
# At the end of the horizon the storage content must be at least its initial value
name='gstor_end({0},{1})'.format(gs,s)
lhs=var.gstore_rt[gs, s, time[-1]]
rhs= self.gdata.gstorageinfo['IniStore'][gs]
add_constraint(self,lhs,'>=',rhs,name)
# Gas shedding
for s in scenarios:
for gn in gnodes:
for t in time:
name = 'gas_shed_rt({0},{1},{2})'.format(gn,s,t)
lhs= var.gshed_rt[gn,s,t]
rhs= self.gdata.gasload[gn][t]
add_constraint(self,lhs,'<=',rhs,name)
# Nodal Gas Balance : Real-time
# Here modeled only for 'uni-directional' gas flow (from S to R)
###########################################################################
# Swing contracts can be defined per gas node - here defined per GFPP
# ... we need the GFPP node mapping and efficiencies
###########################################################################
# Up/down reserve deployment by GFPP
RUp_gfpp = dispatchElecRT.RUpSC.add(dispatchElecRT.RUp.loc[:, self.gdata.gfpp])
RDn_gfpp = dispatchElecRT.RDnSC.add(dispatchElecRT.RDn.loc[:, self.gdata.gfpp])
Rgfpp = RUp_gfpp - RDn_gfpp
# Day-ahead gas flows
qin_sr = dispatchGasDA.qin_sr
qout_sr = dispatchGasDA.qout_sr
HR=self.gdata.generatorinfo.HR
# Day-ahead gas storage schedule
gsin = dispatchGasDA.gsin
gsout = dispatchGasDA.gsout
for s in scenarios:
for gn in gnodes:
for t in time:
name='gas_balance({0},{1},{2})'.format(gn,s,t)
lhs= gb.quicksum(( var.gprodUp[gw,s,t] - var.gprodDn[gw,s,t]) for gw in self.gdata.Map_Gn2Gp[gn]) \
- gb.quicksum(( qout_sr[pl][t]['k0'] - var.qout_sr_rt[pl,s,t]) for pl in self.gdata.nodetoinpplines[gn]) \
+ gb.quicksum(( qin_sr[pl][t]['k0'] - var.qin_sr_rt[pl,s,t] ) for pl in self.gdata.nodetooutpplines[gn]) \
+ gb.quicksum(( var.gsout_rt[gs,s,t] - gsout[gs][t]['k0'] + gsin[gs][t]['k0'] - var.gsin_rt[gs,s,t]) for gs in self.gdata.Map_Gn2Gs[gn]) \
- gb.quicksum(Rgfpp[gen][t,s]*(HR[gen]) for gen in self.gdata.gfpp if gen in self.gdata.Map_Gn2Eg[gn]) \
+ var.gshed_rt[gn,s,t] # Load shedding
rhs = np.float64(0.0)
add_constraint(self,-lhs,'==',-rhs,name)
m.update()
| 38.693206 | 166 | 0.469807 | 4,700 | 36,449 | 3.574894 | 0.072979 | 0.071242 | 0.012856 | 0.025711 | 0.846744 | 0.810023 | 0.786394 | 0.765564 | 0.753303 | 0.733722 | 0 | 0.015588 | 0.368158 | 36,449 | 942 | 167 | 38.693206 | 0.713982 | 0.128728 | 0 | 0.660746 | 0 | 0 | 0.058513 | 0.039148 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007105 | false | 0.005329 | 0.008881 | 0 | 0.017762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7e10b64fdea6ed79f0a6e5b48c6b890f0791f12d | 71 | py | Python | paleocirc/__init__.py | xFranciB/PaleoCirc | bb95abf973342c004c4f84516092a686072bc12e | [
"MIT"
] | 7 | 2021-02-16T15:41:59.000Z | 2021-12-22T19:56:23.000Z | paleocirc/__init__.py | xFranciB/PaleoCirc | bb95abf973342c004c4f84516092a686072bc12e | [
"MIT"
] | 5 | 2021-04-16T13:36:31.000Z | 2021-12-12T17:43:13.000Z | paleocirc/__init__.py | xFranciB/paleocirc | bb95abf973342c004c4f84516092a686072bc12e | [
"MIT"
] | null | null | null | from .circolari import Circolari
from .circolariasync import Circolari | 35.5 | 37 | 0.859155 | 8 | 71 | 7.625 | 0.5 | 0.491803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 37 | 35.5 | 0.968254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd68b737ef778b3db9969f5d85a34c2aceac54f0 | 33 | py | Python | src/apis/__init__.py | wangyingbo/sspanel-mining | 1b1d7caf195388e1d9144979b740c52b7163f477 | [
"Apache-2.0"
] | 1 | 2021-11-03T12:09:27.000Z | 2021-11-03T12:09:27.000Z | src/apis/__init__.py | starkiller43/sspanel-mining | 27a7dd1fafcf551808245a69f2f449d5d52a0df7 | [
"Apache-2.0"
] | null | null | null | src/apis/__init__.py | starkiller43/sspanel-mining | 27a7dd1fafcf551808245a69f2f449d5d52a0df7 | [
"Apache-2.0"
] | null | null | null | from .v2rss_api import v2rss_api
| 16.5 | 32 | 0.848485 | 6 | 33 | 4.333333 | 0.666667 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.121212 | 33 | 1 | 33 | 33 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fd6aa81e5fa5469d3c5161c0f5454a706b080667 | 42 | py | Python | gym_npm/envs/__init__.py | a-nooj/gym-npm | 650f49c25affef7cbe9d577106a860bebcb3ce5e | [
"MIT"
] | null | null | null | gym_npm/envs/__init__.py | a-nooj/gym-npm | 650f49c25affef7cbe9d577106a860bebcb3ce5e | [
"MIT"
] | null | null | null | gym_npm/envs/__init__.py | a-nooj/gym-npm | 650f49c25affef7cbe9d577106a860bebcb3ce5e | [
"MIT"
] | null | null | null | from gym_npm.envs.poke_env import PokeEnv
| 21 | 41 | 0.857143 | 8 | 42 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd944b87d6451d4118df7269e6fae293180e4976 | 157 | py | Python | cqrs_template/authentication/__init__.py | wcomartin/cqrs-feature-template | bc9e5c2598f0290c110c5e199f8de056d6641d9a | [
"MIT"
] | null | null | null | cqrs_template/authentication/__init__.py | wcomartin/cqrs-feature-template | bc9e5c2598f0290c110c5e199f8de056d6641d9a | [
"MIT"
] | null | null | null | cqrs_template/authentication/__init__.py | wcomartin/cqrs-feature-template | bc9e5c2598f0290c110c5e199f8de056d6641d9a | [
"MIT"
] | null | null | null | from flask import Blueprint
authentication = Blueprint('authentication', __name__)
from .features.auth_login import *
from .features.register_user import * | 26.166667 | 54 | 0.815287 | 18 | 157 | 6.777778 | 0.611111 | 0.377049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10828 | 157 | 6 | 55 | 26.166667 | 0.871429 | 0 | 0 | 0 | 0 | 0 | 0.088608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
fdc279afff0c8589fbc2f06edb9671098bb6fb6f | 1,079 | py | Python | autolens/aggregator/__init__.py | rakaar/PyAutoLens | bc140c5d196c426092c1178b8abfa492c6fab859 | [
"MIT"
] | null | null | null | autolens/aggregator/__init__.py | rakaar/PyAutoLens | bc140c5d196c426092c1178b8abfa492c6fab859 | [
"MIT"
] | null | null | null | autolens/aggregator/__init__.py | rakaar/PyAutoLens | bc140c5d196c426092c1178b8abfa492c6fab859 | [
"MIT"
] | null | null | null | from autolens.aggregator.aggregator import grid_search_result_as_array
from autolens.aggregator.aggregator import (
grid_search_log_evidences_as_array_from_grid_search_result,
grid_search_subhalo_masses_as_array_from_grid_search_result,
grid_search_subhalo_centres_as_array_from_grid_search_result,
)
from autolens.aggregator.aggregator import fit_imaging_from_agg_obj
from autolens.aggregator.aggregator import (
fit_imaging_generator_from_aggregator as FitImaging,
)
from autolens.aggregator.aggregator import (
fit_interferometer_generator_from_aggregator as FitInterferometer,
)
from autolens.aggregator.aggregator import masked_imaging_from_agg_obj
from autolens.aggregator.aggregator import (
masked_imaging_generator_from_aggregator as MaskedImaging,
)
from autolens.aggregator.aggregator import (
masked_interferometer_generator_from_aggregator as MaskedInterferometer,
)
from autolens.aggregator.aggregator import tracer_from_agg_obj
from autolens.aggregator.aggregator import tracer_generator_from_aggregator as Tracer
| 46.913043 | 86 | 0.861909 | 133 | 1,079 | 6.533835 | 0.195489 | 0.13809 | 0.253165 | 0.368239 | 0.851554 | 0.704258 | 0.517837 | 0.283084 | 0.227848 | 0 | 0 | 0 | 0.1038 | 1,079 | 22 | 87 | 49.045455 | 0.898656 | 0 | 0 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.454545 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fdda6b7d629d60335eedd79d09665c38fe15cdcc | 115 | py | Python | molecool/molecool_io/__init__.py | hgandhi2411/Molecool | aa2471241abcbe74443b6d764c7f7874645673aa | [
"BSD-3-Clause"
] | null | null | null | molecool/molecool_io/__init__.py | hgandhi2411/Molecool | aa2471241abcbe74443b6d764c7f7874645673aa | [
"BSD-3-Clause"
] | 1 | 2020-02-05T19:19:57.000Z | 2020-02-05T19:19:57.000Z | molecool/molecool_io/__init__.py | hgandhi2411/Molecool | aa2471241abcbe74443b6d764c7f7874645673aa | [
"BSD-3-Clause"
] | null | null | null | """
Import functions for molecool_io subpackage
"""
from .xyz import open_xyz, write_xyz
from .pdb import open_pdb | 19.166667 | 43 | 0.782609 | 18 | 115 | 4.777778 | 0.611111 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 6 | 44 | 19.166667 | 0.868687 | 0.373913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3053c014a29507ee6e8930f738925e516a97b42 | 31,054 | py | Python | goutdotcom/ultaid/views.py | Spiewart/goutdotcom | 0916155732a72fcb8c8a2fb0f4dd81efef618af8 | [
"MIT"
] | null | null | null | goutdotcom/ultaid/views.py | Spiewart/goutdotcom | 0916155732a72fcb8c8a2fb0f4dd81efef618af8 | [
"MIT"
] | null | null | null | goutdotcom/ultaid/views.py | Spiewart/goutdotcom | 0916155732a72fcb8c8a2fb0f4dd81efef618af8 | [
"MIT"
] | null | null | null | from django.contrib.auth.mixins import LoginRequiredMixin
from django.shortcuts import redirect
from django.views.generic import CreateView, DetailView, UpdateView
from ..history.forms import (
AllopurinolHypersensitivitySimpleForm,
CKDForm,
ErosionsForm,
FebuxostatHypersensitivitySimpleForm,
HeartAttackSimpleForm,
OrganTransplantForm,
StrokeSimpleForm,
TophiForm,
XOIInteractionsSimpleForm,
)
from ..history.models import (
CKD,
AllopurinolHypersensitivity,
Erosions,
FebuxostatHypersensitivity,
HeartAttack,
OrganTransplant,
Stroke,
Tophi,
XOIInteractions,
)
from ..ppxaid.models import PPxAid
from ..ult.models import ULT
from .forms import ULTAidForm
from .models import ULTAid
class ULTAidCreate(CreateView):
model = ULTAid
form_class = ULTAidForm
CKD_form_class = CKDForm
erosions_form_class = ErosionsForm
XOI_interactions_form_class = XOIInteractionsSimpleForm
organ_transplant_form_class = OrganTransplantForm
allopurinol_hypersensitivity_form_class = AllopurinolHypersensitivitySimpleForm
febuxostat_hypersensitivity_form_class = FebuxostatHypersensitivitySimpleForm
heartattack_form_class = HeartAttackSimpleForm
stroke_form_class = StrokeSimpleForm
tophi_form_class = TophiForm
def form_valid(self, form):
# Check if POST has 'ult' kwarg and assign ULTAid ult OnetoOne related object based on pk='ult'
if self.kwargs.get("ult"):
form.instance.ult = ULT.objects.get(pk=self.kwargs.get("ult"))
form.instance.erosions = form.instance.ult.erosions
form.instance.tophi = form.instance.ult.tophi
# Check if the ULT calculator() result was Indicated or Conditional and set the ULTAid need field to True if so, False otherwise
if form.instance.ult.calculator() == "Indicated" or form.instance.ult.calculator() == "Conditional":
form.instance.need = True
else:
form.instance.need = False
# If the user is not authenticated and created this ULTAid from a ULT, reuse the ULT's CKD instance instead of creating a new one (the corresponding forms are removed from the form via kwargs and layout objects)
if self.request.user.is_authenticated == False:
if form.instance.ult.ckd.value == False:
form.instance.ckd = form.instance.ult.ckd
if self.request.user.is_authenticated:
form.instance.user = self.request.user
return super().form_valid(form)
else:
return super().form_valid(form)
def get(self, request, *args, **kwargs):
# Checks if user is logged in, if they have already created a ULTAid, and redirects to UpdateView if so
if self.request.user.is_authenticated:
try:
user_ULTAid = self.model.objects.get(user=self.request.user)
except self.model.DoesNotExist:
user_ULTAid = None
if user_ULTAid:
return redirect("ultaid:update", pk=self.model.objects.get(user=self.request.user).pk)
else:
return super().get(request, *args, **kwargs)
else:
return super().get(request, *args, **kwargs)
def get_context_data(self, **kwargs):
context = super(ULTAidCreate, self).get_context_data(**kwargs)
## IS IF NOT IN CONTEXT STATEMENT NECESSARY? TEST IT BY DELETING
## WHAT IF USER IS NOT LOGGED IN? CHECK CONTEXT
# Add ULTAid OnetoOne related model objects from the MedicalProfile for the logged in User
if self.request.user.is_anonymous == False:
if "CKD_form" not in context:
context["CKD_form"] = self.CKD_form_class(instance=self.request.user.medicalprofile.CKD)
if "erosions_form" not in context:
context["erosions_form"] = self.erosions_form_class(instance=self.request.user.medicalprofile.erosions)
if "XOI_interactions_form" not in context:
context["XOI_interactions_form"] = self.XOI_interactions_form_class(
instance=self.request.user.medicalprofile.XOI_interactions
)
if "organ_transplant_form" not in context:
context["organ_transplant_form"] = self.organ_transplant_form_class(
instance=self.request.user.medicalprofile.organ_transplant
)
if "allopurinol_hypersensitivity_form" not in context:
context["allopurinol_hypersensitivity_form"] = self.allopurinol_hypersensitivity_form_class(
instance=self.request.user.medicalprofile.allopurinol_hypersensitivity
)
if "febuxostat_hypersensitivity_form" not in context:
context["febuxostat_hypersensitivity_form"] = self.febuxostat_hypersensitivity_form_class(
instance=self.request.user.medicalprofile.febuxostat_hypersensitivity
)
if "heartattack_form" not in context:
context["heartattack_form"] = self.heartattack_form_class(
instance=self.request.user.medicalprofile.heartattack
)
if "stroke_form" not in context:
context["stroke_form"] = self.stroke_form_class(instance=self.request.user.medicalprofile.stroke)
if "tophi_form" not in context:
context["tophi_form"] = self.tophi_form_class(instance=self.request.user.medicalprofile.tophi)
# Check if user is logged in, pass ULT results to ULTAid view/context for JQuery evaluation to update form fields
#### IS THIS NEEDED FOR POST?
if self.request.user.is_authenticated:
if self.request.user.ult:
context["user_ult"] = ULT.objects.get(user=self.request.user).calculator()
return context
else:
if "CKD_form" not in context:
context["CKD_form"] = self.CKD_form_class(self.request.GET)
if "erosions_form" not in context:
context["erosions_form"] = self.erosions_form_class(self.request.GET)
if "XOI_interactions_form" not in context:
context["XOI_interactions_form"] = self.XOI_interactions_form_class(self.request.GET)
if "organ_transplant_form" not in context:
context["organ_transplant_form"] = self.organ_transplant_form_class(self.request.GET)
if "allopurinol_hypersensitivity_form" not in context:
context["allopurinol_hypersensitivity_form"] = self.allopurinol_hypersensitivity_form_class(
self.request.GET
)
if "febuxostat_hypersensitivity_form" not in context:
context["febuxostat_hypersensitivity_form"] = self.febuxostat_hypersensitivity_form_class(
self.request.GET
)
if "heartattack_form" not in context:
context["heartattack_form"] = self.heartattack_form_class(self.request.GET)
if "stroke_form" not in context:
context["stroke_form"] = self.stroke_form_class(self.request.GET)
if "tophi_form" not in context:
context["tophi_form"] = self.tophi_form_class(self.request.GET)
return context
def get_form_kwargs(self):
"""Overrides get_form_kwargs() to look for an 'ult' kwarg in the GET request and uses it to query the database for the associated ULT for use in ULTAidForm
returns: [dict: dict containing 'ult' kwarg for form]"""
# Assign self.ult from the GET request kwargs before calling super(), which will overwrite kwargs
self.ult = self.kwargs.get("ult", None)
self.no_user = False
if self.request.user.is_authenticated == False:
self.no_user = True
kwargs = super(ULTAidCreate, self).get_form_kwargs()
# Checks if the ult kwarg came from ULT Detail and queries the database for the ULT whose pk matches self.ult from the initial kwargs
if self.ult:
ult_pk = self.ult
ult = ULT.objects.get(pk=ult_pk)
kwargs["ult"] = ult
kwargs["no_user"] = self.no_user
# If the User is anonymous / not logged in and the ULTAid has a ULT, pass ckd from the ULT to the ULTAid to avoid duplication of user input
if self.request.user.is_authenticated == False:
kwargs["ckd"] = ult.ckd
return kwargs
# This unfortunately needs to be rewritten so the object is assigned at the start of post() once enough CBV code is overridden. Not sure exactly what triggers it.
def get_object(self):
object = self.model
return object
def post(self, request, **kwargs):
form = self.form_class(request.POST, instance=ULTAid())
self.object = self.get_object()
if form.is_valid():
ULTAid_data = form.save(commit=False)
## WOULD LIKE TO CONSOLIDATE REQUEST.USER ADD TO RIGHT BEFORE SAVE(), THEN CAN COMBINE THE REST
# Check if user is authenticated and pull OnetoOne related model data from MedicalProfile if so
if request.user.is_authenticated:
ULTAid_data.user = request.user
CKD_form = self.CKD_form_class(request.POST, instance=request.user.medicalprofile.CKD)
CKD_data = CKD_form.save(commit=False)
CKD_data.last_modified = "ULTAid"
CKD_data.save()
erosions_form = self.erosions_form_class(request.POST, instance=request.user.medicalprofile.erosions)
erosions_data = erosions_form.save(commit=False)
erosions_data.last_modified = "ULTAid"
erosions_data.save()
XOI_interactions_form = self.XOI_interactions_form_class(
request.POST, instance=request.user.medicalprofile.XOI_interactions
)
XOI_interactions_data = XOI_interactions_form.save(commit=False)
XOI_interactions_data.last_modified = "ULTAid"
XOI_interactions_data.save()
organ_transplant_form = self.organ_transplant_form_class(
request.POST, instance=request.user.medicalprofile.organ_transplant
)
organ_transplant_data = organ_transplant_form.save(commit=False)
organ_transplant_data.last_modified = "ULTAid"
organ_transplant_data.save()
allopurinol_hypersensitivity_form = self.allopurinol_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.allopurinol_hypersensitivity
)
allopurinol_hypersensitivity_data = allopurinol_hypersensitivity_form.save(commit=False)
allopurinol_hypersensitivity_data.last_modified = "ULTAid"
allopurinol_hypersensitivity_data.save()
febuxostat_hypersensitivity_form = self.febuxostat_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.febuxostat_hypersensitivity
)
febuxostat_hypersensitivity_data = febuxostat_hypersensitivity_form.save(commit=False)
febuxostat_hypersensitivity_data.last_modified = "ULTAid"
febuxostat_hypersensitivity_data.save()
heartattack_form = self.heartattack_form_class(
request.POST, instance=request.user.medicalprofile.heartattack
)
heartattack_data = heartattack_form.save(commit=False)
heartattack_data.last_modified = "ULTAid"
heartattack_data.save()
stroke_form = self.stroke_form_class(request.POST, instance=request.user.medicalprofile.stroke)
stroke_data = stroke_form.save(commit=False)
stroke_data.last_modified = "ULTAid"
stroke_data.save()
tophi_form = self.tophi_form_class(request.POST, instance=request.user.medicalprofile.tophi)
tophi_data = tophi_form.save(commit=False)
tophi_data.last_modified = "ULTAid"
tophi_data.save()
ULTAid_data.ckd = CKD_data
ULTAid_data.erosions = erosions_data
ULTAid_data.XOI_interactions = XOI_interactions_data
ULTAid_data.organ_transplant = organ_transplant_data
ULTAid_data.allopurinol_hypersensitivity = allopurinol_hypersensitivity_data
ULTAid_data.febuxostat_hypersensitivity = febuxostat_hypersensitivity_data
ULTAid_data.heartattack = heartattack_data
ULTAid_data.stroke = stroke_data
ULTAid_data.tophi = tophi_data
ULTAid_data.save()
# Check if User has already created a PPxAid for some reason and, if so, assign it to the newly created/saved ULTAid to that attribute on the PPxAid
try:
self.ppxaid = request.user.ppxaid
except PPxAid.DoesNotExist:
self.ppxaid = None
if self.ppxaid:
self.ppxaid.ultaid = ULTAid_data
self.ppxaid.save()
else:
CKD_form = self.CKD_form_class(request.POST, instance=CKD())
CKD_data = CKD_form.save(commit=False)
CKD_data.last_modified = "ULTAid"
CKD_data.save()
erosions_form = self.erosions_form_class(request.POST, instance=Erosions())
erosions_data = erosions_form.save(commit=False)
erosions_data.last_modified = "ULTAid"
erosions_data.save()
XOI_interactions_form = self.XOI_interactions_form_class(request.POST, instance=XOIInteractions())
XOI_interactions_data = XOI_interactions_form.save(commit=False)
XOI_interactions_data.last_modified = "ULTAid"
XOI_interactions_data.save()
organ_transplant_form = self.organ_transplant_form_class(request.POST, instance=OrganTransplant())
organ_transplant_data = organ_transplant_form.save(commit=False)
organ_transplant_data.last_modified = "ULTAid"
organ_transplant_data.save()
allopurinol_hypersensitivity_form = self.allopurinol_hypersensitivity_form_class(
request.POST, instance=AllopurinolHypersensitivity()
)
allopurinol_hypersensitivity_data = allopurinol_hypersensitivity_form.save(commit=False)
allopurinol_hypersensitivity_data.last_modified = "ULTAid"
allopurinol_hypersensitivity_data.save()
febuxostat_hypersensitivity_form = self.febuxostat_hypersensitivity_form_class(
request.POST, instance=FebuxostatHypersensitivity()
)
febuxostat_hypersensitivity_data = febuxostat_hypersensitivity_form.save(commit=False)
febuxostat_hypersensitivity_data.last_modified = "ULTAid"
febuxostat_hypersensitivity_data.save()
heartattack_form = self.heartattack_form_class(request.POST, instance=HeartAttack())
heartattack_data = heartattack_form.save(commit=False)
heartattack_data.last_modified = "ULTAid"
heartattack_data.save()
stroke_form = self.stroke_form_class(request.POST, instance=Stroke())
stroke_data = stroke_form.save(commit=False)
stroke_data.last_modified = "ULTAid"
stroke_data.save()
tophi_form = self.tophi_form_class(request.POST, instance=Tophi())
tophi_data = tophi_form.save(commit=False)
tophi_data.last_modified = "ULTAid"
tophi_data.save()
ULTAid_data.ckd = CKD_data
ULTAid_data.erosions = erosions_data
ULTAid_data.XOI_interactions = XOI_interactions_data
ULTAid_data.organ_transplant = organ_transplant_data
ULTAid_data.allopurinol_hypersensitivity = allopurinol_hypersensitivity_data
ULTAid_data.febuxostat_hypersensitivity = febuxostat_hypersensitivity_data
ULTAid_data.heartattack = heartattack_data
ULTAid_data.stroke = stroke_data
ULTAid_data.tophi = tophi_data
ULTAid_data.save()
# Need to call form_valid(), not redirect; the form_valid() super method redirects to the object's DetailView
return self.form_valid(form)
else:
if request.user.is_authenticated:
return self.render_to_response(
self.get_context_data(
form=form,
CKD_form=self.CKD_form_class(request.POST, instance=request.user.medicalprofile.CKD),
erosions_form=self.erosions_form_class(
request.POST, instance=request.user.medicalprofile.erosions
),
XOI_interactions_form=self.XOI_interactions_form_class(
request.POST, instance=request.user.medicalprofile.XOI_interactions
),
organ_transplant_form=self.organ_transplant_form_class(
request.POST, instance=request.user.medicalprofile.organ_transplant
),
allopurinol_hypersensitivity_form=self.allopurinol_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.allopurinol_hypersensitivity
),
febuxostat_hypersensitivity_form=self.febuxostat_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.febuxostat_hypersensitivity
),
heartattack_form=self.heartattack_form_class(
request.POST, instance=request.user.medicalprofile.heartattack
),
stroke_form=self.stroke_form_class(request.POST, instance=request.user.medicalprofile.stroke),
tophi_form=self.tophi_form_class(request.POST, instance=request.user.medicalprofile.tophi),
)
)
else:
return self.render_to_response(
self.get_context_data(
form=form,
CKD_form=self.CKD_form_class(request.POST, instance=CKD()),
erosions_form=self.erosions_form_class(request.POST, instance=Erosions()),
XOI_interactions_form=self.XOI_interactions_form_class(
request.POST, instance=XOIInteractions()
),
organ_transplant_form=self.organ_transplant_form_class(
request.POST, instance=OrganTransplant()
),
allopurinol_hypersensitivity_form=self.allopurinol_hypersensitivity_form_class(
request.POST, instance=AllopurinolHypersensitivity()
),
febuxostat_hypersensitivity_form=self.febuxostat_hypersensitivity_form_class(
request.POST, instance=FebuxostatHypersensitivity()
),
heartattack_form=self.heartattack_form_class(request.POST, instance=HeartAttack()),
stroke_form=self.stroke_form_class(request.POST, instance=Stroke()),
tophi_form=self.tophi_form_class(request.POST, instance=Tophi()),
)
)
class ULTAidDetail(DetailView):
model = ULTAid
class ULTAidUpdate(LoginRequiredMixin, UpdateView):
model = ULTAid
form_class = ULTAidForm
CKD_form_class = CKDForm
erosions_form_class = ErosionsForm
XOI_interactions_form_class = XOIInteractionsSimpleForm
organ_transplant_form_class = OrganTransplantForm
allopurinol_hypersensitivity_form_class = AllopurinolHypersensitivitySimpleForm
febuxostat_hypersensitivity_form_class = FebuxostatHypersensitivitySimpleForm
heartattack_form_class = HeartAttackSimpleForm
stroke_form_class = StrokeSimpleForm
tophi_form_class = TophiForm
def get_context_data(self, **kwargs):
context = super(ULTAidUpdate, self).get_context_data(**kwargs)
# Adds appropriate OnetoOne related History/MedicalProfile model forms to context
if self.request.POST:
if "CKD_form" not in context:
context["CKD_form"] = self.CKD_form_class(
self.request.POST, instance=self.request.user.medicalprofile.CKD
)
if "erosions_form" not in context:
context["erosions_form"] = self.erosions_form_class(
self.request.POST, instance=self.request.user.medicalprofile.erosions
)
if "XOI_interactions_form" not in context:
context["XOI_interactions_form"] = self.XOI_interactions_form_class(
self.request.POST, instance=self.request.user.medicalprofile.XOI_interactions
)
if "organ_transplant_form" not in context:
context["organ_transplant_form"] = self.organ_transplant_form_class(
self.request.POST, instance=self.request.user.medicalprofile.organ_transplant
)
if "allopurinol_hypersensitivity_form" not in context:
context["allopurinol_hypersensitivity_form"] = self.allopurinol_hypersensitivity_form_class(
self.request.POST, instance=self.request.user.medicalprofile.allopurinol_hypersensitivity
)
if "febuxostat_hypersensitivity_form" not in context:
context["febuxostat_hypersensitivity_form"] = self.febuxostat_hypersensitivity_form_class(
self.request.POST, instance=self.request.user.medicalprofile.febuxostat_hypersensitivity
)
if "heartattack_form" not in context:
context["heartattack_form"] = self.heartattack_form_class(
self.request.POST, instance=self.request.user.medicalprofile.heartattack
)
if "stroke_form" not in context:
context["stroke_form"] = self.stroke_form_class(
self.request.POST, instance=self.request.user.medicalprofile.stroke
)
if "tophi_form" not in context:
context["tophi_form"] = self.tophi_form_class(
self.request.POST, instance=self.request.user.medicalprofile.tophi
)
# Check if user is logged in, pass ULT results to ULTAid view/context for JQuery evaluation to update form fields
#### IS THIS NEEDED FOR POST?
if self.request.user.is_authenticated:
if self.request.user.ult:
context["user_ult"] = ULT.objects.get(user=self.request.user).calculator()
return context
else:
if "CKD_form" not in context:
context["CKD_form"] = self.CKD_form_class(instance=self.request.user.medicalprofile.CKD)
if "erosions_form" not in context:
context["erosions_form"] = self.erosions_form_class(instance=self.request.user.medicalprofile.erosions)
if "XOI_interactions_form" not in context:
context["XOI_interactions_form"] = self.XOI_interactions_form_class(
instance=self.request.user.medicalprofile.XOI_interactions
)
if "organ_transplant_form" not in context:
context["organ_transplant_form"] = self.organ_transplant_form_class(
instance=self.request.user.medicalprofile.organ_transplant
)
if "allopurinol_hypersensitivity_form" not in context:
context["allopurinol_hypersensitivity_form"] = self.allopurinol_hypersensitivity_form_class(
instance=self.request.user.medicalprofile.allopurinol_hypersensitivity
)
if "febuxostat_hypersensitivity_form" not in context:
context["febuxostat_hypersensitivity_form"] = self.febuxostat_hypersensitivity_form_class(
instance=self.request.user.medicalprofile.febuxostat_hypersensitivity
)
if "heartattack_form" not in context:
context["heartattack_form"] = self.heartattack_form_class(
instance=self.request.user.medicalprofile.heartattack
)
if "stroke_form" not in context:
context["stroke_form"] = self.stroke_form_class(instance=self.request.user.medicalprofile.stroke)
if "tophi_form" not in context:
context["tophi_form"] = self.tophi_form_class(instance=self.request.user.medicalprofile.tophi)
# Check if user is logged in, pass ULT results to ULTAid view/context for JQuery evaluation to update form fields
if self.request.user.is_authenticated:
if self.request.user.ult:
context["user_ult"] = ULT.objects.get(user=self.request.user).calculator()
return context
def post(self, request, **kwargs):
# Uses UpdateView to get the ULTAid instance requested and put it in a form
form = self.form_class(request.POST, instance=self.get_object())
CKD_form = self.CKD_form_class(request.POST, instance=request.user.medicalprofile.CKD)
erosions_form = self.erosions_form_class(request.POST, instance=request.user.medicalprofile.erosions)
XOI_interactions_form = self.XOI_interactions_form_class(
request.POST, instance=request.user.medicalprofile.XOI_interactions
)
organ_transplant_form = self.organ_transplant_form_class(
request.POST, instance=request.user.medicalprofile.organ_transplant
)
allopurinol_hypersensitivity_form = self.allopurinol_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.allopurinol_hypersensitivity
)
febuxostat_hypersensitivity_form = self.febuxostat_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.febuxostat_hypersensitivity
)
heartattack_form = self.heartattack_form_class(request.POST, instance=request.user.medicalprofile.heartattack)
stroke_form = self.stroke_form_class(request.POST, instance=request.user.medicalprofile.stroke)
tophi_form = self.tophi_form_class(request.POST, instance=request.user.medicalprofile.tophi)
if form.is_valid():
# Uses related OnetoOne field forms to populate ULTAid fields, changes last_modified to ULTAid, and saves all data
ULTAid_data = form.save(commit=False)
CKD_data = CKD_form.save(commit=False)
CKD_data.last_modified = "ULTAid"
CKD_data.save()
erosions_data = erosions_form.save(commit=False)
erosions_data.last_modified = "ULTAid"
erosions_data.save()
XOI_interactions_data = XOI_interactions_form.save(commit=False)
XOI_interactions_data.last_modified = "ULTAid"
XOI_interactions_data.save()
organ_transplant_data = organ_transplant_form.save(commit=False)
organ_transplant_data.last_modified = "ULTAid"
organ_transplant_data.save()
allopurinol_hypersensitivity_data = allopurinol_hypersensitivity_form.save(commit=False)
allopurinol_hypersensitivity_data.last_modified = "ULTAid"
allopurinol_hypersensitivity_data.save()
febuxostat_hypersensitivity_data = febuxostat_hypersensitivity_form.save(commit=False)
febuxostat_hypersensitivity_data.last_modified = "ULTAid"
febuxostat_hypersensitivity_data.save()
heartattack_data = heartattack_form.save(commit=False)
heartattack_data.last_modified = "ULTAid"
heartattack_data.save()
stroke_data = stroke_form.save(commit=False)
stroke_data.last_modified = "ULTAid"
stroke_data.save()
tophi_data = tophi_form.save(commit=False)
tophi_data.last_modified = "ULTAid"
tophi_data.save()
ULTAid_data.ckd = CKD_data
ULTAid_data.erosions = erosions_data
ULTAid_data.XOI_interactions = XOI_interactions_data
ULTAid_data.organ_transplant = organ_transplant_data
ULTAid_data.allopurinol_hypersensitivity = allopurinol_hypersensitivity_data
ULTAid_data.febuxostat_hypersensitivity = febuxostat_hypersensitivity_data
ULTAid_data.heartattack = heartattack_data
ULTAid_data.stroke = stroke_data
ULTAid_data.tophi = tophi_data
ULTAid_data.save()
return self.form_valid(form)
else:
return self.render_to_response(
self.get_context_data(
form=form,
CKD_form=self.CKD_form_class(request.POST, instance=request.user.medicalprofile.CKD),
erosions_form=self.erosions_form_class(request.POST, instance=request.user.medicalprofile.erosions),
XOI_interactions_form=self.XOI_interactions_form_class(
request.POST, instance=request.user.medicalprofile.XOI_interactions
),
organ_transplant_form=self.organ_transplant_form_class(
request.POST, instance=request.user.medicalprofile.organ_transplant
),
allopurinol_hypersensitivity_form=self.allopurinol_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.allopurinol_hypersensitivity
),
febuxostat_hypersensitivity_form=self.febuxostat_hypersensitivity_form_class(
request.POST, instance=request.user.medicalprofile.febuxostat_hypersensitivity
),
heartattack_form=self.heartattack_form_class(
request.POST, instance=request.user.medicalprofile.heartattack
),
stroke_form=self.stroke_form_class(request.POST, instance=request.user.medicalprofile.stroke),
tophi_form=self.tophi_form_class(request.POST, instance=request.user.medicalprofile.tophi),
)
)
| 57.936567 | 175 | 0.645843 | 3,161 | 31,054 | 6.088896 | 0.067067 | 0.052372 | 0.064166 | 0.058191 | 0.82922 | 0.814776 | 0.801995 | 0.788538 | 0.780693 | 0.780693 | 0 | 0 | 0.284987 | 31,054 | 535 | 176 | 58.04486 | 0.866826 | 0.078702 | 0 | 0.687627 | 0 | 0 | 0.054647 | 0.029967 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016227 | false | 0 | 0.018256 | 0 | 0.119675 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e30b9acd39b58ec2c73f9efed75cd84cbb35a0d8 | 163 | py | Python | wbb/core/types/__init__.py | sppidy/WilliamButcherBot | 8cbd1593dd44a5384f7b1c4d630aa65271282e3e | [
"MIT"
] | null | null | null | wbb/core/types/__init__.py | sppidy/WilliamButcherBot | 8cbd1593dd44a5384f7b1c4d630aa65271282e3e | [
"MIT"
] | null | null | null | wbb/core/types/__init__.py | sppidy/WilliamButcherBot | 8cbd1593dd44a5384f7b1c4d630aa65271282e3e | [
"MIT"
] | null | null | null | # flake8: noqa
from .InlineKeyboardButtons import InlineKeyboardButtonDict
from .InlineQueryResult import InlineQueryResultCachedDocument, InlineQueryResultAudio
| 32.6 | 86 | 0.889571 | 11 | 163 | 13.181818 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006667 | 0.079755 | 163 | 4 | 87 | 40.75 | 0.96 | 0.07362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e30bc1ca07fc199415b9ef04c536cd1d962a8595 | 106 | py | Python | core/tests/test_inputs.py | rodolfolotte/tec | b97c39db7c90b87b5fe61888cf63111b2a32df5a | [
"MIT"
] | 4 | 2019-08-15T17:27:02.000Z | 2022-03-15T13:36:26.000Z | core/tests/test_inputs.py | rodolfolotte/tec | b97c39db7c90b87b5fe61888cf63111b2a32df5a | [
"MIT"
] | null | null | null | core/tests/test_inputs.py | rodolfolotte/tec | b97c39db7c90b87b5fe61888cf63111b2a32df5a | [
"MIT"
] | 2 | 2022-01-05T10:56:27.000Z | 2022-03-03T14:20:11.000Z | import unittest
# import core.inputs as i
class PrepareInputsInputsTest(unittest.TestCase):
pass
| 10.6 | 49 | 0.764151 | 12 | 106 | 6.75 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179245 | 106 | 9 | 50 | 11.777778 | 0.931034 | 0.216981 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e38852c8a192ceb40e65f2bc5b6cdf529e81c51a | 113 | py | Python | nutrition_calculator/__init__.py | marekvymazal/nutrition_calculator | d59dcb8941b77cd760033072e8aca0ae749db7eb | [
"MIT"
] | null | null | null | nutrition_calculator/__init__.py | marekvymazal/nutrition_calculator | d59dcb8941b77cd760033072e8aca0ae749db7eb | [
"MIT"
] | null | null | null | nutrition_calculator/__init__.py | marekvymazal/nutrition_calculator | d59dcb8941b77cd760033072e8aca0ae749db7eb | [
"MIT"
] | null | null | null | # import NutritionCalculator class for package convenience
from .nutrition_calculator import NutritionCalculator
| 37.666667 | 58 | 0.884956 | 11 | 113 | 9 | 0.818182 | 0.505051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097345 | 113 | 2 | 59 | 56.5 | 0.970588 | 0.495575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b98459a8e8548d1aefe8e8ac6e74535b2b17f7f | 1,661 | gyp | Python | clients/android/jni/Android.gyp | sleepingAnt/viewfinder | 9caf4e75faa8070d85f605c91d4cfb52c4674588 | [
"Apache-2.0"
] | 645 | 2015-01-03T02:03:59.000Z | 2021-12-03T08:43:16.000Z | clients/android/jni/Android.gyp | hoowang/viewfinder | 9caf4e75faa8070d85f605c91d4cfb52c4674588 | [
"Apache-2.0"
] | null | null | null | clients/android/jni/Android.gyp | hoowang/viewfinder | 9caf4e75faa8070d85f605c91d4cfb52c4674588 | [
"Apache-2.0"
] | 222 | 2015-01-07T05:00:52.000Z | 2021-12-06T09:54:26.000Z | {
'targets': [
{
'target_name': 'viewfinder',
'type': 'shared_library',
'include_dirs': [
'../../../third_party/shared/leveldb/include/',
'../../../third_party/shared/leveldb/',
'../../../third_party/shared/protobuf/src/',
'../../../third_party/shared/protobuf/',
'../../../third_party/shared/re2/',
'../../../third_party/shared/icu',
'../../../third_party/shared/icu/source/common',
'../../../third_party/shared/icu/source/i18n',
'../../../third_party/shared/icu/source/tools/tzcode',
'../../../third_party/shared/phonenumbers/cpp/src',
'../gen/',
],
'dependencies': [
'../../../third_party/shared/leveldb.gyp:libleveldb',
'../../../third_party/shared/protobuf.gyp:libprotobuf',
'../../../third_party/shared/snappy.gyp:libsnappy',
'../../../third_party/shared/re2.gyp:libre2',
'../../../third_party/shared/icu.gyp:icui18n',
'../../../third_party/shared/icu.gyp:icuuc',
'../../../third_party/shared/icu.gyp:icudata',
'../../../third_party/shared/phonenumbers.gyp:libphonenumbers',
'../../shared/shared.android.gyp:libshared',
'../../shared/shared.android.gyp:sharedprotos',
],
'defines': [
'LEVELDB_PLATFORM_ANDROID',
'LEVELDB_PLATFORM_POSIX',
],
'sources': [
'DayTableEnv.cc',
'DBMigrationAndroid.cc',
'NativeAppState.cc',
'NetworkManagerAndroid.cc',
],
'cppflags': [
'-pthread',
],
'ldflags': [
'-lz',
'-llog',
],
},
],
}
| 31.942308 | 71 | 0.51475 | 141 | 1,661 | 5.886525 | 0.368794 | 0.216867 | 0.346988 | 0.160241 | 0.16988 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005534 | 0.238411 | 1,661 | 51 | 72 | 32.568627 | 0.650593 | 0 | 0 | 0.14 | 0 | 0 | 0.671884 | 0.579771 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8be98282878886714c6f07c9f6d25db3fdea75b7 | 37 | py | Python | core/models/__init__.py | swipswaps/retinal_oct | a99f93d88833fc328b9b7f6aaabe1310632c644b | [
"MIT"
] | 15 | 2021-01-29T17:05:38.000Z | 2022-03-16T17:47:42.000Z | core/models/__init__.py | solomonkimunyu/retinal_oct | a99f93d88833fc328b9b7f6aaabe1310632c644b | [
"MIT"
] | null | null | null | core/models/__init__.py | solomonkimunyu/retinal_oct | a99f93d88833fc328b9b7f6aaabe1310632c644b | [
"MIT"
] | 14 | 2021-03-03T03:16:31.000Z | 2022-03-23T19:23:42.000Z | from .retina_model import RetinaModel | 37 | 37 | 0.891892 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
47784123f286f062d9069c60aa3210fdc987fae7 | 2,128 | py | Python | policy/net/net_v0.py | yangmuzhi/wuziqi | 7bdee51ef2a37373b0823b00c4536138560ec3bb | [
"MIT"
] | null | null | null | policy/net/net_v0.py | yangmuzhi/wuziqi | 7bdee51ef2a37373b0823b00c4536138560ec3bb | [
"MIT"
] | null | null | null | policy/net/net_v0.py | yangmuzhi/wuziqi | 7bdee51ef2a37373b0823b00c4536138560ec3bb | [
"MIT"
] | null | null | null | """
CNN policy and value networks for a 15x15 board (TF1 tf.layers API).
"""
import tensorflow as tf
def policy_net(obs):
    # Seven channels-first conv layers, flattened into a dense policy head.
x = tf.layers.Conv2D(filters=256, kernel_size=(5,5), padding="same", use_bias=True, data_format="channels_first")(obs)
x = tf.layers.Conv2D(filters=256, kernel_size=(5,5), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=128, kernel_size=(5,5), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=128, kernel_size=(4,4), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=128, kernel_size=(4,4), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=64, kernel_size=(3,3), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=64, kernel_size=(3,3), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.flatten(x)
x = tf.layers.Dense(256, activation=tf.nn.relu)(x)
    # Independent per-cell sigmoid scores over the 15x15 board; note this is
    # not a normalised distribution the way a softmax head would be.
    pi = tf.layers.Dense(15*15, activation=tf.nn.sigmoid)(x)
    log_p = tf.log(pi)
return pi, log_p
def value_net(obs):
    # Same conv stack as policy_net, ending in a single sigmoid state value.
x = tf.layers.Conv2D(filters=256, kernel_size=(5,5), padding="same", use_bias=True, data_format="channels_first")(obs)
x = tf.layers.Conv2D(filters=256, kernel_size=(5,5), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=128, kernel_size=(5,5), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=128, kernel_size=(4,4), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=128, kernel_size=(4,4), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=64, kernel_size=(3,3), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.Conv2D(filters=64, kernel_size=(3,3), padding="same", use_bias=True, data_format="channels_first")(x)
x = tf.layers.flatten(x)
x = tf.layers.Dense(256, activation=tf.nn.relu)(x)
out = tf.layers.Dense(1, activation=tf.nn.sigmoid)(x)
return out
| 53.2 | 122 | 0.701128 | 355 | 2,128 | 4.033803 | 0.123944 | 0.111732 | 0.113128 | 0.146648 | 0.924581 | 0.893855 | 0.893855 | 0.893855 | 0.893855 | 0.893855 | 0 | 0.04825 | 0.113722 | 2,128 | 39 | 123 | 54.564103 | 0.711029 | 0.00141 | 0 | 0.692308 | 0 | 0 | 0.119149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.038462 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
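The two networks in the record above share an identical seven-layer conv stack and differ only in their heads: independent per-cell sigmoids for the policy, a single scalar for the value. When a normalised move distribution over the 15*15 = 225 cells is wanted instead of independent sigmoids, a softmax head is the usual alternative; a pure-Python sketch of that normalisation (illustrative only, separate from the TF1 code above):

```python
import math

def softmax(scores):
    """Normalise raw per-cell scores into a probability distribution.

    Contrast with the independent sigmoids used by policy_net above, whose
    outputs need not sum to 1. Subtracting the max keeps exp() stable.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

With uniform inputs the distribution is uniform, e.g. `softmax([0.0, 0.0])` gives `[0.5, 0.5]`.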
477a47e991128b5c1dfdcc58164ad430ca1cdf8b | 231 | py | Python | daskms/apps/tests/test_chunk_transformer.py | ratt-ru/dask-ms | becd3572f86a0ad78b55540f25fce6e129976a29 | [
"BSD-3-Clause"
] | 7 | 2019-08-23T03:44:53.000Z | 2021-05-06T00:51:18.000Z | daskms/apps/tests/test_chunk_transformer.py | ska-sa/dask-ms | ce33e7aad36eeb7c2c79093622b9776186856304 | [
"BSD-3-Clause"
] | 76 | 2019-08-20T14:34:05.000Z | 2022-02-10T13:21:29.000Z | daskms/apps/tests/test_chunk_transformer.py | ratt-ru/dask-ms | becd3572f86a0ad78b55540f25fce6e129976a29 | [
"BSD-3-Clause"
] | 4 | 2019-10-15T13:35:19.000Z | 2021-03-23T14:52:23.000Z | from daskms.apps.convert import parse_chunks
def test_chunk_parsing():
assert parse_chunks("{row: 1000, chan: 16}") == {"row": 1000, "chan": 16}
assert parse_chunks("{row: (1000, 1000, 10)}") == {"row": (1000, 1000, 10)}
| 33 | 79 | 0.640693 | 33 | 231 | 4.333333 | 0.515152 | 0.195804 | 0.237762 | 0.27972 | 0.335664 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164103 | 0.155844 | 231 | 6 | 80 | 38.5 | 0.569231 | 0 | 0 | 0 | 0 | 0 | 0.233766 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
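The assertions in the record above pin down the contract of `daskms.apps.convert.parse_chunks`: a brace-delimited string with bare keys maps to a dict of ints or tuples. A minimal sketch of such a parser (a hypothetical stand-in, not dask-ms's actual implementation) can quote the bare keys and hand the rest to `ast.literal_eval`:

```python
import ast
import re

def parse_chunks_sketch(text):
    """Parse '{row: 1000, chan: (16, 16)}' into {'row': 1000, 'chan': (16, 16)}.

    Hypothetical stand-in for parse_chunks: quote each bare identifier so the
    string becomes a valid Python dict literal, then evaluate it safely.
    """
    quoted = re.sub(r"([A-Za-z_]\w*)\s*:", r"'\1':", text)
    return ast.literal_eval(quoted)
```

`ast.literal_eval` only accepts literals, so arbitrary expressions in the input are rejected rather than executed.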
47c87188f1b48889dc0863270b6ce528269c3f88 | 77 | py | Python | symmys/layers/__init__.py | klarh/symmys | 13b232a817975a15fa471879fdd8d21de783ecd6 | [
"MIT"
] | null | null | null | symmys/layers/__init__.py | klarh/symmys | 13b232a817975a15fa471879fdd8d21de783ecd6 | [
"MIT"
] | null | null | null | symmys/layers/__init__.py | klarh/symmys | 13b232a817975a15fa471879fdd8d21de783ecd6 | [
"MIT"
] | null | null | null |
from .QuaternionRotation import QuaternionRotation, QuaternionRotoinversion
| 25.666667 | 75 | 0.896104 | 5 | 77 | 13.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077922 | 77 | 2 | 76 | 38.5 | 0.971831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
47f2406a4f2f3dbbd6b17b0682b5f096a773ee04 | 28 | py | Python | sentinella/test/test/__init__.py | GloriaPG/test-plugin | 2a0e3f9824bae5ba48ae262d9ba6c756453a82e5 | [
"Apache-2.0"
] | null | null | null | sentinella/test/test/__init__.py | GloriaPG/test-plugin | 2a0e3f9824bae5ba48ae262d9ba6c756453a82e5 | [
"Apache-2.0"
] | null | null | null | sentinella/test/test/__init__.py | GloriaPG/test-plugin | 2a0e3f9824bae5ba48ae262d9ba6c756453a82e5 | [
"Apache-2.0"
] | null | null | null | from .test import get_stats
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9a5677978b7dd0975e6a7460230825726c7b801d | 9,957 | py | Python | tests/unit/altimeter/aws/access/test_accessor.py | jparten/altimeter | 956cf7f7c2fe443751b8da393a764f8a7bb82348 | [
"MIT"
] | null | null | null | tests/unit/altimeter/aws/access/test_accessor.py | jparten/altimeter | 956cf7f7c2fe443751b8da393a764f8a7bb82348 | [
"MIT"
] | null | null | null | tests/unit/altimeter/aws/access/test_accessor.py | jparten/altimeter | 956cf7f7c2fe443751b8da393a764f8a7bb82348 | [
"MIT"
] | null | null | null | import datetime
import os
from unittest import TestCase
from altimeter.aws.access.accessor import (
AccessStep,
MultiHopAccessor,
SessionCache,
SessionCacheValue,
)
import boto3
from moto import mock_sts
class TestSessionCacheValue(TestCase):
def test_is_expired_when_expired(self):
cache_value = SessionCacheValue(
None, datetime.datetime.utcnow() - datetime.timedelta(minutes=5)
)
self.assertTrue(cache_value.is_expired())
def test_is_expired_when_not_expired(self):
cache_value = SessionCacheValue(
None, datetime.datetime.utcnow() + datetime.timedelta(minutes=5)
)
self.assertFalse(cache_value.is_expired())
class TestSessionCache(TestCase):
def test_build_key(self):
key = SessionCache._build_key(
account_id="1234",
role_name="test_role",
role_session_name="test_role_session",
region="test_region",
)
self.assertEqual(key, "1234:test_role:test_role_session:test_region")
def test_get_miss(self):
cache = SessionCache()
cached_session = cache.get(
account_id="1234",
role_name="test_role",
role_session_name="test_role_session",
region="test_region",
)
self.assertIsNone(cached_session)
def test_put_get_not_expired(self):
cache = SessionCache()
session = boto3.Session()
cache.put(
session=session,
expiration=datetime.datetime.utcnow() + datetime.timedelta(minutes=5),
account_id="1234",
role_name="test_role",
role_session_name="test_role_session",
region="test_region",
)
cached_session = cache.get(
account_id="1234",
role_name="test_role",
role_session_name="test_role_session",
region="test_region",
)
self.assertIsNotNone(cached_session)
def test_put_get_expired(self):
cache = SessionCache()
session = boto3.Session()
cache.put(
session=session,
expiration=datetime.datetime.utcnow() - datetime.timedelta(minutes=5),
account_id="1234",
role_name="test_role",
role_session_name="test_role_session",
region="test_region",
)
cached_session = cache.get(
account_id="1234",
role_name="test_role",
role_session_name="test_role_session",
region="test_region",
)
self.assertIsNone(cached_session)
class TestAccessStep(TestCase):
def test_str_with_account(self):
access_step = AccessStep(role_name="test_role_name", account_id="1234")
self.assertEqual(str(access_step), "test_role_name@1234")
def test_str_without_account(self):
access_step = AccessStep(role_name="test_role_name")
self.assertEqual(str(access_step), "test_role_name@target")
def test_to_dict(self):
access_step = AccessStep(role_name="test_role_name", account_id="1234", external_id="abcd")
expected_dict = {"role_name": "test_role_name", "account_id": "1234", "external_id": "abcd"}
self.assertEqual(expected_dict, access_step.to_dict())
def test_from_dict(self):
d = {"role_name": "test_role_name", "account_id": "1234", "external_id": "abcd"}
access_step = AccessStep.from_dict(d)
self.assertEqual(access_step.to_dict(), d)
def test_from_dict_without_role_name(self):
d = {"account_id": "1234", "external_id": "abcd"}
with self.assertRaises(ValueError):
AccessStep.from_dict(d)
def test_from_dict_with_external_id_env_var(self):
d = {"role_name": "test_role_name", "account_id": "1234", "external_id_env_var": "EXT_ID"}
os.environ["EXT_ID"] = "abcd"
try:
access_step = AccessStep.from_dict(d)
expected_dict = {
"role_name": "test_role_name",
"account_id": "1234",
"external_id": "abcd",
}
self.assertEqual(access_step.to_dict(), expected_dict)
finally:
del os.environ["EXT_ID"]
def test_from_dict_with_external_id_env_var_missing_var(self):
d = {"role_name": "test_role_name", "account_id": "1234", "external_id_env_var": "EXT_ID"}
with self.assertRaises(ValueError):
AccessStep.from_dict(d)
@mock_sts
class TestMultiHopAccessor(TestCase):
def test_get_session(self):
access_steps = [
AccessStep(role_name="test_role_name1", account_id="1234", external_id="abcd"),
AccessStep(role_name="test_role_name2", account_id="5678"),
AccessStep(role_name="test_role_name3"),
]
mha = MultiHopAccessor(
role_session_name="test_role_session_name", access_steps=access_steps
)
session = mha.get_session("4566")
self.assertIsInstance(session, boto3.Session)
expected_cache_sorted_keys = [
"1234:test_role_name1:test_role_session_name:None",
"4566:test_role_name3:test_role_session_name:None",
"5678:test_role_name2:test_role_session_name:None",
]
self.assertEqual(sorted(mha.session_cache._cache.keys()), expected_cache_sorted_keys)
def test_without_access_steps(self):
with self.assertRaises(ValueError):
MultiHopAccessor(role_session_name="test_role_session_name", access_steps=[])
def test_with_access_steps_non_final_missing_account_id(self):
access_steps = [
AccessStep(role_name="test_role_name1"),
AccessStep(role_name="test_role_name2"),
]
with self.assertRaises(ValueError):
MultiHopAccessor(role_session_name="test_role_session_name", access_steps=access_steps)
def test_with_access_steps_final_with_account_id(self):
access_steps = [
AccessStep(role_name="test_role_name1", account_id="1234"),
AccessStep(role_name="test_role_name2", account_id="5678"),
]
with self.assertRaises(ValueError):
MultiHopAccessor(role_session_name="test_role_session_name", access_steps=access_steps)
def test_cache_usage(self):
access_steps = [
AccessStep(role_name="test_role_name1", account_id="1234", external_id="abcd"),
AccessStep(role_name="test_role_name2", account_id="5678"),
AccessStep(role_name="test_role_name3"),
]
mha = MultiHopAccessor(
role_session_name="test_role_session_name", access_steps=access_steps
)
mha.get_session("4567")
mha.get_session("4567")
mha.get_session("8901")
mha.get_session("8901")
expected_cache_sorted_keys = [
"1234:test_role_name1:test_role_session_name:None",
"4567:test_role_name3:test_role_session_name:None",
"5678:test_role_name2:test_role_session_name:None",
"8901:test_role_name3:test_role_session_name:None",
]
self.assertEqual(sorted(mha.session_cache._cache.keys()), expected_cache_sorted_keys)
def test_str(self):
access_steps = [
AccessStep(role_name="test_role_name1", account_id="1234", external_id="abcd"),
AccessStep(role_name="test_role_name2", account_id="5678"),
AccessStep(role_name="test_role_name3"),
]
mha = MultiHopAccessor(
role_session_name="test_role_session_name", access_steps=access_steps
)
expected_str = "accessor:test_role_session_name:test_role_name1@1234,test_role_name2@5678,test_role_name3@target"
self.assertEqual(str(mha), expected_str)
def test_to_dict(self):
access_steps = [
AccessStep(role_name="test_role_name1", account_id="1234", external_id="abcd"),
AccessStep(role_name="test_role_name2", account_id="5678"),
AccessStep(role_name="test_role_name3"),
]
mha = MultiHopAccessor(
role_session_name="test_role_session_name", access_steps=access_steps
)
expected_dict = {
"role_session_name": "test_role_session_name",
"access_steps": [
{"role_name": "test_role_name1", "external_id": "abcd", "account_id": "1234"},
{"role_name": "test_role_name2", "external_id": None, "account_id": "5678"},
{"role_name": "test_role_name3", "external_id": None, "account_id": None},
],
}
self.assertEqual(mha.to_dict(), expected_dict)
def test_from_dict(self):
mha_dict = {
"role_session_name": "test_role_session_name",
"access_steps": [
{"role_name": "test_role_name1", "external_id": "abcd", "account_id": "1234"},
{"role_name": "test_role_name2", "external_id": None, "account_id": "5678"},
{"role_name": "test_role_name3", "external_id": None, "account_id": None},
],
}
mha = MultiHopAccessor.from_dict(mha_dict)
self.assertEqual(mha_dict, mha.to_dict())
def test_from_dict_missing_access_steps(self):
mha_dict = {"role_session_name": "test_role_session_name"}
with self.assertRaises(ValueError):
MultiHopAccessor.from_dict(mha_dict)
def test_from_dict_missing_role_session_name(self):
mha_dict = {
"access_steps": [
{"role_name": "test_role_name1", "external_id": "abcd", "account_id": "1234"},
{"role_name": "test_role_name2", "external_id": None, "account_id": "5678"},
{"role_name": "test_role_name3", "external_id": None, "account_id": None},
]
}
with self.assertRaises(ValueError):
MultiHopAccessor.from_dict(mha_dict)
| 39.511905 | 121 | 0.638044 | 1,157 | 9,957 | 5.06828 | 0.079516 | 0.105048 | 0.114598 | 0.106412 | 0.826569 | 0.810539 | 0.753581 | 0.739256 | 0.706685 | 0.666951 | 0 | 0.030364 | 0.249171 | 9,957 | 251 | 122 | 39.669323 | 0.754013 | 0 | 0 | 0.540541 | 0 | 0 | 0.221452 | 0.07201 | 0 | 0 | 0 | 0 | 0.108108 | 1 | 0.103604 | false | 0 | 0.027027 | 0 | 0.148649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
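The SessionCache tests in the record above pin down an expiring cache keyed on `account_id:role_name:role_session_name:region`, where an expired entry behaves like a miss. A compact sketch of that pattern (illustrative names and shapes, not altimeter's actual class):

```python
import datetime

class TinySessionCache:
    """Expiring cache keyed per (account, role, session name, region).

    Sketch of the behaviour exercised by TestSessionCache above; the names
    here are illustrative, not altimeter's implementation.
    """

    def __init__(self):
        self._cache = {}

    @staticmethod
    def _key(account_id, role_name, role_session_name, region):
        return f"{account_id}:{role_name}:{role_session_name}:{region}"

    def put(self, value, expiration, **key_parts):
        self._cache[self._key(**key_parts)] = (value, expiration)

    def get(self, **key_parts):
        hit = self._cache.get(self._key(**key_parts))
        if hit is None:
            return None
        value, expiration = hit
        # Expired entries behave like misses, as in test_put_get_expired.
        if expiration < datetime.datetime.utcnow():
            return None
        return value
```

The flat string key is what makes assertions like `sorted(cache._cache.keys())` in the MultiHopAccessor tests cheap to write.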
9a6b5b6ee75f782ac9f503cb1d38a019589a0748 | 50 | py | Python | hello.py | owainkenwayucl/DH-Hackathon | 59ed4973ac648c1a50df5cfe2d986bfb0923f66d | [
"MIT"
] | null | null | null | hello.py | owainkenwayucl/DH-Hackathon | 59ed4973ac648c1a50df5cfe2d986bfb0923f66d | [
"MIT"
] | 1 | 2018-11-07T14:36:49.000Z | 2018-11-07T14:36:49.000Z | hello.py | owainkenwayucl/DH-Hackathon | 59ed4973ac648c1a50df5cfe2d986bfb0923f66d | [
"MIT"
] | null | null | null | print("Hello, world")
print("Hello, world again")
| 16.666667 | 27 | 0.7 | 7 | 50 | 5 | 0.571429 | 0.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 50 | 2 | 28 | 25 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d0a89af8880ef40302c3ba6f0d24ea600bed3bec | 192 | py | Python | malcolm/core/ntscalararray.py | MattTaylorDLS/pymalcolm | 995a8e4729bd745f8f617969111cc5a34ce1ac14 | [
"Apache-2.0"
] | null | null | null | malcolm/core/ntscalararray.py | MattTaylorDLS/pymalcolm | 995a8e4729bd745f8f617969111cc5a34ce1ac14 | [
"Apache-2.0"
] | null | null | null | malcolm/core/ntscalararray.py | MattTaylorDLS/pymalcolm | 995a8e4729bd745f8f617969111cc5a34ce1ac14 | [
"Apache-2.0"
] | null | null | null | from .attributemodel import AttributeModel
from .serializable import Serializable
@Serializable.register_subclass("epics:nt/NTScalarArray:1.0")
class NTScalarArray(AttributeModel):
pass
| 24 | 61 | 0.828125 | 20 | 192 | 7.9 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.09375 | 192 | 7 | 62 | 27.428571 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0.135417 | 0.135417 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
d0afad06191fc29424b90422f22143f974a2b5cc | 426 | py | Python | miwell-flask-app/tests/unit_tests/test_forms.py | joshuahigginson1/DevOps-Assessment-1 | d617522ada565b8b587e2ff7525e1138d1559a75 | [
"MIT"
] | 1 | 2020-08-09T20:52:42.000Z | 2020-08-09T20:52:42.000Z | miwell-flask-app/tests/unit_tests/test_forms.py | joshuahigginson1/DevOps-Assessment-1 | d617522ada565b8b587e2ff7525e1138d1559a75 | [
"MIT"
] | null | null | null | miwell-flask-app/tests/unit_tests/test_forms.py | joshuahigginson1/DevOps-Assessment-1 | d617522ada565b8b587e2ff7525e1138d1559a75 | [
"MIT"
] | 1 | 2020-08-08T11:47:27.000Z | 2020-08-08T11:47:27.000Z | # This script is for unit testing our forms.
# Imports --------------------------------------------------------------------------------
# Login Form Tests -----------------------------------------------------------------------
# Patient Register Form Tests ------------------------------------------------------------
# Psychiatrist Register Form Tests -------------------------------------------------------
| 18.521739 | 90 | 0.253521 | 20 | 426 | 5.4 | 0.75 | 0.25 | 0.314815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107981 | 426 | 22 | 91 | 19.363636 | 0.284211 | 0.934272 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d0e10b97148b763ef388d204c9d5c842ab6d04ba | 1,764 | py | Python | epytope/Data/pssms/epidemix/mat/B_5101_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/epidemix/mat/B_5101_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/epidemix/mat/B_5101_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_5101_8 = {0: {'A': -2.8, 'C': -1.5, 'E': -2.1, 'D': 1.9, 'G': -3.0, 'F': -1.8, 'I': 0.8, 'H': -1.5, 'K': -2.5, 'M': -1.1, 'L': -0.3, 'N': -1.8, 'Q': -2.0, 'P': -2.5, 'S': -2.5, 'R': -2.6, 'T': 0.5, 'W': -1.0, 'V': 0.1, 'Y': 1.7}, 1: {'A': 1.5, 'C': -1.2, 'E': -2.5, 'D': -2.5, 'G': -2.5, 'F': -2.7, 'I': -2.8, 'H': -1.9, 'K': -2.5, 'M': -1.7, 'L': -3.3, 'N': -2.3, 'Q': -2.1, 'P': 2.4, 'S': -2.2, 'R': -2.7, 'T': -2.2, 'W': -1.9, 'V': -2.6, 'Y': -2.2}, 2: {'A': -2.9, 'C': -1.5, 'E': -3.0, 'D': -0.1, 'G': -3.5, 'F': 1.7, 'I': -1.8, 'H': -1.6, 'K': -3.0, 'M': 1.1, 'L': -0.2, 'N': -2.7, 'Q': -2.4, 'P': 1.1, 'S': -3.0, 'R': -3.0, 'T': -2.5, 'W': -0.7, 'V': 0.8, 'Y': 1.8}, 3: {'A': 0.2, 'C': -1.3, 'E': 0.1, 'D': -2.3, 'G': -2.8, 'F': 0.6, 'I': 0.2, 'H': 1.0, 'K': 1.0, 'M': -1.0, 'L': -0.2, 'N': -2.0, 'Q': -1.5, 'P': -2.4, 'S': 0.0, 'R': 1.0, 'T': 0.0, 'W': -1.1, 'V': 0.0, 'Y': -1.6}, 4: {'A': -2.5, 'C': -1.2, 'E': -2.7, 'D': -2.7, 'G': -3.2, 'F': -1.9, 'I': 1.5, 'H': 1.4, 'K': 0.0, 'M': -0.8, 'L': -0.4, 'N': -2.3, 'Q': -2.1, 'P': 1.4, 'S': 0.0, 'R': -2.4, 'T': -2.1, 'W': -1.3, 'V': 0.8, 'Y': -1.8}, 5: {'A': -2.8, 'C': -1.6, 'E': -2.8, 'D': -2.9, 'G': -3.4, 'F': -2.2, 'I': -2.0, 'H': 2.3, 'K': -2.7, 'M': -1.3, 'L': -0.3, 'N': -2.4, 'Q': -2.3, 'P': 1.7, 'S': 0.0, 'R': -2.8, 'T': -2.4, 'W': 1.3, 'V': 0.9, 'Y': 0.4}, 6: {'A': -2.4, 'C': -1.3, 'E': 0.1, 'D': -2.4, 'G': -2.9, 'F': -2.1, 'I': -1.7, 'H': 0.8, 'K': 0.7, 'M': -1.0, 'L': 0.0, 'N': -2.1, 'Q': 0.6, 'P': 0.4, 'S': 0.3, 'R': 0.8, 'T': -2.0, 'W': -1.2, 'V': 1.1, 'Y': -1.9}, 7: {'A': -2.4, 'C': -0.9, 'E': -2.9, 'D': -2.9, 'G': -3.4, 'F': 0.9, 'I': 2.0, 'H': -1.8, 'K': -2.7, 'M': -0.3, 'L': 0.8, 'N': -2.6, 'Q': -2.3, 'P': -2.6, 'S': -2.8, 'R': -2.6, 'T': -1.9, 'W': -0.7, 'V': 0.9, 'Y': -1.3}} | 1,764 | 1,764 | 0.281179 | 491 | 1,764 | 1.00611 | 0.065173 | 0.036437 | 0.018219 | 0.024292 | 0.291498 | 0.137652 | 0.089069 | 0.032389 | 0 | 0 | 0 | 0.232056 | 0.186508 
| 1,764 | 1 | 1,764 | 1,764 | 0.112195 | 0 | 0 | 0 | 0 | 0 | 0.090652 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
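The `B_5101_8` dict in the record above is a position-specific scoring matrix (PSSM) for 8-mer peptides: position index, then residue letter, then score. Scoring a peptide is a per-position lookup and sum; a sketch with a made-up two-position matrix (the helper name and toy values are illustrative, not part of epytope's API):

```python
def score_peptide(peptide, pssm):
    """Sum position-specific scores for a peptide against a PSSM shaped like
    B_5101_8 above ({position: {residue: score}})."""
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

# Tiny two-position example matrix (made up for illustration):
toy_pssm = {0: {"A": 1.0, "G": -0.5}, 1: {"A": 0.2, "G": 2.0}}
```

With the toy matrix, `score_peptide("AG", toy_pssm)` is 1.0 + 2.0 = 3.0.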
d0ec44545456bbc2732a41f0aa731cbcc4944ae0 | 364 | py | Python | lib/python/pulumi_crds/_utilities.py | eljoth/pulumi-kubernetes-operator | 904f08d29510a824a77f17ea44bfd8fc3d21901b | [
"Apache-2.0"
] | null | null | null | lib/python/pulumi_crds/_utilities.py | eljoth/pulumi-kubernetes-operator | 904f08d29510a824a77f17ea44bfd8fc3d21901b | [
"Apache-2.0"
] | 2 | 2020-09-18T17:12:23.000Z | 2020-12-30T19:40:56.000Z | lib/python/pulumi_crds/_utilities.py | eljoth/pulumi-kubernetes-operator | 904f08d29510a824a77f17ea44bfd8fc3d21901b | [
"Apache-2.0"
] | null | null | null | from pulumi_kubernetes import _utilities
def get_env(*args):
return _utilities.get_env(*args)
def get_env_bool(*args):
return _utilities.get_env_bool(*args)
def get_env_int(*args):
return _utilities.get_env_int(*args)
def get_env_float(*args):
return _utilities.get_env_float(*args)
def get_version():
return _utilities.get_version()
| 16.545455 | 42 | 0.747253 | 54 | 364 | 4.611111 | 0.259259 | 0.192771 | 0.361446 | 0.353414 | 0.401606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148352 | 364 | 21 | 43 | 17.333333 | 0.803226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | true | 0 | 0.090909 | 0.454545 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
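Each helper in the record above simply delegates to `pulumi_kubernetes._utilities`. The underlying `get_env(*args)` pattern in Pulumi SDKs returns the value of the first environment variable that is set among the given names; a hedged sketch of that lookup (the exact fallback behaviour here is an assumption, not the library's verified code):

```python
import os

def get_env_sketch(*names):
    """Return the value of the first set environment variable among names.

    Hypothetical stand-in for the _utilities.get_env delegation above;
    returns None when no listed variable is set (an assumed fallback).
    """
    for name in names:
        value = os.environ.get(name)
        if value is not None:
            return value
    return None
```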
ef7c8ff7940e9a89996998c242aff11415260522 | 32 | py | Python | vae-pytorch/src/encoder/__init__.py | BeeGass/Readable-VAEs | 50dcb03ad9b688c5249c52120cbbb5fed0a0a085 | [
"MIT"
] | 2 | 2022-01-02T16:41:14.000Z | 2022-01-07T05:18:04.000Z | vae-pytorch/src/encoder/__init__.py | BeeGass/Readable-VAEs | 50dcb03ad9b688c5249c52120cbbb5fed0a0a085 | [
"MIT"
] | null | null | null | vae-pytorch/src/encoder/__init__.py | BeeGass/Readable-VAEs | 50dcb03ad9b688c5249c52120cbbb5fed0a0a085 | [
"MIT"
] | null | null | null | from .vae_encoder import Encoder | 32 | 32 | 0.875 | 5 | 32 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
efdfc707200667aabc8a6b59192cf5b8330dbba9 | 38 | py | Python | vplotter/engines/__init__.py | AlexanderDKazakov/Plotter | 38874946c0013c30b7749d60368f2e28b6d498fb | [
"MIT"
] | null | null | null | vplotter/engines/__init__.py | AlexanderDKazakov/Plotter | 38874946c0013c30b7749d60368f2e28b6d498fb | [
"MIT"
] | 1 | 2021-04-09T11:26:08.000Z | 2021-04-09T11:26:08.000Z | vplotter/engines/__init__.py | AlexanderDKazakov/Plotter | 38874946c0013c30b7749d60368f2e28b6d498fb | [
"MIT"
] | null | null | null | from .veusz_engine import VeuszEngine
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
324711e720b673587f3748c12567c140b115ca65 | 79 | py | Python | 2016/common/__init__.py | SimplyDanny/advent-of-code-2015 | 66ef2accccec479989fe47145a5fff3159c418bf | [
"BSD-2-Clause"
] | null | null | null | 2016/common/__init__.py | SimplyDanny/advent-of-code-2015 | 66ef2accccec479989fe47145a5fff3159c418bf | [
"BSD-2-Clause"
] | 2 | 2020-02-19T21:06:29.000Z | 2020-03-15T15:14:58.000Z | 2016/common/__init__.py | SimplyDanny/advent-of-code-2015 | 66ef2accccec479989fe47145a5fff3159c418bf | [
"BSD-2-Clause"
] | null | null | null | from .input import read, readline, readlines
from .output import print_results
| 26.333333 | 44 | 0.822785 | 11 | 79 | 5.818182 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 79 | 2 | 45 | 39.5 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
32b60cff6240a00af0d99d52cb1378bdb4f769b4 | 24 | py | Python | __init__.py | jin0g/soundset | 6167638323eac5c475102c72c836f51b8442f54e | [
"MIT"
] | null | null | null | __init__.py | jin0g/soundset | 6167638323eac5c475102c72c836f51b8442f54e | [
"MIT"
] | null | null | null | __init__.py | jin0g/soundset | 6167638323eac5c475102c72c836f51b8442f54e | [
"MIT"
] | null | null | null | from .soundset import *
| 12 | 23 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
08752f079f4c0367d985988ab32a0569754ea95c | 38 | py | Python | Graphers/__init__.py | tuxtobin/ProfilerReport | 9fd94241ff31511b51b9832448639078e6313fad | [
"MIT"
] | null | null | null | Graphers/__init__.py | tuxtobin/ProfilerReport | 9fd94241ff31511b51b9832448639078e6313fad | [
"MIT"
] | null | null | null | Graphers/__init__.py | tuxtobin/ProfilerReport | 9fd94241ff31511b51b9832448639078e6313fad | [
"MIT"
] | null | null | null | from .grapher import MatplotlibGraphs
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
08b491aa2e3a967de122fa857eb794c91fdc744e | 80 | py | Python | module/__init__.py | hirusha-adi/GifGang | 7b49767d9c0321844d7fc70a55a288d72c48acb5 | [
"MIT"
] | null | null | null | module/__init__.py | hirusha-adi/GifGang | 7b49767d9c0321844d7fc70a55a288d72c48acb5 | [
"MIT"
] | null | null | null | module/__init__.py | hirusha-adi/GifGang | 7b49767d9c0321844d7fc70a55a288d72c48acb5 | [
"MIT"
] | null | null | null | """
The official Python API of GifGang
"""
from . import sfw
from . import nsfw
| 13.333333 | 34 | 0.7 | 12 | 80 | 4.666667 | 0.833333 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 80 | 5 | 35 | 16 | 0.875 | 0.425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3e9419848d256db2b236a032adc6f372dbddee62 | 69 | py | Python | curves/__init__.py | hamolicious/Curves | d4b4cb926f570865dbfc5974d033a85d9f1528fb | [
"WTFPL"
] | null | null | null | curves/__init__.py | hamolicious/Curves | d4b4cb926f570865dbfc5974d033a85d9f1528fb | [
"WTFPL"
] | null | null | null | curves/__init__.py | hamolicious/Curves | d4b4cb926f570865dbfc5974d033a85d9f1528fb | [
"WTFPL"
] | null | null | null | from curves.points import Point
from curves.bezier import CubicBezier | 34.5 | 37 | 0.869565 | 10 | 69 | 6 | 0.7 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101449 | 69 | 2 | 37 | 34.5 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3e9683b7b8401511aeb724d2c67d0127dd9b7e89 | 57 | py | Python | python/testData/override/qualified.py | Sajaki/intellij-community | 6748af2c40567839d11fd652ec77ba263c074aad | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/override/qualified.py | Sajaki/intellij-community | 6748af2c40567839d11fd652ec77ba263c074aad | [
"Apache-2.0"
] | 1 | 2020-07-30T19:04:47.000Z | 2020-07-30T19:04:47.000Z | python/testData/override/qualified.py | bradleesand/intellij-community | 750ff9c10333c9c1278c00dbe8d88c877b1b9749 | [
"Apache-2.0"
] | 1 | 2020-10-15T05:56:42.000Z | 2020-10-15T05:56:42.000Z | import turtle
class C(turtle.TurtleScreenBase):
pass | 14.25 | 33 | 0.77193 | 7 | 57 | 6.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 57 | 4 | 34 | 14.25 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3eca6130ae5e411f621b26aa4647f2283b0a803a | 68 | py | Python | heimerdinger/__init__.py | jayfry1077/runeterra_audio_discord_bot | 0327273e702003f094e39ed995a41d21840926c8 | [
"MIT"
] | 1 | 2020-04-19T06:00:15.000Z | 2020-04-19T06:00:15.000Z | heimerdinger/__init__.py | jayfry1077/runeterra_audio_discord_bot | 0327273e702003f094e39ed995a41d21840926c8 | [
"MIT"
] | null | null | null | heimerdinger/__init__.py | jayfry1077/runeterra_audio_discord_bot | 0327273e702003f094e39ed995a41d21840926c8 | [
"MIT"
] | null | null | null | from heimerdinger.invent import deck_code_to_audio, regions_to_audio | 68 | 68 | 0.911765 | 11 | 68 | 5.181818 | 0.818182 | 0.245614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 68 | 1 | 68 | 68 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
41215dfea66426994fa0af506e9af0757a912ede | 189 | py | Python | src/fastshelf/__init__.py | ligonliu/fastshelf | 1d4250cbaaa062207090c0bd5e8237b271f96496 | [
"MIT"
] | null | null | null | src/fastshelf/__init__.py | ligonliu/fastshelf | 1d4250cbaaa062207090c0bd5e8237b271f96496 | [
"MIT"
] | null | null | null | src/fastshelf/__init__.py | ligonliu/fastshelf | 1d4250cbaaa062207090c0bd5e8237b271f96496 | [
"MIT"
] | null | null | null | from .fastshelf import Shelf
try:
    from PlyvelDB import PlyvelDB
except ModuleNotFoundError:
    pass
try:
    from RocksDB import RocksDB
except ModuleNotFoundError:
    pass
de24943e20de544e53a5c6d058defc9353cf07a3 | 136 | py | Python | mythx_models/response/group_status.py | ConsenSys/mythx-models | e912c2fc6e7d18041310d3b9f0f95085db47ed9b | [
"MIT"
] | 2 | 2019-08-26T13:42:28.000Z | 2019-11-13T15:44:16.000Z | mythx_models/response/group_status.py | ConsenSys/mythx-models | e912c2fc6e7d18041310d3b9f0f95085db47ed9b | [
"MIT"
] | 22 | 2019-08-26T13:14:55.000Z | 2021-04-18T14:22:52.000Z | mythx_models/response/group_status.py | ConsenSys/mythx-models | e912c2fc6e7d18041310d3b9f0f95085db47ed9b | [
"MIT"
] | 6 | 2019-08-29T15:51:38.000Z | 2021-04-05T11:41:34.000Z | """This module contains the GroupStatusResponse domain model."""
from .group import Group
class GroupStatusResponse(Group):
pass
| 17 | 64 | 0.764706 | 15 | 136 | 6.933333 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154412 | 136 | 7 | 65 | 19.428571 | 0.904348 | 0.426471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
de3654785fcc6df9a178b09af7cfa05ff0197461 | 29,991 | py | Python | gewittergefahr/gg_utils/nwp_model_utils_test.py | dopplerchase/GewitterGefahr | 4415b08dd64f37eba5b1b9e8cc5aa9af24f96593 | [
"MIT"
] | 26 | 2018-10-04T01:07:35.000Z | 2022-01-29T08:49:32.000Z | gewittergefahr/gg_utils/nwp_model_utils_test.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | [
"MIT"
] | 4 | 2017-12-25T02:01:08.000Z | 2018-12-19T01:54:21.000Z | gewittergefahr/gg_utils/nwp_model_utils_test.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | [
"MIT"
] | 11 | 2017-12-10T23:05:29.000Z | 2022-01-29T08:49:33.000Z | """Unit tests for nwp_model_utils.py."""
import os.path
import unittest
import numpy
import pandas
from gewittergefahr.gg_utils import nwp_model_utils
from gewittergefahr.gg_utils import interp
TOLERANCE = 1e-6
MAX_MEAN_DISTANCE_ERROR_RAP_METRES = 100.
MAX_MAX_DISTANCE_ERROR_RAP_METRES = 500.
MAX_MEAN_DISTANCE_ERROR_NARR_METRES = 250.
MAX_MAX_DISTANCE_ERROR_NARR_METRES = 1000.
GRID130_LATLNG_FILE_NAME = '{0:s}/grid_point_latlng_grid130.data'.format(
os.path.dirname(__file__)
)
GRID252_LATLNG_FILE_NAME = '{0:s}/grid_point_latlng_grid252.data'.format(
os.path.dirname(__file__)
)
NARR_LATLNG_FILE_NAME = '{0:s}/grid_point_latlng_grid221.data'.format(
os.path.dirname(__file__)
)
MAX_MEAN_SIN_OR_COS_ERROR = 1e-5
MAX_MAX_SIN_OR_COS_ERROR = 1e-4
GRID130_WIND_ROTATION_FILE_NAME = (
'{0:s}/wind_rotation_angles_grid130.data'
).format(os.path.dirname(__file__))
GRID252_WIND_ROTATION_FILE_NAME = (
'{0:s}/wind_rotation_angles_grid252.data'
).format(os.path.dirname(__file__))
# The following constants are used to test get_times_needed_for_interp.
MODEL_TIME_STEP_HOURS = 3
QUERY_TIMES_UNIX_SEC = numpy.array(
[14400, 18000, 21600, 25200, 28800, 32400, 36000], dtype=int
)
MIN_QUERY_TIMES_UNIX_SEC = numpy.array([10800, 21600, 32400], dtype=int)
MAX_QUERY_TIMES_UNIX_SEC = numpy.array([21600, 32400, 43200], dtype=int)
MODEL_TIMES_PREV_INTERP_UNIX_SEC = numpy.array(
[10800, 21600, 32400], dtype=int
)
MODEL_TIMES_NEXT_INTERP_UNIX_SEC = numpy.array(
[21600, 32400, 43200], dtype=int
)
MODEL_TIMES_SUB_AND_LINEAR_INTERP_UNIX_SEC = numpy.array(
[10800, 21600, 32400, 43200], dtype=int
)
MODEL_TIMES_SUPERLINEAR_INTERP_UNIX_SEC = numpy.array(
[0, 10800, 21600, 32400, 43200, 54000], dtype=int
)
THIS_DICT = {
nwp_model_utils.MIN_QUERY_TIME_COLUMN: MIN_QUERY_TIMES_UNIX_SEC,
nwp_model_utils.MAX_QUERY_TIME_COLUMN: MAX_QUERY_TIMES_UNIX_SEC
}
INTERP_TIME_TABLE_PREV_INTERP = pandas.DataFrame.from_dict(THIS_DICT)
INTERP_TIME_TABLE_NEXT_INTERP = pandas.DataFrame.from_dict(THIS_DICT)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP = pandas.DataFrame.from_dict(THIS_DICT)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP = pandas.DataFrame.from_dict(THIS_DICT)
THIS_NESTED_ARRAY = INTERP_TIME_TABLE_PREV_INTERP[[
nwp_model_utils.MIN_QUERY_TIME_COLUMN,
nwp_model_utils.MIN_QUERY_TIME_COLUMN
]].values.tolist()
THIS_ARGUMENT_DICT = {
nwp_model_utils.MODEL_TIMES_COLUMN: THIS_NESTED_ARRAY,
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN: THIS_NESTED_ARRAY
}
INTERP_TIME_TABLE_PREV_INTERP = INTERP_TIME_TABLE_PREV_INTERP.assign(
**THIS_ARGUMENT_DICT)
INTERP_TIME_TABLE_NEXT_INTERP = INTERP_TIME_TABLE_NEXT_INTERP.assign(
**THIS_ARGUMENT_DICT)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP = (
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP.assign(**THIS_ARGUMENT_DICT)
)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP = (
INTERP_TIME_TABLE_SUPERLINEAR_INTERP.assign(**THIS_ARGUMENT_DICT)
)
INTERP_TIME_TABLE_PREV_INTERP[nwp_model_utils.MODEL_TIMES_COLUMN].values[0] = (
numpy.array([10800], dtype=int)
)
INTERP_TIME_TABLE_PREV_INTERP[nwp_model_utils.MODEL_TIMES_COLUMN].values[1] = (
numpy.array([21600], dtype=int)
)
INTERP_TIME_TABLE_PREV_INTERP[nwp_model_utils.MODEL_TIMES_COLUMN].values[2] = (
numpy.array([32400], dtype=int)
)
INTERP_TIME_TABLE_PREV_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[0] = numpy.array([1, 0, 0], dtype=bool)
INTERP_TIME_TABLE_PREV_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[1] = numpy.array([0, 1, 0], dtype=bool)
INTERP_TIME_TABLE_PREV_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[2] = numpy.array([0, 0, 1], dtype=bool)
INTERP_TIME_TABLE_NEXT_INTERP[nwp_model_utils.MODEL_TIMES_COLUMN].values[0] = (
numpy.array([21600], dtype=int)
)
INTERP_TIME_TABLE_NEXT_INTERP[nwp_model_utils.MODEL_TIMES_COLUMN].values[1] = (
numpy.array([32400], dtype=int)
)
INTERP_TIME_TABLE_NEXT_INTERP[nwp_model_utils.MODEL_TIMES_COLUMN].values[2] = (
numpy.array([43200], dtype=int)
)
INTERP_TIME_TABLE_NEXT_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[0] = numpy.array([1, 0, 0], dtype=bool)
INTERP_TIME_TABLE_NEXT_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[1] = numpy.array([0, 1, 0], dtype=bool)
INTERP_TIME_TABLE_NEXT_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[2] = numpy.array([0, 0, 1], dtype=bool)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_COLUMN
].values[0] = numpy.array([10800, 21600], dtype=int)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_COLUMN
].values[1] = numpy.array([21600, 32400], dtype=int)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_COLUMN
].values[2] = numpy.array([32400, 43200], dtype=int)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[0] = numpy.array([1, 1, 0, 0], dtype=bool)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[1] = numpy.array([0, 1, 1, 0], dtype=bool)
INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[2] = numpy.array([0, 0, 1, 1], dtype=bool)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_COLUMN
].values[0] = numpy.array([0, 10800, 21600, 32400], dtype=int)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_COLUMN
].values[1] = numpy.array([10800, 21600, 32400, 43200], dtype=int)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_COLUMN
].values[2] = numpy.array([21600, 32400, 43200, 54000], dtype=int)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[0] = numpy.array([1, 1, 1, 1, 0, 0], dtype=bool)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[1] = numpy.array([0, 1, 1, 1, 1, 0], dtype=bool)
INTERP_TIME_TABLE_SUPERLINEAR_INTERP[
nwp_model_utils.MODEL_TIMES_NEEDED_COLUMN
].values[2] = numpy.array([0, 0, 1, 1, 1, 1], dtype=bool)
# The following constants are used to test rotate_winds_to_earth_relative and
# rotate_winds_to_grid_relative.
HALF_ROOT3 = numpy.sqrt(3) / 2
U_WINDS_GRID_RELATIVE_M_S01 = numpy.array([
[0, 5, 10], [0, -5, -10]
], dtype=float)
V_WINDS_GRID_RELATIVE_M_S01 = numpy.array([
[10, 15, 20], [-10, -15, -20]
], dtype=float)
ROTATION_ANGLE_COSINES = numpy.array([
[1, 0.5, -0.5], [-1, -0.5, 0.5]
])
ROTATION_ANGLE_SINES = numpy.array([
[0, HALF_ROOT3, HALF_ROOT3], [0, -HALF_ROOT3, -HALF_ROOT3]
])
U_WINDS_EARTH_RELATIVE_M_S01 = numpy.array([
[0, 2.5 + 15 * HALF_ROOT3, -5 + 20 * HALF_ROOT3],
[0, 2.5 + 15 * HALF_ROOT3, -5 + 20 * HALF_ROOT3]
])
V_WINDS_EARTH_RELATIVE_M_S01 = numpy.array([
[10, 7.5 - 5 * HALF_ROOT3, -10 - 10 * HALF_ROOT3],
[10, 7.5 - 5 * HALF_ROOT3, -10 - 10 * HALF_ROOT3]
])
def _compare_interp_time_tables(first_interp_time_table,
second_interp_time_table):
"""Compares two tables with interpolation times.
:param first_interp_time_table: pandas DataFrame created by
`nwp_model_utils.get_times_needed_for_interp`.
:param second_interp_time_table: Same.
:return: are_tables_equal: Boolean flag.
"""
first_column_names = list(first_interp_time_table)
second_column_names = list(second_interp_time_table)
if set(first_column_names) != set(second_column_names):
return False
first_num_rows = len(first_interp_time_table.index)
second_num_rows = len(second_interp_time_table.index)
if first_num_rows != second_num_rows:
return False
for i in range(first_num_rows):
for this_column in first_column_names:
if not numpy.array_equal(
first_interp_time_table[this_column].values[i],
second_interp_time_table[this_column].values[i]
):
return False
return True
class NwpModelUtilsTests(unittest.TestCase):
"""Each method is a unit test for nwp_model_utils.py."""
def test_check_grid_name_narr221(self):
"""Ensures correct output from check_grid_name.
In this case, model is NARR and grid is NCEP 221.
"""
nwp_model_utils.check_grid_name(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
def test_check_grid_name_narr_extended221(self):
"""Ensures correct output from check_grid_name.
In this case, model is NARR and grid is extended NCEP 221.
"""
nwp_model_utils.check_grid_name(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_EXTENDED_221GRID)
def test_check_grid_name_narr130(self):
"""Ensures correct output from check_grid_name.
In this case, model is NARR and grid is NCEP 130.
"""
with self.assertRaises(ValueError):
nwp_model_utils.check_grid_name(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
def test_check_grid_name_rap221(self):
"""Ensures correct output from check_grid_name.
In this case, model is RAP and grid is NCEP 221.
"""
with self.assertRaises(ValueError):
nwp_model_utils.check_grid_name(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
def test_check_grid_name_rap_extended221(self):
"""Ensures correct output from check_grid_name.
In this case, model is RAP and grid is extended NCEP 221.
"""
with self.assertRaises(ValueError):
nwp_model_utils.check_grid_name(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_EXTENDED_221GRID)
def test_check_grid_name_rap130(self):
"""Ensures correct output from check_grid_name.
In this case, model is RAP and grid is NCEP 130.
"""
nwp_model_utils.check_grid_name(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
def test_dimensions_to_grid_221(self):
"""Ensures correct output from dimensions_to_grid for NCEP 221 grid."""
this_num_rows, this_num_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
this_grid_name = nwp_model_utils.dimensions_to_grid(
num_rows=this_num_rows, num_columns=this_num_columns)
self.assertTrue(this_grid_name == nwp_model_utils.NAME_OF_221GRID)
def test_dimensions_to_grid_extended221(self):
"""Ensures correctness of dimensions_to_grid for extended 221 grid."""
this_num_rows, this_num_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_EXTENDED_221GRID)
this_grid_name = nwp_model_utils.dimensions_to_grid(
num_rows=this_num_rows, num_columns=this_num_columns)
self.assertTrue(
this_grid_name == nwp_model_utils.NAME_OF_EXTENDED_221GRID
)
def test_dimensions_to_grid_130(self):
"""Ensures correct output from dimensions_to_grid for NCEP 130 grid."""
this_num_rows, this_num_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
this_grid_name = nwp_model_utils.dimensions_to_grid(
num_rows=this_num_rows, num_columns=this_num_columns)
self.assertTrue(this_grid_name == nwp_model_utils.NAME_OF_130GRID)
def test_dimensions_to_grid_252(self):
"""Ensures correct output from dimensions_to_grid for NCEP 252 grid."""
this_num_rows, this_num_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_252GRID)
this_grid_name = nwp_model_utils.dimensions_to_grid(
num_rows=this_num_rows, num_columns=this_num_columns)
self.assertTrue(this_grid_name == nwp_model_utils.NAME_OF_252GRID)
def test_dimensions_to_grid_fake(self):
"""Ensures correct output from dimensions_to_grid for fake grid."""
with self.assertRaises(ValueError):
nwp_model_utils.dimensions_to_grid(num_rows=1, num_columns=2)
def test_rotate_winds_to_earth_relative(self):
"""Ensures correct output from rotate_winds_to_earth_relative."""
these_u_winds_m_s01, these_v_winds_m_s01 = (
nwp_model_utils.rotate_winds_to_earth_relative(
u_winds_grid_relative_m_s01=U_WINDS_GRID_RELATIVE_M_S01,
v_winds_grid_relative_m_s01=V_WINDS_GRID_RELATIVE_M_S01,
rotation_angle_cosines=ROTATION_ANGLE_COSINES,
rotation_angle_sines=ROTATION_ANGLE_SINES)
)
self.assertTrue(numpy.allclose(
these_u_winds_m_s01, U_WINDS_EARTH_RELATIVE_M_S01, atol=TOLERANCE
))
self.assertTrue(numpy.allclose(
these_v_winds_m_s01, V_WINDS_EARTH_RELATIVE_M_S01, atol=TOLERANCE
))
def test_rotate_winds_to_grid_relative(self):
"""Ensures correct output from rotate_winds_to_grid_relative."""
these_u_winds_m_s01, these_v_winds_m_s01 = (
nwp_model_utils.rotate_winds_to_grid_relative(
u_winds_earth_relative_m_s01=U_WINDS_EARTH_RELATIVE_M_S01,
v_winds_earth_relative_m_s01=V_WINDS_EARTH_RELATIVE_M_S01,
rotation_angle_cosines=ROTATION_ANGLE_COSINES,
rotation_angle_sines=ROTATION_ANGLE_SINES)
)
self.assertTrue(numpy.allclose(
these_u_winds_m_s01, U_WINDS_GRID_RELATIVE_M_S01, atol=TOLERANCE
))
self.assertTrue(numpy.allclose(
these_v_winds_m_s01, V_WINDS_GRID_RELATIVE_M_S01, atol=TOLERANCE
))
def test_get_times_needed_for_interp_previous(self):
"""Ensures correct output from get_times_needed_for_interp.
In this case, interpolation method is previous-neighbour.
"""
these_model_times_unix_sec, this_interp_time_table = (
nwp_model_utils.get_times_needed_for_interp(
query_times_unix_sec=QUERY_TIMES_UNIX_SEC,
model_time_step_hours=MODEL_TIME_STEP_HOURS,
method_string=interp.PREV_NEIGHBOUR_METHOD_STRING)
)
self.assertTrue(numpy.array_equal(
these_model_times_unix_sec, MODEL_TIMES_PREV_INTERP_UNIX_SEC
))
self.assertTrue(_compare_interp_time_tables(
this_interp_time_table, INTERP_TIME_TABLE_PREV_INTERP
))
def test_get_times_needed_for_interp_next(self):
"""Ensures correct output from get_times_needed_for_interp.
In this case, interpolation method is next-neighbour.
"""
these_model_times_unix_sec, this_interp_time_table = (
nwp_model_utils.get_times_needed_for_interp(
query_times_unix_sec=QUERY_TIMES_UNIX_SEC,
model_time_step_hours=MODEL_TIME_STEP_HOURS,
method_string=interp.NEXT_NEIGHBOUR_METHOD_STRING)
)
self.assertTrue(numpy.array_equal(
these_model_times_unix_sec, MODEL_TIMES_NEXT_INTERP_UNIX_SEC
))
self.assertTrue(_compare_interp_time_tables(
this_interp_time_table, INTERP_TIME_TABLE_NEXT_INTERP
))
def test_get_times_needed_for_interp_nearest(self):
"""Ensures correct output from get_times_needed_for_interp.
In this case, interpolation method is nearest-neighbour.
"""
these_model_times_unix_sec, this_interp_time_table = (
nwp_model_utils.get_times_needed_for_interp(
query_times_unix_sec=QUERY_TIMES_UNIX_SEC,
model_time_step_hours=MODEL_TIME_STEP_HOURS,
method_string=interp.NEAREST_NEIGHBOUR_METHOD_STRING)
)
self.assertTrue(numpy.array_equal(
these_model_times_unix_sec,
MODEL_TIMES_SUB_AND_LINEAR_INTERP_UNIX_SEC
))
self.assertTrue(_compare_interp_time_tables(
this_interp_time_table, INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP
))
def test_get_times_needed_for_interp_linear(self):
"""Ensures correct output from get_times_needed_for_interp.
In this case, interpolation method is linear.
"""
these_model_times_unix_sec, this_interp_time_table = (
nwp_model_utils.get_times_needed_for_interp(
query_times_unix_sec=QUERY_TIMES_UNIX_SEC,
model_time_step_hours=MODEL_TIME_STEP_HOURS,
method_string=interp.LINEAR_METHOD_STRING)
)
self.assertTrue(numpy.array_equal(
these_model_times_unix_sec,
MODEL_TIMES_SUB_AND_LINEAR_INTERP_UNIX_SEC
))
self.assertTrue(_compare_interp_time_tables(
this_interp_time_table, INTERP_TIME_TABLE_SUB_AND_LINEAR_INTERP
))
def test_get_times_needed_for_interp_quadratic(self):
"""Ensures correct output from get_times_needed_for_interp.
In this case, interpolation method is quadratic.
"""
these_model_times_unix_sec, this_interp_time_table = (
nwp_model_utils.get_times_needed_for_interp(
query_times_unix_sec=QUERY_TIMES_UNIX_SEC,
model_time_step_hours=MODEL_TIME_STEP_HOURS,
method_string=interp.SPLINE2_METHOD_STRING)
)
self.assertTrue(numpy.array_equal(
these_model_times_unix_sec,
MODEL_TIMES_SUPERLINEAR_INTERP_UNIX_SEC
))
self.assertTrue(_compare_interp_time_tables(
this_interp_time_table, INTERP_TIME_TABLE_SUPERLINEAR_INTERP
))
def test_get_times_needed_for_interp_cubic(self):
"""Ensures correct output from get_times_needed_for_interp.
In this case, interpolation method is cubic.
"""
these_model_times_unix_sec, this_interp_time_table = (
nwp_model_utils.get_times_needed_for_interp(
query_times_unix_sec=QUERY_TIMES_UNIX_SEC,
model_time_step_hours=MODEL_TIME_STEP_HOURS,
method_string=interp.SPLINE3_METHOD_STRING)
)
self.assertTrue(numpy.array_equal(
these_model_times_unix_sec,
MODEL_TIMES_SUPERLINEAR_INTERP_UNIX_SEC
))
self.assertTrue(_compare_interp_time_tables(
this_interp_time_table, INTERP_TIME_TABLE_SUPERLINEAR_INTERP
))
def test_projection_grid130(self):
"""Ensures approx correctness of Lambert proj for NCEP 130 grid."""
num_grid_rows, num_grid_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
unique_longitudes_deg, unique_latitudes_deg = numpy.loadtxt(
GRID130_LATLNG_FILE_NAME, unpack=True)
latitude_matrix_deg = numpy.reshape(
unique_latitudes_deg, (num_grid_rows, num_grid_columns)
)
longitude_matrix_deg = numpy.reshape(
unique_longitudes_deg, (num_grid_rows, num_grid_columns)
)
x_matrix_metres, y_matrix_metres = nwp_model_utils.project_latlng_to_xy(
latitudes_deg=latitude_matrix_deg,
longitudes_deg=longitude_matrix_deg,
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
expected_x_matrix_metres, expected_y_matrix_metres = (
nwp_model_utils.get_xy_grid_point_matrices(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
)
x_error_matrix_metres = x_matrix_metres - expected_x_matrix_metres
y_error_matrix_metres = y_matrix_metres - expected_y_matrix_metres
distance_error_matrix_metres = numpy.sqrt(
x_error_matrix_metres ** 2 + y_error_matrix_metres ** 2
)
self.assertTrue(
numpy.mean(distance_error_matrix_metres) <=
MAX_MEAN_DISTANCE_ERROR_RAP_METRES
)
self.assertTrue(
numpy.max(distance_error_matrix_metres) <=
MAX_MAX_DISTANCE_ERROR_RAP_METRES
)
def test_projection_grid252(self):
"""Ensures approx correctness of Lambert proj for NCEP 252 grid."""
num_grid_rows, num_grid_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_252GRID)
unique_longitudes_deg, unique_latitudes_deg = numpy.loadtxt(
GRID252_LATLNG_FILE_NAME, unpack=True)
latitude_matrix_deg = numpy.reshape(
unique_latitudes_deg, (num_grid_rows, num_grid_columns)
)
longitude_matrix_deg = numpy.reshape(
unique_longitudes_deg, (num_grid_rows, num_grid_columns)
)
x_matrix_metres, y_matrix_metres = nwp_model_utils.project_latlng_to_xy(
latitudes_deg=latitude_matrix_deg,
longitudes_deg=longitude_matrix_deg,
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_252GRID)
expected_x_matrix_metres, expected_y_matrix_metres = (
nwp_model_utils.get_xy_grid_point_matrices(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_252GRID)
)
x_error_matrix_metres = x_matrix_metres - expected_x_matrix_metres
y_error_matrix_metres = y_matrix_metres - expected_y_matrix_metres
distance_error_matrix_metres = numpy.sqrt(
x_error_matrix_metres ** 2 + y_error_matrix_metres ** 2
)
self.assertTrue(
numpy.mean(distance_error_matrix_metres) <=
MAX_MEAN_DISTANCE_ERROR_RAP_METRES
)
self.assertTrue(
numpy.max(distance_error_matrix_metres) <=
MAX_MAX_DISTANCE_ERROR_RAP_METRES
)
def test_projection_grid221(self):
"""Ensures approx correctness of Lambert proj for NCEP 221 grid."""
num_grid_rows, num_grid_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
unique_longitudes_deg, unique_latitudes_deg = numpy.loadtxt(
NARR_LATLNG_FILE_NAME, unpack=True)
latitude_matrix_deg = numpy.reshape(
unique_latitudes_deg, (num_grid_rows, num_grid_columns)
)
longitude_matrix_deg = numpy.reshape(
unique_longitudes_deg, (num_grid_rows, num_grid_columns)
)
x_matrix_metres, y_matrix_metres = nwp_model_utils.project_latlng_to_xy(
latitudes_deg=latitude_matrix_deg,
longitudes_deg=longitude_matrix_deg,
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
expected_x_matrix_metres, expected_y_matrix_metres = (
nwp_model_utils.get_xy_grid_point_matrices(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
)
x_error_matrix_metres = x_matrix_metres - expected_x_matrix_metres
y_error_matrix_metres = y_matrix_metres - expected_y_matrix_metres
distance_error_matrix_metres = numpy.sqrt(
x_error_matrix_metres ** 2 + y_error_matrix_metres ** 2
)
self.assertTrue(
numpy.mean(distance_error_matrix_metres) <=
MAX_MEAN_DISTANCE_ERROR_NARR_METRES
)
self.assertTrue(
numpy.max(distance_error_matrix_metres) <=
MAX_MAX_DISTANCE_ERROR_NARR_METRES
)
def test_projection_extended221(self):
"""Ensures approx correctness of Lambert proj for extended 221 grid."""
num_grid_rows, num_grid_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
unique_longitudes_deg, unique_latitudes_deg = numpy.loadtxt(
NARR_LATLNG_FILE_NAME, unpack=True)
latitude_matrix_deg = numpy.reshape(
unique_latitudes_deg, (num_grid_rows, num_grid_columns)
)
longitude_matrix_deg = numpy.reshape(
unique_longitudes_deg, (num_grid_rows, num_grid_columns)
)
x_matrix_metres, y_matrix_metres = nwp_model_utils.project_latlng_to_xy(
latitudes_deg=latitude_matrix_deg,
longitudes_deg=longitude_matrix_deg,
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_221GRID)
expected_x_matrix_metres, expected_y_matrix_metres = (
nwp_model_utils.get_xy_grid_point_matrices(
model_name=nwp_model_utils.NARR_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_EXTENDED_221GRID)
)
expected_x_matrix_metres = expected_x_matrix_metres[100:-100, 100:-100]
expected_y_matrix_metres = expected_y_matrix_metres[100:-100, 100:-100]
expected_x_matrix_metres -= expected_x_matrix_metres[0, 0]
expected_y_matrix_metres -= expected_y_matrix_metres[0, 0]
x_error_matrix_metres = x_matrix_metres - expected_x_matrix_metres
y_error_matrix_metres = y_matrix_metres - expected_y_matrix_metres
distance_error_matrix_metres = numpy.sqrt(
x_error_matrix_metres ** 2 + y_error_matrix_metres ** 2
)
self.assertTrue(
numpy.mean(distance_error_matrix_metres) <=
MAX_MEAN_DISTANCE_ERROR_NARR_METRES
)
self.assertTrue(
numpy.max(distance_error_matrix_metres) <=
MAX_MAX_DISTANCE_ERROR_NARR_METRES
)
def test_wind_rotation_angles_grid130(self):
"""Ensures approx correctness of rotation angles for NCEP 130 grid."""
num_grid_rows, num_grid_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
expected_cos_vector, expected_sin_vector = numpy.loadtxt(
GRID130_WIND_ROTATION_FILE_NAME, unpack=True)
expected_cos_matrix = numpy.reshape(
expected_cos_vector, (num_grid_rows, num_grid_columns)
)
expected_sin_matrix = numpy.reshape(
expected_sin_vector, (num_grid_rows, num_grid_columns)
)
latitude_matrix_deg, longitude_matrix_deg = (
nwp_model_utils.get_latlng_grid_point_matrices(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_130GRID)
)
rotation_angle_cos_matrix, rotation_angle_sin_matrix = (
nwp_model_utils.get_wind_rotation_angles(
latitudes_deg=latitude_matrix_deg,
longitudes_deg=longitude_matrix_deg,
model_name=nwp_model_utils.RAP_MODEL_NAME)
)
cos_error_matrix = numpy.absolute(
rotation_angle_cos_matrix - expected_cos_matrix
)
sin_error_matrix = numpy.absolute(
rotation_angle_sin_matrix - expected_sin_matrix
)
self.assertTrue(
numpy.mean(cos_error_matrix) <= MAX_MEAN_SIN_OR_COS_ERROR
)
self.assertTrue(numpy.max(cos_error_matrix) <= MAX_MAX_SIN_OR_COS_ERROR)
self.assertTrue(
numpy.mean(sin_error_matrix) <= MAX_MEAN_SIN_OR_COS_ERROR
)
self.assertTrue(numpy.max(sin_error_matrix) <= MAX_MAX_SIN_OR_COS_ERROR)

def test_wind_rotation_angles_grid252(self):
"""Ensures approx correctness of rotation angles for NCEP 252 grid."""
num_grid_rows, num_grid_columns = nwp_model_utils.get_grid_dimensions(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_252GRID)
expected_cos_vector, expected_sin_vector = numpy.loadtxt(
GRID252_WIND_ROTATION_FILE_NAME, unpack=True)
expected_cos_matrix = numpy.reshape(
expected_cos_vector, (num_grid_rows, num_grid_columns)
)
expected_sin_matrix = numpy.reshape(
expected_sin_vector, (num_grid_rows, num_grid_columns)
)
latitude_matrix_deg, longitude_matrix_deg = (
nwp_model_utils.get_latlng_grid_point_matrices(
model_name=nwp_model_utils.RAP_MODEL_NAME,
grid_name=nwp_model_utils.NAME_OF_252GRID)
)
rotation_angle_cos_matrix, rotation_angle_sin_matrix = (
nwp_model_utils.get_wind_rotation_angles(
latitudes_deg=latitude_matrix_deg,
longitudes_deg=longitude_matrix_deg,
model_name=nwp_model_utils.RAP_MODEL_NAME)
)
cos_error_matrix = numpy.absolute(
rotation_angle_cos_matrix - expected_cos_matrix
)
sin_error_matrix = numpy.absolute(
rotation_angle_sin_matrix - expected_sin_matrix
)
self.assertTrue(
numpy.mean(cos_error_matrix) <= MAX_MEAN_SIN_OR_COS_ERROR
)
self.assertTrue(numpy.max(cos_error_matrix) <= MAX_MAX_SIN_OR_COS_ERROR)
self.assertTrue(
numpy.mean(sin_error_matrix) <= MAX_MEAN_SIN_OR_COS_ERROR
)
self.assertTrue(numpy.max(sin_error_matrix) <= MAX_MAX_SIN_OR_COS_ERROR)

if __name__ == '__main__':
unittest.main()
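The grid tests above reduce a registration check to the mean and max Euclidean error between predicted and expected coordinate matrices. That metric can be sketched in isolation; the matrices and the tolerance below are synthetic stand-ins, not NARR data:

```python
import numpy as np

x_matrix = np.array([[0.0, 1.0], [2.0, 3.0]])
y_matrix = np.array([[0.0, 0.0], [1.0, 1.0]])

# Pretend the "expected" grid is offset by 3 m in x and 4 m in y everywhere.
expected_x = x_matrix + 3.0
expected_y = y_matrix + 4.0

# Per-point Euclidean distance error; hypot(dx, dy) == sqrt(dx**2 + dy**2).
distance_error = np.hypot(x_matrix - expected_x, y_matrix - expected_y)

MAX_MEAN_DISTANCE_ERROR_METRES = 10.0  # hypothetical tolerance
assert np.mean(distance_error) <= MAX_MEAN_DISTANCE_ERROR_METRES
assert np.allclose(distance_error, 5.0)  # a 3-4-5 triangle at every point
```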

# ---------------------------------------------------------------------------
# tests/test_hello.py (blob de5fa67c9c5c4c6934271d80a8ec3b28d1b14862)
# Repo: jfigura/playground @ 7f31b8dd962590ab8b700bf949ffc93a88c88f2c
# License: MIT
# ---------------------------------------------------------------------------

from pytoo.hello import hello
def test_hello():
hello()

# ---------------------------------------------------------------------------
# tests/unit/command/test_prune_folder.py
#     (blob de83d420499af1bed39c727d28b0f250e2e56ce0)
# Repo: mhadam/clutchless @ be3791881749abe3d56c427e0bd61add54fcb615
# License: MIT
# ---------------------------------------------------------------------------

from pathlib import Path
from pytest_mock import MockerFixture
from clutchless.command.prune.folder import PruneFolderCommand
from clutchless.domain.torrent import MetainfoFile
from clutchless.service.torrent import PruneService
from tests.mock_fs import MockFilesystem
def test_prune_folder_run(mocker: MockerFixture):
fs = MockFilesystem({"some_path"})
files = {
MetainfoFile({"info_hash": "aaa", "name": "some_name"}, Path("/some_path"))
}
service: PruneService = mocker.Mock(spec=PruneService)
service.get_torrent_hashes.return_value = {"aaa", "bbb"}
command = PruneFolderCommand(service, fs, files)
command.run()
assert not fs.exists(Path("/some_path"))

def test_prune_folder_dry_run(mocker: MockerFixture):
fs = MockFilesystem({"some_path"})
files = {
MetainfoFile({"info_hash": "aaa", "name": "some_name"}, Path("/some_path"))
}
service: PruneService = mocker.Mock(spec=PruneService)
service.get_torrent_hashes.return_value = {"aaa", "bbb"}
command = PruneFolderCommand(service, fs, files)
command.dry_run()
assert fs.exists(Path("/some_path"))

def test_prune_folder_run_output(mocker: MockerFixture, capsys):
fs = MockFilesystem({"some_path"})
files = {
MetainfoFile({"info_hash": "aaa", "name": "some_name"}, Path("/some_path"))
}
service: PruneService = mocker.Mock(spec=PruneService)
service.get_torrent_hashes.return_value = {"aaa", "bbb"}
command = PruneFolderCommand(service, fs, files)
output = command.run()
output.display()
result = capsys.readouterr().out
assert (
result
== "\n".join(
["The following metainfo files were removed:", "some_name at /some_path"]
)
+ "\n"
)

def test_prune_folder_run_empty_output(mocker: MockerFixture, capsys):
fs = MockFilesystem({"some_path"})
files = {
MetainfoFile({"info_hash": "ccc", "name": "some_name"}, Path("/some_path"))
}
service: PruneService = mocker.Mock(spec=PruneService)
service.get_torrent_hashes.return_value = {"aaa", "bbb"}
command = PruneFolderCommand(service, fs, files)
output = command.run()
output.display()
result = capsys.readouterr().out
assert result == "No metainfo files were removed.\n"

def test_prune_folder_dry_run_output(mocker: MockerFixture, capsys):
fs = MockFilesystem({"some_path"})
files = {
MetainfoFile({"info_hash": "aaa", "name": "some_name"}, Path("/some_path"))
}
service: PruneService = mocker.Mock(spec=PruneService)
service.get_torrent_hashes.return_value = {"aaa", "bbb"}
command = PruneFolderCommand(service, fs, files)
output = command.dry_run()
output.dry_run_display()
result = capsys.readouterr().out
assert (
result
== "\n".join(
[
"The following metainfo files would be removed:",
"some_name at /some_path",
]
)
+ "\n"
)

def test_prune_folder_dry_run_empty_output(mocker: MockerFixture, capsys):
fs = MockFilesystem({"some_path"})
files = {
MetainfoFile({"info_hash": "ccc", "name": "some_name"}, Path("/some_path"))
}
service: PruneService = mocker.Mock(spec=PruneService)
service.get_torrent_hashes.return_value = {"aaa", "bbb"}
command = PruneFolderCommand(service, fs, files)
output = command.dry_run()
output.dry_run_display()
result = capsys.readouterr().out
assert result == "No metainfo files would be removed.\n"
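The output assertions above rely on pytest's capsys to capture exactly what `display()` prints. Outside pytest, the same check can be sketched with `contextlib.redirect_stdout`; the `display` function below is a hypothetical stand-in for the command output object, not clutchless code:

```python
import io
from contextlib import redirect_stdout


def display(names):
    # Hypothetical stand-in for the prune command's output.display().
    if not names:
        print("No metainfo files were removed.")
    else:
        print("The following metainfo files were removed:")
        for name in names:
            print(name)


buf = io.StringIO()
with redirect_stdout(buf):
    display(["some_name at /some_path"])

assert buf.getvalue() == (
    "The following metainfo files were removed:\n"
    "some_name at /some_path\n"
)
```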

# ---------------------------------------------------------------------------
# giscube_search/utils/apps.py (blob de8a979a5416ef7cbf92286af0376ef33fa8d3d4)
# Repo: aroiginfraplan/giscube-admin @ b7f3131b0186f847f3902df97f982cb288b16a49
# License: BSD-3-Clause
# ---------------------------------------------------------------------------

from django.apps import apps
def giscube_search_get_app_modules():
"""Return the Python module for each installed app"""
return [i.module for i in apps.get_app_configs()]
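The comprehension above maps each installed Django `AppConfig` to its imported Python module. A minimal stand-in that exercises the same shape without Django (the config objects and module names here are dummies):

```python
from types import SimpleNamespace, ModuleType

# Hypothetical stand-ins for django AppConfig objects, which expose .module.
mod_a = ModuleType("app_a")
mod_b = ModuleType("app_b")
configs = [SimpleNamespace(module=mod_a), SimpleNamespace(module=mod_b)]

modules = [c.module for c in configs]
assert [m.__name__ for m in modules] == ["app_a", "app_b"]
```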

# ---------------------------------------------------------------------------
# python/smqtk/representation/data_set/_plugins.py
#     (blob 721bd2fda2a7619972bc9fe0ff0565cb6c4c75eb)
# Repo: joshanderson-kw/SMQTK @ 594e7c733fe7f4e514a1a08a7343293a883a41fc
# License: BSD-3-Clause
# ---------------------------------------------------------------------------

from .file_set import DataFileSet  # noqa: F401
from .kvstore_backed import KVSDataSet # noqa: F401
from .memory_set import DataMemorySet # noqa: F401
from .psql import PostgresNativeDataSet # noqa: F401

# ---------------------------------------------------------------------------
# tests/src/gretel_client/unit/transformers/conftest.py
#     (blob 7235a106d6049520e5c36b846858a739199b7143)
# Repo: franccesco/gretel-python-client @ fd20dee07eba9657262edc902779142bf32c5b7c
# License: Apache-2.0
# ---------------------------------------------------------------------------

import pytest
@pytest.fixture(scope='session')
def records_conditional():
return [
{'record':
{
'first_name': 'Alex',
'last_name': 'Watson',
'user_id': '0003',
'dni': 'He loves 8.8.8.8 for DNS',
'city': 'San Diego',
'state': 'California',
'lat': 112.22134,
'lon': 135.76433,
'user_consent': '1'
},
'metadata': {
'gretel_id': '2732c7ed44a8402f899a01e52a931985',
'fields': {
'lat': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '112.22134'}]}},
'lon': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '135.76433'}]}}}}
},
{'record':
{
'first_name': 'Alex',
'last_name': 'Ehrath',
'user_id': '0013',
'dni': 'He loves 192.168.8.254 for DNS',
'city': 'San Marcos',
'state': 'California',
'lat': 35.659491,
'lon': 139.72785,
'user_consent': '0'
},
'metadata': {
'gretel_id': '2732c7ed44a8402f899a01e52a931985',
'fields': {
'lat': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '35.659491'}]}},
'lon': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '139.72785'}]}}}}
}
]

@pytest.fixture(scope='session')
def record_dirty_fpe_check():
return {'Address': '317 Massa. Av.',
'City': 'Didim',
'Country': 'Eritrea',
'Credit Card': '601128 2195205 818',
'Customer ID': '169/61*009 38-34',
'Date': '2019-10-08',
'Name': 'Grimes, Bo H.',
'Zipcode': '745558'
}

@pytest.fixture(scope='session')
def record_meta_data_check():
return {'record': {'Address': '317 Massa. Av.',
'City': 'Didim',
'Country': 'Eritrea',
'Credit Card': '6011282195205818',
'Customer ID': '16961009 3834',
'Date': '2019-10-08',
'Name': 'Grimes, Bo H.',
'Zipcode': '745558'},
'metadata': {
'gretel_id': '2732c7ed44a8402f899a01e52a931985',
'fields': {
'Country': {
'ner': {
'labels': [{
'start': 0,
'end': 7,
'label': 'location',
'score': None,
'source': 'spacy',
'text': 'Eritrea'
}]
}
},
'Name': {'ner': {
'labels': [
{'start': 0,
'end': 6,
'label': 'person_name',
'score': None,
'source': 'spacy',
'text': 'Grimes'}]}},
'Credit Card': {'ner': {
'labels': [
{'start': 0,
'end': 16,
'label': 'credit_card_number',
'score': 1,
'source': 'regex.credit_card',
'text': '6011282195205818'}]}},
'Date': {'ner': {
'labels': [
{'start': 0,
'end': 10,
'label': 'date',
'score': 1.0,
'source': 'datetime',
'text': '2019-10-08'}]}}}}}

@pytest.fixture(scope='session')
def records_date_tweak():
return [
{
'first_name': 'Alex',
'last_name': 'Watson',
'user_id': '0003',
'dni': 'He loves 8.8.8.8 for DNS',
'city': 'San Diego',
'state': 'California',
'created': '2016-06-17T18:58:41Z'
},
{
'first_name': 'Alex',
'last_name': 'Ehrath',
'user_id': '0013',
'dni': 'He loves 192.168.8.254 for DNS',
'city': 'San Marcos',
'state': 'California',
'created': '2016-06-17'
}
]
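Note that `records_date_tweak` deliberately mixes two 'created' formats: a full ISO-8601 timestamp and a bare date. Any date-tweaking transformer exercised by this fixture has to accept both. A minimal parsing sketch (the helper below is hypothetical, not part of gretel_client):

```python
from datetime import datetime


def parse_created(value):
    # The fixture mixes '2016-06-17T18:58:41Z' and '2016-06-17' styles.
    for fmt in ('%Y-%m-%dT%H:%M:%SZ', '%Y-%m-%d'):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f'unrecognized timestamp: {value!r}')


assert parse_created('2016-06-17T18:58:41Z') == datetime(2016, 6, 17, 18, 58, 41)
assert parse_created('2016-06-17') == datetime(2016, 6, 17)
```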
@pytest.fixture(scope='session')
def record_and_meta_2():
record = {
'summary': 'Alex Watson <alex@gretel.ai> works at Gretel. Alexander Ehrath used to work at Qualcomm.',
'dni': 'He loves 8.8.8.8 for DNS',
'city': 'San Diego',
'state': 'California',
'stuff': 'nothing labeled here',
'latitude': 112.221
}
meta = {
'gretel_id': '2732c7ed44a8402f899a01e52a931985',
'fields': {
'summary': {
'ner': {
'labels': [
{
'start': 0,
'end': 11,
'score': 0.8,
'text': 'Alex Watson',
'label': 'person_name',
},
{
'start': 46,
'end': 62,
'score': 0.8,
'text': 'Alexander Ehrath',
'label': 'person_name',
},
{
'start': 13,
'end': 27,
'score': 0.9,
'text': 'alex@gretel.ai',
'label': 'email_address',
},
{
'start': 38,
'end': 44,
'score': 0.7,
'text': 'Gretel',
'label': 'company_name',
},
{
'start': 79,
'end': 87,
'score': 0.8,
'text': 'Qualcomm',
'label': 'company_name',
}
]
}
},
'dni': {
'ner': {
'labels': [
{
'start': 9,
'end': 16,
'score': 1.0,
'text': '8.8.8.8',
'label': 'ip_address'
}
]
}
},
'city': {
'ner': {
'labels': [
{
'start': 0,
'end': 9,
'score': 1.0,
'text': 'San Diego',
'label': 'location_city'
}
]
}
},
'state': {
'ner': {
'labels': [
{
'start': 0,
'end': 10,
'score': 1.0,
'text': 'California',
'label': 'location_state'
}
]
}
},
'latitude': {
'ner': {
'labels': [
{
'start': 0,
'end': 7,
'score': 1,
'text': '112.221',
'label': 'latitude'
}
]
}
}
}
}
return {'record': record, 'metadata': meta}
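Each NER label in these fixtures carries half-open `start`/`end` offsets that index into the field's string value, so `value[start:end]` should reproduce the label's `text`. A quick consistency sketch using the 'dni' field from the fixture above:

```python
# One field/label pair copied from the record_and_meta_2 fixture.
field_value = 'He loves 8.8.8.8 for DNS'
label = {'start': 9, 'end': 16, 'score': 1.0,
         'text': '8.8.8.8', 'label': 'ip_address'}

# start/end are half-open slice indices into the field value.
assert field_value[label['start']:label['end']] == label['text']
```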
@pytest.fixture(scope='session')
def safecast_test_bucket():
records = {
'records':
[{'id': 'rrvewdk3dwb3',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10002', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10002', 'payload.device': 10002,
'payload.when_captured': '2020-03-10T23:58:55Z', 'payload.loc_lat': 35.659491,
'payload.loc_lon': 139.72785, 'payload.loc_alt': 92, 'payload.loc_olc': '8Q7XMP5H+Q4X',
'payload.env_temp': 21.1, 'payload.bat_voltage': 7.64, 'payload.dev_comms_failures': 534,
'payload.dev_restarts': 648, 'payload.dev_free_memory': 53636, 'payload.dev_ntp_count': 1,
'payload.dev_last_failure': 'FAILsdcard', 'payload.service_uploaded': '2020-03-10T23:58:55Z',
'payload.service_transport': 'pointcast:122.212.234.10',
'payload.service_md5': 'abf7a122a5a0c20588d239199c8c6d7f',
'payload.service_handler': 'i-051cab8ec0fe30bcd', 'payload.ip_address': '122.212.234.10',
'payload.ip_country_code': 'JP', 'payload.ip_city': 'Shibuya',
'payload.ip_country_name': 'Japan', 'payload.ip_subdivision': 'Tokyo',
'payload.location': '35.659491,139.72785',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 24, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 14, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'latitude', 'score': 1, 'source': 'regex',
'text': '35.659491'}]}}, 'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '139.72785'}]}}}}},
{'id': '6gwrzlk665hv',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10002',
'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10002',
'payload.device': 10002,
'payload.when_captured': '2020-03-10T23:58:54Z',
'payload.loc_lat': 35.659491,
'payload.loc_lon': 139.72785, 'payload.loc_alt': 92,
'payload.loc_olc': '8Q7XMP5H+Q4X',
'payload.lnd_7128ec': 25,
'payload.service_uploaded': '2020-03-10T23:58:54Z',
'payload.service_transport': 'pointcast:122.212.234.10',
'payload.service_md5': 'cb93606463ba99994f832177e39dc6a5',
'payload.service_handler': 'i-051a2a353509414f0',
'payload.ip_address': '122.212.234.10',
'payload.ip_country_code': 'JP',
'payload.ip_city': 'Shibuya',
'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': 'Tokyo',
'payload.location': '35.659491,139.72785',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {
'labels': [{'start': 10, 'end': 24, 'label': 'ip_address',
'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}},
'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 14,
'label': 'ip_address', 'score': 1,
'source': 'regex',
'text': '122.212.234.10'}]}},
'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '35.659491'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '139.72785'}]}}}}},
{'id': 'p6vn0zp1yja1',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10002', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10002', 'payload.device': 10002,
'payload.when_captured': '2020-03-10T23:58:54Z', 'payload.loc_lat': 35.659491,
'payload.loc_lon': 139.72785, 'payload.loc_alt': 92, 'payload.loc_olc': '8Q7XMP5H+Q4X',
'payload.lnd_7318u': 12, 'payload.service_uploaded': '2020-03-10T23:58:54Z',
'payload.service_transport': 'pointcast:122.212.234.10',
'payload.service_md5': '72f2b7cf2132bcc50ea68a2b6bdb6e2d',
'payload.service_handler': 'i-0c65ac97805549e0d', 'payload.ip_address': '122.212.234.10',
'payload.ip_country_code': 'JP', 'payload.ip_city': 'Shibuya',
'payload.ip_country_name': 'Japan', 'payload.ip_subdivision': 'Tokyo',
'payload.location': '35.659491,139.72785',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 24, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 14, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'latitude', 'score': 1, 'source': 'regex', 'text': '35.659491'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '139.72785'}]}}}}}
]
}
return {'data': records}
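Consumers of these bucket fixtures typically walk `records[*]['metadata']['fields'][*]['ner']['labels']` to find labels of a given type. A small traversal sketch over a trimmed record (structure copied from the fixture above, with most fields omitted):

```python
record = {
    'id': 'rrvewdk3dwb3',
    'metadata': {'fields': {
        'payload.ip_address': {'ner': {'labels': [
            {'start': 0, 'end': 14, 'label': 'ip_address',
             'score': 1, 'source': 'regex', 'text': '122.212.234.10'}]}},
        'payload.loc_lat': {'ner': {'labels': [
            {'start': 0, 'end': 9, 'label': 'latitude',
             'score': 1, 'source': 'regex', 'text': '35.659491'}]}},
    }},
}


def labels_by_type(rec, wanted):
    # Yield (field_name, label_text) for every NER label of the wanted type.
    for field, meta in rec['metadata']['fields'].items():
        for label in meta['ner']['labels']:
            if label['label'] == wanted:
                yield field, label['text']


assert list(labels_by_type(record, 'ip_address')) == [
    ('payload.ip_address', '122.212.234.10')
]
```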
@pytest.fixture(scope='session')
def safecast_test_bucket2():
records = {
'records':
[{'id': 'rrvewdk3dwb3',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10002', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10002', 'payload.device': 10002,
'payload.when_captured': '2020-03-10T23:58:55Z', 'payload.loc_lat': 35.659491,
'payload.loc_lon': 139.72785, 'payload.loc_alt': 92, 'payload.loc_olc': '8Q7XMP5H+Q4X',
'payload.env_temp': 21.1, 'payload.bat_voltage': 7.64, 'payload.dev_comms_failures': 534,
'payload.dev_restarts': 648, 'payload.dev_free_memory': 53636, 'payload.dev_ntp_count': 1,
'payload.dev_last_failure': 'FAILsdcard', 'payload.service_uploaded': '2020-03-10T23:58:55Z',
'payload.service_transport': 'pointcast:122.212.234.10',
'payload.service_md5': 'abf7a122a5a0c20588d239199c8c6d7f',
'payload.service_handler': 'i-051cab8ec0fe30bcd', 'payload.ip_address': '122.212.234.10',
'payload.ip_country_code': 'JP', 'payload.ip_city': 'Shibuya',
'payload.ip_country_name': 'Japan', 'payload.ip_subdivision': 'Tokyo',
'payload.location': '35.659491,139.72785',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 24, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 14, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'latitude', 'score': 1, 'source': 'regex',
'text': '35.659491'}]}}, 'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '139.72785'}]}}}}},
{'id': '6gwrzlk665hv',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10002',
'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10002',
'payload.device': 10002,
'payload.when_captured': '2020-03-10T23:58:54Z',
'payload.loc_lat': 35.659491,
'payload.loc_lon': 139.72785, 'payload.loc_alt': 92,
'payload.loc_olc': '8Q7XMP5H+Q4X',
'payload.lnd_7128ec': 25,
'payload.service_uploaded': '2020-03-10T23:58:54Z',
'payload.service_transport': 'pointcast:122.212.234.10',
'payload.service_md5': 'cb93606463ba99994f832177e39dc6a5',
'payload.service_handler': 'i-051a2a353509414f0',
'payload.ip_address': '122.212.234.10',
'payload.ip_country_code': 'JP',
'payload.ip_city': 'Shibuya',
'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': 'Tokyo',
'payload.location': '35.659491,139.72785',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {
'labels': [{'start': 10, 'end': 24, 'label': 'ip_address',
'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}},
'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 14,
'label': 'ip_address', 'score': 1,
'source': 'regex',
'text': '122.212.234.10'}]}},
'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '35.659491'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '139.72785'}]}}}}},
{'id': 'p6vn0zp1yja1',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10002', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10002', 'payload.device': 10002,
'payload.when_captured': '2020-03-10T23:58:54Z', 'payload.loc_lat': 35.659491,
'payload.loc_lon': 139.72785, 'payload.loc_alt': 92, 'payload.loc_olc': '8Q7XMP5H+Q4X',
'payload.lnd_7318u': 12, 'payload.service_uploaded': '2020-03-10T23:58:54Z',
'payload.service_transport': 'pointcast:122.212.234.10',
'payload.service_md5': '72f2b7cf2132bcc50ea68a2b6bdb6e2d',
'payload.service_handler': 'i-0c65ac97805549e0d', 'payload.ip_address': '122.212.234.10',
'payload.ip_country_code': 'JP', 'payload.ip_city': 'Shibuya',
'payload.ip_country_name': 'Japan', 'payload.ip_subdivision': 'Tokyo',
'payload.location': '35.659491,139.72785',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 24, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 14, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '122.212.234.10'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'latitude', 'score': 1, 'source': 'regex', 'text': '35.659491'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '139.72785'}]}}}}},
{'id': 'lnj9o0xo6euz',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'geigiecast:62007',
'payload.device_class': 'geigiecast',
'payload.device_sn': 'bGeigiecast #62007',
'payload.device': 62007,
'payload.when_captured': '2020-03-10T23:58:50Z',
'payload.loc_lat': 34.48273, 'payload.loc_lon': 136.16316,
'payload.loc_olc': '8Q6RF5M7+37V', 'payload.lnd_7318u': 44,
'payload.dev_test': True,
'payload.service_uploaded': '2020-03-10T23:58:51Z',
'payload.service_transport': 'geigiecast:61.205.85.144',
'payload.service_md5': 'b5b622d94f501074111ff6051a833e79',
'payload.service_handler': 'i-051cab8ec0fe30bcd',
'payload.ip_address': '61.205.85.144',
'payload.ip_country_code': 'JP',
'payload.ip_city': 'Kashihara-shi',
'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': 'Nara',
'payload.location': '34.48273,136.16316',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {
'labels': [
{'start': 11, 'end': 24, 'label': 'ip_address', 'score': 1,
'source': 'regex', 'text': '61.205.85.144'}]}},
'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 13,
'label': 'ip_address', 'score': 1,
'source': 'regex',
'text': '61.205.85.144'}]}},
'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 8,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '34.48273'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '136.16316'}]}}}}},
{'id': 'o6vnl2lrypt1',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10042', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10042', 'payload.device': 10042,
'payload.when_captured': '2020-03-10T23:58:45Z', 'payload.loc_lat': 37.7233303,
'payload.loc_lon': 140.4767968, 'payload.loc_alt': 145, 'payload.loc_olc': '8R92PFFG+8PM',
'payload.env_temp': 23.8, 'payload.bat_voltage': 5.03, 'payload.dev_comms_failures': 5990,
'payload.dev_restarts': 1542, 'payload.dev_free_memory': 50588,
'payload.dev_last_failure': 'no EPOCH', 'payload.service_uploaded': '2020-03-10T23:58:44Z',
'payload.service_transport': 'pointcast:103.67.223.44',
'payload.service_md5': '044cb5d5e2cd9a873d35c7bce29ddd8d',
'payload.service_handler': 'i-051a2a353509414f0', 'payload.ip_address': '103.67.223.44',
'payload.ip_country_code': 'JP', 'payload.ip_city': None, 'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': None, 'payload.location': '37.7233303,140.4767968',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 23, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '103.67.223.44'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 13, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '103.67.223.44'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 10, 'label': 'latitude', 'score': 1, 'source': 'regex',
'text': '37.7233303'}]}}, 'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 11, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '140.4767968'}]}}}}},
{'id': '6gwrzlzpkjhv',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10042',
'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10042',
'payload.device': 10042,
'payload.when_captured': '2020-03-10T23:58:39Z',
'payload.loc_lat': 37.7233303,
'payload.loc_lon': 140.4767968, 'payload.loc_alt': 145,
'payload.loc_olc': '8R92PFFG+8PM',
'payload.lnd_7128ec': 15,
'payload.service_uploaded': '2020-03-10T23:58:39Z',
'payload.service_transport': 'pointcast:103.67.223.44',
'payload.service_md5': 'e7aa396b744d524f184830155a77ca97',
'payload.service_handler': 'i-051cab8ec0fe30bcd',
'payload.ip_address': '103.67.223.44',
'payload.ip_country_code': 'JP', 'payload.ip_city': None,
'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': None,
'payload.location': '37.7233303,140.4767968',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {
'labels': [
{'start': 10, 'end': 23, 'label': 'ip_address', 'score': 1,
'source': 'regex', 'text': '103.67.223.44'}]}},
'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 13,
'label': 'ip_address', 'score': 1,
'source': 'regex',
'text': '103.67.223.44'}]}},
'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 10,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '37.7233303'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 11,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '140.4767968'}]}}}}},
{'id': 'kjzno9o6j9hz',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:20105', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #20105', 'payload.device': 20105,
'payload.when_captured': '2020-03-10T23:58:43Z', 'payload.loc_lat': 38.3151,
'payload.loc_lon': -123.0752, 'payload.loc_olc': '84CR8W8F+2WV', 'payload.lnd_78017w': 95,
'payload.service_uploaded': '2020-03-10T23:58:43Z',
'payload.service_transport': 'pointcast:12.235.42.3',
'payload.service_md5': 'afa4110616de9bb1b9cd3930eef9b50e',
'payload.service_handler': 'i-0c65ac97805549e0d', 'payload.ip_address': '12.235.42.3',
'payload.ip_country_code': 'US', 'payload.ip_city': 'Bodega Bay',
'payload.ip_country_name': 'United States', 'payload.ip_subdivision': 'California',
'payload.location': '38.3151,-123.0752',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 21, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '12.235.42.3'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 11, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '12.235.42.3'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 7, 'label': 'latitude', 'score': 1, 'source': 'regex', 'text': '38.3151'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '-123.0752'}]}}}}},
{'id': 'y6yp272rglb7',
'ingest_time': 'foo',
'data':
{'payload.device_urn': 'pointcast:10024',
'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10024',
'payload.device': 10024,
'payload.when_captured': '2020-03-10T23:58:33Z',
'payload.loc_lat': 37.54562, 'payload.loc_lon': 140.398995,
'payload.loc_alt': 238, 'payload.loc_olc': '8R92G9WX+6HX',
'payload.env_temp': 25.6, 'payload.bat_voltage': 8.36,
'payload.dev_comms_failures': 1155,
'payload.dev_restarts': 501,
'payload.dev_free_memory': 53348,
'payload.dev_ntp_count': 1, 'payload.dev_last_failure': '',
'payload.service_uploaded': '2020-03-10T23:58:33Z',
'payload.service_transport': 'pointcast:121.95.25.8',
'payload.service_md5': '07c33e23ea4c2d96457a5400b299abc5',
'payload.service_handler': 'i-0c65ac97805549e0d',
'payload.ip_address': '121.95.25.8',
'payload.ip_country_code': 'JP', 'payload.ip_city': None,
'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': None,
'payload.location': '37.54562,140.398995',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {
'labels': [
{'start': 10, 'end': 21, 'label': 'ip_address', 'score': 1,
'source': 'regex', 'text': '121.95.25.8'}]}},
'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 11,
'label': 'ip_address', 'score': 1,
'source': 'regex',
'text': '121.95.25.8'}]}},
'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 8,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '37.54562'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 10,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '140.398995'}]}}}}},
{'id': '5gxrz17lvrt2',
'ingest_time': 'foo',
'data': {'payload.device_urn': 'ngeigie:74', 'payload.device_class': 'ngeigie',
'payload.device_sn': 'nGeigie #74', 'payload.device': 74,
'payload.when_captured': '2020-03-10T23:58:34Z',
'payload.loc_lat': 34.995197, 'payload.loc_lon': 135.764331,
'payload.loc_olc': '8Q6QXQW7+3PG', 'payload.lnd_7318u': 41,
'payload.service_uploaded': '2020-03-10T23:58:34Z',
'payload.service_transport': 'ngeigie:107.161.164.166',
'payload.service_md5': 'be9f5f5dcc6e8f308e5f44ccf2496eda',
'payload.service_handler': 'i-051a2a353509414f0',
'payload.ip_address': '107.161.164.166', 'payload.ip_country_code': 'US',
'payload.ip_city': 'New York', 'payload.ip_country_name': 'United States',
'payload.ip_subdivision': 'New York',
'payload.location': '34.995197,135.764331',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {'labels': [
{'start': 8, 'end': 23, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '107.161.164.166'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 15, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '107.161.164.166'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 9, 'label': 'latitude', 'score': 1, 'source': 'regex', 'text': '34.995197'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 10, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '135.764331'}]}}}}},
{'id': 'p6vn0z4ky9h1',
'ingest_time': 'foo',
'data': {
'payload.device_urn': 'pointcast:10042', 'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10042', 'payload.device': 10042,
'payload.when_captured': '2020-03-10T23:58:33Z', 'payload.loc_lat': 37.7233303,
'payload.loc_lon': 140.4767968, 'payload.loc_alt': 145, 'payload.loc_olc': '8R92PFFG+8PM',
'payload.lnd_7318u': 43, 'payload.service_uploaded': '2020-03-10T23:58:32Z',
'payload.service_transport': 'pointcast:103.67.223.44',
'payload.service_md5': '325d237c0f2fe8624e11d7baa00828a2',
'payload.service_handler': 'i-051a2a353509414f0', 'payload.ip_address': '103.67.223.44',
'payload.ip_country_code': 'JP', 'payload.ip_city': None, 'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': None, 'payload.location': '37.7233303,140.4767968',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'}, 'metadata': {'fields': {
'payload.service_transport': {'ner': {'labels': [
{'start': 10, 'end': 23, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '103.67.223.44'}]}}, 'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 13, 'label': 'ip_address', 'score': 1, 'source': 'regex',
'text': '103.67.223.44'}]}}, 'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 10, 'label': 'latitude', 'score': 1, 'source': 'regex',
'text': '37.7233303'}]}}, 'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 11, 'label': 'longitude', 'score': 1, 'source': 'regex',
'text': '140.4767968'}]}}}}},
{'id': 'g7v9gjl1xoh5',
'ingest_time': 'foo',
'data': {'payload.device_urn': 'pointcast:10024',
'payload.device_class': 'pointcast',
'payload.device_sn': 'Pointcast #10024',
'payload.device': 10024,
'payload.when_captured': '2020-03-10T23:58:32Z',
'payload.loc_lat': 37.54562,
'payload.loc_lon': 140.398995,
'payload.loc_alt': 238, 'payload.loc_olc': '8R92G9WX+6HX',
'payload.lnd_7318u': 45,
'payload.service_uploaded': '2020-03-10T23:58:32Z',
'payload.service_transport': 'pointcast:121.95.25.8',
'payload.service_md5': '0c8d1523969e1834cfa56b3472a30e31',
'payload.service_handler': 'i-051cab8ec0fe30bcd',
'payload.ip_address': '121.95.25.8',
'payload.ip_country_code': 'JP', 'payload.ip_city': None,
'payload.ip_country_name': 'Japan',
'payload.ip_subdivision': None,
'payload.location': '37.54562,140.398995',
'origin': 'arn:aws:sns:us-west-2:985752656544:ingest-measurements-prd'},
'metadata': {'fields': {'payload.service_transport': {'ner': {
'labels': [
{'start': 10, 'end': 21, 'label': 'ip_address', 'score': 1,
'source': 'regex', 'text': '121.95.25.8'}]}},
'payload.ip_address': {'ner': {'labels': [
{'start': 0, 'end': 11,
'label': 'ip_address', 'score': 1,
'source': 'regex',
'text': '121.95.25.8'}]}},
'payload.loc_lat': {'ner': {'labels': [
{'start': 0, 'end': 8,
'label': 'latitude', 'score': 1,
'source': 'regex',
'text': '37.54562'}]}},
'payload.loc_lon': {'ner': {'labels': [
{'start': 0, 'end': 10,
'label': 'longitude', 'score': 1,
'source': 'regex',
'text': '140.398995'}]}}}}}
]
}
return {'data': records}
| 55.019257 | 120 | 0.430861 | 3,505 | 39,999 | 4.785735 | 0.093866 | 0.04507 | 0.057589 | 0.061822 | 0.857935 | 0.840527 | 0.8199 | 0.787946 | 0.753011 | 0.744426 | 0 | 0.140671 | 0.39556 | 39,999 | 726 | 121 | 55.095041 | 0.553129 | 0 | 0 | 0.651622 | 0 | 0.00141 | 0.427236 | 0.118403 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009873 | false | 0 | 0.00141 | 0.005642 | 0.021157 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a0f950e5e3607982753914db99c67d586eb130ca | 48 | py | Python | africanus/model/coherency/__init__.py | JoshVStaden/codex-africanus | 4a38994431d51510b1749fa0e4b8b6190b8b530f | [
"BSD-3-Clause"
] | 13 | 2018-04-06T09:36:13.000Z | 2021-04-13T13:11:00.000Z | africanus/model/coherency/__init__.py | JoshVStaden/codex-africanus | 4a38994431d51510b1749fa0e4b8b6190b8b530f | [
"BSD-3-Clause"
] | 153 | 2018-03-28T14:13:48.000Z | 2022-02-03T07:49:17.000Z | africanus/model/coherency/__init__.py | JoshVStaden/codex-africanus | 4a38994431d51510b1749fa0e4b8b6190b8b530f | [
"BSD-3-Clause"
] | 14 | 2018-03-29T13:30:52.000Z | 2021-06-12T02:56:55.000Z | # flake8: noqa
from .conversion import convert
| 12 | 31 | 0.770833 | 6 | 48 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.166667 | 48 | 3 | 32 | 16 | 0.9 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d00ffeb24f296ab9a5978bd8878a0c43e2130c8 | 127 | py | Python | daily/views.py | juthor/LikeLion_Hackaton_AltongE | 97a0380d8fd5a6909d006d6ceae349b14a05b13f | [
"MIT"
] | 1 | 2019-12-11T15:26:11.000Z | 2019-12-11T15:26:11.000Z | daily/views.py | juthor/LikeLion_Hackaton_AltongE | 97a0380d8fd5a6909d006d6ceae349b14a05b13f | [
"MIT"
] | null | null | null | daily/views.py | juthor/LikeLion_Hackaton_AltongE | 97a0380d8fd5a6909d006d6ceae349b14a05b13f | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def daily(request):
return render(request, 'daily.html') | 25.4 | 40 | 0.732283 | 17 | 127 | 5.470588 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173228 | 127 | 5 | 40 | 25.4 | 0.885714 | 0.181102 | 0 | 0 | 0 | 0 | 0.10101 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9d3d8443aa121dec218463216a78f641047a50e8 | 478 | py | Python | tests/unit/test_main.py | andrewnsk/cover_pass | 6110a6d39c286958b1a2ab00e6e1bcfc4ef3de1b | [
"MIT"
] | null | null | null | tests/unit/test_main.py | andrewnsk/cover_pass | 6110a6d39c286958b1a2ab00e6e1bcfc4ef3de1b | [
"MIT"
] | null | null | null | tests/unit/test_main.py | andrewnsk/cover_pass | 6110a6d39c286958b1a2ab00e6e1bcfc4ef3de1b | [
"MIT"
] | null | null | null | import unittest
from coverpass.main import *
class Test(unittest.TestCase):
def test_print_me(self):
self.assertEqual(print_me(5), 15)
self.assertEqual(print_me(8), 18)
self.assertEqual(print_me(20), 30)
self.assertEqual(print_me(0), 10)
def test_return_val(self):
self.assertEqual(return_val(1), 2)
def test_return_val2(self):
self.assertEqual(return_val2(1), 2)
if __name__ == "__main__":
unittest.main()
| 21.727273 | 43 | 0.66318 | 66 | 478 | 4.5 | 0.424242 | 0.30303 | 0.26936 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050532 | 0.213389 | 478 | 21 | 44 | 22.761905 | 0.739362 | 0 | 0 | 0 | 0 | 0 | 0.016771 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0.214286 | false | 0.071429 | 0.142857 | 0 | 0.428571 | 0.357143 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
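The `coverpass.main` module under test is not part of this excerpt. A minimal stand-in that satisfies the assertions above could look as follows; the function names come from the tests, but the bodies are inferred guesses, since each assertion pins down only a few input/output pairs.

```python
# Hypothetical stand-in for coverpass.main, reconstructed from the test
# assertions only; the real module may differ.

def print_me(x):
    # All asserted pairs (5 -> 15, 8 -> 18, 20 -> 30, 0 -> 10) fit x + 10.
    return x + 10

def return_val(x):
    # Only return_val(1) == 2 is asserted; x + 1 is one consistent choice.
    return x + 1

def return_val2(x):
    # Likewise under-constrained; x + 1 also satisfies return_val2(1) == 2.
    return x + 1
```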
c205818113c6eeb43224563c8b96aa4ff4c255cb | 10,185 | py | Python | tests/test_compose.py | amarshall/boiga | faef732a59b7308b0f3be0d1d1ea047a405d641d | [
"MIT"
] | 2 | 2018-10-30T13:17:23.000Z | 2018-11-19T06:39:49.000Z | tests/test_compose.py | amarshall/boiga | faef732a59b7308b0f3be0d1d1ea047a405d641d | [
"MIT"
] | 2 | 2020-03-24T16:15:45.000Z | 2020-03-31T00:02:27.000Z | tests/test_compose.py | amarshall/boiga | faef732a59b7308b0f3be0d1d1ea047a405d641d | [
"MIT"
] | null | null | null | from boiga.compose import (
compose2, compose3, compose4, compose5, compose6, compose7
)
from tests.typecheck_helper import TypecheckResult, typecheck
# Ideally these would be @dataclass, but dataclasses require Python 3.7+
class _Flow0:
a: int
def __init__(self, a: int) -> None: self.a = a # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow0) and (self.a == other.a)
class _Flow1: # noqa: E302
b: int
def __init__(self, b: int) -> None: self.b = b # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow1) and (self.b == other.b)
class _Flow2: # noqa: E302
c: int
def __init__(self, c: int) -> None: self.c = c # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow2) and (self.c == other.c)
class _Flow3: # noqa: E302
d: int
def __init__(self, d: int) -> None: self.d = d # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow3) and (self.d == other.d)
class _Flow4: # noqa: E302
e: int
def __init__(self, e: int) -> None: self.e = e # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow4) and (self.e == other.e)
class _Flow5: # noqa: E302
f: int
def __init__(self, f: int) -> None: self.f = f # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow5) and (self.f == other.f)
class _Flow6: # noqa: E302
g: int
def __init__(self, g: int) -> None: self.g = g # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow6) and (self.g == other.g)
class _Flow7: # noqa: E302
h: int
def __init__(self, h: int) -> None: self.h = h # noqa: E301
def __eq__(self, other: object) -> bool: # noqa: E301
return isinstance(other, _Flow7) and (self.h == other.h)
def _flow0(x: _Flow0) -> _Flow1: return _Flow1(x.a)
def _flow1(x: _Flow1) -> _Flow2: return _Flow2(x.b) # noqa: E302
def _flow2(x: _Flow2) -> _Flow3: return _Flow3(x.c) # noqa: E302
def _flow3(x: _Flow3) -> _Flow4: return _Flow4(x.d) # noqa: E302
def _flow4(x: _Flow4) -> _Flow5: return _Flow5(x.e) # noqa: E302
def _flow5(x: _Flow5) -> _Flow6: return _Flow6(x.f) # noqa: E302
def _flow6(x: _Flow6) -> _Flow7: return _Flow7(x.g) # noqa: E302
imports = [
'from boiga.compose import compose2, compose3, compose4, compose5, compose6, compose7',
'from tests.test_compose import _flow0, _flow1, _flow2, _flow3, _flow4, _flow5, _flow6',
'from tests.test_compose import _Flow0, _Flow1, _Flow2, _Flow3, _Flow4, _Flow5, _Flow6, _Flow7',
]
def typecheck_flow(code: str) -> TypecheckResult:
return typecheck([*imports, code])
class TestCompose2:
def test_valid_flow_with_call(self) -> None:
assert compose2(_flow0, _flow1)(_Flow0(42)) == _Flow2(42)
def test_valid_flow_typechecks(self) -> None:
result = typecheck_flow('compose2(_flow0, _flow1)')
assert result.ok
def test_invalid_flow_fns(self) -> None:
result = typecheck_flow('compose2(_flow0, _flow0)')
assert not result.ok
assert result.errors == [
'<string>:4: error: Cannot infer type argument 2 of "compose2"'
]
def test_invalid_flow_call(self) -> None:
result = typecheck_flow('compose2(_flow0, _flow1)(_Flow2(42))')
assert not result.ok
assert result.errors == [
'<string>:4: error: Argument 1 has incompatible type "_Flow2"; expected "_Flow0"'
]
def test_invalid_flow_result(self) -> None:
result = typecheck_flow('compose2(_flow0, _flow1)(_Flow0(42)).foo')
assert not result.ok
assert result.errors == [
'<string>:4: error: "_Flow2" has no attribute "foo"'
]
class TestCompose3:
def test_valid_flow_with_call(self) -> None:
flow = compose3(_flow0, _flow1, _flow2)
assert flow(_Flow0(42)) == _Flow3(42)
def test_valid_flow_typechecks(self) -> None:
result = typecheck_flow('compose3(_flow0, _flow1, _flow2)')
assert result.ok
def test_invalid_flow_fns(self) -> None:
result = typecheck_flow('compose3(_flow0, _flow1, _flow0)')
assert not result.ok
assert result.errors == [
'<string>:4: error: Cannot infer type argument 3 of "compose3"'
]
def test_invalid_flow_call(self) -> None:
result = typecheck_flow('compose3(_flow0, _flow1, _flow2)(_Flow2(42))')
assert not result.ok
assert result.errors == [
'<string>:4: error: Argument 1 has incompatible type "_Flow2"; expected "_Flow0"'
]
def test_invalid_flow_result(self) -> None:
result = typecheck_flow('compose3(_flow0, _flow1, _flow2)(_Flow0(42)).foo')
assert not result.ok
assert result.errors == [
'<string>:4: error: "_Flow3" has no attribute "foo"'
]
class TestCompose4:
def test_valid_flow_with_call(self) -> None:
flow = compose4(_flow0, _flow1, _flow2, _flow3)
assert flow(_Flow0(42)) == _Flow4(42)
def test_valid_flow_typechecks(self) -> None:
result = typecheck_flow('compose4(_flow0, _flow1, _flow2, _flow3)')
assert result.ok
def test_invalid_flow_fns(self) -> None:
result = typecheck_flow('compose4(_flow0, _flow1, _flow2, _flow0)')
assert not result.ok
assert result.errors == [
'<string>:4: error: Cannot infer type argument 4 of "compose4"'
]
def test_invalid_flow_call(self) -> None:
result = typecheck_flow('compose4(_flow0, _flow1, _flow2, _flow3)(_Flow2(42))')
assert not result.ok
assert result.errors == [
'<string>:4: error: Argument 1 has incompatible type "_Flow2"; expected "_Flow0"'
]
def test_invalid_flow_result(self) -> None:
result = typecheck_flow('compose4(_flow0, _flow1, _flow2, _flow3)(_Flow0(42)).foo')
assert not result.ok
assert result.errors == [
'<string>:4: error: "_Flow4" has no attribute "foo"'
]
class TestCompose5:
def test_valid_flow_with_call(self) -> None:
flow = compose5(_flow0, _flow1, _flow2, _flow3, _flow4)
assert flow(_Flow0(42)) == _Flow5(42)
def test_valid_flow_typechecks(self) -> None:
result = typecheck_flow('compose5(_flow0, _flow1, _flow2, _flow3, _flow4)')
assert result.ok
def test_invalid_flow_fns(self) -> None:
result = typecheck_flow('compose5(_flow0, _flow1, _flow2, _flow3, _flow0)')
assert not result.ok
assert result.errors == [
'<string>:4: error: Cannot infer type argument 5 of "compose5"'
]
def test_invalid_flow_call(self) -> None:
result = typecheck_flow(
'compose5(_flow0, _flow1, _flow2, _flow3, _flow4)(_Flow2(42))')
assert not result.ok
assert result.errors == [
'<string>:4: error: Argument 1 has incompatible type "_Flow2"; expected "_Flow0"'
]
def test_invalid_flow_result(self) -> None:
result = typecheck_flow(
'compose5(_flow0, _flow1, _flow2, _flow3, _flow4)(_Flow0(42)).foo')
assert not result.ok
assert result.errors == [
'<string>:4: error: "_Flow5" has no attribute "foo"'
]
class TestCompose6:
def test_valid_flow_with_call(self) -> None:
flow = compose6(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5)
assert flow(_Flow0(42)) == _Flow6(42)
def test_valid_flow_typechecks(self) -> None:
result = typecheck_flow('compose6(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5)')
assert result.ok
def test_invalid_flow_fns(self) -> None:
result = typecheck_flow('compose6(_flow0, _flow1, _flow2, _flow3, _flow4, _flow0)')
assert not result.ok
assert result.errors == [
'<string>:4: error: Cannot infer type argument 6 of "compose6"'
]
def test_invalid_flow_call(self) -> None:
result = typecheck_flow(
'compose6(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5)(_Flow2(42))')
assert not result.ok
assert result.errors == [
'<string>:4: error: Argument 1 has incompatible type "_Flow2"; expected "_Flow0"'
]
def test_invalid_flow_result(self) -> None:
result = typecheck_flow(
'compose6(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5)(_Flow0(42)).foo')
assert not result.ok
assert result.errors == [
'<string>:4: error: "_Flow6" has no attribute "foo"'
]
class TestCompose7:
def test_valid_flow_with_call(self) -> None:
flow = compose7(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5, _flow6)
assert flow(_Flow0(42)) == _Flow7(42)
def test_valid_flow_typechecks(self) -> None:
result = typecheck_flow('compose7(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5, _flow6)')
assert result.ok
def test_invalid_flow_fns(self) -> None:
result = typecheck_flow('compose7(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5, _flow0)')
assert not result.ok
assert result.errors == [
'<string>:4: error: Cannot infer type argument 7 of "compose7"'
]
def test_invalid_flow_call(self) -> None:
result = typecheck_flow(
'compose7(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5, _flow6)(_Flow2(42))')
assert not result.ok
assert result.errors == [
'<string>:4: error: Argument 1 has incompatible type "_Flow2"; expected "_Flow0"'
]
def test_invalid_flow_result(self) -> None:
result = typecheck_flow(
'compose7(_flow0, _flow1, _flow2, _flow3, _flow4, _flow5, _flow6)(_Flow0(42)).foo')
assert not result.ok
assert result.errors == [
'<string>:4: error: "_Flow7" has no attribute "foo"'
]
| 39.324324 | 100 | 0.623957 | 1,271 | 10,185 | 4.666404 | 0.076318 | 0.053954 | 0.068285 | 0.09307 | 0.791941 | 0.769179 | 0.764289 | 0.756365 | 0.732254 | 0.679649 | 0 | 0.060084 | 0.253216 | 10,185 | 258 | 101 | 39.476744 | 0.719695 | 0.036426 | 0 | 0.431925 | 0 | 0.004695 | 0.268425 | 0.014106 | 0 | 0 | 0 | 0 | 0.225352 | 1 | 0.253521 | false | 0 | 0.032864 | 0.075117 | 0.431925 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
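The `boiga.compose` helpers exercised by these tests are defined elsewhere. A minimal typed sketch of `compose2` and `compose3`, consistent with the left-to-right application order the assertions show (`compose2(_flow0, _flow1)` feeds `_flow0`'s result into `_flow1`), might look like:

```python
from typing import Callable, TypeVar

A = TypeVar('A')
B = TypeVar('B')
C = TypeVar('C')
D = TypeVar('D')


def compose2(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # Left-to-right composition: apply f first, then g.
    return lambda x: g(f(x))


def compose3(f: Callable[[A], B], g: Callable[[B], C],
             h: Callable[[C], D]) -> Callable[[A], D]:
    return lambda x: h(g(f(x)))
```

The numbered `TypeVar`s are what lets mypy report "Cannot infer type argument N" when adjacent functions in the chain don't line up, as the `test_invalid_flow_fns` cases expect.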
c22ebd720448595c97fe43fe5da3005e56efe8c7 | 7,312 | py | Python | ui/faceForward.py | fsanges/glTools | 8ff0899de43784a18bd4543285655e68e28fb5e5 | [
"MIT"
] | 165 | 2015-01-26T05:22:04.000Z | 2022-03-22T02:50:41.000Z | ui/faceForward.py | qeeji/glTools | 8ff0899de43784a18bd4543285655e68e28fb5e5 | [
"MIT"
] | 5 | 2015-12-02T02:39:44.000Z | 2020-12-09T02:45:54.000Z | ui/faceForward.py | qeeji/glTools | 8ff0899de43784a18bd4543285655e68e28fb5e5 | [
"MIT"
] | 83 | 2015-02-10T17:18:24.000Z | 2022-02-10T07:16:47.000Z | import maya.cmds as mc
import glTools.tools.faceForward
import glTools.ui.utils
class UserInputError(Exception): pass
def faceForwardUI():
	'''
	Build and display the Face Forward UI.
	'''
# Window
win = 'faceForwardUI'
if mc.window(win,q=True,ex=True): mc.deleteUI(win)
win = mc.window(win,t='Face Forward')
# Layout
formLayout = mc.formLayout(numberOfDivisions=100)
# Transform
transformTFB = mc.textFieldButtonGrp('faceForwardTransformTFB',label='Transform',text='',buttonLabel='Load Selected')
# Axis'
axisList = ['X','Y','Z','-X','-Y','-Z']
aimOMG = mc.optionMenuGrp('faceForwardAimOMG',label='Aim Axis')
for axis in axisList: mc.menuItem(label=axis)
upOMG = mc.optionMenuGrp('faceForwardUpOMG',label='Up Axis')
for axis in axisList: mc.menuItem(label=axis)
# Up Vector
upVecFFG = mc.floatFieldGrp('faceForwardUpVecFFG',label='Up Vector',numberOfFields=3,value1=0.0,value2=1.0,value3=0.0)
upVecTypeOMG = mc.optionMenuGrp('faceForwardUpVecTypeOMG',label='Up Vector Type')
for method in ['Current','Vector','Object','ObjectUp']: mc.menuItem(label=method)
upVecObjTFB = mc.textFieldButtonGrp('faceForwardUpVecObjTFB',label='Up Vector Object',text='',buttonLabel='Load Selected')
# Previous Frame
prevFrameCBG = mc.checkBoxGrp('faceForwardPrevFrameCBG',label='Prev Frame Velocity',numberOfCheckBoxes=1)
# Key
keyframeCBG = mc.checkBoxGrp('faceForwardKeyCBG',label='Set Keyframe',numberOfCheckBoxes=1)
# Buttons
faceForwardB = mc.button(label='Face Forward',c='glTools.ui.faceForward.faceForwardFromUI()')
cancelB = mc.button(label='Cancel',c='mc.deleteUI("'+win+'")')
# UI Callbacks
mc.textFieldButtonGrp(transformTFB,e=True,bc='glTools.ui.utils.loadTypeSel("'+transformTFB+'","","transform")')
mc.textFieldButtonGrp(upVecObjTFB,e=True,bc='glTools.ui.utils.loadTypeSel("'+upVecObjTFB+'","","transform")')
# Form Layout - MAIN
mc.formLayout(formLayout,e=True,af=[(transformTFB,'left',5),(transformTFB,'top',5),(transformTFB,'right',5)])
mc.formLayout(formLayout,e=True,af=[(aimOMG,'left',5),(aimOMG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(aimOMG,'top',5,transformTFB)])
mc.formLayout(formLayout,e=True,af=[(upOMG,'left',5),(upOMG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upOMG,'top',5,aimOMG)])
mc.formLayout(formLayout,e=True,af=[(upVecFFG,'left',5),(upVecFFG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upVecFFG,'top',5,upOMG)])
mc.formLayout(formLayout,e=True,af=[(upVecTypeOMG,'left',5),(upVecTypeOMG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upVecTypeOMG,'top',5,upVecFFG)])
mc.formLayout(formLayout,e=True,af=[(upVecObjTFB,'left',5),(upVecObjTFB,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upVecObjTFB,'top',5,upVecTypeOMG)])
mc.formLayout(formLayout,e=True,af=[(prevFrameCBG,'left',5),(prevFrameCBG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(prevFrameCBG,'top',5,upVecObjTFB)])
mc.formLayout(formLayout,e=True,af=[(keyframeCBG,'left',5),(keyframeCBG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(keyframeCBG,'top',5,prevFrameCBG)])
mc.formLayout(formLayout,e=True,af=[(faceForwardB,'right',5),(faceForwardB,'bottom',5)])
mc.formLayout(formLayout,e=True,ac=[(faceForwardB,'top',5,keyframeCBG)])
mc.formLayout(formLayout,e=True,ap=[(faceForwardB,'left',5,50)])
mc.formLayout(formLayout,e=True,af=[(cancelB,'left',5),(cancelB,'bottom',5)])
mc.formLayout(formLayout,e=True,ac=[(cancelB,'top',5,keyframeCBG)])
mc.formLayout(formLayout,e=True,ap=[(cancelB,'right',5,50)])
# Show Window
mc.showWindow(win)
def faceForwardFromUI():
	'''
	Apply face forward to the transform specified in the Face Forward UI. (Not yet implemented)
	'''
pass
def faceForwardAnimUI():
	'''
	Build and display the Face Forward Anim UI.
	'''
# Window
win = 'faceForwardAnimUI'
if mc.window(win,q=True,ex=True): mc.deleteUI(win)
win = mc.window(win,t='Face Forward Anim')
# Layout
formLayout = mc.formLayout(numberOfDivisions=100)
# Transform
transformTFB = mc.textFieldButtonGrp('faceForwardAnimTransformTFB',label='Transform',text='',buttonLabel='Load Selected')
# Axis'
axisList = ['X','Y','Z','-X','-Y','-Z']
aimOMG = mc.optionMenuGrp('faceForwardAnimAimOMG',label='Aim Axis')
for axis in axisList: mc.menuItem(label=axis)
upOMG = mc.optionMenuGrp('faceForwardAnimUpOMG',label='Up Axis')
for axis in axisList: mc.menuItem(label=axis)
# Up Vector
upVecFFG = mc.floatFieldGrp('faceForwardAnimUpVecFFG',label='Up Vector',numberOfFields=3,value1=0.0,value2=1.0,value3=0.0)
upVecTypeOMG = mc.optionMenuGrp('faceForwardAnimUpVecTypeOMG',label='Up Vector Type')
for method in ['Current','Vector','Object','ObjectUp']: mc.menuItem(label=method)
upVecObjTFB = mc.textFieldButtonGrp('faceForwardAnimUpVecObjTFB',label='Up Vector Object',text='',buttonLabel='Load Selected')
# Start / End Frame
rangeFFG = mc.floatFieldGrp('faceForwardAnimRangeFFG',label='Start/End Frame',numberOfFields=2,value1=-1.0,value2=-1.0)
# Samples
samplesIFG = mc.intFieldGrp('faceForwardAnimSampleIFG',label='Samples',numberOfFields=1)
# Previous Frame
prevFrameCBG = mc.checkBoxGrp('faceForwardAnimPrevFrameCBG',label='Prev Frame Velocity',numberOfCheckBoxes=1)
# Buttons
faceForwardAnimB = mc.button(label='Face Forward',c='glTools.ui.faceForward.faceForwardAnimFromUI()')
cancelB = mc.button(label='Cancel',c='mc.deleteUI("'+win+'")')
# UI Callbacks
mc.textFieldButtonGrp(transformTFB,e=True,bc='glTools.ui.utils.loadTypeSel("'+transformTFB+'","","transform")')
mc.textFieldButtonGrp(upVecObjTFB,e=True,bc='glTools.ui.utils.loadTypeSel("'+upVecObjTFB+'","","transform")')
# Form Layout - MAIN
mc.formLayout(formLayout,e=True,af=[(transformTFB,'left',5),(transformTFB,'top',5),(transformTFB,'right',5)])
mc.formLayout(formLayout,e=True,af=[(aimOMG,'left',5),(aimOMG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(aimOMG,'top',5,transformTFB)])
mc.formLayout(formLayout,e=True,af=[(upOMG,'left',5),(upOMG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upOMG,'top',5,aimOMG)])
mc.formLayout(formLayout,e=True,af=[(upVecFFG,'left',5),(upVecFFG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upVecFFG,'top',5,upOMG)])
mc.formLayout(formLayout,e=True,af=[(upVecTypeOMG,'left',5),(upVecTypeOMG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upVecTypeOMG,'top',5,upVecFFG)])
mc.formLayout(formLayout,e=True,af=[(upVecObjTFB,'left',5),(upVecObjTFB,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(upVecObjTFB,'top',5,upVecTypeOMG)])
mc.formLayout(formLayout,e=True,af=[(rangeFFG,'left',5),(rangeFFG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(rangeFFG,'top',5,upVecObjTFB)])
mc.formLayout(formLayout,e=True,af=[(samplesIFG,'left',5),(samplesIFG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(samplesIFG,'top',5,rangeFFG)])
mc.formLayout(formLayout,e=True,af=[(prevFrameCBG,'left',5),(prevFrameCBG,'right',5)])
mc.formLayout(formLayout,e=True,ac=[(prevFrameCBG,'top',5,samplesIFG)])
mc.formLayout(formLayout,e=True,af=[(faceForwardAnimB,'right',5),(faceForwardAnimB,'bottom',5)])
mc.formLayout(formLayout,e=True,ac=[(faceForwardAnimB,'top',5,prevFrameCBG)])
mc.formLayout(formLayout,e=True,ap=[(faceForwardAnimB,'left',5,50)])
mc.formLayout(formLayout,e=True,af=[(cancelB,'left',5),(cancelB,'bottom',5)])
mc.formLayout(formLayout,e=True,ac=[(cancelB,'top',5,prevFrameCBG)])
mc.formLayout(formLayout,e=True,ap=[(cancelB,'right',5,50)])
# Show Window
mc.showWindow(win)
def faceForwardAnimFromUI():
'''
	'''
	Apply face forward over the frame range specified in the Face Forward Anim UI. (Not yet implemented)
	'''
| 45.987421 | 127 | 0.731264 | 942 | 7,312 | 5.676221 | 0.139066 | 0.044885 | 0.181036 | 0.189265 | 0.786235 | 0.772022 | 0.750888 | 0.750888 | 0.692538 | 0.63905 | 0 | 0.016365 | 0.064004 | 7,312 | 158 | 128 | 46.278481 | 0.764904 | 0.033233 | 0 | 0.515464 | 0 | 0 | 0.19635 | 0.070868 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041237 | false | 0.030928 | 0.030928 | 0 | 0.082474 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c23e08dbe819a23eb0927f62afa3675776d81143 | 42 | py | Python | slackbucket/tests/test_nothing.py | micole/slackbucket | e3cd4cd8db086011548a4c3bfeb233d4ded2270a | [
"MIT"
] | 2 | 2018-02-12T19:11:05.000Z | 2018-02-15T14:35:03.000Z | slackbucket/tests/test_nothing.py | micole/slackbucket | e3cd4cd8db086011548a4c3bfeb233d4ded2270a | [
"MIT"
] | 1 | 2021-06-01T21:52:25.000Z | 2021-06-01T21:52:25.000Z | slackbucket/tests/test_nothing.py | micole/slackbucket | e3cd4cd8db086011548a4c3bfeb233d4ded2270a | [
"MIT"
] | 1 | 2018-02-12T19:17:54.000Z | 2018-02-12T19:17:54.000Z |
def test_testing_setup():
assert True | 14 | 25 | 0.738095 | 6 | 42 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 42 | 3 | 26 | 14 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df9cffee43a7b86b9eebe28d02d6b6d42be4a58c | 5,368 | py | Python | tests/test_utils.py | vishalbelsare/category_encoders | 55636b5ae11dc45075a0c248028f17f9df93bbb9 | [
"BSD-3-Clause"
] | 1,178 | 2016-11-21T14:52:45.000Z | 2020-04-24T17:04:55.000Z | tests/test_utils.py | vishalbelsare/category_encoders | 55636b5ae11dc45075a0c248028f17f9df93bbb9 | [
"BSD-3-Clause"
] | 213 | 2016-11-28T15:33:57.000Z | 2020-04-27T14:15:57.000Z | tests/test_utils.py | vishalbelsare/category_encoders | 55636b5ae11dc45075a0c248028f17f9df93bbb9 | [
"BSD-3-Clause"
] | 262 | 2016-12-19T15:58:16.000Z | 2020-04-25T07:24:38.000Z | from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector, convert_inputs
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input_vector([0, 1, 0], index) # list
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector([[0, 1, 0]], index) # list of lists (row)
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector([[0], [1], [0]], index) # list of lists (column)
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([1, 0, 1]), index) # np vector
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(3, len(result))
        np.testing.assert_array_equal(result.index, [2, 3, 4])
        result = convert_input_vector(np.array([[1, 0, 1]]), index)  # np matrix row
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(3, len(result))
        np.testing.assert_array_equal(result.index, [2, 3, 4])
        result = convert_input_vector(np.array([[1], [0], [1]]), index)  # np matrix column
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(3, len(result))
        np.testing.assert_array_equal(result.index, [2, 3, 4])
        result = convert_input_vector(pd.Series([0, 1, 0], index=[4, 5, 6]), index)  # series
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(3, len(result))
        np.testing.assert_array_equal(result.index, [4, 5, 6], 'We want to preserve the original index')
        result = convert_input_vector(pd.DataFrame({'y': [0, 1, 0]}, index=[4, 5, 6]), index)  # DataFrame
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(3, len(result))
        np.testing.assert_array_equal(result.index, [4, 5, 6], 'We want to preserve the original index')
        result = convert_input_vector((0, 1, 0), index)  # tuple
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(3, len(result))
        np.testing.assert_array_equal(result.index, [2, 3, 4])
        result = convert_input_vector(0, [2])  # scalar
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(1, len(result))
        np.testing.assert_array_equal(result.index, [2])
        result = convert_input_vector('a', [2])  # scalar
        self.assertTrue(isinstance(result, pd.Series))
        self.assertEqual(1, len(result))
        np.testing.assert_array_equal(result.index, [2])
        # multiple columns or rows must raise because it is unclear which column/row to use as the target
        self.assertRaises(ValueError, convert_input_vector, pd.DataFrame({'col1': [0, 1, 0], 'col2': [1, 0, 1]}), index)
        self.assertRaises(ValueError, convert_input_vector, np.array([[0, 1], [1, 0], [0, 1]]), index)
        self.assertRaises(ValueError, convert_input_vector, [[0, 1], [1, 0], [0, 1]], index)
        # edge scenarios (raising an exception is acceptable, but the exception text should be helpful)
        _ = convert_input_vector(pd.Series(dtype=float), [])
        _ = convert_input_vector([], [])
        _ = convert_input_vector([[]], [])
        _ = convert_input_vector(pd.DataFrame(), [])
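The assertions above pin down the contract of `convert_input_vector`: lists, tuples, 1-D arrays, single-row/column 2-D arrays, and scalars are coerced onto the supplied index, while a Series or single-column DataFrame keeps its own index, and ambiguous multi-column input raises. A simplified sketch consistent with those assertions (not the library's actual implementation) could look like this:

```python
import numpy as np
import pandas as pd


def convert_input_vector(y, index):
    """Coerce a target given as a list, tuple, scalar, 1-D/2-D array,
    Series, or single-column DataFrame into a pd.Series.

    A sketch inferred from the test assertions above, not the
    library's real code.
    """
    if isinstance(y, pd.Series):
        return y  # a Series keeps its own index
    if isinstance(y, pd.DataFrame):
        if y.shape[0] == 0 or y.shape[1] == 0:
            return pd.Series(dtype=float)
        if y.shape[1] != 1:
            raise ValueError('The target DataFrame must have exactly one column')
        return y.iloc[:, 0]  # a single DataFrame column keeps its own index
    if np.isscalar(y):
        return pd.Series([y] * len(index), index=index)  # broadcast the scalar
    arr = np.asarray(y)
    if arr.ndim == 2:
        if 1 not in arr.shape:
            raise ValueError('Unexpected target shape: %s' % (arr.shape,))
        arr = arr.reshape(-1)  # flatten a single row or a single column
    return pd.Series(arr, index=index)
```

Note how index precedence falls out of the type checks: pandas inputs are returned (or sliced) untouched, so their index survives, while everything else is rebuilt on the caller's index.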
    def test_convert_inputs(self):
        aindex = [2, 4, 5]
        bindex = [1, 3, 4]
        alist = [5, 3, 6]
        aseries = pd.Series(alist, aindex)
        barray = np.array([[7, 9], [4, 3], [0, 1]])
        bframe = pd.DataFrame(barray, bindex)

        X, y = convert_inputs(barray, alist)
        self.assertTrue(isinstance(X, pd.DataFrame))
        self.assertTrue(isinstance(y, pd.Series))
        self.assertEqual((3, 2), X.shape)
        self.assertEqual(3, len(y))
        self.assertTrue(list(X.index) == list(y.index) == [0, 1, 2])

        X, y = convert_inputs(barray, alist, index=aindex)
        self.assertTrue(isinstance(X, pd.DataFrame))
        self.assertTrue(isinstance(y, pd.Series))
        self.assertEqual((3, 2), X.shape)
        self.assertEqual(3, len(y))
        self.assertTrue(list(X.index) == list(y.index) == aindex)

        # a pandas input keeps its own index, overriding the index argument
        X, y = convert_inputs(barray, aseries, index=bindex)
        self.assertTrue(isinstance(X, pd.DataFrame))
        self.assertTrue(isinstance(y, pd.Series))
        self.assertEqual((3, 2), X.shape)
        self.assertEqual(3, len(y))
        self.assertTrue(list(X.index) == list(y.index) == aindex)

        X, y = convert_inputs(bframe, alist, index=[3, 1, 4])
        self.assertTrue(isinstance(X, pd.DataFrame))
        self.assertTrue(isinstance(y, pd.Series))
        self.assertEqual((3, 2), X.shape)
        self.assertEqual(3, len(y))
        self.assertTrue(list(X.index) == list(y.index) == bindex)

        # conflicting indexes between X and y
        self.assertRaises(ValueError, convert_inputs, bframe, aseries)
        # shape mismatch
        self.assertRaises(ValueError, convert_inputs, barray, [1, 2, 3, 4])
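Taken together, these assertions describe `convert_inputs` as a pairing of X-to-DataFrame and y-to-Series coercion on one shared index, where a pandas object's own index takes precedence over the `index` argument and disagreements raise. A self-contained sketch matching that behavior (inferred from the tests, not the library's implementation) might be:

```python
import numpy as np
import pandas as pd


def convert_inputs(X, y, index=None):
    """Coerce X to a DataFrame and y to a Series that share one index.

    Precedence, as the tests above assert: X's own index, then y's own
    index, then the `index` argument, then a fresh RangeIndex.
    Conflicting pandas indexes and length mismatches raise ValueError.
    A sketch only, not the library's real code.
    """
    X_idx = X.index if isinstance(X, pd.DataFrame) else None
    y_idx = y.index if isinstance(y, pd.Series) else None
    if X_idx is not None and y_idx is not None and list(X_idx) != list(y_idx):
        raise ValueError('X and y carry conflicting indexes')
    idx = X_idx if X_idx is not None else y_idx
    if idx is None:
        idx = pd.Index(index) if index is not None else pd.RangeIndex(len(X))
    if len(idx) != len(X) or len(idx) != len(y):
        raise ValueError('Length mismatch: %d rows in X, %d in y, %d index labels'
                         % (len(X), len(y), len(idx)))
    if not isinstance(X, pd.DataFrame):
        X = pd.DataFrame(np.asarray(X), index=idx)
    if not isinstance(y, pd.Series):
        y = pd.Series(np.asarray(y), index=idx)
    return X, y
```

Checking the index conflict before choosing a winner is what makes `convert_inputs(bframe, aseries)` raise while `convert_inputs(barray, aseries, index=bindex)` quietly prefers the Series' own labels.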
| 45.880342 | 122 | 0.629098 | 721 | 5,368 | 4.582524 | 0.142857 | 0.105932 | 0.108959 | 0.104419 | 0.800545 | 0.752724 | 0.707324 | 0.702785 | 0.690678 | 0.659806 | 0 | 0.033022 | 0.221498 | 5,368 | 116 | 123 | 46.275862 | 0.757598 | 0.073398 | 0 | 0.56044 | 0 | 0 | 0.017346 | 0 | 0 | 0 | 0 | 0 | 0.637363 | 1 | 0.021978 | false | 0 | 0.043956 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5f1f28192d34bf5dfea7b6f978148aed15012091 | 94 | py | Python | office365/sharepoint/directory/user.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | null | null | null | office365/sharepoint/directory/user.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | null | null | null | office365/sharepoint/directory/user.py | wreiner/Office365-REST-Python-Client | 476bbce4f5928a140b4f5d33475d0ac9b0783530 | [
"MIT"
] | null | null | null | from office365.runtime.client_object import ClientObject
class User(ClientObject):
    pass
| 15.666667 | 56 | 0.808511 | 11 | 94 | 6.818182 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.138298 | 94 | 5 | 57 | 18.8 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a02f848a044372d09ed7f566b0f7a24a31be37c3 | 203 | py | Python | channel-api/src/api/utils.py | xcantera/demo-provide-baseline | 985f391973fa6ca0761104b55077fded28f152fc | [
"CC0-1.0"
] | 3 | 2020-11-17T23:19:20.000Z | 2021-03-29T15:08:56.000Z | channel-api/src/api/utils.py | xcantera/demo-provide-baseline | 985f391973fa6ca0761104b55077fded28f152fc | [
"CC0-1.0"
] | null | null | null | channel-api/src/api/utils.py | xcantera/demo-provide-baseline | 985f391973fa6ca0761104b55077fded28f152fc | [
"CC0-1.0"
] | 1 | 2020-12-11T00:26:33.000Z | 2020-12-11T00:26:33.000Z | from functools import wraps
from flask import request
def form(func):
    @wraps(func)
    def wrapped(*args, **kwargs):
        return func(request.form.to_dict(), *args, **kwargs)
    return wrapped
| 20.3 | 60 | 0.674877 | 27 | 203 | 5.037037 | 0.555556 | 0.147059 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 203 | 9 | 61 | 22.555556 | 0.844721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0.142857 | 0.857143 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |