hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cb449631006b621472a513aa4e7dbf4b96c22827 | 73 | py | Python | tests/test_utils.py | nyue/MyActionPipeline | c730cd0442b8761cf7d2e082d328b0a3fbcd2641 | [
"Apache-2.0"
] | null | null | null | tests/test_utils.py | nyue/MyActionPipeline | c730cd0442b8761cf7d2e082d328b0a3fbcd2641 | [
"Apache-2.0"
] | null | null | null | tests/test_utils.py | nyue/MyActionPipeline | c730cd0442b8761cf7d2e082d328b0a3fbcd2641 | [
"Apache-2.0"
] | null | null | null | from snetwork import utils
def test_utils():
    assert(utils.status())
| 14.6 | 26 | 0.726027 | 10 | 73 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 73 | 4 | 27 | 18.25 | 0.852459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
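The per-file statistics in each row are plain functions of that row's `content` field; for the file above they are recorded as size 73, avg_line_length 14.6 (73 bytes over 5 physical lines), max_line_length 26, and alphanum_fraction 0.726027. The sketch below shows the apparent definitions only; the dataset's exact reference implementation (trailing-newline handling, the precise character class behind `alphanum_fraction`) is an assumption that this dump does not confirm.

```python
# Sketch of the apparent definitions of a few per-file statistics.
# The exact conventions used to build this table (trailing-newline
# handling, which characters count as "alphanumeric") are assumptions;
# treat the output as approximate.
def file_stats(content: str) -> dict:
    lines = content.split("\n")  # physical lines; assumes no trailing newline
    return {
        "size": len(content.encode("utf-8")),                 # bytes
        "avg_line_length": len(content) / len(lines),         # e.g. 73 / 5 = 14.6 above
        "max_line_length": max(len(line) for line in lines),  # e.g. 26 above
        "alphanum_fraction": sum(c.isalnum() for c in content) / len(content),
    }
```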
cb896461ccc070c99bc2875d139bfa9f8bcafab8 | 154 | py | Python | python/AStar/Graph.py | alexwhb/algorithm-practice | b15cffcefbd2b68ad280dd37ef1afc0305bf9d62 | [
"MIT"
] | 1 | 2015-10-29T01:51:48.000Z | 2015-10-29T01:51:48.000Z | python/AStar/Graph.py | alexwhb/algorithm-practice | b15cffcefbd2b68ad280dd37ef1afc0305bf9d62 | [
"MIT"
] | null | null | null | python/AStar/Graph.py | alexwhb/algorithm-practice | b15cffcefbd2b68ad280dd37ef1afc0305bf9d62 | [
"MIT"
] | null | null | null | __author__ = 'Winston'
class Graph(object):
    def __init__(self):
        self.edges = {}
    def neighbors(self, id):
        return self.edges[id]
| 15.4 | 29 | 0.597403 | 18 | 154 | 4.666667 | 0.666667 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 154 | 9 | 30 | 17.111111 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
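Because every row shares the schema above, the table can be screened programmatically on its `qsc_*` quality signals. A minimal sketch with pandas follows; the parquet file name is hypothetical (the dump does not state how the rows are materialized), and only column names that appear in the header are referenced.

```python
# Hypothetical sketch: load rows with this schema and filter on a few
# quality signals. "code_rows.parquet" is an assumed materialization of
# this table, not a file named anywhere in the dump.
import pandas as pd

df = pd.read_parquet("code_rows.parquet")
keep = df[
    (df["lang"] == "Python")
    & (df["alphanum_fraction"] > 0.5)
    & (df["qsc_code_frac_chars_dupe_10grams_quality_signal"] < 0.5)
    & (df["size"] < 10_000)
]
print(keep[["max_stars_repo_name", "max_stars_repo_path", "size", "hits"]])
```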
1debe6ce21c49d1b2239ad82226af197630b0697 | 54 | py | Python | university/tests.py | santiagofmunoz/ums_be | e788d2c7b0db8d571faeb4b673a9e16ebee74d32 | [
"MIT"
] | null | null | null | university/tests.py | santiagofmunoz/ums_be | e788d2c7b0db8d571faeb4b673a9e16ebee74d32 | [
"MIT"
] | null | null | null | university/tests.py | santiagofmunoz/ums_be | e788d2c7b0db8d571faeb4b673a9e16ebee74d32 | [
"MIT"
] | null | null | null | from django.test import TestCase
#TODO CREATE TESTS!
| 13.5 | 32 | 0.796296 | 8 | 54 | 5.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 54 | 3 | 33 | 18 | 0.934783 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38103bcee072b416507fd6181dbec6bc522b8ce0 | 327 | py | Python | app/views/backend/dashboard.py | mrefawaladam/eccomece-wiht-flask | c2e9df36970929935b103fc69c5c45b970b130b2 | [
"MIT"
] | null | null | null | app/views/backend/dashboard.py | mrefawaladam/eccomece-wiht-flask | c2e9df36970929935b103fc69c5c45b970b130b2 | [
"MIT"
] | null | null | null | app/views/backend/dashboard.py | mrefawaladam/eccomece-wiht-flask | c2e9df36970929935b103fc69c5c45b970b130b2 | [
"MIT"
] | null | null | null | from flask import render_template, request, jsonify,session
from app import app
from app import db
import bcrypt
from werkzeug.security import generate_password_hash, check_password_hash
@app.route("/dashboard-admin", methods=['GET'])
def dashboard_admin():
    return render_template("backend/layouts/app.html")
| 25.153846 | 73 | 0.7737 | 44 | 327 | 5.590909 | 0.613636 | 0.113821 | 0.105691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137615 | 327 | 12 | 74 | 27.25 | 0.87234 | 0 | 0 | 0 | 1 | 0 | 0.131498 | 0.073395 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | true | 0.125 | 0.625 | 0.125 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 6 |
3813961bcbb444f51ee742ae98ae59b62ca109e9 | 57 | py | Python | template/language/__init__.py | clayne/syringe-1 | 4a431aa65c371a2018fca95145a3952ba802a609 | [
"BSD-2-Clause"
] | 25 | 2015-04-14T21:53:46.000Z | 2022-03-30T19:15:24.000Z | template/language/__init__.py | clayne/syringe-1 | 4a431aa65c371a2018fca95145a3952ba802a609 | [
"BSD-2-Clause"
] | 5 | 2020-03-23T20:19:59.000Z | 2021-05-24T19:38:31.000Z | template/language/__init__.py | clayne/syringe-1 | 4a431aa65c371a2018fca95145a3952ba802a609 | [
"BSD-2-Clause"
] | 7 | 2015-07-31T13:26:37.000Z | 2021-03-05T19:35:37.000Z | from . import vc
from . import vb
from . import python26
| 14.25 | 22 | 0.736842 | 9 | 57 | 4.666667 | 0.555556 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.210526 | 57 | 3 | 23 | 19 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
383b02e1edc04695201571af9d06260aa169ef7f | 567 | py | Python | invariance.py | kyeser/scTools | c4c7dee0c41c8afe1da6350243df5f9d9b929c7f | [
"MIT"
] | null | null | null | invariance.py | kyeser/scTools | c4c7dee0c41c8afe1da6350243df5f9d9b929c7f | [
"MIT"
] | null | null | null | invariance.py | kyeser/scTools | c4c7dee0c41c8afe1da6350243df5f9d9b929c7f | [
"MIT"
] | null | null | null | from tto import *
def matrixT(a, b):
    l = [(y-x)%12 for x in a for y in b]
    return [l.count(z) for z in range(12)]
def matrixTI(a, b):
    l = [(y+x)%12 for x in a for y in b]
    return [l.count(z) for z in range(12)]
def matrixM(a, b):
    ma = M5(a)
    l = [(y-x)%12 for x in ma for y in b]
    return [l.count(z) for z in range(12)]
def matrixMI(a, b):
    ma = M5(a)
    l = [(y+x)%12 for x in ma for y in b]
    return [l.count(z) for z in range(12)]
def vtics(r):
    l = [(x+y)%12 for x in r for y in r]
    return [l.count(z) for z in range(12)]
| 23.625 | 42 | 0.544974 | 133 | 567 | 2.323308 | 0.180451 | 0.080906 | 0.097087 | 0.12945 | 0.770227 | 0.770227 | 0.770227 | 0.770227 | 0.770227 | 0.686084 | 0 | 0.054321 | 0.285714 | 567 | 23 | 43 | 24.652174 | 0.708642 | 0 | 0 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.055556 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0e050af7a7db1bc2ac62ff3d980e30d67039c27b | 32 | py | Python | datamodelutils/__init__.py | bwalsh/datamodelutils | 1bc23014802ee7fcafbeda28a2146cd2c6dc5e85 | [
"Apache-2.0"
] | null | null | null | datamodelutils/__init__.py | bwalsh/datamodelutils | 1bc23014802ee7fcafbeda28a2146cd2c6dc5e85 | [
"Apache-2.0"
] | null | null | null | datamodelutils/__init__.py | bwalsh/datamodelutils | 1bc23014802ee7fcafbeda28a2146cd2c6dc5e85 | [
"Apache-2.0"
] | null | null | null | import models
import validators
| 10.666667 | 17 | 0.875 | 4 | 32 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 2 | 18 | 16 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3871321c6f01b510e4cdef1a5e6fd9b894c46711 | 68 | py | Python | tradester/finance/assets/__init__.py | wrieg123/tradester | 440210940f80e94fde4d43841c729f63b05f597d | [
"MIT"
] | 5 | 2020-11-11T14:54:59.000Z | 2020-11-13T04:00:25.000Z | tradester/finance/assets/__init__.py | wrieg123/tradester | 440210940f80e94fde4d43841c729f63b05f597d | [
"MIT"
] | null | null | null | tradester/finance/assets/__init__.py | wrieg123/tradester | 440210940f80e94fde4d43841c729f63b05f597d | [
"MIT"
] | null | null | null | from .future import *
from .security import *
from .option import *
| 17 | 23 | 0.735294 | 9 | 68 | 5.555556 | 0.555556 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 68 | 3 | 24 | 22.666667 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38728600087826cabe3cd5117092219308aab361 | 110 | py | Python | python/snowflake/test3.py | yc19890920/Learn | 3990e75b469225ba7b430539ef9a16abe89eb863 | [
"Apache-2.0"
] | 1 | 2021-01-11T06:30:44.000Z | 2021-01-11T06:30:44.000Z | python/snowflake/test3.py | yc19890920/Learn | 3990e75b469225ba7b430539ef9a16abe89eb863 | [
"Apache-2.0"
] | 23 | 2020-02-12T02:35:49.000Z | 2022-02-11T03:45:40.000Z | python/snowflake/test3.py | yc19890920/Learn | 3990e75b469225ba7b430539ef9a16abe89eb863 | [
"Apache-2.0"
] | 2 | 2020-04-08T15:39:46.000Z | 2020-10-10T10:13:09.000Z |
fp = open("1.txt", 'rb')
lines = fp.readlines()
print lines
print len(lines)
print len(set(lines))
| 12.222222 | 25 | 0.609091 | 17 | 110 | 3.941176 | 0.588235 | 0.298507 | 0.38806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0.218182 | 110 | 8 | 26 | 13.75 | 0.767442 | 0 | 0 | 0 | 0 | 0 | 0.07 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.6 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
38d71120f85c38b6694284efb7ef6aa4c4ae8c33 | 7,632 | py | Python | leetcode_python/String/compare-version-numbers.py | yennanliu/CS_basics | 3c50c819897a572ff38179bfb0083a19b2325fde | [
"Unlicense"
] | 18 | 2019-08-01T07:45:02.000Z | 2022-03-31T18:05:44.000Z | leetcode_python/String/compare-version-numbers.py | yennanliu/CS_basics | 3c50c819897a572ff38179bfb0083a19b2325fde | [
"Unlicense"
] | null | null | null | leetcode_python/String/compare-version-numbers.py | yennanliu/CS_basics | 3c50c819897a572ff38179bfb0083a19b2325fde | [
"Unlicense"
] | 15 | 2019-12-29T08:46:20.000Z | 2022-03-08T14:14:05.000Z | # Time: O(n)
# Space: O(1)
# Compare two version numbers version1 and version1.
# If version1 > version2 return 1, if version1 < version2
# return -1, otherwise return 0.
#
# You may assume that the version strings are non-empty and
# contain only digits and the . character.
# The . character does not represent a decimal point and
# is used to separate number sequences.
# For instance, 2.5 is not "two and a half" or "half way to
# version three", it is the fifth second-level revision of
# the second first-level revision.
#
# Here is an example of version numbers ordering:
#
# 0.1 < 1.1 < 1.2 < 13.37
#
# V0
# IDEA : STRING
class Solution(object):
    def compareVersion(self, version1, version2):
        v1_split = version1.split('.')
        v2_split = version2.split('.')
        v1_len, v2_len = len(v1_split), len(v2_split)
        maxLen = max(v1_len, v2_len)
        for i in range(maxLen):
            temp1, temp2 = 0, 0
            if i < v1_len:
                temp1 = int(v1_split[i])
            if i < v2_len:
                temp2 = int(v2_split[i])
            if temp1 < temp2:
                return -1
            elif temp1 > temp2:
                return 1
        return 0
# V1
# https://blog.csdn.net/fuxuemingzhu/article/details/80821268
# IDEA : STRING
class Solution(object):
    def compareVersion(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        v1_split = version1.split('.')
        v2_split = version2.split('.')
        v1_len, v2_len = len(v1_split), len(v2_split)
        maxLen = max(v1_len, v2_len)
        for i in range(maxLen):
            temp1, temp2 = 0, 0
            if i < v1_len:
                temp1 = int(v1_split[i])
            if i < v2_len:
                temp2 = int(v2_split[i])
            if temp1 < temp2:
                return -1
            elif temp1 > temp2:
                return 1
        return 0
### Test case
s=Solution()
assert s.compareVersion("1.10", "1.9") == 1
assert s.compareVersion("1.10", "1.10.10") == -1
assert s.compareVersion("1.10.1.2", "1.10.1.3") == -1
assert s.compareVersion("1.10", "1.10") == 0
assert s.compareVersion("1.10.1", "1.10.1.1") == -1
assert s.compareVersion("0", "0") == 0
assert s.compareVersion("7.1", "7.12") == -1
assert s.compareVersion("0.0.1", "0") == 1
assert s.compareVersion("0.1", "0.2") == -1
# V2
class Solution(object):
    def compareVersion(self, version1, version2):
        version1_ = list(version1.split('.'))
        version2_ = list(version2.split('.'))
        len_ = max(len(version1_), len(version2_))
        version1_ = version1_ + (len_ - len(version1_))*['0']
        version2_ = version2_ + (len_ - len(version2_))*['0']
        for i in range(len(version1_)):
            if int(version1_[i]) > int(version2_[i]):
                return 1
            elif int(version1_[i]) < int(version2_[i]) :
                return -1
            else:
                pass
        return 0
# V3
import itertools
class Solution(object):
    def compareVersion(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        n1, n2 = len(version1), len(version2)
        i, j = 0, 0
        while i < n1 or j < n2:
            v1, v2 = 0, 0
            while i < n1 and version1[i] != '.':
                v1 = v1 * 10 + int(version1[i])
                i += 1
            while j < n2 and version2[j] != '.':
                v2 = v2 * 10 + int(version2[j])
                j += 1
            if v1 != v2:
                return 1 if v1 > v2 else -1
            i += 1
            j += 1
        return 0
# Time: O(n)
# Space: O(n)
# V4
def cmp(a, b):
    return (a > b) - (a < b)
class Solution2(object):
    def compareVersion(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        v1, v2 = version1.split("."), version2.split(".")
        if len(v1) > len(v2):
            v2 += ['0' for _ in range(len(v1) - len(v2))]
        elif len(v1) < len(v2):
            v1 += ['0' for _ in range(len(v2) - len(v1))]
        i = 0
        while i < len(v1):
            if int(v1[i]) > int(v2[i]):
                return 1
            elif int(v1[i]) < int(v2[i]):
                return -1
            else:
                i += 1
        return 0
    def compareVersion2(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        v1 = [int(x) for x in version1.split('.')]
        v2 = [int(x) for x in version2.split('.')]
        while len(v1) != len(v2):
            if len(v1) > len(v2):
                v2.append(0)
            else:
                v1.append(0)
        return cmp(v1, v2)
    def compareVersion3(self, version1, version2):
        splits = (list(map(int, v.split('.'))) for v in (version1, version2))
        return cmp(*list(zip(*itertools.zip_longest(*splits, fillvalue=0))))
    def compareVersion4(self, version1, version2):
        main1, _, rest1 = ('0' + version1).partition('.')
        main2, _, rest2 = ('0' + version2).partition('.')
        return cmp(int(main1), int(main2)) or len(rest1 + rest2) and self.compareVersion4(rest1, rest2)
#V5
# Time: O(n)
# Space: O(1)
import itertools
class Solution(object):
    def compareVersion(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        n1, n2 = len(version1), len(version2)
        i, j = 0, 0
        while i < n1 or j < n2:
            v1, v2 = 0, 0
            while i < n1 and version1[i] != '.':
                v1 = v1 * 10 + int(version1[i])
                i += 1
            while j < n2 and version2[j] != '.':
                v2 = v2 * 10 + int(version2[j])
                j += 1
            if v1 != v2:
                return 1 if v1 > v2 else -1
            i += 1
            j += 1
        return 0
# Time: O(n)
# Space: O(n)
class Solution2(object):
    def compareVersion(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        v1, v2 = version1.split("."), version2.split(".")
        if len(v1) > len(v2):
            v2 += ['0' for _ in range(len(v1) - len(v2))]
        elif len(v1) < len(v2):
            v1 += ['0' for _ in range(len(v2) - len(v1))]
        i = 0
        while i < len(v1):
            if int(v1[i]) > int(v2[i]):
                return 1
            elif int(v1[i]) < int(v2[i]):
                return -1
            else:
                i += 1
        return 0
    def compareVersion2(self, version1, version2):
        """
        :type version1: str
        :type version2: str
        :rtype: int
        """
        v1 = [int(x) for x in version1.split('.')]
        v2 = [int(x) for x in version2.split('.')]
        while len(v1) != len(v2):
            if len(v1) > len(v2):
                v2.append(0)
            else:
                v1.append(0)
        return cmp(v1, v2)
    def compareVersion3(self, version1, version2):
        splits = (map(int, v.split('.')) for v in (version1, version2))
        return cmp(*zip(*itertools.izip_longest(*splits, fillvalue=0)))
    def compareVersion4(self, version1, version2):
        main1, _, rest1 = ('0' + version1).partition('.')
        main2, _, rest2 = ('0' + version2).partition('.')
        return cmp(int(main1), int(main2)) or len(rest1 + rest2) and self.compareVersion4(rest1, rest2)
| 29.8125 | 103 | 0.500524 | 974 | 7,632 | 3.868583 | 0.13963 | 0.023885 | 0.026008 | 0.026539 | 0.804406 | 0.78397 | 0.751062 | 0.744161 | 0.697983 | 0.697983 | 0 | 0.08396 | 0.355477 | 7,632 | 255 | 104 | 29.929412 | 0.682049 | 0.148192 | 0 | 0.844721 | 0 | 0 | 0.017929 | 0 | 0 | 0 | 0 | 0 | 0.055901 | 1 | 0.086957 | false | 0.006211 | 0.012422 | 0.006211 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2a108fe22bcef6c5198524f1c09b058da68f2679 | 44 | py | Python | myelin/mdps/__init__.py | davidrobles/myelin | af76d4ec41f9b4f9be42fc12094aab6f879db770 | [
"MIT"
] | null | null | null | myelin/mdps/__init__.py | davidrobles/myelin | af76d4ec41f9b4f9be42fc12094aab6f879db770 | [
"MIT"
] | 9 | 2020-03-24T15:46:12.000Z | 2021-09-11T16:23:59.000Z | myelin/mdps/__init__.py | davidrobles/myelin | af76d4ec41f9b4f9be42fc12094aab6f879db770 | [
"MIT"
] | null | null | null | from myelin.mdps.gridworld import GridWorld
| 22 | 43 | 0.863636 | 6 | 44 | 6.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2a4e49d5da9cd5bb9638f669ab41fcf128f1e6db | 1,016 | py | Python | dicom_to_cnn/enums/SopClassUID.py | wendyrvllr/Dicom-To-CNN | 56619b47877ad912d7fe33616d6596ce542705bb | [
"MIT"
] | 15 | 2020-06-16T07:08:44.000Z | 2021-11-18T10:45:57.000Z | dicom_to_cnn/enums/SopClassUID.py | wendyrvllr/Dicom-To-CNN | 56619b47877ad912d7fe33616d6596ce542705bb | [
"MIT"
] | 13 | 2020-04-30T08:57:06.000Z | 2020-05-11T21:19:55.000Z | dicom_to_cnn/enums/SopClassUID.py | salimkanoun/library-DICOM | be7976d122c238fe43b5b90230682d99e7250bbe | [
"MIT"
] | 4 | 2020-05-19T10:04:39.000Z | 2022-03-30T14:22:13.000Z | from enum import Enum
class CapturesSOPClass(Enum):
    SecondaryCaptureImageStorage = '1.2.840.10008.5.1.4.1.1.7'
    MultiframeSingleBitSecondaryCaptureImageStorage = '1.2.840.10008.5.1.4.1.1.7.1'
    MultiframeGrayscaleByteSecondaryCaptureImageStorage = '1.2.840.10008.5.1.4.1.1.7.2'
    MultiframeGrayscaleWordSecondaryCaptureImageStorage = '1.2.840.10008.5.1.4.1.1.7.3'
    MultiframeTrueColorSecondaryCaptureImageStorage = '1.2.840.10008.5.1.4.1.1.7.4'
class ImageModalitiesSOPClass(Enum):
    CT = '1.2.840.10008.5.1.4.1.1.2'
    EnhancedCT = '1.2.840.10008.5.1.4.1.1.2.1'
    PT = '1.2.840.10008.5.1.4.1.1.128'
    EnhancedPT = '1.2.840.10008.5.1.4.1.1.130'
    NM = '1.2.840.10008.5.1.4.1.1.20'
    MR = '1.2.840.10008.5.1.4.1.1.4'
    EnhancedMR = '1.2.840.10008.5.1.4.1.1.4.1'
    MRSpectroscopy = '1.2.840.10008.5.1.4.1.1.4.2'
    RTSTRUCT = '1.2.840.10008.5.1.4.1.1.481.3'
class RTModalitiesSOPClass(Enum):
    RTSS = '1.2.840.10008.5.1.4.1.1.481.3',
    RTDose = '1.2.840.10008.5.1.4.1.1.481.2' | 37.62963 | 88 | 0.66437 | 199 | 1,016 | 3.39196 | 0.175879 | 0.056296 | 0.075556 | 0.237037 | 0.386667 | 0.386667 | 0.386667 | 0.386667 | 0.386667 | 0.32 | 0 | 0.312289 | 0.126969 | 1,016 | 27 | 89 | 37.62963 | 0.448703 | 0 | 0 | 0 | 0 | 0.5 | 0.423795 | 0.423795 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
2a58dfd185185d7d95fcfe1472f02aeb62958c3c | 129 | py | Python | Part_2_intermediate/mod_2/lesson_6/ex_4_workaround/estudent/grade.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_2_intermediate/mod_2/lesson_6/ex_4_workaround/estudent/grade.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_2_intermediate/mod_2/lesson_6/ex_4_workaround/estudent/grade.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | class Grade:
    def __init__(self, value):
        self.value = value
    def is_passing(self):
        return self.value > 1
| 16.125 | 30 | 0.596899 | 17 | 129 | 4.235294 | 0.588235 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 0.310078 | 129 | 7 | 31 | 18.428571 | 0.797753 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.2 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 6 |
2a7dd1302a69a15359cb3e0e7ef7429ecd9371d4 | 42,259 | py | Python | tests/test_init_tools.py | ericbarefoot/pyDeltaRCM | ab602a3153b203d0a0c1dd807713638622831411 | [
"MIT"
] | null | null | null | tests/test_init_tools.py | ericbarefoot/pyDeltaRCM | ab602a3153b203d0a0c1dd807713638622831411 | [
"MIT"
] | 1 | 2020-04-22T21:53:27.000Z | 2020-04-22T22:11:22.000Z | tests/test_init_tools.py | ericbarefoot/pyDeltaRCM | ab602a3153b203d0a0c1dd807713638622831411 | [
"MIT"
] | null | null | null | # unit tests for init_tools.py
import pytest
import unittest.mock as mock
import numpy as np
import os
from netCDF4 import Dataset
from pyDeltaRCM.model import DeltaModel
from . import utilities
class TestModelDomainSetup:
def test_inlet_size_specified(self, tmp_path):
file_name = 'user_parameters.yaml'
p, f = utilities.create_temporary_file(tmp_path, file_name)
utilities.write_parameter_to_file(f, 'out_dir', tmp_path / 'out_dir')
utilities.write_parameter_to_file(f, 'Length', 4000.)
utilities.write_parameter_to_file(f, 'Width', 8000.)
utilities.write_parameter_to_file(f, 'dx', 20)
utilities.write_parameter_to_file(f, 'N0_meters', 150)
utilities.write_parameter_to_file(f, 'L0_meters', 200)
f.close()
delta = DeltaModel(input_file=p)
assert delta.N0 == 8
assert delta.L0 == 10
def test_inlet_size_set_to_one_fourth_domain(self, tmp_path):
file_name = 'user_parameters.yaml'
p, f = utilities.create_temporary_file(tmp_path, file_name)
utilities.write_parameter_to_file(f, 'out_dir', tmp_path / 'out_dir')
utilities.write_parameter_to_file(f, 'Length', 4000.)
utilities.write_parameter_to_file(f, 'Width', 8000.)
utilities.write_parameter_to_file(f, 'dx', 20)
utilities.write_parameter_to_file(f, 'N0_meters', 5500)
utilities.write_parameter_to_file(f, 'L0_meters', 3300)
f.close()
delta = DeltaModel(input_file=p)
assert delta.N0 == 100
assert delta.L0 == 50
def test_x(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.x[0][-1] == 199
def test_y(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.y[0][-1] == 0
def test_cell_type(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
# wall type in corner
assert delta.cell_type[0, 0] == -2
def test_eta(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.eta[10, 10] == -5
def test_stage(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.stage[10, 10] == 0.0
def test_depth(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.depth[10, 10] == 5
def test_qx(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
# prescribe the qx at the inlet
assert delta.qx[0, delta.CTR] == 5
def test_qy(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.qy[0, delta.CTR] == 0
def test_qxn(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.qxn[0, 0] == 0
def test_qyn(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.qyn[0, 0] == 0
def test_qwn(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.qwn[0, 0] == 0
def test_ux(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.ux[0, delta.CTR] == 1
def test_uy(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.uy[0, delta.CTR] == 0
def test_uw(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.uw[0, delta.CTR] == 1
def test_qs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.qs[5, 5] == 0
def test_Vp_dep_sand(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert np.any(delta.Vp_dep_sand) == 0
def test_Vp_dep_mud(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert np.any(delta.Vp_dep_mud) == 0
def test_free_surf_flag(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert np.any(delta.free_surf_flag) == 0
def test_indices(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert np.any(delta.free_surf_walk_inds) == 0
def test_sfc_visit(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert np.any(delta.sfc_visit) == 0
def test_sfc_sum(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert np.any(delta.sfc_sum) == 0
class TestCreateBoundaryConditions:
# base case during init is covered by tests elsewhere!
def test_change_variable_updated_bcs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
_delta = DeltaModel(input_file=p)
# what is Qw0 before?
assert _delta.Qw0 == 1250.0
# change u0
_delta.u0 = 2
# nothing has happened to Qw0 yet
assert _delta.u0 == 2
assert _delta.Qw0 == 1250.0
# now call to recreate, see what happened
_delta.create_boundary_conditions()
assert _delta.u0 == 2
assert _delta.Qw0 == 2500.0
class TestInitSubsidence:
def test_subsidence_bounds(self, tmp_path):
"""Test subsidence bounds."""
file_name = 'user_parameters.yaml'
p, f = utilities.create_temporary_file(tmp_path, file_name)
utilities.write_parameter_to_file(f, 'Length', 600.)
utilities.write_parameter_to_file(f, 'Width', 600.)
utilities.write_parameter_to_file(f, 'dx', 5)
utilities.write_parameter_to_file(f, 'toggle_subsidence', True)
f.close()
delta = DeltaModel(input_file=p)
# assert subsidence mask is binary
assert np.all(delta.subsidence_mask ==
delta.subsidence_mask.astype(bool))
# check specific regions
assert np.all(delta.subsidence_mask[delta.L0:, :] == 1)
assert np.all(delta.subsidence_mask[:delta.L0, :] == 0)
class TestLoadCheckpoint:
@mock.patch('pyDeltaRCM.shared_tools.set_random_state')
def test_load_standard_grid(self, patched, tmp_path):
"""Test that a run can be resumed when there are outputs.
"""
# create one delta, just to have a checkpoint file
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_checkpoint': True,
'save_eta_grids': True})
_delta = DeltaModel(input_file=p)
# make mocks
_delta.log_info = mock.MagicMock()
_delta.logger = mock.MagicMock()
_delta.init_output_file = mock.MagicMock()
# close the file so can be safely opened in load
_delta.output_netcdf.close()
# check checkpoint exists
assert os.path.isfile(os.path.join(
_delta.prefix, 'checkpoint.npz'))
assert os.path.isfile(os.path.join(
_delta.prefix, 'pyDeltaRCM_output.nc'))
# now mess up a field
_eta0 = np.copy(_delta.eta)
_rand_field = np.random.uniform(0, 1, size=_delta.eta.shape)
_delta.eta = _rand_field
assert np.all(_delta.eta == _rand_field)
# now resume from the checkpoint to restore the field
_delta.load_checkpoint()
# check that fields match
assert np.all(_delta.eta == _eta0)
# assertions on function calls
_call = [mock.call('Renaming old NetCDF4 output file', verbosity=2)]
_delta.log_info.assert_has_calls(_call, any_order=True)
_delta.logger.assert_not_called()
_delta.init_output_file.assert_not_called()
patched.assert_called()
@mock.patch('pyDeltaRCM.shared_tools.set_random_state')
def test_load_wo_netcdf_not_expected(self, patched, tmp_path):
"""
Test that a checkpoint can be loaded when the load does not expect
there to be any netcdf file.
"""
# create one delta, just to have a checkpoint file
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_checkpoint': True})
_delta = DeltaModel(input_file=p)
# make mocks
_delta.log_info = mock.MagicMock()
_delta.logger = mock.MagicMock()
_delta.init_output_file = mock.MagicMock()
assert os.path.isfile(os.path.join(
_delta.prefix, 'checkpoint.npz'))
assert not os.path.isfile(os.path.join(
_delta.prefix, 'pyDeltaRCM_output.nc'))
# now mess up a field
_eta0 = np.copy(_delta.eta)
_rand_field = np.random.uniform(0, 1, size=_delta.eta.shape)
_delta.eta = _rand_field
assert np.all(_delta.eta == _rand_field)
# now resume from the checkpoint to restore the field
_delta.load_checkpoint()
# check that fields match
assert np.all(_delta.eta == _eta0)
# assertions on function calls
_delta.log_info.assert_called()
_delta.logger.assert_not_called()
_delta.init_output_file.assert_not_called()
patched.assert_called()
@mock.patch('pyDeltaRCM.shared_tools.set_random_state')
def test_load_wo_netcdf_expected(self, patched, tmp_path):
"""
Test that a checkpoint can be loaded when the load expects there to be
a netcdf file. This will create a new netcdf file and raise a
warning.
"""
# define a yaml with NO outputs, but checkpoint
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_checkpoint': True,
'save_eta_grids': True})
_delta = DeltaModel(input_file=p)
# make mocks
_delta.log_info = mock.MagicMock()
_delta.logger = mock.MagicMock()
_delta.init_output_file = mock.MagicMock()
# close the file so can be safely opened in load
_delta.output_netcdf.close()
# check that files exist, and then delete nc
assert os.path.isfile(os.path.join(
_delta.prefix, 'pyDeltaRCM_output.nc'))
assert os.path.isfile(os.path.join(
_delta.prefix, 'checkpoint.npz'))
os.remove(os.path.join(
_delta.prefix, 'pyDeltaRCM_output.nc'))
# now mess up a field
_eta0 = np.copy(_delta.eta)
_rand_field = np.random.uniform(0, 1, size=_delta.eta.shape)
_delta.eta = _rand_field
assert np.all(_delta.eta == _rand_field)
# now resume from the checkpoint to restore the field
with pytest.warns(UserWarning, match=r'NetCDF4 output *.'):
_delta.load_checkpoint()
# check that fields match
assert np.all(_delta.eta == _eta0)
assert _delta._save_iter == 0
# assertions on function calls
_delta.log_info.assert_called()
_delta.logger.warning.assert_called()
_delta.init_output_file.assert_called()
patched.assert_called()
@mock.patch('pyDeltaRCM.shared_tools.set_random_state')
def test_load_already_open_netcdf_error(self, patched, tmp_path):
"""
Test that a checkpoint can be loaded when the load expects there to be
a netcdf file. This will create a new netcdf file and raise a
warning.
"""
# define a yaml with an output, and checkpoint
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_checkpoint': True,
'save_eta_grids': True})
_delta = DeltaModel(input_file=p)
# make mocks
_delta.log_info = mock.MagicMock()
_delta.logger = mock.MagicMock()
_delta.init_output_file = mock.MagicMock()
# close the file so can be safely opened in load
_delta.output_netcdf.close()
# check that files exist, and then open the nc back up
assert os.path.isfile(os.path.join(
_delta.prefix, 'pyDeltaRCM_output.nc'))
assert os.path.isfile(os.path.join(
_delta.prefix, 'checkpoint.npz'))
_ = Dataset(os.path.join(
_delta.prefix, 'pyDeltaRCM_output.nc'))
# now try to resume a model and should throw error
with pytest.raises(RuntimeError):
_delta.load_checkpoint()
class TestSettingConstants:
"""
tests for all of the constants
"""
def test_set_constant_g(self, tmp_path):
"""
check gravity
"""
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.g == 9.81
def test_set_constant_distances(self, tmp_path):
"""
check distances
"""
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.distances[0, 0] == pytest.approx(np.sqrt(2))
def test_set_ivec(self, tmp_path):
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.ivec[0, 0] == pytest.approx(-np.sqrt(0.5))
def test_set_jvec(self, tmp_path):
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.jvec[0, 0] == pytest.approx(-np.sqrt(0.5))
def test_set_iwalk(self, tmp_path):
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.iwalk[0, 0] == -1
def test_set_jwalk(self, tmp_path):
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.jwalk[0, 0] == -1
def test_kernel1(self, tmp_path):
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.kernel1[0, 0] == 1
def test_kernel2(self, tmp_path):
# create delta from default options
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.kernel2[0, 0] == 1
class TestSettingParametersFromYAMLFile:
# tests for attrs set during yaml parsing
def test_init_verbose_default(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
delta = DeltaModel(input_file=p)
assert delta.verbose == 0
def test_init_verbose(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'verbose': 1})
delta = DeltaModel(input_file=p)
assert delta.verbose == 1
def test_init_seed_zero(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'seed': 0})
delta = DeltaModel(input_file=p)
assert delta.seed == 0
def test_init_Np_water(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'Np_water': 50})
_delta = DeltaModel(input_file=p)
assert _delta.init_Np_water == 50
def test_init_Np_sed(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'Np_sed': 60})
_delta = DeltaModel(input_file=p)
assert _delta.init_Np_sed == 60
def test_dx(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'dx': 20})
_delta = DeltaModel(input_file=p)
assert _delta.dx == 20
def test_itermax(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'itermax': 6})
_delta = DeltaModel(input_file=p)
assert _delta.itermax == 6
def test_h0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'h0': 7.5})
_delta = DeltaModel(input_file=p)
assert _delta.h0 == 7.5
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'h0': int(7)})
_delta = DeltaModel(input_file=p)
assert _delta.h0 == 7
def test_hb(self, tmp_path):
# take default from h0 if not given:
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'h0': 7.5})
_delta = DeltaModel(input_file=p)
assert _delta.h0 == 7.5
assert _delta.hb == 7.5
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'hb': 7.5})
_delta = DeltaModel(input_file=p)
assert _delta.h0 == 5
assert _delta.hb == 7.5
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'hb': int(7)})
_delta = DeltaModel(input_file=p)
assert _delta.h0 == 5
assert _delta.hb == 7
def test_Nsmooth(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'Nsmooth': 6})
_delta = DeltaModel(input_file=p)
assert _delta.Nsmooth == 6
def test_SLR(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'SLR': 0.01})
_delta = DeltaModel(input_file=p)
assert _delta.SLR == 0.01
def test_omega_flow(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'omega_flow': 0.8})
_delta = DeltaModel(input_file=p)
assert _delta.omega_flow == 0.8
def test_lambda(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'sed_lag': 0.8})
_delta = DeltaModel(input_file=p)
assert _delta._lambda == 0.8
def test_alpha(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'alpha': 0.25})
_delta = DeltaModel(input_file=p)
assert _delta.alpha == 0.25
def test_stepmax_default(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'alpha': 0.25})
_delta = DeltaModel(input_file=p)
assert _delta.stepmax == 2 * (_delta.L + _delta.W)
def test_stepmax_integer(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'stepmax': 10})
_delta = DeltaModel(input_file=p)
assert _delta.stepmax == 10
def test_stepmax_float(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'stepmax': 11.0})
_delta = DeltaModel(input_file=p)
assert _delta.stepmax == 11
def test_save_eta_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_eta_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'eta' in _delta._save_var_list.keys()
def test_save_depth_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_depth_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'depth' in _delta._save_var_list.keys()
def test_save_stage_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_stage_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'stage' in _delta._save_var_list.keys()
def test_save_discharge_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_discharge_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'discharge' in _delta._save_var_list.keys()
def test_save_velocity_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_velocity_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'velocity' in _delta._save_var_list.keys()
def test_save_sedflux_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_sedflux_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'sedflux' in _delta._save_var_list.keys()
def test_save_sandfrac_grids(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_sandfrac_grids': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert 'sandfrac' in _delta._save_var_list.keys()
def test_save_discharge_components(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_discharge_components': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert _delta._save_discharge_components is True
assert 'discharge_x' in _delta._save_var_list.keys()
assert 'discharge_y' in _delta._save_var_list.keys()
def test_save_velocity_components(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_velocity_components': True})
_delta = DeltaModel(input_file=p)
assert _delta._save_any_grids is True
assert len(_delta._save_fig_list) == 0
assert _delta._save_velocity_components is True
assert 'velocity_x' in _delta._save_var_list.keys()
assert 'velocity_y' in _delta._save_var_list.keys()
def test_save_eta_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_eta_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_depth_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_depth_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_stage_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_stage_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_discharge_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_discharge_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_velocity_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_velocity_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_sedflux_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_sedflux_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_sandfrac_figs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_sandfrac_figs': True})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) > 0
assert _delta._save_any_grids is False
def test_save_figs_sequential(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'save_figs_sequential': False})
_delta = DeltaModel(input_file=p)
assert len(_delta._save_fig_list) == 0
assert _delta._save_any_grids is False
assert _delta._save_figs_sequential is False
def test_toggle_subsidence(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'toggle_subsidence': True})
_delta = DeltaModel(input_file=p)
assert _delta.toggle_subsidence is True
def test_start_subsidence(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'start_subsidence': 12345})
_delta = DeltaModel(input_file=p)
assert _delta.start_subsidence == 12345
def test_subsidence_rate(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'toggle_subsidence': True,
'subsidence_rate': 1e-9})
_delta = DeltaModel(input_file=p)
assert _delta.subsidence_rate == 1e-9
assert np.any(_delta.sigma > 0)
assert np.all(_delta.sigma <= (1e-9 * _delta.dt))
def test_sand_frac_bc(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'sand_frac_bc': -1})
_delta = DeltaModel(input_file=p)
assert _delta.sand_frac_bc == -1
class TestSettingOtherParametersFromYAMLSettings:
def test_theta_sand(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'coeff_theta_sand': 1.4,
'theta_water': 1.2})
_delta = DeltaModel(input_file=p)
assert _delta.theta_sand == 1.68
def test_theta_mud(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'coeff_theta_mud': 0.8,
'theta_water': 1.3})
_delta = DeltaModel(input_file=p)
assert _delta.theta_mud == 1.04
def test_U_dep_mud(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'coeff_U_dep_mud': 0.4325,
'u0': 2.2})
_delta = DeltaModel(input_file=p)
assert _delta.U_dep_mud == 0.9515
def test_U_ero_sand(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'coeff_U_ero_sand': 1.23,
'u0': 2.2})
_delta = DeltaModel(input_file=p)
assert _delta.U_ero_sand == 2.706
def test_U_ero_mud(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'coeff_U_ero_mud': 1.67,
'u0': 2.2})
_delta = DeltaModel(input_file=p)
assert _delta.U_ero_mud == 3.674
def test_L0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'L0_meters': 100,
'Length': 6000,
'dx': 5})
_delta = DeltaModel(input_file=p)
assert _delta.L0 == 20
def test_N0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'N0_meters': 500,
'Width': 6000,
'dx': 5})
_delta = DeltaModel(input_file=p)
assert _delta.N0 == 100
def test_L(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'Length': 1600,
'dx': 20})
_delta = DeltaModel(input_file=p)
assert _delta.L == 80
def test_W(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'Width': 1200,
'dx': 20})
_delta = DeltaModel(input_file=p)
assert _delta.W == 60
def test_u_max(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'u0': 2.3})
_delta = DeltaModel(input_file=p)
assert _delta.u_max == 4.6 # == 2*u0
def test_C0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'C0_percent': 10})
_delta = DeltaModel(input_file=p)
assert _delta.C0 == 0.1
def test_dry_depth(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'h0': 0.5})
_delta = DeltaModel(input_file=p)
assert _delta.dry_depth == 0.05
def test_dry_depth_limiter(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'h0': 20})
_delta = DeltaModel(input_file=p)
assert _delta.dry_depth == 0.1
def test_CTR(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'Length': 4000,
'Width': 6000,
'dx': 10})
_delta = DeltaModel(input_file=p)
assert _delta.CTR == 299 # 300th index
def test_CTR_small(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'Length': 40,
'Width': 40,
'dx': 10})
_delta = DeltaModel(input_file=p)
# small case, instead of 4/2-1=1 it is 4/2=2
assert _delta.CTR == 2
def test_gamma(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'S0': 0.01,
'dx': 10,
'u0': 3})
_delta = DeltaModel(input_file=p)
assert _delta.gamma == pytest.approx(0.10900000)
def test_V0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'h0': 3,
'dx': 15})
_delta = DeltaModel(input_file=p)
assert _delta.V0 == 675
def test_Qw0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'u0': 0.8,
'h0': 2,
'N0_meters': 500,
'Width': 6000,
'dx': 5})
_delta = DeltaModel(input_file=p)
assert _delta.N0 == 100
assert _delta.Qw0 == 800
def test_qw0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'u0': 0.8,
'h0': 3})
_delta = DeltaModel(input_file=p)
assert _delta.qw0 == pytest.approx(2.4)
def test_Qp_water(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'u0': 0.8,
'h0': 2,
'N0_meters': 500,
'Width': 6000,
'dx': 5,
'Np_water': 2300})
_delta = DeltaModel(input_file=p)
assert _delta.N0 == 100
assert _delta.Qw0 == 800
assert _delta.Qp_water == pytest.approx(0.347826087)
def test_dVs(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'u0': 0.8,
'h0': 2,
'N0_meters': 500,
'Width': 6000,
'dx': 5})
_delta = DeltaModel(input_file=p)
assert _delta.V0 == 50
assert _delta.N0 == 100
assert _delta.Qw0 == 800
assert _delta.dVs == 50000
def test_Qs0(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'C0_percent': 10,
'u0': 0.8,
'h0': 2,
'N0_meters': 500,
'Width': 6000,
'dx': 5})
_delta = DeltaModel(input_file=p)
assert _delta.Qw0 == 800
assert _delta.Qs0 == pytest.approx(800 * 0.1)
def test_Vp_sed(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'u0': 0.8,
'h0': 2,
'N0_meters': 500,
'Width': 6000,
'dx': 5,
'Np_sed': 1450})
_delta = DeltaModel(input_file=p)
assert _delta.dVs == 50000
assert _delta.Vp_sed == 50000 / 1450
def test_stepmax_and_size_indices(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'Length': 1600,
'Width': 1200,
'dx': 20})
_delta = DeltaModel(input_file=p)
assert _delta.L == 80
assert _delta.W == 60
assert _delta.stepmax == (80 + 60) * 2
assert _delta.size_indices == (80 + 60)
def test_dt(self, tmp_path):
p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
{'C0_percent': 10,
'u0': 0.8,
'h0': 2,
'N0_meters': 500,
'Width': 6000,
'dx': 5})
_delta = DeltaModel(input_file=p)
assert _delta.Qw0 == 800
assert _delta.dVs == 50000
assert _delta.Qs0 == pytest.approx(800 * 0.1)
assert _delta.dt == 625
def test_omega_flow_iter(self, tmp_path):
        p = utilities.yaml_from_dict(tmp_path, 'input.yaml', {'itermax': 7})
        _delta = DeltaModel(input_file=p)
        assert _delta.omega_flow_iter == pytest.approx(2 / 7)

    def test_N_crossdiff(self, tmp_path):
        p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
                                     {'u0': 0.8,
                                      'h0': 2,
                                      'N0_meters': 500,
                                      'Width': 6000,
                                      'dx': 5})
        _delta = DeltaModel(input_file=p)
        assert _delta.V0 == 50
        assert _delta.N0 == 100
        assert _delta.Qw0 == 800
        assert _delta.dVs == 50000
        assert _delta.N_crossdiff == int(round(50000 / 50))

    def test_diffusion_multiplier(self, tmp_path):
        p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
                                     {'u0': 0.8,
                                      'h0': 2,
                                      'N0_meters': 500,
                                      'Width': 6000,
                                      'dx': 5,
                                      'alpha': 0.3,
                                      'C0_percent': 10})
        _delta = DeltaModel(input_file=p)
        assert _delta.V0 == 50
        assert _delta.N0 == 100
        assert _delta.Qw0 == 800
        assert _delta.dVs == 50000
        assert _delta.Qs0 == pytest.approx(800 * 0.1)
        assert _delta.dt == 625
        assert _delta.N_crossdiff == int(round(50000 / 50))
        assert _delta.diffusion_multiplier == (625 / 1000 * 0.3 * 0.5 / 5**2)

    def test_active_layer_thickness_float(self, tmp_path):
        p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
                                     {'active_layer_thickness': 2.7})
        _delta = DeltaModel(input_file=p)
        assert _delta.active_layer_thickness == 2.7

    def test_active_layer_thickness_int(self, tmp_path):
        p = utilities.yaml_from_dict(tmp_path, 'input.yaml',
                                     {'active_layer_thickness': 2})
        _delta = DeltaModel(input_file=p)
        assert _delta.active_layer_thickness == 2

    def test_active_layer_thickness_default(self, tmp_path):
        p = utilities.yaml_from_dict(tmp_path, 'input.yaml')
        _delta = DeltaModel(input_file=p)
        assert _delta.active_layer_thickness == _delta.h0 / 2


class TestInitMetadataList:

    def test_save_list_exists(self, tmp_path):
        file_name = 'user_parameters.yaml'
        p, f = utilities.create_temporary_file(tmp_path, file_name)
        utilities.write_parameter_to_file(f, 'out_dir', tmp_path / 'out_dir')
        f.close()
        delta = DeltaModel(input_file=p)
        # check things about the metadata
        assert hasattr(delta, '_save_var_list')
        assert type(delta._save_var_list) == dict
        assert 'meta' in delta._save_var_list.keys()
        # save meta not on, so check that it is empty
        assert delta._save_var_list['meta'] == {}

    def test_default_meta_list(self, tmp_path):
        file_name = 'user_parameters.yaml'
        p, f = utilities.create_temporary_file(tmp_path, file_name)
        utilities.write_parameter_to_file(f, 'out_dir', tmp_path / 'out_dir')
        utilities.write_parameter_to_file(f, 'save_metadata', True)
        f.close()
        delta = DeltaModel(input_file=p)
        # check things about the metadata
        assert hasattr(delta, '_save_var_list')
        assert type(delta._save_var_list) == dict
        assert 'meta' in delta._save_var_list.keys()
        # save meta on, so check that some expected values are there
        assert 'L0' in delta._save_var_list['meta'].keys()
        assert delta._save_var_list['meta']['L0'] == ['L0', 'cells', 'i8', ()]
        assert 'H_SL' in delta._save_var_list['meta'].keys()
        assert delta._save_var_list['meta']['H_SL'] == \
            [None, 'meters', 'f4', 'total_time']

    def test_netcdf_vars(self, tmp_path):
        # test that stuff makes it to the netcdf file as expected
        file_name = 'user_parameters.yaml'
        p, f = utilities.create_temporary_file(tmp_path, file_name)
        utilities.write_parameter_to_file(f, 'out_dir', tmp_path / 'out_dir')
        utilities.write_parameter_to_file(f, 'save_eta_grids', True)
        utilities.write_parameter_to_file(f, 'save_metadata', True)
        f.close()
        delta = DeltaModel(input_file=p)
        # check things about the metadata
        assert hasattr(delta, '_save_var_list')
        assert type(delta._save_var_list) == dict
        assert 'meta' in delta._save_var_list.keys()
        # save meta on, so check that some expected values are there
        assert 'L0' in delta._save_var_list['meta'].keys()
        assert delta._save_var_list['meta']['L0'] == ['L0', 'cells', 'i8', ()]
        assert 'H_SL' in delta._save_var_list['meta'].keys()
        assert delta._save_var_list['meta']['H_SL'] == \
            [None, 'meters', 'f4', 'total_time']
        # check save var list for eta
        assert 'eta' in delta._save_var_list.keys()
        assert delta._save_var_list['eta'] == \
            ['eta', 'meters', 'f4', ('total_time', 'length', 'width')]
        # force save to netcdf
        delta.save_grids_and_figs()
        # close netcdf
        delta.output_netcdf.close()
        # check out the netcdf
        data = Dataset(os.path.join(delta.prefix, 'pyDeltaRCM_output.nc'),
                       'r+', format='NETCDF4')
        # check for meta group
        assert 'meta' in data.groups
        # check for L0 a single value metadata
        assert 'L0' in data['meta'].variables
        assert data['meta']['L0'][0].data == delta.L0
        # check H_SL a vector of metadata
        assert 'H_SL' in data['meta'].variables
        assert data['meta']['H_SL'].dimensions == ('total_time',)
        assert data['time'].shape == data['meta']['H_SL'].shape
        # check on the eta grid
        assert 'eta' in data.variables
        assert data['eta'].shape[0] == data['time'].shape[0]
        assert data['eta'].shape[1] == delta.L
        assert data['eta'].shape[2] == delta.W
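# Editor's note (added, hedged): the constants asserted in the parameter tests
# above are mutually consistent under these relations, inferred from the
# numbers alone rather than confirmed against the pyDeltaRCM source:
#   V0  = h0 * dx**2            -> 2 * 5**2           = 50
#   N0  = N0_meters / dx        -> 500 / 5            = 100
#   Qw0 = u0 * h0 * N0 * dx     -> 0.8 * 2 * 100 * 5  = 800
#   Qs0 = Qw0 * C0_percent/100  -> 800 * 0.1          = 80
#   dt  = dVs / Qs0             -> 50000 / 80         = 625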
| 40.246667 | 78 | 0.571855 | 5,303 | 42,259 | 4.248727 | 0.07788 | 0.070214 | 0.099419 | 0.119302 | 0.81683 | 0.808575 | 0.796813 | 0.777107 | 0.733833 | 0.712441 | 0 | 0.029025 | 0.323316 | 42,259 | 1,049 | 79 | 40.285033 | 0.758882 | 0.062851 | 0 | 0.583438 | 0 | 0 | 0.081839 | 0.006428 | 0 | 0 | 0 | 0 | 0.289837 | 1 | 0.136763 | false | 0 | 0.008783 | 0 | 0.155583 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2a87638af1d3fb7a7a4c3159f691e9a68b7c8243 | 96 | py | Python | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_timeout.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_timeout.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_timeout.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/8c/19/c1/58f90b4770383d88804694a46b94ad40cc109101f95d8e58114ddf4648 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.489583 | 0 | 96 | 1 | 96 | 96 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2a8d35be5ce39e40548c525327caff06670910fd | 135 | py | Python | migrations/826-add-bd-region.py | muffinresearch/zamboni | 045a6f07c775b99672af6d9857d295ed02fe5dd9 | [
"BSD-3-Clause"
] | null | null | null | migrations/826-add-bd-region.py | muffinresearch/zamboni | 045a6f07c775b99672af6d9857d295ed02fe5dd9 | [
"BSD-3-Clause"
] | null | null | null | migrations/826-add-bd-region.py | muffinresearch/zamboni | 045a6f07c775b99672af6d9857d295ed02fe5dd9 | [
"BSD-3-Clause"
] | null | null | null | from mkt.constants import regions
from mkt.developers.cron import exclude_new_region
def run():
    exclude_new_region([regions.BD])
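# Context note (added, hedged): zamboni's migration runner is assumed to
# import each numbered migration module like this one and call its run()
# hook once; the runner itself is not shown in this file.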
| 19.285714 | 50 | 0.792593 | 20 | 135 | 5.15 | 0.65 | 0.135922 | 0.31068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125926 | 135 | 6 | 51 | 22.5 | 0.872881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
aa783fe2ff3963a92e111b4e833c86b5a211d9b9 | 170 | py | Python | tinydb_encrypted_jsonstorage/__init__.py | stefanthaler/tinydb-encrypted-jsonstorage | f2931c4ffafb9610d842cec9272c41d67e6a73d3 | [
"MIT"
] | 2 | 2020-11-06T19:01:54.000Z | 2021-01-17T23:56:50.000Z | tinydb_encrypted_jsonstorage/__init__.py | stefanthaler/tinydb-encrypted-jsonstorage | f2931c4ffafb9610d842cec9272c41d67e6a73d3 | [
"MIT"
] | null | null | null | tinydb_encrypted_jsonstorage/__init__.py | stefanthaler/tinydb-encrypted-jsonstorage | f2931c4ffafb9610d842cec9272c41d67e6a73d3 | [
"MIT"
] | 1 | 2022-02-06T07:09:49.000Z | 2022-02-06T07:09:49.000Z | # -*- coding: utf-8 -*-
"""
EncryptedJSONStorage class
Stores the data in an encrypted JSON file.
"""
from .encrypted_json_storage import EncryptedJSONStorage
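# Usage sketch (added, hedged): TinyDB forwards extra keyword arguments to its
# storage class, so wiring this storage in should look roughly like the
# following; the exact parameter names ('encryption_key', 'path') are
# assumptions, not confirmed by this file:
#
#     from tinydb import TinyDB
#     db = TinyDB(encryption_key='my-secret', path='db.json',
#                 storage=EncryptedJSONStorage)
#     db.insert({'answer': 42})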
| 17 | 56 | 0.711765 | 19 | 170 | 6.263158 | 0.842105 | 0.218487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.188235 | 170 | 9 | 57 | 18.888889 | 0.855072 | 0.547059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aa81edfe46ea91849a04bb00f68df5e0b197ce8a | 182 | py | Python | mmtbx/command_line/superpose.py | rimmartin/cctbx_project | 644090f9432d9afc22cfb542fc3ab78ca8e15e5d | [
"BSD-3-Clause-LBNL"
] | null | null | null | mmtbx/command_line/superpose.py | rimmartin/cctbx_project | 644090f9432d9afc22cfb542fc3ab78ca8e15e5d | [
"BSD-3-Clause-LBNL"
] | null | null | null | mmtbx/command_line/superpose.py | rimmartin/cctbx_project | 644090f9432d9afc22cfb542fc3ab78ca8e15e5d | [
"BSD-3-Clause-LBNL"
] | null | null | null | # LIBTBX_SET_DISPATCHER_NAME mmtbx.superpose
from __future__ import division
import sys
import mmtbx.superpose
if __name__ == "__main__":
    mmtbx.superpose.run(args=sys.argv[1:])
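# Editor's note (added, hedged): the LIBTBX_SET_DISPATCHER_NAME comment at the
# top makes the cctbx build generate an `mmtbx.superpose` command-line
# dispatcher, so this module is typically run as e.g.
# `mmtbx.superpose fixed.pdb moving.pdb` (argument semantics assumed).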
| 18.2 | 44 | 0.791209 | 25 | 182 | 5.16 | 0.68 | 0.325581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006211 | 0.115385 | 182 | 9 | 45 | 20.222222 | 0.795031 | 0.230769 | 0 | 0 | 0 | 0 | 0.058394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaa2da4676e52cf0633c27035119dd8986454e55 | 104 | py | Python | src/hub/dataload/sources/dbnsfp/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 39 | 2017-07-01T22:34:39.000Z | 2022-03-15T22:25:59.000Z | src/hub/dataload/sources/dbnsfp/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 105 | 2017-06-28T17:26:06.000Z | 2022-03-17T17:49:53.000Z | src/hub/dataload/sources/dbnsfp/__init__.py | erikyao/myvariant.info | a4eaaca7ab6c069199f8942d5afae2dece908147 | [
"Apache-2.0"
] | 15 | 2015-10-15T20:46:50.000Z | 2021-07-12T19:17:49.000Z | from .dbnsfp_upload import DBNSFPHG38Uploader, DBNSFPHG19Uploader
from .dbnsfp_dump import DBNSFPDumper
| 34.666667 | 65 | 0.884615 | 11 | 104 | 8.181818 | 0.727273 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042105 | 0.086538 | 104 | 2 | 66 | 52 | 0.905263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaa3699dfa793735299ecf5818073f9df4c654b4 | 5,714 | py | Python | chipseq_BH_PKi.py | rogerzou/chipseq_pcRNA | 555af5752a21ceb911f5aba2ceb604f8334aac63 | [
"MIT"
] | 1 | 2020-06-18T04:55:19.000Z | 2020-06-18T04:55:19.000Z | chipseq_BH_PKi.py | rogerzou/chipseq_pcRNA | 555af5752a21ceb911f5aba2ceb604f8334aac63 | [
"MIT"
] | null | null | null | chipseq_BH_PKi.py | rogerzou/chipseq_pcRNA | 555af5752a21ceb911f5aba2ceb604f8334aac63 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
""" 53BP1 and gH2AX ChIP-seq analysis for Cas9/pcRNA - DNA-PKcs inhibitor effect """
import src.chipseq as c
""" Home directory of BAM files and 'analysis' output directory; MODIFY AS APPROPRIATE. """
bs = "/Volumes/Lab-Home/rzou4/NGS_data/2_pcl/pcRNA_SRA1/"
bs_a = "/Volumes/Lab-Home/rzou4/NGS_data/2_pcl/pcRNA_SRA1/analysis/"
chrs = ['chr7', 'chr8']
""" Convert BAM file to WIG file that counts the number of reads in each window span. """
win = 5000
c.to_wiggle_windows(bs + "53bp1-1h-nL-nD-rep1.bam", bs_a + "53bp1-1h-nL-nD-rep1", win, chrs)
c.to_wiggle_windows(bs + "53bp1-4h-nL-nD-rep1.bam", bs_a + "53bp1-4h-nL-nD-rep1", win, chrs)
c.to_wiggle_windows(bs + "53bp1-4h-L-nD-rep1.bam", bs_a + "53bp1-4h-L-nD-rep1", win, chrs)
c.to_wiggle_windows(bs + "53bp1-4h-L-PKi-rep1.bam", bs_a + "53bp1-4h-L-PKi-rep1", win, chrs)
c.to_wiggle_windows(bs + "53bp1-1h-nL-nD-rep2.bam", bs_a + "53bp1-1h-nL-nD-rep2", win, chrs)
c.to_wiggle_windows(bs + "53bp1-4h-nL-nD-rep2.bam", bs_a + "53bp1-4h-nL-nD-rep2", win, chrs)
c.to_wiggle_windows(bs + "53bp1-4h-L-nD-rep2.bam", bs_a + "53bp1-4h-L-nD-rep2", win, chrs)
c.to_wiggle_windows(bs + "53bp1-4h-L-PKi-rep2.bam", bs_a + "53bp1-4h-L-PKi-rep2", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-1h-nL-nD-rep1.bam", bs_a + "gh2ax-1h-nL-nD-rep1", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-4h-nL-nD-rep1.bam", bs_a + "gh2ax-4h-nL-nD-rep1", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-4h-L-nD-rep1.bam", bs_a + "gh2ax-4h-L-nD-rep1", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-4h-L-PKi-rep1.bam", bs_a + "gh2ax-4h-L-PKi-rep1", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-1h-nL-nD-rep2.bam", bs_a + "gh2ax-1h-nL-nD-rep2", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-4h-nL-nD-rep2.bam", bs_a + "gh2ax-4h-nL-nD-rep2", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-4h-L-nD-rep2.bam", bs_a + "gh2ax-4h-L-nD-rep2", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-4h-L-PKi-rep2.bam", bs_a + "gh2ax-4h-L-PKi-rep2", win, chrs)
c.to_wiggle_windows(bs + "53bp1-wt.bam", bs_a + "53bp1-wt", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-wt-sub1.bam", bs_a + "gh2ax-wt-sub1", win, chrs)
c.to_wiggle_windows(bs + "gh2ax-wt-sub2.bam", bs_a + "gh2ax-wt-sub2", win, chrs)
""" For each window span, count number of reads in each bin. """
win = 50000
numbins = 50
c.to_bins(bs + "53bp1-1h-nL-nD-rep1.bam", bs_a + "53bp1-1h-nL-nD-rep1", win, numbins, chrs)
c.to_bins(bs + "53bp1-4h-nL-nD-rep1.bam", bs_a + "53bp1-4h-nL-nD-rep1", win, numbins, chrs)
c.to_bins(bs + "53bp1-4h-L-nD-rep1.bam", bs_a + "53bp1-4h-L-nD-rep1", win, numbins, chrs)
c.to_bins(bs + "53bp1-4h-L-PKi-rep1.bam", bs_a + "53bp1-4h-L-PKi-rep1", win, numbins, chrs)
c.to_bins(bs + "53bp1-1h-nL-nD-rep2.bam", bs_a + "53bp1-1h-nL-nD-rep2", win, numbins, chrs)
c.to_bins(bs + "53bp1-4h-nL-nD-rep2.bam", bs_a + "53bp1-4h-nL-nD-rep2", win, numbins, chrs)
c.to_bins(bs + "53bp1-4h-L-nD-rep2.bam", bs_a + "53bp1-4h-L-nD-rep2", win, numbins, chrs)
c.to_bins(bs + "53bp1-4h-L-PKi-rep2.bam", bs_a + "53bp1-4h-L-PKi-rep2", win, numbins, chrs)
c.to_bins(bs + "gh2ax-1h-nL-nD-rep1.bam", bs_a + "gh2ax-1h-nL-nD-rep1", win, numbins, chrs)
c.to_bins(bs + "gh2ax-4h-nL-nD-rep1.bam", bs_a + "gh2ax-4h-nL-nD-rep1", win, numbins, chrs)
c.to_bins(bs + "gh2ax-4h-L-nD-rep1.bam", bs_a + "gh2ax-4h-L-nD-rep1", win, numbins, chrs)
c.to_bins(bs + "gh2ax-4h-L-PKi-rep1.bam", bs_a + "gh2ax-4h-L-PKi-rep1", win, numbins, chrs)
c.to_bins(bs + "gh2ax-1h-nL-nD-rep2.bam", bs_a + "gh2ax-1h-nL-nD-rep2", win, numbins, chrs)
c.to_bins(bs + "gh2ax-4h-nL-nD-rep2.bam", bs_a + "gh2ax-4h-nL-nD-rep2", win, numbins, chrs)
c.to_bins(bs + "gh2ax-4h-L-nD-rep2.bam", bs_a + "gh2ax-4h-L-nD-rep2", win, numbins, chrs)
c.to_bins(bs + "gh2ax-4h-L-PKi-rep2.bam", bs_a + "gh2ax-4h-L-PKi-rep2", win, numbins, chrs)
c.to_bins(bs + "53bp1-wt.bam", bs_a + "53bp1-wt", win, numbins, chrs)
c.to_bins(bs + "gh2ax-wt-sub1.bam", bs_a + "gh2ax-wt-sub1", win, numbins, chrs)
c.to_bins(bs + "gh2ax-wt-sub2.bam", bs_a + "gh2ax-wt-sub2", win, numbins, chrs)
""" Get the average 53BP1 and gH2AX peaks in RPM """
c.avgwig(bs_a + "53bp1-1h-nL-nD-rep1.wig", bs_a + "53bp1-1h-nL-nD-rep2.wig", bs_a + "53bp1-1h-nL-nD-avg")
c.avgwig(bs_a + "53bp1-4h-nL-nD-rep1.wig", bs_a + "53bp1-4h-nL-nD-rep2.wig", bs_a + "53bp1-4h-nL-nD-avg")
c.avgwig(bs_a + "53bp1-4h-L-nD-rep1.wig", bs_a + "53bp1-4h-L-nD-rep2.wig", bs_a + "53bp1-4h-L-nD-avg")
c.avgwig(bs_a + "53bp1-4h-L-PKi-rep1.wig", bs_a + "53bp1-4h-L-PKi-rep2.wig", bs_a + "53bp1-4h-L-PKi-avg")
c.avgwig(bs_a + "gh2ax-1h-nL-nD-rep1.wig", bs_a + "gh2ax-1h-nL-nD-rep2.wig", bs_a + "gh2ax-1h-nL-nD-avg")
c.avgwig(bs_a + "gh2ax-4h-nL-nD-rep1.wig", bs_a + "gh2ax-4h-nL-nD-rep2.wig", bs_a + "gh2ax-4h-nL-nD-avg")
c.avgwig(bs_a + "gh2ax-4h-L-nD-rep1.wig", bs_a + "gh2ax-4h-L-nD-rep2.wig", bs_a + "gh2ax-4h-L-nD-avg")
c.avgwig(bs_a + "gh2ax-4h-L-PKi-rep1.wig", bs_a + "gh2ax-4h-L-PKi-rep2.wig", bs_a + "gh2ax-4h-L-PKi-avg")
c.avgwig(bs_a + "gh2ax-wt-sub1.wig", bs_a + "gh2ax-wt-sub2.wig", bs_a + "gh2ax-wt-avg")
""" Calculate percent change from averaged 53BP1 and gH2AX peaks in RPM """
c.percentchange(bs_a + "53bp1-1h-nL-nD-avg.wig", bs_a + "53bp1-4h-nL-nD-avg.wig", bs_a + "53bp1-1hto4h-delta")
c.percentchange(bs_a + "53bp1-4h-nL-nD-avg.wig", bs_a + "53bp1-4h-L-nD-avg.wig", bs_a + "53bp1-4hnLtoL-delta")
c.percentchange(bs_a + "53bp1-4h-L-nD-avg.wig", bs_a + "53bp1-4h-L-PKi-avg.wig", bs_a + "53bp1-4hnDtoD-delta")
c.percentchange(bs_a + "gh2ax-1h-nL-nD-avg.wig", bs_a + "gh2ax-4h-nL-nD-avg.wig", bs_a + "gh2ax-1hto4h-delta")
c.percentchange(bs_a + "gh2ax-4h-nL-nD-avg.wig", bs_a + "gh2ax-4h-L-nD-avg.wig", bs_a + "gh2ax-4hnLtoL-delta")
c.percentchange(bs_a + "gh2ax-4h-L-nD-avg.wig", bs_a + "gh2ax-4h-L-PKi-avg.wig", bs_a + "gh2ax-4hnDtoD-delta")
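# Editor's note (added): the per-sample calls above are purely data-driven and
# could be generated with a loop; a behaviour-equivalent sketch using only
# names defined in this script (wt samples handled separately):
#
#     samples = ['%s-%s-rep%d' % (ab, cond, rep)
#                for ab in ('53bp1', 'gh2ax')
#                for cond in ('1h-nL-nD', '4h-nL-nD', '4h-L-nD', '4h-L-PKi')
#                for rep in (1, 2)]
#     for s in samples:
#         c.to_bins(bs + s + '.bam', bs_a + s, win, numbins, chrs)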
| 68.843373 | 110 | 0.673609 | 1,202 | 5,714 | 3.079867 | 0.076539 | 0.068071 | 0.095084 | 0.070232 | 0.901945 | 0.874122 | 0.861967 | 0.750135 | 0.71637 | 0.627229 | 0 | 0.090821 | 0.10203 | 5,714 | 82 | 111 | 69.682927 | 0.630676 | 0.017501 | 0 | 0 | 0 | 0 | 0.483002 | 0.27903 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016667 | 0 | 0.016667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aabb684723ac2d422bf1952ed4e0090e44e8a494 | 205 | py | Python | tzcode/hook/purchase_invoice.py | Lewinta/TZCode | 4a44ed85ec202446fdba0fd54e20680ed56b8ad3 | [
"MIT"
] | null | null | null | tzcode/hook/purchase_invoice.py | Lewinta/TZCode | 4a44ed85ec202446fdba0fd54e20680ed56b8ad3 | [
"MIT"
] | null | null | null | tzcode/hook/purchase_invoice.py | Lewinta/TZCode | 4a44ed85ec202446fdba0fd54e20680ed56b8ad3 | [
"MIT"
] | null | null | null | import frappe
from tzcode.hook.accounts_controller import cancel_gl_entries, delete_gl_entries
def on_cancel(doc, method):
    cancel_gl_entries(doc)


def on_trash(doc, method):
    delete_gl_entries(doc) | 25.625 | 80 | 0.809756 | 32 | 205 | 4.84375 | 0.5 | 0.232258 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117073 | 205 | 8 | 81 | 25.625 | 0.856354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aacf4e5b9c11c6565361558bd4d7f7d83518cf41 | 16,436 | py | Python | test/pyehr/ehr/services/dbmanager/querymanager/test_querymanager.py | madineniguna/EHR | 54ab760ca9a52e5c6c32b861ad6cbce41075d59f | [
"MIT"
] | 23 | 2015-01-29T00:15:05.000Z | 2021-01-23T18:21:00.000Z | test/pyehr/ehr/services/dbmanager/querymanager/test_querymanager.py | madineniguna/EHR | 54ab760ca9a52e5c6c32b861ad6cbce41075d59f | [
"MIT"
] | 2 | 2015-07-22T13:58:00.000Z | 2017-05-02T15:40:33.000Z | test/pyehr/ehr/services/dbmanager/querymanager/test_querymanager.py | madineniguna/EHR | 54ab760ca9a52e5c6c32b861ad6cbce41075d59f | [
"MIT"
] | 7 | 2015-07-10T13:02:01.000Z | 2021-02-19T10:51:52.000Z | import unittest, os, sys
from random import randint

from pyehr.ehr.services.dbmanager.querymanager import QueryManager
from pyehr.ehr.services.dbmanager.dbservices import DBServices
from pyehr.ehr.services.dbmanager.dbservices.wrappers import PatientRecord,\
    ClinicalRecord, ArchetypeInstance
from pyehr.utils.services import get_service_configuration

CONF_FILE = os.getenv('SERVICE_CONFIG_FILE')


class TestQueryManager(unittest.TestCase):

    def __init__(self, label):
        super(TestQueryManager, self).__init__(label)

    def setUp(self):
        if CONF_FILE is None:
            sys.exit('ERROR: no configuration file provided')
        sconf = get_service_configuration(CONF_FILE)
        self.dbs = DBServices(**sconf.get_db_configuration())
        self.dbs.set_index_service(**sconf.get_index_configuration())
        self.qmanager = QueryManager(**sconf.get_db_configuration())
        self.qmanager.set_index_service(**sconf.get_index_configuration())
        self.patients = list()

    def tearDown(self):
        for p in self.patients:
            self.dbs.delete_patient(p, cascade_delete=True)
        self.patients = None
        pass

    def _get_quantity(self, value, units):
        return {
            'magnitude': value,
            'units': units
        }

    def _get_blood_pressure_data(self, systolic=None, diastolic=None, mean_arterial=None, pulse=None):
        archetype_id = 'openEHR-EHR-OBSERVATION.blood_pressure.v1'
        bp_doc = {"data": {"at0001": [{"events": [{"at0006": {"data": {"at0003": [{"items": {}}]}}}]}]}}
        if systolic is not None:
            bp_doc['data']['at0001'][0]['events'][0]['at0006']['data']['at0003'][0]['items']['at0004'] = \
                {'value': self._get_quantity(systolic, 'mm[Hg]')}
        if diastolic is not None:
            bp_doc['data']['at0001'][0]['events'][0]['at0006']['data']['at0003'][0]['items']['at0005'] = \
                {'value': self._get_quantity(diastolic, 'mm[Hg]')}
        if mean_arterial is not None:
            bp_doc['data']['at0001'][0]['events'][0]['at0006']['data']['at0003'][0]['items']['at1006'] = \
                {'value': self._get_quantity(mean_arterial, 'mm[Hg]')}
        if pulse is not None:
            bp_doc['data']['at0001'][0]['events'][0]['at0006']['data']['at0003'][0]['items']['at1007'] = \
                {'value': self._get_quantity(pulse, 'mm[Hg]')}
        return archetype_id, bp_doc

    def _get_encounter_data(self, archetypes):
        archetype_id = 'openEHR-EHR-COMPOSITION.encounter.v1'
        enc_doc = {'context': {'event_context': {'other_context': {'at0001': [{'items': {'at0002': archetypes}}]}}}}
        return archetype_id, enc_doc

    def _build_patients_batch(self, num_patients, num_ehr, systolic_range=None, diastolic_range=None):
        records_details = dict()
        for x in xrange(0, num_patients):
            p = self.dbs.save_patient(PatientRecord('PATIENT_%02d' % x))
            crecs = list()
            for y in xrange(0, num_ehr):
                systolic = randint(*systolic_range) if systolic_range else None
                diastolic = randint(*diastolic_range) if diastolic_range else None
                bp_arch = ArchetypeInstance(*self._get_blood_pressure_data(systolic, diastolic))
                crecs.append(ClinicalRecord(bp_arch))
                records_details.setdefault(p.record_id, []).append({'systolic': systolic, 'diastolic': diastolic})
            _, p, _ = self.dbs.save_ehr_records(crecs, p)
            self.patients.append(p)
        return records_details

    def _build_patients_batch_mixed(self, num_patients, num_ehr, systolic_range=None, diastolic_range=None):
        records_details = dict()
        for x in xrange(0, num_patients):
            p = self.dbs.save_patient(PatientRecord('PATIENT_%02d' % x))
            crecs = list()
            for y in xrange(0, num_ehr):
                systolic = randint(*systolic_range) if systolic_range else None
                diastolic = randint(*diastolic_range) if diastolic_range else None
                bp_arch = ArchetypeInstance(*self._get_blood_pressure_data(systolic, diastolic))
                if randint(0, 2) == 1:
                    crecs.append(ClinicalRecord(ArchetypeInstance(*self._get_encounter_data([bp_arch]))))
                    records_details.setdefault(p.record_id, []).append({'systolic': systolic, 'diastolic': diastolic})
                else:
                    crecs.append(ClinicalRecord(bp_arch))
            _, p, _ = self.dbs.save_ehr_records(crecs, p)
            self.patients.append(p)
        return records_details
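    # The two builders above return a dict mapping each patient record ID to
    # the list of {'systolic': ..., 'diastolic': ...} values that were saved,
    # so every test below can recompute its expected result set in plain
    # Python and compare it against what the AQL engine returns.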
    def test_simple_select_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        """
        batch_details = self._build_patients_batch(10, 10, (50, 100), (50, 100))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            details_results.extend(v)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_deep_select_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Composition c[openEHR-EHR-COMPOSITION.encounter.v1]
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        """
        batch_details = self._build_patients_batch_mixed(10, 10, (50, 100), (50, 100))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            details_results.extend(v)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_simple_where_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        OR o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude >= 110
        """
        batch_details = self._build_patients_batch(10, 10, (0, 250), (0, 200))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180 or x['diastolic'] >= 110:
                    details_results.append(x)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_single_where_query2(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude != 180
        """
        batch_details = self._build_patients_batch(10, 10, (0, 250), (0, 200))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] != 180:
                    details_results.append(x)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_single_where_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        """
        batch_details = self._build_patients_batch(10, 10, (0, 250), (0, 200))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180:
                    details_results.append(x)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_deep_where_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Composition c[openEHR-EHR-COMPOSITION.encounter.v1]
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        OR o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude >= 110
        """
        batch_details = self._build_patients_batch_mixed(10, 10, (0, 250), (0, 200))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180 or x['diastolic'] >= 110:
                    details_results.append(x)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_deep_where_query2(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Composition c[openEHR-EHR-COMPOSITION.encounter.v1]
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        OR o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude != 110
        """
        batch_details = self._build_patients_batch_mixed(10, 10, (0, 250), (109, 110))
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180 or x['diastolic'] != 110:
                    details_results.append(x)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_deeper_where_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Composition c[openEHR-EHR-COMPOSITION.encounter.v1]
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        AND o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude != 81
        """
        batch_details = self._build_patients_batch_mixed(10, 10, (0, 250), (0, 200))
        pass
        results = self.qmanager.execute_aql_query(query)
        details_results = list()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180 and x['diastolic'] != 81:
                    details_results.append(x)
        res = list(results.results)
        self.assertEqual(sorted(details_results), sorted(res))

    def test_simple_parametric_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e [uid=$ehrUid]
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        """
        batch_details = self._build_patients_batch(10, 10, (50, 100), (50, 100))
        for patient_label, records in batch_details.iteritems():
            results = self.qmanager.execute_aql_query(query, {'ehrUid': patient_label})
            res = list(results.results)
            self.assertEqual(sorted(records), sorted(res))

    def test_simple_patients_selection(self):
        query = """
        SELECT e/ehr_id/value AS patient_identifier
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        OR o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude >= 110
        """
        batch_details = self._build_patients_batch(10, 10, (0, 250), (0, 200))
        results = self.qmanager.execute_aql_query(query)
        details_results = set()
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180 or x['diastolic'] >= 110:
                    details_results.add(k)
        res = list(results.get_distinct_results('patient_identifier'))
        self.assertEqual(sorted(list(details_results)), sorted(res))

    def test_count_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        OR o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude >= 110
        """
        batch_details = self._build_patients_batch(10, 10, (0, 250), (0, 200))
        results = self.qmanager.execute_aql_query(query, count_only=True)
        results_count = 0
        for k, v in batch_details.iteritems():
            for x in v:
                if x['systolic'] >= 180 or x['diastolic'] >= 110:
                    results_count += 1
        self.assertEqual(results_count, results)

    def test_multiprocess_query(self):
        query = """
        SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
               o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
        FROM Ehr e
        CONTAINS Observation o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
        WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 180
        OR o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude >= 110
        """
        _ = self._build_patients_batch_mixed(10, 10, (0, 250), (0, 200))
        sp_results = self.qmanager.execute_aql_query(query)
        mp_results = self.qmanager.execute_aql_query(query, query_processes=2)
        self.assertEqual(sorted(sp_results.to_json()), sorted(mp_results.to_json()))


def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestQueryManager('test_simple_select_query'))
    suite.addTest(TestQueryManager('test_simple_where_query'))
    suite.addTest(TestQueryManager('test_single_where_query'))
    suite.addTest(TestQueryManager('test_single_where_query2'))
    suite.addTest(TestQueryManager('test_deep_where_query'))
    suite.addTest(TestQueryManager('test_deep_where_query2'))
    suite.addTest(TestQueryManager('test_deeper_where_query'))
    suite.addTest(TestQueryManager('test_simple_parametric_query'))
    suite.addTest(TestQueryManager('test_simple_patients_selection'))
    suite.addTest(TestQueryManager('test_deep_select_query'))
    suite.addTest(TestQueryManager('test_count_query'))
    suite.addTest(TestQueryManager('test_multiprocess_query'))
    return suite


if __name__ == '__main__':
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(suite()) | 50.885449 | 118 | 0.651132 | 2,028 | 16,436 | 5.10355 | 0.088264 | 0.041546 | 0.066473 | 0.082899 | 0.831884 | 0.796522 | 0.752271 | 0.733333 | 0.714879 | 0.714396 | 0 | 0.076473 | 0.219518 | 16,436 | 323 | 119 | 50.885449 | 0.730355 | 0 | 0 | 0.62963 | 0 | 0.127946 | 0.373 | 0.228752 | 0 | 0 | 0 | 0 | 0.040404 | 1 | 0.070707 | false | 0.006734 | 0.020202 | 0.003367 | 0.114478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2aa21b67138fe51edb49f16d7da5c2f2ab50b449 | 3,381 | py | Python | core/permissions.py | ahmedemad3965/TodoAPI | a82cabbb580e13a14c899a1e08339783def835c6 | [
"Apache-2.0"
] | 1 | 2020-03-04T22:30:57.000Z | 2020-03-04T22:30:57.000Z | core/permissions.py | ahmedemad3965/TodoAPI | a82cabbb580e13a14c899a1e08339783def835c6 | [
"Apache-2.0"
] | 2 | 2020-06-23T15:15:18.000Z | 2022-01-13T02:19:51.000Z | core/permissions.py | ahmedemad3965/TodoAPI | a82cabbb580e13a14c899a1e08339783def835c6 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) Code Written and Tested by Ahmed Emad in 02/03/2020, 12:30
from rest_framework import permissions

from core.models import UserProfileModel, TodoGroupModel, TodoModel


class UserProfilePermissions(permissions.BasePermission):
    """The Permission class used by UserProfileView."""

    safe_methods = {'GET', 'POST', 'HEAD', 'OPTIONS'}

    def has_permission(self, request, view):
        """Checks if request is safe, if not it checks if
        the user is authenticated and has a valid profile.
        """
        if request.method in self.safe_methods:
            return True
        if request.user.is_authenticated and hasattr(request.user, 'profile'):
            return True
        return False

    def has_object_permission(self, request, view, obj):
        """Checks if the user has permissions to update
        or delete a user profile"""
        if obj.account == request.user:
            return True
        return False


class TodoGroupPermissions(permissions.BasePermission):
    """The Permission class used by TodoGroupView."""

    def has_permission(self, request, view):
        """Checks if the user is authenticated and has a valid profile."""
        if request.user.is_authenticated and hasattr(request.user, 'profile'):
            return True
        return False

    def has_object_permission(self, request, view, obj):
        """Checks if the user has the permissions to
        update or delete a todo group
        """
        if type(obj) == UserProfileModel:
            if obj.account == request.user:
                return True
            return False
        if obj.user.account == request.user:
            return True
        return False


class TodoPermissions(permissions.BasePermission):
    """The Permission class used by TodoItemView."""

    def has_permission(self, request, view):
        """Checks if the user is authenticated and has a valid profile."""
        if request.user.is_authenticated and hasattr(request.user, 'profile'):
            return True
        return False

    def has_object_permission(self, request, view, obj):
        """Checks if the user has the permissions to see,
        update or delete a todo
        """
        if type(obj) == UserProfileModel:
            if obj.account == request.user:
                return True
            return False
        if type(obj) == TodoGroupModel:
            if obj.user.account == request.user:
                return True
            return False
        if obj.category.user.account == request.user:
            return True
        return False


class TodoAttachmentPermissions(permissions.BasePermission):
    """The Permission class used by TodoAttachmentView."""

    def has_permission(self, request, view):
        """Checks if the user is authenticated and has a valid profile."""
        if request.user.is_authenticated and hasattr(request.user, 'profile'):
            return True
        return False

    def has_object_permission(self, request, view, obj):
        """Checks if the user has the permissions to
        update or delete a todo
        """
        if type(obj) == TodoModel:
            if obj.category.user.account == request.user:
                return True
            return False
        if obj.todo_item.category.user.account == request.user:
            return True
        return False
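# Usage sketch (added): standard Django REST framework wiring. A view opts in
# to one of the permission classes above via `permission_classes` (the view
# names come from the docstrings above; shown as a comment because the real
# views live elsewhere in the project):
#
#     from rest_framework import viewsets
#
#     class TodoGroupView(viewsets.ModelViewSet):
#         permission_classes = (TodoGroupPermissions,)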
| 33.147059 | 78 | 0.629991 | 398 | 3,381 | 5.301508 | 0.193467 | 0.083412 | 0.090995 | 0.119431 | 0.795261 | 0.795261 | 0.781991 | 0.6891 | 0.654976 | 0.605213 | 0 | 0.004992 | 0.288968 | 3,381 | 101 | 79 | 33.475248 | 0.872712 | 0.241644 | 0 | 0.807018 | 0 | 0 | 0.018953 | 0 | 0 | 0 | 0 | 0.059406 | 0 | 1 | 0.140351 | false | 0 | 0.035088 | 0 | 0.701754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
2ae3803929be0ab8ae295cf2d1e1e1970535a8e7 | 75 | py | Python | melodic/lib/python2.7/dist-packages/control_msgs/srv/__init__.py | Dieptranivsr/Ros_Diep | d790e75e6f5da916701b11a2fdf3e03b6a47086b | [
"MIT"
] | null | null | null | melodic/lib/python2.7/dist-packages/control_msgs/srv/__init__.py | Dieptranivsr/Ros_Diep | d790e75e6f5da916701b11a2fdf3e03b6a47086b | [
"MIT"
] | 1 | 2021-07-08T10:26:06.000Z | 2021-07-08T10:31:11.000Z | melodic/lib/python2.7/dist-packages/control_msgs/srv/__init__.py | Dieptranivsr/Ros_Diep | d790e75e6f5da916701b11a2fdf3e03b6a47086b | [
"MIT"
] | null | null | null | from ._QueryCalibrationState import *
from ._QueryTrajectoryState import *
| 25 | 37 | 0.84 | 6 | 75 | 10.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106667 | 75 | 2 | 38 | 37.5 | 0.910448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2d5e68e12b0ef30cd4df05542b9d7cb4e0fa7f95 | 4,270 | py | Python | Fall.py | gr0mph/FallChallenge2020 | 1c98d61276120d230c64c8062d248ed78eaabb3a | [
"MIT"
] | null | null | null | Fall.py | gr0mph/FallChallenge2020 | 1c98d61276120d230c64c8062d248ed78eaabb3a | [
"MIT"
] | null | null | null | Fall.py | gr0mph/FallChallenge2020 | 1c98d61276120d230c64c8062d248ed78eaabb3a | [
"MIT"
] | null | null | null | import sys, copy, heapq
import math, random
import numpy as np

################################################################################
#
################################################################################

class KanbanBoard():
    def __init__(self, clone):
        if clone is not None:
            # Fixed: the original `self._predict, = ...` had a stray comma
            # that tuple-unpacks the copied list instead of assigning it.
            self._predict = copy.copy(clone._predict)
            self._memento = clone._memento
        else:
            self._predict, self._memento = None, None
        pass

    def predict(self):
        return self._predict[0]

    def update(self, state):
        self._predict.pop(0)

    @property
    def memento(self):
        if self._memento is None: return self
        pass
        self._predict = copy.copy(self._memento._predict)
        return self  # added to mirror AgentBoard.memento

    @memento.setter
    def memento(self, setter):
        pass
        self._memento._predict = copy.copy(setter._predict)

    def correction(self, state):
        pass

################################################################################
#
################################################################################

class AgentBoard():
    def __init__(self, clone):
        if clone is not None:
            self.nb_ingred, self.nb_match, self.price = clone.nb_ingred, clone.nb_match, clone.price
            self.action = clone.action
        else:
            self.nb_ingred, self.nb_match, self.price = 0, 0, 0
            self.action = []
        self._predict = copy.copy(clone._predict) if clone is not None else []
        self._memento = clone._memento if clone is not None else self

    def setup(self, state):
        # NOTE: f_ingred is not defined anywhere in this file; this method is
        # an unfinished placeholder.
        for k1, k2 in zip(f_ingred(self.action), state):
            pass

    def compute(self, spell):
        pass

    def predict(self):
        return next(iter(self._predict))

    def update(self, state):
        pass

    @property
    def memento(self):
        if self._memento is None: return self
        pass
        self._predict = copy.copy(self._memento._predict)
        return self

    @memento.setter
    def memento(self, setter):
        # NOTE: `self.action[:]` is a plain list, so the attribute assignment
        # on the next line would fail at runtime; kept as in the original and
        # flagged as work in progress.
        self._memento = self.action[:]
        self._memento._predict = copy.copy(setter._predict)

    def correction(self, state):
        pass

################################################################################
#
################################################################################

class KanbanSimu():
    def __init__(self, clone):
        if clone is not None:
            pass
        else:
            pass
        # Guarded like AgentBoard.__init__; the original dereferenced `clone`
        # unconditionally, which breaks the KanbanSimu(None) call below.
        self._predict = copy.copy(clone._predict) if clone is not None else []
        self._memento = clone._memento if clone is not None else None

    def predict(self):
        return next(iter(self._predict))

    # Fixed: the original decorated this with `@update.setter`, but no
    # `update` property exists, which raises NameError at class creation.
    def update(self, state):
        pass

    @property
    def memento(self):
        if self._memento is None: return self
        pass
        self._predict = copy.copy(self._memento._predict)

    @memento.setter
    def memento(self, setter):
        pass
        self._memento._predict = copy.copy(setter._predict)

    def correction(self, state):
        pass

################################################################################
#
################################################################################

class AgentSimu():
    def __init__(self, clone):
        if clone is not None:
            pass
        else:
            pass
        # Same guard as KanbanSimu.__init__ (see note above).
        self._predict = copy.copy(clone._predict) if clone is not None else []
        self._memento = clone._memento if clone is not None else None

    def predict(self):
        return next(iter(self._predict))

    # Fixed: same invalid `@update.setter` decorator as in KanbanSimu.
    def update(self, state):
        pass

    @property
    def memento(self):
        if self._memento is None: return self
        pass
        self._predict = copy.copy(self._memento._predict)

    @memento.setter
    def memento(self, setter):
        pass
        self._memento._predict = copy.copy(setter._predict)

    def correction(self, state):
        pass

################################################################################
#
################################################################################

if __name__ == '__main__':
    _mine_ = KanbanBoard(None)
    _opp_ = KanbanBoard(None)
    _simu_ = KanbanSimu(None)

    _mine_agent_ = {}
    _opp_agent_ = {}
    _simu_agent_ = {}

    while True:
        # READ
| 24.124294 | 96 | 0.480328 | 412 | 4,270 | 4.740291 | 0.143204 | 0.107015 | 0.092166 | 0.077829 | 0.757296 | 0.74296 | 0.710189 | 0.685612 | 0.634921 | 0.634921 | 0 | 0.002197 | 0.253864 | 4,270 | 176 | 97 | 24.261364 | 0.610797 | 0.000937 | 0 | 0.734513 | 0 | 0 | 0.002314 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.185841 | 0.026549 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
2d60dfa5c6325689a1a4f78636ef8874de7328c0 | 129 | py | Python | src/augraphy/__init__.py | azf99/augraphy | 5266c60606f47ea5b25b4be254c46d5561883431 | [
"MIT"
] | null | null | null | src/augraphy/__init__.py | azf99/augraphy | 5266c60606f47ea5b25b4be254c46d5561883431 | [
"MIT"
] | null | null | null | src/augraphy/__init__.py | azf99/augraphy | 5266c60606f47ea5b25b4be254c46d5561883431 | [
"MIT"
] | null | null | null | from augraphy.base import *
from augraphy.augmentations import *
from augraphy.default import *
from augraphy.utilities import *
| 25.8 | 36 | 0.813953 | 16 | 129 | 6.5625 | 0.4375 | 0.457143 | 0.514286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124031 | 129 | 4 | 37 | 32.25 | 0.929204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2d61d204bc30f261ce8374dc831012424f8bc665 | 251 | py | Python | inac8hr/entities/__init__.py | th-bunratta/8hr.insomniac | 5173500a1ad7197096d513b38258aa65b035fcf3 | [
"BSD-3-Clause"
] | null | null | null | inac8hr/entities/__init__.py | th-bunratta/8hr.insomniac | 5173500a1ad7197096d513b38258aa65b035fcf3 | [
"BSD-3-Clause"
] | null | null | null | inac8hr/entities/__init__.py | th-bunratta/8hr.insomniac | 5173500a1ad7197096d513b38258aa65b035fcf3 | [
"BSD-3-Clause"
] | null | null | null | from inac8hr.entities.units import *
from inac8hr.entities.defenders import *
from inac8hr.entities.agents import AgentUnit, Ballot
from inac8hr.entities.info import *
from inac8hr.entities.generators import *
from inac8hr.entities.particles import *
| 35.857143 | 53 | 0.828685 | 32 | 251 | 6.5 | 0.375 | 0.317308 | 0.548077 | 0.480769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026549 | 0.099602 | 251 | 6 | 54 | 41.833333 | 0.893805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2d9c87724b09012c1a39c7b4eecb1c4174ed45cd | 35 | py | Python | modules/src/modules/builder/robots/__init__.py | Crazy-Ginger/MOSAR | 74f1a7ca1f17a90ede61f37d223a2ae4de6a1088 | [
"MIT"
] | null | null | null | modules/src/modules/builder/robots/__init__.py | Crazy-Ginger/MOSAR | 74f1a7ca1f17a90ede61f37d223a2ae4de6a1088 | [
"MIT"
] | null | null | null | modules/src/modules/builder/robots/__init__.py | Crazy-Ginger/MOSAR | 74f1a7ca1f17a90ede61f37d223a2ae4de6a1088 | [
"MIT"
] | 2 | 2020-09-18T00:02:16.000Z | 2021-02-22T23:42:30.000Z | from .cubemodule import CubeModule
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2def04cfae383d3df740ada0f5ac9e3895439455 | 32 | py | Python | python/testData/inspections/PyUnboundLocalVariableInspection/StarImportTopLevel.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/inspections/PyUnboundLocalVariableInspection/StarImportTopLevel.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/inspections/PyUnboundLocalVariableInspection/StarImportTopLevel.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from re import *
print(UNICODE)
| 10.666667 | 16 | 0.75 | 5 | 32 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 17 | 16 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
9322642d5065647ebeabe99aad0ac378b4ce9bf6 | 35 | py | Python | uberfare/__init__.py | GustavoRPS/uberfare | b41b001d6c49b64534b12c6a73f85262af722a28 | [
"MIT"
] | 9 | 2018-02-27T17:37:43.000Z | 2020-12-15T01:48:40.000Z | uberfare/__init__.py | GustavoRPS/uberfare | b41b001d6c49b64534b12c6a73f85262af722a28 | [
"MIT"
] | 6 | 2018-02-27T07:00:16.000Z | 2019-05-09T18:53:59.000Z | philip/__init__.py | taion/pipf | a15c1a97757d7be172f0c82112dfd48004f337b9 | [
"MIT"
] | 1 | 2018-10-30T17:32:02.000Z | 2018-10-30T17:32:02.000Z | from .cli import cli # noqa: F401
| 17.5 | 34 | 0.685714 | 6 | 35 | 4 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.228571 | 35 | 1 | 35 | 35 | 0.777778 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
934b2945cf2d0be950c90c95a4de14b9e74f1254 | 72 | py | Python | geoapps/simpegPF/EM/Static/__init__.py | RichardScottOZ/geoapps | 5b3c1d4fd11add45992e8b2497312ac014272b69 | [
"MIT"
] | 3 | 2020-11-27T03:18:28.000Z | 2022-03-18T01:29:58.000Z | geoapps/simpegPF/EM/Static/__init__.py | sebhmg/geoapps | 1463ba4ec3c914abdc7403e54eca0ee2bbc3f4f4 | [
"MIT"
] | null | null | null | geoapps/simpegPF/EM/Static/__init__.py | sebhmg/geoapps | 1463ba4ec3c914abdc7403e54eca0ee2bbc3f4f4 | [
"MIT"
] | 1 | 2021-03-21T09:54:33.000Z | 2021-03-21T09:54:33.000Z | from . import DC
from . import IP
from . import SIP
from . import Utils
| 14.4 | 19 | 0.722222 | 12 | 72 | 4.333333 | 0.5 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 72 | 4 | 20 | 18 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93634f70f17efcdce76e0401de979900d95c618e | 3,246 | py | Python | pyzohoapi/inventory.py | s-agawane/pyZohoAPI | 5333ca2ca62bae151e06f7c4018b697bb4a72537 | [
"MIT"
] | 7 | 2021-03-10T11:05:59.000Z | 2022-01-22T23:08:27.000Z | pyzohoapi/inventory.py | s-agawane/pyZohoAPI | 5333ca2ca62bae151e06f7c4018b697bb4a72537 | [
"MIT"
] | 2 | 2021-03-11T00:28:30.000Z | 2022-01-17T18:33:33.000Z | pyzohoapi/inventory.py | s-agawane/pyZohoAPI | 5333ca2ca62bae151e06f7c4018b697bb4a72537 | [
"MIT"
] | 3 | 2021-03-10T14:06:02.000Z | 2021-11-19T06:27:29.000Z | # This file is part of pyZohoAPI, Copyright (C) Todd D. Esposito 2021.
# Distributed under the MIT License (see https://opensource.org/licenses/MIT).
from .core import ZohoAPIBase
from . import objecttypes
class ZohoInventory(ZohoAPIBase):
    _scope = "ZohoInventory.FullAccess.all"

    def get_endpoint(self, region):
        return f"https://inventory.zoho.{self._regionmap[region]}/api/v1"

    def Account(self, *args, **kwargs): return objecttypes.Account(self, *args, **kwargs)
    def Bill(self, *args, **kwargs): return objecttypes.Bill(self, *args, **kwargs)
    def Brand(self, *args, **kwargs): return objecttypes.Brand(self, *args, **kwargs)
    def Bundle(self, *args, **kwargs): return objecttypes.Bundle(self, *args, **kwargs)
    def CompositeItem(self, *args, **kwargs): return objecttypes.CompositeItem(self, *args, **kwargs)
    def Contact(self, *args, **kwargs): return objecttypes.Contact(self, *args, **kwargs)
    def CustomerPayment(self, *args, **kwargs): return objecttypes.CustomerPayment(self, *args, **kwargs)
    def Currency(self, *args, **kwargs): return objecttypes.Currency(self, *args, **kwargs)
    def Document(self, *args, **kwargs): return objecttypes.Document(self, *args, **kwargs)
    def Invoice(self, *args, **kwargs): return objecttypes.Invoice(self, *args, **kwargs)
    def Item(self, *args, **kwargs): return objecttypes.Item(self, *args, **kwargs)
    def ItemAdjustment(self, *args, **kwargs): return objecttypes.ItemAdjustment(self, *args, **kwargs)
    def ItemGroup(self, *args, **kwargs): return objecttypes.ItemGroup(self, *args, **kwargs)
    def Organization(self, *args, **kwargs): return objecttypes.Organization(self, *args, **kwargs)
    def Package(self, *args, **kwargs): return objecttypes.Package(self, *args, **kwargs)
    def PriceList(self, *args, **kwargs): return objecttypes.PriceList(self, *args, **kwargs)
    def PurchaseOrder(self, *args, **kwargs): return objecttypes.PurchaseOrder(self, *args, **kwargs)
    def PurchaseReceive(self, *args, **kwargs): return objecttypes.PurchaseReceive(self, *args, **kwargs)
    def RetainerInvoice(self, *args, **kwargs): return objecttypes.RetainerInvoice(self, *args, **kwargs)
    def SalesOrder(self, *args, **kwargs): return objecttypes.SalesOrder(self, *args, **kwargs)
    def SalesPerson(self, *args, **kwargs): return objecttypes.SalesPerson(self, *args, **kwargs)
    def SalesReturn(self, *args, **kwargs): return objecttypes.SalesReturn(self, *args, **kwargs)
    def ShipmentOrder(self, *args, **kwargs): return objecttypes.ShipmentOrder(self, *args, **kwargs)
    def Tax(self, *args, **kwargs): return objecttypes.Tax(self, *args, **kwargs)
    def TaxAuthority(self, *args, **kwargs): return objecttypes.TaxAuthority(self, *args, **kwargs)
    def TaxExemption(self, *args, **kwargs): return objecttypes.TaxExemption(self, *args, **kwargs)
    def TaxGroup(self, *args, **kwargs): return objecttypes.TaxGroup(self, *args, **kwargs)
    def TransferOrder(self, *args, **kwargs): return objecttypes.TransferOrder(self, *args, **kwargs)
    def User(self, *args, **kwargs): return objecttypes.User(self, *args, **kwargs)
    def Warehouse(self, *args, **kwargs): return objecttypes.Warehouse(self, *args, **kwargs)
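    # Usage sketch (added, hedged): region handling and OAuth live in
    # ZohoAPIBase (not shown here), so the constructor arguments below are
    # assumptions based on the project README rather than this file:
    #
    #     inventory = ZohoInventory("<organization-id>", "<region>",
    #                               client_id="...", client_secret="...",
    #                               refresh_token="...")
    #     item = inventory.Item()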
| 75.488372 | 105 | 0.709489 | 382 | 3,246 | 6.020942 | 0.198953 | 0.208696 | 0.365217 | 0.26087 | 0.404348 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001769 | 0.129082 | 3,246 | 42 | 106 | 77.285714 | 0.811815 | 0.04467 | 0 | 0 | 0 | 0 | 0.026791 | 0.009038 | 0 | 0 | 0 | 0 | 0 | 1 | 0.861111 | false | 0 | 0.055556 | 0.861111 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
fae32204ce2bdf3a6e223dcf03448a0f804c7a21 | 2,262 | py | Python | src/evolvepy/generator/mutation/numeric_mutation.py | EltonCN/evolvepy | 4489264d6c03ea4f3c23ea665fdf12fe4ead1ccc | [
"MIT"
] | 1 | 2022-01-13T21:11:53.000Z | 2022-01-13T21:11:53.000Z | src/evolvepy/generator/mutation/numeric_mutation.py | EltonCN/evolvepy | 4489264d6c03ea4f3c23ea665fdf12fe4ead1ccc | [
"MIT"
] | null | null | null | src/evolvepy/generator/mutation/numeric_mutation.py | EltonCN/evolvepy | 4489264d6c03ea4f3c23ea665fdf12fe4ead1ccc | [
"MIT"
] | null | null | null | from numba.core.utils import chain_exception
import numpy as np
from numpy.typing import ArrayLike
import numba
from typing import Tuple


@numba.njit
def sum_mutation(chromosome: ArrayLike, existence_rate: float, gene_rate: float, mutation_range: Tuple[float, float]):
    '''
    Takes a chromosome and adds a random value within the mutation range to
    one of its genes, then repeats the process with the given probability.

    Args:
        chromosome (np.ArrayLike): array of genes
        existence_rate (float): probability of the first mutation
        gene_rate (float): probability of another gene mutation
        mutation_range (Tuple[float, float]): (lower, upper) bounds of the
            random offset added to a mutated gene

    Returns:
        new_chromosome (np.ArrayLike): new mutated individual
    '''
    chromosome = np.asarray(chromosome)
    new_chromosome = chromosome.copy()
    first = True
    count = 0

    if np.random.rand() < existence_rate:
        while (first or np.random.rand() < gene_rate) and count < chromosome.shape[0]:
            first = False
            index = np.random.randint(0, chromosome.shape[0])
            new_chromosome[index] = chromosome[index] + np.random.uniform(mutation_range[0], mutation_range[1])
            count += 1

    return new_chromosome


def mul_mutation(chromosome: ArrayLike, existence_rate: float, gene_rate: float, mutation_range: Tuple[float, float]):
    '''
    Takes a chromosome and multiplies one of its genes by a random value
    within the mutation range, then repeats the process with the given
    probability.

    Args:
        chromosome (np.ArrayLike): array of genes
        existence_rate (float): probability of the first mutation
        gene_rate (float): probability of another gene mutation
        mutation_range (Tuple[float, float]): (lower, upper) bounds of the
            random factor applied to a mutated gene

    Returns:
        new_chromosome (np.ArrayLike): new mutated individual
    '''
    chromosome = np.asarray(chromosome)
    new_chromosome = chromosome.copy()
    first = True

    if np.random.rand() < existence_rate:
        while first or np.random.rand() < gene_rate:
            first = False
            index = np.random.randint(0, chromosome.shape[0])
            new_chromosome[index] = new_chromosome[index] * np.random.uniform(mutation_range[0], mutation_range[1])

    return new_chromosome | 35.34375 | 123 | 0.689655 | 286 | 2,262 | 5.342657 | 0.241259 | 0.085079 | 0.04712 | 0.060209 | 0.853403 | 0.853403 | 0.853403 | 0.853403 | 0.853403 | 0.853403 | 0 | 0.006279 | 0.225464 | 2,262 | 64 | 124 | 35.34375 | 0.865868 | 0.376658 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.178571 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
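# Usage sketch (added): exercising the operators above. numba compiles
# sum_mutation on its first call; positional arguments are used to stay
# within what @numba.njit reliably supports:
#
#     import numpy as np
#     chrom = np.zeros(8)
#     mutated = sum_mutation(chrom, 0.9, 0.5, (-1.0, 1.0))
#     n_changed = int((mutated != chrom).sum())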
8786d2ae8435362c7e89cd9591d41a2a4207c6cf | 24 | py | Python | tianye/dpcnv.py | Smaller-T/tianye | ddcb7276d37458c9e0445095fd2b6853e1aefe13 | [
"BSD-2-Clause"
] | null | null | null | tianye/dpcnv.py | Smaller-T/tianye | ddcb7276d37458c9e0445095fd2b6853e1aefe13 | [
"BSD-2-Clause"
] | null | null | null | tianye/dpcnv.py | Smaller-T/tianye | ddcb7276d37458c9e0445095fd2b6853e1aefe13 | [
"BSD-2-Clause"
] | null | null | null | import time
print(time)
| 8 | 11 | 0.791667 | 4 | 24 | 4.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 2 | 12 | 12 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
87910f90a9219e6b5327662e7d1ac6fcf57336b3 | 21 | py | Python | utils/__init__.py | kyanome/stats_models | 81e01d052eed1b734cef484ec1094e56ea425bc8 | [
"Apache-2.0"
] | null | null | null | utils/__init__.py | kyanome/stats_models | 81e01d052eed1b734cef484ec1094e56ea425bc8 | [
"Apache-2.0"
] | null | null | null | utils/__init__.py | kyanome/stats_models | 81e01d052eed1b734cef484ec1094e56ea425bc8 | [
"Apache-2.0"
] | null | null | null | from .types_ import * | 21 | 21 | 0.761905 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87996dde7a6c437b60120e11cd4ecd561eef0d22 | 183 | py | Python | facebook/admin.py | Lh4cKg/django-facebook | 6fd4035c3bbfc4a5eb3a831e0de57a58893c8672 | [
"MIT"
] | 1 | 2016-05-29T16:44:17.000Z | 2016-05-29T16:44:17.000Z | facebook/admin.py | Lh4cKg/django-facebook | 6fd4035c3bbfc4a5eb3a831e0de57a58893c8672 | [
"MIT"
] | 1 | 2021-06-10T18:40:26.000Z | 2021-06-10T18:40:26.000Z | facebook/admin.py | Lh4cKg/django-facebook | 6fd4035c3bbfc4a5eb3a831e0de57a58893c8672 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.contrib import admin
from .models import FacebookProfile
@admin.register(FacebookProfile)
class FacebookProfileAdmin(admin.ModelAdmin):
    pass
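# Note (added): the @admin.register(FacebookProfile) decorator above is
# equivalent to calling admin.site.register(FacebookProfile,
# FacebookProfileAdmin) after the class definition.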
| 18.3 | 45 | 0.770492 | 20 | 183 | 7.05 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00625 | 0.125683 | 183 | 9 | 46 | 20.333333 | 0.875 | 0.114754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
87ce8f7e247a14c067ad39d409d04cd71352ae46 | 36 | py | Python | tests/__init__.py | pudumagico/RLASP | bce5b87404fdca60e983e4a187e734c49ac923fa | [
"MIT"
] | null | null | null | tests/__init__.py | pudumagico/RLASP | bce5b87404fdca60e983e4a187e734c49ac923fa | [
"MIT"
] | 1 | 2021-06-02T16:55:33.000Z | 2021-06-04T14:30:54.000Z | tests/__init__.py | pudumagico/RLASP | bce5b87404fdca60e983e4a187e734c49ac923fa | [
"MIT"
] | 2 | 2021-03-22T14:46:49.000Z | 2021-03-31T16:12:12.000Z | from .test_mdp_blocksworld import *
| 18 | 35 | 0.833333 | 5 | 36 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35629128c2b2095f25cc33bddc2154559bc8fbfa | 98 | py | Python | Course_work/screen/screen_manager_app.py | ZayJob/OS-and-N | a1f24382c791b76ac82bd7c86fe8c6f834f620f1 | [
"MIT"
] | null | null | null | Course_work/screen/screen_manager_app.py | ZayJob/OS-and-N | a1f24382c791b76ac82bd7c86fe8c6f834f620f1 | [
"MIT"
] | null | null | null | Course_work/screen/screen_manager_app.py | ZayJob/OS-and-N | a1f24382c791b76ac82bd7c86fe8c6f834f620f1 | [
"MIT"
] | null | null | null | from kivy.uix.screenmanager import ScreenManager
class ScreenManagerApp(ScreenManager):
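    # Intentionally empty subclass; the screens themselves are expected to be registered elsewhere (typically in the app's .kv file).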
pass | 19.6 | 48 | 0.826531 | 10 | 98 | 8.1 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 98 | 5 | 49 | 19.6 | 0.94186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3575a86180b86fd445e33384fa06b9b6f7456998 | 27,262 | py | Python | users/tests/test_permissions.py | thibaudcolas/directory-cms | d958360fe5491a92977d754cfd0d7f8a4695639e | [
"MIT"
] | null | null | null | users/tests/test_permissions.py | thibaudcolas/directory-cms | d958360fe5491a92977d754cfd0d7f8a4695639e | [
"MIT"
] | null | null | null | users/tests/test_permissions.py | thibaudcolas/directory-cms | d958360fe5491a92977d754cfd0d7f8a4695639e | [
"MIT"
] | null | null | null | import pytest
from django.urls import reverse
from rest_framework import status
from export_readiness.tests.factories import ArticlePageFactory
from users.tests.factories import (
AdminFactory,
BranchEditorFactory,
BranchModeratorFactory,
two_branches_with_users,
)
@pytest.mark.CMS_837
@pytest.mark.django_db
def test_branch_editors_should_only_see_pages_from_their_branch(root_page):
"""
This reproduces Wagtail's admin call to list pages in the 'Pages' menu.
    Editors should only see app pages that share a common root page
"""
env = two_branches_with_users(root_page)
resp_1 = env.editor_1_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_1.pk}&for_explorer=1'
)
assert resp_1.status_code == status.HTTP_200_OK
assert resp_1.json()['meta']['total_count'] == 1
assert resp_1.json()['items'][0]['id'] == env.listing_1.pk
resp_2 = env.editor_2_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_2.pk}&for_explorer=1'
)
assert resp_2.status_code == status.HTTP_200_OK
assert resp_2.json()['meta']['total_count'] == 1
assert resp_2.json()['items'][0]['id'] == env.listing_2.pk
@pytest.mark.quirk
@pytest.mark.CMS_837
@pytest.mark.django_db
def test_branch_editors_cannot_access_pages_not_from_their_branch(root_page):
"""
    This reproduces the situation in which an editor tries to access a page
    that doesn't belong to their branch by simply changing the page ID in the URL
"""
env = two_branches_with_users(root_page)
resp_1 = env.editor_1_client.get(f'/admin/pages/{env.home_2.pk}/edit/')
assert resp_1.status_code == status.HTTP_403_FORBIDDEN
resp_2 = env.editor_2_client.get(f'/admin/pages/{env.home_1.pk}/edit/')
assert resp_2.status_code == status.HTTP_403_FORBIDDEN
resp_3 = env.editor_1_client.get(f'/admin/pages/{env.home_2.pk}/')
assert resp_3.status_code == status.HTTP_302_FOUND
assert resp_3.url == f'/admin/pages/{env.home_1.pk}/'
resp_4 = env.editor_2_client.get(f'/admin/pages/{env.home_1.pk}/')
assert resp_4.status_code == status.HTTP_302_FOUND
assert resp_4.url == f'/admin/pages/{env.home_2.pk}/'
    # Unfortunately, at the API level Wagtail allows users to list pages that
    # belong to a different branch
resp_6 = env.editor_1_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_2.pk}&for_explorer=1' # NOQA
)
assert resp_6.status_code == status.HTTP_200_OK
assert resp_6.json()['meta']['total_count'] == 1
assert resp_6.json()['items'][0]['id'] == env.listing_2.pk
@pytest.mark.CMS_837
@pytest.mark.django_db
def test_branch_moderators_should_only_see_pages_from_their_branch(root_page):
"""
This reproduces Wagtail's admin call to list pages in the 'Pages' menu.
    Moderators should only see app pages that share a common root page
"""
env = two_branches_with_users(root_page)
resp_1 = env.moderator_1_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_1.pk}&for_explorer=1'
)
assert resp_1.status_code == status.HTTP_200_OK
assert resp_1.json()['meta']['total_count'] == 1
assert resp_1.json()['items'][0]['id'] == env.listing_1.pk
resp_2 = env.moderator_2_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_2.pk}&for_explorer=1'
)
assert resp_2.status_code == status.HTTP_200_OK
assert resp_2.json()['meta']['total_count'] == 1
assert resp_2.json()['items'][0]['id'] == env.listing_2.pk
@pytest.mark.quirk
@pytest.mark.CMS_837
@pytest.mark.django_db
def test_moderators_cannot_access_pages_not_from_their_branch(root_page):
"""
    This reproduces the situation in which a moderator tries to access a page
    that doesn't belong to their branch by simply changing the page ID in the URL
"""
env = two_branches_with_users(root_page)
resp_1 = env.moderator_1_client.get(
f'/admin/pages/{env.home_2.pk}/edit/'
)
assert resp_1.status_code == status.HTTP_403_FORBIDDEN
resp_2 = env.moderator_2_client.get(
f'/admin/pages/{env.home_1.pk}/edit/'
)
assert resp_2.status_code == status.HTTP_403_FORBIDDEN
resp_3 = env.moderator_1_client.get(f'/admin/pages/{env.home_2.pk}/')
assert resp_3.status_code == status.HTTP_302_FOUND
assert resp_3.url == f'/admin/pages/{env.home_1.pk}/'
resp_4 = env.moderator_2_client.get(f'/admin/pages/{env.home_1.pk}/')
assert resp_4.status_code == status.HTTP_302_FOUND
assert resp_4.url == f'/admin/pages/{env.home_2.pk}/'
    # Unfortunately, at the API level Wagtail allows users to list pages that
    # belong to a different branch
resp_6 = env.moderator_1_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_2.pk}&for_explorer=1'
)
assert resp_6.status_code == status.HTTP_200_OK
assert resp_6.json()['meta']['total_count'] == 1
assert resp_6.json()['items'][0]['id'] == env.listing_2.pk
@pytest.mark.django_db
def test_moderators_can_approve_revisions_only_for_pages_in_their_branch(
root_page
):
env = two_branches_with_users(root_page)
new_title = 'The title was modified'
env.article_2.title = new_title
revision = env.article_2.save_revision(
user=env.editor_2, submitted_for_moderation=True
)
resp_1 = env.moderator_1_client.post(
reverse('wagtailadmin_pages:approve_moderation', args=[revision.pk])
)
assert resp_1.status_code == status.HTTP_403_FORBIDDEN
    # after publishing a page, the user is redirected to the '/admin/' page
resp_2 = env.moderator_2_client.post(
reverse('wagtailadmin_pages:approve_moderation', args=[revision.pk]),
follow=True,
)
assert resp_2.status_code == status.HTTP_200_OK
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
AdminFactory,
])
def test_branch_user_can_create_child_pages_in_it(branch_factory, root_page):
branch = branch_factory.get(root_page)
data = {
'article_title': 'test article',
'article_teaser': 'test article',
'article_body_text': 'test article',
'title_en_gb': 'test article',
'slug': 'test-article',
}
resp_1 = branch.client.post(
reverse(
'wagtailadmin_pages:add',
args=[
branch.article._meta.app_label,
branch.article._meta.model_name,
branch.listing.pk],
),
data=data,
)
assert (
resp_1.status_code == status.HTTP_302_FOUND
), f'Something went wrong: {resp_1.context["form"].errors}'
    # check that the new page is visible in the 'Pages' menu
new_article_id = int(resp_1.url.split('/')[3]) # format is '/admin/pages/6/edit/' # NOQA
resp_2 = branch.client.get(
f'/admin/api/v2beta/pages/?child_of={branch.listing.pk}&for_explorer=1' # NOQA
)
assert resp_2.status_code == status.HTTP_200_OK
assert resp_2.json()['meta']['total_count'] == 2
assert resp_2.json()['items'][0]['id'] == branch.article.pk
assert resp_2.json()['items'][1]['id'] == new_article_id
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
AdminFactory,
])
def test_branch_user_cant_create_child_pages_without_mandatory_data(
branch_factory, root_page
):
branch = branch_factory.get(root_page)
mandatory_fields = {
'article_title',
'article_body_text',
'title_en_gb',
'slug',
}
data = {}
resp = branch.client.post(
reverse(
'wagtailadmin_pages:add',
args=[
branch.article._meta.app_label,
branch.article._meta.model_name,
branch.listing.pk
],
),
data=data,
)
assert resp.status_code == status.HTTP_200_OK
assert not (mandatory_fields - set(resp.context['form'].errors.keys()))
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
])
def test_branch_user_cant_create_pages_in_branch_they_dont_manage(
branch_factory, root_page
):
branch_1 = branch_factory.get(root_page)
branch_2 = branch_factory.get(root_page)
data = {
'article_title': 'test article',
'article_teaser': 'test article',
'article_body_text': 'test article',
'title_en_gb': 'test article',
'slug': 'test-article',
'action-publish': 'action-publish',
}
resp = branch_1.client.post(
reverse(
'wagtailadmin_pages:add',
args=[
branch_2.article._meta.app_label,
branch_2.article._meta.model_name,
branch_2.listing.pk
],
),
data=data,
)
assert resp.status_code == status.HTTP_403_FORBIDDEN
@pytest.mark.CMS_841
@pytest.mark.django_db
def test_admins_can_create_pages_in_any_branch(root_page):
env = two_branches_with_users(root_page)
# Add ExRed Article page
data_1 = {
'article_title': 'test article',
'article_teaser': 'test article',
'article_body_text': 'test article',
'title_en_gb': 'test article',
'slug': 'test-article',
'action-publish': 'action-publish',
}
resp_1 = env.admin_client.post(
reverse(
'wagtailadmin_pages:add',
args=[
env.article_1._meta.app_label,
env.article_1._meta.model_name,
env.listing_1.pk
],
),
data=data_1,
)
assert resp_1.status_code == status.HTTP_302_FOUND
assert resp_1.url.startswith('/admin/pages/') # format is /admin/pages/3/
# Add FAS Industry Article page
data_2 = {
'article_title': 'test article',
'article_teaser': 'test article',
'article_body_text': 'test article',
'title_en_gb': 'test article',
'body': 'this is a test page',
'slug': 'test-article',
'action-publish': 'action-publish',
'breadcrumbs_label_en_gb': 'test breadcrumb',
'introduction_title_en_gb': 'test introduction',
'author_name_en_gb': 'dit',
'job_title_en_gb': 'dit',
'proposition_text_en_gb': 'test proposition',
'call_to_action_text_en_gb': 'contact us',
'back_to_home_link_text_en_gb': 'home',
'social_share_title_en_gb': 'share',
'date_en_gb': '2019-01-01',
}
resp_2 = env.admin_client.post(
reverse(
'wagtailadmin_pages:add',
args=[
env.article_2._meta.app_label,
env.article_2._meta.model_name,
env.listing_2.pk
],
),
data=data_2,
)
assert resp_2.status_code == status.HTTP_302_FOUND
assert resp_2.url.startswith('/admin/pages/') # format is /admin/pages/3/
@pytest.mark.CMS_839
@pytest.mark.django_db
def test_editors_cannot_publish_child_pages(root_page):
env = two_branches_with_users(root_page)
draft_page = ArticlePageFactory(
parent=env.landing_1, live=False
)
revision = draft_page.save_revision(
user=env.editor_1, submitted_for_moderation=True
)
resp = env.editor_1_client.post(
reverse('wagtailadmin_pages:approve_moderation', args=[revision.pk])
)
assert resp.status_code == status.HTTP_403_FORBIDDEN
@pytest.mark.CMS_839
@pytest.mark.django_db
def test_editors_cannot_unpublish_child_pages(root_page):
env = two_branches_with_users(root_page)
resp = env.editor_1_client.post(
reverse('wagtailadmin_pages:unpublish', args=[env.article_1.pk])
)
assert resp.status_code == status.HTTP_403_FORBIDDEN
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
AdminFactory,
])
def test_branch_user_can_submit_changes_for_moderation(
branch_factory, root_page
):
branch = branch_factory.get(root_page)
data = {
'article_title': 'new title',
'article_teaser': 'new teaser',
'article_body_text': 'new body text',
'title_en_gb': 'next title',
'action-submit': 'Submit for moderation', # this action triggers notification # NOQA
}
resp = branch.client.post(
reverse('wagtailadmin_pages:edit', args=[branch.article.pk]),
data=data
)
    # on success, the user should be redirected to the parent page listing
assert resp.status_code == status.HTTP_302_FOUND, resp.context['form'].errors # NOQA
assert int(resp.url.split('/')[3]) == branch.listing.pk # format is /admin/pages/3/ # NOQA
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
AdminFactory,
])
def test_branch_user_can_view_drafts(branch_factory, root_page):
branch = branch_factory.get(root_page)
data = {
'article_title': 'new title',
'article_teaser': 'new teaser',
'article_body_text': 'new body text',
'title_en_gb': 'next title',
        # omitting 'action-submit' means the page is saved as a draft
}
# Create a draft and stay on the same admin page
resp_1 = branch.client.post(
reverse('wagtailadmin_pages:edit', args=[branch.article.pk]), data=data
)
assert resp_1.status_code == status.HTTP_302_FOUND
assert 'has been updated' in resp_1.context['message']
assert int(resp_1.url.split('/')[3]) == branch.article.pk # format is /admin/pages/3/edit/ # NOQA
    # Viewing the draft will redirect the user to the application site
resp_2 = branch.client.get(
reverse('wagtailadmin_pages:view_draft', args=[branch.article.pk])
)
assert resp_2.status_code == status.HTTP_302_FOUND
assert branch.article.slug in resp_2.url
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
AdminFactory,
])
def test_branch_user_can_list_revisions(branch_factory, root_page):
branch = branch_factory.get(root_page)
revision = branch.article.save_revision(
user=branch.user, submitted_for_moderation=True
)
revert_path = f'/admin/pages/{branch.article.pk}/revisions/{revision.pk}/revert/' # NOQA
resp = branch.client.get(
reverse('wagtailadmin_pages:revisions_index', args=[branch.article.pk])
)
assert resp.status_code == status.HTTP_200_OK
assert revert_path in resp.content.decode()
@pytest.mark.CMS_839
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
'branch_factory', [
BranchEditorFactory,
BranchModeratorFactory,
AdminFactory,
])
def test_branch_user_can_compare_changes_between_revisions(
branch_factory, root_page
):
branch = branch_factory.get(root_page)
new_title = 'The title was modified'
branch.article.title = new_title
revision = branch.article.save_revision(
user=branch.user, submitted_for_moderation=True
)
# compare current 'live' version of the page with the revision
resp = branch.client.get(
reverse(
'wagtailadmin_pages:revisions_compare',
args=[branch.article.pk, 'live', revision.id],
)
)
content = resp.content.decode()
assert resp.status_code == status.HTTP_200_OK
assert new_title in content
assert 'There are no differences between these two revisions' \
not in content
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
"branch_factory", [
BranchModeratorFactory,
AdminFactory,
])
def test_moderators_and_admins_can_publish_child_pages(
branch_factory, root_page
):
branch = branch_factory.get(root_page)
draft_page = ArticlePageFactory(parent=branch.listing, live=False)
revision = draft_page.save_revision(
user=branch.user, submitted_for_moderation=True,
)
resp = branch.client.post(
reverse('wagtailadmin_pages:approve_moderation', args=[revision.pk])
)
assert resp.status_code == status.HTTP_302_FOUND
assert resp.url == '/admin/'
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
@pytest.mark.parametrize(
"branch_factory", [
BranchModeratorFactory,
AdminFactory,
])
def test_moderators_and_admins_can_unpublish_child_pages(
branch_factory, root_page
):
branch = branch_factory.get(root_page)
resp = branch.client.post(
reverse('wagtailadmin_pages:unpublish', args=[branch.article.pk])
)
assert resp.status_code == status.HTTP_302_FOUND
assert int(resp.url.split('/')[3]) == branch.listing.pk # format is /admin/pages/4/ # NOQA
resp_2 = branch.client.get(
f'/admin/api/v2beta/pages/?child_of={branch.listing.pk}&for_explorer=1'
)
assert resp_2.status_code == status.HTTP_200_OK
article_status = resp_2.json()['items'][0]['meta']['status']
assert article_status['status'] == 'draft'
assert not article_status['live']
assert article_status['has_unpublished_changes']
@pytest.mark.quirk
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
def test_moderators_and_admins_can_view_revisions_from_other_branches(
root_page
):
"""
    Unfortunately, at the API level Wagtail allows moderators to view revisions
    from other branches.
"""
env = two_branches_with_users(root_page)
revision_1 = env.article_1.save_revision(
user=env.editor_1, submitted_for_moderation=True
)
revision_2 = env.article_2.save_revision(
user=env.editor_2, submitted_for_moderation=True
)
revert_path_1 = f'/admin/pages/{env.article_1.pk}/revisions/{revision_1.pk}/revert/' # NOQA
revert_path_2 = f'/admin/pages/{env.article_2.pk}/revisions/{revision_2.pk}/revert/' # NOQA
resp_1 = env.moderator_1_client.get(
reverse('wagtailadmin_pages:revisions_index',
args=[env.article_1.pk])
)
assert resp_1.status_code == status.HTTP_200_OK
content_1 = resp_1.content.decode()
assert revert_path_1 in content_1
assert revert_path_2 not in content_1
resp_2 = env.moderator_1_client.get(
reverse('wagtailadmin_pages:revisions_index',
args=[env.article_2.pk])
)
assert resp_2.status_code == status.HTTP_200_OK
content_2 = resp_2.content.decode()
assert revert_path_1 not in content_2
assert revert_path_2 in content_2
resp_3 = env.moderator_2_client.get(
reverse('wagtailadmin_pages:revisions_index',
args=[env.article_1.pk])
)
assert resp_3.status_code == status.HTTP_200_OK
content_3 = resp_3.content.decode()
assert revert_path_1 in content_3
assert revert_path_2 not in content_3
resp_4 = env.moderator_2_client.get(
reverse('wagtailadmin_pages:revisions_index',
args=[env.article_2.pk])
)
assert resp_4.status_code == status.HTTP_200_OK
content_4 = resp_4.content.decode()
assert revert_path_1 not in content_4
assert revert_path_2 in content_4
@pytest.mark.CMS_840
@pytest.mark.CMS_841
@pytest.mark.django_db
def test_moderators_can_reject_revision(root_page):
env = two_branches_with_users(root_page)
new_title = 'The title was modified'
env.article_1.title = new_title
revision = env.article_1.save_revision(
user=env.editor_1, submitted_for_moderation=True
)
# Reject request for moderation
resp_1 = env.moderator_1_client.post(
reverse('wagtailadmin_pages:reject_moderation', args=[revision.pk])
)
assert resp_1.status_code == status.HTTP_302_FOUND
assert resp_1.url == '/admin/'
    # Verify that the rejection is visible
resp_2 = env.moderator_1_client.get(
reverse('wagtailadmin_pages:revisions_index', args=[env.article_1.pk])
)
assert resp_2.status_code == status.HTTP_200_OK
assert 'rejected for publication' in resp_2.content.decode()
@pytest.mark.CMS_841
@pytest.mark.django_db
def test_admins_can_reject_revision(root_page):
env = two_branches_with_users(root_page)
new_title = 'The title was modified'
env.article_1.title = new_title
revision = env.article_1.save_revision(
user=env.editor_1, submitted_for_moderation=True
)
# Reject request for moderation
resp_1 = env.admin_client.post(
reverse('wagtailadmin_pages:reject_moderation', args=[revision.pk])
)
assert resp_1.status_code == status.HTTP_302_FOUND
assert resp_1.url == '/admin/'
    # Verify that the rejection is visible
resp_2 = env.admin_client.get(
reverse('wagtailadmin_pages:revisions_index', args=[env.article_1.pk])
)
assert resp_2.status_code == status.HTTP_200_OK
assert 'rejected for publication' in resp_2.content.decode()
@pytest.mark.CMS_840
@pytest.mark.django_db
def test_moderators_cannot_reject_revision_from_other_branch(root_page):
env = two_branches_with_users(root_page)
new_title = 'The title was modified'
env.article_1.title = new_title
revision = env.article_1.save_revision(
user=env.editor_1, submitted_for_moderation=True
)
# Reject request for moderation
resp = env.moderator_2_client.post(
reverse('wagtailadmin_pages:reject_moderation', args=[revision.pk])
)
assert resp.status_code == status.HTTP_403_FORBIDDEN
@pytest.mark.CMS_836
@pytest.mark.django_db
def test_admins_should_be_able_to_access_all_pages_in_any_branch(root_page):
env = two_branches_with_users(root_page)
resp_1 = env.admin_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_1.pk}&for_explorer=1'
)
assert resp_1.status_code == status.HTTP_200_OK
resp_2 = env.admin_client.get(
f'/admin/api/v2beta/pages/?child_of={env.landing_2.pk}&for_explorer=1'
)
assert resp_2.status_code == status.HTTP_200_OK
resp_3 = env.admin_client.get(
f'/admin/api/v2beta/pages/?child_of={env.article_1.pk}&for_explorer=1'
)
assert resp_3.status_code == status.HTTP_200_OK
resp_4 = env.admin_client.get(
f'/admin/api/v2beta/pages/?child_of={env.article_2.pk}&for_explorer=1'
)
assert resp_4.status_code == status.HTTP_200_OK
@pytest.mark.quirk
@pytest.mark.CMS_836
@pytest.mark.django_db
def test_admins_should_be_able_to_reject_revision_from_any_branch(root_page):
"""
    Somehow Wagtail doesn't show the editor that the revision was rejected,
    so we have to use the admin client to check it (in the last assertion)
"""
env = two_branches_with_users(root_page)
# At this point there should be no revisions
resp_1 = env.editor_1_client.get(
reverse(
'wagtailadmin_pages:revisions_index',
args=[env.article_1.pk]
)
)
assert 'No revision of this page exist' in resp_1.content.decode()
# Make a change and save revision
new_title = 'The title was modified'
env.article_1.title = new_title
revision = env.article_1.save_revision(
user=env.editor_1, submitted_for_moderation=True
)
    # Check that the revision is visible
resp_2 = env.editor_1_client.get(
reverse(
'wagtailadmin_pages:revisions_index',
args=[env.article_1.pk]
)
)
assert new_title in resp_2.content.decode()
revert_url = f'/admin/pages/{env.article_1.pk}/revisions/{revision.pk}/revert/' # NOQA
assert revert_url in resp_2.content.decode()
# Reject request for moderation
resp_3 = env.admin_client.post(
reverse('wagtailadmin_pages:reject_moderation', args=[revision.pk])
)
assert resp_3.status_code == status.HTTP_302_FOUND
assert resp_3.url == '/admin/'
    # Verify that the rejection is visible
resp_4 = env.admin_client.get(
reverse(
'wagtailadmin_pages:revisions_index',
args=[env.article_1.pk]
)
)
assert resp_4.status_code == status.HTTP_200_OK
assert 'rejected for publication' in resp_4.content.decode()
@pytest.mark.CMS_841
@pytest.mark.django_db
def test_admins_should_have_permissions_to_manage_users(root_page):
"""Admins should have all required permissions to manage users."""
admin = AdminFactory.get(root_page)
permissions = {
'auth.add_group',
'auth.add_permission',
'auth.add_user',
'auth.change_group',
'auth.change_permission',
'auth.change_user',
'auth.delete_group',
'auth.delete_permission',
'auth.delete_user',
'wagtailusers.add_userprofile',
'wagtailusers.change_userprofile',
'wagtailusers.delete_userprofile',
}
assert not (permissions - admin.user.get_all_permissions())
@pytest.mark.CMS_838
@pytest.mark.django_db
@pytest.mark.parametrize(
"branch_factory", [
BranchEditorFactory,
BranchModeratorFactory,
])
def test_non_admin_user_should_not_have_permissions_to_manage_user_accounts(
branch_factory, root_page
):
"""Non-admin users should not have permissions to manage user accounts"""
branch = branch_factory.get(root_page)
permissions = {
'auth.add_group',
'auth.add_permission',
'auth.add_user',
'auth.change_group',
'auth.change_permission',
'auth.change_user',
'auth.delete_group',
'auth.delete_permission',
'auth.delete_user',
'wagtailusers.add_userprofile',
'wagtailusers.change_userprofile',
'wagtailusers.delete_userprofile',
}
assert (permissions - branch.user.get_all_permissions()) == permissions
@pytest.mark.CMS_838
@pytest.mark.django_db
@pytest.mark.parametrize(
"branch_factory", [
BranchEditorFactory,
BranchModeratorFactory,
])
def test_non_admin_user_should_not_be_able_to_access_manage_users_page(
branch_factory, root_page
):
"""Non-admin users can't access '/admin/users/' page"""
branch = branch_factory.get(root_page)
resp = branch.client.get(reverse('wagtailusers_users:index'), follow=True)
assert resp.status_code == status.HTTP_200_OK
assert resp.context['url'] == '/admin/pages/'
content = resp.content.decode()
assert 'Sorry, you do not have permission to access this area.' in content
@pytest.mark.CMS_838
@pytest.mark.django_db
def test_admin_user_should_be_able_to_access_manage_users_page(root_page):
"""Admins can access '/admin/users/' page"""
admin = AdminFactory.get(root_page)
resp = admin.client.get(reverse('wagtailusers_users:index'))
assert resp.status_code == status.HTTP_200_OK
| 32.610048 | 103 | 0.686853 | 3,732 | 27,262 | 4.714898 | 0.075295 | 0.048306 | 0.044556 | 0.055694 | 0.850818 | 0.819277 | 0.788759 | 0.761707 | 0.727381 | 0.685383 | 0 | 0.026716 | 0.202296 | 27,262 | 835 | 104 | 32.649102 | 0.782407 | 0.081909 | 0 | 0.658718 | 0 | 0 | 0.196234 | 0.114916 | 0 | 0 | 0 | 0 | 0.154993 | 1 | 0.040238 | false | 0 | 0.007452 | 0 | 0.04769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
35963e29d7ff5b631deca84203925ec2dd4748ce | 23 | py | Python | pixel_exps/models/mnist/__init__.py | THUYimingLi/Semi-supervised_Robust_Training | 17a6d6fbb4ff3bc4951c1506981dbb6f87f1c26a | [
"Apache-2.0"
] | 14 | 2020-03-17T02:47:56.000Z | 2022-02-26T02:17:34.000Z | pixel_exps/models/mnist/__init__.py | THUYimingLi/Semi-supervised_Robust_Training | 17a6d6fbb4ff3bc4951c1506981dbb6f87f1c26a | [
"Apache-2.0"
] | 3 | 2020-09-25T22:34:41.000Z | 2022-02-09T23:34:55.000Z | pixel_exps/models/mnist/__init__.py | THUYimingLi/Semi-supervised_Robust_Training | 17a6d6fbb4ff3bc4951c1506981dbb6f87f1c26a | [
"Apache-2.0"
] | 4 | 2020-03-19T03:28:03.000Z | 2022-01-20T03:06:59.000Z | from . import small_cnn | 23 | 23 | 0.826087 | 4 | 23 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35b3b700e1829f6202883ffe961bc8d0579a9e73 | 159 | py | Python | Raw_String.py | BeenashPervaiz/Command_Line_Task | a603fbdd06717ff157ecd72881d08329413fd82c | [
"MIT"
] | null | null | null | Raw_String.py | BeenashPervaiz/Command_Line_Task | a603fbdd06717ff157ecd72881d08329413fd82c | [
"MIT"
] | null | null | null | Raw_String.py | BeenashPervaiz/Command_Line_Task | a603fbdd06717ff157ecd72881d08329413fd82c | [
"MIT"
] | null | null | null | # output: Line A \n Line B
print(r"Line A \n Line B")
# output: \" \n \t \'
print(r"\" \n \t \'")
# output: these are /\/\/\\/\/\mountains
print(r"these are /\/\/\\/\/\mountains") | 26.5 | 40 | 0.496855 | 26 | 159 | 3.038462 | 0.423077 | 0.227848 | 0.151899 | 0.253165 | 0.278481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163522 | 159 | 6 | 40 | 26.5 | 0.593985 | 0.396226 | 0 | 0 | 0 | 0 | 0.505376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
35bb96de1939578fdd84fa8bcfce820bdd0e9e4b | 195 | py | Python | Codewars/8kyu/third-angle-of-a-triangle/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/8kyu/third-angle-of-a-triangle/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/8kyu/third-angle-of-a-triangle/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.6.0
Test.assert_equals(other_angle(30, 60), 90)
Test.assert_equals(other_angle(60, 60), 60)
Test.assert_equals(other_angle(43, 78), 59)
Test.assert_equals(other_angle(10, 20), 150)
| 27.857143 | 44 | 0.748718 | 36 | 195 | 3.833333 | 0.5 | 0.289855 | 0.463768 | 0.608696 | 0.753623 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157303 | 0.087179 | 195 | 6 | 45 | 32.5 | 0.617978 | 0.071795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ea2a5ecc8c9c6f2a008c085f932e0bf6aff80444 | 6,910 | py | Python | tests/auto_cluster_test.py | SeaOfOcean/EasyParallelLibrary | 93baaa851f5ce078b1c55032a27398a588ca4107 | [
"Apache-2.0"
] | 100 | 2022-02-23T08:54:35.000Z | 2022-03-31T04:02:38.000Z | tests/auto_cluster_test.py | chenyang472043503/EasyParallelLibrary | cd2873fe04c86c62e55418129ba2f1dc83d222b4 | [
"Apache-2.0"
] | null | null | null | tests/auto_cluster_test.py | chenyang472043503/EasyParallelLibrary | cd2873fe04c86c62e55418129ba2f1dc83d222b4 | [
"Apache-2.0"
] | 22 | 2022-02-23T09:02:01.000Z | 2022-03-18T03:24:00.000Z | # Copyright 2021 Alibaba Group Holding Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Test for cluster."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import test
import epl
from epl.env import Env
from epl.ir.graph import Graph
from epl.ir.phase import ModelPhase
# pylint: disable=missing-docstring,unused-argument,unused-variable,
# pylint: disable=protected-access
class AutoClusterTest(test.TestCase):
def test_pipeline(self):
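    # Two pipeline stages, each replicated twice: EPL is expected to carve the 4 local GPUs
    # into 2 virtual devices (one per stage), as the assertions below check.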
config = epl.Config()
config.pipeline.num_micro_batch = 2
epl.init(config)
with tf.Graph().as_default():
with epl.replicate(name="stage_0", device_count=1):
num_x = np.random.randint(0, 10, (500, 10)).astype(dtype=np.float32)
num_y = np.random.randint(0, 10, 500).astype(dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((num_x, num_y)) \
.batch(10).repeat(1)
iterator = dataset.make_initializable_iterator()
tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS,
iterator.initializer)
x, _ = iterator.get_next()
dense1 = tf.layers.dense(inputs=x, units=16, activation=None)
with epl.replicate(name="stage_1", device_count=1):
logits = tf.layers.dense(inputs=dense1, units=10, activation=None)
loss = tf.reduce_mean(logits)
g = Graph.get()
self.assertEqual(g._current_model_phase, ModelPhase.FORWARD)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(list(zip(grads, tvars)))
self.assertTrue(Env.get().cluster is not None)
self.assertEqual(Env.get().cluster.worker_num, 1)
self.assertEqual(len(Env.get().cluster.virtual_devices), 2)
self.assertEqual(Env.get().cluster.virtual_devices[0]._slice_devices,
[['/job:worker/replica:0/task:0/device:GPU:0'], ['/job:worker/replica:0/task:0/device:GPU:2']])
self.assertEqual(Env.get().cluster.virtual_devices[1]._slice_devices,
[['/job:worker/replica:0/task:0/device:GPU:1'], ['/job:worker/replica:0/task:0/device:GPU:3']])
with tf.train.MonitoredTrainingSession() as sess:
loss_value = sess.run([train_op, loss])
print(loss_value)
def test_model_parallelism(self):
epl.init()
with tf.Graph().as_default():
with epl.replicate(name="stage_0", device_count=1):
num_x = np.random.randint(0, 10, (500, 10)).astype(dtype=np.float32)
num_y = np.random.randint(0, 10, 500).astype(dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((num_x, num_y)) \
.batch(10).repeat(1)
iterator = dataset.make_initializable_iterator()
tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS,
iterator.initializer)
x, _ = iterator.get_next()
dense1 = tf.layers.dense(inputs=x, units=16, activation=None)
with epl.replicate(name="stage_1", device_count=1):
logits = tf.layers.dense(inputs=dense1, units=10, activation=None)
loss = tf.reduce_mean(logits)
g = Graph.get()
self.assertEqual(g._current_model_phase, ModelPhase.FORWARD)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(list(zip(grads, tvars)))
self.assertTrue(Env.get().cluster is not None)
self.assertEqual(Env.get().cluster.worker_num, 1)
self.assertEqual(len(Env.get().cluster.virtual_devices), 2)
self.assertEqual(Env.get().cluster.virtual_devices[0]._slice_devices,
[['/job:worker/replica:0/task:0/device:GPU:0'], ['/job:worker/replica:0/task:0/device:GPU:2']])
self.assertEqual(Env.get().cluster.virtual_devices[1]._slice_devices,
[['/job:worker/replica:0/task:0/device:GPU:1'], ['/job:worker/replica:0/task:0/device:GPU:3']])
with tf.train.MonitoredTrainingSession() as sess:
loss_value = sess.run([train_op, loss])
print(loss_value)
def test_dp(self):
epl.init()
with tf.Graph().as_default():
with epl.replicate(name="replica", device_count=1):
num_x = np.random.randint(0, 10, (500, 10)).astype(dtype=np.float32)
num_y = np.random.randint(0, 10, 500).astype(dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((num_x, num_y)) \
.batch(10).repeat(1)
iterator = dataset.make_initializable_iterator()
tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS,
iterator.initializer)
x, _ = iterator.get_next()
dense1 = tf.layers.dense(inputs=x, units=16, activation=None)
logits = tf.layers.dense(inputs=dense1, units=10, activation=None)
loss = tf.reduce_mean(logits)
g = Graph.get()
self.assertEqual(g._current_model_phase, ModelPhase.FORWARD)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(list(zip(grads, tvars)))
self.assertTrue(Env.get().cluster is not None)
self.assertEqual(Env.get().cluster.worker_num, 1)
self.assertEqual(len(Env.get().cluster.virtual_devices), 1)
self.assertEqual(Env.get().cluster.virtual_devices[0]._slice_devices,
[['/job:worker/replica:0/task:0/device:GPU:0'], ['/job:worker/replica:0/task:0/device:GPU:1'],
['/job:worker/replica:0/task:0/device:GPU:2'], ['/job:worker/replica:0/task:0/device:GPU:3']])
with tf.train.MonitoredTrainingSession() as sess:
loss_value = sess.run([train_op, loss])
print(loss_value)
# pylint: enable=missing-docstring,unused-argument,unused-variable,
# pylint: enable=protected-access
if __name__ == '__main__':
test.main()
| 48.661972 | 118 | 0.668162 | 943 | 6,910 | 4.73913 | 0.208908 | 0.021929 | 0.040725 | 0.045648 | 0.788991 | 0.788991 | 0.788767 | 0.766391 | 0.766391 | 0.766167 | 0 | 0.028663 | 0.18712 | 6,910 | 141 | 119 | 49.007092 | 0.766957 | 0.127786 | 0 | 0.794643 | 0 | 0 | 0.089107 | 0.081945 | 0 | 0 | 0 | 0 | 0.151786 | 1 | 0.026786 | false | 0 | 0.089286 | 0 | 0.125 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ea4f21c943d9bfc2f63cd61ae1973dc65b42d081 | 2,764 | py | Python | advent2017_day1.py | coandco/advent2017 | 1613b0bc4a5c857cd89c739436a7318203eebee5 | [
"MIT"
] | null | null | null | advent2017_day1.py | coandco/advent2017 | 1613b0bc4a5c857cd89c739436a7318203eebee5 | [
"MIT"
] | null | null | null | advent2017_day1.py | coandco/advent2017 | 1613b0bc4a5c857cd89c739436a7318203eebee5 | [
"MIT"
] | null | null | null | INPUT_DATA = ["23736999148234612466339528635467298545732686574853341217977818839783527958414997199979851227942926872717"
"17554614189745585382464299867475324178461575265232389313518985482795494566944884334389827447822582791733"
"23381571985454236569393975735715331438256795579514159946537868358735936832487422938678194757687698143224"
"13924315122247513133713584379361174238326718615866572692796765558387548551551262614293535742185295377573"
"37489419269833777253861961874861313374585748298487237113559296846252235644894855975647683174328938366292"
"55273452776232319265422533449549956244791565573727762687439221862632722277129613329167189874939414298584"
"61649683922323919727756364185374619323254322281329819516934518649986614758655978152383459568349615158154"
"68291127455333477962136738149958491563216743796443231592591319254449612968211674836288123953915335725556"
"24159939279125341335147234653572977345582135728994395631685618135563662689854691976843435785879952751266"
"62764565398128189164382371752875734113674788151861143924687737393575815111918558792133217518933243652273"
"21442786134867165258972628792877729695294455117369249627772623949615475792487313432452419639147759912921"
"77151554446695134653596633433171866618541957233463548142173235821168156636824233487983766612338498874251"
"67299391744636686583261847549134125397326755611332324511384514812154652639699599117173983714747997864516"
"64179889182892878443845139743693979743788198485521539616518815281346248694545634888586252613567635627232"
"61767873542683796675797124322382732437235544965647934514871672522777378931524994784845817584793564974285"
"13986797218588718598735346848815528369846422641595158313835283994362129411726248355986766159629975398634"
"72447863395431745942664228157946584776298293834618292619945913188515879635548294593538928258479789718233"
"47219468516784857348649693185172199398234123745415271222891161175788713733444497592853221743138324235934"
"21665832371726771531874453768945911318854989673758163787955256882954836573831459385122111393291976784413"
"7362623398623853789938824592"]
def check_string_index(string, index):
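    # Compare each digit with the one halfway around the circular sequence (the puzzle's "halfway around" rule).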
    next_index = (index + len(string) // 2) % len(string)  # integer division keeps the index an int under Python 3
    return string[index] == string[next_index]
output_array = []
for index, character in enumerate(INPUT_DATA[0]):
if check_string_index(INPUT_DATA[0], index):
output_array.append(int(character))
print("%r" % output_array)
print("The answer is: %d" % sum(output_array))
| 76.777778 | 121 | 0.824891 | 77 | 2,764 | 29.441558 | 0.61039 | 0.019409 | 0.014116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.843277 | 0.138929 | 2,764 | 35 | 122 | 78.971429 | 0.109244 | 0 | 0 | 0 | 0 | 0 | 0.741297 | 0.734335 | 0 | 1 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0 | 0 | 0.068966 | 0.068966 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57c16404b1718f28eceb2e3222b58cb81adf72ed | 889 | py | Python | happyplace/blog/views.py | RifatPiyal/Happy_Place_Online | 2b3c0e3972d0d49a292646567f0ee81ca937577d | [
"MIT"
] | null | null | null | happyplace/blog/views.py | RifatPiyal/Happy_Place_Online | 2b3c0e3972d0d49a292646567f0ee81ca937577d | [
"MIT"
] | null | null | null | happyplace/blog/views.py | RifatPiyal/Happy_Place_Online | 2b3c0e3972d0d49a292646567f0ee81ca937577d | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.contrib.auth.decorators import login_required
# Each view takes the request and renders the corresponding HTML template
def home(request):
return render(request, 'blog/index.html')
def about(request):
return render(request, 'blog/about.html')
def contact(request):
return render(request, 'blog/Contact.html')
def FAQ(request):
return render(request, 'blog/FAQ.html')
def advice(request):
return render(request, 'blog/advice.html')
def login(request):
return render(request, 'blog/login.html')
def individual(request):
return render(request, 'blog/Quesform.html')
@login_required
def dashboard(request):
return render(request, 'blog/dashboard.html')
def c_dashboard(request):
return render(request, 'blog/Cdashboard.html')
def loginCounselor(request):
return render(request, 'blog/loginCounselor.html')
| 18.520833 | 57 | 0.732283 | 114 | 889 | 5.684211 | 0.280702 | 0.200617 | 0.29321 | 0.401235 | 0.490741 | 0.12037 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148481 | 889 | 47 | 58 | 18.914894 | 0.856011 | 0.051744 | 0 | 0 | 0 | 0 | 0.205251 | 0.02864 | 0 | 0 | 0 | 0 | 0 | 1 | 0.434783 | false | 0 | 0.086957 | 0.434783 | 0.956522 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
57d94bc7c962cc43424a145f6260aeacfc6a3f7e | 4,737 | py | Python | app/main/tests/test_step_ab_3.py | spetrovic450/ksvotes.org | 1fa25a4098657b5f2f89e345332a26b92b993ecd | [
"MIT"
] | 10 | 2018-08-28T13:35:27.000Z | 2021-07-17T18:01:04.000Z | app/main/tests/test_step_ab_3.py | spetrovic450/ksvotes.org | 1fa25a4098657b5f2f89e345332a26b92b993ecd | [
"MIT"
] | 253 | 2018-05-14T14:51:35.000Z | 2021-07-23T00:49:04.000Z | app/main/tests/test_step_ab_3.py | spetrovic450/ksvotes.org | 1fa25a4098657b5f2f89e345332a26b92b993ecd | [
"MIT"
] | 5 | 2019-09-05T15:10:32.000Z | 2021-09-30T23:37:04.000Z | from app.models import *
def create_registrant(db_session):
registrant = Registrant(
registration_value={
"name_first": "foo",
"name_last": "bar",
"dob": "01/01/2000",
"email": "foo@example.com",
"elections": "General",
},
county="TEST",
reg_lookup_complete = True,
is_citizen=True
)
registrant.save(db_session)
return registrant
def test_ab_3_no_address_provided(app, db_session, client):
registrant = create_registrant(db_session)
with client.session_transaction() as http_session:
http_session['session_id'] = str(registrant.session_id)
form_payload = {}
response = client.post('/ab/address', data=form_payload, follow_redirects=False)
assert response.status_code != 302
def test_ab_3_single_valid_address(app, db_session, client):
registrant = create_registrant(db_session)
with client.session_transaction() as http_session:
http_session['session_id'] = str(registrant.session_id)
form_payload = {
'addr': "707 Vermont St",
'unit': "Room A",
'city': "Lawrence",
'state': "KANSAS",
'zip': '66044'
}
response = client.post('/ab/address', data=form_payload, follow_redirects=False)
redirect_data = response.data.decode()
assert response.status_code == 302
assert ('/ab/identification' in redirect_data) == True
updated_registrant = db_session.query(Registrant).filter_by(session_id = registrant.session_id).first()
assert updated_registrant.registration_value.get('addr') == '707 Vermont St'
assert 'validated_addresses' in updated_registrant.registration_value
assert updated_registrant.registration_value['validated_addresses']['current_address']['state'] == 'KS'
def test_ab_3_single_address_no_county(app, db_session, client):
registrant = create_registrant(db_session)
registrant.county = None
registrant.save(db_session)
with client.session_transaction() as http_session:
http_session['session_id'] = str(registrant.session_id)
form_payload = {
'addr': "707 Vermont St",
'unit': "Room A",
'city': "Lawrence",
'state': "KANSAS",
'zip': '66044'
}
response = client.post('/ab/address', data=form_payload, follow_redirects=False)
updated_registrant = Registrant.lookup_by_session_id(registrant.session_id)
assert updated_registrant.county == 'Douglas'
def test_ab_3_single_invalid_address(app, db_session, client):
registrant = create_registrant(db_session)
with client.session_transaction() as http_session:
http_session['session_id'] = str(registrant.session_id)
form_payload = {
'addr': "123 Fake St",
'city': "FakeTown",
'state': "NA",
'zip': '00000'
}
response = client.post('/ab/address', data=form_payload, follow_redirects=False)
redirect_data = response.data.decode()
assert response.status_code == 302
assert ('/ab/identification' in redirect_data) == True
updated_registrant = db_session.query(Registrant).filter_by(session_id = registrant.session_id).first()
assert updated_registrant.registration_value.get('addr') == '123 Fake St'
assert 'validated_addresses' in updated_registrant.registration_value
assert updated_registrant.registration_value['validated_addresses'] == False
def test_ab_3_with_mail_address(app, db_session, client):
registrant = create_registrant(db_session)
with client.session_transaction() as http_session:
http_session['session_id'] = str(registrant.session_id)
form_payload = {
'addr': "707 Vermont St",
'unit': "Room A",
'city': "Lawrence",
'state': "KANSAS",
'zip': '66044',
'has_mail_addr': True,
'mail_addr': "707 Vermont St",
'mail_unit': "Room B",
'mail_city': "Lawrence",
'mail_state': "KANSAS",
'mail_zip': '66044',
}
response = client.post('/ab/address', data=form_payload, follow_redirects=False)
redirect_data = response.data.decode()
assert response.status_code == 302
assert ('/ab/identification' in redirect_data) == True
updated_registrant = db_session.query(Registrant).filter_by(session_id = registrant.session_id).first()
assert updated_registrant.registration_value.get('addr') == '707 Vermont St'
assert 'validated_addresses' in updated_registrant.registration_value
assert updated_registrant.registration_value['validated_addresses']['current_address']['state'] == 'KS'
assert updated_registrant.registration_value['validated_addresses']['mail_addr']['unit'] == 'RM B'
| 38.512195 | 108 | 0.682499 | 552 | 4,737 | 5.568841 | 0.177536 | 0.0527 | 0.096617 | 0.110605 | 0.833767 | 0.79473 | 0.784971 | 0.766103 | 0.766103 | 0.748861 | 0 | 0.019433 | 0.196116 | 4,737 | 122 | 109 | 38.827869 | 0.787815 | 0 | 0 | 0.578431 | 0 | 0 | 0.169939 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 1 | 0.058824 | false | 0 | 0.009804 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57dedec6856d1172552f63efe478e30ae3f0e2e5 | 38 | py | Python | configs/__init__.py | Longqi-S/keras_cpn | 53d241ecde4bff5073832dfb1ea9c1e08931520e | [
"MIT"
] | 23 | 2018-09-14T08:03:32.000Z | 2021-04-09T07:07:44.000Z | configs/__init__.py | JasonDu1993/keras_cpn | 53d241ecde4bff5073832dfb1ea9c1e08931520e | [
"MIT"
] | 3 | 2018-12-29T04:39:03.000Z | 2022-02-03T16:38:38.000Z | configs/__init__.py | JasonDu1993/keras_cpn | 53d241ecde4bff5073832dfb1ea9c1e08931520e | [
"MIT"
] | 8 | 2018-11-16T02:06:37.000Z | 2021-11-11T09:59:00.000Z | from . import e2e_CPN_ResNet50_FPN_cfg | 38 | 38 | 0.894737 | 7 | 38 | 4.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.078947 | 38 | 1 | 38 | 38 | 0.771429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57e4f6f8454e17a3f6b523a8b0739ccb806c7cb5 | 40,735 | py | Python | model.py | arthurmeyer/Saliency_Detection_Convolutional_Autoencoder | 4d27f24731dc6ab0086d2ee639a237b62d4bbe15 | [
"MIT"
] | 30 | 2017-06-20T02:32:58.000Z | 2021-03-10T02:37:13.000Z | model.py | arthurmeyer/Saliency_Detection_Convolutional_Autoencoder | 4d27f24731dc6ab0086d2ee639a237b62d4bbe15 | [
"MIT"
] | 2 | 2017-06-16T13:04:48.000Z | 2017-06-26T04:59:44.000Z | model.py | arthurmeyer/Saliency_Detection_Convolutional_Autoencoder | 4d27f24731dc6ab0086d2ee639a237b62d4bbe15 | [
"MIT"
] | 6 | 2017-12-04T09:28:48.000Z | 2019-09-03T12:34:37.000Z | """ --------------------------------------------------
author: arthur meyer
email: arthur.meyer.38@gmail.com
status: final
version: v2.0
--------------------------------------------------"""
from __future__ import division
import tensorflow as tf
import numpy as np
class MODEL(object):
"""
Model description:
conv : vgg
deconv : vgg + 1 more
fc layer : 2
loss : flexible
direct
connections : flexible (if yes then 111 110)
edge contrast : flexible
"""
def __init__(self, name, batch_size, learning_rate, wd, concat, l2_loss, penalty, coef):
"""
Args:
name : name of the model (used to create a specific folder to save/load parameters)
batch_size : batch size
learning_rate : learning_rate
wd : weight decay factor
concat : does this model include direct connections?
l2_loss : does this model use l2 loss (if not then cross entropy)
penalty : whether to use the edge contrast penalty
coef : coef for the edge contrast penalty
"""
self.name = 'saliency_' + name
self.losses = 'loss_of_' + self.name
self.losses_decay = 'loss_of_' + self.name +'_decay'
self.batch_size = batch_size
self.learning_rate = learning_rate
self.wd = wd
self.moving_avg_decay = 0.9999
self.concat = concat
self.l2_loss = l2_loss
self.penalty = penalty
self.coef = coef
self.parameters_conv = []
self.parameters_deconv = []
self.deconv = []
with tf.device('/cpu:0'):
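      # All model parameters below are created on the CPU; pinning variables to the CPU
      # like this is a common pattern so they can be shared across devices.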
# conv1_1
with tf.variable_scope(self.name + '_' + 'conv1_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 3, 64), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_conv += [kernel, biases]
# conv1_2
with tf.variable_scope(self.name + '_' + 'conv1_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 64, 64), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_conv += [kernel, biases]
# conv2_1
with tf.variable_scope(self.name + '_' + 'conv2_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 64, 128), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [128], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv2_2
with tf.variable_scope(self.name + '_' + 'conv2_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 128, 128), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [128], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv3_1
with tf.variable_scope(self.name + '_' + 'conv3_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 128, 256), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [256], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv3_2
with tf.variable_scope(self.name + '_' + 'conv3_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 256, 256), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [256], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv3_3
with tf.variable_scope(self.name + '_' + 'conv3_3') as scope:
kernel = tf.get_variable('kernel', (3, 3, 256, 256), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [256], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv4_1
with tf.variable_scope(self.name + '_' + 'conv4_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 256, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_conv += [kernel, biases]
# conv4_2
with tf.variable_scope(self.name + '_' + 'conv4_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_conv += [kernel, biases]
# conv4_3
with tf.variable_scope(self.name + '_' + 'conv4_3') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_conv += [kernel, biases]
# conv5_1
with tf.variable_scope(self.name + '_' + 'conv5_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv5_2
with tf.variable_scope(self.name + '_' + 'conv5_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# conv5_3
with tf.variable_scope(self.name + '_' + 'conv5_3') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses_decay, weight_decay)
tf.add_to_collection(self.losses, weight_decay)
self.parameters_conv += [kernel, biases]
# fc1
with tf.variable_scope(self.name + '_' + 'fc1') as scope:
fc1w = tf.get_variable('fc1w', [7*7*512,4096], initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
fc1b = tf.get_variable('fc1b', [4096], initializer=tf.constant_initializer(0), dtype=tf.float32)
self.parameters_conv += [fc1w, fc1b]
# fc2
with tf.variable_scope(self.name + '_' + 'fc2') as scope:
fc2w = tf.get_variable('fc2w', [4096,4096], initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
fc2b = tf.get_variable('fc2b', [4096], initializer=tf.constant_initializer(0), dtype=tf.float32)
self.parameters_conv += [fc2w, fc2b]
# deconv0
with tf.variable_scope(self.name + '_' + 'deconv0') as scope:
if self.concat:
kernel = tf.get_variable('kernel', (3, 3, 1, 195), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
else:
kernel = tf.get_variable('kernel', (3, 3, 1, 64), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [1], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.deconv += [kernel, biases]
# deconv1_1
with tf.variable_scope(self.name + '_' + 'deconv1_1') as scope:
if self.concat:
kernel = tf.get_variable('kernel', (3, 3, 64, 195), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
else:
kernel = tf.get_variable('kernel', (3, 3, 64, 64), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv1_2
with tf.variable_scope(self.name + '_' + 'deconv1_2') as scope:
if self.concat:
kernel1 = tf.get_variable('kernel1', (3, 3, 64, 64), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
kernel2 = tf.get_variable('kernel2', (3, 3, 64, 387), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(tf.concat(3,[kernel1,kernel2])), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [[kernel1,kernel2], biases]
else:
kernel = tf.get_variable('kernel', (3, 3, 64, 64), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv2_1
with tf.variable_scope(self.name + '_' + 'deconv2_1') as scope:
if self.concat:
kernel1 = tf.get_variable('kernel1', (3, 3, 64, 128), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
kernel2 = tf.get_variable('kernel2', (3, 3, 64, 387), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(tf.concat(3,[kernel1,kernel2])), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [[kernel1,kernel2], biases]
else:
kernel = tf.get_variable('kernel', (3, 3, 64, 128), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [64], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv2_2
with tf.variable_scope(self.name + '_' + 'deconv2_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 128, 128), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [128], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv3_1
with tf.variable_scope(self.name + '_' + 'deconv3_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 128, 256), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [128], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv3_2
with tf.variable_scope(self.name + '_' + 'deconv3_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 256, 256), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [256], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv3_3
with tf.variable_scope(self.name + '_' + 'deconv3_3') as scope:
kernel = tf.get_variable('kernel', (3, 3, 256, 256), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [256], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv4_1
with tf.variable_scope(self.name + '_' + 'deconv4_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 256, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [256], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv4_2
with tf.variable_scope(self.name + '_' + 'deconv4_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv4_3
with tf.variable_scope(self.name + '_' + 'deconv4_3') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv5_1
with tf.variable_scope(self.name + '_' + 'deconv5_1') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv5_2
with tf.variable_scope(self.name + '_' + 'deconv5_2') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# deconv5_3
with tf.variable_scope(self.name + '_' + 'deconv5_3') as scope:
kernel = tf.get_variable('kernel', (3, 3, 512, 512), initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
biases = tf.get_variable('biases', [512], initializer=tf.constant_initializer(0), dtype=tf.float32)
weight_decay = tf.mul(tf.nn.l2_loss(kernel), self.wd)
tf.add_to_collection(self.losses, weight_decay)
tf.add_to_collection(self.losses_decay, weight_decay)
self.parameters_deconv += [kernel, biases]
# de_fc1
with tf.variable_scope(self.name + '_' + 'defc1') as scope:
fc1w = tf.get_variable('fc1w', [4096,7*7*512], initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
fc1b = tf.get_variable('fc1b', [7*7*512], initializer=tf.constant_initializer(0), dtype=tf.float32)
self.parameters_deconv += [fc1w, fc1b]
# de_fc2
with tf.variable_scope(self.name + '_' + 'defc2') as scope:
fc2w = tf.get_variable('fc2w', [4096,4096], initializer=tf.truncated_normal_initializer(stddev=1e-1, dtype=tf.float32), dtype=tf.float32)
fc2b = tf.get_variable('fc2b', [4096], initializer=tf.constant_initializer(0), dtype=tf.float32)
self.parameters_deconv += [fc2w, fc2b]
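# Note: parameters_conv and parameters_deconv are flat [kernel, bias, kernel,
# bias, ...] lists, so infer() below reads kernels at even offsets and biases
# at odd offsets; the deconv0 pair lives in the separate self.deconv list.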
def display_info(self, verbosity):
"""
Display information about this model
Args:
verbosity : level of detail to display
"""
print('------------------------------------------------------')
print('This model is %s' % (self.name))
print('------------------------------------------------------')
if verbosity > 0:
print('Learning rate: %0.8f -- Weight decay: %0.8f -- Cross entropy loss: %r' % (self.learning_rate , self.wd, not self.l2_loss))
print('------------------------------------------------------')
print('Direct connections: %r' % (self.concat))
print('------------------------------------------------------')
print('Edge contrast penalty: %r -- coefficient %0.5f' % (self.penalty, self.coef))
print('------------------------------------------------------\n')
def infer(self, images, inter_layer = False, arithmetic = None, debug = False):
"""
Return saliency map from given images
Args:
images : input images
inter_layer : whether we want to return the middle layer code
arithmetic : type of special operation on the middle layer encoding (1 is add, 2 is subtract, 3 is linear combination)
debug : whether to return an extra value used for debugging (control value)
Returns:
out : saliency maps of the input
control_value : some value used to debug training
inter_layer_out : value of the middle layer
"""
control_value = None
inter_layer_out = None
if self.concat:
detail = []
detail_bis = []
detail += [tf.image.resize_images(images,[112,112])]
detail_bis += [images]
# conv1_1
with tf.variable_scope(self.name + '_' + 'conv1_1') as scope:
conv = tf.nn.conv2d(images, self.parameters_conv[0], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[1])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
detail += [tf.image.resize_images(norm,[112,112])]
detail_bis += [norm]
# conv1_2
with tf.variable_scope(self.name + '_' + 'conv1_2') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[2], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[3])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
detail += [tf.image.resize_images(norm,[112,112])]
detail_bis += [norm]
# pool1
pool1 = tf.nn.max_pool(norm,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME',name='pool1')
# conv2_1
with tf.variable_scope(self.name + '_' + 'conv2_1') as scope:
conv = tf.nn.conv2d(pool1, self.parameters_conv[4], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[5])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
detail += [norm]
# conv2_2
with tf.variable_scope(self.name + '_' + 'conv2_2') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[6], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[7])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
detail += [norm]
# pool2
pool2 = tf.nn.max_pool(norm,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME',name='pool2')
# conv3_1
with tf.variable_scope(self.name + '_' + 'conv3_1') as scope:
conv = tf.nn.conv2d(pool2, self.parameters_conv[8], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[9])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# conv3_2
with tf.variable_scope(self.name + '_' + 'conv3_2') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[10], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[11])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# conv3_3
with tf.variable_scope(self.name + '_' + 'conv3_3') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[12], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[13])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# pool3
pool3 = tf.nn.max_pool(norm, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3')
# conv4_1
with tf.variable_scope(self.name + '_' + 'conv4_1') as scope:
conv = tf.nn.conv2d(pool3, self.parameters_conv[14], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[15])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# conv4_2
with tf.variable_scope(self.name + '_' + 'conv4_2') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[16], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[17])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# conv4_3
with tf.variable_scope(self.name + '_' + 'conv4_3') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[18], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[19])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# pool4
pool4 = tf.nn.max_pool(norm, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4')
# conv5_1
with tf.variable_scope(self.name + '_' + 'conv5_1') as scope:
conv = tf.nn.conv2d(pool4, self.parameters_conv[20], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[21])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# conv5_2
with tf.variable_scope(self.name + '_' + 'conv5_2') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[22], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[23])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# conv5_3
with tf.variable_scope(self.name + '_' + 'conv5_3') as scope:
conv = tf.nn.conv2d(norm, self.parameters_conv[24], [1, 1, 1, 1], padding='SAME')
out = tf.nn.bias_add(conv, self.parameters_conv[25])
relu = tf.nn.relu(out)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# pool5
pool5 = tf.nn.max_pool(norm,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME',name='pool5')
# fc1
with tf.variable_scope(self.name + '_' + 'fc1') as scope:
pool5_flat = tf.reshape(pool5, [self.batch_size, -1])
fc1l = tf.nn.bias_add(tf.matmul(pool5_flat, self.parameters_conv[26]), self.parameters_conv[27])
fc1 = tf.nn.relu(fc1l)
# fc2
with tf.variable_scope(self.name + '_' + 'fc2') as scope:
fc2l = tf.nn.bias_add(tf.matmul(fc1, self.parameters_conv[28]), self.parameters_conv[29])
fc2 = tf.nn.relu(fc2l)
if inter_layer:
inter_layer_out = fc2
if arithmetic is not None:
if arithmetic == 3:
im1 = tf.squeeze(tf.split(0,self.batch_size,fc2)[0])
im2 = tf.squeeze(tf.split(0,self.batch_size,fc2)[1])
vec = tf.sub(im2,im1)
liste = []
for i in range(self.batch_size):
liste.append(im1 + i / 15.0 * vec)  # float division so the batch interpolates smoothly from im1 towards im2
fc2 = tf.pack(liste)
elif arithmetic == 2:
norm = tf.sqrt(tf.reduce_sum(tf.square(fc2), 1, keep_dims=True))
fc2 = tf.div(fc2,norm)
im1 = tf.squeeze(tf.split(0,self.batch_size,fc2)[0])
fc2 = tf.sub(fc2,im1)
elif arithmetic == 1:
norm = tf.sqrt(tf.reduce_sum(tf.square(fc2), 1, keep_dims=True))
fc2 = tf.div(fc2,norm)
im1 = tf.squeeze(tf.split(0,self.batch_size,fc2)[0])
fc2 = tf.add(fc2,im1)
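# Recap of the arithmetic modes handled above: mode 3 interpolates across the
# batch from the first code towards the second; modes 1 and 2 L2-normalise the
# codes and then add or subtract the first code from every code in the batch.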
# de-fc2
with tf.variable_scope(self.name + '_' + 'defc2') as scope:
fc2l = tf.nn.bias_add(tf.matmul(fc2, self.parameters_deconv[28]), self.parameters_deconv[29])
fc2 = tf.nn.relu(fc2l)
# de-fc1
with tf.variable_scope(self.name + '_' + 'defc1') as scope:
fc1l = tf.nn.bias_add(tf.matmul(fc2, self.parameters_deconv[26]), self.parameters_deconv[27])
fc1 = tf.nn.relu(fc1l)
pool5_flat = tf.reshape(fc1, pool5.get_shape())
# deconv5_3
with tf.variable_scope(self.name + '_' + 'deconv5_3') as scope:
deconv = tf.nn.conv2d_transpose(pool5_flat, self.parameters_deconv[24], (self.batch_size,14,14,512), strides= [1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[25])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv5_2
with tf.variable_scope(self.name + '_' + 'deconv5_2') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[22], (self.batch_size,14,14,512), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[23])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv5_1
with tf.variable_scope(self.name + '_' + 'deconv5_1') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[20], (self.batch_size,14,14,512), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[21])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv4_3
with tf.variable_scope(self.name + '_' + 'deconv4_3') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[18], (self.batch_size,28,28,512), strides= [1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[19])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv4_2
with tf.variable_scope(self.name + '_' + 'deconv4_2') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[16], (self.batch_size,28,28,512), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[17])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv4_1
with tf.variable_scope(self.name + '_' + 'deconv4_1') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[14], (self.batch_size,28,28,256), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[15])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv3_3
with tf.variable_scope(self.name + '_' + 'deconv3_3') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[12], (self.batch_size,56,56,256), strides= [1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[13])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv3_2
with tf.variable_scope(self.name + '_' + 'deconv3_2') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[10], (self.batch_size,56,56,256), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[11])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
# deconv3_1
with tf.variable_scope(self.name + '_' + 'deconv3_1') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[8], (self.batch_size,56,56,128), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[9])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
add = tf.concat(3,detail)
add_bis = tf.concat(3,detail_bis)
if arithmetic:
add = tf.zeros_like(add)
add_bis = tf.zeros_like(add_bis)
# deconv2_2
with tf.variable_scope(self.name + '_' + 'deconv2_2') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[6], (self.batch_size,112,112,128), strides= [1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[7])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
norm = tf.concat(3,[norm,add])
# deconv2_1
with tf.variable_scope(self.name + '_' + 'deconv2_1') as scope:
deconv = tf.nn.conv2d_transpose(norm, tf.concat(3,self.parameters_deconv[4]), (self.batch_size,112,112,64), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[5])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
norm = tf.concat(3,[norm,add])
# deconv1_2
with tf.variable_scope(self.name + '_' + 'deconv1_2') as scope:
deconv = tf.nn.conv2d_transpose(norm, tf.concat(3,self.parameters_deconv[2]), (self.batch_size,224,224,64), strides= [1, 2, 2, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[3])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
norm = tf.concat(3,[norm,add_bis])
# deconv1_1
with tf.variable_scope(self.name + '_' + 'deconv1_1') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.parameters_deconv[0], (self.batch_size,224,224,64), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.parameters_deconv[1])
relu = tf.nn.relu(bias)
norm = tf.nn.lrn(relu, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
if self.concat:
norm = tf.concat(3,[norm,add_bis])
# deconv0
with tf.variable_scope(self.name + '_' + 'deconv0') as scope:
deconv = tf.nn.conv2d_transpose(norm, self.deconv[0], (self.batch_size,224,224,1), strides= [1, 1, 1, 1], padding='SAME')
bias = tf.nn.bias_add(deconv, self.deconv[1])
relu = tf.sigmoid(bias)
out = tf.squeeze(relu)
if debug:
control_value = tf.reduce_mean(relu)
return out, control_value, inter_layer_out
def loss(self, guess, labels, loss_bis = False):
"""
Return the loss for given saliency map with corresponding ground truth
Args:
guess : input saliency map
labels : corresponding ground truth
loss_bis : whether this is the auxiliary loss (used for validation while training) rather than the main one
Returns:
loss_out : the loss value
"""
if self.l2_loss:
reconstruction = tf.reduce_sum(tf.square(guess - labels), [1,2])
reconstruction_mean = tf.reduce_mean(reconstruction)
if not loss_bis:
tf.add_to_collection(self.losses, reconstruction_mean)
else:
guess_flat = tf.reshape(guess, [self.batch_size, -1])
labels_flat = tf.reshape(labels, [self.batch_size, -1])
zero = tf.fill(tf.shape(guess_flat), 1e-7)
one = tf.fill(tf.shape(guess_flat), 1 - 1e-7)
ret_1 = tf.select(guess_flat > 1e-7, guess_flat, zero)
ret_2 = tf.select(ret_1 < 1 - 1e-7, ret_1, one)
loss = tf.reduce_mean(- labels_flat * tf.log(ret_2) - (1. - labels_flat) * tf.log(1. - ret_2))
if not loss_bis:
tf.add_to_collection(self.losses, loss)
elif loss_bis:
tf.add_to_collection(self.losses_decay, loss)
if self.penalty and not loss_bis:
labels_new = tf.reshape(labels, [self.batch_size, 224, 224, 1])
guess_new = tf.reshape(guess, [self.batch_size, 224, 224, 1])
filter_x = tf.constant(np.array([[0,0,0] , [-1,2,-1], [0,0,0]]).reshape((3,3,1,1)), dtype=tf.float32)
filter_y = tf.constant(np.array([[0,-1,0] , [0,2,0], [0,-1,0]]).reshape((3,3,1,1)), dtype=tf.float32)
gradient_x = tf.nn.conv2d(labels_new, filter_x, [1,1,1,1], padding = "SAME")
gradient_y = tf.nn.conv2d(labels_new, filter_y, [1,1,1,1], padding = "SAME")
result_x = tf.greater(gradient_x,0)
result_y = tf.greater(gradient_y,0)
keep = tf.cast(tf.logical_or(result_x, result_y), tf.float32)  # edge mask
filter_neighbor_1 = tf.constant(np.array([[0,0,0], [0,1,-1], [0,0,0]]).reshape((3,3,1)), dtype=tf.float32)
filter_neighbor_2 = tf.constant(np.array([[0,-1,0], [0,1,0], [0,0,0]]).reshape((3,3,1)), dtype=tf.float32)
filter_neighbor_3 = tf.constant(np.array([[0,0,0], [-1,1,0], [0,0,0]]).reshape((3,3,1)), dtype=tf.float32)
filter_neighbor_4 = tf.constant(np.array([[0,0,0], [0,1,0], [0,-1,0]]).reshape((3,3,1)), dtype=tf.float32)
filter_neighbor = tf.pack([filter_neighbor_1, filter_neighbor_2, filter_neighbor_3, filter_neighbor_4], axis=3)
compare = tf.square(keep * tf.nn.conv2d(guess_new, filter_neighbor, [1,1,1,1], padding="SAME"))
compare_m = tf.nn.conv2d(labels_new, filter_neighbor, [1,1,1,1], padding="SAME")
new_compare_m = tf.select(tf.equal(compare_m, 0), tf.ones([self.batch_size,224,224,4]), -1*tf.ones([self.batch_size,224,224,4]))  # 0 means the neighbouring labels match (minimise the difference); otherwise they differ (maximise it)
final_compare_m = keep * new_compare_m
score_ret = tf.reduce_sum(final_compare_m * compare, [1,2,3]) / (4*(tf.reduce_sum(keep,[1,2,3])+1e-7))
score = self.coef * tf.reduce_mean(score_ret)
tf.add_to_collection(self.losses, score)
if loss_bis:
loss_out = tf.add_n(tf.get_collection(self.losses_decay))
else:
loss_out = tf.add_n(tf.get_collection(self.losses))
return loss_out
def train(self, loss, global_step):
"""
Return a training step for the tensorflow graph
Args:
loss : loss to do sgd on
global_step : which step are we at
"""
opt = tf.train.AdamOptimizer(self.learning_rate)
grads = opt.compute_gradients(loss)
apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
variable_averages = tf.train.ExponentialMovingAverage(self.moving_avg_decay, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
with tf.control_dependencies([apply_gradient_op, variables_averages_op]):
train_op = tf.no_op(name='train')
return train_op
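# A minimal driver sketch for the class above (not part of the original file;
# `model`, `images_ph`, `labels_ph`, `batch_x` and `batch_y` are hypothetical
# names, and the TF 0.x-era API used throughout the file is assumed):
#
#   images_ph = tf.placeholder(tf.float32, [model.batch_size, 224, 224, 3])
#   labels_ph = tf.placeholder(tf.float32, [model.batch_size, 224, 224])
#   out, _, _ = model.infer(images_ph)
#   loss = model.loss(out, labels_ph)
#   global_step = tf.Variable(0, trainable=False)
#   train_op = model.train(loss, global_step)
#   with tf.Session() as sess:
#       sess.run(tf.initialize_all_variables())  # TF 0.x initializer
#       sess.run(train_op, feed_dict={images_ph: batch_x, labels_ph: batch_y})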
# ===== file: 8.py (repo: juandarr/ProjectEuler, license: MIT) =====
"""
Finds the biggest product of n consecutive digits from a string of numbers
Author: Juan Rios
"""
d = "731671765313306249192251196744265747423553491949349698352031277450"\
"6326239578318016984801869478851843858615607891129494954595017379583"\
"3195285320880551112540698747158523863050715693290963295227443043557"\
"6689664895044524452316173185640309871112172238311362229893423380308"\
"1353362766142828064444866452387493035890729629049156044077239071381"\
"0515859307960866701724271218839987979087922749219016997208880937766"\
"5727333001053367881220235421809751254540594752243525849077116705560"\
"1360483958644670632441572215539753697817977846174064955149290862569"\
"3219784686224828397224137565705605749026140797296865241453510047482"\
"1663704844031998900088952434506585412275886668811642717147992444292"\
"8230863465674813919123162824586178664583591245665294765456828489128"\
"8314260769004224219022671055626321111109370544217506941658960408071"\
"9840385096245544436298123098787992724428490918884580156166097919133"\
"8754992005240636899125607176060588611646710940507754100225698315520"\
"005593572972571636269561882670428252483600823257530420752963450"
# Finds the biggest product of n consecutive digits
def biggest_product(number, n):
max_number = 0
index = 0
while (index < len(number)-n+1):
num = 1
for digit in number[index:index+n]:
num *= int(digit)
if max_number < num:
max_number = num
index += 1
return max_number
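# Worked example (illustrative, not in the original file): with number="12345"
# and n=2 the windows are 1*2, 2*3, 3*4 and 4*5, so biggest_product returns 20.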
if __name__ == "__main__":
n = 13
print('The biggest product of {0} consecutive digits from the string is {1}'.format(n, biggest_product(d,n)))
# ===== file: states/base.py (repo: ILoveAndLikePizza/lichtkrant, license: MIT) =====
from threading import Thread
from sys import stdout
from shutil import which
class BaseState(Thread):
name = "base"
def __init__(self):
super().__init__()
self.killed = False
self.on_pi = which("rpi-update")
def kill(self):
self.killed = True
def output_image(self, pil_image):
stdout.buffer.write(pil_image.tobytes())
def output_frame(self, frame):
stdout.buffer.write(frame)
@staticmethod
def text(text):
chars = {
"0": [[0, 1, 0], [1, 0, 1], [1, 0, 1], [1, 0, 1], [0, 1, 0]],
"1": [[0, 1, 0], [1, 1, 0], [0, 1, 0], [0, 1, 0], [1, 1, 1]],
"2": [[1, 1, 1], [0, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 1]],
"3": [[1, 1, 1], [0, 0, 1], [1, 1, 1], [0, 0, 1], [1, 1, 1]],
"4": [[1, 0, 1], [1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 0, 1]],
"5": [[1, 1, 1], [1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 1]],
"6": [[1, 1, 1], [1, 0, 0], [1, 1, 1], [1, 0, 1], [1, 1, 1]],
"7": [[1, 1, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1]],
"8": [[1, 1, 1], [1, 0, 1], [1, 1, 1], [1, 0, 1], [1, 1, 1]],
"9": [[1, 1, 1], [1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 1, 1]],
"a": [[0, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 1], [1, 0, 1]],
"b": [[1, 1, 0], [1, 0, 1], [1, 1, 0], [1, 0, 1], [1, 1, 0]],
"c": [[1, 1, 1], [1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 1, 1]],
"d": [[1, 1, 0], [1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 1, 0]],
"e": [[1, 1, 1], [1, 0, 0], [1, 1, 0], [1, 0, 0], [1, 1, 1]],
"f": [[1, 1, 1], [1, 0, 0], [1, 1, 0], [1, 0, 0], [1, 0, 0]],
"g": [[0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 0, 1], [0, 1, 1]],
"h": [[1, 0, 1], [1, 0, 1], [1, 1, 1], [1, 0, 1], [1, 0, 1]],
"i": [[1, 1, 1], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 1, 1]],
"j": [[0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1], [1, 1, 1]],
"k": [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 0, 1], [1, 0, 1]],
"l": [[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 1, 1]],
"m": [[1, 0, 1], [1, 1, 1], [1, 1, 1], [1, 0, 1], [1, 0, 1]],
"n": [[1, 1, 0], [1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1]],
"o": [[1, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 1, 1]],
"p": [[1, 1, 1], [1, 0, 1], [1, 1, 1], [1, 0, 0], [1, 0, 0]],
"q": [[1, 1, 1], [1, 0, 1], [1, 0, 1], [1, 1, 0], [0, 0, 1]],
"r": [[1, 1, 0], [1, 0, 1], [1, 1, 0], [1, 0, 1], [1, 0, 1]],
"s": [[0, 1, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]],
"t": [[1, 1, 1], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]],
"u": [[1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 1, 1]],
"v": [[1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1], [0, 1, 0]],
"w": [[1, 0, 1], [1, 0, 1], [1, 1, 1], [1, 1, 1], [1, 0, 1]],
"x": [[1, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 1], [1, 0, 1]],
"y": [[1, 0, 1], [1, 0, 1], [0, 1, 1], [0, 0, 1], [0, 1, 0]],
"z": [[1, 1, 1], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]],
" ": [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]],
".": [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 1, 0]],
"!": [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 0, 0], [0, 1, 0]]
}
buffer = [[], [], [], [], []]
for char_index, _ in enumerate(text):
char_buffer = chars.get(
str(text[char_index].lower()), [[], [], [], [], []])
for row_index in range(0, 5):
buffer[row_index].extend(char_buffer[row_index])
if char_index < len(text) - 1:
buffer[row_index].append(0)
return buffer
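# Illustrative example (not in the original file): BaseState.text("hi") returns
# five rows of 0/1 pixels -- the 3x5 glyphs for 'h' and 'i' joined by a single
# blank separator column, e.g. row 0 is [1, 0, 1, 0, 1, 1, 1].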
def flatten(self, s):
if s == []:
return s
if isinstance(s[0], list):
return self.flatten(s[0]) + self.flatten(s[1:])
return s[:1] + self.flatten(s[1:])
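# Illustrative example (not in the original file):
# self.flatten([[1, 0], [1]]) == [1, 0, 1], i.e. nested pixel rows collapse
# into one flat list.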
# ===== file: livemark/plugins/task/__init__.py (repo: gabrielbdornas/livemark, license: MIT) =====
from .plugin import TaskPlugin
# ===== file: pygpulab/__init__.py (repo: grnydawn/pygpulab, license: MIT) =====
from .main import GPULab
# ===== file: Solutions/Training/Lesson_04/__init__.py (repo: dev-11/codility-solutions, license: MIT) =====
from .frog_river_one import solution as frog_river_one
from .perm_check import solution as perm_check
from .missing_integer import solution as missing_integer
from .max_counters import solution as max_counters
# ===== file: openapi_netdisco/models/__init__.py (repo: mksoska/openapi-client-netdisco, license: MIT) =====
# coding: utf-8
# flake8: noqa
"""
App::Netdisco
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: 2.050003
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
# import models into model package
from openapi_netdisco.models.address import Address
from openapi_netdisco.models.api_key import ApiKey
from openapi_netdisco.models.device import Device
from openapi_netdisco.models.port import Port
from openapi_netdisco.models.port_utilization import PortUtilization
from openapi_netdisco.models.vlan import Vlan
| 29.347826 | 124 | 0.804444 | 90 | 675 | 5.888889 | 0.477778 | 0.124528 | 0.215094 | 0.283019 | 0.109434 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.128889 | 675 | 22 | 125 | 30.681818 | 0.880952 | 0.426667 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# ===== file: hack/transistors/data/make_part_by_doc.py (repo: lukehsiao/lctes-hack, license: MIT) =====
import csv
import pickle
parts_by_doc = dict()

# Both gold CSVs share the same layout: document name in column 0, part number
# in column 2; accumulate the part numbers per (upper-cased) document name.
for gold_csv in ("dev/dev_gold.csv", "test/test_gold.csv"):
    with open(gold_csv) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=",")
        for row in csv_reader:
            doc_name = row[0].upper()
            part_num = row[2]
            if doc_name not in parts_by_doc:
                parts_by_doc[doc_name] = set()
            parts_by_doc[doc_name].add(part_num)

print(parts_by_doc)
with open("parts_by_doc_new.pkl", "wb") as pkl_file:
    pickle.dump(parts_by_doc, pkl_file)
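# Reading the pickle back (a minimal sketch, not part of the original file):
#
#   with open("parts_by_doc_new.pkl", "rb") as pkl_file:
#       parts_by_doc = pickle.load(pkl_file)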
# ===== file: aos_sw_api/syslog/__init__.py (repo: KennethSoelberg/AOS-Switch, license: MIT) =====
from ._syslog import Syslog
# ===== file: Django2YingOps/conf/ibm.py (repo: xiaoqying/YingOps, license: Apache-2.0) =====
def return_ibm_list():
    ibm_ip_list = ['192.168.127.71', '192.168.127.72', '192.168.127.73', '192.168.127.74', '192.168.101.1',
                   '192.168.101.2', '192.168.101.3', '192.168.101.4', '192.168.101.5', '192.168.101.6']
    return ibm_ip_list
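# Illustrative usage (not in the original file): the helper returns the ten
# static IBM host IPs, so len(return_ibm_list()) == 10.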
# ===== file: src/genie/libs/parser/iosxe/tests/ShowCryptoMibIpsecFlowmibTunnel/cli/equal/golden_1_expected.py (repo: ykoehler/genieparser, license: Apache-2.0) =====
expected_output = {
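# Golden (expected) parser output: one entry per VRF name, then one entry per
# tunnel index, mirroring the per-tunnel counters that the
# ShowCryptoMibIpsecFlowmibTunnel parser extracts from the device CLI.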
'ipsec_flowmib_tunnel': {
'total_vrf': 1,
'CC-INTERNET': {
'2': {
'out_sa_uncompress_algorithm': 'None',
'out_octets': 195339,
'lifetime_kb': 4608000,
'no_of_refresh': 0,
'out_drops': 0,
'out_packets': 3218,
'in_drops': 0,
'lifetime_sec': 3600,
'lifetime_threshold_kb': 64,
'lifetime_threshold_sec': 10,
'out_encryption_failures': 0,
'out_auth_failures': 0,
'in_sa_encrypt_algorithm': 'aes',
'index': '18',
'ipsec_keying': 'IKE',
'in_replay_drops': 0,
'local_address': '1.1.0.100',
'decompressed_octets': 386520,
'in_authentications': 3213,
'active_time': '00:05:49',
'current_sa': 2,
'in_sa_ah_auth_algorithm': 'None',
'in_sa_dh_group': 'None',
'in_sa_uncompress_algorithm': 'None',
'out_sa_dh_group': 'None',
'out_authentications': 3218,
'in_sa_esp_auth_algorithm': 'None',
'remote_address': '1.1.0.102',
'out_sa_ah_auth_algorithm': 'None',
'in_packets': 3213,
'in_decrypts': 3213,
'out_encryptions': 3218,
'expired_sa': 0,
'encap_mode': 2,
'in_octets': 386520,
'compressed_octets': 0,
'out_sa_encrypt_algorithm': 'aes',
'out_uncompressed_octets': 195339,
'in_decrypt_failures': 0,
'out_sa_esp_auth_algorithm': 'None',
'in_auth_failures': 0,
'decompressed_octets_1': 0,
'out_uncompressed_octets_1': 0
},
'1': {
'out_sa_uncompress_algorithm': 'None',
'out_octets': 194700,
'lifetime_kb': 4608000,
'no_of_refresh': 0,
'out_drops': 0,
'out_packets': 3222,
'in_drops': 0,
'lifetime_sec': 3600,
'lifetime_threshold_kb': 64,
'lifetime_threshold_sec': 10,
'out_encryption_failures': 0,
'out_auth_failures': 0,
'in_sa_encrypt_algorithm': 'aes',
'index': '17',
'ipsec_keying': 'IKE',
'in_replay_drops': 0,
'local_address': '1.1.0.100',
'decompressed_octets': 386160,
'in_authentications': 3210,
'active_time': '00:05:50',
'current_sa': 2,
'in_sa_ah_auth_algorithm': 'None',
'in_sa_dh_group': 'None',
'in_sa_uncompress_algorithm': 'None',
'out_sa_dh_group': 'None',
'out_authentications': 3222,
'in_sa_esp_auth_algorithm': 'None',
'remote_address': '1.1.0.101',
'out_sa_ah_auth_algorithm': 'None',
'in_packets': 3210,
'in_decrypts': 3210,
'out_encryptions': 3222,
'expired_sa': 0,
'encap_mode': 2,
'in_octets': 386160,
'compressed_octets': 0,
'out_sa_encrypt_algorithm': 'aes',
'out_uncompressed_octets': 194700,
'in_decrypt_failures': 0,
'out_sa_esp_auth_algorithm': 'None',
'in_auth_failures': 0,
'decompressed_octets_1': 0,
'out_uncompressed_octets_1': 0
}
}
}
}
| 39.846939 | 55 | 0.445839 | 356 | 3,905 | 4.441011 | 0.219101 | 0.098672 | 0.086022 | 0.072106 | 0.827324 | 0.827324 | 0.827324 | 0.780519 | 0.703352 | 0.703352 | 0 | 0.091865 | 0.439693 | 3,905 | 97 | 56 | 40.257732 | 0.630713 | 0 | 0 | 0.618557 | 0 | 0 | 0.406658 | 0.169526 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# ===== file: tests/integration/test_schema.py (repo: dswiecki/karapace, license: Apache-2.0) =====
"""
karapace - schema tests
Copyright (c) 2019 Aiven Ltd
See LICENSE for details
"""
from http import HTTPStatus
from kafka import KafkaProducer
from karapace import config
from karapace.rapu import is_success
from karapace.schema_registry_apis import KarapaceSchemaRegistry, SchemaErrorMessages
from karapace.utils import Client
from tests.utils import (
create_field_name_factory,
create_schema_name_factory,
create_subject_name_factory,
KafkaServers,
repeat_until_successful_request,
)
from typing import List, Tuple
import json as jsonlib
import os
import pytest
import requests
import ujson
baseurl = "http://localhost:8081"
@pytest.mark.parametrize("trail", ["", "/"])
async def test_union_to_union(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_union_to_union-{trail}")
subject_1 = subject_name_factory()
res = await registry_async_client.put(f"config/{subject_1}{trail}", json={"compatibility": "BACKWARD"})
assert res.status == 200
init_schema = {"name": "init", "type": "record", "fields": [{"name": "inner", "type": ["string", "int"]}]}
evolved = {"name": "init", "type": "record", "fields": [{"name": "inner", "type": ["null", "string"]}]}
evolved_compatible = {
"name": "init",
"type": "record",
"fields": [
{
"name": "inner",
"type": [
"int",
"string",
{"type": "record", "name": "foobar_fields", "fields": [{"name": "foo", "type": "string"}]},
],
}
],
}
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}", json={"schema": ujson.dumps(init_schema)}
)
assert res.status == 200
assert "id" in res.json()
res = await registry_async_client.post(f"subjects/{subject_1}/versions{trail}", json={"schema": ujson.dumps(evolved)})
assert res.status == 409
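# 409 because BACKWARD compatibility requires the new schema to read old data:
# the evolved union drops "int", so existing records holding ints would break.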
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}", json={"schema": ujson.dumps(evolved_compatible)}
)
assert res.status == 200
# fw compat check
subject_2 = subject_name_factory()
res = await registry_async_client.put(f"config/{subject_2}{trail}", json={"compatibility": "FORWARD"})
assert res.status == 200
res = await registry_async_client.post(
f"subjects/{subject_2}/versions{trail}", json={"schema": ujson.dumps(evolved_compatible)}
)
assert res.status == 200
assert "id" in res.json()
res = await registry_async_client.post(f"subjects/{subject_2}/versions{trail}", json={"schema": ujson.dumps(evolved)})
assert res.status == 409
res = await registry_async_client.post(
f"subjects/{subject_2}/versions{trail}", json={"schema": ujson.dumps(init_schema)}
)
assert res.status == 200
@pytest.mark.parametrize("trail", ["", "/"])
async def test_missing_subject_compatibility(registry_async_client: Client, trail: str) -> None:
subject = create_subject_name_factory(f"test_missing_subject_compatibility-{trail}")()
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps({"type": "string"})}
)
assert res.status_code == 200, f"{res} {subject}"
res = await registry_async_client.get(f"config/{subject}{trail}")
assert res.status == 404, f"{res} {subject}"
res = await registry_async_client.get(f"config/{subject}{trail}?defaultToGlobal=false")
assert res.status == 404, f"subject should have no compatibility when not defaulting to global: {res.json()}"
res = await registry_async_client.get(f"config/{subject}{trail}?defaultToGlobal=true")
assert res.status == 200, f"subject should have a compatibility when not defaulting to global: {res.json()}"
assert "compatibilityLevel" in res.json(), res.json()
@pytest.mark.parametrize("trail", ["", "/"])
async def test_record_union_schema_compatibility(registry_async_client: Client, trail: str) -> None:
subject = create_subject_name_factory(f"test_record_union_schema_compatibility-{trail}")()
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": "BACKWARD"})
assert res.status == 200
original_schema = {
"name": "bar",
"namespace": "foo",
"type": "record",
"fields": [
{
"name": "foobar",
"type": [
{
"type": "array",
"name": "foobar_items",
"items": {"type": "record", "name": "foobar_fields", "fields": [{"name": "foo", "type": "string"}]},
}
],
}
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(original_schema)}
)
assert res.status == 200
assert "id" in res.json()
evolved_schema = {
"name": "bar",
"namespace": "foo",
"type": "record",
"fields": [
{
"name": "foobar",
"type": [
{
"type": "array",
"name": "foobar_items",
"items": {
"type": "record",
"name": "foobar_fields",
"fields": [
{"name": "foo", "type": "string"},
{"name": "bar", "type": ["null", "string"], "default": None},
],
},
}
],
}
],
}
res = await registry_async_client.post(
f"compatibility/subjects/{subject}/versions/latest{trail}",
json={"schema": ujson.dumps(evolved_schema)},
)
assert res.status == 200
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(evolved_schema)}
)
assert res.status == 200
assert "id" in res.json()
# Check that we can delete the field as well
res = await registry_async_client.post(
f"compatibility/subjects/{subject}/versions/latest{trail}",
json={"schema": ujson.dumps(original_schema)},
)
assert res.status == 200
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(original_schema)}
)
assert res.status == 200
assert "id" in res.json()
@pytest.mark.parametrize("trail", ["", "/"])
async def test_record_nested_schema_compatibility(registry_async_client: Client, trail: str) -> None:
subject = create_subject_name_factory(f"test_record_nested_schema_compatibility-{trail}")()
res = await registry_async_client.put("config", json={"compatibility": "BACKWARD"})
assert res.status == 200
schema = {
"type": "record",
"name": "Objct",
"fields": [
{
"name": "first_name",
"type": "string",
},
{
"name": "nested_record_name",
"type": {
"name": "first_name_record",
"type": "record",
"fields": [
{
"name": "first_name",
"type": "string",
},
],
},
},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert "id" in res.json()
# change string to integer in the nested record, should fail
schema["fields"][1]["type"]["fields"][0]["type"] = "int"
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 409
@pytest.mark.parametrize("trail", ["", "/"])
async def test_compatibility_endpoint(registry_async_client: Client, trail: str) -> None:
"""
Creates a subject with a schema.
Calls compatibility/subjects/{subject}/versions/latest endpoint
and checks that it returns is_compatible=true for a compatible new schema
and is_compatible=false for an incompatible one.
"""
subject = create_subject_name_factory(f"test_compatibility_endpoint-{trail}")()
schema_name = create_schema_name_factory(f"test_compatibility_endpoint_{trail}")()
schema = {
"type": "record",
"name": schema_name,
"fields": [
{
"name": "age",
"type": "int",
},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": "BACKWARD"})
assert res.status == 200
# replace int with long
schema["fields"] = [{"type": "long", "name": "age"}]
res = await registry_async_client.post(
f"compatibility/subjects/{subject}/versions/latest{trail}",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert res.json() == {"is_compatible": True}
# replace int with string
schema["fields"] = [{"type": "string", "name": "age"}]
res = await registry_async_client.post(
f"compatibility/subjects/{subject}/versions/latest{trail}",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert res.json() == {"is_compatible": False}
@pytest.mark.parametrize("trail", ["", "/"])
async def test_type_compatibility(registry_async_client: Client, trail: str) -> None:
def _test_cases():
# Generate FORWARD, BACKWARD and FULL tests for primitive types
_CONVERSIONS = {
"int": {
"int": (True, True),
"long": (False, True),
"float": (False, True),
"double": (False, True),
},
"bytes": {
"bytes": (True, True),
"string": (True, True),
},
"boolean": {
"boolean": (True, True),
},
}
_INVALID_CONVERSIONS = [
("int", "boolean"),
("int", "string"),
("int", "bytes"),
("long", "boolean"),
("long", "string"),
("long", "bytes"),
("float", "boolean"),
("float", "string"),
("float", "bytes"),
("double", "boolean"),
("double", "string"),
("double", "bytes"),
]
for source, targets in _CONVERSIONS.items():
for target, (forward, backward) in targets.items():
yield "FORWARD", source, target, forward
yield "BACKWARD", source, target, backward
yield "FULL", target, source, forward and backward
if source != target:
yield "FORWARD", target, source, backward
yield "BACKWARD", target, source, forward
yield "FULL", source, target, forward and backward
for source, target in _INVALID_CONVERSIONS:
yield "FORWARD", source, target, False
yield "FORWARD", target, source, False
yield "BACKWARD", source, target, False
yield "BACKWARD", target, source, False
yield "FULL", target, source, False
yield "FULL", source, target, False
subject_name_factory = create_subject_name_factory(f"test_type_compatibility-{trail}")
for compatibility, source_type, target_type, expected in _test_cases():
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": compatibility})
schema = {
"type": "record",
"name": "Objct",
"fields": [
{
"name": "field",
"type": source_type,
},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
schema["fields"][0]["type"] = target_type
res = await registry_async_client.post(
f"compatibility/subjects/{subject}/versions/latest{trail}",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert res.json() == {"is_compatible": expected}
@pytest.mark.parametrize("trail", ["", "/"])
async def test_record_schema_compatibility_forward(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_record_schema_compatibility_forward_{trail}")
subject = subject_name_factory()
schema_name = create_schema_name_factory(f"test_record_schema_compatibility_forward_{trail}")()
schema_1 = {
"type": "record",
"name": schema_name,
"fields": [
{
"name": "first_name",
"type": "string",
},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema_1)},
)
assert res.status == 200
assert "id" in res.json()
schema_id = res.json()["id"]
res = await registry_async_client.put(f"/config/{subject}{trail}", json={"compatibility": "FORWARD"})
assert res.status == 200
schema_2 = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "first_name", "type": "string"},
{"name": "last_name", "type": "string"},
{"name": "age", "type": "int"},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema_2)},
)
assert res.status == 200
assert "id" in res.json()
schema_id2 = res.json()["id"]
assert schema_id != schema_id2
schema_3a = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "last_name", "type": "string"},
{"name": "third_name", "type": "string", "default": "foodefaultvalue"},
{"name": "age", "type": "int"},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema_3a)},
)
    # Fails because the first_name field was removed, which is incompatible under FORWARD
assert res.status == 409
res_json = res.json()
assert res_json["error_code"] == 409
schema_3b = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "first_name", "type": "string"},
{"name": "last_name", "type": "string"},
{"name": "age", "type": "long"},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema_3b)},
)
    # Fails because int -> long is an incompatible type change under FORWARD
assert res.status == 409
res_json = res.json()
assert res_json["error_code"] == 409
schema_4 = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "first_name", "type": "string"},
{"name": "last_name", "type": "string"},
{"name": "third_name", "type": "string", "default": "foodefaultvalue"},
{"name": "age", "type": "int"},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": ujson.dumps(schema_4)},
)
assert res.status == 200
@pytest.mark.parametrize("trail", ["", "/"])
async def test_record_schema_compatibility_backward(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_record_schema_compatibility_backward_{trail}")
subject_1 = subject_name_factory()
schema_name = create_schema_name_factory(f"test_record_schema_compatibility_backward_{trail}")()
schema_1 = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "first_name", "type": "string"},
{"name": "last_name", "type": "string"},
{"name": "third_name", "type": "string", "default": "foodefaultvalue"},
{"name": "age", "type": "int"},
],
}
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}",
json={"schema": ujson.dumps(schema_1)},
)
assert res.status == 200
res = await registry_async_client.put(f"config/{subject_1}{trail}", json={"compatibility": "BACKWARD"})
assert res.status == 200
    # Adding fourth_name without a default is invalid under BACKWARD compatibility
schema_2 = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "first_name", "type": "string"},
{"name": "last_name", "type": "string"},
{"name": "third_name", "type": "string", "default": "foodefaultvalue"},
{"name": "fourth_name", "type": "string"},
{"name": "age", "type": "int"},
],
}
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}",
json={"schema": ujson.dumps(schema_2)},
)
assert res.status == 409
# Add a default value for the field
schema_2["fields"][3] = {"name": "fourth_name", "type": "string", "default": "foof"}
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}",
json={"schema": ujson.dumps(schema_2)},
)
assert res.status == 200
assert "id" in res.json()
# Try to submit schema with a different definition
schema_2["fields"][3] = {"name": "fourth_name", "type": "int", "default": 2}
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}",
json={"schema": ujson.dumps(schema_2)},
)
assert res.status == 409
subject_2 = subject_name_factory()
res = await registry_async_client.put(f"config/{subject_2}{trail}", json={"compatibility": "BACKWARD"})
assert res.status == 200
schema_1 = {"type": "record", "name": schema_name, "fields": [{"name": "first_name", "type": "string"}]}
res = await registry_async_client.post(f"subjects/{subject_2}/versions{trail}", json={"schema": ujson.dumps(schema_1)})
assert res.status == 200
schema_1["fields"].append({"name": "last_name", "type": "string"})
res = await registry_async_client.post(f"subjects/{subject_2}/versions{trail}", json={"schema": ujson.dumps(schema_1)})
assert res.status == 409
@pytest.mark.parametrize("trail", ["", "/"])
async def test_enum_schema_field_add_compatibility(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_enum_schema_field_add_compatibility-{trail}")
expected_results = [("BACKWARD", 200), ("FORWARD", 200), ("FULL", 200)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": compatibility})
assert res.status == 200
schema = {"type": "enum", "name": "Suit", "symbols": ["SPADES", "HEARTS", "DIAMONDS"]}
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Add a field
schema["symbols"].append("CLUBS")
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == status_code
@pytest.mark.parametrize("trail", ["", "/"])
async def test_array_schema_field_add_compatibility(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_array_schema_field_add_compatibility-{trail}")
expected_results = [("BACKWARD", 200), ("FORWARD", 409), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": compatibility})
assert res.status == 200
schema = {"type": "array", "items": "int"}
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Modify the items type
schema["items"] = "long"
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == status_code
@pytest.mark.parametrize("trail", ["", "/"])
async def test_array_nested_record_compatibility(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_array_nested_record_compatibility-{trail}")
expected_results = [("BACKWARD", 409), ("FORWARD", 200), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": compatibility})
assert res.status == 200
schema = {
"type": "array",
"items": {"type": "record", "name": "object", "fields": [{"name": "first_name", "type": "string"}]},
}
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Add a second field to the record
schema["items"]["fields"].append({"name": "last_name", "type": "string"})
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == status_code
@pytest.mark.parametrize("trail", ["", "/"])
async def test_record_nested_array_compatibility(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_record_nested_array_compatibility-{trail}")
expected_results = [("BACKWARD", 200), ("FORWARD", 409), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": compatibility})
assert res.status == 200
schema = {
"type": "record",
"name": "object",
"fields": [{"name": "simplearray", "type": {"type": "array", "items": "int"}}],
}
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Modify the array items type
schema["fields"][0]["type"]["items"] = "long"
res = await registry_async_client.post(f"subjects/{subject}/versions{trail}", json={"schema": ujson.dumps(schema)})
assert res.status == status_code
async def test_map_schema_field_add_compatibility(
registry_async_client: Client,
) -> None:  # TODO: Rename to plain check map schema and add additional steps
subject_name_factory = create_subject_name_factory("test_map_schema_field_add_compatibility")
expected_results = [("BACKWARD", 200), ("FORWARD", 409), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
schema = {"type": "map", "values": "int"}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Modify the items type
schema["values"] = "long"
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code
async def test_enum_schema(registry_async_client: Client) -> None:
subject_name_factory = create_subject_name_factory("test_enum_schema")
for compatibility in ["BACKWARD", "FORWARD", "FULL"]:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
schema = {"type": "enum", "name": "testenum", "symbols": ["first"]}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
# Add a symbol.
schema["symbols"].append("second")
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Remove a symbol
schema["symbols"].pop(1)
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Change the name
schema["name"] = "another"
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 409
# Inside record
subject = subject_name_factory()
schema = {
"type": "record",
"name": "object",
"fields": [{"name": "enumkey", "type": {"type": "enum", "name": "testenum", "symbols": ["first"]}}],
}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
# Add a symbol.
schema["fields"][0]["type"]["symbols"].append("second")
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Remove a symbol
schema["fields"][0]["type"]["symbols"].pop(1)
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
# Change the name
schema["fields"][0]["type"]["name"] = "another"
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 409
@pytest.mark.parametrize("compatibility", ["BACKWARD", "FORWARD", "FULL"])
async def test_fixed_schema(registry_async_client: Client, compatibility: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_fixed_schema-{compatibility}")
status_code_allowed = 200
status_code_denied = 409
subject_1 = subject_name_factory()
res = await registry_async_client.put(f"config/{subject_1}", json={"compatibility": compatibility})
assert res.status == 200
schema = {"type": "fixed", "size": 16, "name": "md5", "aliases": ["testalias"]}
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema)})
# Add new alias
schema["aliases"].append("anotheralias")
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code_allowed
# Try to change size
schema["size"] = 32
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code_denied
# Try to change name
schema["size"] = 16
schema["name"] = "denied"
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code_denied
# In a record
subject_2 = subject_name_factory()
schema = {
"type": "record",
"name": "object",
"fields": [{"name": "fixedkey", "type": {"type": "fixed", "size": 16, "name": "md5", "aliases": ["testalias"]}}],
}
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema)})
# Add new alias
schema["fields"][0]["type"]["aliases"].append("anotheralias")
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code_allowed
# Try to change size
schema["fields"][0]["type"]["size"] = 32
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code_denied
# Try to change name
schema["fields"][0]["type"]["size"] = 16
schema["fields"][0]["type"]["name"] = "denied"
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code_denied
async def test_primitive_schema(registry_async_client: Client) -> None:
subject_name_factory = create_subject_name_factory("test_primitive_schema")
expected_results = [("BACKWARD", 200), ("FORWARD", 200), ("FULL", 200)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
# Transition from string to bytes
schema = {"type": "string"}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
schema["type"] = "bytes"
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == status_code
expected_results = [("BACKWARD", 409), ("FORWARD", 409), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
# Transition from string to int
schema = {"type": "string"}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
schema["type"] = "int"
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
async def test_union_comparing_to_other_types(registry_async_client: Client) -> None:
subject_name_factory = create_subject_name_factory("test_primitive_schema")
expected_results = [("BACKWARD", 409), ("FORWARD", 200), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
# Union vs non-union with the same schema
schema = [{"type": "array", "name": "listofstrings", "items": "string"}, "string"]
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
plain_schema = {"type": "string"}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(plain_schema)})
assert res.status == status_code
expected_results = [("BACKWARD", 200), ("FORWARD", 409), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
# Non-union first
schema = {"type": "array", "name": "listofstrings", "items": "string"}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
union_schema = [{"type": "array", "name": "listofstrings", "items": "string"}, "string"]
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(union_schema)})
assert res.status == status_code
expected_results = [("BACKWARD", 409), ("FORWARD", 409), ("FULL", 409)]
for compatibility, status_code in expected_results:
subject = subject_name_factory()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
assert res.status == 200
# Union to a completely different schema
schema = [{"type": "array", "name": "listofstrings", "items": "string"}, "string"]
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
plain_wrong_schema = {"type": "int"}
res = await registry_async_client.post(
f"subjects/{subject}/versions", json={"schema": ujson.dumps(plain_wrong_schema)}
)
assert res.status == status_code
async def test_transitive_compatibility(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_transitive_compatibility")()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": "BACKWARD_TRANSITIVE"})
assert res.status == 200
schema0 = {
"type": "record",
"name": "Objct",
"fields": [
{"name": "age", "type": "int"},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema0)},
)
assert res.status == 200
schema1 = {
"type": "record",
"name": "Objct",
"fields": [
{"name": "age", "type": "int"},
{
"name": "first_name",
"type": "string",
"default": "John",
},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema1)},
)
assert res.status == 200
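    # schema2 is backward compatible with schema1, but not with schema0: its required
    # first_name field has no default, so the BACKWARD_TRANSITIVE check must reject it.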
schema2 = {
"type": "record",
"name": "Objct",
"fields": [
{"name": "age", "type": "int"},
{
"name": "first_name",
"type": "string",
},
{
"name": "last_name",
"type": "string",
"default": "Doe",
},
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema2)},
)
assert res.status == 409
res_json = res.json()
assert res_json["error_code"] == 409
async def assert_schema_versions(client: Client, trail: str, schema_id: int, expected: List[Tuple[str, int]]) -> None:
"""
Calls /schemas/ids/{schema_id}/versions and asserts the expected results were in the response.
"""
res = await client.get(f"/schemas/ids/{schema_id}/versions{trail}")
assert res.status_code == 200
    # Schema Registry doesn't return an ordered list while Karapace does,
    # so check for equality while ignoring ordering.
assert len(res.json()) == len(expected)
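    # Build the expected {"subject": ..., "version": ...} entries and check membership one by one.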
for e in ({"subject": e[0], "version": e[1]} for e in expected):
assert e in res.json()
async def assert_schema_versions_failed(client: Client, trail: str, schema_id: int, response_code: int = 404) -> None:
"""
    Calls /schemas/ids/{schema_id}/versions and asserts that the response code matches the expected one.
"""
res = await client.get(f"/schemas/ids/{schema_id}/versions{trail}")
assert res.status_code == response_code
async def register_schema(registry_async_client: Client, trail: str, subject: str, schema_str: str) -> Tuple[int, int]:
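    """Registers schema_str under the subject and returns the (schema_id, version) pair."""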
# Register to get the id
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": schema_str},
)
assert res.status == 200
schema_id = res.json()["id"]
    # Look up the version number via the subject check endpoint
res = await registry_async_client.post(
f"subjects/{subject}{trail}",
json={"schema": schema_str},
)
assert res.status == 200
assert res.json()["id"] == schema_id
return schema_id, res.json()["version"]
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_versions_multiple_subjects_same_schema(registry_async_client: Client, trail: str) -> None:
"""
Tests case where there are multiple subjects with the same schema.
The schema/versions endpoint returns all these subjects.
"""
subject_name_factory = create_subject_name_factory(f"test_schema_versions_multiple_subjects_same_schema-{trail}")
schema_name_factory = create_schema_name_factory(f"test_schema_versions_multiple_subjects_same_schema_{trail}")
schema_1 = {
"type": "record",
"name": schema_name_factory(),
"fields": [
{
"name": "f1",
"type": "string",
},
{
"name": "f2",
"type": "string",
},
],
}
schema_str_1 = ujson.dumps(schema_1)
schema_2 = {
"type": "record",
"name": schema_name_factory(),
"fields": [
{
"name": "f1",
"type": "string",
}
],
}
schema_str_2 = ujson.dumps(schema_2)
subject_1 = subject_name_factory()
schema_id_1, version_1 = await register_schema(registry_async_client, trail, subject_1, schema_str_1)
schema_1_versions = [(subject_1, version_1)]
await assert_schema_versions(registry_async_client, trail, schema_id_1, schema_1_versions)
subject_2 = subject_name_factory()
schema_id_2, version_2 = await register_schema(registry_async_client, trail, subject_2, schema_str_1)
schema_1_versions = [(subject_1, version_1), (subject_2, version_2)]
assert schema_id_1 == schema_id_2
await assert_schema_versions(registry_async_client, trail, schema_id_1, schema_1_versions)
subject_3 = subject_name_factory()
schema_id_3, version_3 = await register_schema(registry_async_client, trail, subject_3, schema_str_1)
schema_1_versions = [(subject_1, version_1), (subject_2, version_2), (subject_3, version_3)]
assert schema_id_1 == schema_id_3
await assert_schema_versions(registry_async_client, trail, schema_id_1, schema_1_versions)
# subject_4 with different schema to check there are no side effects
subject_4 = subject_name_factory()
schema_id_4, version_4 = await register_schema(registry_async_client, trail, subject_4, schema_str_2)
schema_2_versions = [(subject_4, version_4)]
assert schema_id_1 != schema_id_4
await assert_schema_versions(registry_async_client, trail, schema_id_1, schema_1_versions)
await assert_schema_versions(registry_async_client, trail, schema_id_4, schema_2_versions)
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_versions_deleting(registry_async_client: Client, trail: str) -> None:
"""
Tests getting schema versions when removing a schema version and eventually the subject.
"""
subject = create_subject_name_factory(f"test_schema_versions_deleting_{trail}")()
schema_name = create_schema_name_factory(f"test_schema_versions_deleting_{trail}")()
schema_1 = {
"type": "record",
"name": schema_name,
"fields": [{"name": "field_1", "type": "string"}, {"name": "field_2", "type": "string"}],
}
schema_str_1 = ujson.dumps(schema_1)
schema_2 = {
"type": "record",
"name": schema_name,
"fields": [
{"name": "field_1", "type": "string"},
],
}
schema_str_2 = ujson.dumps(schema_2)
schema_id_1, version_1 = await register_schema(registry_async_client, trail, subject, schema_str_1)
schema_1_versions = [(subject, version_1)]
await assert_schema_versions(registry_async_client, trail, schema_id_1, schema_1_versions)
res = await registry_async_client.put(f"config/{subject}{trail}", json={"compatibility": "BACKWARD"})
assert res.status == 200
schema_id_2, version_2 = await register_schema(registry_async_client, trail, subject, schema_str_2)
schema_2_versions = [(subject, version_2)]
await assert_schema_versions(registry_async_client, trail, schema_id_2, schema_2_versions)
# Deleting one version, the other still found
res = await registry_async_client.delete("subjects/{}/versions/{}".format(subject, version_1))
assert res.status_code == 200
assert res.json() == version_1
await assert_schema_versions(registry_async_client, trail, schema_id_1, [])
await assert_schema_versions(registry_async_client, trail, schema_id_2, schema_2_versions)
# Deleting the subject, the schema version 2 cannot be found anymore
res = await registry_async_client.delete("subjects/{}".format(subject))
assert res.status_code == 200
assert res.json() == [version_2]
await assert_schema_versions(registry_async_client, trail, schema_id_1, [])
await assert_schema_versions(registry_async_client, trail, schema_id_2, [])
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_types(registry_async_client: Client, trail: str) -> None:
"""
Tests for /schemas/types endpoint.
"""
res = await registry_async_client.get(f"/schemas/types{trail}")
assert res.status_code == 200
json = res.json()
assert len(json) == 3
assert "AVRO" in json
assert "JSON" in json
assert "PROTOBUF" in json
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_repost(registry_async_client: Client, trail: str) -> None:
""" "
Repost same schema again to see that a new id is not generated but an old one is given back
"""
subject = create_subject_name_factory(f"test_schema_repost-{trail}")()
unique_field_factory = create_field_name_factory(trail)
unique = unique_field_factory()
schema_str = ujson.dumps({"type": "string", "unique": unique})
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": schema_str},
)
assert res.status == 200
assert "id" in res.json()
schema_id = res.json()["id"]
res = await registry_async_client.get(f"schemas/ids/{schema_id}{trail}")
assert res.status_code == 200
assert ujson.loads(res.json()["schema"]) == ujson.loads(schema_str)
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={"schema": schema_str},
)
assert res.status == 200
assert "id" in res.json()
assert schema_id == res.json()["id"]
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_missing_body(registry_async_client: Client, trail: str) -> None:
subject = create_subject_name_factory(f"test_schema_missing_body-{trail}")()
res = await registry_async_client.post(
f"subjects/{subject}/versions{trail}",
json={},
)
assert res.status == 422
assert res.json()["error_code"] == 42201
assert res.json()["message"] == "Empty schema"
async def test_schema_non_existing_id(registry_async_client: Client) -> None:
"""
Tests getting a non-existing schema id
"""
result = await registry_async_client.get(os.path.join("schemas/ids/123456789"))
assert result.json()["error_code"] == 40403
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_invalid_id(registry_async_client: Client, trail: str) -> None:
"""
Tests getting an invalid schema id
"""
result = await registry_async_client.get(f"schemas/ids/invalid{trail}")
assert result.status == 404
assert result.json()["error_code"] == 404
assert result.json()["message"] == "HTTP 404 Not Found"
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_subject_invalid_id(registry_async_client: Client, trail: str) -> None:
"""
    Creates a subject with a schema and tries to fetch invalid versions for the subject.
"""
subject = create_subject_name_factory(f"test_schema_subject_invalid_id-{trail}")()
unique_field_factory = create_field_name_factory(trail)
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": '{"type": "string", "foo": "string", "%s": "string"}' % unique_field_factory()},
)
assert res.status_code == 200
# Find an invalid version 0
res = await registry_async_client.get(f"subjects/{subject}/versions/0")
assert res.status_code == 422
assert res.json()["error_code"] == 42202
assert (
res.json()["message"]
== "The specified version '0' is not a valid version id. "
+ 'Allowed values are between [1, 2^31-1] and the string "latest"'
)
# Find an invalid version (too large)
res = await registry_async_client.get(f"subjects/{subject}/versions/15")
assert res.status_code == 404
assert res.json()["error_code"] == 40402
assert res.json()["message"] == "Version 15 not found."
async def test_schema_subject_post_invalid(registry_async_client: Client) -> None:
"""
Tests posting to /subjects/{subject} with different invalid values.
"""
subject_name_factory = create_subject_name_factory("test_schema_subject_post_invalid")
schema_str = ujson.dumps({"type": "string"})
# Create the subject
subject_1 = subject_name_factory()
res = await registry_async_client.post(
f"subjects/{subject_1}/versions",
json={"schema": schema_str},
)
assert res.status == 200
res = await registry_async_client.post(
f"subjects/{subject_1}",
json={"schema": ujson.dumps({"type": "invalid_type"})},
)
assert res.status == 500, "Invalid schema for existing subject should return 500"
assert res.json()["message"] == f"Error while looking up schema under subject {subject_1}"
# Subject is not found
subject_2 = subject_name_factory()
res = await registry_async_client.post(
f"subjects/{subject_2}",
json={"schema": schema_str},
)
assert res.status == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject_2}' not found."
# Schema not found for subject
res = await registry_async_client.post(
f"subjects/{subject_1}",
json={"schema": '{"type": "int"}'},
)
assert res.status == 404
assert res.json()["error_code"] == 40403
assert res.json()["message"] == "Schema not found"
# Schema not included in the request body
res = await registry_async_client.post(f"subjects/{subject_1}", json={})
assert res.status == 500
assert res.json()["error_code"] == 500
assert res.json()["message"] == f"Error while looking up schema under subject {subject_1}"
# Schema not included in the request body for subject that does not exist
subject_3 = subject_name_factory()
res = await registry_async_client.post(
f"subjects/{subject_3}",
json={},
)
assert res.status == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject_3}' not found."
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_lifecycle(registry_async_client: Client, trail: str) -> None:
subject = create_subject_name_factory(f"test_schema_lifecycle-{trail}")()
unique_field_factory = create_field_name_factory(trail)
unique_1 = unique_field_factory()
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps({"type": "string", "foo": "string", unique_1: "string"})},
)
assert res.status_code == 200
schema_id_1 = res.json()["id"]
unique_2 = unique_field_factory()
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps({"type": "string", "foo": "string", unique_2: "string"})},
)
schema_id_2 = res.json()["id"]
assert res.status_code == 200
assert schema_id_1 != schema_id_2
await assert_schema_versions(registry_async_client, trail, schema_id_1, [(subject, 1)])
await assert_schema_versions(registry_async_client, trail, schema_id_2, [(subject, 2)])
result = await registry_async_client.get(os.path.join(f"schemas/ids/{schema_id_1}"))
schema_json_1 = ujson.loads(result.json()["schema"])
assert schema_json_1["type"] == "string"
assert schema_json_1["foo"] == "string"
assert schema_json_1[unique_1] == "string"
result = await registry_async_client.get(os.path.join(f"schemas/ids/{schema_id_2}"))
schema_json_2 = ujson.loads(result.json()["schema"])
assert schema_json_2["type"] == "string"
assert schema_json_2["foo"] == "string"
assert schema_json_2[unique_2] == "string"
res = await registry_async_client.get("subjects")
assert res.status_code == 200
assert subject in res.json()
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status_code == 200
assert res.json() == [1, 2]
res = await registry_async_client.get(f"subjects/{subject}/versions/1")
assert res.status_code == 200
assert res.json()["subject"] == subject
assert ujson.loads(res.json()["schema"]) == schema_json_1
# Delete an actual version
res = await registry_async_client.delete(f"subjects/{subject}/versions/1")
assert res.status_code == 200
assert res.json() == 1
# Get the schema by id, still there, wasn't hard-deleted
res = await registry_async_client.get(f"schemas/ids/{schema_id_1}{trail}")
assert res.status_code == 200
assert ujson.loads(res.json()["schema"]) == schema_json_1
# Get the schema by id
res = await registry_async_client.get(f"schemas/ids/{schema_id_2}{trail}")
assert res.status_code == 200
# Get the versions, old version not found anymore (even if schema itself is)
await assert_schema_versions(registry_async_client, trail, schema_id_1, [])
await assert_schema_versions(registry_async_client, trail, schema_id_2, [(subject, 2)])
# Delete a whole subject
res = await registry_async_client.delete(f"subjects/{subject}")
assert res.status_code == 200
assert res.json() == [2]
# List all subjects, our subject shouldn't be in the list
res = await registry_async_client.get("subjects")
assert res.status_code == 200
assert subject not in res.json()
# After deleting the last version of a subject, it shouldn't be in the list
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": '{"type": "string", "unique": "%s"}' % unique_field_factory()},
)
assert res.status == 200
res = await registry_async_client.get("subjects")
assert subject in res.json()
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.json() == [3]
res = await registry_async_client.delete(f"subjects/{subject}/versions/3")
assert res.status_code == 200
res = await registry_async_client.get("subjects")
assert subject not in res.json()
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status_code == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject}' not found."
res = await registry_async_client.get(f"subjects/{subject}/versions/latest")
assert res.status_code == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject}' not found."
# Creating a new schema works after deleting the only available version
unique_3 = unique_field_factory()
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps({"type": "string", "foo": "string", unique_3: "string"})},
)
assert res.status == 200
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.json() == [4]
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_version_numbering(registry_async_client: Client, trail: str) -> None:
"""
    Tests that updating the schema of a subject increases its version number.
    Deletes the subject and asserts that, when recreated, it has a greater version number.
"""
subject = create_subject_name_factory(f"test_schema_version_numbering-{trail}")()
unique_field_factory = create_field_name_factory(trail)
unique = unique_field_factory()
schema = {
"type": "record",
"name": unique,
"fields": [
{
"name": "first_name",
"type": "string",
}
],
}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
assert "id" in res.json()
res = await registry_async_client.put(f"config/{subject}", json={"compatibility": "FORWARD"})
assert res.status == 200
schema2 = {
"type": "record",
"name": unique,
"fields": [
{
"name": "first_name",
"type": "string",
},
{
"name": "last_name",
"type": "string",
},
],
}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema2)})
assert res.status == 200
assert "id" in res.json()
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status == 200
assert res.json() == [1, 2]
# Recreate subject
res = await registry_async_client.delete(f"subjects/{subject}")
assert res.status == 200
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status == 200
assert res.json() == [3] # Version number generation should now begin at 3
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_version_numbering_complex(registry_async_client: Client, trail: str) -> None:
"""
    Tests that a more complex schema, when fetched, matches the one that was created.
"""
subject = create_subject_name_factory(f"test_schema_version_numbering_complex-{trail}")()
unique_field_factory = create_field_name_factory(trail)
schema = {
"type": "record",
"name": "Object",
"fields": [
{
"name": "first_name",
"type": "string",
},
],
"unique": unique_field_factory(),
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema)},
)
schema_id = res.json()["id"]
res = await registry_async_client.get(f"subjects/{subject}/versions/1")
assert res.status == 200
assert res.json()["subject"] == subject
assert sorted(ujson.loads(res.json()["schema"])) == sorted(schema)
await assert_schema_versions(registry_async_client, trail, schema_id, [(subject, 1)])
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_three_subjects_sharing_schema(registry_async_client: Client, trail: str) -> None:
""" "
Submits two subjects with the same schema.
Submits a third subject initially with different schema. Updates to share the schema.
Asserts all three subjects have the same schema.
"""
subject_name_factory = create_subject_name_factory(f"test_schema_XXX-{trail}")
unique_field_factory = create_field_name_factory(trail)
# Submitting the exact same schema for a different subject should return the same schema ID.
subject_1 = subject_name_factory()
schema = {
"type": "record",
"name": "Object",
"fields": [
{
"name": "just_a_value",
"type": "string",
},
{
"name": unique_field_factory(),
"type": "string",
},
],
}
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
assert "id" in res.json()
schema_id_1 = res.json()["id"]
# New subject with the same schema
subject_2 = subject_name_factory()
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 200
assert "id" in res.json()
schema_id_2 = res.json()["id"]
assert schema_id_1 == schema_id_2
# It also works for multiple versions in a single subject
subject_3 = subject_name_factory()
res = await registry_async_client.put(
f"config/{subject_3}", json={"compatibility": "NONE"}
) # We don't care about the compatibility in this test
res = await registry_async_client.post(
f"subjects/{subject_3}/versions",
json={"schema": '{"type": "string"}'},
)
assert res.status == 200
res = await registry_async_client.post(
f"subjects/{subject_3}/versions",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert res.json()["id"] == schema_id_1 # Same ID as in the previous test step
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_subject_version_schema(registry_async_client: Client, trail: str) -> None:
"""
Tests for the /subjects/(string: subject)/versions/(versionId: version)/schema endpoint.
"""
subject_name_factory = create_subject_name_factory(f"test_schema_subject_version_schema_{trail}")
schema_name = create_schema_name_factory(f"test_schema_subject_version_schema_{trail}")()
# The subject version schema endpoint returns the correct results
subject_1 = subject_name_factory()
schema = {
"type": "record",
"name": schema_name,
"fields": [
{
"name": "just_a_value",
"type": "string",
}
],
}
schema_str = ujson.dumps(schema)
res = await registry_async_client.post(
f"subjects/{subject_1}/versions",
json={"schema": schema_str},
)
assert res.status == 200
res = await registry_async_client.get(f"subjects/{subject_1}/versions/1/schema")
assert res.status == 200
assert res.json() == ujson.loads(schema_str)
subject_2 = subject_name_factory()
res = await registry_async_client.get(f"subjects/{subject_2}/versions/1/schema") # Invalid subject
assert res.status == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject_2}' not found."
res = await registry_async_client.get(f"subjects/{subject_1}/versions/2/schema")
assert res.status == 404
assert res.json()["error_code"] == 40402
assert res.json()["message"] == "Version 2 not found."
res = await registry_async_client.get(f"subjects/{subject_1}/versions/latest/schema")
assert res.status == 200
assert res.json() == ujson.loads(schema_str)
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_same_subject(registry_async_client: Client, trail: str) -> None:
"""
    The same schema JSON should be returned when checking the same schema string against the same subject.
"""
subject_name_factory = create_subject_name_factory(f"test_schema_same_subject_{trail}")
schema_name = create_schema_name_factory(f"test_schema_same_subject_{trail}")()
schema_str = ujson.dumps(
{
"type": "record",
"name": schema_name,
"fields": [
{
"name": "f",
"type": "string",
}
],
}
)
subject = subject_name_factory()
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": schema_str},
)
assert res.status == 200
schema_id = res.json()["id"]
res = await registry_async_client.post(
f"subjects/{subject}",
json={"schema": schema_str},
)
assert res.status == 200
# Switch the str schema to a dict for comparison
    res_json = res.json()
    res_json["schema"] = ujson.loads(res_json["schema"])
    assert res_json == {"id": schema_id, "subject": subject, "schema": ujson.loads(schema_str), "version": 1}
@pytest.mark.parametrize("trail", ["", "/"])
async def test_schema_version_number_existing_schema(registry_async_client: Client, trail: str) -> None:
"""
Tests creating the same schemas for two subjects. Asserts the schema ids are the same for both subjects.
"""
subject_name_factory = create_subject_name_factory(f"test_schema_version_number_existing_schema-{trail}")
unique_field_factory = create_field_name_factory(trail)
subject_1 = subject_name_factory()
res = await registry_async_client.put(
f"config/{subject_1}", json={"compatibility": "NONE"}
) # We don't care about compatibility
unique = unique_field_factory()
schema_1 = {
"type": "record",
"name": "Object",
"fields": [
{
"name": "just_a_value",
"type": "string",
},
{
"name": f"{unique}",
"type": "string",
},
],
}
schema_2 = {
"type": "record",
"name": "Object",
"fields": [
{
"name": "just_a_value2",
"type": "string",
},
{
"name": f"{unique}",
"type": "string",
},
],
}
schema_3 = {
"type": "record",
"name": "Object",
"fields": [
{
"name": "just_a_value3",
"type": "int",
},
{
"name": f"{unique}",
"type": "string",
},
],
}
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema_1)})
assert res.status == 200
schema_id_1 = res.json()["id"]
res = await registry_async_client.post(f"subjects/{subject_1}/versions", json={"schema": ujson.dumps(schema_2)})
assert res.status == 200
schema_id_2 = res.json()["id"]
assert schema_id_2 > schema_id_1
# Reuse the first schema in another subject
subject_2 = subject_name_factory()
res = await registry_async_client.put(
f"config/{subject_2}", json={"compatibility": "NONE"}
) # We don't care about compatibility
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema_1)})
assert res.status == 200
assert res.json()["id"] == schema_id_1
# Create a new schema
res = await registry_async_client.post(f"subjects/{subject_2}/versions", json={"schema": ujson.dumps(schema_3)})
assert res.status == 200
schema_id_3 = res.json()["id"]
assert res.json()["id"] == schema_id_3
assert schema_id_3 > schema_id_2
@pytest.mark.parametrize("trail", ["", "/"])
async def test_config(registry_async_client: Client, trail: str) -> None:
subject_name_factory = create_subject_name_factory(f"test_config-{trail}")
# Tests /config endpoint
res = await registry_async_client.put(f"config{trail}", json={"compatibility": "FULL"})
assert res.status_code == 200
assert res.json()["compatibility"] == "FULL"
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.get(f"config{trail}")
assert res.status_code == 200
assert res.json()["compatibilityLevel"] == "FULL"
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.put(f"config{trail}", json={"compatibility": "NONE"})
assert res.status_code == 200
assert res.json()["compatibility"] == "NONE"
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.put(f"config{trail}", json={"compatibility": "nonexistentmode"})
assert res.status_code == 422
assert res.json()["error_code"] == 42203
assert res.json()["message"] == SchemaErrorMessages.INVALID_COMPATIBILITY_LEVEL.value
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# Create a new subject so we can try setting its config
subject_1 = subject_name_factory()
res = await registry_async_client.post(
f"subjects/{subject_1}/versions{trail}",
json={"schema": '{"type": "string"}'},
)
assert res.status_code == 200
assert "id" in res.json()
res = await registry_async_client.get(f"config/{subject_1}{trail}")
assert res.status_code == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == SchemaErrorMessages.SUBJECT_NOT_FOUND_FMT.value.format(subject=subject_1)
res = await registry_async_client.put(f"config/{subject_1}{trail}", json={"compatibility": "FULL"})
assert res.status_code == 200
assert res.json()["compatibility"] == "FULL"
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.get(f"config/{subject_1}{trail}")
assert res.status_code == 200
assert res.json()["compatibilityLevel"] == "FULL"
# It's possible to add a config to a subject that doesn't exist yet
subject_2 = subject_name_factory()
res = await registry_async_client.put(f"config/{subject_2}{trail}", json={"compatibility": "FULL"})
assert res.status_code == 200
assert res.json()["compatibility"] == "FULL"
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# The subject doesn't exist from the schema point of view
res = await registry_async_client.get(f"subjects/{subject_2}/versions")
assert res.status_code == 404
res = await registry_async_client.post(
f"subjects/{subject_2}/versions",
json={"schema": '{"type": "string"}'},
)
assert res.status_code == 200
assert "id" in res.json()
res = await registry_async_client.get(f"config/{subject_2}")
assert res.status_code == 200
assert res.json()["compatibilityLevel"] == "FULL"
# Test that config is returned for a subject that does not have an existing schema
subject_3 = subject_name_factory()
res = await registry_async_client.put(f"config/{subject_3}", json={"compatibility": "NONE"})
assert res.status == 200
assert res.json()["compatibility"] == "NONE"
res = await registry_async_client.get(f"config/{subject_3}")
assert res.status == 200
assert res.json()["compatibilityLevel"] == "NONE"
async def test_http_headers(registry_async_client: Client) -> None:
res = await registry_async_client.get("subjects", headers={"Accept": "application/json"})
assert res.headers["Content-Type"] == "application/json"
# The default is received when not specifying
res = await registry_async_client.get("subjects")
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# Giving an invalid Accept value
res = await registry_async_client.get("subjects", headers={"Accept": "application/vnd.schemaregistry.v2+json"})
assert res.status == 406
assert res.json()["message"] == "HTTP 406 Not Acceptable"
# PUT with an invalid Content type
res = await registry_async_client.put("config", json={"compatibility": "NONE"}, headers={"Content-Type": "text/html"})
assert res.status == 415
assert res.json()["message"] == "HTTP 415 Unsupported Media Type"
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# Multiple Accept values
res = await registry_async_client.get(
"subjects", headers={"Accept": "text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2"}
)
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# Weight works
res = await registry_async_client.get(
"subjects",
headers={"Accept": "application/vnd.schemaregistry.v2+json; q=0.1, application/vnd.schemaregistry+json; q=0.9"},
)
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry+json"
# Accept without any subtype works
res = await registry_async_client.get("subjects", headers={"Accept": "application/*"})
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.get("subjects", headers={"Accept": "text/*"})
assert res.status == 406
assert res.json()["message"] == "HTTP 406 Not Acceptable"
# Accept without any type works
res = await registry_async_client.get("subjects", headers={"Accept": "*/does_not_matter"})
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# Default return is correct
res = await registry_async_client.get("subjects", headers={"Accept": "*"})
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.get("subjects", headers={"Accept": "*/*"})
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
# Octet-stream is supported as a Content-Type
res = await registry_async_client.put(
"config", json={"compatibility": "FULL"}, headers={"Content-Type": "application/octet-stream"}
)
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
res = await registry_async_client.get("subjects", headers={"Accept": "application/octet-stream"})
assert res.status == 406
# Parse Content-Type correctly
res = await registry_async_client.put(
"config",
json={"compatibility": "NONE"},
headers={"Content-Type": "application/vnd.schemaregistry.v1+json; charset=utf-8"},
)
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
assert res.json()["compatibility"] == "NONE"
# Works with other than the default charset
res = await registry_async_client.put_with_data(
"config",
data='{"compatibility": "NONE"}'.encode("utf-16"),
headers={"Content-Type": "application/vnd.schemaregistry.v1+json; charset=utf-16"},
)
assert res.status == 200
assert res.headers["Content-Type"] == "application/vnd.schemaregistry.v1+json"
assert res.json()["compatibility"] == "NONE"
if "SERVER_URI" in os.environ:
for content_header in [
{},
{"Content-Type": "application/json"},
{"content-type": "application/json"},
{"CONTENT-Type": "application/json"},
{"coNTEnt-tYPe": "application/json"},
]:
path = os.path.join(os.getenv("SERVER_URI"), "subjects/unknown_subject")
res = requests.request("POST", path, data=b"{}", headers=content_header)
assert res.status_code == 404, res.content
async def test_schema_body_validation(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_schema_body_validation")()
post_endpoints = {f"subjects/{subject}", f"subjects/{subject}/versions"}
for endpoint in post_endpoints:
# Wrong field name
res = await registry_async_client.post(endpoint, json={"invalid_field": "invalid_value"})
assert res.status == 422
assert res.json()["error_code"] == 422
assert res.json()["message"] == "Unrecognized field: invalid_field"
# Additional field
res = await registry_async_client.post(
endpoint, json={"schema": '{"type": "string"}', "invalid_field": "invalid_value"}
)
assert res.status == 422
assert res.json()["error_code"] == 422
assert res.json()["message"] == "Unrecognized field: invalid_field"
# Invalid body type
res = await registry_async_client.post(endpoint, json="invalid")
assert res.status == 500
assert res.json()["error_code"] == 500
assert res.json()["message"] == "Internal Server Error"
async def test_version_number_validation(registry_async_client: Client) -> None:
"""
Creates a subject and schema. Tests that the endpoints
subjects/{subject}/versions/{version} and
subjects/{subject}/versions/{version}/schema
return correct values both with valid and invalid parameters.
"""
subject = create_subject_name_factory("test_version_number_validation")()
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": '{"type": "string"}'},
)
assert res.status_code == 200
assert "id" in res.json()
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status == 200
schema_version = res.json()[0]
invalid_schema_version = schema_version - 1
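    # Version numbering starts at 1, so this evaluates to 0 and is out of range.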
version_endpoints = {f"subjects/{subject}/versions/$VERSION", f"subjects/{subject}/versions/$VERSION/schema"}
for endpoint in version_endpoints:
# Valid schema id
res = await registry_async_client.get(endpoint.replace("$VERSION", str(schema_version)))
assert res.status == 200
# Invalid number
res = await registry_async_client.get(endpoint.replace("$VERSION", str(invalid_schema_version)))
assert res.status == 422
assert res.json()["error_code"] == 42202
assert (
res.json()["message"] == f"The specified version '{invalid_schema_version}' is not a valid version id. "
'Allowed values are between [1, 2^31-1] and the string "latest"'
)
# Valid latest string
res = await registry_async_client.get(endpoint.replace("$VERSION", "latest"))
assert res.status == 200
# Invalid string
res = await registry_async_client.get(endpoint.replace("$VERSION", "invalid"))
assert res.status == 422
assert res.json()["error_code"] == 42202
assert (
res.json()["message"] == "The specified version 'invalid' is not a valid version id. "
'Allowed values are between [1, 2^31-1] and the string "latest"'
)
async def test_common_endpoints(registry_async_client: Client) -> None:
res = await registry_async_client.get("")
assert res.status == 200
assert res.json() == {}
async def test_invalid_namespace(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_invalid_namespace")()
schema = {"type": "record", "name": "foo", "namespace": "foo-bar-baz", "fields": []}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.status == 422, res.json()
json_res = res.json()
assert json_res["error_code"] == 44201, json_res
expected_message = (
"Invalid AVRO schema. Error: foo-bar-baz is not a valid Avro name because it does not match the pattern "
"(?:^|\\.)[A-Za-z_][A-Za-z0-9_]*$"
)
assert json_res["message"] == expected_message, json_res
async def test_schema_remains_constant(registry_async_client: Client) -> None:
"""
Creates a subject with schema. Asserts the schema is the same when fetching it using schemas/ids/{schema_id}
"""
subject = create_subject_name_factory("test_schema_remains_constant")()
schema_name = create_schema_name_factory("test_schema_remains_constant")()
schema = {
"type": "record",
"name": schema_name,
"namespace": "foo_bar_baz",
"fields": [{"type": "string", "name": "bla"}],
}
schema_str = ujson.dumps(schema)
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": schema_str})
assert res.ok, res.json()
schema_id = res.json()["id"]
res = await registry_async_client.get(f"schemas/ids/{schema_id}")
assert ujson.loads(res.json()["schema"]) == ujson.loads(schema_str)
async def test_malformed_kafka_message(
kafka_servers: KafkaServers, registry_async: KarapaceSchemaRegistry, registry_async_client: Client
) -> None:
if registry_async:
topic = registry_async.config["topic_name"]
else:
topic = config.DEFAULTS["topic_name"]
producer = KafkaProducer(bootstrap_servers=kafka_servers.bootstrap_servers)
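    # Produce a record straight into the schemas topic so the registry consumes a
    # schema value ({"foo": "bar"}) that is not a valid Avro schema.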
message_key = {"subject": "foo", "version": 1, "magic": 1, "keytype": "SCHEMA"}
import random
schema_id = random.randint(20000, 30000)
payload = {"schema": jsonlib.dumps({"foo": "bar"}, indent=None, separators=(",", ":"))}
message_value = {"deleted": False, "id": schema_id, "subject": "foo", "version": 1}
message_value.update(payload)
producer.send(topic, key=ujson.dumps(message_key).encode(), value=ujson.dumps(message_value).encode()).get()
path = f"schemas/ids/{schema_id}"
res = await repeat_until_successful_request(
registry_async_client.get,
path,
json_data=None,
headers=None,
error_msg=f"Schema id {schema_id} not found",
timeout=20,
sleep=1,
)
res_data = res.json()
assert res_data == payload, res_data
async def test_inner_type_compat_failure(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_inner_type_compat_failure")()
sc = {
"type": "record",
"name": "record_line_movement_multiple_deleted",
"namespace": "sya",
"fields": [
{
"name": "meta",
"type": {"type": "record", "name": "meta", "fields": [{"name": "date", "type": "long"}]},
}
],
}
ev = {
"type": "record",
"name": "record_line_movement_multiple_deleted",
"namespace": "sya",
"fields": [
{
"name": "meta",
"type": {
"type": "record",
"name": "meta",
"fields": [{"name": "date", "type": {"type": "long", "logicalType": "timestamp-millis"}}],
},
}
],
}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(sc)})
assert res.ok
sc_id = res.json()["id"]
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(ev)})
assert res.ok
assert sc_id != res.json()["id"]
async def test_anon_type_union_failure(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_anon_type_union_failure")()
schema = {
"type": "record",
"name": "record_line_movement_updated",
"fields": [
{
"name": "dependencies",
"type": [
"null",
{
"type": "record",
"name": "record_line_movement_updated_dependencies",
"fields": [
{
"name": "coefficient",
"type": ["null", "double"],
}
],
},
],
},
],
}
evolved = {
"type": "record",
"name": "record_line_movement_updated",
"fields": [
{
"name": "dependencies",
"type": [
"null",
{
"type": "record",
"name": "record_line_movement_updated_dependencies",
"fields": [
{
"name": "coefficient",
"type": ["null", "double"],
# This is literally the only diff...
"doc": "Coeff of unit product",
}
],
},
],
},
],
}
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(schema)})
assert res.ok
sc_id = res.json()["id"]
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(evolved)})
assert res.ok
assert sc_id != res.json()["id"]
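
# Note: the evolved schema differs only in a "doc" attribute, yet the
# registry still treats it as a distinct schema and assigns a new id, as
# the assertion above verifies.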
@pytest.mark.parametrize("compatibility", ["FULL", "FULL_TRANSITIVE"])
async def test_full_transitive_failure(registry_async_client: Client, compatibility: str) -> None:
subject = create_subject_name_factory(f"test_full_transitive_failure-{compatibility}")()
init = {
"type": "record",
"name": "order",
"namespace": "example",
"fields": [
{
"name": "someField",
"type": [
"null",
{
"type": "record",
"name": "someEmbeddedRecord",
"namespace": "example",
"fields": [{"name": "name", "type": "string"}],
},
],
"default": "null",
}
],
}
evolved = {
"type": "record",
"name": "order",
"namespace": "example",
"fields": [
{
"name": "someField",
"type": [
"null",
{
"type": "record",
"name": "someEmbeddedRecord",
"namespace": "example",
"fields": [{"name": "name", "type": "string"}, {"name": "price", "type": "int"}],
},
],
"default": "null",
}
],
}
await registry_async_client.put(f"config/{subject}", json={"compatibility": compatibility})
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(init)})
assert res.ok
res = await registry_async_client.post(f"subjects/{subject}/versions", json={"schema": ujson.dumps(evolved)})
assert not res.ok
assert res.status == 409
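
# Why the 409: FULL/FULL_TRANSITIVE requires the new schema to be both
# backward and forward compatible, and "price" is added to the embedded
# record without a default, so data written with the initial schema cannot
# be decoded with the evolved one.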
async def test_invalid_schemas(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_invalid_schemas")()
    repeated_field = {
"type": "record",
"name": "myrecord",
"fields": [{"type": "string", "name": "name"}, {"type": "string", "name": "name", "default": "test"}],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(repated_field)},
)
assert res.status != 500, "an invalid schema should not cause a server crash"
assert not is_success(HTTPStatus(res.status)), "an invalid schema must not be a success"
async def test_schema_hard_delete_version(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_schema_hard_delete_version")()
res = await registry_async_client.put("config", json={"compatibility": "BACKWARD"})
assert res.status == 200
schemav1 = {
"type": "record",
"name": "myenumtest",
"fields": [
{
"type": {
"type": "enum",
"name": "enumtest",
"symbols": ["first", "second"],
},
"name": "faa",
}
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schemav1)},
)
assert res.status == 200
assert "id" in res.json()
schemav1_id = res.json()["id"]
schemav2 = {
"type": "record",
"name": "myenumtest",
"fields": [
{
"type": {
"type": "enum",
"name": "enumtest",
"symbols": ["first", "second", "third"],
},
"name": "faa",
}
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schemav2)},
)
assert res.status == 200
assert "id" in res.json()
schemav2_id = res.json()["id"]
assert schemav1_id != schemav2_id
# Cannot directly hard delete schema v1
res = await registry_async_client.delete(f"subjects/{subject}/versions/1?permanent=true")
assert res.status_code == 404
assert res.json()["error_code"] == 40407
assert res.json()["message"] == f"Subject '{subject}' Version 1 was not deleted first before being permanently deleted"
# Soft delete schema v1
res = await registry_async_client.delete(f"subjects/{subject}/versions/1")
assert res.status_code == 200
assert res.json() == 1
# Cannot soft delete twice
res = await registry_async_client.delete(f"subjects/{subject}/versions/1")
assert res.status_code == 404
assert res.json()["error_code"] == 40406
assert (
res.json()["message"] == f"Subject '{subject}' Version 1 was soft deleted.Set permanent=true to delete permanently"
)
res = await registry_async_client.get(f"subjects/{subject}/versions/1")
assert res.status_code == 404
assert res.json()["error_code"] == 40402
assert res.json()["message"] == "Version 1 not found."
# Hard delete schema v1
res = await registry_async_client.delete(f"subjects/{subject}/versions/1?permanent=true")
assert res.status_code == 200
# Cannot hard delete twice
res = await registry_async_client.delete(f"subjects/{subject}/versions/1?permanent=true")
assert res.status_code == 404
assert res.json()["error_code"] == 40402
assert res.json()["message"] == "Version 1 not found."
async def test_schema_hard_delete_whole_schema(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_schema_hard_delete_whole_schema")()
res = await registry_async_client.put("config", json={"compatibility": "BACKWARD"})
assert res.status == 200
schemav1 = {
"type": "record",
"name": "myenumtest",
"fields": [
{
"type": {
"type": "enum",
"name": "enumtest",
"symbols": ["first", "second"],
},
"name": "faa",
}
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schemav1)},
)
assert res.status == 200
assert "id" in res.json()
schemav1_id = res.json()["id"]
schemav2 = {
"type": "record",
"name": "myenumtest",
"fields": [
{
"type": {
"type": "enum",
"name": "enumtest",
"symbols": ["first", "second", "third"],
},
"name": "faa",
}
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schemav2)},
)
assert res.status == 200
assert "id" in res.json()
schemav2_id = res.json()["id"]
assert schemav1_id != schemav2_id
# Hard delete whole schema cannot be done before soft delete
res = await registry_async_client.delete(f"subjects/{subject}?permanent=true")
assert res.status_code == 404
assert res.json()["error_code"] == 40405
assert res.json()["message"] == f"Subject '{subject}' was not deleted first before being permanently deleted"
# Soft delete whole schema
res = await registry_async_client.delete(f"subjects/{subject}")
assert res.status_code == 200
assert res.json() == [1, 2]
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status_code == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject}' not found."
# Hard delete whole schema
res = await registry_async_client.delete(f"subjects/{subject}?permanent=true")
assert res.status_code == 200
assert res.json() == [1, 2]
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status_code == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject}' not found."
async def test_schema_hard_delete_and_recreate(registry_async_client: Client) -> None:
subject = create_subject_name_factory("test_schema_hard_delete_and_recreate")()
schema_name = create_schema_name_factory("test_schema_hard_delete_and_recreate")()
res = await registry_async_client.put("config", json={"compatibility": "BACKWARD"})
assert res.status == 200
schema = {
"type": "record",
"name": schema_name,
"fields": [
{
"type": {
"type": "enum",
"name": "enumtest",
"symbols": ["first", "second"],
},
"name": "faa",
}
],
}
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert "id" in res.json()
schema_id = res.json()["id"]
# Soft delete whole schema
res = await registry_async_client.delete(f"subjects/{subject}")
assert res.status_code == 200
# Recreate with same subject after soft delete
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert "id" in res.json()
assert schema_id == res.json()["id"], "after soft delete the same schema registered, the same identifier"
# Soft delete whole schema
res = await registry_async_client.delete(f"subjects/{subject}")
assert res.status_code == 200
# Hard delete whole schema
res = await registry_async_client.delete(f"subjects/{subject}?permanent=true")
assert res.status_code == 200
res = await registry_async_client.get(f"subjects/{subject}/versions")
assert res.status_code == 404
assert res.json()["error_code"] == 40401
assert res.json()["message"] == f"Subject '{subject}' not found."
# Recreate with same subject after hard delete
res = await registry_async_client.post(
f"subjects/{subject}/versions",
json={"schema": ujson.dumps(schema)},
)
assert res.status == 200
assert "id" in res.json()
assert schema_id == res.json()["id"], "after permanent deleted the same schema registered, the same identifier"
async def test_invalid_schema_should_provide_good_error_messages(registry_async_client: Client) -> None:
"""The user should receive an informative error message when the format is invalid"""
subject_name_factory = create_subject_name_factory("test_schema_subject_post_invalid_data")
test_subject = subject_name_factory()
schema_str = ujson.dumps({"type": "string"})
res = await registry_async_client.post(
f"subjects/{test_subject}/versions",
json={"schema": schema_str[:-1]},
)
assert res.json()["message"] == "Invalid AVRO schema. Error: Expecting ',' delimiter: line 1 column 17 (char 16)"
    # Unfortunately the AVRO library doesn't provide a good error message; it just raises a TypeError
schema_str = ujson.dumps({"type": "enum", "name": "error"})
res = await registry_async_client.post(
f"subjects/{test_subject}/versions",
json={"schema": schema_str},
)
assert (
res.json()["message"]
== "Invalid AVRO schema. Error: Enum symbols must be a sequence of strings, but it is <class 'NoneType'>"
)
    # This is an upstream bug in the python AVRO library; until the bug is fixed we should at least have a nice error message
schema_str = ujson.dumps({"type": "enum", "name": "error", "symbols": {}})
res = await registry_async_client.post(
f"subjects/{test_subject}/versions",
json={"schema": schema_str},
)
assert (
res.json()["message"]
== "Invalid AVRO schema. Error: Enum symbols must be a sequence of strings, but it is <class 'dict'>"
)
    # This is an upstream bug in the python AVRO library; until the bug is fixed we should at least have a nice error message
schema_str = ujson.dumps({"type": "enum", "name": "error", "symbols": ["A", "B"]})
res = await registry_async_client.post(
f"subjects/{test_subject}/versions",
json={"schema": schema_str},
)
assert res.json()["message"] == "Invalid AVRO schema. Error: error is a reserved type name."
| 39.766553 | 125 | 0.619351 | 10,970 | 93,690 | 5.092799 | 0.051504 | 0.054128 | 0.107468 | 0.103959 | 0.830994 | 0.804378 | 0.785387 | 0.761223 | 0.726946 | 0.689303 | 0 | 0.018166 | 0.235596 | 93,690 | 2,355 | 126 | 39.783439 | 0.761928 | 0.04405 | 0 | 0.587892 | 0 | 0.002655 | 0.24011 | 0.099219 | 0 | 0 | 0 | 0.000425 | 0.225704 | 1 | 0.000531 | false | 0 | 0.007435 | 0 | 0.008497 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e26822ca73448a2af4561529c0f6af08019f62d | 204 | py | Python | code/hiking/__init__.py | david-liu/hiking | a031ba66472809d2a01201fea9bdd5f12fcc19de | [
"Apache-2.0"
] | null | null | null | code/hiking/__init__.py | david-liu/hiking | a031ba66472809d2a01201fea9bdd5f12fcc19de | [
"Apache-2.0"
] | 1 | 2018-11-07T08:33:17.000Z | 2018-11-07T08:33:17.000Z | code/hiking/__init__.py | david-liu/hiking | a031ba66472809d2a01201fea9bdd5f12fcc19de | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from hiking.application import HikingApplication
from hiking import service as hiking_service | 29.142857 | 48 | 0.882353 | 26 | 204 | 6.346154 | 0.461538 | 0.181818 | 0.290909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112745 | 204 | 7 | 49 | 29.142857 | 0.911602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.2 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eabd80905af39789f4fdebf327d38313aba790ab | 15,441 | py | Python | tests/test_cli/test_add/test_connection.py | cyenyxe/agents-aea | c2aec9127028ae13def3f69fbc80d35400de1565 | [
"Apache-2.0"
] | null | null | null | tests/test_cli/test_add/test_connection.py | cyenyxe/agents-aea | c2aec9127028ae13def3f69fbc80d35400de1565 | [
"Apache-2.0"
] | 1 | 2020-02-21T14:28:13.000Z | 2020-03-05T14:53:53.000Z | tests/test_cli/test_add/test_connection.py | cyenyxe/agents-aea | c2aec9127028ae13def3f69fbc80d35400de1565 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2018-2019 Fetch.AI Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------
"""This test module contains the tests for the `aea add connection` sub-command."""
import os
import shutil
import tempfile
import unittest.mock
from pathlib import Path
from jsonschema import ValidationError
import yaml
import aea.configurations.base
from aea.cli import cli
from aea.configurations.base import DEFAULT_CONNECTION_CONFIG_FILE
from ...common.click_testing import CliRunner
from ...conftest import CLI_LOG_OPTION, CUR_PATH
class TestAddConnectionFailsWhenConnectionAlreadyExists:
"""Test that the command 'aea add connection' fails when the connection already exists."""
@classmethod
def setup_class(cls):
"""Set the test up."""
cls.runner = CliRunner()
cls.agent_name = "myagent"
cls.cwd = os.getcwd()
cls.t = tempfile.mkdtemp()
cls.connection_name = "local"
cls.connection_author = "fetchai"
cls.connection_version = "0.1.0"
cls.connection_id = (
cls.connection_author
+ "/"
+ cls.connection_name
+ ":"
+ cls.connection_version
)
cls.patch = unittest.mock.patch.object(aea.cli.common.logger, "error")
cls.mocked_logger_error = cls.patch.__enter__()
        # copy the 'packages' directory into the parent of the agent folder.
shutil.copytree(Path(CUR_PATH, "..", "packages"), Path(cls.t, "packages"))
os.chdir(cls.t)
result = cls.runner.invoke(
cli, [*CLI_LOG_OPTION, "create", cls.agent_name], standalone_mode=False
)
assert result.exit_code == 0
os.chdir(cls.agent_name)
# add connection first time
result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
assert result.exit_code == 0
# add connection again
cls.result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
@unittest.mock.patch("aea.cli.add.fetch_package")
def test_add_connection_from_registry_positive(self, fetch_package_mock):
"""Test add from registry positive result."""
public_id = aea.configurations.base.PublicId("author", "name", "0.1.0")
obj_type = "connection"
result = self.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "--registry", obj_type, str(public_id)],
standalone_mode=False,
)
assert result.exit_code == 0
fetch_package_mock.assert_called_once_with(
obj_type, public_id=public_id, cwd="."
)
def test_exit_code_equal_to_1(self):
"""Test that the exit code is equal to 1 (i.e. catchall for general errors)."""
assert self.result.exit_code == 1
def test_error_message_connection_already_existing(self):
"""Test that the log error message is fixed.
The expected message is: 'A connection with id '{connection_id}' already exists. Aborting...'
"""
s = "A connection with id '{}/{}' already exists. Aborting...".format(
self.connection_author, self.connection_name
)
self.mocked_logger_error.assert_called_once_with(s)
@classmethod
def teardown_class(cls):
"""Tear the test down."""
os.chdir(cls.cwd)
try:
shutil.rmtree(cls.t)
except (OSError, IOError):
pass
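
# A possible consolidation (a sketch, not part of the original suite): the
# classes below all repeat the same CLI invocation, which a module-level
# helper such as this hypothetical one could centralize.
def _invoke_add_connection(runner, connection_id):
    """Invoke 'aea add connection <connection_id>' and return the result."""
    return runner.invoke(
        cli,
        [*CLI_LOG_OPTION, "add", "connection", connection_id],
        standalone_mode=False,
    )
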
class TestAddConnectionFailsWhenConnectionWithSameAuthorAndNameButDifferentVersion:
"""Test that 'aea add connection' fails when the connection with different version already exists."""
@classmethod
def setup_class(cls):
"""Set the test up."""
cls.runner = CliRunner()
cls.agent_name = "myagent"
cls.cwd = os.getcwd()
cls.t = tempfile.mkdtemp()
cls.connection_name = "local"
cls.connection_author = "fetchai"
cls.connection_version = "0.1.0"
cls.connection_id = (
cls.connection_author
+ "/"
+ cls.connection_name
+ ":"
+ cls.connection_version
)
cls.patch = unittest.mock.patch.object(aea.cli.common.logger, "error")
cls.mocked_logger_error = cls.patch.__enter__()
        # copy the 'packages' directory into the parent of the agent folder.
shutil.copytree(Path(CUR_PATH, "..", "packages"), Path(cls.t, "packages"))
os.chdir(cls.t)
result = cls.runner.invoke(
cli, [*CLI_LOG_OPTION, "create", cls.agent_name], standalone_mode=False
)
assert result.exit_code == 0
os.chdir(cls.agent_name)
# add connection first time
result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
assert result.exit_code == 0
        # add connection again, but with a different version number
        # first, change the version number of the package
different_version = "0.1.1"
different_id = (
cls.connection_author + "/" + cls.connection_name + ":" + different_version
)
config_path = Path(
cls.t,
"packages",
cls.connection_author,
"connections",
cls.connection_name,
DEFAULT_CONNECTION_CONFIG_FILE,
)
config = yaml.safe_load(config_path.open())
config["version"] = different_version
yaml.safe_dump(config, config_path.open(mode="w"))
cls.result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", different_id],
standalone_mode=False,
)
def test_exit_code_equal_to_1(self):
"""Test that the exit code is equal to 1 (i.e. catchall for general errors)."""
assert self.result.exit_code == 1
def test_error_message_connection_already_existing(self):
"""Test that the log error message is fixed.
The expected message is: 'A connection with id '{connection_id}' already exists. Aborting...'
"""
s = "A connection with id '{}' already exists. Aborting...".format(
self.connection_author + "/" + self.connection_name
)
self.mocked_logger_error.assert_called_once_with(s)
@classmethod
def teardown_class(cls):
"""Tear the test down."""
os.chdir(cls.cwd)
try:
shutil.rmtree(cls.t)
except (OSError, IOError):
pass
class TestAddConnectionFailsWhenConnectionNotInRegistry:
"""Test that the command 'aea add connection' fails when the connection is not in the registry."""
@classmethod
def setup_class(cls):
"""Set the test up."""
cls.runner = CliRunner()
cls.agent_name = "myagent"
cls.cwd = os.getcwd()
cls.t = tempfile.mkdtemp()
cls.connection_id = "author/unknown_connection:0.1.0"
cls.connection_name = "unknown_connection"
cls.patch = unittest.mock.patch.object(aea.cli.common.logger, "error")
cls.mocked_logger_error = cls.patch.__enter__()
        # copy the 'packages' directory into the parent of the agent folder.
shutil.copytree(Path(CUR_PATH, "..", "packages"), Path(cls.t, "packages"))
os.chdir(cls.t)
result = cls.runner.invoke(
cli, [*CLI_LOG_OPTION, "create", cls.agent_name], standalone_mode=False
)
assert result.exit_code == 0
os.chdir(cls.agent_name)
cls.result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
def test_exit_code_equal_to_1(self):
"""Test that the exit code is equal to 1 (i.e. catchall for general errors)."""
assert self.result.exit_code == 1
def test_error_message_connection_already_existing(self):
"""Test that the log error message is fixed.
The expected message is: 'Cannot find connection: '{connection_name}''
"""
s = "Cannot find connection: '{}'.".format(self.connection_id)
self.mocked_logger_error.assert_called_once_with(s)
@classmethod
def teardown_class(cls):
"""Tear the test down."""
os.chdir(cls.cwd)
try:
shutil.rmtree(cls.t)
except (OSError, IOError):
pass
class TestAddConnectionFailsWhenDifferentPublicId:
"""Test that the command 'aea add connection' fails when the connection has not the same public id."""
@classmethod
def setup_class(cls):
"""Set the test up."""
cls.runner = CliRunner()
cls.agent_name = "myagent"
cls.cwd = os.getcwd()
cls.t = tempfile.mkdtemp()
cls.connection_id = "different_author/local:0.1.0"
cls.connection_name = "unknown_connection"
cls.patch = unittest.mock.patch.object(aea.cli.common.logger, "error")
cls.mocked_logger_error = cls.patch.__enter__()
        # copy the 'packages' directory into the parent of the agent folder.
shutil.copytree(Path(CUR_PATH, "..", "packages"), Path(cls.t, "packages"))
os.chdir(cls.t)
result = cls.runner.invoke(
cli, [*CLI_LOG_OPTION, "create", cls.agent_name], standalone_mode=False
)
assert result.exit_code == 0
os.chdir(cls.agent_name)
cls.result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
def test_exit_code_equal_to_1(self):
"""Test that the exit code is equal to 1 (i.e. catchall for general errors)."""
assert self.result.exit_code == 1
def test_error_message_connection_wrong_public_id(self):
"""Test that the log error message is fixed."""
s = "Cannot find connection: '{}'.".format(self.connection_id)
self.mocked_logger_error.assert_called_once_with(s)
@classmethod
def teardown_class(cls):
"""Tear the test down."""
os.chdir(cls.cwd)
try:
shutil.rmtree(cls.t)
except (OSError, IOError):
pass
class TestAddConnectionFailsWhenConfigFileIsNotCompliant:
"""Test that the command 'aea add connection' fails when the configuration file is not compliant with the schema."""
@classmethod
def setup_class(cls):
"""Set the test up."""
cls.runner = CliRunner()
cls.agent_name = "myagent"
cls.cwd = os.getcwd()
cls.t = tempfile.mkdtemp()
cls.connection_id = "fetchai/local:0.1.0"
cls.connection_name = "local"
cls.patch = unittest.mock.patch.object(aea.cli.common.logger, "error")
cls.mocked_logger_error = cls.patch.__enter__()
        # copy the 'packages' directory into the parent of the agent folder.
shutil.copytree(Path(CUR_PATH, "..", "packages"), Path(cls.t, "packages"))
os.chdir(cls.t)
result = cls.runner.invoke(
cli, [*CLI_LOG_OPTION, "create", cls.agent_name], standalone_mode=False
)
assert result.exit_code == 0
        # patch ConnectionConfig.from_json so that parsing fails with a ValidationError.
cls.patch = unittest.mock.patch.object(
aea.configurations.base.ConnectionConfig,
"from_json",
side_effect=ValidationError("test error message"),
)
cls.patch.__enter__()
os.chdir(cls.agent_name)
cls.result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
def test_exit_code_equal_to_1(self):
"""Test that the exit code is equal to 1 (i.e. catchall for general errors)."""
assert self.result.exit_code == 1
def test_configuration_file_not_valid(self):
"""Test that the log error message is fixed.
The expected message is: 'Cannot find connection: '{connection_name}''
"""
self.mocked_logger_error.assert_called_once_with(
"Connection configuration file not valid: test error message"
)
@classmethod
def teardown_class(cls):
"""Tear the test down."""
cls.patch.__exit__()
os.chdir(cls.cwd)
try:
shutil.rmtree(cls.t)
except (OSError, IOError):
pass
class TestAddConnectionFailsWhenDirectoryAlreadyExists:
"""Test that the command 'aea add connection' fails when the destination directory already exists."""
@classmethod
def setup_class(cls):
"""Set the test up."""
cls.runner = CliRunner()
cls.agent_name = "myagent"
cls.cwd = os.getcwd()
cls.t = tempfile.mkdtemp()
cls.connection_id = "fetchai/local:0.1.0"
cls.connection_name = "local"
cls.patch = unittest.mock.patch.object(aea.cli.common.logger, "error")
cls.mocked_logger_error = cls.patch.__enter__()
        # copy the 'packages' directory into the parent of the agent folder.
shutil.copytree(Path(CUR_PATH, "..", "packages"), Path(cls.t, "packages"))
os.chdir(cls.t)
result = cls.runner.invoke(
cli, [*CLI_LOG_OPTION, "create", cls.agent_name], standalone_mode=False
)
assert result.exit_code == 0
os.chdir(cls.agent_name)
Path(
cls.t,
cls.agent_name,
"vendor",
"fetchai",
"connections",
cls.connection_name,
).mkdir(parents=True, exist_ok=True)
cls.result = cls.runner.invoke(
cli,
[*CLI_LOG_OPTION, "add", "connection", cls.connection_id],
standalone_mode=False,
)
def test_exit_code_equal_to_1(self):
"""Test that the exit code is equal to 1 (i.e. catchall for general errors)."""
assert self.result.exit_code == 1
def test_file_exists_error(self):
"""Test that the log error message is fixed.
The expected message is: 'Cannot find connection: '{connection_name}''
"""
s = "[Errno 17] File exists: './vendor/fetchai/connections/{}'".format(
self.connection_name
)
self.mocked_logger_error.assert_called_once_with(s)
@classmethod
def teardown_class(cls):
"""Tear the test down."""
os.chdir(cls.cwd)
try:
shutil.rmtree(cls.t)
except (OSError, IOError):
pass
| 35.334096 | 120 | 0.610712 | 1,843 | 15,441 | 4.941942 | 0.123711 | 0.048529 | 0.025033 | 0.029644 | 0.74473 | 0.74473 | 0.74473 | 0.72881 | 0.72881 | 0.715195 | 0 | 0.00586 | 0.270578 | 15,441 | 436 | 121 | 35.415138 | 0.802806 | 0.221488 | 0 | 0.713333 | 0 | 0 | 0.079516 | 0.009961 | 0 | 0 | 0 | 0 | 0.073333 | 1 | 0.083333 | false | 0.02 | 0.04 | 0 | 0.143333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eac2a7e78e61c38796b31d44b94b976ad8a1c7ed | 1,230 | py | Python | tests/test_func.py | Cologler/bytecode2ast-python | 407b261a493e018bc86388040ddfb6fb0e4b96d9 | [
"MIT"
] | null | null | null | tests/test_func.py | Cologler/bytecode2ast-python | 407b261a493e018bc86388040ddfb6fb0e4b96d9 | [
"MIT"
] | null | null | null | tests/test_func.py | Cologler/bytecode2ast-python | 407b261a493e018bc86388040ddfb6fb0e4b96d9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (c) 2019~2999 - Cologler <skyoflw@gmail.com>
# ----------
#
# ----------
from utils import get_instrs_from_b2a, get_instrs
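
# Note: names such as a, b, c in the test bodies below are deliberately
# unresolved globals: the functions are never called, only disassembled,
# so only their bytecode matters.
#
# A rough sketch of what the imported helpers are assumed to do (the real
# ones live in this project's utils module): get_instrs disassembles the
# original function, while get_instrs_from_b2a round-trips it through
# bytecode2ast (bytecode -> AST -> recompiled bytecode) first. Roughly in
# the spirit of:
#
#   import dis
#   def get_instrs(func):
#       return [(i.opname, i.argval) for i in dis.get_instructions(func)]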
def test_func_pass():
def func():
pass
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_func_posarg_bin_op():
def func(x):
return (((((x + 1) - 1) * 1) / 1) % 1)
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_func_multi_ret():
def func(x, y, z):
return x, y, z
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_some_func():
def func():
if a == 1:
return 2
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_call_func():
def func():
iter(a, b)
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_call_func_wk():
def func():
iter(a, b, c=c, d=d, e=e)
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_call_func_anyargs():
def func():
iter(a, *args, b, c=d, **kwargs)
assert get_instrs(func) == get_instrs_from_b2a(func)
def test_call_func_generic_args_unpack():
def func():
iter(*a, **b)
assert get_instrs(func) == get_instrs_from_b2a(func)
| 21.206897 | 56 | 0.61626 | 191 | 1,230 | 3.65445 | 0.246073 | 0.232092 | 0.167622 | 0.206304 | 0.630372 | 0.611748 | 0.611748 | 0.611748 | 0.611748 | 0.611748 | 0 | 0.026399 | 0.230081 | 1,230 | 57 | 57 | 21.578947 | 0.710665 | 0.079675 | 0 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.470588 | false | 0.058824 | 0.029412 | 0.058824 | 0.588235 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
d828ef6cecd1f0cbb8c8e1614940796271327c5a | 2,914 | py | Python | Q52NQueensII.py | ChenliangLi205/LeetCode | 6c547c338eb05042cb68f57f737dce483964e2fd | [
"MIT"
] | null | null | null | Q52NQueensII.py | ChenliangLi205/LeetCode | 6c547c338eb05042cb68f57f737dce483964e2fd | [
"MIT"
] | null | null | null | Q52NQueensII.py | ChenliangLi205/LeetCode | 6c547c338eb05042cb68f57f737dce483964e2fd | [
"MIT"
] | null | null | null | class Solution:
def totalNQueens(self, n):
"""
:type n: int
:rtype: int
"""
if n == 1:
return 1
if n < 4:
return 0
vboard = [[0]*n for _ in range(n)]
results = [0]
def BackTrack(k):
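            # Place a queen in each currently unattacked column of row k;
            # a placement in the last row counts as a solution, and the
            # recursive call is a no-op once k reaches n.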
if k >= n:
return
for i in range(n):
if not vboard[k][i]:
SetBoard(k, i)
if k == n-1:
results[0] += 1
BackTrack(k+1)
UnSetBoard(k, i)
def SetBoard(row, col):
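            # Mark every square attacked by a queen at (row, col) by
            # incrementing its counter on the virtual board; the
            # `manipulated` set avoids touching a square twice where the
            # row, column, and diagonals overlap.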
manipulated = set()
for i in range(n):
vboard[row][i] += 1
manipulated.add((row, i))
for i in range(n):
if (i, col) not in manipulated:
vboard[i][col] += 1
manipulated.add((i, col))
startRow, startCol = row, col
while startRow > 0 and startCol > 0:
startRow -= 1
startCol -= 1
while startRow < n and startCol < n:
if (startRow, startCol) not in manipulated:
vboard[startRow][startCol] += 1
manipulated.add((startRow, startCol))
startRow += 1
startCol += 1
startRow, startCol = row, col
while startRow > 0 and startCol < n-1:
startRow -= 1
startCol += 1
while startRow < n and startCol >= 0:
if (startRow, startCol) not in manipulated:
vboard[startRow][startCol] += 1
startRow += 1
startCol -= 1
def UnSetBoard(row, col):
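            # Exact inverse of SetBoard: decrement the same squares so the
            # board is restored when the search backtracks.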
manipulated = set()
for i in range(n):
vboard[row][i] -= 1
manipulated.add((row, i))
for i in range(n):
if (i, col) not in manipulated:
vboard[i][col] -= 1
manipulated.add((i, col))
startRow, startCol = row, col
while startRow > 0 and startCol > 0:
startRow -= 1
startCol -= 1
while startRow < n and startCol < n:
if (startRow, startCol) not in manipulated:
vboard[startRow][startCol] -= 1
manipulated.add((startRow, startCol))
startRow += 1
startCol += 1
startRow, startCol = row, col
while startRow > 0 and startCol < n-1:
startRow -= 1
startCol += 1
while startRow < n and startCol >= 0:
if (startRow, startCol) not in manipulated:
vboard[startRow][startCol] -= 1
startRow += 1
startCol -= 1
BackTrack(0)
return results[0] | 34.690476 | 59 | 0.418668 | 290 | 2,914 | 4.203448 | 0.117241 | 0.183757 | 0.111567 | 0.11813 | 0.813782 | 0.813782 | 0.802297 | 0.802297 | 0.802297 | 0.802297 | 0 | 0.030831 | 0.487989 | 2,914 | 84 | 60 | 34.690476 | 0.786193 | 0.008236 | 0 | 0.671053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0 | 0 | 0.118421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d82ed315f751b6c61c35bb5e8407acd17ffa89cf | 28 | py | Python | maec/vocabs/__init__.py | colbyprior/python-maec | 109d4517b0123a5f01e31c15818f35772d451705 | [
"BSD-3-Clause"
] | 22 | 2015-02-19T14:07:05.000Z | 2021-03-25T00:34:12.000Z | maec/vocabs/__init__.py | colbyprior/python-maec | 109d4517b0123a5f01e31c15818f35772d451705 | [
"BSD-3-Clause"
] | 28 | 2015-02-21T03:06:48.000Z | 2019-09-17T17:27:31.000Z | maec/vocabs/__init__.py | colbyprior/python-maec | 109d4517b0123a5f01e31c15818f35772d451705 | [
"BSD-3-Clause"
] | 15 | 2015-06-19T18:10:32.000Z | 2021-02-17T05:59:06.000Z | from .vocabs import * # noqa | 28 | 28 | 0.714286 | 4 | 28 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 1 | 28 | 28 | 0.869565 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc4bb09e24833bf8a8cf498711cd417f2e2e9bd7 | 101 | py | Python | scitbx/__init__.py | dperl-sol/cctbx_project | b9e390221a2bc4fd00b9122e97c3b79c632c6664 | [
"BSD-3-Clause-LBNL"
] | 155 | 2016-11-23T12:52:16.000Z | 2022-03-31T15:35:44.000Z | scitbx/__init__.py | dperl-sol/cctbx_project | b9e390221a2bc4fd00b9122e97c3b79c632c6664 | [
"BSD-3-Clause-LBNL"
] | 590 | 2016-12-10T11:31:18.000Z | 2022-03-30T23:10:09.000Z | scitbx/__init__.py | dperl-sol/cctbx_project | b9e390221a2bc4fd00b9122e97c3b79c632c6664 | [
"BSD-3-Clause-LBNL"
] | 115 | 2016-11-15T08:17:28.000Z | 2022-02-09T15:30:14.000Z | from __future__ import absolute_import, division, print_function
import libtbx.forward_compatibility
| 33.666667 | 64 | 0.891089 | 12 | 101 | 6.916667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079208 | 101 | 2 | 65 | 50.5 | 0.892473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
dc588af25823fa925e78af37bd0f99831c9c7b3b | 47 | py | Python | devops_env/models.py | angelquin1986/DevopsTestBP | b1136f792879df0c8f7cd06db5febed3b686d12f | [
"Apache-2.0"
] | null | null | null | devops_env/models.py | angelquin1986/DevopsTestBP | b1136f792879df0c8f7cd06db5febed3b686d12f | [
"Apache-2.0"
] | 4 | 2020-01-23T22:20:33.000Z | 2021-06-10T21:31:51.000Z | devops_env/models.py | angelquin1986/DevopsTestBP | b1136f792879df0c8f7cd06db5febed3b686d12f | [
"Apache-2.0"
] | null | null | null | """
All ORM models are registered here.
""" | 15.666667 | 39 | 0.680851 | 7 | 47 | 4.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191489 | 47 | 3 | 40 | 15.666667 | 0.842105 | 0.829787 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.333333 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc850764597a1427d50dfe3dbf6cddd95136879b | 77 | py | Python | voronoiz/__init__.py | WarrenWeckesser/voronoiz | aad2b62bda3015bfb998142b8694c555a518d748 | [
"MIT"
] | 1 | 2022-03-23T18:11:34.000Z | 2022-03-23T18:11:34.000Z | voronoiz/__init__.py | WarrenWeckesser/voronoiz | aad2b62bda3015bfb998142b8694c555a518d748 | [
"MIT"
] | null | null | null | voronoiz/__init__.py | WarrenWeckesser/voronoiz | aad2b62bda3015bfb998142b8694c555a518d748 | [
"MIT"
] | null | null | null |
from ._voronoi_l1 import voronoi_l1
from ._voronoi_grid import voronoi_grid
| 19.25 | 39 | 0.857143 | 12 | 77 | 5 | 0.416667 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.116883 | 77 | 3 | 40 | 25.666667 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4f1d7c99e91f87f2c316ea2f696eff3196d4cc5 | 2,141 | py | Python | 04-FaceRecognition-II/thetensorclan-backend-heroku/models/classifiers.py | amitkml/TSAI-DeepVision-EVA4.0-Phase-2 | f9e232b3eb6ce20f522136523e79208ed85a1f28 | [
"MIT"
] | 1 | 2021-03-21T08:45:05.000Z | 2021-03-21T08:45:05.000Z | 04-FaceRecognition-II/thetensorclan-backend-heroku/models/classifiers.py | amitkml/TSAI-DeepVision-EVA4.0-Phase-2 | f9e232b3eb6ce20f522136523e79208ed85a1f28 | [
"MIT"
] | null | null | null | 04-FaceRecognition-II/thetensorclan-backend-heroku/models/classifiers.py | amitkml/TSAI-DeepVision-EVA4.0-Phase-2 | f9e232b3eb6ce20f522136523e79208ed85a1f28 | [
"MIT"
] | null | null | null | import torchvision.transforms as T
from torchvision.transforms import Compose
import torch.nn.functional as F
from utils import setup_logger
logger = setup_logger(__name__)
def classify_resnet34_imagenet(model, classes, image):
trans: Compose = T.Compose([
T.Resize(256),
T.CenterCrop(224),
T.ToTensor(),
T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
img_tensor = trans(image).unsqueeze(0)
predicted = model(img_tensor).squeeze(0)
    predicted = F.softmax(predicted, dim=0)
sorted_values = predicted.argsort(descending=True).cpu().numpy()
    top10pred = list(map(lambda x: {'class_idx': x.item(), 'class_name': classes[x], 'confidence': predicted[x].item()}, sorted_values))[:10]
return top10pred
def classify_mobilenetv2_ifo(model, classes, image):
trans: Compose = T.Compose([
T.Resize(256),
T.CenterCrop(224),
T.ToTensor(),
T.Normalize(mean=[0.533459901809692, 0.584880530834198, 0.615305066108704], std=[0.172962218523026, 0.167985364794731, 0.184633478522301])
])
img_tensor = trans(image).unsqueeze(0)
predicted = model(img_tensor).squeeze(0)
    predicted = F.softmax(predicted, dim=0)
sorted_values = predicted.argsort(descending=True).cpu().numpy()
logger.info(sorted_values)
top4pred = list(map(lambda x: {'class_idx': x.item(), 'class_name': classes[x], 'confidence': predicted[x].item()}, sorted_values))[:4]
return top4pred
def classify_indian_face(model, classes, image):
"""
The image sent to this MUST be aligned
"""
trans: Compose = T.Compose([
T.Resize(160),
T.ToTensor(),
T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
img_tensor = trans(image).unsqueeze(0)
predicted = model(img_tensor).squeeze(0)
    predicted = F.softmax(predicted, dim=0)
sorted_values = predicted.argsort(descending=True).cpu().numpy()
logger.info(sorted_values)
top4pred = list(map(lambda x: {'class_idx': x.item(), 'class_name': classes[x], 'confidence': predicted[x].item()}, sorted_values))[:4]
return top4pred
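
# A usage sketch (an assumption about intended use, not code from this
# repository): `model` is a loaded classifier, `classes` its label list and
# `image` a PIL.Image. Inference should normally run with the model in eval
# mode and gradients disabled:
#
#   model.eval()
#   with torch.no_grad():
#       top10 = classify_resnet34_imagenet(model, classes, image)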
| 31.028986 | 146 | 0.668846 | 281 | 2,141 | 4.982206 | 0.291815 | 0.068571 | 0.036429 | 0.042857 | 0.722143 | 0.722143 | 0.702857 | 0.702857 | 0.702857 | 0.702857 | 0 | 0.102331 | 0.178421 | 2,141 | 68 | 147 | 31.485294 | 0.693576 | 0.017749 | 0 | 0.733333 | 0 | 0 | 0.041707 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.088889 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
76019db36d48b54abaf8d87b8e9ae45148612d5e | 226 | py | Python | language-learning/python/tricks/type_annotation.py | imteekay/programming-language-research | a9b05d70a669d789d79e1bd916bcc088958ff9df | [
"MIT"
] | 24 | 2019-05-24T03:22:42.000Z | 2021-09-30T01:04:17.000Z | language-learning/python/tricks/type_annotation.py | imteekay/programming-language-research | a9b05d70a669d789d79e1bd916bcc088958ff9df | [
"MIT"
] | null | null | null | language-learning/python/tricks/type_annotation.py | imteekay/programming-language-research | a9b05d70a669d789d79e1bd916bcc088958ff9df | [
"MIT"
] | 3 | 2019-11-22T19:04:23.000Z | 2021-04-15T19:40:47.000Z | def add_this(a, b):
return a + b
print(add_this('hello ', 'world'))
print(add_this(1, 2))
def typed_add_this(a: int, b: int) -> int:
return a + b
print(typed_add_this('hello ', 'world'))
print(typed_add_this(1, 2))
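
# Note: CPython does not enforce these annotations at runtime;
# typed_add_this('hello ', 'world') above still happily concatenates the
# strings. The int hints only become errors under a static checker such as
# mypy, e.g. `mypy type_annotation.py`.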
| 18.833333 | 42 | 0.641593 | 42 | 226 | 3.238095 | 0.309524 | 0.308824 | 0.264706 | 0.191176 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02139 | 0.172566 | 226 | 11 | 43 | 20.545455 | 0.705882 | 0 | 0 | 0.25 | 0 | 0 | 0.097345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.5 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
5210c372b2e10e5e56666f87dd9a84412549fea7 | 26 | py | Python | axelrod/strategies/__init__.py | DumisaniZA/Axelrod | e59fc40ebb705afe05cea6f30e282d1e9c621259 | [
"MIT"
] | 33 | 2015-02-20T11:36:48.000Z | 2022-02-16T17:02:06.000Z | axelrod/strategies/__init__.py | DumisaniZA/Axelrod | e59fc40ebb705afe05cea6f30e282d1e9c621259 | [
"MIT"
] | 108 | 2015-02-18T14:15:44.000Z | 2020-05-08T10:39:58.000Z | axelrod/strategies/__init__.py | DumisaniZA/Axelrod | e59fc40ebb705afe05cea6f30e282d1e9c621259 | [
"MIT"
] | 41 | 2015-02-18T13:40:04.000Z | 2021-05-31T06:08:10.000Z | from _strategies import *
| 13 | 25 | 0.807692 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
52557d76dfc907893245be4a7bf8324b8feaa2c7 | 459 | py | Python | openrec/modules/extractions/__init__.py | BoData-Bot/openrec | 3d655d21b762b40d50e53cea96d7802fd49c74ad | [
"Apache-2.0"
] | null | null | null | openrec/modules/extractions/__init__.py | BoData-Bot/openrec | 3d655d21b762b40d50e53cea96d7802fd49c74ad | [
"Apache-2.0"
] | null | null | null | openrec/modules/extractions/__init__.py | BoData-Bot/openrec | 3d655d21b762b40d50e53cea96d7802fd49c74ad | [
"Apache-2.0"
] | null | null | null | from openrec.modules.extractions.extraction import Extraction
from openrec.modules.extractions.identity_mapping import IdentityMapping
from openrec.modules.extractions.latent_factor import LatentFactor
from openrec.modules.extractions.look_up import LookUp
from openrec.modules.extractions.multi_layer_fc import MultiLayerFC
from openrec.modules.extractions.sdae import SDAE
from openrec.modules.extractions.temporal_latent_factor import TemporalLatentFactor
| 57.375 | 83 | 0.893246 | 56 | 459 | 7.196429 | 0.392857 | 0.191067 | 0.312655 | 0.503722 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061002 | 459 | 7 | 84 | 65.571429 | 0.935035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8729bfef06b7dc5a56687f2a5718d7529d8f872f | 214 | py | Python | treadmill/plugins/scheduler/algorithm/__init__.py | gaocegege/treadmill | 04325d319c0ee912c066f07b88b674e84485f154 | [
"Apache-2.0"
] | 2 | 2017-03-20T07:13:33.000Z | 2017-05-03T03:39:53.000Z | treadmill/plugins/scheduler/algorithm/__init__.py | gaocegege/treadmill | 04325d319c0ee912c066f07b88b674e84485f154 | [
"Apache-2.0"
] | 12 | 2017-07-10T07:04:06.000Z | 2017-07-26T09:32:54.000Z | treadmill/plugins/scheduler/algorithm/__init__.py | gaocegege/treadmill | 04325d319c0ee912c066f07b88b674e84485f154 | [
"Apache-2.0"
] | 2 | 2017-05-04T11:25:32.000Z | 2017-07-11T09:10:01.000Z | from .predicates import match_app_constraints, match_app_lifetime,\
alive_servers
from .priorities import spread
__all__ = ['match_app_constraints', 'match_app_lifetime',
'alive_servers', 'spread']
| 30.571429 | 67 | 0.761682 | 25 | 214 | 5.96 | 0.48 | 0.214765 | 0.255034 | 0.322148 | 0.630872 | 0.630872 | 0.630872 | 0.630872 | 0 | 0 | 0 | 0 | 0.149533 | 214 | 6 | 68 | 35.666667 | 0.818681 | 0 | 0 | 0 | 0 | 0 | 0.271028 | 0.098131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
875e180f24a328c589a7596a3781009ff29c77f6 | 36 | py | Python | backend/vobla/schemas/args/__init__.py | Nuqlear/voila | 05ada753425ee62e1edd06f945e58e29e808409b | [
"MIT"
] | 2 | 2017-12-12T14:28:43.000Z | 2018-01-24T10:58:27.000Z | backend/vobla/schemas/args/__init__.py | Nuqlear/voila | 05ada753425ee62e1edd06f945e58e29e808409b | [
"MIT"
] | 21 | 2020-03-05T18:58:11.000Z | 2022-02-02T20:00:34.000Z | backend/vobla/schemas/args/__init__.py | Nuqlear/voila | 05ada753425ee62e1edd06f945e58e29e808409b | [
"MIT"
] | 2 | 2017-12-13T22:43:56.000Z | 2018-01-24T17:14:29.000Z | from vobla.schemas.args import auth
| 18 | 35 | 0.833333 | 6 | 36 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e758d2f77b738567fcd384667e214f87b969c25 | 3,971 | py | Python | onionperf/tests/test_reprocessing.py | amaleewilson/onionperf | 34da5e8c2aa764d70141972ea2520f881e90cb03 | [
"BSD-3-Clause"
] | 1 | 2020-05-07T16:58:22.000Z | 2020-05-07T16:58:22.000Z | onionperf/tests/test_reprocessing.py | NullHypothesis/onionperf | 17b0cf422adc81d356d1db4aba46b495e9487d0b | [
"BSD-3-Clause"
] | null | null | null | onionperf/tests/test_reprocessing.py | NullHypothesis/onionperf | 17b0cf422adc81d356d1db4aba46b495e9487d0b | [
"BSD-3-Clause"
] | null | null | null | import os
import pkg_resources
import datetime
import tempfile
import shutil
from nose.tools import *
from onionperf import analysis
from onionperf import reprocessing
def absolute_data_path(relative_path=""):
"""
Returns an absolute path for test data given a relative path.
"""
return pkg_resources.resource_filename("onionperf",
"tests/data/" + relative_path)
DATA_DIR = absolute_data_path()
def test_log_collection_tgen():
log_list = reprocessing.collect_logs(DATA_DIR, '*tgen.log')
well_known_list = [ DATA_DIR + 'logs/onionperf.tgen.log', DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.tgen.log' ]
    assert_equals(log_list, well_known_list)
def test_log_collection_torctl():
log_list = reprocessing.collect_logs(DATA_DIR, '*torctl.log')
well_known_list = [ DATA_DIR + 'logs/onionperf.torctl.log', DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.torctl.log' ]
    assert_equals(log_list, well_known_list)
def test_log_match():
tgen_logs = reprocessing.collect_logs(DATA_DIR, '*tgen.log')
torctl_logs = reprocessing.collect_logs(DATA_DIR, '*torctl.log')
log_pairs = reprocessing.match(tgen_logs, torctl_logs, None)
well_known_list = [(DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.tgen.log', DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.torctl.log', datetime.datetime(2019, 1, 10, 0, 0))]
assert_equals(log_pairs, well_known_list)
def test_log_match_no_log_date():
tgen_logs = reprocessing.collect_logs(DATA_DIR, '*perf.tgen.log')
torctl_logs = reprocessing.collect_logs(DATA_DIR, '*perf.torctl.log')
log_pairs = reprocessing.match(tgen_logs, torctl_logs, None)
well_known_list = []
assert_equals(log_pairs, well_known_list)
def test_log_match_with_filter_date():
tgen_logs = reprocessing.collect_logs(DATA_DIR, '*tgen.log')
torctl_logs = reprocessing.collect_logs(DATA_DIR, '*torctl.log')
test_date = datetime.date(2019, 1, 10)
log_pairs = reprocessing.match(tgen_logs, torctl_logs, test_date)
well_known_list = [(DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.tgen.log', DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.torctl.log', datetime.datetime(2019, 1, 10, 0, 0))]
assert_equals(log_pairs, well_known_list)
def test_log_match_with_wrong_filter_date():
tgen_logs = reprocessing.collect_logs(DATA_DIR, '*tgen.log')
torctl_logs = reprocessing.collect_logs(DATA_DIR, '*torctl.log')
test_date = datetime.date(2017, 1, 1)
log_pairs = reprocessing.match(tgen_logs, torctl_logs, test_date)
well_known_list = []
assert_equals(log_pairs, well_known_list)
def test_analyze_func_json():
pair = (DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.tgen.log', DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.torctl.log', datetime.datetime(2019, 1, 10, 0, 0))
work_dir = tempfile.mkdtemp()
reprocessing.analyze_func(work_dir, None, False, pair)
json_file = os.path.join(work_dir, "2019-01-10.onionperf.analysis.json.xz")
assert(os.path.exists(json_file))
shutil.rmtree(work_dir)
def test_multiprocess_logs():
pairs = [(DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.tgen.log', DATA_DIR + 'logs/onionperf_2019-01-10_23:59:59.torctl.log', datetime.datetime(2019, 1, 10, 0, 0))]
work_dir = tempfile.mkdtemp()
reprocessing.multiprocess_logs(pairs, work_dir)
json_file = os.path.join(work_dir, "2019-01-10.onionperf.analysis.json.xz")
assert(os.path.exists(json_file))
shutil.rmtree(work_dir)
def test_end_to_end():
tgen_logs = reprocessing.collect_logs(DATA_DIR, '*tgen.log')
torctl_logs = reprocessing.collect_logs(DATA_DIR, '*torctl.log')
log_pairs = reprocessing.match(tgen_logs, torctl_logs, None)
work_dir = tempfile.mkdtemp()
reprocessing.multiprocess_logs(log_pairs, work_dir)
json_file = os.path.join(work_dir, "2019-01-10.onionperf.analysis.json.xz")
assert(os.path.exists(json_file))
shutil.rmtree(work_dir)
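
# A sketch of a more failure-proof cleanup pattern (an editorial suggestion,
# not code from this suite): if an assertion fails before shutil.rmtree runs,
# the temp dir leaks; a try/finally keeps the cleanup unconditional.
#
#   work_dir = tempfile.mkdtemp()
#   try:
#       reprocessing.multiprocess_logs(log_pairs, work_dir)
#       assert os.path.exists(json_file)
#   finally:
#       shutil.rmtree(work_dir)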
| 46.174419 | 180 | 0.737346 | 603 | 3,971 | 4.545605 | 0.119403 | 0.063845 | 0.037942 | 0.118205 | 0.831448 | 0.831448 | 0.831448 | 0.771981 | 0.73039 | 0.713243 | 0 | 0.062117 | 0.13649 | 3,971 | 85 | 181 | 46.717647 | 0.737241 | 0.015361 | 0 | 0.507246 | 0 | 0 | 0.192347 | 0.153826 | 0 | 0 | 0 | 0 | 0.130435 | 1 | 0.144928 | false | 0 | 0.115942 | 0 | 0.275362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e79b6b9cbd482baf0b6f7d587a144653085dd75 | 26 | py | Python | src/pingem/__init__.py | sandernes/pingem | 027d070ff30ca888752382967d75e59236ae728e | [
"MIT"
] | null | null | null | src/pingem/__init__.py | sandernes/pingem | 027d070ff30ca888752382967d75e59236ae728e | [
"MIT"
] | null | null | null | src/pingem/__init__.py | sandernes/pingem | 027d070ff30ca888752382967d75e59236ae728e | [
"MIT"
] | null | null | null | from pinger import Pinger
| 13 | 25 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ec3eae29e1a38fda77b556ce1dd97613156e9fd | 16,358 | py | Python | pyVmomi/_typeinfo_lookup.py | xweichu/pyvmomi | 77aedef02974a63517a079c482e49fd9890c09a4 | [
"Apache-2.0"
] | null | null | null | pyVmomi/_typeinfo_lookup.py | xweichu/pyvmomi | 77aedef02974a63517a079c482e49fd9890c09a4 | [
"Apache-2.0"
] | null | null | null | pyVmomi/_typeinfo_lookup.py | xweichu/pyvmomi | 77aedef02974a63517a079c482e49fd9890c09a4 | [
"Apache-2.0"
] | null | null | null | # ******* WARNING - AUTO GENERATED CODE - DO NOT EDIT *******
from .VmomiSupport import CreateDataType, CreateManagedType
from .VmomiSupport import CreateEnumType
from .VmomiSupport import AddVersion, AddVersionParent
from .VmomiSupport import AddBreakingChangesInfo
from .VmomiSupport import F_LINK, F_LINKABLE
from .VmomiSupport import F_OPTIONAL, F_SECRET
from .VmomiSupport import newestVersions, ltsVersions
from .VmomiSupport import dottedVersions, oldestVersions
AddVersion("vmodl.version.version0", "", "", 0, "vim25")
AddVersion("lookup.version.version1", "lookup", "1.0", 0, "")
AddVersion("vmodl.version.version1", "", "", 0, "vim25")
AddVersion("vmodl.version.version2", "", "", 0, "vim25")
AddVersion("lookup.version.version1_5", "lookup", "version1_5", 0, "")
AddVersion("lookup.version.version2", "lookup", "2.0", 0, "")
AddVersion("lookup.version.version3_0", "lookup", "3.0", 0, "")
AddVersion("lookup.version.version4_0", "lookup", "4.0", 0, "")
AddVersionParent("vmodl.version.version0", "vmodl.version.version0")
AddVersionParent("lookup.version.version1", "vmodl.version.version0")
AddVersionParent("lookup.version.version1", "lookup.version.version1")
AddVersionParent("lookup.version.version1", "vmodl.version.version1")
AddVersionParent("lookup.version.version1", "vmodl.version.version2")
AddVersionParent("vmodl.version.version1", "vmodl.version.version0")
AddVersionParent("vmodl.version.version1", "vmodl.version.version1")
AddVersionParent("vmodl.version.version2", "vmodl.version.version0")
AddVersionParent("vmodl.version.version2", "vmodl.version.version1")
AddVersionParent("vmodl.version.version2", "vmodl.version.version2")
AddVersionParent("lookup.version.version1_5", "vmodl.version.version0")
AddVersionParent("lookup.version.version1_5", "lookup.version.version1")
AddVersionParent("lookup.version.version1_5", "vmodl.version.version1")
AddVersionParent("lookup.version.version1_5", "vmodl.version.version2")
AddVersionParent("lookup.version.version1_5", "lookup.version.version1_5")
AddVersionParent("lookup.version.version2", "vmodl.version.version0")
AddVersionParent("lookup.version.version2", "lookup.version.version1")
AddVersionParent("lookup.version.version2", "vmodl.version.version1")
AddVersionParent("lookup.version.version2", "vmodl.version.version2")
AddVersionParent("lookup.version.version2", "lookup.version.version1_5")
AddVersionParent("lookup.version.version2", "lookup.version.version2")
AddVersionParent("lookup.version.version3_0", "vmodl.version.version0")
AddVersionParent("lookup.version.version3_0", "lookup.version.version1")
AddVersionParent("lookup.version.version3_0", "vmodl.version.version1")
AddVersionParent("lookup.version.version3_0", "vmodl.version.version2")
AddVersionParent("lookup.version.version3_0", "lookup.version.version1_5")
AddVersionParent("lookup.version.version3_0", "lookup.version.version2")
AddVersionParent("lookup.version.version3_0", "lookup.version.version3_0")
AddVersionParent("lookup.version.version4_0", "vmodl.version.version0")
AddVersionParent("lookup.version.version4_0", "lookup.version.version1")
AddVersionParent("lookup.version.version4_0", "vmodl.version.version1")
AddVersionParent("lookup.version.version4_0", "vmodl.version.version2")
AddVersionParent("lookup.version.version4_0", "lookup.version.version1_5")
AddVersionParent("lookup.version.version4_0", "lookup.version.version2")
AddVersionParent("lookup.version.version4_0", "lookup.version.version3_0")
AddVersionParent("lookup.version.version4_0", "lookup.version.version4_0")
newestVersions.Add("lookup.version.version4_0")
ltsVersions.Add("lookup.version.version4_0")
dottedVersions.Add("lookup.version.version4_0")
oldestVersions.Add("lookup.version.version1")
CreateManagedType("lookup.DeploymentInformationService", "LookupDeploymentInformationService", "vmodl.ManagedObject", "lookup.version.version1", None, [("retrieveHaBackupConfiguration", "RetrieveHaBackupConfiguration", "lookup.version.version1", (), (0, "lookup.HaBackupNodeConfiguration", "lookup.HaBackupNodeConfiguration"), "LookupService.Administrator", None)])
CreateDataType("lookup.HaBackupNodeConfiguration", "LookupHaBackupNodeConfiguration", "vmodl.DynamicData", "lookup.version.version1", [("dbType", "string", "lookup.version.version1", 0), ("dbJdbcUrl", "string", "lookup.version.version1", 0), ("dbUser", "string", "lookup.version.version1", 0), ("dbPass", "string", "lookup.version.version1", F_SECRET)])
CreateManagedType("lookup.L10n", "LookupL10n", "vmodl.ManagedObject", "lookup.version.version1", [("defaultLocale", "string", "lookup.version.version1", 0, "System.Anonymous"), ("supportedLocales", "string[]", "lookup.version.version1", 0, "System.Anonymous")], [("setLocale", "SetLocale", "lookup.version.version1", (("locale", "string", "lookup.version.version1", 0, None),), (0, "string", "string"), "LookupService.Administrator", None), ("getLocale", "GetLocale", "lookup.version.version1", (), (0, "string", "string"), "System.Anonymous", None)])
CreateManagedType("lookup.LookupService", "LookupLookupService", "vmodl.ManagedObject", "lookup.version.version1", None, [("registerService", "RegisterService", "lookup.version.version1", (("registrationForm", "lookup.ServiceRegistrationForm", "lookup.version.version1", 0, None),), (0, "lookup.Service", "lookup.Service"), "LookupService.Administrator", ["lookup.fault.ServiceFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("unregisterService", "UnregisterService", "lookup.version.version1", (("serviceId", "string", "lookup.version.version1", 0, None),), (0, "void", "void"), "LookupService.Owner", ["lookup.fault.UnsupportedSiteFault", "lookup.fault.EntryNotFoundFault", "lookup.fault.ServiceFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("updateService", "UpdateService", "lookup.version.version1", (("service", "lookup.Service", "lookup.version.version1", 0, None),), (0, "void", "void"), "LookupService.Owner", ["lookup.fault.UnsupportedSiteFault", "lookup.fault.EntryNotFoundFault", "lookup.fault.ServiceFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("find", "Find", "lookup.version.version1", (("searchCriteria", "lookup.SearchCriteria", "lookup.version.version1", 0, None),), (F_OPTIONAL, "lookup.Service[]", "lookup.Service[]"), "System.Anonymous", ["lookup.fault.ServiceFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("findService", "FindService", "lookup.version.version1", (("serviceId", "string", "lookup.version.version1", 0, None),), (F_OPTIONAL, "lookup.Service", "lookup.Service"), "System.Anonymous", ["lookup.fault.ServiceFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("getViSite", "GetViSite", "lookup.version.version1", (), (0, "string", "string"), "System.Anonymous", ["lookup.fault.ServiceFault", ])])
CreateDataType("lookup.SearchCriteria", "LookupSearchCriteria", "vmodl.DynamicData", "lookup.version.version1", [("serviceType", "vmodl.URI", "lookup.version.version1", F_OPTIONAL), ("viSite", "string", "lookup.version.version1", F_OPTIONAL), ("endpointProtocol", "string", "lookup.version.version1", F_OPTIONAL)])
CreateDataType("lookup.Service", "LookupService", "vmodl.DynamicData", "lookup.version.version1", [("serviceId", "string", "lookup.version.version1", 0), ("version", "string", "lookup.version.version1", 0), ("type", "vmodl.URI", "lookup.version.version1", 0), ("ownerId", "string", "lookup.version.version1", F_OPTIONAL), ("serviceName", "string", "lookup.version.version1", F_OPTIONAL), ("description", "string", "lookup.version.version1", F_OPTIONAL), ("endpoints", "lookup.ServiceEndpoint[]", "lookup.version.version1", 0), ("viSite", "string", "lookup.version.version1", 0), ("productId", "string", "lookup.version.version1", F_OPTIONAL)])
CreateDataType("lookup.ServiceContent", "LookupServiceContent", "vmodl.DynamicData", "lookup.version.version1", [("lookupService", "lookup.LookupService", "lookup.version.version1", 0), ("serviceRegistration", "lookup.ServiceRegistration", "lookup.version.version2", 0), ("deploymentInformationService", "lookup.DeploymentInformationService", "lookup.version.version1", 0), ("l10n", "lookup.L10n", "lookup.version.version1", 0)])
CreateDataType("lookup.ServiceEndpoint", "LookupServiceEndpoint", "vmodl.DynamicData", "lookup.version.version1", [("sslTrustAnchor", "string", "lookup.version.version1", F_OPTIONAL), ("url", "vmodl.URI", "lookup.version.version1", 0), ("protocol", "string", "lookup.version.version1", 0)])
CreateEnumType("lookup.ServiceEndpoint.EndpointProtocol", "LookupServiceEndpointEndpointProtocol", "lookup.version.version1", ["vmomi", "wsTrust", "rest", "http", "unknown"])
CreateManagedType("lookup.ServiceInstance", "LookupServiceInstance", "vmodl.ManagedObject", "lookup.version.version1", None, [("retrieveServiceContent", "RetrieveServiceContent", "lookup.version.version1", (), (0, "lookup.ServiceContent", "lookup.ServiceContent"), "System.Anonymous", None)])
CreateManagedType("lookup.ServiceRegistration", "LookupServiceRegistration", "vmodl.ManagedObject", "lookup.version.version2", None, [("create", "Create", "lookup.version.version2", (("serviceId", "string", "lookup.version.version2", 0, None),("createSpec", "lookup.ServiceRegistration.CreateSpec", "lookup.version.version2", 0, None),), (0, "void", "void"), "LookupService.Owner", ["lookup.fault.EntryExistsFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("delete", "Delete", "lookup.version.version2", (("serviceId", "string", "lookup.version.version2", 0, None),), (0, "void", "void"), "LookupService.Owner", ["lookup.fault.EntryNotFoundFault", "vmodl.fault.SecurityError", ]), ("set", "Set", "lookup.version.version2", (("serviceId", "string", "lookup.version.version2", 0, None),("serviceSpec", "lookup.ServiceRegistration.SetSpec", "lookup.version.version2", 0, None),), (0, "void", "void"), "LookupService.Owner", ["lookup.fault.EntryNotFoundFault", "vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("setTrustAnchor", "SetTrustAnchor", "lookup.version.version3_0", (("filter", "lookup.ServiceRegistration.Filter", "lookup.version.version3_0", 0, None),("trustAnchors", "string[]", "lookup.version.version3_0", 0, None),), (F_OPTIONAL, "int", "int"), "LookupService.Owner", ["vmodl.fault.InvalidArgument", "vmodl.fault.SecurityError", ]), ("get", "Get", "lookup.version.version2", (("serviceId", "string", "lookup.version.version2", 0, None),), (0, "lookup.ServiceRegistration.Info", "lookup.ServiceRegistration.Info"), "System.Anonymous", ["lookup.fault.EntryNotFoundFault", ]), ("list", "List", "lookup.version.version2", (("filterCriteria", "lookup.ServiceRegistration.Filter", "lookup.version.version2", F_OPTIONAL, None),), (F_OPTIONAL, "lookup.ServiceRegistration.Info[]", "lookup.ServiceRegistration.Info[]"), "System.Anonymous", None), ("getSiteId", "GetSiteId", "lookup.version.version2", (), (0, "string", "string"), "System.Anonymous", None)])
CreateDataType("lookup.ServiceRegistration.MutableServiceInfo", "LookupServiceRegistrationMutableServiceInfo", "vmodl.DynamicData", "lookup.version.version2", [("serviceVersion", "string", "lookup.version.version2", 0), ("vendorNameResourceKey", "string", "lookup.version.version2", F_OPTIONAL), ("vendorNameDefault", "string", "lookup.version.version2", F_OPTIONAL), ("vendorProductInfoResourceKey", "string", "lookup.version.version2", F_OPTIONAL), ("vendorProductInfoDefault", "string", "lookup.version.version2", F_OPTIONAL), ("serviceEndpoints", "lookup.ServiceRegistration.Endpoint[]", "lookup.version.version2", F_OPTIONAL), ("serviceAttributes", "lookup.ServiceRegistration.Attribute[]", "lookup.version.version2", F_OPTIONAL), ("serviceNameResourceKey", "string", "lookup.version.version2", F_OPTIONAL), ("serviceNameDefault", "string", "lookup.version.version2", F_OPTIONAL), ("serviceDescriptionResourceKey", "string", "lookup.version.version2", F_OPTIONAL), ("serviceDescriptionDefault", "string", "lookup.version.version2", F_OPTIONAL)])
CreateDataType("lookup.ServiceRegistration.CommonServiceInfo", "LookupServiceRegistrationCommonServiceInfo", "lookup.ServiceRegistration.MutableServiceInfo", "lookup.version.version2", [("ownerId", "string", "lookup.version.version2", 0), ("serviceType", "lookup.ServiceRegistration.ServiceType", "lookup.version.version2", 0), ("nodeId", "string", "lookup.version.version2", F_OPTIONAL)])
CreateDataType("lookup.ServiceRegistration.CreateSpec", "LookupServiceRegistrationCreateSpec", "lookup.ServiceRegistration.CommonServiceInfo", "lookup.version.version2", None)
CreateDataType("lookup.ServiceRegistration.SetSpec", "LookupServiceRegistrationSetSpec", "lookup.ServiceRegistration.MutableServiceInfo", "lookup.version.version2", None)
CreateDataType("lookup.ServiceRegistration.Info", "LookupServiceRegistrationInfo", "lookup.ServiceRegistration.CommonServiceInfo", "lookup.version.version2", [("serviceId", "string", "lookup.version.version2", 0), ("siteId", "string", "lookup.version.version2", 0)])
CreateDataType("lookup.ServiceRegistration.ServiceType", "LookupServiceRegistrationServiceType", "vmodl.DynamicData", "lookup.version.version2", [("product", "string", "lookup.version.version2", 0), ("type", "string", "lookup.version.version2", 0)])
CreateDataType("lookup.ServiceRegistration.Endpoint", "LookupServiceRegistrationEndpoint", "vmodl.DynamicData", "lookup.version.version2", [("url", "vmodl.URI", "lookup.version.version2", 0), ("endpointType", "lookup.ServiceRegistration.EndpointType", "lookup.version.version2", 0), ("sslTrust", "string[]", "lookup.version.version2", F_OPTIONAL), ("endpointAttributes", "lookup.ServiceRegistration.Attribute[]", "lookup.version.version2", F_OPTIONAL)])
CreateDataType("lookup.ServiceRegistration.EndpointType", "LookupServiceRegistrationEndpointType", "vmodl.DynamicData", "lookup.version.version2", [("protocol", "string", "lookup.version.version2", F_OPTIONAL), ("type", "string", "lookup.version.version2", F_OPTIONAL)])
CreateDataType("lookup.ServiceRegistration.Attribute", "LookupServiceRegistrationAttribute", "vmodl.DynamicData", "lookup.version.version2", [("key", "string", "lookup.version.version2", 0), ("value", "string", "lookup.version.version2", 0)])
CreateDataType("lookup.ServiceRegistration.Filter", "LookupServiceRegistrationFilter", "vmodl.DynamicData", "lookup.version.version2", [("siteId", "string", "lookup.version.version2", F_OPTIONAL), ("nodeId", "string", "lookup.version.version2", F_OPTIONAL), ("serviceType", "lookup.ServiceRegistration.ServiceType", "lookup.version.version2", F_OPTIONAL), ("endpointType", "lookup.ServiceRegistration.EndpointType", "lookup.version.version2", F_OPTIONAL), ("endpointTrustAnchor", "string", "lookup.version.version3_0", F_OPTIONAL), ("searchAllSsoDomains", "boolean", "lookup.version.version4_0", F_OPTIONAL)])
CreateDataType("lookup.ServiceRegistrationForm", "LookupServiceRegistrationForm", "vmodl.DynamicData", "lookup.version.version1", [("version", "string", "lookup.version.version1", 0), ("type", "vmodl.URI", "lookup.version.version1", 0), ("ownerId", "string", "lookup.version.version1", F_OPTIONAL), ("serviceName", "string", "lookup.version.version1", F_OPTIONAL), ("description", "string", "lookup.version.version1", F_OPTIONAL), ("endpoints", "lookup.ServiceEndpoint[]", "lookup.version.version1", 0), ("productId", "string", "lookup.version.version1", F_OPTIONAL), ("legacyId", "string", "lookup.version.version1_5", F_OPTIONAL)])
CreateDataType("lookup.fault.ServiceFault", "LookupFaultServiceFault", "vmodl.MethodFault", "lookup.version.version1", [("errorMessage", "string", "lookup.version.version1", F_OPTIONAL)])
CreateDataType("lookup.fault.UnsupportedSiteFault", "LookupFaultUnsupportedSiteFault", "lookup.fault.ServiceFault", "lookup.version.version1", [("operatingSite", "string", "lookup.version.version1", 0), ("requestedSite", "string", "lookup.version.version1", 0)])
CreateDataType("lookup.fault.EntryExistsFault", "LookupFaultEntryExistsFault", "lookup.fault.ServiceFault", "lookup.version.version2", [("name", "string", "lookup.version.version2", 0)])
CreateDataType("lookup.fault.EntryNotFoundFault", "LookupFaultEntryNotFoundFault", "lookup.fault.ServiceFault", "lookup.version.version1", [("name", "string", "lookup.version.version1", 0)])
| 188.022989 | 1,999 | 0.761279 | 1,576 | 16,358 | 7.847716 | 0.11231 | 0.1913 | 0.14772 | 0.055142 | 0.675614 | 0.55967 | 0.443645 | 0.371928 | 0.250162 | 0.158069 | 0 | 0.022124 | 0.049456 | 16,358 | 86 | 2,000 | 190.209302 | 0.773297 | 0.003607 | 0 | 0 | 1 | 0 | 0.677364 | 0.526109 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.012195 | 0.097561 | 0 | 0.097561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ecfafcadc1c3932bc3b30e9b30dd204b3e706f7 | 442 | py | Python | cell2cell/datasets/__init__.py | ckmah/cell2cell | ce18bbb63e12f9b1da8699567dec9a2a8b78f824 | [
"BSD-3-Clause"
] | 16 | 2020-09-30T01:53:43.000Z | 2022-03-25T09:58:54.000Z | cell2cell/datasets/__init__.py | ckmah/cell2cell | ce18bbb63e12f9b1da8699567dec9a2a8b78f824 | [
"BSD-3-Clause"
] | 2 | 2021-08-09T21:26:54.000Z | 2021-11-08T14:47:39.000Z | cell2cell/datasets/__init__.py | ckmah/cell2cell | ce18bbb63e12f9b1da8699567dec9a2a8b78f824 | [
"BSD-3-Clause"
] | 3 | 2021-11-08T07:47:44.000Z | 2022-03-30T18:40:00.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from cell2cell.datasets.heuristic_data import (HeuristicGOTerms)
from cell2cell.datasets.random_data import (generate_random_rnaseq, generate_random_ppi, generate_random_cci_scores,
generate_random_metadata)
from cell2cell.datasets.toy_data import (generate_toy_distance, generate_toy_rnaseq, generate_toy_ppi, generate_toy_metadata) | 55.25 | 125 | 0.776018 | 52 | 442 | 6.115385 | 0.403846 | 0.176101 | 0.198113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010811 | 0.162896 | 442 | 8 | 125 | 55.25 | 0.848649 | 0.047511 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0df34dbf6775b3c40bd4ab5fb49417cfd81cac8b | 6,109 | py | Python | geonode/geonode/themes/migrations/0002_auto_20181015_1208.py | ttungbmt/BecaGIS_GeoPortal | 6c05f9fc020ec4ccf600ba2503a06c2231443920 | [
"MIT"
] | null | null | null | geonode/geonode/themes/migrations/0002_auto_20181015_1208.py | ttungbmt/BecaGIS_GeoPortal | 6c05f9fc020ec4ccf600ba2503a06c2231443920 | [
"MIT"
] | null | null | null | geonode/geonode/themes/migrations/0002_auto_20181015_1208.py | ttungbmt/BecaGIS_GeoPortal | 6c05f9fc020ec4ccf600ba2503a06c2231443920 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.15 on 2018-10-15 00:08
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('geonode_themes', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='geonodethemecustomization',
name='jumbotron_site_description',
),
migrations.AddField(
model_name='geonodethemecustomization',
name='jumbotron_cta_hide',
field=models.BooleanField(default=False, verbose_name='Hide call to action'),
),
migrations.AddField(
model_name='geonodethemecustomization',
name='jumbotron_cta_link',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Call to action link'),
),
migrations.AddField(
model_name='geonodethemecustomization',
name='jumbotron_cta_text',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Call to action text'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_administrative_area',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_city',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_country',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_delivery_point',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_email',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_facsimile',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_name',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_position',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_postal_code',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_street',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contact_voice',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='contactus',
field=models.BooleanField(default=False, verbose_name='Enable contact us box'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='date',
field=models.DateTimeField(auto_now_add=True, help_text='This will not appear anywhere.'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='description',
field=models.TextField(blank=True, help_text='This will not appear anywhere.', null=True),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='is_enabled',
field=models.BooleanField(default=False, help_text='Enabling this theme will disable the current enabled theme (if any)'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='jumbotron_bg',
field=models.ImageField(blank=True, null=True, upload_to='img/%Y/%m', verbose_name='Jumbotron background'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='jumbotron_welcome_content',
field=models.TextField(blank=True, null=True, verbose_name='Jumbotron content'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='jumbotron_welcome_hide',
field=models.BooleanField(default=False, help_text='Check this if the jumbotron backgroud image already contains text', verbose_name='Hide text in the jumbotron'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='jumbotron_welcome_title',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Jumbotron title'),
),
migrations.AlterField(
model_name='geonodethemecustomization',
name='name',
field=models.CharField(help_text='This will not appear anywhere.', max_length=100),
),
migrations.AlterField(
model_name='partner',
name='href',
field=models.CharField(max_length=255, verbose_name='Website'),
),
migrations.AlterField(
model_name='partner',
name='name',
field=models.CharField(help_text='This will not appear anywhere.', max_length=100),
),
migrations.AlterField(
model_name='partner',
name='title',
field=models.CharField(max_length=255, verbose_name='Display name'),
),
]
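# Editorial note, not part of the generated migration: it would be applied
# with Django's standard command, e.g.
#
#     python manage.py migrate geonode_themes 0002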
| 41 | 175 | 0.616468 | 570 | 6,109 | 6.438596 | 0.201754 | 0.066213 | 0.222343 | 0.248501 | 0.814986 | 0.777384 | 0.719074 | 0.634605 | 0.456948 | 0.456948 | 0 | 0.017191 | 0.276314 | 6,109 | 148 | 176 | 41.277027 | 0.812938 | 0.011295 | 0 | 0.673759 | 1 | 0 | 0.244492 | 0.123406 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007092 | 0 | 0.028369 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
21c519e91a853baad22df5f76771efbf62708c6e | 104 | py | Python | Comic-Con and k-means/Conclusion/distances.py | jetbrains-academy/Machine-Learning-101 | 7b583dbff1e90115296dcaeac78ca88363c158c9 | [
"MIT"
] | null | null | null | Comic-Con and k-means/Conclusion/distances.py | jetbrains-academy/Machine-Learning-101 | 7b583dbff1e90115296dcaeac78ca88363c158c9 | [
"MIT"
] | 10 | 2021-11-22T16:51:52.000Z | 2022-02-14T12:57:57.000Z | Comic-Con and k-means/Reading an image/distances.py | jetbrains-academy/Machine-Learning-101 | 7b583dbff1e90115296dcaeac78ca88363c158c9 | [
"MIT"
] | null | null | null | import numpy as np
def euclidean_distance(A, B):
return np.sqrt(np.sum(np.square(A - B), axis=1))
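# A minimal demo sketch (not part of the original file): row-wise distances
# from each point in A to the single point B, relying on NumPy broadcasting.
if __name__ == "__main__":
    A = np.array([[0.0, 0.0], [3.0, 4.0]])
    B = np.array([0.0, 0.0])
    print(euclidean_distance(A, B))  # -> [0. 5.]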
| 17.333333 | 52 | 0.673077 | 20 | 104 | 3.45 | 0.75 | 0.057971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0.173077 | 104 | 5 | 53 | 20.8 | 0.790698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
df444ee0974b62adfae498b528c0fb8786edef20 | 381 | py | Python | gaphor/UML/diagramitems.py | bertob/gaphor | a1d6f8dd8c878f299980bba6c055436148573274 | [
"Apache-2.0"
] | 867 | 2018-01-09T00:19:09.000Z | 2022-03-31T02:49:23.000Z | gaphor/UML/diagramitems.py | burakozturk16/gaphor | 86267a5200ac4439626d35d306dbb376c3800107 | [
"Apache-2.0"
] | 790 | 2018-01-13T23:47:07.000Z | 2022-03-31T16:04:27.000Z | gaphor/UML/diagramitems.py | burakozturk16/gaphor | 86267a5200ac4439626d35d306dbb376c3800107 | [
"Apache-2.0"
] | 117 | 2018-01-09T02:24:49.000Z | 2022-03-23T08:07:42.000Z | """All Items defined in the diagram package.
This module makes it easier to load a diagram item.
"""
from gaphor.diagram.general import *
from gaphor.UML.actions import *
from gaphor.UML.classes import *
from gaphor.UML.components import *
from gaphor.UML.interactions import *
from gaphor.UML.profiles import *
from gaphor.UML.states import *
from gaphor.UML.usecases import *
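# A minimal sketch of the intended use (assuming gaphor is installed): the
# wildcard imports above re-export every diagram item class, so a diagram
# loader can resolve an item by its bare class name, e.g.
#
#     from gaphor.UML import diagramitems
#     item_cls = getattr(diagramitems, "ClassItem")  # hypothetical lookup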
| 27.214286 | 51 | 0.776903 | 58 | 381 | 5.103448 | 0.482759 | 0.27027 | 0.378378 | 0.449324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136483 | 381 | 13 | 52 | 29.307692 | 0.899696 | 0.249344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10d99b3035afd2d9b23e9e982e651a8acd30df6c | 61,186 | py | Python | tests/integration/test_api.py | neuro-inc/platform-monitoring | 4b9127c7ca56a64ff4c6142093663318d06918b9 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_api.py | neuro-inc/platform-monitoring | 4b9127c7ca56a64ff4c6142093663318d06918b9 | [
"Apache-2.0"
] | 52 | 2021-11-15T03:14:13.000Z | 2022-03-31T03:16:01.000Z | tests/integration/test_api.py | neuro-inc/platform-monitoring | 4b9127c7ca56a64ff4c6142093663318d06918b9 | [
"Apache-2.0"
] | null | null | null | import asyncio
import json
import re
import signal
import textwrap
import time
from collections.abc import AsyncIterator, Awaitable, Callable, Iterator
from dataclasses import dataclass
from typing import Any, Union
from unittest import mock
from uuid import uuid4
import aiohttp
import aiohttp.hdrs
import pytest
from aiohttp import WSServerHandshakeError
from aiohttp.web import HTTPOk
from aiohttp.web_exceptions import (
HTTPAccepted,
HTTPBadRequest,
HTTPForbidden,
HTTPNoContent,
HTTPUnauthorized,
)
from async_timeout import timeout
from yarl import URL
from platform_monitoring.api import create_app
from platform_monitoring.config import Config, ContainerRuntimeConfig, PlatformApiConfig
from platform_monitoring.kube_client import JobNotFoundException
from .conftest import ApiAddress, create_local_app_server, random_str
from .conftest_auth import _User
from tests.integration.conftest_kube import MyKubeClient
async def expect_prompt(ws: aiohttp.ClientWebSocketResponse) -> bytes:
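    # Read terminal frames until a shell prompt ("# ") or an "exit N" echo
    # appears, stripping ANSI escape sequences so callers can assert on
    # plain bytes.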
_ansi_re = re.compile(rb"\033\[[;?0-9]*[a-zA-Z]")
_exit_re = re.compile(rb"exit \d+\Z")
try:
ret: bytes = b""
async with timeout(3):
while not ret.strip().endswith(b"#") and not _exit_re.match(ret.strip()):
msg = await ws.receive()
if msg.type in (
aiohttp.WSMsgType.CLOSE,
aiohttp.WSMsgType.CLOSING,
aiohttp.WSMsgType.CLOSED,
):
break
print(msg.data)
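                # the first byte of each frame is the channel id: 1 == stdout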
assert msg.data[0] == 1
ret += _ansi_re.sub(b"", msg.data[1:])
return ret
except asyncio.TimeoutError:
raise AssertionError(f"[Timeout] {ret!r}")
@dataclass(frozen=True)
class MonitoringApiEndpoints:
address: ApiAddress
@property
def api_v1_endpoint(self) -> URL:
return URL(f"http://{self.address.host}:{self.address.port}/api/v1")
@property
def ping_url(self) -> URL:
return self.api_v1_endpoint / "ping"
@property
def secured_ping_url(self) -> URL:
return self.api_v1_endpoint / "secured-ping"
@property
def endpoint(self) -> URL:
return self.api_v1_endpoint / "jobs"
@property
def jobs_capacity_url(self) -> URL:
return self.endpoint / "capacity"
def generate_top_url(self, job_id: str) -> URL:
return self.endpoint / job_id / "top"
def generate_log_url(self, job_id: str) -> URL:
return self.endpoint / job_id / "log"
def generate_log_ws_url(self, job_id: str) -> URL:
return self.endpoint / job_id / "log_ws"
def generate_save_url(self, job_id: str) -> URL:
return self.endpoint / job_id / "save"
def generate_kill_url(self, job_id: str) -> URL:
return self.endpoint / job_id / "kill"
def generate_attach_url(
self,
job_id: str,
*,
tty: bool = False,
stdin: bool = False,
stdout: bool = True,
stderr: bool = True,
logs: bool = False,
) -> URL:
url = self.endpoint / job_id / "attach"
return url.with_query(
tty=int(tty),
stdin=int(stdin),
stdout=int(stdout),
stderr=int(stderr),
logs=int(logs),
)
def generate_resize_url(self, job_id: str, *, w: int, h: int) -> URL:
url = self.endpoint / job_id / "resize"
return url.with_query(w=w, h=h)
def generate_exec_create_url(self, job_id: str) -> URL:
return self.endpoint / job_id / "exec_create"
def generate_exec_resize_url(
self, job_id: str, exec_id: str, *, w: int, h: int
) -> URL:
url = self.endpoint / job_id / exec_id / "exec_resize"
return url.with_query(w=w, h=h)
def generate_exec_inspect_url(self, job_id: str, exec_id: str) -> URL:
return self.endpoint / job_id / exec_id / "exec_inspect"
def generate_exec_start_url(self, job_id: str, exec_id: str) -> URL:
return self.endpoint / job_id / exec_id / "exec_start"
def generate_exec_url(
self,
job_id: str,
cmd: str,
*,
tty: bool = False,
stdin: bool = False,
stdout: bool = True,
stderr: bool = True,
) -> URL:
url = self.endpoint / job_id / "exec"
return url.with_query(
cmd=cmd,
tty=int(tty),
stdin=int(stdin),
stdout=int(stdout),
stderr=int(stderr),
)
def generate_port_forward_url(self, job_id: str, port: Union[int, str]) -> URL:
return self.endpoint / job_id / "port_forward" / str(port)
@dataclass(frozen=True)
class PlatformApiEndpoints:
url: URL
@property
def endpoint(self) -> URL:
return self.url
@property
def platform_config_url(self) -> URL:
return self.endpoint / "config"
@property
def jobs_base_url(self) -> URL:
return self.endpoint / "jobs"
def generate_job_url(self, job_id: str) -> URL:
return self.jobs_base_url / job_id
@pytest.fixture
async def monitoring_api(config: Config) -> AsyncIterator[MonitoringApiEndpoints]:
app = await create_app(config)
async with create_local_app_server(app, port=8080) as address:
yield MonitoringApiEndpoints(address=address)
@pytest.fixture
async def monitoring_api_s3_storage(
config_s3_storage: Config,
) -> AsyncIterator[MonitoringApiEndpoints]:
app = await create_app(config_s3_storage)
async with create_local_app_server(app, port=8080) as address:
yield MonitoringApiEndpoints(address=address)
@pytest.fixture
async def platform_api(
platform_api_config: PlatformApiConfig,
) -> AsyncIterator[PlatformApiEndpoints]:
yield PlatformApiEndpoints(url=platform_api_config.url)
class JobsClient:
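    # Thin helper over the platform-api jobs endpoints used by these tests:
    # job submission happens in the tests themselves, while this class
    # covers inspect, poll, delete and drop on behalf of a single user.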
def __init__(
self,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
user: _User,
) -> None:
self._platform_api = platform_api
self._client = client
self._user = user
@property
def user(self) -> _User:
return self._user
@property
def headers(self) -> dict[str, str]:
return self._user.headers
async def get_job_by_id(self, job_id: str) -> dict[str, Any]:
url = self._platform_api.generate_job_url(job_id)
async with self._client.get(url, headers=self.headers) as response:
response_text = await response.text()
assert response.status == HTTPOk.status_code, response_text
result = await response.json()
return result
async def get_job_materialized_by_id(
self,
job_id: str,
) -> bool:
url = self._platform_api.generate_job_url(job_id).with_query(
_tests_check_materialized="True"
)
async with self._client.get(url, headers=self.headers) as response:
response_text = await response.text()
assert response.status == HTTPOk.status_code, response_text
return (await response.json())["materialized"]
async def long_polling_by_job_id(
self, job_id: str, status: str, interval_s: float = 0.5, max_time: float = 180
) -> dict[str, Any]:
t0 = time.monotonic()
while True:
response = await self.get_job_by_id(job_id)
if response["status"] == status:
return response
await asyncio.sleep(max(interval_s, time.monotonic() - t0))
current_time = time.monotonic() - t0
if current_time > max_time:
pytest.fail(f"too long: {current_time:.3f} sec; resp: {response}")
interval_s *= 1.5
async def wait_job_dematerialized(
self, job_id: str, interval_s: float = 0.5, max_time: float = 300
) -> None:
t0 = time.monotonic()
while True:
is_materialized = await self.get_job_materialized_by_id(job_id)
if not is_materialized:
return
await asyncio.sleep(max(interval_s, time.monotonic() - t0))
current_time = time.monotonic() - t0
if current_time > max_time:
pytest.fail(f"too long: {current_time:.3f} sec;")
interval_s *= 1.5
async def delete_job(self, job_id: str, assert_success: bool = True) -> None:
url = self._platform_api.generate_job_url(job_id)
async with self._client.delete(url, headers=self.headers) as response:
if assert_success:
assert response.status == HTTPNoContent.status_code
async def drop_job(self, job_id: str, assert_success: bool = True) -> None:
url = self._platform_api.generate_job_url(job_id) / "drop"
async with self._client.post(url, headers=self.headers) as response:
if assert_success:
assert response.status == HTTPNoContent.status_code
@pytest.fixture
def jobs_client_factory(
platform_api: PlatformApiEndpoints, client: aiohttp.ClientSession
) -> Iterator[Callable[[_User], JobsClient]]:
def impl(user: _User) -> JobsClient:
return JobsClient(platform_api, client, user=user)
yield impl
@pytest.fixture
async def jobs_client(
regular_user1: _User,
jobs_client_factory: Callable[[_User], JobsClient],
) -> JobsClient:
return jobs_client_factory(regular_user1)
@pytest.fixture
def job_request_factory() -> Callable[[], dict[str, Any]]:
def _factory() -> dict[str, Any]:
return {
"container": {
"image": "ubuntu",
"command": "true",
"resources": {"cpu": 0.1, "memory_mb": 32},
}
}
return _factory
@pytest.fixture
async def job_submit(
job_request_factory: Callable[[], dict[str, Any]]
) -> dict[str, Any]:
return job_request_factory()
@pytest.fixture
async def job_factory(
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_request_factory: Callable[[], dict[str, Any]],
) -> AsyncIterator[Callable[[str], Awaitable[str]]]:
jobs: list[str] = []
async def _f(command: str, name: str = "") -> str:
request_payload = job_request_factory()
request_payload["container"]["command"] = command
if name:
request_payload["name"] = name
async with client.post(
platform_api.jobs_base_url,
headers=jobs_client.headers,
json=request_payload,
) as response:
assert response.status == HTTPAccepted.status_code, await response.text()
result = await response.json()
job_id = result["id"]
jobs.append(job_id)
await jobs_client.long_polling_by_job_id(job_id, status="running")
return job_id
yield _f
for job_id in jobs:
await jobs_client.delete_job(job_id)
for job_id in jobs:
job = await jobs_client.get_job_by_id(job_id)
if job["status"] == "cancelled":
# Wait until job is deleted from k8s
await jobs_client.wait_job_dematerialized(job_id)
@pytest.fixture
async def infinite_job(job_factory: Callable[[str], Awaitable[str]]) -> str:
return await job_factory("tail -f /dev/null")
@pytest.fixture
def job_name() -> str:
return f"test-job-{random_str()}"
@pytest.fixture
async def named_infinite_job(
job_factory: Callable[[str, str], Awaitable[str]], job_name: str
) -> str:
return await job_factory("tail -f /dev/null", job_name)
class TestApi:
async def test_ping(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.get(monitoring_api.ping_url) as resp:
assert resp.status == HTTPOk.status_code
text = await resp.text()
assert text == "Pong"
async def test_ping_includes_version(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.get(monitoring_api.ping_url) as resp:
assert resp.status == HTTPOk.status_code
assert "platform-monitoring" in resp.headers["X-Service-Version"]
async def test_secured_ping(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
) -> None:
headers = jobs_client.headers
async with client.get(monitoring_api.secured_ping_url, headers=headers) as resp:
assert resp.status == HTTPOk.status_code
text = await resp.text()
assert text == "Secured Pong"
async def test_secured_ping_no_token_provided_unauthorized(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
url = monitoring_api.secured_ping_url
async with client.get(url) as resp:
assert resp.status == HTTPUnauthorized.status_code
async def test_secured_ping_non_existing_token_unauthorized(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
token_factory: Callable[[str], str],
) -> None:
url = monitoring_api.secured_ping_url
token = token_factory("non-existing-user")
headers = {"Authorization": f"Bearer {token}"}
async with client.get(url, headers=headers) as resp:
assert resp.status == HTTPUnauthorized.status_code
async def test_ping_unknown_origin(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.get(
monitoring_api.ping_url, headers={"Origin": "http://unknown"}
) as response:
assert response.status == HTTPOk.status_code, await response.text()
assert "Access-Control-Allow-Origin" not in response.headers
async def test_ping_allowed_origin(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.get(
monitoring_api.ping_url, headers={"Origin": "https://neu.ro"}
) as resp:
assert resp.status == HTTPOk.status_code, await resp.text()
assert resp.headers["Access-Control-Allow-Origin"] == "https://neu.ro"
assert resp.headers["Access-Control-Allow-Credentials"] == "true"
assert resp.headers["Access-Control-Expose-Headers"]
async def test_ping_options_no_headers(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.options(monitoring_api.ping_url) as resp:
assert resp.status == HTTPForbidden.status_code, await resp.text()
assert await resp.text() == (
"CORS preflight request failed: "
"origin header is not specified in the request"
)
async def test_ping_options_unknown_origin(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.options(
monitoring_api.ping_url,
headers={
"Origin": "http://unknown",
"Access-Control-Request-Method": "GET",
},
) as resp:
assert resp.status == HTTPForbidden.status_code, await resp.text()
assert await resp.text() == (
"CORS preflight request failed: "
"origin 'http://unknown' is not allowed"
)
async def test_ping_options(
self, monitoring_api: MonitoringApiEndpoints, client: aiohttp.ClientSession
) -> None:
async with client.options(
monitoring_api.ping_url,
headers={
"Origin": "https://neu.ro",
"Access-Control-Request-Method": "GET",
},
) as resp:
assert resp.status == HTTPOk.status_code, await resp.text()
assert resp.headers["Access-Control-Allow-Origin"] == "https://neu.ro"
assert resp.headers["Access-Control-Allow-Credentials"] == "true"
assert resp.headers["Access-Control-Allow-Methods"] == "GET"
async def test_get_capacity(
self,
monitoring_api: MonitoringApiEndpoints,
regular_user1: _User,
client: aiohttp.ClientSession,
) -> None:
async with client.get(
monitoring_api.jobs_capacity_url,
headers=regular_user1.headers,
) as resp:
assert resp.status == HTTPOk.status_code, await resp.text()
result = await resp.json()
assert "cpu-small" in result
async def test_get_capacity_forbidden(
self,
monitoring_api: MonitoringApiEndpoints,
regular_user_factory: Callable[..., Awaitable[_User]],
client: aiohttp.ClientSession,
cluster_name: str,
) -> None:
user = await regular_user_factory(cluster_name="default2")
async with client.get(
monitoring_api.jobs_capacity_url, headers=user.headers
) as resp:
assert resp.status == HTTPForbidden.status_code, await resp.text()
result = await resp.json()
assert {
"uri": f"job://{cluster_name}/{user.name}",
"action": "read",
} in result["missing"]
assert {
"uri": f"cluster://{cluster_name}/access",
"action": "read",
} in result["missing"]
class TestTopApi:
async def test_top_ok(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
num_request = 2
records = []
url = monitoring_api.generate_top_url(job_id=infinite_job)
async with client.ws_connect(url, headers=jobs_client.headers) as ws:
# TODO move this ws communication to JobClient
while True:
msg = await ws.receive()
if msg.type == aiohttp.WSMsgType.CLOSE:
break
else:
records.append(json.loads(msg.data))
if len(records) == num_request:
# TODO (truskovskiyk 09/12/18) do not use protected prop
# https://github.com/aio-libs/aiohttp/issues/3443
proto = ws._writer.protocol
assert proto.transport is not None
proto.transport.close()
break
assert len(records) == num_request
for message in records:
assert message == {
"cpu": mock.ANY,
"memory": mock.ANY,
"timestamp": mock.ANY,
}
async def test_top_shared_by_name(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_name: str,
named_infinite_job: str,
regular_user2: _User,
share_job: Callable[..., Awaitable[None]],
) -> None:
await share_job(jobs_client.user, regular_user2, job_name)
url = monitoring_api.generate_top_url(named_infinite_job)
async with client.ws_connect(url, headers=regular_user2.headers) as ws:
proto = ws._writer.protocol
assert proto.transport is not None
proto.transport.close()
async def test_top_no_permissions_unauthorized(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
job_submit: dict[str, Any],
regular_user1: _User,
regular_user2: _User,
) -> None:
url = platform_api.jobs_base_url
async with client.post(
url, headers=regular_user1.headers, json=job_submit
) as resp:
assert resp.status == HTTPAccepted.status_code
payload = await resp.json()
job_id = payload["id"]
url = monitoring_api.generate_top_url(job_id)
with pytest.raises(WSServerHandshakeError, match="Invalid response status"):
async with client.ws_connect(url, headers=regular_user2.headers):
pass
async def test_top_no_auth_token_provided_unauthorized(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
infinite_job: str,
) -> None:
url = monitoring_api.generate_top_url(job_id=infinite_job)
with pytest.raises(WSServerHandshakeError, match="Invalid response status"):
async with client.ws_connect(url):
pass
async def test_top_non_running_job(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
job = infinite_job
await jobs_client.delete_job(job)
await jobs_client.long_polling_by_job_id(job_id=job, status="cancelled")
num_request = 2
records = []
url = monitoring_api.generate_top_url(job_id=job)
async with client.ws_connect(url, headers=jobs_client.headers) as ws:
# TODO move this ws communication to JobClient
while True:
msg = await ws.receive()
if msg.type == aiohttp.WSMsgType.CLOSE:
break
else:
records.append(json.loads(msg.data))
if len(records) == num_request:
# TODO (truskovskiyk 09/12/18) do not use protected prop
# https://github.com/aio-libs/aiohttp/issues/3443
proto = ws._writer.protocol
assert proto.transport is not None
proto.transport.close()
break
assert not records
async def test_top_non_existing_job(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
) -> None:
headers = jobs_client.headers
job_id = f"job-{uuid4()}"
url = platform_api.generate_job_url(job_id)
async with client.get(url, headers=headers) as response:
assert response.status == aiohttp.web.HTTPBadRequest.status_code
payload = await response.text()
assert "no such job" in payload
url = monitoring_api.generate_top_url(job_id=job_id)
with pytest.raises(WSServerHandshakeError, match="Invalid response status"):
async with client.ws_connect(url, headers=headers):
pass
async def test_top_silently_wait_when_job_pending(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = 'bash -c "for i in {1..10}; do echo $i; sleep 1; done"'
job_submit["container"]["command"] = command
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=job_submit) as resp:
assert resp.status == HTTPAccepted.status_code
payload = await resp.json()
job_id = payload["id"]
assert payload["status"] == "pending"
num_request = 2
records = []
job_top_url = monitoring_api.generate_top_url(job_id)
async with client.ws_connect(job_top_url, headers=headers) as ws:
job = await jobs_client.get_job_by_id(job_id=job_id)
assert job["status"] == "pending"
            # silently wait until the job becomes running
msg = await ws.receive()
job = await jobs_client.get_job_by_id(job_id=job_id)
assert job["status"] == "running"
assert msg.type == aiohttp.WSMsgType.TEXT
while True:
msg = await ws.receive()
if msg.type == aiohttp.WSMsgType.CLOSE:
break
else:
records.append(json.loads(msg.data))
if len(records) == num_request:
# TODO (truskovskiyk 09/12/18) do not use protected prop
# https://github.com/aio-libs/aiohttp/issues/3443
proto = ws._writer.protocol
assert proto.transport is not None
proto.transport.close()
break
assert len(records) == num_request
for message in records:
assert message == {
"cpu": mock.ANY,
"memory": mock.ANY,
"timestamp": mock.ANY,
}
await jobs_client.delete_job(job_id=job_id)
async def test_top_close_when_job_succeeded(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = 'bash -c "for i in {1..2}; do echo $i; sleep 1; done"'
job_submit["container"]["command"] = command
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=job_submit) as response:
assert response.status == HTTPAccepted.status_code
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="succeeded")
job_top_url = monitoring_api.generate_top_url(job_id)
async with client.ws_connect(job_top_url, headers=headers) as ws:
msg = await ws.receive()
job = await jobs_client.get_job_by_id(job_id=job_id)
assert msg.type == aiohttp.WSMsgType.CLOSE
assert job["status"] == "succeeded"
await jobs_client.delete_job(job_id=job_id)
class TestLogApi:
async def test_log_no_permissions_forbidden(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
job_submit: dict[str, Any],
regular_user1: _User,
regular_user2: _User,
cluster_name: str,
) -> None:
url = platform_api.jobs_base_url
async with client.post(
url, headers=regular_user1.headers, json=job_submit
) as resp:
assert resp.status == HTTPAccepted.status_code
payload = await resp.json()
job_id = payload["id"]
url = monitoring_api.generate_log_url(job_id)
async with client.get(url, headers=regular_user2.headers) as resp:
assert resp.status == HTTPForbidden.status_code
result = await resp.json()
assert result == {
"missing": [
{
"uri": f"job://{cluster_name}/{regular_user1.name}/{job_id}",
"action": "read",
}
]
}
async def test_log_no_auth_token_provided_unauthorized(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
url = monitoring_api.generate_top_url(job_id=infinite_job)
async with client.get(url) as resp:
assert resp.status == HTTPUnauthorized.status_code
async def test_job_log(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = 'bash -c "for i in {1..5}; do echo $i; sleep 1; done"'
request_payload = job_submit
request_payload["container"]["command"] = command
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=request_payload) as response:
assert response.status == HTTPAccepted.status_code, await response.text()
result = await response.json()
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id, "succeeded")
url = monitoring_api.generate_log_url(job_id)
async with client.get(url, headers=headers) as response:
assert response.status == HTTPOk.status_code
assert response.content_type == "text/plain"
assert response.charset == "utf-8"
assert response.headers["Transfer-Encoding"] == "chunked"
assert "Content-Encoding" not in response.headers
actual_payload = await response.read()
expected_payload = "\n".join(str(i) for i in range(1, 6)) + "\n"
assert actual_payload == expected_payload.encode()
async def test_job_log_ws(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = 'bash -c "for i in {1..5}; do echo $i; sleep 1; done"'
request_payload = job_submit
request_payload["container"]["command"] = command
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=request_payload) as response:
assert response.status == HTTPAccepted.status_code, await response.text()
result = await response.json()
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id, "succeeded")
url = monitoring_api.generate_log_ws_url(job_id)
async with client.ws_connect(url, headers=headers) as ws:
ws_data = []
async for msg in ws:
ws_data.append(msg.data)
actual_payload = b"".join(ws_data)
expected_payload = "\n".join(str(i) for i in range(1, 6)) + "\n"
assert actual_payload == expected_payload.encode()
async def test_log_shared_by_name(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
job_submit: dict[str, Any],
job_name: str,
regular_user1: _User,
regular_user2: _User,
share_job: Callable[[_User, _User, str], Awaitable[None]],
) -> None:
job_submit["name"] = job_name
url = platform_api.jobs_base_url
async with client.post(
url, headers=regular_user1.headers, json=job_submit
) as resp:
assert resp.status == HTTPAccepted.status_code, await resp.text()
payload = await resp.json()
job_id = payload["id"]
assert payload["name"] == job_name
await share_job(regular_user1, regular_user2, job_name)
url = monitoring_api.generate_log_url(job_id)
async with client.get(url, headers=regular_user2.headers) as resp:
assert resp.status == HTTPOk.status_code
async def test_job_log_cleanup(
self,
monitoring_api_s3_storage: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
kube_client: MyKubeClient,
) -> None:
command = 'bash -c "for i in {1..5}; do echo $i; done; sleep 100"'
request_payload = job_submit
request_payload["container"]["command"] = command
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=request_payload) as response:
assert response.status == HTTPAccepted.status_code, await response.text()
result = await response.json()
job_id = result["id"]
        # The job is cancelled below, so its pod is removed immediately
await jobs_client.long_polling_by_job_id(job_id, "running")
await jobs_client.delete_job(job_id)
async def _wait_no_pod() -> None:
            while True:
                try:
                    await kube_client.get_pod(job_id)
                except JobNotFoundException:
                    return
                await asyncio.sleep(0.5)  # avoid busy-polling the Kube API
await asyncio.wait_for(_wait_no_pod(), timeout=60)
url = monitoring_api_s3_storage.generate_log_url(job_id)
async with client.get(url, headers=headers) as response:
actual_payload = await response.read()
expected_payload = "\n".join(str(i) for i in range(1, 6)) + "\n"
assert actual_payload == expected_payload.encode()
async with client.delete(url, headers=headers) as response:
assert response.status == HTTPNoContent.status_code
async with client.get(url, headers=headers) as response:
actual_payload = await response.read()
assert actual_payload == b""
async def test_job_logs_removed_on_drop(
self,
monitoring_api_s3_storage: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
kube_client: MyKubeClient,
) -> None:
command = 'bash -c "exit 0"'
request_payload = job_submit
request_payload["container"]["command"] = command
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=request_payload) as response:
assert response.status == HTTPAccepted.status_code, await response.text()
result = await response.json()
job_id = result["id"]
        # The job exits immediately, so its pod is removed once it finishes
await jobs_client.long_polling_by_job_id(job_id, "succeeded")
# Drop request
await jobs_client.drop_job(job_id)
async def _wait_no_job() -> None:
            while True:
                try:
                    await jobs_client.get_job_by_id(job_id)
                except AssertionError:
                    return
                await asyncio.sleep(0.5)  # avoid busy-polling the API
await asyncio.wait_for(_wait_no_job(), timeout=10)
class TestSaveApi:
async def test_save_no_permissions_forbidden(
self,
monitoring_api: MonitoringApiEndpoints,
platform_api: PlatformApiEndpoints,
client: aiohttp.ClientSession,
job_submit: dict[str, Any],
regular_user1: _User,
regular_user2: _User,
cluster_name: str,
) -> None:
url = platform_api.jobs_base_url
async with client.post(
url, headers=regular_user1.headers, json=job_submit
) as resp:
assert resp.status == HTTPAccepted.status_code
payload = await resp.json()
job_id = payload["id"]
url = monitoring_api.generate_save_url(job_id)
async with client.post(url, headers=regular_user2.headers) as resp:
assert resp.status == HTTPForbidden.status_code
result = await resp.json()
assert result == {
"missing": [
{
"uri": f"job://{cluster_name}/{regular_user1.name}/{job_id}",
"action": "write",
}
]
}
async def test_save_no_auth_token_provided_unauthorized(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
url = monitoring_api.generate_save_url(job_id=infinite_job)
async with client.post(url) as resp:
assert resp.status == HTTPUnauthorized.status_code
async def test_save_non_existing_job(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
) -> None:
headers = jobs_client.headers
job_id = f"job-{uuid4()}"
url = monitoring_api.generate_save_url(job_id=job_id)
async with client.post(url, headers=headers) as resp:
assert resp.status == HTTPBadRequest.status_code, str(resp)
assert "no such job" in await resp.text()
async def test_save_unknown_registry_host(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
url = monitoring_api.generate_save_url(job_id=infinite_job)
headers = jobs_client.headers
payload = {"container": {"image": "unknown:5000/alpine:latest"}}
async with client.post(url, headers=headers, json=payload) as resp:
assert resp.status == HTTPBadRequest.status_code, str(resp)
resp_payload = await resp.json()
assert "Unknown registry host" in resp_payload["error"]
async def test_save_not_running_job(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
config: Config,
kube_client: MyKubeClient,
) -> None:
await jobs_client.delete_job(infinite_job)
await kube_client.wait_pod_is_terminated(
pod_name=infinite_job, allow_pod_not_exists=True
)
url = monitoring_api.generate_save_url(job_id=infinite_job)
headers = jobs_client.headers
payload = {
"container": {"image": f"{config.registry.host}/alpine:{infinite_job}"}
}
async with client.post(url, headers=headers, json=payload) as resp:
assert resp.status == HTTPOk.status_code, str(resp)
chunks = [
json.loads(chunk.decode("utf-8"))
async for chunk in resp.content
if chunk
]
debug = f"Received chunks: `{chunks}`"
assert len(chunks) == 1, debug
assert "not running" in chunks[0]["error"], debug
async def test_save_push_failed_job_exception_raised(
self,
config_factory: Callable[..., Config],
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
invalid_runtime_config = ContainerRuntimeConfig(name="docker", port=1)
config = config_factory(container_runtime=invalid_runtime_config)
app = await create_app(config)
async with create_local_app_server(app, port=8080) as address:
monitoring_api = MonitoringApiEndpoints(address=address)
url = monitoring_api.generate_save_url(job_id=infinite_job)
headers = jobs_client.headers
image = f"{config.registry.host}/alpine:{infinite_job}"
payload = {"container": {"image": image}}
async with client.post(url, headers=headers, json=payload) as resp:
assert resp.status == HTTPOk.status_code, str(resp)
chunks = [
json.loads(chunk.decode("utf-8"))
async for chunk in resp.content
if chunk
]
debug = f"Received chunks: `{chunks}`"
assert len(chunks) == 1, debug
error = chunks[0]["error"]
assert "Unexpected error: Cannot connect to host" in error, debug
assert "Connect call failed" in error, debug
async def test_save_ok(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
config: Config,
) -> None:
url = monitoring_api.generate_save_url(job_id=infinite_job)
headers = jobs_client.headers
repository = f"{config.registry.host}/alpine"
image = f"{repository}:{infinite_job}"
payload = {"container": {"image": image}}
async with client.post(url, headers=headers, json=payload) as resp:
assert resp.status == HTTPOk.status_code, str(resp)
chunks = [
json.loads(chunk.decode("utf-8"))
async for chunk in resp.content
if chunk
]
debug = f"Received chunks: `{chunks}`"
assert isinstance(chunks, list), debug
assert all(isinstance(s, dict) for s in chunks), debug
assert len(chunks) >= 4, debug # 2 for commit(), >=2 for push()
            # here we rely on the chunks being received in the correct order
assert chunks[0]["status"] == "CommitStarted", debug
assert chunks[0]["details"]["image"] == image, debug
assert re.match(r"\w{64}", chunks[0]["details"]["container"]), debug
assert chunks[1] == {"status": "CommitFinished"}, debug
msg = f"The push refers to repository [{repository}]"
assert chunks[2].get("status") == msg, debug
assert chunks[-1].get("aux", {}).get("Tag") == infinite_job, debug
async def test_save_shared_by_name(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_name: str,
named_infinite_job: str,
regular_user2: _User,
share_job: Callable[..., Awaitable[None]],
config: Config,
) -> None:
await share_job(jobs_client.user, regular_user2, job_name, action="write")
url = monitoring_api.generate_save_url(job_id=named_infinite_job)
repository = f"{config.registry.host}/alpine"
image = f"{repository}:{named_infinite_job}"
payload = {"container": {"image": image}}
async with client.post(
url, headers=regular_user2.headers, json=payload
) as resp:
assert resp.status == HTTPOk.status_code, await resp.text()
class TestAttachApi:
async def test_attach_forbidden(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
) -> None:
url = monitoring_api.generate_attach_url(
job_id="anything", stdout=True, stderr=True
)
try:
async with client.ws_connect(url):
pass
except WSServerHandshakeError as e:
assert e.headers and e.headers.get("X-Error")
async def test_attach_nontty_stdout(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = 'bash -c "for i in {0..9}; do echo $i; sleep 1; done"'
job_submit["container"]["command"] = command
headers = jobs_client.headers
async with client.post(
platform_api.jobs_base_url, headers=headers, json=job_submit
) as response:
assert response.status == 202
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_attach_url(
job_id=job_id, stdout=True, stderr=True
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5) # empty message is sent to stdout
content = []
async for msg in ws:
content.append(msg.data)
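        # Attach frames are channel-prefixed: \x01 marks stdout, \x02 stderr,
        # and \x03 a control message carrying the exit code.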
expected = (
b"".join(f"\x01{i}\n".encode("ascii") for i in range(10))
+ b'\x03{"exit_code": 0}'
)
assert b"".join(content) in expected
await jobs_client.delete_job(job_id)
async def test_attach_nontty_stdout_shared_by_name(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
job_name: str,
regular_user2: _User,
share_job: Callable[..., Awaitable[None]],
) -> None:
command = 'bash -c "for i in {0..9}; do echo $i; sleep 1; done"'
job_submit["container"]["command"] = command
job_submit["name"] = job_name
async with client.post(
platform_api.jobs_base_url, headers=jobs_client.headers, json=job_submit
) as response:
assert response.status == 202, await response.text()
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
assert result["name"] == job_name
await share_job(jobs_client.user, regular_user2, job_name, action="write")
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_attach_url(
job_id=job_id, stdout=True, stderr=True
)
async with client.ws_connect(url, headers=regular_user2.headers) as ws:
await ws.receive_bytes(timeout=5) # empty message is sent to stdout
content = []
async for msg in ws:
content.append(msg.data)
expected = (
b"".join(f"\x01{i}\n".encode("ascii") for i in range(10))
+ b'\x03{"exit_code": 0}'
)
assert b"".join(content) in expected
await jobs_client.delete_job(job_id)
async def test_attach_nontty_stderr(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = 'bash -c "for i in {0..9}; do echo $i >&2; sleep 1; done"'
job_submit["container"]["command"] = command
headers = jobs_client.headers
async with client.post(
platform_api.jobs_base_url, headers=headers, json=job_submit
) as response:
assert response.status == 202
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_attach_url(
job_id=job_id, stdout=True, stderr=True
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5) # empty message is sent to stdout
content = []
async for msg in ws:
content.append(msg.data)
expected = (
b"".join(f"\x02{i}\n".encode("ascii") for i in range(10))
+ b'\x03{"exit_code": 0}'
)
assert b"".join(content) in expected
await jobs_client.delete_job(job_id)
async def test_attach_tty(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = "sh"
job_submit["container"]["command"] = command
job_submit["container"]["tty"] = True
headers = jobs_client.headers
async with client.post(
platform_api.jobs_base_url, headers=headers, json=job_submit
) as response:
assert response.status == 202
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_attach_url(
job_id=job_id, tty=True, stdin=True, stdout=True, stderr=False
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5) # empty message is sent to stdout
await ws.send_bytes(b"\x00\n")
assert await expect_prompt(ws) == b"\r\n# "
await ws.send_bytes(b"\x00echo 'abc'\n")
assert await expect_prompt(ws) == b"echo 'abc'\r\nabc\r\n# "
await jobs_client.delete_job(job_id)
async def test_attach_tty_exit_code(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = "sh"
job_submit["container"]["command"] = command
job_submit["container"]["tty"] = True
headers = jobs_client.headers
async with client.post(
platform_api.jobs_base_url, headers=headers, json=job_submit
) as response:
assert response.status == 202
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_attach_url(
job_id=job_id, tty=True, stdin=True, stdout=True, stderr=False
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5) # empty message is sent to stdout
await ws.send_bytes(b"\x00\n")
assert await expect_prompt(ws) == b"\r\n# "
await ws.send_bytes(b"\x00exit 1\n")
assert await expect_prompt(ws) == b"exit 1\r\n"
while True:
msg = await ws.receive(timeout=5)
if msg.type in (
aiohttp.WSMsgType.CLOSE,
aiohttp.WSMsgType.CLOSING,
aiohttp.WSMsgType.CLOSED,
):
break
if msg.data[0] == 3:
payload = json.loads(msg.data[1:])
# Zero code is returned even if we exited with non zero code.
# The same behavior as in kubectl.
assert payload["exit_code"] == 0
break
await jobs_client.delete_job(job_id)
async def test_reattach_just_after_exit(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_submit: dict[str, Any],
) -> None:
command = "sh"
job_submit["container"]["command"] = command
job_submit["container"]["tty"] = True
headers = jobs_client.headers
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=job_submit) as response:
assert response.status == 202
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_attach_url(
job_id=job_id, tty=True, stdin=True, stdout=True, stderr=False
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5) # empty message is sent to stdout
await ws.send_bytes(b"\x00\n")
assert await expect_prompt(ws) == b"\r\n# "
await ws.send_bytes(b"\x00exit 1\n")
assert await expect_prompt(ws) == b"exit 1\r\n"
await asyncio.sleep(2) # Allow poller to collect pod
with pytest.raises(WSServerHandshakeError) as err:
async with client.ws_connect(url, headers=headers):
pass
assert err.value.status == 404
await jobs_client.long_polling_by_job_id(job_id, status="failed")
await jobs_client.delete_job(job_id)
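# The tty testcases read shell output until the "# " prompt appears, via
# the expect_prompt helper defined earlier in this module. A minimal sketch
# of how such a prompt reader can be written (illustrative only; an
# assumption, not the module's actual implementation):
async def read_until_prompt(ws: aiohttp.ClientWebSocketResponse) -> bytes:
    buffer = b""
    while not buffer.endswith(b"# "):
        frame = await ws.receive_bytes(timeout=5)
        buffer += frame[1:]  # strip the one-byte channel prefix
    return buffer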
class TestExecApi:
async def test_exec_notty_stdout(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
url = monitoring_api.generate_exec_url(
infinite_job,
"sh -c 'sleep 5; echo abc'",
tty=False,
stdin=False,
stdout=True,
stderr=True,
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5)
data = await ws.receive_bytes()
assert data == b"\x01abc\n"
await jobs_client.delete_job(infinite_job)
async def test_exec_notty_stdout_shared_by_name(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_name: str,
named_infinite_job: str,
regular_user2: _User,
share_job: Callable[..., Awaitable[None]],
) -> None:
await share_job(jobs_client.user, regular_user2, job_name, action="write")
url = monitoring_api.generate_exec_url(
named_infinite_job,
"sh -c 'sleep 5; echo abc'",
tty=False,
stdin=False,
stdout=True,
stderr=True,
)
async with client.ws_connect(url, headers=regular_user2.headers) as ws:
await ws.receive_bytes(timeout=5)
data = await ws.receive_bytes()
assert data == b"\x01abc\n"
await jobs_client.delete_job(named_infinite_job)
async def test_exec_notty_stderr(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
url = monitoring_api.generate_exec_url(
infinite_job,
"sh -c 'sleep 5; echo abc 1>&2'",
tty=False,
stdin=False,
stdout=True,
stderr=True,
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5)
data = await ws.receive_bytes()
assert data == b"\x02abc\n"
await jobs_client.delete_job(infinite_job)
async def test_exec_tty(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
url = monitoring_api.generate_exec_url(
infinite_job,
"sh",
tty=True,
stdin=True,
stdout=True,
stderr=False,
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5)
assert (await expect_prompt(ws)).strip() == b"#"
await ws.send_bytes(b"\x00echo 'abc'\n")
assert await expect_prompt(ws) == b"echo 'abc'\r\nabc\r\n# "
await jobs_client.delete_job(infinite_job)
async def test_exec_tty_exit_code(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
url = monitoring_api.generate_exec_url(
infinite_job,
"sh",
tty=True,
stdin=True,
stdout=True,
stderr=False,
)
async with client.ws_connect(url, headers=headers) as ws:
await ws.receive_bytes(timeout=5)
assert (await expect_prompt(ws)).strip() == b"#"
await ws.send_bytes(b"\x00exit 42\n")
assert await expect_prompt(ws) == b"exit 42\r\n"
while True:
msg = await ws.receive(timeout=5)
if msg.type in (
aiohttp.WSMsgType.CLOSE,
aiohttp.WSMsgType.CLOSING,
aiohttp.WSMsgType.CLOSED,
):
break
if msg.data[0] == 3:
payload = json.loads(msg.data[1:])
assert payload["exit_code"] == 42
break
await jobs_client.delete_job(infinite_job)
class TestKillApi:
async def test_kill(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
url = monitoring_api.generate_kill_url(infinite_job)
async with client.post(url, headers=headers) as response:
assert response.status == 204, await response.text()
result = await jobs_client.long_polling_by_job_id(infinite_job, status="failed")
assert result["history"]["exit_code"] == 128 + signal.SIGKILL, result
async def test_kill_shared_by_name(
self,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
job_name: str,
named_infinite_job: str,
regular_user2: _User,
share_job: Callable[..., Awaitable[None]],
) -> None:
await share_job(jobs_client.user, regular_user2, job_name, action="write")
url = monitoring_api.generate_kill_url(named_infinite_job)
async with client.post(url, headers=regular_user2.headers) as response:
assert response.status == 204, await response.text()
result = await jobs_client.long_polling_by_job_id(
named_infinite_job, status="failed"
)
assert result["history"]["exit_code"] == 128 + signal.SIGKILL, result
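# The kill testcases rely on the POSIX convention that a process terminated
# by a signal exits with code 128 + signal number; for SIGKILL (9) that is
# 128 + 9 == 137. A quick illustration (not part of the test flow):
assert 128 + signal.SIGKILL == 137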
class TestPortForward:
async def test_port_forward_bad_port(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
# String port is invalid
url = monitoring_api.generate_port_forward_url(infinite_job, "abc")
async with client.get(url, headers=headers) as response:
assert response.status == 400, await response.text()
async def test_port_forward_cannot_connect(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
jobs_client: JobsClient,
infinite_job: str,
) -> None:
headers = jobs_client.headers
# Port 60001 is not handled
url = monitoring_api.generate_port_forward_url(infinite_job, 60001)
async with client.get(url, headers=headers) as response:
assert response.status == 400, await response.text()
@pytest.mark.minikube
async def test_port_forward_ok(
self,
platform_api: PlatformApiEndpoints,
monitoring_api: MonitoringApiEndpoints,
client: aiohttp.ClientSession,
job_submit: dict[str, Any],
jobs_client: JobsClient,
) -> None:
headers = jobs_client.headers
py = textwrap.dedent(
"""\
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 60002))
sock.listen()
cli, addr = sock.accept()
while True:
data = cli.recv(1024)
if not data:
break
cli.sendall(b"rep-"+data)
"""
)
command = f"python -c '{py}'"
job_submit["container"]["command"] = command
job_submit["container"]["image"] = "python:latest"
url = platform_api.jobs_base_url
async with client.post(url, headers=headers, json=job_submit) as response:
assert response.status == HTTPAccepted.status_code
result = await response.json()
assert result["status"] in ["pending"]
job_id = result["id"]
await jobs_client.long_polling_by_job_id(job_id=job_id, status="running")
url = monitoring_api.generate_port_forward_url(job_id, 60002)
async with client.ws_connect(url, headers=headers) as ws:
for i in range(3):
data = str(i).encode("ascii")
await ws.send_bytes(data)
ret = await ws.receive_bytes()
assert ret == b"rep-" + data
await ws.close()
| 35.614668 | 88 | 0.608227 | 7,049 | 61,186 | 5.066534 | 0.065683 | 0.02352 | 0.03066 | 0.044772 | 0.816235 | 0.784678 | 0.762082 | 0.738646 | 0.715714 | 0.693874 | 0 | 0.008012 | 0.296195 | 61,186 | 1,717 | 89 | 35.635411 | 0.821332 | 0.016883 | 0 | 0.689922 | 0 | 0.005638 | 0.063499 | 0.011716 | 0 | 0 | 0 | 0.000582 | 0.105004 | 1 | 0.021142 | false | 0.003524 | 0.017618 | 0.016209 | 0.07611 | 0.000705 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
800e6c29ffa3d080f6d4d07198193d958bb5dfa3 | 349 | py | Python | django/pages/views.py | wanyaworld/SearchThis | ea172d303679595158c55df6ec06168693e5c141 | [
"MIT"
] | null | null | null | django/pages/views.py | wanyaworld/SearchThis | ea172d303679595158c55df6ec06168693e5c141 | [
"MIT"
] | null | null | null | django/pages/views.py | wanyaworld/SearchThis | ea172d303679595158c55df6ec06168693e5c141 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def homePageView(request):
return HttpResponse('Hello, World!')
def documentPageView(request):
return HttpResponse('This is document.')
def queryPageView(request, arg):
return HttpResponse('This is the query result. Your query is: ' + arg)
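# For illustration only: these views would typically be wired up in the
# app's urls.py along these lines (the routes and names below are
# assumptions, not part of this repository):
#
#   from django.urls import path
#   from .views import homePageView, documentPageView, queryPageView
#
#   urlpatterns = [
#       path("", homePageView, name="home"),
#       path("document/", documentPageView, name="document"),
#       path("query/<str:arg>/", queryPageView, name="query"),
#   ]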
| 26.846154 | 69 | 0.759312 | 43 | 349 | 6.162791 | 0.581395 | 0.203774 | 0.188679 | 0.181132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148997 | 349 | 12 | 70 | 29.083333 | 0.892256 | 0.065903 | 0 | 0 | 0 | 0 | 0.203704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.25 | 0.375 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3380b8672496719da6bac16383479c73c4d3940a | 22 | py | Python | eve_rights/__init__.py | SakiiR/eve-rights | 8af4be16c7afe7d99a42b8c5d2407f17a22866fb | [
"MIT"
] | null | null | null | eve_rights/__init__.py | SakiiR/eve-rights | 8af4be16c7afe7d99a42b8c5d2407f17a22866fb | [
"MIT"
] | null | null | null | eve_rights/__init__.py | SakiiR/eve-rights | 8af4be16c7afe7d99a42b8c5d2407f17a22866fb | [
"MIT"
] | null | null | null | from .main import Eve
| 11 | 21 | 0.772727 | 4 | 22 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33b0b081865040bee5fc91dcecd37602efac9f4b | 35,565 | py | Python | tests/naif_pds4_bundler/functional/test_insight.py | NASA-PDS/naif-pds4-bundler | bd7207d157ec9cae60f42cb9ea387ac194b1671c | [
"Apache-2.0"
] | null | null | null | tests/naif_pds4_bundler/functional/test_insight.py | NASA-PDS/naif-pds4-bundler | bd7207d157ec9cae60f42cb9ea387ac194b1671c | [
"Apache-2.0"
] | null | null | null | tests/naif_pds4_bundler/functional/test_insight.py | NASA-PDS/naif-pds4-bundler | bd7207d157ec9cae60f42cb9ea387ac194b1671c | [
"Apache-2.0"
] | null | null | null | """Functional Test Family for InSight Archive Generation."""
import glob
import os
import shutil
from pds.naif_pds4_bundler.__main__ import main
def test_insight_basic(self):
"""Test for basic execution of the pipeline.
Test complete pipeline with basic Insight data: FKs, IKs and a SCLK.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_26.plan"
faucet = "bundle"
shutil.copy2(
"../data/insight_release_basic.kernel_list",
"working/insight_release_07.kernel_list",
)
shutil.copytree("../data/insight", "insight")
shutil.copytree("../data/kernels", "kernels")
with open("../data/insight.list", "r") as i:
for line in i:
with open(f"insight/insight_spice/{line[0:-1]}", "w"):
pass
main(config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_diff_previous_none(self):
"""Test for Diff Files compared with default files.
Testcase in which products should be compared with previous increment
of the archive. The reporting of diff files is set to none.
The pipeline stops before copying the previous increment files
to the staging area.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
shutil.copytree("../data/insight", "insight")
main(config, plan, faucet, silent=self.silent, log=self.log, diff="")
def test_insight_diff_previous_all(self):
"""Test for Diff Files compared with previous archive version (1).
Testcase in which products are compared with previous increment of the
archive. The pipeline stops before copying the previous increment
files to the staging area. Diffs are reported both in the log and in
files.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
shutil.copytree("../data/insight", "insight")
main(config, plan, faucet, silent=self.silent, log=self.log, diff="all")
def test_insight_diff_previous_files(self):
"""Test for Diff Files compared with previous archive version (2).
Testcase in which products are compared with previous increment of the
archive. The pipeline stops before copying the previous increment
files to the staging area. Diffs are reported only in files.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
shutil.copytree("../data/insight", "insight")
main(config, plan, faucet, silent=self.silent, log=self.log, diff="files")
def test_insight_diff_previous_log(self):
"""Test for Diff Files compared with previous archive version (3).
Testcase in which products are compared with previous increment of the
archive. The pipeline stops before copying the previous increment
files to the staging area. Diffs are reported only in log.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
shutil.copytree("../data/insight", "insight")
main(config, plan, faucet, silent=self.silent, log=self.log, diff="log")
def test_insight_diff_templates(self):
"""Test for Diff Files compared with templates.
Testcase in which products are compared with the templates used to
generate the products and with similar kernels; the ``bundle_directory``
files are not present. The pipeline stops before copying the
previous increment files to the staging area.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
os.makedirs("insight", exist_ok=True)
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
os.makedirs("insight/insight_spice/spice_kernels/sclk", exist_ok=True)
shutil.copy2(
"../data/insight/insight_spice/spice_kernels/sclk/marcob_fake_v01.xml",
"insight/insight_spice/spice_kernels/sclk",
)
with open("insight/insight_spice/spice_kernels/sclk/marcob_fake_v01.tsc", "w"):
pass
main(config, plan, faucet, silent=self.silent, log=self.log, diff="all")
def test_insight_files_in_staging(self):
"""Test for products present in staging directory.
Testcase in which products are already present in the staging
directory. The log provides error messages but the process is not
stopped. The process finishes before moving all files to the final
area.
The test also checks the obtaining of checksums from already existing
labels.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
shutil.move("staging", "staging_old")
shutil.copytree("../data/insight", "staging")
os.makedirs("insight", exist_ok=True)
#
# The files that are added on top of the staging area need to have
# their coverage extracted. Those are the files provided in the
# insight_08.list.
#
with open("../data/insight.list", "r") as i:
for line in i:
with open(f"staging/insight_spice/{line[0:-1]}", "w"):
pass
shutil.copy2(
"../data/kernels/sclk/NSY_SCLKSCET.00019.tsc",
"staging/insight_spice/spice_kernels/sclk/nsy_sclkscet_00019.tsc",
)
shutil.copy2(
"../data/kernels/ck/insight_ida_enc_200829_201220_v1.bc",
"staging/insight_spice/spice_kernels/ck/",
)
shutil.copy2(
"../data/kernels/ck/insight_ida_enc_200829_201220_v1.xml",
"staging/insight_spice/spice_kernels/ck/",
)
shutil.copy2(
"../data/kernels/ck/insight_ida_pot_200829_201220_v1.bc",
"staging/insight_spice/spice_kernels/ck/",
)
shutil.copy2(
"../data/kernels/ck/insight_ida_pot_200829_201220_v1.xml",
"staging/insight_spice/spice_kernels/ck/",
)
shutil.copy2(
"../data/kernels/mk/insight_v08.tm",
"staging/insight_spice/spice_kernels/mk/",
)
main(config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_previous_spiceds(self):
"""Test SPICEDS from previous release.
Testcase for when the SPICEDS file is not provided
via configuration but the previous version is available.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
shutil.copytree("../data/insight", "insight")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<spiceds>../data/spiceds_test.html</spiceds>" in line:
n.write(" <spiceds> </spiceds>\n")
else:
n.write(line)
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
main(updated_config, plan, faucet, silent=self.silent, log=self.log, diff="all")
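# The paired open()/write() loops used throughout this module rewrite
# single lines of the XML configuration. A minimal helper sketch capturing
# the pattern (illustrative only; hypothetical, not defined by these tests):
def rewrite_config_line(src: str, dst: str, match: str, replacement: str) -> None:
    """Copy src to dst, replacing every line that contains ``match``."""
    with open(src, "r") as c:
        with open(dst, "w") as n:
            for line in c:
                n.write(replacement if match in line else line)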
def test_insight_start_finish(self):
"""Test Archive increment start and finish times from configuration.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
shutil.copytree("../data/insight", "insight")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "</readme>" in line:
n.write(
"</readme>\n"
"<release_date>2021-04-04</release_date>\n"
"<increment_start>2021-04-03T20:53:00Z"
"</increment_start>\n"
"<increment_finish>"
"2021-04-23T20:53:00Z</increment_finish>\n"
)
else:
n.write(line)
main(updated_config, plan, faucet, silent=self.silent, log=self.log, diff="")
def test_insight_incorrect_times(self):
"""Test for incorrect increment start and finish times via configuration.
Test is successful if NPB raises runtime errors for each NPB call.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
shutil.copytree("../data/insight", "insight")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<mission_start>2018-05-05T11:05:00Z</mission_start>" in line:
n.write(
" "
"<mission_start>2018-05-05T11:05:00"
"</mission_start>\n"
)
else:
n.write(line)
with self.assertRaises(RuntimeError):
main(
updated_config, plan, faucet, silent=self.silent, log=self.log, diff=""
)
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "</readme>" in line:
n.write(
" </readme>\n"
" <release_date>2021</release_date>\n"
)
else:
n.write(line)
with self.assertRaises(RuntimeError):
main(
updated_config, plan, faucet, silent=self.silent, log=self.log, diff=""
)
def test_insight_mk_input(self):
"""Test incorrect input MK information.
The MK configuration includes indications of how the INSIGHT MK should
be named, and even if the kernel is provided manually, NPB still checks
the expected name and raises an error.
Test is successful if the first run with ``insight_2021_v08.tm`` signals
this runtime error::
RuntimeError: Meta-kernel insight_2021_v08.tm has not been matched in configuration.
and then NPB executes without errors with ``insight_v08.tm``.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "working/insight.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
os.makedirs("insight", exist_ok=True)
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<mk name="insight_v$VERSION.tm">' in line:
n.write(
" <mk_inputs>\n"
" <file>working/insight_2021_v08.tm"
"</file>\n"
" </mk_inputs>\n"
' <mk name="insight_v$VERSION.tm">\n'
)
else:
n.write(line)
with open("working/insight_2021_v08.tm", "w") as p:
p.write("test")
with open("working/insight.plan", "w") as p:
p.write("nsy_sclkscet_00019.tsc")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
with self.assertRaises(RuntimeError):
main(
updated_config,
plan,
faucet,
silent=self.silent,
log=self.log,
diff="all",
)
main(
    updated_config,
    plan,
    clear="working/insight_release_08.file_list",
    silent=self.silent,
    log=self.log,
    diff="all",
)
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<file> </file>" in line:
n.write(" <file>../data/insight_v08.tm</file>\n")
else:
n.write(line)
main(updated_config, plan, faucet, silent=self.silent, log=self.log, diff="all")
def test_insight_mks_input(self):
"""Test MKs with incorrect file architecture.
An MK has the appropriate architecture if its first line is::
KPL/MK
The test is successful if the following error is raised::
spiceypy.utils.exceptions.SpiceFILEREADFAILED:
================================================================================
Toolkit version: CSPICE66
SPICE(FILEREADFAILED) --
An Attempt to Read a File Failed
Attempt to read from file 'working/insight_v08.tm' failed. IOSTAT = -1.
getfat_c --> GETFAT
================================================================================
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "working/insight.plan"
faucet = "staging"
os.makedirs("insight", exist_ok=True)
shutil.copytree("../data/kernels", "kernels")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<mk name="insight_v$VERSION.tm">' in line:
n.write(
" <mk_inputs>\n"
"<file>working/insight_v08.tm</file>\n"
"<file>working/insight_v09.tm</file>\n"
" </mk_inputs>\n"
' <mk name="insight_v$VERSION.tm">\n'
)
else:
n.write(line)
with open("working/insight_v08.tm", "w") as p:
p.write("test")
with open("working/insight_v09.tm", "w") as p:
p.write("test")
with open("working/insight.plan", "w") as p:
p.write("nsy_sclkscet_00019.tsc")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
with self.assertRaises(BaseException):
main(
updated_config,
plan,
faucet,
silent=self.silent,
log=self.log,
diff="all",
)
def test_insight_mks_inputs_coverage(self):
"""Test MK coverage not determined from kernels in MK.
Testcase for when one of the meta-kernels does not include the SPK/CK
that determines the coverage of the meta-kernel (implemented after
M2020 Chronos meta-kernel generation).
NPB log provides the following message::
WARNING : -- No kernel(s) found to determine meta-kernel coverage. Mission times will be used:
WARNING : 2018-05-05T11:05:00Z - 2050-01-01T00:00:00Z
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_00.plan"
faucet = "bundle"
shutil.copy2(
"../data/insight_release_basic.kernel_list",
"working/insight_release_07.kernel_list",
)
shutil.copytree("../data/insight", "insight")
shutil.copytree("../data/kernels", "kernels")
os.remove("kernels/mk/insight_v08.tm")
shutil.copy2("../data/insight_v00.tm", "working/insight_v00.tm")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<mk name="insight_v$VERSION.tm">' in line:
n.write(
" <mk_inputs>\n"
"<file>working/insight_v00.tm</file>\n"
" </mk_inputs>\n"
' <mk name="insight_v$VERSION.tm">\n'
)
else:
n.write(line)
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_mks_coverage_in_final(self):
"""Test MK coverage determined by kernel in ``bundle_directory``.
Test MK coverage determination from a kernel listed in the MK but not
present in the current release. NPB will report it in the log as follows::
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/spk/insight_cru_ops_v1.bsp.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/spk/insight_edl_rec_v1.bsp.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_180505_181127_v1.bc.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_181127_190331_v2.bc.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_190331_190629_v2.bc.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_190629_190918_v2.bc.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_190925_190929_v1.bc.
WARNING : It will not be used to determine the coverage.
INFO : -- File insight_ida_enc_190929_191120_v1.bc used to determine coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_191120_200321_v1.bc.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_200321_200623_v1.bc.
WARNING : It will not be used to determine the coverage.
WARNING : -- File not present in final area: /insight/insight_spice/spice_kernels/ck/insight_ida_enc_200623_200829_v1.bc.
WARNING : It will not be used to determine the coverage.
INFO : -- Meta-kernel coverage: 2019-11-07T02:00:00Z - 2020-11-07T03:00:00Z
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_00.plan"
faucet = "bundle"
shutil.copy2(
"../data/insight_release_basic.kernel_list",
"working/insight_release_07.kernel_list",
)
shutil.copytree("../data/insight", "insight")
shutil.copytree("../data/kernels", "kernels")
shutil.copy2("../data/insight_v08.tm", "working/insight_v08.tm")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<mk name="insight_v$VERSION.tm">' in line:
n.write(
" <mk_inputs>\n"
"<file>working/insight_v08.tm</file>\n"
" </mk_inputs>\n"
' <mk name="insight_v$VERSION.tm">\n'
)
else:
n.write(line)
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_generate_mk(self):
"""Test MK generation with no MK input.
Test is successful if NPB is executed without errors.
"""
shutil.copytree("../data/kernels", "kernels")
os.makedirs("insight", exist_ok=True)
config = "../config/insight.xml"
plan = "working/insight.plan"
faucet = "staging"
with open(plan, "w") as p:
p.write("nsy_sclkscet_00019.tsc")
main(config, plan=plan, faucet=faucet, silent=self.silent, log=self.log)
def test_insight_no_spiceds_in_conf(self):
"""Test when no SPICEDS is provided via configuration.
Testcase for when the SPICEDS file is not provided
via configuration but the previous version is available.
The INFO messages provided by the NPB log are as follows::
INFO : -- No spiceds file provided.
INFO : -- Previous spiceds found: /insight/insight_spice/document/spiceds_v001.html
The first call to NPB is done with a configuration file with the
``<spiceds>`` element whereas the second one does not.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
shutil.copytree("../data/kernels", "kernels")
shutil.copytree("../data/insight", "insight")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<spiceds>../data/spiceds_insight.html</spiceds>" in line:
n.write(" <spiceds></spiceds>\n")
else:
n.write(line)
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<spiceds>../data/spiceds_insight.html</spiceds>" in line:
n.write("")
else:
n.write(line)
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
main(
updated_config,
plan,
faucet,
silent=self.silent,
log=self.log,
diff="all",
)
def test_insight_no_spiceds(self):
"""Test when no SPICEDS is available.
Testcase for when the SPICEDS file is not provided
via configuration and the previous version is not available.
The test is successful if the following error is raised::
RuntimeError: spiceds not provided and not available from previous releases.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "staging"
os.makedirs("insight", exist_ok=True)
shutil.copytree("../data/kernels", "kernels")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<spiceds>../data/spiceds_insight.html</spiceds>" in line:
n.write(" <spiceds></spiceds>\n")
else:
n.write(line)
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
with self.assertRaises(RuntimeError):
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_no_readme(self):
"""Test when the readme file is not present.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "bundle"
os.makedirs("insight", exist_ok=True)
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
main(config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_no_readme_in_config(self):
"""Test when the readme file is not provided via configuration.
The first run raises an the following error::
RuntimeError: readme file not present in configuration.
because the readme file is not present in the ``bundle_directory``.
The second run includes the readme file in the bundle directory and
therefore it does not raise an error.
The test is successful if the conditions specified above are met.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "bundle"
os.makedirs("insight", exist_ok=True)
shutil.copytree("../data/kernels", "kernels")
write_config = True
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<readme>" in line:
write_config = False
if write_config:
n.write(line)
if "</readme>" in line:
write_config = True
with self.assertRaises(RuntimeError):
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_readme_incomplete_in_config(self):
"""Test when the readme file configuration is not complete.
The error is detected by the XML validation against the schema.
The test is successful if the following error is raised::
Reason: The content of element 'readme' is not complete. Tag 'cognisant_authority' expected.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
plan = "../data/insight_release_08.plan"
faucet = "bundle"
os.makedirs("insight", exist_ok=True)
shutil.copytree("../data/kernels", "kernels")
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
write_config = True
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<cognisant_authority>" in line:
write_config = False
if write_config:
n.write(line)
if "</cognisant_authority>" in line:
write_config = True
with self.assertRaises(KeyError):
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_no_kernels(self):
"""Test without bundle and no input kernels provided but a SPICEDS is.
NPB log will include the following WARNING message::
WARNING : -- No kernels will be added to the increment.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
faucet = "bundle"
os.makedirs("insight", exist_ok=True)
os.makedirs("kernels", exist_ok=True)
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
main(config, plan=False, faucet=faucet, silent=self.silent, log=self.log)
def test_insight_no_kernels_with_bundle(self):
"""Test without input kernels provided but a SPICEDS is.
NPB log will include the following WARNING message::
WARNING : -- No kernels will be added to the increment.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
faucet = "bundle"
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
os.makedirs("kernels", exist_ok=True)
shutil.copytree("../data/insight", "insight")
main(config, plan=False, faucet=faucet, silent=self.silent, log=self.log)
def test_insight_only_checksums(self):
"""Test without any input (kernels or SPICEDS).
No inputs are provided at all but checksums are generated.
Test is successful if NPB is executed without errors.
"""
config = "../config/insight.xml"
updated_config = "working/insight.xml"
faucet = "bundle"
for file in glob.glob("../data/insight_release_0[0-7].kernel_list"):
shutil.copy2(file, "working")
shutil.copytree("../data/insight", "insight")
os.makedirs("kernels", exist_ok=True)
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if "<spiceds>../data/spiceds_insight.html</spiceds>" in line:
n.write(" " "<spiceds> </spiceds>\n")
else:
n.write(line)
main(updated_config, plan=False, faucet=faucet, silent=self.silent, log=self.log)
def test_insight_extra_mk_pattern(self):
"""Test to generate the INSIGHT archive with an extra MK pattern.
This test was generated to address a bug found in the MAVEN release 27.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_08.plan"
updated_config = "working/insight_updated.xml"
shutil.copy2(
"../data/insight_release_basic.kernel_list",
"working/insight_release_07.kernel_list",
)
shutil.copytree("../data/insight", "insight")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<mk name="insight_v$VERSION.tm">' in line:
n.write('<mk name="insight_$DATE_v$VERSION.tm">\n')
elif '<pattern length="2">VERSION</pattern>' in line:
n.write('<pattern length="2">VERSION</pattern>\n')
n.write('<pattern length="4">DATE</pattern>\n')
else:
n.write(line)
updated_plan = 'working/insight_updated.plan'
with open(plan, "r") as c:
with open(updated_plan, "w") as n:
for line in c:
if 'insight_v08.tm' in line:
n.write(
'insight_2021_v08.tm \\\n'
)
else:
n.write(line)
try:
shutil.copytree("../data/kernels", "kernels")
except BaseException:
pass
#
# Remove the MK used for other tests. MK will be generated
# by NPB.
#
os.remove("kernels/mk/insight_v08.tm")
main(
updated_config, updated_plan, faucet="bundle", silent=self.silent, log=self.log
)
def test_insight_increment_with_misc(self):
"""Test to generate the INSIGHT archive incrementing the checksums.
This test was generated to address a bug found in the MAVEN release 27.
"""
config = "../config/insight.xml"
plan = "working/insight_release_08.plan"
updated_config = "working/insight_updated.xml"
os.mkdir("insight")
shutil.copytree("../data/regression/insight_spice", "insight/insight_spice")
shutil.copytree("../data/kernels", "kernels")
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<mk name="insight_v$VERSION.tm">' in line:
n.write('<mk name="insight_$DATE_v$VERSION.tm">\n')
elif '<pattern length="2">VERSION</pattern>' in line:
n.write('<pattern length="2">VERSION</pattern>\n')
n.write('<pattern length="4">DATE</pattern>\n')
elif '<kernel pattern="insight_v[0-9][0-9].tm">' in line:
n.write('<kernel pattern="insight_[0-9]{4}_v[0-9][0-9].tm">\n')
else:
n.write(line)
with open(plan, "w") as n:
n.write("insight_2021_v08.tm")
shutil.copy2("../data/kernels/mk/insight_v08.tm",
"kernels/mk/insight_2021_v08.tm")
main(updated_config, plan, faucet="bundle", silent=self.silent, log=self.log)
def test_insight_missing_bundle_directory(self):
"""Test for missing bundle directory.
Test is successful if an error message is raised.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_26.plan"
faucet = "bundle"
shutil.copy(
"../data/insight_release_basic.kernel_list",
"working/insight_release_07.kernel_list")
shutil.copytree("../data/kernels", "kernels")
with self.assertRaises(RuntimeError):
main(config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_missing_staging_directory_nok(self):
"""Test for missing staging directory."""
config = "../config/insight.xml"
updated_config = "insight.xml"
plan = "../data/insight_release_26.plan"
faucet = "list"
shutil.copytree("../data/insight", "insight")
shutil.copytree("../data/kernels", "kernels")
with open("../data/insight.list", "r") as i:
for line in i:
with open(f"insight/insight_spice/{line[0:-1]}", "w"):
pass
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<staging_directory>staging</staging_directory>' in line:
n.write('<staging_directory>insight</staging_directory>\n')
else:
n.write(line)
with self.assertRaises(RuntimeError):
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
with open(config, "r") as c:
with open(updated_config, "w") as n:
for line in c:
if '<staging_directory>staging</staging_directory>' in line:
n.write('<staging_directory>working</staging_directory>\n')
else:
n.write(line)
with self.assertRaises(RuntimeError):
main(updated_config, plan, faucet, silent=self.silent, log=self.log)
def test_insight_flat_kernel_directory(self):
"""Test that kernels are obtained from a flat kernel directory.
No sub-directories are present in the kernel directory structure except for FKs.
"""
config = "../config/insight.xml"
plan = "../data/insight_release_26.plan"
faucet = "bundle"
shutil.copy2(
"../data/insight_release_basic.kernel_list",
"working/insight_release_07.kernel_list",
)
shutil.copytree("../data/insight", "insight")
os.mkdir("kernels")
shutil.copytree("../data/kernels/fk", "kernels/fk")
shutil.copytree("../data/kernels/ik", "kernels/", dirs_exist_ok=True)
shutil.copytree("../data/kernels/lsk", "kernels/", dirs_exist_ok=True)
shutil.copytree("../data/kernels/mk", "kernels/", dirs_exist_ok=True)
shutil.copytree("../data/kernels/sclk", "kernels/", dirs_exist_ok=True)
with open("../data/insight.list", "r") as i:
for line in i:
with open(f"insight/insight_spice/{line[0:-1]}", "w"):
pass
main(config, plan, faucet, silent=self.silent, log=self.log)
| 35.004921 | 128 | 0.613721 | 4,570 | 35,565 | 4.652735 | 0.076805 | 0.03673 | 0.039787 | 0.029488 | 0.789117 | 0.766214 | 0.739548 | 0.723416 | 0.702911 | 0.689178 | 0 | 0.022669 | 0.260762 | 35,565 | 1,015 | 129 | 35.039409 | 0.786086 | 0.26976 | 0 | 0.756849 | 0 | 0.001712 | 0.32341 | 0.218465 | 0 | 0 | 0 | 0 | 0.017123 | 1 | 0.047945 | false | 0.010274 | 0.006849 | 0 | 0.054795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33f04180224c13decfecdcc16acf9d09bf9360b5 | 154 | py | Python | prngmgr_inventory/__init__.py | wolcomm/ansible-prngmgr-inventory | 1277ceb9a3911e43b62d26f29e26a52f029a55e8 | [
"Apache-2.0"
] | 1 | 2019-05-15T16:00:59.000Z | 2019-05-15T16:00:59.000Z | prngmgr_inventory/__init__.py | wolcomm/ansible-prngmgr-inventory | 1277ceb9a3911e43b62d26f29e26a52f029a55e8 | [
"Apache-2.0"
] | null | null | null | prngmgr_inventory/__init__.py | wolcomm/ansible-prngmgr-inventory | 1277ceb9a3911e43b62d26f29e26a52f029a55e8 | [
"Apache-2.0"
] | null | null | null | """
PrngMgr dynamic inventory module
================================
An Ansible dynamic inventory module to fetch peering session data from prngmgr
"""
| 22 | 78 | 0.623377 | 16 | 154 | 6 | 0.75 | 0.333333 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12987 | 154 | 6 | 79 | 25.666667 | 0.716418 | 0.941558 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33f5c7c095ae530d3dd37b4637ff50a4ac0f7267 | 54,884 | py | Python | events/tests/snapshots/snap_test_api.py | City-of-Helsinki/kukkuu | 61f26bc622928fd04f6a397f832aaffff789e806 | [
"MIT"
] | null | null | null | events/tests/snapshots/snap_test_api.py | City-of-Helsinki/kukkuu | 61f26bc622928fd04f6a397f832aaffff789e806 | [
"MIT"
] | 157 | 2019-10-08T07:58:59.000Z | 2022-03-20T23:00:17.000Z | events/tests/snapshots/snap_test_api.py | City-of-Helsinki/kukkuu | 61f26bc622928fd04f6a397f832aaffff789e806 | [
"MIT"
] | 3 | 2019-10-07T12:06:26.000Z | 2022-01-25T14:03:14.000Z | # -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots["test_add_event_group[model_perm] 1"] = {
"data": {
"addEventGroup": {
"eventGroup": {
"image": "",
"imageAltText": "Image alt text",
"project": {"year": 2020},
"publishedAt": None,
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event group test",
"shortDescription": "Short desc",
}
],
}
}
}
}
snapshots["test_add_event_group[object_perm] 1"] = {
"data": {
"addEventGroup": {
"eventGroup": {
"image": "",
"imageAltText": "Image alt text",
"project": {"year": 2020},
"publishedAt": None,
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event group test",
"shortDescription": "Short desc",
}
],
}
}
}
}
snapshots["test_add_event_project_user 1"] = {
"data": {
"addEvent": {
"event": {
"capacityPerOccurrence": 30,
"duration": 1000,
"image": "",
"imageAltText": "Image alt text",
"participantsPerInvite": "FAMILY",
"project": {"year": 2020},
"publishedAt": None,
"readyForEventGroupPublishing": False,
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event test",
"shortDescription": "Short desc",
}
],
}
}
}
}
snapshots["test_add_occurrence_project_user 1"] = {
"data": {
"addOccurrence": {
"occurrence": {
"capacity": 35,
"capacityOverride": None,
"event": {"createdAt": "2020-12-12T00:00:00+00:00"},
"occurrenceLanguage": "FI",
"ticketSystem": {"type": "INTERNAL"},
"time": "1986-12-12T16:40:48+00:00",
"venue": {"createdAt": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_add_occurrence_ticket_system_url 1"] = {
"data": {
"addOccurrence": {
"occurrence": {
"capacity": 9,
"capacityOverride": None,
"event": {"createdAt": "2020-12-12T00:00:00+00:00"},
"occurrenceLanguage": "FI",
"ticketSystem": {"type": "TICKETMASTER", "url": "https://example.com"},
"time": "1986-12-12T16:40:48+00:00",
"venue": {"createdAt": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_add_ticketmaster_event 1"] = {
"data": {"addEvent": {"event": {"ticketSystem": {"type": "TICKETMASTER"}}}}
}
snapshots["test_child_enrol_occurence_from_different_project 1"] = {
"data": {
"enrolOccurrence": {
"enrolment": {
"child": {"firstName": "Brandon"},
"createdAt": "2020-12-12T00:00:00+00:00",
"occurrence": {"time": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_delete_event_group[model_perm] 1"] = {
"data": {"deleteEventGroup": {"__typename": "DeleteEventGroupMutationPayload"}}
}
snapshots["test_delete_event_group[object_perm] 1"] = {
"data": {"deleteEventGroup": {"__typename": "DeleteEventGroupMutationPayload"}}
}
snapshots["test_enrol_occurrence 1"] = {
"data": {
"enrolOccurrence": {
"enrolment": {
"child": {"firstName": "Brandy"},
"createdAt": "2020-12-12T00:00:00+00:00",
"occurrence": {"time": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_enrolment_visibility 1"] = {
"data": {
"occurrence": {
"enrolmentCount": 4,
"enrolments": {"edges": [{"node": {"child": {"firstName": "Brandy"}}}]},
"event": {
"capacityPerOccurrence": 25,
"duration": 1,
"image": "http://testserver/media/series.jpg",
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"publishedAt": "2020-12-12T00:00:00+00:00",
"translations": [
{
"description": "Law ago respond yard door indicate country. Direction traditional whether serious sister work. Beat pressure unit toward movie by.",
"languageCode": "FI",
"name": "Detail audience campaign college career fight data.",
"shortDescription": "Last in able local garden modern they.",
}
],
},
"occurrenceLanguage": "FI",
"remainingCapacity": 21,
"ticketSystem": {"type": "INTERNAL"},
"time": "2020-12-12T00:00:00+00:00",
"venue": {
"translations": [
{
"accessibilityInfo": "Theory go home memory respond improve office. Near increase process truth list pressure. Capital city sing himself yard stuff. Option PM put matter benefit.",
"additionalInfo": """Policy data control as receive.
Teacher subject family around year. Space speak sense person the probably deep.
Social believe policy security score. Turn argue present throw spend prevent.""",
"address": """404 Figueroa Trace
Pollardview, RI 68038""",
"arrivalInstructions": """Significant land especially can quite industry relationship. Which president smile staff country actually generation. Age member whatever open effort clear.
Local challenge box myself last.""",
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"languageCode": "FI",
"name": "Dog hospital number.",
"wwwUrl": "https://www.beck-sherman.com/",
}
]
},
}
}
}
snapshots["test_enrolment_visibility_project_user 1"] = {
"data": {
"occurrence": {
"enrolments": {"edges": [{"node": {"child": {"firstName": "ME ME ME"}}}]}
}
}
}
snapshots["test_event_filter_by_project 1"] = {
"data": {"events": {"edges": [{"node": {"name": "Should be visible"}}]}}
}
snapshots["test_event_group_events_filtering_by_available_for_child_id 1"] = {
"data": {"eventGroup": {"events": {"edges": [{"node": {"name": "ME ME ME"}}]}}}
}
snapshots["test_event_group_events_filtering_by_available_for_child_id 2"] = {
"data": {
"eventGroup": {
"events": {
"edges": [
{
"node": {
"name": "Performance race story capital city sing himself."
}
},
{"node": {"name": "ME ME ME"}},
]
}
}
}
}
snapshots["test_event_group_query_normal_user_and_project_user[False] 1"] = {
"data": {"eventGroup": None}
}
snapshots["test_event_group_query_normal_user_and_project_user[False] 2"] = {
"data": {
"eventGroup": {
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"events": {"edges": []},
"image": "thank.jpg",
"imageAltText": "",
"name": "Lead behind everyone agency start majority.",
"project": {"year": 2020},
"publishedAt": None,
"shortDescription": "Answer entire increase thank certainly again thought.",
"translations": [
{
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Lead behind everyone agency start majority.",
"shortDescription": "Answer entire increase thank certainly again thought.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
}
}
snapshots["test_event_group_query_normal_user_and_project_user[True] 1"] = {
"data": {
"eventGroup": {
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"events": {"edges": []},
"image": "thank.jpg",
"imageAltText": "",
"name": "Lead behind everyone agency start majority.",
"project": {"year": 2020},
"publishedAt": "2020-12-12T00:00:00+00:00",
"shortDescription": "Answer entire increase thank certainly again thought.",
"translations": [
{
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Lead behind everyone agency start majority.",
"shortDescription": "Answer entire increase thank certainly again thought.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
}
}
snapshots["test_event_group_query_normal_user_and_project_user[True] 2"] = {
"data": {
"eventGroup": {
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"events": {"edges": []},
"image": "thank.jpg",
"imageAltText": "",
"name": "Lead behind everyone agency start majority.",
"project": {"year": 2020},
"publishedAt": "2020-12-12T00:00:00+00:00",
"shortDescription": "Answer entire increase thank certainly again thought.",
"translations": [
{
"description": """Page box child care any concern. Defense level church use.
Never news behind. Beat at success decade either enter everything. Newspaper force newspaper business himself exist.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Lead behind everyone agency start majority.",
"shortDescription": "Answer entire increase thank certainly again thought.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
}
}
snapshots["test_event_group_query_wrong_project 1"] = {"data": {"eventGroup": None}}
snapshots["test_event_query_normal_user 1"] = {
"data": {
"event": {
"capacityPerOccurrence": 35,
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"duration": 181,
"image": "http://testserver/media/teacher.jpg",
"imageAltText": "",
"name": "Poor lawyer treat free heart significant.",
"occurrences": {
"edges": [
{
"node": {
"enrolmentCount": 0,
"remainingCapacity": 35,
"ticketSystem": {"type": "INTERNAL"},
"time": "1971-04-30T08:38:26+00:00",
"venue": {
"translations": [
{
"description": "Later evening southern would according strong. Analysis season project executive entire.",
"languageCode": "FI",
"name": "Skill down subject town range north skin.",
}
]
},
}
}
]
},
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"project": {"year": 2020},
"publishedAt": "2020-12-12T00:00:00+00:00",
"shortDescription": "Together history perform.",
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"shortDescription": "Together history perform.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
}
}
snapshots["test_event_ticket_system_password_assignation 1"] = {
"data": {
"event": {
"ticketSystem": {
"childPassword": "the correct password",
"type": "TICKETMASTER",
}
}
}
}
snapshots["test_event_ticket_system_password_assignation 2"] = {
"data": {
"event": {
"ticketSystem": {
"childPassword": "the correct password",
"type": "TICKETMASTER",
}
}
}
}
snapshots["test_events_and_event_groups_query_normal_user 1"] = {
"data": {
"eventsAndEventGroups": {
"edges": [
{"node": {"__typename": "EventNode", "name": "Published Event"}},
{
"node": {
"__typename": "EventGroupNode",
"name": "Published EventGroup",
}
},
]
}
}
}
snapshots[
"test_events_and_event_groups_query_project_filtering First project in filter, permission to see both projects"
] = {
"data": {
"eventsAndEventGroups": {
"edges": [
{"node": {"__typename": "EventNode", "name": "The project's Event"}},
{
"node": {
"__typename": "EventGroupNode",
"name": "The project's EventGroup",
}
},
]
}
}
}
snapshots[
"test_events_and_event_groups_query_project_filtering No filter, no permission to see another project"
] = {
"data": {
"eventsAndEventGroups": {
"edges": [
{"node": {"__typename": "EventNode", "name": "The project's Event"}},
{
"node": {
"__typename": "EventGroupNode",
"name": "The project's EventGroup",
}
},
]
}
}
}
snapshots[
"test_events_and_event_groups_query_project_filtering No filter, permission to see both projects"
] = {
"data": {
"eventsAndEventGroups": {
"edges": [
{"node": {"__typename": "EventNode", "name": "The project's Event"}},
{
"node": {
"__typename": "EventNode",
"name": "Another project's Event",
}
},
{
"node": {
"__typename": "EventGroupNode",
"name": "The project's EventGroup",
}
},
{
"node": {
"__typename": "EventGroupNode",
"name": "Another project's EventGroup",
}
},
]
}
}
}
snapshots["test_events_and_event_groups_query_project_user 1"] = {
"data": {
"eventsAndEventGroups": {
"edges": [
{"node": {"__typename": "EventNode", "name": "I should be the first"}},
{
"node": {
"__typename": "EventGroupNode",
"name": "I should be the in the middle",
}
},
{"node": {"__typename": "EventNode", "name": "I should be the last"}},
]
}
}
}
snapshots["test_events_query_normal_user 1"] = {
"data": {
"events": {
"edges": [
{
"node": {
"capacityPerOccurrence": 35,
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"duration": 181,
"image": "http://testserver/media/teacher.jpg",
"imageAltText": "",
"name": "Poor lawyer treat free heart significant.",
"occurrences": {
"edges": [
{
"node": {
"enrolmentCount": 0,
"remainingCapacity": 35,
"ticketSystem": {"type": "INTERNAL"},
"time": "1971-04-30T08:38:26+00:00",
"venue": {
"translations": [
{
"description": "Later evening southern would according strong. Analysis season project executive entire.",
"languageCode": "FI",
"name": "Skill down subject town range north skin.",
}
]
},
}
}
]
},
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"project": {"year": 2020},
"publishedAt": "2020-12-12T00:00:00+00:00",
"shortDescription": "Together history perform.",
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"shortDescription": "Together history perform.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
}
]
}
}
}
snapshots["test_events_query_project_user 1"] = {
"data": {
"events": {
"edges": [
{
"node": {
"capacityPerOccurrence": 35,
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"duration": 181,
"image": "http://testserver/media/teacher.jpg",
"imageAltText": "",
"name": "Poor lawyer treat free heart significant.",
"occurrences": {
"edges": [
{
"node": {
"enrolmentCount": 0,
"remainingCapacity": 35,
"ticketSystem": {"type": "INTERNAL"},
"time": "2014-01-28T14:12:00+00:00",
"venue": {
"translations": [
{
"description": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"languageCode": "FI",
"name": "Land especially can quite industry relationship very.",
}
]
},
}
}
]
},
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"project": {"year": 2020},
"publishedAt": "2020-12-12T00:00:00+00:00",
"shortDescription": "Together history perform.",
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"shortDescription": "Together history perform.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
},
{
"node": {
"capacityPerOccurrence": 49,
"createdAt": "2020-12-12T00:00:00+00:00",
"description": """Wonder everything pay parent theory go home. Book and interesting sit future dream party. Truth list pressure stage history.
If his their best. Election stay every something base.""",
"duration": 42,
"image": "http://testserver/media/us.jpg",
"imageAltText": "",
"name": "Skill down subject town range north skin.",
"occurrences": {
"edges": [
{
"node": {
"enrolmentCount": 0,
"remainingCapacity": 49,
"ticketSystem": {"type": "INTERNAL"},
"time": "2001-12-31T04:39:12+00:00",
"venue": {
"translations": [
{
"description": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"languageCode": "FI",
"name": "Land especially can quite industry relationship very.",
}
]
},
}
}
]
},
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"project": {"year": 2020},
"publishedAt": None,
"shortDescription": "Later evening southern would according strong.",
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": """Wonder everything pay parent theory go home. Book and interesting sit future dream party. Truth list pressure stage history.
If his their best. Election stay every something base.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Skill down subject town range north skin.",
"shortDescription": "Later evening southern would according strong.",
}
],
"updatedAt": "2020-12-12T00:00:00+00:00",
}
},
]
}
}
}
snapshots["test_occurrence_available_capacity_and_enrolment_count 1"] = {
"data": {
"occurrence": {
"enrolmentCount": 3,
"enrolments": {"edges": []},
"event": {
"capacityPerOccurrence": 9,
"duration": 1,
"image": "http://testserver/media/law.jpg",
"participantsPerInvite": "FAMILY",
"publishedAt": "2020-12-12T00:00:00+00:00",
"translations": [
{
"description": "Able last in able local. Quite nearly gun two born land. Yeah trouble method yard campaign former model.",
"languageCode": "FI",
"name": "Always sport return student light a point.",
"shortDescription": "Who Mrs public east site chance.",
}
],
},
"occurrenceLanguage": "FI",
"remainingCapacity": 6,
"ticketSystem": {"type": "INTERNAL"},
"time": "2020-12-12T00:00:00+00:00",
"venue": {
"translations": [
{
"accessibilityInfo": """Sit enter stand himself from daughter order. Sign discover eight.
Scientist service wonder everything pay. Moment strong hand push book and interesting sit.""",
"additionalInfo": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"address": """04883 Mary Corner
Port Mikeview, IN 23956""",
"arrivalInstructions": "Benefit treat final central. Past ready join enjoy. Huge get this success commercial recently from.",
"description": """Together history perform. Respond draw military dog hospital number. Certainly again thought summer because serious listen.
Page box child care any concern. Defense level church use.""",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"wwwUrl": "http://www.brooks.com/",
}
]
},
}
}
}
snapshots["test_occurrence_capacity[0-0] 1"] = {
"data": {
"occurrence": {
"capacity": 0,
"capacityOverride": 0,
"enrolmentCount": 0,
"remainingCapacity": 0,
}
}
}
snapshots["test_occurrence_capacity[5-0] 1"] = {
"data": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 0,
"remainingCapacity": 5,
}
}
}
snapshots["test_occurrence_capacity[5-4] 1"] = {
"data": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 4,
"remainingCapacity": 1,
}
}
}
snapshots["test_occurrence_capacity[5-5] 1"] = {
"data": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 5,
"remainingCapacity": 0,
}
}
}
snapshots["test_occurrence_capacity[5-6] 1"] = {
"data": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 6,
"remainingCapacity": 0,
}
}
}
snapshots["test_occurrence_capacity[None-0] 1"] = {
"data": {
"occurrence": {
"capacity": 10,
"capacityOverride": None,
"enrolmentCount": 0,
"remainingCapacity": 10,
}
}
}
snapshots["test_occurrence_capacity[None-10] 1"] = {
"data": {
"occurrence": {
"capacity": 10,
"capacityOverride": None,
"enrolmentCount": 10,
"remainingCapacity": 0,
}
}
}
snapshots["test_occurrence_capacity[None-11] 1"] = {
"data": {
"occurrence": {
"capacity": 10,
"capacityOverride": None,
"enrolmentCount": 11,
"remainingCapacity": 0,
}
}
}
snapshots["test_occurrence_capacity[None-9] 1"] = {
"data": {
"occurrence": {
"capacity": 10,
"capacityOverride": None,
"enrolmentCount": 9,
"remainingCapacity": 1,
}
}
}
snapshots["test_occurrence_query_normal_user 1"] = {
"data": {
"occurrence": {
"enrolmentCount": 0,
"enrolments": {"edges": []},
"event": {
"capacityPerOccurrence": 9,
"duration": 1,
"image": "http://testserver/media/law.jpg",
"participantsPerInvite": "FAMILY",
"publishedAt": "2020-12-12T00:00:00+00:00",
"translations": [
{
"description": "Able last in able local. Quite nearly gun two born land. Yeah trouble method yard campaign former model.",
"languageCode": "FI",
"name": "Always sport return student light a point.",
"shortDescription": "Who Mrs public east site chance.",
}
],
},
"occurrenceLanguage": "FI",
"remainingCapacity": 9,
"ticketSystem": {"type": "INTERNAL"},
"time": "2020-12-12T00:00:00+00:00",
"venue": {
"translations": [
{
"accessibilityInfo": """Sit enter stand himself from daughter order. Sign discover eight.
Scientist service wonder everything pay. Moment strong hand push book and interesting sit.""",
"additionalInfo": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"address": """04883 Mary Corner
Port Mikeview, IN 23956""",
"arrivalInstructions": "Benefit treat final central. Past ready join enjoy. Huge get this success commercial recently from.",
"description": """Together history perform. Respond draw military dog hospital number. Certainly again thought summer because serious listen.
Page box child care any concern. Defense level church use.""",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"wwwUrl": "http://www.brooks.com/",
}
]
},
}
}
}
snapshots["test_occurrence_ticket_system 1"] = {
"data": {
"occurrence": {
"ticketSystem": {"type": "TICKETMASTER", "url": "https://example.com"}
}
}
}
snapshots["test_occurrences_filter_by_date 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "1970-01-02T00:00:00+00:00"}},
{"node": {"time": "1970-01-02T00:00:00+00:00"}},
]
}
}
}
snapshots["test_occurrences_filter_by_event 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "1970-01-01T12:00:00+00:00"}},
{"node": {"time": "1970-01-01T12:00:00+00:00"}},
]
}
}
}
snapshots["test_occurrences_filter_by_language 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "2005-09-07T17:47:05+00:00"}},
{"node": {"time": "2016-04-25T18:13:39+00:00"}},
]
}
}
}
snapshots["test_occurrences_filter_by_project 1"] = {
"data": {
"occurrences": {"edges": [{"node": {"time": "1970-01-01T12:00:00+00:00"}}]}
}
}
snapshots["test_occurrences_filter_by_time 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "1970-01-01T11:00:00+00:00"}},
{"node": {"time": "1970-01-02T11:00:00+00:00"}},
]
}
}
}
snapshots["test_occurrences_filter_by_upcoming 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "1970-01-01T00:00:00+00:00"}},
{"node": {"time": "2020-12-12T00:00:00+00:00"}},
]
}
}
}
snapshots["test_occurrences_filter_by_upcoming_with_leeway[False] 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "2020-12-11T23:29:00+00:00"}},
{"node": {"time": "2020-12-11T23:31:00+00:00"}},
]
}
}
}
snapshots["test_occurrences_filter_by_upcoming_with_leeway[True] 1"] = {
"data": {
"occurrences": {"edges": [{"node": {"time": "2020-12-11T23:31:00+00:00"}}]}
}
}
snapshots["test_occurrences_filter_by_venue 1"] = {
"data": {
"occurrences": {
"edges": [
{"node": {"time": "1998-11-25T00:15:59+00:00"}},
{"node": {"time": "2016-01-01T13:37:17+00:00"}},
{"node": {"time": "2018-03-30T01:34:27+00:00"}},
]
}
}
}
snapshots["test_occurrences_query_normal_user 1"] = {
"data": {
"occurrences": {
"edges": [
{
"node": {
"enrolmentCount": 0,
"event": {
"capacityPerOccurrence": 9,
"duration": 1,
"image": "http://testserver/media/law.jpg",
"participantsPerInvite": "FAMILY",
"publishedAt": "2020-12-12T00:00:00+00:00",
"translations": [
{
"description": "Able last in able local. Quite nearly gun two born land. Yeah trouble method yard campaign former model.",
"languageCode": "FI",
"name": "Always sport return student light a point.",
"shortDescription": "Who Mrs public east site chance.",
}
],
},
"remainingCapacity": 9,
"ticketSystem": {"type": "INTERNAL"},
"time": "2020-12-12T00:00:00+00:00",
"venue": {
"translations": [
{
"accessibilityInfo": """Sit enter stand himself from daughter order. Sign discover eight.
Scientist service wonder everything pay. Moment strong hand push book and interesting sit.""",
"additionalInfo": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"address": """04883 Mary Corner
Port Mikeview, IN 23956""",
"arrivalInstructions": "Benefit treat final central. Past ready join enjoy. Huge get this success commercial recently from.",
"description": """Together history perform. Respond draw military dog hospital number. Certainly again thought summer because serious listen.
Page box child care any concern. Defense level church use.""",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"wwwUrl": "http://www.brooks.com/",
}
]
},
}
}
]
}
}
}
snapshots["test_occurrences_query_project_user 1"] = {
"data": {
"occurrences": {
"edges": [
{
"node": {
"enrolmentCount": 0,
"event": {
"capacityPerOccurrence": 9,
"duration": 1,
"image": "http://testserver/media/law.jpg",
"participantsPerInvite": "FAMILY",
"publishedAt": "2020-12-12T00:00:00+00:00",
"translations": [
{
"description": "Able last in able local. Quite nearly gun two born land. Yeah trouble method yard campaign former model.",
"languageCode": "FI",
"name": "Always sport return student light a point.",
"shortDescription": "Who Mrs public east site chance.",
}
],
},
"remainingCapacity": 9,
"ticketSystem": {"type": "INTERNAL"},
"time": "2020-12-12T00:00:00+00:00",
"venue": {
"translations": [
{
"accessibilityInfo": """Sit enter stand himself from daughter order. Sign discover eight.
Scientist service wonder everything pay. Moment strong hand push book and interesting sit.""",
"additionalInfo": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"address": """04883 Mary Corner
Port Mikeview, IN 23956""",
"arrivalInstructions": "Benefit treat final central. Past ready join enjoy. Huge get this success commercial recently from.",
"description": """Together history perform. Respond draw military dog hospital number. Certainly again thought summer because serious listen.
Page box child care any concern. Defense level church use.""",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"wwwUrl": "http://www.brooks.com/",
}
]
},
}
},
{
"node": {
"enrolmentCount": 0,
"event": {
"capacityPerOccurrence": 47,
"duration": 245,
"image": "http://testserver/media/answer.jpg",
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"publishedAt": None,
"translations": [
{
"description": """Indeed discuss challenge school rule wish. Along hear follow sometimes.
Far magazine on summer.""",
"languageCode": "FI",
"name": "Notice rule huge realize at rather.",
"shortDescription": "Once strong artist save decide listen.",
}
],
},
"remainingCapacity": 47,
"ticketSystem": {"type": "INTERNAL"},
"time": "2020-12-12T06:00:00+00:00",
"venue": {
"translations": [
{
"accessibilityInfo": """Sit enter stand himself from daughter order. Sign discover eight.
Scientist service wonder everything pay. Moment strong hand push book and interesting sit.""",
"additionalInfo": "Training thought price. Effort clear and local challenge box. Care figure mention wrong when lead involve.",
"address": """04883 Mary Corner
Port Mikeview, IN 23956""",
"arrivalInstructions": "Benefit treat final central. Past ready join enjoy. Huge get this success commercial recently from.",
"description": """Together history perform. Respond draw military dog hospital number. Certainly again thought summer because serious listen.
Page box child care any concern. Defense level church use.""",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"wwwUrl": "http://www.brooks.com/",
}
]
},
}
},
]
}
}
}
snapshots["test_publish_event[model_perm] 1"] = {
"data": {"publishEvent": {"event": {"publishedAt": "2020-12-12T00:00:00+00:00"}}}
}
snapshots["test_publish_event[object_perm] 1"] = {
"data": {"publishEvent": {"event": {"publishedAt": "2020-12-12T00:00:00+00:00"}}}
}
snapshots["test_publish_event_group[model_perm] 1"] = {
"data": {
"publishEventGroup": {
"eventGroup": {
"events": {
"edges": [{"node": {"publishedAt": "2020-12-12T00:00:00+00:00"}}]
},
"publishedAt": "2020-12-12T00:00:00+00:00",
}
}
}
}
snapshots["test_publish_event_group[object_perm] 1"] = {
"data": {
"publishEventGroup": {
"eventGroup": {
"events": {
"edges": [{"node": {"publishedAt": "2020-12-12T00:00:00+00:00"}}]
},
"publishedAt": "2020-12-12T00:00:00+00:00",
}
}
}
}
snapshots["test_publish_ticketmaster_event[model_perm-False] 1"] = {
"data": {"publishEvent": {"event": {"publishedAt": "2020-12-12T00:00:00+00:00"}}}
}
snapshots["test_publish_ticketmaster_event[object_perm-False] 1"] = {
"data": {"publishEvent": {"event": {"publishedAt": "2020-12-12T00:00:00+00:00"}}}
}
snapshots["test_required_translation 1"] = {
"data": {
"addEvent": {
"event": {
"capacityPerOccurrence": 30,
"duration": 1000,
"image": "",
"imageAltText": "Image alt text",
"participantsPerInvite": "FAMILY",
"project": {"year": 2020},
"publishedAt": None,
"readyForEventGroupPublishing": False,
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event test",
"shortDescription": "Short desc",
}
],
}
}
}
}
snapshots["test_set_enrolment_attendance[None] 1"] = {
"data": {"setEnrolmentAttendance": {"enrolment": {"attended": None}}}
}
snapshots["test_set_enrolment_attendance[True] 1"] = {
"data": {"setEnrolmentAttendance": {"enrolment": {"attended": True}}}
}
snapshots["test_unenrol_occurrence 1"] = {
"data": {
"unenrolOccurrence": {
"child": {"firstName": "Robert"},
"occurrence": {"time": "2020-12-12T00:00:00+00:00"},
}
}
}
snapshots["test_update_event_group[model_perm] 1"] = {
"data": {
"updateEventGroup": {
"eventGroup": {
"image": "teacher.jpg",
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event group test in suomi",
"shortDescription": "Short desc",
},
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "SV",
"name": "Event group test in swedish",
"shortDescription": "Short desc",
},
],
}
}
}
}
snapshots["test_update_event_group[object_perm] 1"] = {
"data": {
"updateEventGroup": {
"eventGroup": {
"image": "teacher.jpg",
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event group test in suomi",
"shortDescription": "Short desc",
},
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "SV",
"name": "Event group test in swedish",
"shortDescription": "Short desc",
},
],
}
}
}
}
snapshots["test_update_event_project_user 1"] = {
"data": {
"updateEvent": {
"event": {
"capacityPerOccurrence": 30,
"duration": 1000,
"image": "http://testserver/media/teacher.jpg",
"imageAltText": "Image alt text",
"occurrences": {"edges": []},
"participantsPerInvite": "FAMILY",
"readyForEventGroupPublishing": True,
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "FI",
"name": "Event test in suomi",
"shortDescription": "Short desc",
},
{
"description": "desc",
"imageAltText": "Image alt text",
"languageCode": "SV",
"name": "Event test in swedish",
"shortDescription": "Short desc",
},
],
}
}
}
}
snapshots["test_update_event_ready_for_event_group_publishing 1"] = {
"data": {
"updateEvent": {
"event": {
"capacityPerOccurrence": 35,
"duration": 181,
"image": "http://testserver/media/teacher.jpg",
"imageAltText": "",
"occurrences": {"edges": []},
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"readyForEventGroupPublishing": True,
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"shortDescription": "Together history perform.",
}
],
}
}
}
}
snapshots["test_update_internal_ticket_system_event_capacity_required 1"] = {
"data": {
"updateEvent": {
"event": {
"capacityPerOccurrence": 5,
"duration": 181,
"image": "http://testserver/media/teacher.jpg",
"imageAltText": "",
"occurrences": {"edges": []},
"participantsPerInvite": "CHILD_AND_1_OR_2_GUARDIANS",
"readyForEventGroupPublishing": True,
"ticketSystem": {"type": "INTERNAL"},
"translations": [
{
"description": """Least then top sing. Serious listen police shake. Page box child care any concern.
Agree room laugh prevent make. Our very television beat at success decade.""",
"imageAltText": "",
"languageCode": "FI",
"name": "Poor lawyer treat free heart significant.",
"shortDescription": "Together history perform.",
}
],
}
}
}
}
snapshots["test_update_occurrence_project_user 1"] = {
"data": {
"updateOccurrence": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 0,
"event": {"createdAt": "2020-12-12T00:00:00+00:00"},
"occurrenceLanguage": "SV",
"remainingCapacity": 5,
"ticketSystem": {"type": "INTERNAL"},
"time": "1986-12-12T16:40:48+00:00",
"venue": {"createdAt": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_update_occurrence_ticket_system_url[False-False] 1"] = {
"data": {
"updateOccurrence": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 0,
"event": {"createdAt": "2020-12-12T00:00:00+00:00"},
"occurrenceLanguage": "SV",
"remainingCapacity": 5,
"ticketSystem": {
"type": "TICKETMASTER",
"url": "https://updated.example.com",
},
"time": "1986-12-12T16:40:48+00:00",
"venue": {"createdAt": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_update_occurrence_ticket_system_url[False-True] 1"] = {
"data": {
"updateOccurrence": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 0,
"event": {"createdAt": "2020-12-12T00:00:00+00:00"},
"occurrenceLanguage": "SV",
"remainingCapacity": 5,
"ticketSystem": {"type": "TICKETMASTER", "url": ""},
"time": "1986-12-12T16:40:48+00:00",
"venue": {"createdAt": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_update_occurrence_ticket_system_url[True-False] 1"] = {
"data": {
"updateOccurrence": {
"occurrence": {
"capacity": 5,
"capacityOverride": 5,
"enrolmentCount": 0,
"event": {"createdAt": "2020-12-12T00:00:00+00:00"},
"occurrenceLanguage": "SV",
"remainingCapacity": 5,
"ticketSystem": {
"type": "TICKETMASTER",
"url": "https://updated.example.com",
},
"time": "1986-12-12T16:40:48+00:00",
"venue": {"createdAt": "2020-12-12T00:00:00+00:00"},
}
}
}
}
snapshots["test_update_ticketmaster_event[False] 1"] = {
"data": {"updateEvent": {"event": {"ticketSystem": {"type": "TICKETMASTER"}}}}
}
| 39.541787 | 206 | 0.444374 | 4,158 | 54,884 | 5.770322 | 0.122655 | 0.035677 | 0.03301 | 0.02134 | 0.89026 | 0.847622 | 0.825491 | 0.795524 | 0.761764 | 0.744175 | 0 | 0.058609 | 0.431401 | 54,884 | 1,387 | 207 | 39.570296 | 0.710225 | 0.00113 | 0 | 0.580153 | 0 | 0.00687 | 0.471588 | 0.108539 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.003053 | 0.001527 | 0 | 0.001527 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33f6e6f39e443cb0a6cd934ece280e6f40c20801 | 77 | py | Python | src/web/modules/auth/services/None.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | 2 | 2017-04-30T07:29:23.000Z | 2017-04-30T07:36:27.000Z | src/web/modules/auth/services/None.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | null | null | null | src/web/modules/auth/services/None.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | null | null | null | # "None" auth service: no-op authenticator that accepts any credentials
def authenticate(p, username, password):
return True
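
# Usage sketch (assumption): the auth module presumably dispatches to a
# service by name and calls authenticate(p, username, password), treating the
# boolean result as the login decision - with this backend every login passes.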
| 19.25 | 40 | 0.753247 | 9 | 77 | 6.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168831 | 77 | 3 | 41 | 25.666667 | 0.90625 | 0.220779 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
33ffe2b17a11ee2112b85e091637306d10da3b8b | 6,068 | py | Python | satellite/tests/controller/test_flow_handlers.py | aslepakurov/vgs-satellite | 9865a51a88987a32ea0ef89e7c39d2acea5f039a | [
"Apache-2.0"
] | null | null | null | satellite/tests/controller/test_flow_handlers.py | aslepakurov/vgs-satellite | 9865a51a88987a32ea0ef89e7c39d2acea5f039a | [
"Apache-2.0"
] | null | null | null | satellite/tests/controller/test_flow_handlers.py | aslepakurov/vgs-satellite | 9865a51a88987a32ea0ef89e7c39d2acea5f039a | [
"Apache-2.0"
] | null | null | null | import json
from datetime import datetime
from unittest.mock import Mock
from mitmproxy.flow import Error
from satellite.proxy import exceptions
from .base import BaseHandlerTestCase
from ..factories import load_flow
class TestFlowsHandler(BaseHandlerTestCase):
def test_ok(self):
self.proxy_manager.get_flows = Mock(
return_value=[load_flow('http_raw')],
)
response = self.fetch(self.get_url('/flows.json'))
self.assertEqual(response.code, 200)
self.assertMatchSnapshot(json.loads(response.body))
class TestFlowHandler(BaseHandlerTestCase):
def test_get_ok(self):
flow = load_flow('http_raw')
self.proxy_manager.get_flow = Mock(return_value=flow)
response = self.fetch(self.get_url(f'/flows/{flow.id}'))
self.assertEqual(response.code, 200)
self.proxy_manager.get_flow.assert_called_once_with(flow.id)
self.assertMatchSnapshot(json.loads(response.body))
def test_flow_with_error(self):
flow = load_flow('http_raw')
flow.error = Error('Test error', datetime.utcnow().timestamp())
self.proxy_manager.get_flow = Mock(return_value=flow)
response = self.fetch(self.get_url(f'/flows/{flow.id}'))
self.assertEqual(response.code, 200)
response_data = json.loads(response.body)
self.assertEqual(response_data['error'], {
'msg': flow.error.msg,
'timestamp': flow.error.timestamp,
})
def test_get_absent_flow(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
self.proxy_manager.get_flow.side_effect = exceptions.UnexistentFlowError(flow_id)
response = self.fetch(self.get_url(f'/flows/{flow_id}'))
self.assertEqual(response.code, 404)
self.proxy_manager.get_flow.assert_called_once_with(flow_id)
def test_delete_ok(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
response = self.fetch(
self.get_url(f'/flows/{flow_id}'),
method='DELETE',
)
self.assertEqual(response.code, 200)
self.proxy_manager.remove_flow.assert_called_once_with(flow_id)
def test_delete_absent_flow(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
self.proxy_manager.remove_flow.side_effect = exceptions.UnexistentFlowError(flow_id)
response = self.fetch(
self.get_url(f'/flows/{flow_id}'),
method='DELETE',
)
self.assertEqual(response.code, 404)
self.proxy_manager.remove_flow.assert_called_once_with(flow_id)
def test_update_ok(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
flow_data = {'flow': 'data'}
response = self.fetch(
self.get_url(f'/flows/{flow_id}'),
method='PUT',
body=json.dumps(flow_data),
headers={'Content-Type': 'application/json'},
)
self.assertEqual(response.code, 200)
self.proxy_manager.update_flow.assert_called_once_with(flow_id, flow_data)
def test_update_absent_flow(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
flow_data = {'flow': 'data'}
self.proxy_manager.update_flow.side_effect = exceptions.UnexistentFlowError(flow_id)
response = self.fetch(
self.get_url(f'/flows/{flow_id}'),
method='PUT',
body=json.dumps(flow_data),
headers={'Content-Type': 'application/json'},
)
self.assertEqual(response.code, 404)
self.proxy_manager.update_flow.assert_called_once_with(flow_id, flow_data)
def test_update_error(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
flow_data = {'flow': 'data'}
self.proxy_manager.update_flow.side_effect = exceptions.FlowUpdateError(flow_id)
response = self.fetch(
self.get_url(f'/flows/{flow_id}'),
method='PUT',
body=json.dumps(flow_data),
headers={'Content-Type': 'application/json'},
)
self.assertEqual(response.code, 400)
self.proxy_manager.update_flow.assert_called_once_with(flow_id, flow_data)
class TestDuplicateFlowHandler(BaseHandlerTestCase):
def test_ok(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
new_flow_id = '599c2bed-c79a-4ddb-a9df-92cdf999a3a7'
self.proxy_manager.duplicate_flow.return_value = new_flow_id
response = self.fetch(
self.get_url(f'/flows/{flow_id}/duplicate'),
method='POST',
body=b'',
)
self.assertEqual(response.code, 200)
self.assertEqual(response.body.decode(), new_flow_id)
self.proxy_manager.duplicate_flow.assert_called_once_with(flow_id)
def test_absent_error(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
self.proxy_manager.duplicate_flow.side_effect = exceptions.UnexistentFlowError(flow_id)
response = self.fetch(
self.get_url(f'/flows/{flow_id}/duplicate'),
method='POST',
body=b'',
)
self.assertEqual(response.code, 404)
self.proxy_manager.duplicate_flow.assert_called_once_with(flow_id)
class TestReplayFlowHandler(BaseHandlerTestCase):
def test_ok(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
response = self.fetch(
self.get_url(f'/flows/{flow_id}/replay'),
method='POST',
body=b'',
)
self.assertEqual(response.code, 200)
self.proxy_manager.replay_flow.assert_called_once_with(flow_id)
def test_absent_error(self):
flow_id = '23f11ab7-e071-4997-97f3-ace07bb9e56d'
self.proxy_manager.replay_flow.side_effect = exceptions.UnexistentFlowError(flow_id)
response = self.fetch(
self.get_url(f'/flows/{flow_id}/replay'),
method='POST',
body=b'',
)
self.assertEqual(response.code, 404)
self.proxy_manager.replay_flow.assert_called_once_with(flow_id)
| 38.405063 | 95 | 0.658207 | 741 | 6,068 | 5.140351 | 0.11336 | 0.066159 | 0.088212 | 0.071672 | 0.824101 | 0.801785 | 0.760567 | 0.750591 | 0.746653 | 0.744815 | 0 | 0.054031 | 0.22528 | 6,068 | 157 | 96 | 38.649682 | 0.756222 | 0 | 0 | 0.622222 | 0 | 0 | 0.136618 | 0.081411 | 0 | 0 | 0 | 0 | 0.207407 | 1 | 0.096296 | false | 0 | 0.051852 | 0 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1d5cd852359c1fdca30a625410239f440aa35e3a | 277 | py | Python | copypaster/stories/__init__.py | kr1surb4n/copypaster | f4dd696a2f4332ec0120305f6db314d5ff2854cb | [
"MIT"
] | null | null | null | copypaster/stories/__init__.py | kr1surb4n/copypaster | f4dd696a2f4332ec0120305f6db314d5ff2854cb | [
"MIT"
] | null | null | null | copypaster/stories/__init__.py | kr1surb4n/copypaster | f4dd696a2f4332ec0120305f6db314d5ff2854cb | [
"MIT"
] | null | null | null | """Here we load all the stories.
We do that by importing each module wholesale (wildcard imports).
The order is probably important.
"""
from .button_grid import * # noqa
from .buttons import * # noqa
from .clipboard import * # noqa
from .state_buttons import * # noqa
from .gtk_events import * # noqa
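
# Sketch of the register-on-import pattern the docstring describes
# (illustrative names, not from this package): each story module hooks itself
# up as an import side effect, so listing it above is enough, e.g.
#
#   # clipboard.py
#   subscribe("copy", lambda value: Copy.add(value))
#
# Since registration happens at import time, the import order above can decide
# which handler wins if two stories subscribe to the same signal.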
| 25.181818 | 36 | 0.714801 | 41 | 277 | 4.756098 | 0.634146 | 0.25641 | 0.287179 | 0.215385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198556 | 277 | 10 | 37 | 27.7 | 0.878378 | 0.451264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d600836a0ca18e838c016f5dea9a5d9990de605 | 269 | py | Python | info/serializers.py | barretto-c/django-covid-19 | ef0b2ef47ec39a35c47d42f8f56d361af4f032c1 | [
"MIT"
] | null | null | null | info/serializers.py | barretto-c/django-covid-19 | ef0b2ef47ec39a35c47d42f8f56d361af4f032c1 | [
"MIT"
] | 5 | 2021-03-30T13:51:04.000Z | 2021-09-22T19:23:23.000Z | info/serializers.py | barretto-c/django-covid-19 | ef0b2ef47ec39a35c47d42f8f56d361af4f032c1 | [
"MIT"
] | null | null | null | from rest_framework import serializers
from info.models import CovidData
# class CovidDataSerializer(serializers.HyperlinkedModelSerializer):
class CovidDataSerializer(serializers.ModelSerializer):
class Meta:
model = CovidData
fields = '__all__'
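
# Usage sketch (assumption about the surrounding views): fields = '__all__'
# exposes every CovidData model field, so a view can simply do
#   CovidDataSerializer(CovidData.objects.all(), many=True).data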
| 26.9 | 68 | 0.784387 | 24 | 269 | 8.583333 | 0.666667 | 0.23301 | 0.339806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159851 | 269 | 9 | 69 | 29.888889 | 0.911504 | 0.245353 | 0 | 0 | 0 | 0 | 0.034826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d81c751b59dd2fc31d3f0da9f37f2f4c8b9ae91 | 145 | py | Python | eth_scapy_someip/__init__.py | jamores/eth-scapy-someip | 1f62c0e3e86a3669975d8d8e52a807fdb2dc5cd3 | [
"MIT"
] | 49 | 2018-03-26T08:33:49.000Z | 2022-03-31T08:42:55.000Z | eth_scapy_someip/__init__.py | jamores/eth-scapy-someip | 1f62c0e3e86a3669975d8d8e52a807fdb2dc5cd3 | [
"MIT"
] | 8 | 2018-12-20T12:37:32.000Z | 2021-05-18T06:43:00.000Z | eth_scapy_someip/__init__.py | jamores/eth-scapy-someip | 1f62c0e3e86a3669975d8d8e52a807fdb2dc5cd3 | [
"MIT"
] | 19 | 2018-07-23T12:40:54.000Z | 2022-03-11T07:38:51.000Z | from .eth_scapy_someip import SOMEIP
from .eth_scapy_sd import SD
from scapy.packet import bind_layers
# Layer binding: register SD (Service Discovery) as the payload layer of
# SOME/IP so scapy stacks and dissects the two together
bind_layers(SOMEIP, SD)
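
# Usage sketch (assumption): with the binding in place scapy both stacks and
# re-dissects the pair, e.g.
#   pkt = SOMEIP() / SD()
#   SOMEIP(bytes(pkt))  # the payload comes back as an SD layer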
| 18.125 | 36 | 0.82069 | 24 | 145 | 4.708333 | 0.458333 | 0.123894 | 0.212389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131034 | 145 | 7 | 37 | 20.714286 | 0.896825 | 0.089655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d516d53b8e181aa51108c9b4f760cf3e90f5c7ed | 168 | py | Python | notebooks/qnlpws_utils/__init__.py | sntownsend/QuantumComputingProject2021 | 9c43227b01e44b117e8fe39472e5d74a785f4711 | [
"MIT"
] | null | null | null | notebooks/qnlpws_utils/__init__.py | sntownsend/QuantumComputingProject2021 | 9c43227b01e44b117e8fe39472e5d74a785f4711 | [
"MIT"
] | null | null | null | notebooks/qnlpws_utils/__init__.py | sntownsend/QuantumComputingProject2021 | 9c43227b01e44b117e8fe39472e5d74a785f4711 | [
"MIT"
] | null | null | null | from collections import namedtuple
Vocab = namedtuple('Vocab', ['noun', 'unamb_verb', 'amb_verb'])
Params = namedtuple('Params', ['noun', 'unamb_verb', 'amb_verb'])
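
# Usage sketch (assumption): both tuples group entries by word category, e.g.
#   vocab = Vocab(noun=['dog'], unamb_verb=['barks'], amb_verb=['runs'])
#   params = Params(noun=[0.1], unamb_verb=[0.2], amb_verb=[0.3])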
| 24 | 65 | 0.696429 | 20 | 168 | 5.65 | 0.5 | 0.265487 | 0.230089 | 0.283186 | 0.353982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113095 | 168 | 6 | 66 | 28 | 0.758389 | 0 | 0 | 0 | 0 | 0 | 0.331325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d52c4ec6bed5ed08c6b34567645d0fbfb63e83ce | 154 | py | Python | indy_node/server/req_handlers/read_req_handlers/get_nym_handler.py | rantwijk/indy-node | 3cb77dab5482c8b721535020fec41506de819d2e | [
"Apache-2.0"
] | null | null | null | indy_node/server/req_handlers/read_req_handlers/get_nym_handler.py | rantwijk/indy-node | 3cb77dab5482c8b721535020fec41506de819d2e | [
"Apache-2.0"
] | null | null | null | indy_node/server/req_handlers/read_req_handlers/get_nym_handler.py | rantwijk/indy-node | 3cb77dab5482c8b721535020fec41506de819d2e | [
"Apache-2.0"
] | null | null | null | from plenum.server.request_handlers.handler_interfaces.read_request_handler import ReadRequestHandler
class GetNymHandler(ReadRequestHandler):
pass
| 25.666667 | 101 | 0.87013 | 16 | 154 | 8.125 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084416 | 154 | 5 | 102 | 30.8 | 0.921986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
d55686cc00e190de33f90f39bb74a0ed38f5fa4a | 154 | py | Python | components/shopkeep.py | DavidKohler/Roguelike-Python-Game | 6fbeeb109e6a61042f74cde2329bc4a0ef7b76ff | [
"MIT"
] | null | null | null | components/shopkeep.py | DavidKohler/Roguelike-Python-Game | 6fbeeb109e6a61042f74cde2329bc4a0ef7b76ff | [
"MIT"
] | null | null | null | components/shopkeep.py | DavidKohler/Roguelike-Python-Game | 6fbeeb109e6a61042f74cde2329bc4a0ef7b76ff | [
"MIT"
] | null | null | null | class Shopkeep:
'''
Component that marks an entity as a shopkeeper
'''
def __init__(self, is_keeper=False):
self.is_keeper = is_keeper
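
# Usage sketch (assumption): the component is attached to an entity and
# queried when deciding whether to open a trade menu, e.g.
#   entity.shopkeep = Shopkeep(is_keeper=True)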
| 22 | 45 | 0.649351 | 19 | 154 | 4.894737 | 0.684211 | 0.258065 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25974 | 154 | 6 | 46 | 25.666667 | 0.815789 | 0.266234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
d5994631a6b176f48b6274020239e1caa908953f | 5,565 | py | Python | temp/badprint.py | mjm2129/osscap2020 | 7edb83e09168bdbc80b863aecc65da5efbb966cd | [
"Apache-2.0"
] | null | null | null | temp/badprint.py | mjm2129/osscap2020 | 7edb83e09168bdbc80b863aecc65da5efbb966cd | [
"Apache-2.0"
] | 10 | 2020-10-11T09:51:43.000Z | 2020-11-29T13:45:36.000Z | temp/badprint.py | mjm2129/osscap2020 | 7edb83e09168bdbc80b863aecc65da5efbb966cd | [
"Apache-2.0"
] | null | null | null | import time  # required by the time.sleep calls below; LED, SP and the write*/clean* glyph helpers are expected from the surrounding project
num_1=[[0,1,0,0],
[1,1,0,0],
[0,1,0,0],
[0,1,0,0],
[1,1,1,0],
[0,0,0,0]]
num_2=[[0,1,0,0],
[1,0,1,0],
[0,0,1,0],
[0,1,0,0],
[1,1,1,0],
[0,0,0,0]]
num_3=[[1,1,1,0],
[0,0,1,0],
[0,1,0,0],
[0,0,1,0],
[1,1,0,0],
[0,0,0,0]]
num_4=[[1,0,1,0],
[1,0,1,0],
[1,1,1,0],
[0,0,1,0],
[0,0,1,0],
[0,0,0,0]]
num_5=[[1,1,1,0],
[1,0,0,0],
[1,1,0,0],
[0,0,1,0],
[1,1,0,0],
[0,0,0,0]]
num_6=[[0,1,1,0],
[1,0,0,0],
[1,1,0,0],
[1,0,1,0],
[0,1,0,0],
[0,0,0,0]]
num_7=[[1,1,1,0],
[0,0,1,0],
[0,1,0,0],
[1,0,0,0],
[1,0,0,0],
[0,0,0,0]]
num_8=[[0,1,1,0],
[1,0,1,0],
[0,1,0,0],
[1,0,1,0],
[1,1,0,0],
[0,0,0,0]]
num_9=[[0,1,0,0],
[1,0,1,0],
[0,1,1,0],
[0,0,1,0],
[1,1,0,0],
[0,0,0,0]]
num_0=[[0,1,0,0],
[1,0,1,0],
[1,1,1,0],
[1,0,1,0],
[0,1,0,0],
[0,0,0,0]]
list=[[[0,1,0,0],
[1,1,0,0],
[0,1,0,0],
[0,1,0,0],
[1,1,1,0],
[0,0,0,0]],
[[0,1,0,0],
[1,0,1,0],
[0,0,1,0],
[0,1,0,0],
[1,1,1,0],
[0,0,0,0]],
[[1,1,1,0],
[0,0,1,0],
[0,1,0,0],
[0,0,1,0],
[1,1,0,0],
[0,0,0,0]],
[[1,0,1,0],
[1,0,1,0],
[1,1,1,0],
[0,0,1,0],
[0,0,1,0],
[0,0,0,0]],
[[1,1,1,0],
[1,0,0,0],
[1,1,0,0],
[0,0,1,0],
[1,1,0,0],
[0,0,0,0]],
[[0,1,1,0],
[1,0,0,0],
[1,1,0,0],
[1,0,1,0],
[0,1,0,0],
[0,0,0,0]],
[[1,1,1,0],
[0,0,1,0],
[0,1,0,0],
[1,0,0,0],
[1,0,0,0],
[0,0,0,0]],
[[0,1,1,0],
[1,0,1,0],
[0,1,0,0],
[1,0,1,0],
[1,1,0,0],
[0,0,0,0]],
[[0,1,0,0],
[1,0,1,0],
[0,1,1,0],
[0,0,1,0],
[1,1,0,0],
[0,0,0,0]],
[[0,1,0,0],
[1,0,1,0],
[1,1,1,0],
[1,0,1,0],
[0,1,0,0],
[0,0,0,0]]]
aaa=[]
'''
def printtime(a):
aaa=[]
a=str(a)
for j in range(5):
for i in ['1','2','3','4','5','6','7','8','9','0']:
if i==a[j]:
if a[j]=='1':
aaa.append(num_1)
if a[j]=='2':
aaa.append(num_2)
if a[j]=='3':
aaa.append(num_3)
if a[j]=='4':
aaa.append(num_4)
if a[j]=='5':
aaa.append(num_5)
if a[j]=='6':
aaa.append(num_6)
if a[j]=='7':
aaa.append(num_7)
if a[j]=='8':
aaa.append(num_8)
if a[j]=='9':
aaa.append(num_9)
if a[j]=='0':
aaa.append(num_0)
print(aaa[0])
print(aaa[1])'''
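# printgoodgameset clears the 16x32 LED matrix and renders the winner banner:
# each digit of str(p_time) is mapped to its 6x4 bitmap through the zip over
# '1'..'9','0' and num_1..num_0, written out with the external write1/write2
# helpers, then a 3-2-1 countdown runs. The first parameter is immediately
# overwritten by str(p_time); SP, P, W, I, N, ex and the write*/clean2 helpers
# are assumed to come from the surrounding project.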
def printgoodgameset(a,p_time):
for i in range(16):
for j in range(32):
LED.screen[i][j]=0
aaa=[]
a=str(p_time)
for j in range(5):
for i,p in zip(['1','2','3','4','5','6','7','8','9','0'],[num_1,num_2,num_3,num_4,num_5,num_6,num_7,num_8,num_9,num_0]):
if i==a[j]:
aaa.append(p)
write1([SP,P,a,SP,W,I,N,ex],5)
write2(aaa,5)
time.sleep(2)
for i in range(16):
for j in range(32):
LED.screen[i][j]=0
for i in [num_3,num_2,num_1]:
write0([SP,SP,SP,i],3)
time.sleep(1)
clean2()
for i in range(16):
for j in range(32):
LED.screen[i][j]=0
a=14
b=12
c=10
d=55
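# The leftover module-level constants above are shadowed by the writecal
# parameters of the same names. clean3 below blanks the 7x4 centre region
# where the countdown digit is presumably drawn.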
def clean3():
for i in range(6,13,1):
for j in range(15,19,1):
LED.screen[i][j]=0
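# writecal stamps a list of 6x4 digit bitmaps onto LED.screen: b-1 is added to
# each lit pixel (colour/brightness offset), c is the left column and d the
# top row (cf. the corner coordinates passed in printbadgameset below), and
# the cursor q advances 4 columns per glyph so digits sit side by side.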
def writecal(a,b,c,d):
q=0
for letter in a:
for i in range(6):
for j in range(4):
if letter[i][j]!=0:
LED.screen[i+d][j+c+q]=letter[i][j]+b-1  # d: row offset, c: column offset, q: per-glyph cursor
q+=4
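# printbadgameset shows the end-of-game scoreboard: it clears the matrix,
# converts each player's length to a list of digit bitmaps (same zip lookup
# as above), draws one counter per screen corner via writecal, then runs the
# 3-2-1 countdown. The first parameter is unused.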
def printbadgameset(a,p1_len,p2_len,p3_len,p4_len):
for i in range(16):
for j in range(32):
LED.screen[i][j]=0
p1_len=str(p1_len)
p2_len=str(p2_len)
p3_len=str(p3_len)
p4_len=str(p4_len)
ccc=[]
for u in [p1_len, p2_len, p3_len, p4_len]:
bbb=[]
for j in range(len(u)):
for i,p in zip(['1','2','3','4','5','6','7','8','9','0'],[num_1,num_2,num_3,num_4,num_5,num_6,num_7,num_8,num_9,num_0]):
if i==u[j]:
bbb.append(p)
ccc.append(bbb)
writecal(ccc[0],7,0,0)
writecal(ccc[1],7,24,0)
writecal(ccc[2],7,0,10)
writecal(ccc[3],7,24,10)
for i in [num_3,num_2,num_1]:
write0([SP,SP,SP,i],3)
time.sleep(1)
clean3()
for i in range(16):
for j in range(32):
LED.screen[i][j]=0
| 21 | 141 | 0.311411 | 991 | 5,565 | 1.684157 | 0.066599 | 0.22888 | 0.199521 | 0.155782 | 0.624326 | 0.591372 | 0.591372 | 0.571001 | 0.545237 | 0.545237 | 0 | 0.221157 | 0.456424 | 5,565 | 264 | 142 | 21.079545 | 0.330579 | 0 | 0 | 0.71123 | 0 | 0 | 0.004519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02139 | false | 0 | 0 | 0 | 0.02139 | 0.010695 | 0 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
63a820c6777ddcd0f40c679094dbf18f1bd7cd3b | 2,160 | py | Python | app/grandchallenge/cases/migrations/0004_auto_20210929_1055.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 101 | 2018-04-11T14:48:04.000Z | 2022-03-28T00:29:48.000Z | app/grandchallenge/cases/migrations/0004_auto_20210929_1055.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 1,733 | 2018-03-21T11:56:16.000Z | 2022-03-31T14:58:30.000Z | app/grandchallenge/cases/migrations/0004_auto_20210929_1055.py | nlessmann/grand-challenge.org | 36abf6ccb40e2fc3fd3ff00e81deabd76f7e1ef8 | [
"Apache-2.0"
] | 42 | 2018-06-08T05:49:07.000Z | 2022-03-29T08:43:01.000Z | # Generated by Django 3.1.13 on 2021-09-29 10:55
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("cases", "0003_auto_20210406_0753"),
]
operations = [
migrations.AddField(
model_name="image",
name="patient_age",
field=models.CharField(blank=True, default="", max_length=4),
),
migrations.AddField(
model_name="image",
name="patient_birth_date",
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name="image",
name="patient_id",
field=models.CharField(blank=True, default="", max_length=64),
),
migrations.AddField(
model_name="image",
name="patient_name",
field=models.CharField(blank=True, default="", max_length=324),
),
migrations.AddField(
model_name="image",
name="patient_sex",
field=models.CharField(
blank=True,
choices=[("M", "Male"), ("F", "Female"), ("O", "Other")],
default="",
max_length=1,
),
),
migrations.AddField(
model_name="image",
name="series_description",
field=models.CharField(blank=True, default="", max_length=64),
),
migrations.AddField(
model_name="image",
name="series_instance_uid",
field=models.CharField(blank=True, default="", max_length=64),
),
migrations.AddField(
model_name="image",
name="study_date",
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name="image",
name="study_description",
field=models.CharField(blank=True, default="", max_length=64),
),
migrations.AddField(
model_name="image",
name="study_instance_uid",
field=models.CharField(blank=True, default="", max_length=64),
),
]
| 31.304348 | 75 | 0.531481 | 205 | 2,160 | 5.434146 | 0.292683 | 0.16158 | 0.206463 | 0.24237 | 0.809695 | 0.783662 | 0.783662 | 0.60772 | 0.52693 | 0.52693 | 0 | 0.032775 | 0.336111 | 2,160 | 68 | 76 | 31.764706 | 0.744073 | 0.021296 | 0 | 0.612903 | 1 | 0 | 0.113636 | 0.01089 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016129 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
892fd11fa107da2aa9eb53a557b1a0ce360d9660 | 27 | py | Python | bentoml/evalml.py | francoisserra/BentoML | 213e9e9b39e055286f2649c733907df88e6d2503 | [
"Apache-2.0"
] | 1 | 2021-06-12T17:04:07.000Z | 2021-06-12T17:04:07.000Z | bentoml/evalml.py | francoisserra/BentoML | 213e9e9b39e055286f2649c733907df88e6d2503 | [
"Apache-2.0"
] | 4 | 2021-05-16T08:06:25.000Z | 2021-11-13T08:46:36.000Z | bentoml/evalml.py | francoisserra/BentoML | 213e9e9b39e055286f2649c733907df88e6d2503 | [
"Apache-2.0"
] | null | null | null | # TODO: add evalml support
| 13.5 | 26 | 0.740741 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 1 | 27 | 27 | 0.909091 | 0.888889 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8933dc823629c1390cba4f6f9b0075e909d8e740 | 14,447 | py | Python | tests/Parser/TestFoplParser.py | oIi123/TableauxProver | cb527f91f5c2d0393fbfcb3fb501b4480e0c9031 | [
"MIT"
] | null | null | null | tests/Parser/TestFoplParser.py | oIi123/TableauxProver | cb527f91f5c2d0393fbfcb3fb501b4480e0c9031 | [
"MIT"
] | null | null | null | tests/Parser/TestFoplParser.py | oIi123/TableauxProver | cb527f91f5c2d0393fbfcb3fb501b4480e0c9031 | [
"MIT"
] | null | null | null | import unittest
from antlr4 import RecognitionException
from src.Model.FoplExpressionTree import (Predicate, Var, Func, Not, And, Or,
Impl, Eq, AllQuantor, ExistentialQuantor, Const)
from src.Parser.FoplParser import FoplParser
class TestCorrectFoplParser(unittest.TestCase):
def test_atom_1(self):
wff = "P()"
expr = Predicate("P", [])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_atom_2(self):
wff = "Person(X)"
expr = Predicate("Person", [Const("X")])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["X"])
def test_atom_3(self):
wff = "Person(X,Y,Z)"
expr = Predicate("Person", [Const("X"), Const("Y"), Const("Z")])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["X", "Y", "Z"])
def test_atom_4(self):
wff = "Person( X, Y, Z)"
expr = Predicate("Person", [Const("X"), Const("Y"), Const("Z")])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["X", "Y", "Z"])
def test_atom_5(self):
wff = "Person(f())"
expr = Predicate("Person", [Func("f", [])])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_atom_6(self):
wff = "Person(f(X,Y,Z))"
expr = Predicate("Person", [Func("f", [Const("X"), Const("Y"), Const("Z")])])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["X", "Y", "Z"])
def test_atom_7(self):
wff = "Person(A,f(X,g(Y,Z)),B)"
expr = Predicate("Person", [Const("A"), Func("f", [Const("X"), Func("g", [Const("Y"), Const("Z")])]), Const("B")])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "X", "Y", "Z", "B"])
def test_atom_8(self):
wff = "(A)a Person(a,B)"
expr = AllQuantor(Var("a"), Predicate("Person", [Var("a"), Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["B"])
def test_atom_9(self):
wff = "(A)a,b Person(a,b)"
expr = AllQuantor(Var("a"), AllQuantor(Var("b"), Predicate("Person", [Var("a"), Var("b")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, [])
def test_atom_10(self):
wff = "(E)a Person(a,B)"
expr = ExistentialQuantor(Var("a"), Predicate("Person", [Var("a"), Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["B"])
def test_atom_11(self):
wff = "(E)a,b Person(a,b)"
expr = ExistentialQuantor(Var("a"), ExistentialQuantor(Var("b"), Predicate("Person", [Var("a"), Var("b")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, [])
def test_not_1(self):
wff = "!Person(A)"
expr = Not(Predicate("Person", [Const("A")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A"])
def test_not_2(self):
wff = "! Person(A)"
expr = Not(Predicate("Person", [Const("A")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A"])
def test_not_3(self):
wff = "!!Person(A)"
expr = Not(Not(Predicate("Person", [Const("A")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A"])
def test_and_1(self):
wff = "Person(A)&Person(B)"
expr = And(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_and_2(self):
wff = "Person(A) &Person(B)"
expr = And(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_and_3(self):
wff = "Person(A) & Person(B)"
expr = And(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_or_1(self):
wff = "Person(A)|Person(B)"
expr = Or(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_or_2(self):
wff = "Person(A) |Person(B)"
expr = Or(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_or_3(self):
wff = "Person(A) | Person(B)"
expr = Or(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_impl_1(self):
wff = "Person(A)->Person(B)"
expr = Impl(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_impl_2(self):
wff = "Person(A) ->Person(B)"
expr = Impl(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_impl_3(self):
wff = "Person(A) -> Person(B)"
expr = Impl(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_eq_1(self):
wff = "Person(A)<->Person(B)"
expr = Eq(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_eq_2(self):
wff = "Person(A) <->Person(B)"
expr = Eq(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_eq_3(self):
wff = "Person(A) <-> Person(B)"
expr = Eq(Predicate("Person", [Const("A")]), Predicate("Person", [Const("B")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B"])
def test_all_quant_1(self):
wff = "(A)a Person(a)"
expr = AllQuantor(Var("a"), Predicate("Person", [Var("a")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_all_quant_2(self):
wff = "(A)a,b Person(a)"
expr = AllQuantor(Var("a"), AllQuantor(Var("b"), Predicate("Person", [Var("a")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_all_quant_3(self):
wff = "(A)a , b Person(a)"
expr = AllQuantor(Var("a"), AllQuantor(Var("b"), Predicate("Person", [Var("a")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_ex_quant_1(self):
wff = "(E)a Person(a)"
expr = ExistentialQuantor(Var("a"), Predicate("Person", [Var("a")]))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_ex_quant_2(self):
wff = "(E)a,b Person(a)"
expr = ExistentialQuantor(Var("a"), ExistentialQuantor(Var("b"), Predicate("Person", [Var("a")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_ex_quant_3(self):
wff = "(E)a , b Person(a)"
expr = ExistentialQuantor(Var("a"), ExistentialQuantor(Var("b"), Predicate("Person", [Var("a")])))
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_clamp_1(self):
wff = "(Person(A))"
expr = Predicate("Person", [Const("A")])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A"])
def test_clamp_2(self):
wff = "( Person(A) )"
expr = Predicate("Person", [Const("A")])
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A"])
def test_complex_1(self):
wff = "(A)a (E)b (Person(a)&Person(b)->Parent(a,b))"
expr = AllQuantor(
Var("a"),
ExistentialQuantor(
Var("b"),
Impl(
And(Predicate("Person", [Var("a")]), Predicate("Person", [Var("b")])),
Predicate("Parent", [Var("a"), Var("b")])
)
)
)
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_complex_2(self):
wff = "(A)a,b (E)c (Person(a)&Person(b)&Person(c)&Parent(a,b)&Parent(b,c)->Grandparent(a,c))"
expr = AllQuantor(
Var("a"),
AllQuantor(
Var("b"),
ExistentialQuantor(
Var("c"),
Impl(
And(
And(
And(
And(
Predicate("Person", [Var("a")]),
Predicate("Person", [Var("b")])
),
Predicate("Person", [Var("c")])
),
Predicate("Parent", [Var("a"), Var("b")])
),
Predicate("Parent", [Var("b"), Var("c")])
),
Predicate("Grandparent", [Var("a"), Var("c")])
)
)
)
)
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
def test_complex_3(self):
wff = "(P(A)->K(f(A,B))&P(C))<->!(E(C,D)|K(A))"
expr = Eq(
Impl(
Predicate("P", [Const("A")]),
And(
Predicate("K", [Func("f", [Const("A"), Const("B")])]),
Predicate("P", [Const("C")])
)
),
Not(
Or(
Predicate("E", [Const("C"), Const("D")]),
Predicate("K", [Const("A")])
)
)
)
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B", "C", "D"])
def test_complex_4(self):
wff = "P(A)|P(B)&P(C)|P(D)"
expr = Or(
Or(
Predicate("P", [Const("A")]),
And(
Predicate("P", [Const("B")]),
Predicate("P", [Const("C")])
)
),
Predicate("P", [Const("D")])
)
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["A", "B", "C", "D"])
def test_complex_5(self):
wff = "(A)a (P(a)->K(f(a,B))&P(C)<->!(E(C,D)|K(a)))"
expr = AllQuantor(
Var("a"),
Eq(
Impl(
Predicate("P", [Var("a")]),
And(
Predicate("K", [Func("f", [Var("a"), Const("B")])]),
Predicate("P", [Const("C")])
)
),
Not(
Or(
Predicate("E", [Const("C"), Const("D")]),
Predicate("K", [Var("a")])
)
)
)
)
tree = FoplParser.parse(wff)
self.assertEqual(tree.expr, expr)
self.assertEqual(tree.constants, ["B", "C", "D"])

class TestIncorrectFoplParser(unittest.TestCase):
    """Malformed formulas must raise RecognitionException."""

    def test_atom_1(self):
        nwff = "P("
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_atom_2(self):
        nwff = "P)"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_atom_3(self):
        nwff = "P(X,)"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_atom_4(self):
        nwff = "P(X,f()"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_atom_5(self):
        nwff = "P(X,f(X,))"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_atom_6(self):
        nwff = "P(X_123)"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_invalid_variable_1(self):
        # Lowercase terms are variables and must be bound by a quantifier.
        nwff = "P(x)"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_invalid_variable_2(self):
        nwff = "(A)x P(x,y)"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_invalid_variable_3(self):
        nwff = "P(X,y)"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_invalid_scope_override_1(self):
        # Re-binding a variable inside an enclosing quantifier's scope is rejected.
        nwff = "(A)x (P(x) -> (E)x P(x))"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)

    def test_invalid_scope_override_2(self):
        nwff = "(E)x (P(x) -> (A)x P(x))"
        self.assertRaises(RecognitionException, FoplParser.parse, nwff)


if __name__ == '__main__':
    unittest.main()
| 30.478903 | 122 | 0.518378 | 1,654 | 14,447 | 4.454051 | 0.045949 | 0.138455 | 0.175377 | 0.116465 | 0.906475 | 0.87634 | 0.837926 | 0.818379 | 0.806977 | 0.79001 | 0 | 0.005539 | 0.300132 | 14,447 | 473 | 123 | 30.54334 | 0.723074 | 0 | 0 | 0.628319 | 0 | 0.011799 | 0.09829 | 0.015851 | 0 | 0 | 0 | 0 | 0.233038 | 1 | 0.147493 | false | 0 | 0.011799 | 0 | 0.165192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8951a8f9d183c16451e19ff8822b66bacbcb6c14 | 39 | py | Python | pynq_computervision/overlays/bare_hdmi/__init__.py | xupsh/PYNQ-ComputerVision | bccbc43838922f81bd0fa0c57807ad9622b904df | ["BSD-3-Clause"] | 2 | 2019-01-01T15:30:53.000Z | 2022-03-29T23:18:16.000Z | pynq_computervision/overlays/bare_hdmi/__init__.py | xupsh/PYNQ-ComputerVision | bccbc43838922f81bd0fa0c57807ad9622b904df | ["BSD-3-Clause"] | 1 | 2020-02-04T12:16:20.000Z | 2020-02-04T12:16:20.000Z | pynq_computervision/overlays/bare_hdmi/__init__.py | budaidai/PYNQ | 18386878544e71dbac743a69ba2dc0ff73f39c7b | ["BSD-3-Clause"] | 1 | 2018-05-29T12:18:18.000Z | 2018-05-29T12:18:18.000Z | from .bare_hdmi import BareHDMIOverlay
| 19.5 | 38 | 0.871795 | 5 | 39 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
895d6f2a4150409b7e1042f72b7daf9799d381fb | 46 | py | Python | terrascript/http/r.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | ["BSD-2-Clause"] | null | null | null | terrascript/http/r.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | ["BSD-2-Clause"] | null | null | null | terrascript/http/r.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | ["BSD-2-Clause"] | null | null | null | # terrascript/http/r.py
import terrascript
| 9.2 | 24 | 0.76087 | 6 | 46 | 5.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 4 | 25 | 11.5 | 0.897436 | 0.456522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |