hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
80c2c8943ef07c483151f723f33f2e25c9812ed4 | 823 | py | Python | ctreport_selenium/ctreport_html/scripts/imagemodal.py | naveens33/ctreport-selenium | 9553b5c4b8deb52e46cf0fb3e1ea7092028cf090 | [
"MIT"
] | 2 | 2020-08-30T13:12:52.000Z | 2020-09-03T05:38:28.000Z | ctreport_selenium/ctreport_html/scripts/imagemodal.py | naveens33/ctreport-selenium | 9553b5c4b8deb52e46cf0fb3e1ea7092028cf090 | [
"MIT"
] | 5 | 2020-01-10T07:01:24.000Z | 2020-06-25T10:49:43.000Z | ctreport_selenium/ctreport_html/scripts/imagemodal.py | naveens33/ctreport-selenium | 9553b5c4b8deb52e46cf0fb3e1ea7092028cf090 | [
"MIT"
] | 1 | 2020-10-13T02:27:04.000Z | 2020-10-13T02:27:04.000Z | content = '''
<script>
function createimagemodal(path,cap) {
var html = '<div id="modalWindow1" class="modal" data-keyboard="false" data-backdrop="static">\
<span class="close1" onclick="deletemodal(\'modalWindow1\')" data-dismiss="modal">×</span>\
<img class="modal-content" id="img01" style="max-height: -webkit-fill-available; width: auto;">\
<div id="caption"></div>\
</div>';
$("#imagemodal").html(html);
$("#modalWindow1").modal();
var modalImg = document.getElementById("img01");
var captionText = document.getElementById("caption");
modalImg.src = path;
captionText.innerHTML = cap;
}
</script>
'''
| 45.722222 | 126 | 0.526124 | 72 | 823 | 6.013889 | 0.583333 | 0.023095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014109 | 0.311057 | 823 | 17 | 127 | 48.411765 | 0.749559 | 0 | 0 | 0 | 0 | 0.176471 | 0.979344 | 0.400972 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
80da3138e8cac3f24617b1c64d4a39e96388be05 | 111 | py | Python | baseline/__init__.py | leonkozlowski/baseline | 0403b922a9a1e600dd58df064f0e1652b4a99d95 | [
"MIT"
] | null | null | null | baseline/__init__.py | leonkozlowski/baseline | 0403b922a9a1e600dd58df064f0e1652b4a99d95 | [
"MIT"
] | null | null | null | baseline/__init__.py | leonkozlowski/baseline | 0403b922a9a1e600dd58df064f0e1652b4a99d95 | [
"MIT"
] | null | null | null | """Top-level package for baseline."""
__author__ = """Leon Kozlowski"""
__email__ = "leonkozlowski@gmail.com"
| 22.2 | 37 | 0.711712 | 12 | 111 | 5.916667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 111 | 4 | 38 | 27.75 | 0.717172 | 0.279279 | 0 | 0 | 0 | 0 | 0.5 | 0.310811 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
80fac091a53cec58756d148edb036bb9100199df | 622 | py | Python | webApi/books_api/book/models.py | FreeN1ckname/web_api | 50b6ffc03f918e25d36ff11caa1cf5d83628646b | [
"MIT"
] | null | null | null | webApi/books_api/book/models.py | FreeN1ckname/web_api | 50b6ffc03f918e25d36ff11caa1cf5d83628646b | [
"MIT"
] | null | null | null | webApi/books_api/book/models.py | FreeN1ckname/web_api | 50b6ffc03f918e25d36ff11caa1cf5d83628646b | [
"MIT"
] | null | null | null | from django.db import models
class Author(models.Model):
name = models.CharField(max_length=255)
def __str__(self):
return self.name
class Book(models.Model):
image_path = models.TextField()
title = models.CharField(max_length=120)
description = models.TextField()
full_description = models.TextField()
author = models.ForeignKey('Author', related_name='books', on_delete=models.CASCADE)
pages = models.IntegerField()
genre = models.CharField(max_length=255)
date = models.DateField()
price = models.IntegerField()
def __str__(self):
return self.title
| 25.916667 | 88 | 0.699357 | 74 | 622 | 5.675676 | 0.5 | 0.107143 | 0.128571 | 0.171429 | 0.22381 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 0.189711 | 622 | 23 | 89 | 27.043478 | 0.815476 | 0 | 0 | 0.117647 | 0 | 0 | 0.017685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0.117647 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
80fb06b25902668929d58eb3b509e5307fafd92f | 3,737 | py | Python | src/features/feature_extraction.py | AalaaNagy88/Data_Extrafilteration | 79a3af71ec6e020f5c97b73f48421a58c9d0cbd0 | [
"FTL"
] | null | null | null | src/features/feature_extraction.py | AalaaNagy88/Data_Extrafilteration | 79a3af71ec6e020f5c97b73f48421a58c9d0cbd0 | [
"FTL"
] | null | null | null | src/features/feature_extraction.py | AalaaNagy88/Data_Extrafilteration | 79a3af71ec6e020f5c97b73f48421a58c9d0cbd0 | [
"FTL"
] | null | null | null | import tldextract
import pandas as pd
from collections import Counter
import numpy as np
"""
Args:
str_obj: raw data
Returns:
    number of uppercase characters in the raw data.
"""
def get_count_upper_case_letters(str_obj):
count = 0
for elem in str_obj:
if elem.isupper():
count += 1
return count
"""
Args:
str_obj: raw data
Returns:
    number of lowercase characters in the raw data.
"""
def get_count_lower_case_letters(str_obj):
count = 0
for elem in str_obj:
        if elem.islower():  # islower() is already False for digits
count += 1
return count
"""
Args:
str_obj: raw data
Returns:
    number of numeric characters in the raw data.
"""
def get_count_numeric_letters(str_obj):
count = 0
for elem in str_obj:
if elem.isnumeric():
count += 1
return count
"""
Args:
str_obj: raw data
Returns:
    number of special characters in the raw data.
"""
def get_count_special_character(str_obj):
count= 0
for elem in str_obj:
if (elem.isalpha()) or (elem.isdigit() or elem == "."):
continue
else:
count += 1
return count
"""
Args:
str_obj: raw data
Returns:
subdomain,domain,suffix.
"""
def divide_url(str_obj):
subdomain,domain,suffix=tldextract.extract(str_obj)
return subdomain,domain,suffix
"""
Args:
str_obj: raw data
Returns:
    count of all characters except '.'
"""
def get_character_count(str_obj):
count= 0
for elem in str_obj:
if elem==".":
continue
else:
count += 1
return count
"""
Args:
str_obj: raw data
Returns:
    number of characters in the subdomain
"""
def get_subdomain_len(str_obj):
subdomain,_,__=divide_url(str_obj)
return get_character_count(subdomain)
"""
Args:
str_obj: raw data
Returns:
    Shannon entropy of the character distribution in the raw data
"""
def entropy(str_obj):
    # use builtin float and sum: np.float was removed from NumPy and
    # np.sum does not accept a generator expression
    p, lens = Counter(str_obj), float(len(str_obj))
    return -sum(count/lens * np.log2(count/lens) for count in p.values())
"""
Args:
str_obj: raw data
Returns:
    number of labels obtained by splitting the url on '.'
"""
def get_num_labels(str_obj):
N =len(str_obj.split('.'))
return N
"""
Args:
str_obj: raw data
Returns:
    list of label lengths
"""
def get_len_labels(str_obj):
return [len(l) for l in str_obj.split('.')]
"""
Args:
str_obj: raw data
Returns:
    number of characters in the longest label
"""
def get_max_label(str_obj):
return max(get_len_labels(str_obj))
"""
Args:
str_obj: raw data
Returns:
    average label length (total number of characters / number of labels)
"""
def get_average_label(str_obj):
le=get_len_labels(str_obj)
return sum(le)/len(le)
"""
Args:
str_obj: raw data
Returns:
longest label word
"""
def get_longest_word(str_obj):
    lens = get_len_labels(str_obj)
    return str_obj.split('.')[lens.index(max(lens))]
"""
Args:
str_obj: raw data
Returns:
second level domain
"""
def get_sld(str_obj):
_,sld,__=divide_url(str_obj)
return sld
"""
Args:
str_obj: raw data
Returns:
total length of subdomain and domain together
"""
def get_len(str_obj):
subdomain,sld,__=divide_url(str_obj)
return get_character_count(subdomain)+get_character_count(sld)
"""
Args:
str_obj: raw data
Returns:
    1 if there is a subdomain, 0 otherwise.
"""
def check_subdomain(str_obj):
    subdomain,_,__=divide_url(str_obj)
    # subdomain is a string, so compare against the empty string, not 0
    return 0 if subdomain=="" else 1 | 19.463542 | 75 | 0.617073 | 527 | 3,737 | 4.163188 | 0.180266 | 0.139471 | 0.072926 | 0.094804 | 0.544667 | 0.520966 | 0.41021 | 0.385597 | 0.27484 | 0.209207 | 0 | 0.005204 | 0.280171 | 3,737 | 192 | 76 | 19.463542 | 0.810409 | 0 | 0 | 0.371429 | 0 | 0 | 0.002397 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.228571 | false | 0 | 0.057143 | 0.028571 | 0.514286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
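The `entropy` helper in the feature_extraction.py row above computes character-level Shannon entropy. A minimal standalone sketch of the same idea (hypothetical name, using only the standard library with `math.log2` in place of `np.log2`):

```python
import math
from collections import Counter

def shannon_entropy(s):
    # -sum(p * log2(p)) over the character frequency distribution of s
    counts = Counter(s)
    n = float(len(s))
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Four equally likely distinct characters need exactly 2 bits each;
# a string of one repeated character carries zero entropy.
print(shannon_entropy("abcd"))  # 2.0
```

High entropy in a DNS label is a common signal of encoded payloads in exfiltration traffic, which is why it appears among these features.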
0393a02a3f437f1ea1a0d4adcfe7ef1fd52c869a | 441 | py | Python | mmdet3d/core/anchor/__init__.py | HuangCongQing/mmdetection3d | 63197070ec99f3d1b4569e7f23558646371fffdc | [
"Apache-2.0"
] | 2 | 2022-01-28T01:43:48.000Z | 2022-03-08T14:12:49.000Z | mmdet3d/core/anchor/__init__.py | HuangCongQing/mmdetection3d | 63197070ec99f3d1b4569e7f23558646371fffdc | [
"Apache-2.0"
] | 1 | 2021-10-02T15:11:59.000Z | 2021-10-04T02:58:30.000Z | mmdet3d/core/anchor/__init__.py | HuangCongQing/mmdetection3d | 63197070ec99f3d1b4569e7f23558646371fffdc | [
"Apache-2.0"
] | 1 | 2021-08-30T06:16:47.000Z | 2021-08-30T06:16:47.000Z | # Copyright (c) OpenMMLab. All rights reserved.
from mmdet.core.anchor import build_anchor_generator
from .anchor_3d_generator import (AlignedAnchor3DRangeGenerator,
AlignedAnchor3DRangeGeneratorPerCls,
Anchor3DRangeGenerator)
__all__ = [
'AlignedAnchor3DRangeGenerator', 'Anchor3DRangeGenerator',
'build_anchor_generator', 'AlignedAnchor3DRangeGeneratorPerCls'
]
| 40.090909 | 70 | 0.709751 | 29 | 441 | 10.448276 | 0.586207 | 0.072607 | 0.132013 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020649 | 0.231293 | 441 | 10 | 71 | 44.1 | 0.873156 | 0.102041 | 0 | 0 | 0 | 0 | 0.274112 | 0.274112 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
03a98e551f1502caec750eff2235e3f624baba42 | 422 | py | Python | tests/ex8_tests.py | gravyboat/python-exercises | 50162a9e6f3d51fbb2c15ed08fcecba810d61338 | [
"MIT"
] | null | null | null | tests/ex8_tests.py | gravyboat/python-exercises | 50162a9e6f3d51fbb2c15ed08fcecba810d61338 | [
"MIT"
] | null | null | null | tests/ex8_tests.py | gravyboat/python-exercises | 50162a9e6f3d51fbb2c15ed08fcecba810d61338 | [
"MIT"
] | null | null | null | from nose.tools import *
from exercises import ex8
def test_palindrome():
'''
Test whether our palindrome check returns True
'''
test_is_palindrome = ex8.is_palindrome('radar')
    assert_true(test_is_palindrome)
def test_not_palindrome():
'''
Test whether our palindrome check returns False
'''
test_is_not_palindrome = ex8.is_palindrome('test')
assert_false(test_is_not_palindrome)
| 21.1 | 54 | 0.718009 | 54 | 422 | 5.314815 | 0.351852 | 0.146341 | 0.146341 | 0.167247 | 0.487805 | 0.320557 | 0.320557 | 0 | 0 | 0 | 0 | 0.00885 | 0.196682 | 422 | 19 | 55 | 22.210526 | 0.837758 | 0.222749 | 0 | 0 | 0 | 0 | 0.030405 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
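The `ex8.is_palindrome` function exercised by the tests in the row above is not included in this row; a hypothetical implementation consistent with both assertions could be as simple as a reversed-string comparison:

```python
# Hypothetical stand-in for ex8.is_palindrome (the actual module is not
# part of this row): a word is a palindrome if it equals its reverse.
def is_palindrome(word):
    return word == word[::-1]

print(is_palindrome('radar'))  # True
print(is_palindrome('test'))   # False
```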
03be16b71d5df8fec6290c0562c7583fd15661ab | 26,373 | py | Python | sysinv/sysinv/sysinv/sysinv/tests/api/test_helm_charts.py | albailey/config | 40ebe63d7dfc6a0a03216ebe55ed3ec9cf5410b9 | [
"Apache-2.0"
] | null | null | null | sysinv/sysinv/sysinv/sysinv/tests/api/test_helm_charts.py | albailey/config | 40ebe63d7dfc6a0a03216ebe55ed3ec9cf5410b9 | [
"Apache-2.0"
] | null | null | null | sysinv/sysinv/sysinv/sysinv/tests/api/test_helm_charts.py | albailey/config | 40ebe63d7dfc6a0a03216ebe55ed3ec9cf5410b9 | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
Tests for the helm chart methods.
"""
import mock
from six.moves import http_client
from sysinv.tests.api import base
from sysinv.tests.db import base as dbbase
from sysinv.tests.db import utils as dbutils
class FakeConductorAPI(object):
def __init__(self):
self.app_has_system_plugins = mock.MagicMock()
self.get_helm_application_namespaces = mock.MagicMock()
self.get_active_helm_applications = mock.MagicMock()
self.get_helm_chart_overrides = mock.MagicMock()
self.merge_overrides = mock.MagicMock()
class FakeException(Exception):
pass
class ApiHelmChartTestCaseMixin(base.FunctionalTest,
dbbase.ControllerHostTestCase):
# API_HEADERS are a generic header passed to most API calls
API_HEADERS = {'User-Agent': 'sysinv-test'}
# API_PREFIX is the prefix for the URL
API_PREFIX = '/helm_charts'
# RESULT_KEY is the python table key for the list of results
RESULT_KEY = 'charts'
# expected_api_fields are attributes that should be populated by
# an API query
expected_api_fields = ['name',
'namespace',
'user_overrides',
'system_overrides',
'app_id']
# hidden_api_fields are attributes that should not be populated by
# an API query
hidden_api_fields = ['app_id']
def setUp(self):
super(ApiHelmChartTestCaseMixin, self).setUp()
self.fake_conductor_api = FakeConductorAPI()
p = mock.patch('sysinv.conductor.rpcapi.ConductorAPI')
self.mock_conductor_api = p.start()
self.mock_conductor_api.return_value = self.fake_conductor_api
self.addCleanup(p.stop)
self.helm_app = self._create_db_app()
self.helm_override_obj_one = self._create_db_overrides(
appid=self.helm_app.id,
chart_name='ceph-pools-audit',
chart_namespace='kube-system',
system_override_attr={"enabled": True},
user_override="global:\n replicas: \"2\"\n")
self.helm_override_obj_two = self._create_db_overrides(
appid=self.helm_app.id,
chart_name='rbd-provisioner',
chart_namespace='kube-system',
system_override_attr={"enabled": False},
user_override="global:\n replicas: \"3\"\n")
self.fake_helm_apps = self.fake_conductor_api.get_active_helm_applications
self.fake_ns = self.fake_conductor_api.get_helm_application_namespaces
self.fake_override = self.fake_conductor_api.get_helm_chart_overrides
self.fake_merge_overrides = self.fake_conductor_api.merge_overrides
self.fake_system_app = self.fake_conductor_api.app_has_system_plugins
def exception_helm_override(self):
print('Raised a fake exception')
raise FakeException
def get_single_url_helm_override_list(self, app_name):
return '%s/?app_name=%s' % (self.API_PREFIX, app_name)
def get_single_url_helm_override(self, app_name, chart_name, namespace):
return '%s/%s?name=%s&namespace=%s' % (self.API_PREFIX, app_name,
chart_name, namespace)
def _create_db_app(self, obj_id=None):
return dbutils.create_test_app(id=obj_id, name='platform-integ-apps',
app_version='1.0-8',
manifest_name='platform-integration-manifest',
manifest_file='manifest.yaml',
status='applied',
active=True)
def _create_db_overrides(self, appid, chart_name, chart_namespace,
system_override_attr, user_override, obj_id=None):
return dbutils.create_test_helm_overrides(id=obj_id,
app_id=appid,
name=chart_name,
namespace=chart_namespace,
system_overrides=system_override_attr,
user_overrides=user_override)
class ApiHelmChartListTestSuiteMixin(ApiHelmChartTestCaseMixin):
""" Helm Override List GET operations
"""
def setUp(self):
super(ApiHelmChartListTestSuiteMixin, self).setUp()
def test_fetch_success_helm_override_list(self):
# Return a namespace dictionary
self.fake_ns.return_value = {'ceph-pools-audit': ['kube-system'],
'rbd-provisioner': ['kube-system']}
url = self.get_single_url_helm_override_list('platform-integ-apps')
response = self.get_json(url)
# Verify the values of the response with the object values in database
self.assertEqual(len(response[self.RESULT_KEY]), 2)
# py36 preserves insertion order, whereas py27 does not
result_one = response[self.RESULT_KEY][0]
result_two = response[self.RESULT_KEY][1]
self.assertTrue(result_one['name'] == self.helm_override_obj_one.name or
result_two['name'] == self.helm_override_obj_one.name)
self.assertTrue(result_one['name'] == self.helm_override_obj_two.name or
result_two['name'] == self.helm_override_obj_two.name)
if(result_one['name'] == self.helm_override_obj_one.name):
self.assertTrue(result_one['enabled'] == [True])
self.assertTrue(result_two['enabled'] == [False])
else:
self.assertTrue(result_two['enabled'] == [True])
self.assertTrue(result_one['enabled'] == [False])
def test_fetch_helm_override_list_exception(self):
# Raise an exception while finding helm charts for an application
self.fake_ns.side_effect = self.exception_helm_override
url = self.get_single_url_helm_override_list('platform-integ-apps')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Unable to get the helm charts for application "
"platform-integ-apps",
response.json['error_message'])
def test_fetch_helm_override_list_invalid_value(self):
self.fake_ns.return_value = {'ceph-pools-audit': ['kube-system']}
url = self.get_single_url_helm_override_list('invalid_app_name')
# Pass an invalid value for app name
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Application invalid_app_name not found.",
response.json['error_message'])
class ApiHelmChartShowTestSuiteMixin(ApiHelmChartTestCaseMixin):
""" Helm Override Show GET operations
"""
def setUp(self):
super(ApiHelmChartShowTestSuiteMixin, self).setUp()
def test_no_system_override(self):
self.fake_system_app.return_value = False
url = self.get_single_url_helm_override('platform-integ-apps',
'ceph-pools-audit', 'kube-system')
response = self.get_json(url)
# Verify the values of the response with the values stored in database
self.assertEqual(response['name'], self.helm_override_obj_one.name)
self.assertIn(self.helm_override_obj_one.namespace,
response['namespace'])
def test_fetch_helm_override_show_invalid_application(self):
url = self.get_single_url_helm_override('invalid_value',
'ceph-pools-audit', 'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Application invalid_value not found.",
response.json['error_message'])
def test_fetch_helm_override_show_invalid_helm_chart(self):
self.fake_system_app.return_value = False
url = self.get_single_url_helm_override('platform-integ-apps',
'invalid_value', 'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Unable to get the helm chart attributes for chart "
"invalid_value under Namespace kube-system",
response.json['error_message'])
def test_fetch_helm_override_show_invalid_namespace(self):
self.fake_system_app.return_value = False
url = self.get_single_url_helm_override('platform-integ-apps',
'ceph-pools-audit',
'invalid_value')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Unable to get the helm chart attributes for chart "
"ceph-pools-audit under Namespace invalid_value",
response.json['error_message'])
def test_fetch_helm_override_show_empty_name(self):
url = self.get_single_url_helm_override('platform-integ-apps',
'',
'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Name must be specified.",
response.json['error_message'])
def test_fetch_helm_override_show_empty_namespace(self):
url = self.get_single_url_helm_override('platform-integ-apps',
'ceph-pools-audit',
'')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Namespace must be specified.",
response.json['error_message'])
def test_fetch_helm_override_no_system_overrides_fetched(self):
# Return system apps
self.fake_helm_apps.return_value = ['platform-integ-apps']
url = self.get_single_url_helm_override('platform-integ-apps',
'ceph-pools-audit', 'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Unable to get the helm chart overrides for chart "
"ceph-pools-audit under Namespace kube-system",
response.json['error_message'])
def test_fetch_success_helm_override_show(self):
# Return system apps
self.fake_helm_apps.return_value = ['platform-integ-apps']
# Return helm chart overrides
self.fake_override.return_value = {"enabled": True}
url = self.get_single_url_helm_override('platform-integ-apps',
'ceph-pools-audit', 'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.status_code, http_client.OK)
self.assertEqual(response.content_type, 'application/json')
# Verify the values of the response with the values in database
self.assertEqual(response.json['name'],
self.helm_override_obj_one.name)
self.assertEqual(response.json['namespace'],
self.helm_override_obj_one.namespace)
self.assertEqual(response.json['attributes'],
"enabled: true\n")
self.assertEqual(response.json['system_overrides'],
"{enabled: true}\n")
self.assertEqual(response.json['user_overrides'],
"global:\n replicas: \"2\"\n")
self.assertEqual(response.json['combined_overrides'], {})
class ApiHelmChartDeleteTestSuiteMixin(ApiHelmChartTestCaseMixin):
""" Helm Override delete operations
"""
def setUp(self):
super(ApiHelmChartDeleteTestSuiteMixin, self).setUp()
# Test that a valid DELETE operation is successful
def test_delete_helm_override_success(self):
self.fake_system_app.return_value = False
# Verify that user override exists initially
url = self.get_single_url_helm_override('platform-integ-apps',
'rbd-provisioner', 'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.json['user_overrides'],
'global:\n replicas: \"3\"\n')
# Perform delete operation
response = self.delete(url, expect_errors=True)
# Verify the expected API response for the delete
self.assertEqual(response.status_code, http_client.NO_CONTENT)
# Verify that the user override is deleted
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.json['user_overrides'], None)
def test_delete_helm_override_empty_name(self):
url = self.get_single_url_helm_override('platform-integ-apps',
'',
'kube-system')
response = self.delete(url, expect_errors=True)
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Name must be specified.", response.json['error_message'])
def test_delete_helm_override_empty_namespace(self):
url = self.get_single_url_helm_override('platform-integ-apps',
'ceph-pools-audit',
'')
response = self.delete(url, expect_errors=True)
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Namespace must be specified.",
response.json['error_message'])
def test_delete_helm_override_invalid_application(self):
url = self.get_single_url_helm_override('invalid_application',
'ceph-pools-audit', 'kube-system')
response = self.delete(url, expect_errors=True)
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Application invalid_application not found.",
response.json['error_message'])
def test_delete_helm_override_invalid_helm_override(self):
url = self.get_single_url_helm_override('platform-integ-apps',
'invalid_name', 'invalid_namespace')
response = self.delete(url, expect_errors=True)
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.NO_CONTENT)
class ApiHelmChartPatchTestSuiteMixin(ApiHelmChartTestCaseMixin):
""" Helm Override patch operations
"""
def setUp(self):
super(ApiHelmChartPatchTestSuiteMixin, self).setUp()
def test_success_helm_override_patch(self):
# Return system apps
self.fake_helm_apps.return_value = ['platform-integ-apps']
# Return helm chart overrides
self.fake_override.return_value = {"enabled": True}
self.fake_merge_overrides.return_value = "global:\n replicas: \"2\"\n"
        # Patch the user overrides via the API
response = self.patch_json(self.get_single_url_helm_override(
'platform-integ-apps',
'rbd-provisioner', 'kube-system'),
{'attributes': {},
'flag': 'reuse',
'values': {'files': [],
'set': ['global.replicas=2']}},
headers=self.API_HEADERS,
expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.status_code, http_client.OK)
# Verify that the helm override was updated
url = self.get_single_url_helm_override('platform-integ-apps',
'rbd-provisioner', 'kube-system')
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.json['user_overrides'],
'global:\n replicas: \"2\"\n')
def test_helm_override_patch_attribute(self):
# Return system apps
self.fake_helm_apps.return_value = ['platform-integ-apps']
# Return helm chart overrides
self.fake_override.return_value = {"enabled": False}
self.fake_merge_overrides.return_value = "global:\n replicas: \"2\"\n"
        # Patch the chart attribute via the API
url = self.get_single_url_helm_override('platform-integ-apps',
'rbd-provisioner', 'kube-system')
response = self.patch_json(url,
{'attributes': {"enabled": "false"},
'flag': '',
'values': {}},
headers=self.API_HEADERS,
expect_errors=True)
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.status_code, http_client.OK)
# Verify that the helm chart attribute was updated
response = self.get_json(url, expect_errors=True)
self.assertEqual(response.json['attributes'], 'enabled: false\n')
def test_patch_invalid_application(self):
url = self.get_single_url_helm_override('invalid_app_name',
'rbd-provisioner', 'kube-system')
response = self.patch_json(url,
{'attributes': {},
'flag': 'reuse',
'values': {'files': [],
'set': ['global.replicas=2']}},
headers=self.API_HEADERS,
expect_errors=True)
# Verify appropriate exception is raised
self.assertEqual(response.status_code, http_client.BAD_REQUEST)
self.assertIn("Application invalid_app_name not found.",
response.json['error_message'])
    def test_patch_empty_name(self):
        url = self.get_single_url_helm_override('platform-integ-apps',
                                                '',
                                                'kube-system')
        response = self.patch_json(url,
                                   {'attributes': {},
                                    'flag': 'reuse',
                                    'values': {'files': [],
                                               'set': ['global.replicas=2']}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)

        # Verify appropriate exception is raised
        self.assertEqual(response.status_code, http_client.BAD_REQUEST)
        self.assertIn("Name must be specified.", response.json['error_message'])
    def test_patch_empty_namespace(self):
        url = self.get_single_url_helm_override('platform-integ-apps',
                                                'rbd-provisioner',
                                                '')
        response = self.patch_json(url,
                                   {'attributes': {},
                                    'flag': 'reuse',
                                    'values': {'files': [],
                                               'set': ['global.replicas=2']}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)

        # Verify appropriate exception is raised
        self.assertEqual(response.status_code, http_client.BAD_REQUEST)
        self.assertIn("Namespace must be specified.",
                      response.json['error_message'])
    def test_patch_invalid_attribute(self):
        url = self.get_single_url_helm_override('platform-integ-apps',
                                                'rbd-provisioner', 'kube-system')
        response = self.patch_json(url,
                                   {'attributes': {"invalid_attr": "false"},
                                    'flag': '',
                                    'values': {}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)

        # Verify appropriate exception is raised
        self.assertEqual(response.status_code, http_client.BAD_REQUEST)
        self.assertIn("Invalid chart attribute: invalid_attr must "
                      "be one of [enabled]",
                      response.json['error_message'])
    def test_patch_invalid_flag(self):
        url = self.get_single_url_helm_override('platform-integ-apps',
                                                'rbd-provisioner', 'kube-system')
        response = self.patch_json(url,
                                   {'attributes': {},
                                    'flag': 'invalid_flag',
                                    'values': {'files': [],
                                               'set': ['global.replicas=2']}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)

        # Verify appropriate exception is raised
        self.assertEqual(response.status_code, http_client.BAD_REQUEST)
        self.assertIn("Invalid flag: invalid_flag must be either 'reuse' "
                      "or 'reset'.",
                      response.json['error_message'])
    def test_patch_invalid_helm_override(self):
        url = self.get_single_url_helm_override('platform-integ-apps',
                                                'invalid_name', 'invalid_namespace')
        response = self.patch_json(url,
                                   {'attributes': {},
                                    'flag': 'reuse',
                                    'values': {'files': [],
                                               'set': ['global.replicas=2']}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)

        self.fake_system_app.return_value = False
        response = self.get_json(url, expect_errors=True)
        self.assertEqual(response.status_code, http_client.OK)

        # Verify the values of the response against the values in the database
        self.assertEqual(response.json['name'], 'invalid_name')
        self.assertIn('invalid_namespace', response.json['namespace'])
    def test_patch_multiple_values(self):
        url = self.get_single_url_helm_override('platform-integ-apps',
                                                'rbd-provisioner', 'kube-system')
        response = self.patch_json(url,
                                   {'attributes': {},
                                    'flag': 'reuse',
                                    'values': {'files': [],
                                               'set': ['global.replicas=2,'
                                                       'global.defaultStorageClass=generic']}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)

        # Verify appropriate exception is raised
        self.assertEqual(response.status_code, http_client.BAD_REQUEST)
        self.assertIn("Invalid input: One (or more) set overrides contains "
                      "multiple values. Consider using --values "
                      "option instead.", response.json['error_message'])
    def test_success_helm_override_patch_reset_flag(self):
        # Return system apps
        self.fake_helm_apps.return_value = ['platform-integ-apps']
        # Return helm chart overrides
        self.fake_override.return_value = {"enabled": True}
        self.fake_merge_overrides.return_value = "global:\n replicas: \"2\"\n"

        url = self.get_single_url_helm_override('platform-integ-apps',
                                                'rbd-provisioner',
                                                'kube-system')

        # Patch with the 'reset' flag to clear the user overrides
        response = self.patch_json(url,
                                   {'attributes': {},
                                    'flag': 'reset',
                                    'values': {}},
                                   headers=self.API_HEADERS,
                                   expect_errors=True)
        self.assertEqual(response.content_type, 'application/json')
        self.assertEqual(response.status_code, http_client.OK)

        # Verify that the helm override was updated
        response = self.get_json(url, expect_errors=True)
        self.assertEqual(response.json['user_overrides'], None)
03cf70b70e04dff18e7c128c10b1ca87882d373d | 697 | py | Python | src/eduid_webapp/lookup_mobile_proofing/schemas.py | SUNET/eduid-webapp | 8e531f288d50d18a5c9182003fff2ab6670a44c3 | [
"BSD-3-Clause"
] | null | null | null | src/eduid_webapp/lookup_mobile_proofing/schemas.py | SUNET/eduid-webapp | 8e531f288d50d18a5c9182003fff2ab6670a44c3 | [
"BSD-3-Clause"
] | 161 | 2017-04-13T07:56:38.000Z | 2021-03-12T13:46:38.000Z | src/eduid_webapp/lookup_mobile_proofing/schemas.py | SUNET/eduid-webapp | 8e531f288d50d18a5c9182003fff2ab6670a44c3 | [
"BSD-3-Clause"
] | 3 | 2016-05-16T20:25:49.000Z | 2018-07-27T12:10:58.000Z | # -*- coding: utf-8 -*-
from marshmallow import fields

from eduid_common.api.schemas.base import EduidSchema, FluxStandardAction
from eduid_common.api.schemas.csrf import CSRFRequestMixin, CSRFResponseMixin
from eduid_common.api.schemas.validators import validate_nin

__author__ = 'lundberg'


class LookupMobileProofingRequestSchema(EduidSchema, CSRFRequestMixin):

    nin = fields.String(required=True, validate=validate_nin)


class LookupMobileProofingResponseSchema(FluxStandardAction):

    class ResponsePayload(EduidSchema, CSRFResponseMixin):
        success = fields.Boolean(required=True)
        message = fields.String(required=False)

    payload = fields.Nested(ResponsePayload)
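Since `validate_nin` itself is defined in `eduid_common` and not shown here, a rough stdlib-only stand-in illustrates the kind of check such a validator performs. The 12-digit `YYYYMMDDNNNN` format below is an assumption for illustration; the real validator may differ.

```python
import re


def validate_nin_sketch(nin):
    # Hypothetical stand-in for eduid_common's validate_nin: accept a
    # 12-digit national identity number starting with 19 or 20.
    return bool(re.match(r"^(19|20)\d{10}$", nin))


assert validate_nin_sketch("197801011234")
assert not validate_nin_sketch("abc")
```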
03e02068990d33c803296a496627d839aa8395e7 | 158 | py | Python | dbtest.py | AnykeyNL/AutonomousDB_AutonomousLinux_Python | 2c7bc7379c639e4762164e5d681202a58928ea63 | [
"Apache-2.0"
] | null | null | null | dbtest.py | AnykeyNL/AutonomousDB_AutonomousLinux_Python | 2c7bc7379c639e4762164e5d681202a58928ea63 | [
"Apache-2.0"
] | null | null | null | dbtest.py | AnykeyNL/AutonomousDB_AutonomousLinux_Python | 2c7bc7379c639e4762164e5d681202a58928ea63 | [
"Apache-2.0"
] | 1 | 2020-08-04T08:51:17.000Z | 2020-08-04T08:51:17.000Z | import cx_Oracle
DB = "xxx_high"
DB_USER = "admin"
DB_PASSWORD = "password"
connection = cx_Oracle.connect(DB_USER, DB_PASSWORD, DB)
print("Connected")
03f7f6933e025ddd5c53810d695306a37e6f015d | 204 | py | Python | core/const.py | michakinchen1988/spotify-downloader | 3db04ce63510236c9a7a00f7b30391683c4c31c3 | [
"MIT"
] | null | null | null | core/const.py | michakinchen1988/spotify-downloader | 3db04ce63510236c9a7a00f7b30391683c4c31c3 | [
"MIT"
] | null | null | null | core/const.py | michakinchen1988/spotify-downloader | 3db04ce63510236c9a7a00f7b30391683c4c31c3 | [
"MIT"
] | 1 | 2021-07-27T00:32:12.000Z | 2021-07-27T00:32:12.000Z | import logzero
_log_format = ("%(color)s%(levelname)s:%(end_color)s %(message)s")
formatter = logzero.LogFormatter(fmt=_log_format)
# options
log = logzero.setup_logger(formatter=formatter)
args = None
03fca20871cb164cff545a00227143d29ef6f47f | 1,146 | py | Python | cucoloris/background.py | woernerm/cucoloris | c551751d72e873802451cb9afdf83678438806c4 | [
"MIT"
] | null | null | null | cucoloris/background.py | woernerm/cucoloris | c551751d72e873802451cb9afdf83678438806c4 | [
"MIT"
] | null | null | null | cucoloris/background.py | woernerm/cucoloris | c551751d72e873802451cb9afdf83678438806c4 | [
"MIT"
] | null | null | null | """Defines a background widget to create a single-color background."""
from os.path import dirname as _dirname
from os.path import join as _join
from typing import Optional as _Optional

from kivy.lang.builder import Builder as _Builder
from kivy.properties import ListProperty as _ListProperty
from kivy.uix.widget import Widget as _Widget

# os.path.join keeps the .kv path portable (the original '\\' separator
# only worked on Windows).
_Builder.load_file(_join(_dirname(__file__), 'background.kv'))
class Background(_Widget):
    """Simple single-color background.

    Kivy does not provide a background by default. This widget adds one
    whose color you can select.
    """

    color = _ListProperty([1, 1, 1])
    """Color of the background.

    The color must be given as RGB values, each between 0 and 1.
    """

    def __init__(self, color: _Optional[list] = None, **kwargs):
        """Initialization method of the widget.

        Args:
            color: The color of the background given as RGB values,
                each between 0 and 1.
            **kwargs: Keyed arguments passed on to the base class
                of the widget.
        """
        self.color = color if color else self.color
        super(Background, self).__init__(**kwargs)
ff005b367c5ea64c0b0770d45246378de3d85d1a | 227 | py | Python | settings.py | cowgoat88/dcord-bcoin | cc464597d45fd46d88fc7e051edcb736580155d8 | [
"MIT"
] | null | null | null | settings.py | cowgoat88/dcord-bcoin | cc464597d45fd46d88fc7e051edcb736580155d8 | [
"MIT"
] | 1 | 2021-03-31T19:57:50.000Z | 2021-03-31T19:57:50.000Z | settings.py | cowgoat88/dcord-bcoin | cc464597d45fd46d88fc7e051edcb736580155d8 | [
"MIT"
] | null | null | null | from dotenv import load_dotenv
import os

load_dotenv()

DISCORD_CLIENT = os.getenv('DISCORD_CLIENT')
DISCORD_GUILD = os.getenv('DISCORD_GUILD')
DISCORD_CHANNEL = os.getenv('DISCORD_CHANNEL')
WORLDCOIN = os.getenv('WORLDCOIN')
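The same lookup pattern works with the standard library alone when python-dotenv is unavailable; `"demo-token"` below is a made-up placeholder, not a real credential.

```python
import os

# Stdlib-only sketch: seed a default value, then read it back the same
# way the settings module above does.
os.environ.setdefault("DISCORD_CLIENT", "demo-token")
DISCORD_CLIENT = os.getenv("DISCORD_CLIENT")
assert DISCORD_CLIENT is not None
```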
ff099386e9d6fdebc72bb87d2f9efa87d694485c | 848 | py | Python | app/core/utils/urls.py | sololuz/cibb-web | 665de9832e8a262f9051f4075572f5aed0553f6e | [
"BSD-3-Clause"
] | null | null | null | app/core/utils/urls.py | sololuz/cibb-web | 665de9832e8a262f9051f4075572f5aed0553f6e | [
"BSD-3-Clause"
] | null | null | null | app/core/utils/urls.py | sololuz/cibb-web | 665de9832e8a262f9051f4075572f5aed0553f6e | [
"BSD-3-Clause"
] | null | null | null | # -*- encoding:utf-8 -*-
import django_sites as sites
from django.core.urlresolvers import reverse as django_reverse

URL_TEMPLATE = "{scheme}://{domain}/{path}"


def build_url(path, scheme="http", domain="localhost"):
    return URL_TEMPLATE.format(scheme=scheme, domain=domain, path=path.lstrip("/"))


def is_absolute_url(path):
    """Test whether or not `path` is an absolute url."""
    return path.startswith("http")


def get_absolute_url(path):
    """Return a path as an absolute url."""
    if is_absolute_url(path):
        return path
    site = sites.get_current()
    return build_url(path, scheme=site.scheme, domain=site.domain)


def reverse(viewname, *args, **kwargs):
    """Same behavior as django's reverse but uses django_sites to compute an absolute url."""
    return get_absolute_url(django_reverse(viewname, *args, **kwargs))
205d1226d2532017ddc86cb1c0e4bd29f812d038 | 286 | py | Python | seki/utils.py | ariwaranosai/seki | e0ef5a212055329226aa6cb16be6df6fa9397b37 | [
"MIT"
] | null | null | null | seki/utils.py | ariwaranosai/seki | e0ef5a212055329226aa6cb16be6df6fa9397b37 | [
"MIT"
] | null | null | null | seki/utils.py | ariwaranosai/seki | e0ef5a212055329226aa6cb16be6df6fa9397b37 | [
"MIT"
] | null | null | null | """
Useful functions
"""


def singleton(cls):
    """Turn a class into a singleton class."""
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance
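A quick usage sketch of the decorator (the `Config` class is an illustrative name, and the decorator is restated so the example runs standalone):

```python
def singleton(cls):
    # Restated from the module above so this sketch is self-contained.
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance


@singleton
class Config:
    def __init__(self):
        self.values = {}


a = Config()
b = Config()
assert a is b  # both names refer to the single shared instance
```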
205e569db1515e54181b82aafbfe8267d2e271fb | 5,475 | py | Python | mllaunchpad/model_interface.py | TheFenrisLycaon/mllaunchpad | 9de65d1bee29c64f4c3dee536d036c4cfeb3f079 | [
"Apache-2.0"
] | 15 | 2019-07-20T17:23:29.000Z | 2021-06-03T11:16:54.000Z | mllaunchpad/model_interface.py | TheFenrisLycaon/mllaunchpad | 9de65d1bee29c64f4c3dee536d036c4cfeb3f079 | [
"Apache-2.0"
] | 97 | 2019-07-19T11:22:16.000Z | 2022-03-22T14:17:25.000Z | mllaunchpad/model_interface.py | TheFenrisLycaon/mllaunchpad | 9de65d1bee29c64f4c3dee536d036c4cfeb3f079 | [
"Apache-2.0"
] | 7 | 2019-07-25T09:26:25.000Z | 2022-03-22T09:32:41.000Z | # Stdlib imports
import abc
import logging

logger = logging.getLogger(__name__)


class ModelMakerInterface(abc.ABC):
    """Abstract model factory interface for Data-Scientist-created models.

    Please inherit from this class and put your training code into the
    method "create_trained_model". This method will be called by the framework
    when your model needs to be (re)trained.

    Why not simply use static methods?
    We want to make it possible for create_trained_model to pass extra info to
    test_trained_model without extending the latter with optional keyword
    arguments that might be confusing for the 90% of cases where they are
    not needed. So we rely on the smarts of the person inheriting from this
    class to find a solution/shortcut if they want to do more difficult
    things, e.g. if they want to do the train/test split themselves.
    """

    @abc.abstractmethod
    def create_trained_model(
        self, model_conf, data_sources, data_sinks, old_model=None
    ):
        """Implement this method, including data prep/feature creation.

        No need to test your model here. Put testing code in test_trained_model,
        which will be called automatically after training.
        (Feel free to put common code for preparing data into another function,
        class, library, ...)

        Params:
            model_conf: the model configuration dict from the config file
            data_sources: dict containing the data sources (this includes your
                train/validation/test data), as configured in the config file
            data_sinks: dict containing the data sinks, as configured in the
                config file. Usually unused when training.
            old_model: contains an old model, if it exists, which can be used
                for incremental training. default: None

        Return:
            The trained model/data/anything which you want to use in the predict()
            function. (usually simply a fitted model object, but can be anything,
            like a dict of several models, a model with some extra info, etc.)
            (Whatever you return here gets automatically stuffed into your
            ModelInterface-inherited object and is accessible there using
            predict's model parameter (or the self.contents attribute.))
        """

    @abc.abstractmethod
    def test_trained_model(self, model_conf, data_sources, data_sinks, model):
        """Implement this method, including data prep/feature creation.

        This method will be called to re-test a model, e.g. to check whether
        it has to be re-trained.
        (Feel free to put common code for preparing data into another function,
        class, library, ...)

        Params:
            model_conf: the model configuration dict from the config file
            data_sources: dict containing the data sources (this includes your
                train/validation/test data), as configured in the config file
            data_sinks: dict containing the data sinks, as configured in the
                config file. Usually unused when testing.
            model: your model object (whatever you returned in create_trained_model)

        Return:
            Return a dict of metrics (like 'accuracy', 'f1', 'confusion_matrix', etc.)
        """
class ModelInterface(abc.ABC):
    """Abstract model interface for Data-Scientist-created models.

    Please inherit from this class when creating your model to make
    it usable for ModelApi.

    You don't need to create this object yourself when training.
    It is created automatically, and the model/info returned from
    create_trained_model is made accessible to you through the
    self.contents attribute.
    """

    def __init__(self, contents=None):
        """If you overwrite __init__, please call super().__init__(...)
        at the beginning. Otherwise, you need to assign self.contents to the
        contents parameter manually in __init__.

        Params:
            contents: any object that is needed for prediction (usually a trained
                classifier or predictor). It is passed to the predict()
                method as the "model" parameter for convenience.
        """
        self.contents = contents
        self.have_columns_been_ordered = False

    @abc.abstractmethod
    def predict(self, model_conf, data_sources, data_sinks, model, args_dict):
        """Implement this method, including data prep/feature creation based on args_dict.

        args_dict can also contain an id which the model can use to fetch data
        from any data_sources.
        (Feel free to put common code for preparing data into another function,
        class, library, ...)

        Params:
            model_conf: the model configuration dict from the config file
            data_sources: dict containing the data sources
            data_sinks: dict containing the data sinks, as configured in the config file.
            model: your model object (whatever you returned in create_trained_model)
            args_dict: parameters the API was called with, dict of strings (any
                type conversion needs to be done by you)

        Return:
            Prediction result as a dictionary/list structure which will be
            automatically turned into JSON.
        """

    def __del__(self):
        """Clean up any resources (temporary files, sockets, etc.).

        If you overwrite this method, please call super().__del__() at the beginning.
        """
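To show the intended shape of an implementation, here is a toy sketch. The `_MakerBase` stand-in replaces the real mllaunchpad base class (so the example runs without the framework installed), and `MeanModelMaker` and its hard-coded data are purely illustrative.

```python
import abc


class _MakerBase(abc.ABC):
    """Simplified stand-in for ModelMakerInterface, for this sketch only."""

    @abc.abstractmethod
    def create_trained_model(self, model_conf, data_sources, data_sinks, old_model=None):
        ...

    @abc.abstractmethod
    def test_trained_model(self, model_conf, data_sources, data_sinks, model):
        ...


class MeanModelMaker(_MakerBase):
    """Toy 'model': the mean of a hard-coded training list."""

    def create_trained_model(self, model_conf, data_sources, data_sinks, old_model=None):
        data = [1.0, 2.0, 3.0]  # stand-in for reading from data_sources
        return sum(data) / len(data)

    def test_trained_model(self, model_conf, data_sources, data_sinks, model):
        return {"mean": model}


maker = MeanModelMaker()
model = maker.create_trained_model({}, {}, {})
assert maker.test_trained_model({}, {}, {}, model) == {"mean": 2.0}
```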
206ba10a933fb72d2b19d0e9ab24a9f9550bd9cb | 307 | py | Python | HR_pythonIsLeap.py | bluewitch/Code-Blue-Python | 07230fbc8e20d263950ad2476e79da12b64cff2d | [
"MIT"
] | null | null | null | HR_pythonIsLeap.py | bluewitch/Code-Blue-Python | 07230fbc8e20d263950ad2476e79da12b64cff2d | [
"MIT"
] | null | null | null | HR_pythonIsLeap.py | bluewitch/Code-Blue-Python | 07230fbc8e20d263950ad2476e79da12b64cff2d | [
"MIT"
] | 1 | 2020-02-13T14:47:12.000Z | 2020-02-13T14:47:12.000Z | def is_leap(year):
    leap = False
    # Write your logic here
    # Thought process:
    #   if year % 4 == 0: return True
    #   elif year % 100 == 0: return False
    #   elif year % 400 == 0: return True
    # Optimized, Python 3:
    return (year % 4 == 0) and (year % 100 != 0) or (year % 400 == 0)
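A quick check of the rule on the standard boundary cases (the function is restated so the sketch runs standalone):

```python
def is_leap(year):
    # Restated leap-year rule from above.
    return (year % 4 == 0) and (year % 100 != 0) or (year % 400 == 0)


assert is_leap(2000) is True   # divisible by 400
assert is_leap(1900) is False  # divisible by 100 but not 400
assert is_leap(2016) is True   # divisible by 4
assert is_leap(2015) is False
```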
206d6601ea3b48f7c66bd3d4ac31c2e1ee8263b1 | 182 | py | Python | config/custom_components/ziggonext/const.py | LRvdLinden/homeassistant-config | 4f0e8bb08329b8af08fc90cb1699a9314e297ab7 | [
"MIT"
] | 288 | 2021-04-27T07:25:04.000Z | 2022-03-23T14:38:36.000Z | config/custom_components/ziggonext/const.py | givemhell/homeassistant-config | 8ca951d299cb4df19e5fcc37bfea38c9f04f5a2a | [
"MIT"
] | 6 | 2021-04-30T10:47:24.000Z | 2022-01-12T01:14:15.000Z | config/custom_components/ziggonext/const.py | givemhell/homeassistant-config | 8ca951d299cb4df19e5fcc37bfea38c9f04f5a2a | [
"MIT"
] | 28 | 2021-04-30T23:58:07.000Z | 2022-02-15T04:33:46.000Z | """Constants for the Ziggo Mediabox Next integration."""
ZIGGO_API = "ziggo_api"
CONF_COUNTRY_CODE = "country_code"
RECORD = "record"
REWIND = "rewind"
FAST_FORWARD = "fast_forward" | 26 | 56 | 0.758242 | 24 | 182 | 5.458333 | 0.625 | 0.122137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120879 | 182 | 7 | 57 | 26 | 0.81875 | 0.274725 | 0 | 0 | 0 | 0 | 0.354331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
206f2e5690a8c8a5753b6c5a8fda903d7277cf28 | 2,159 | py | Python | peeringdb_server/import_views.py | vegu/peeringdb | b4a025c912f14706642d7edc9db6da998e024b8f | [
"BSD-2-Clause"
] | null | null | null | peeringdb_server/import_views.py | vegu/peeringdb | b4a025c912f14706642d7edc9db6da998e024b8f | [
"BSD-2-Clause"
] | null | null | null | peeringdb_server/import_views.py | vegu/peeringdb | b4a025c912f14706642d7edc9db6da998e024b8f | [
"BSD-2-Clause"
] | null | null | null | import json
import base64

from django.http import JsonResponse, HttpResponse
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth import authenticate
from django_namespace_perms.util import has_perms
from ratelimit.decorators import ratelimit, is_ratelimited

from peeringdb_server import ixf
from peeringdb_server.models import IXLan

RATELIMITS = settings.RATELIMITS


def enable_basic_auth(fn):
    """
    A simple decorator to enable basic auth for a specific view.
    """

    def wrapped(request, *args, **kwargs):
        if 'HTTP_AUTHORIZATION' in request.META:
            auth = request.META['HTTP_AUTHORIZATION'].split()
            if len(auth) == 2:
                if auth[0].lower() == "basic":
                    username, password = base64.b64decode(auth[1]).split(':', 1)
                    request.user = authenticate(username=username,
                                                password=password)
                    if not request.user:
                        return JsonResponse(
                            {"non_field_errors": ["Invalid credentials"]},
                            status=401)
        return fn(request, *args, **kwargs)

    return wrapped


@ratelimit(key="ip", rate=RATELIMITS["view_import_ixlan_ixf_preview"])
@enable_basic_auth
def view_import_ixlan_ixf_preview(request, ixlan_id):
    # check if the request was blocked by rate limiting
    was_limited = getattr(request, "limited", False)
    if was_limited:
        return JsonResponse({
            "non_field_errors": [
                _("Please wait a bit before requesting "
                  "another ixf import preview.")
            ]
        }, status=400)

    try:
        ixlan = IXLan.objects.get(id=ixlan_id)
    except IXLan.DoesNotExist:
        return JsonResponse({
            "non_field_errors": [_("Ixlan not found")]
        }, status=404)

    if not has_perms(request.user, ixlan, "update"):
        return JsonResponse({
            "non_field_errors": [_("Permission denied")]
        }, status=403)

    importer = ixf.Importer()
    importer.update(ixlan, save=False)

    return HttpResponse(
        json.dumps(importer.log, indent=2), content_type="application/json")
2079c90bff1772e1e966affe418857a7e4bb91bd | 339 | py | Python | source/ex17.py | aurelo/lphw | 8e1ecddc52a7c91fd0f53d4174c1079c63a10a81 | [
"MIT"
] | null | null | null | source/ex17.py | aurelo/lphw | 8e1ecddc52a7c91fd0f53d4174c1079c63a10a81 | [
"MIT"
] | null | null | null | source/ex17.py | aurelo/lphw | 8e1ecddc52a7c91fd0f53d4174c1079c63a10a81 | [
"MIT"
] | null | null | null | from sys import argv
from os.path import exists
script, in_file, out_file = argv
print "Copying from %r to %r" % (in_file, out_file)
print "Does the out file exists? %r" % exists(out_file)
print "Ready, hit RETURN to continue, CTRL-C to abort!"
raw_input()
open(out_file, 'w').write(open(in_file).read());
print "Alright, all done!"
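The exercise above is Python 2; a Python 3 sketch of the same copy step can lean on `shutil`. The temp files below are created only for the demonstration.

```python
import os
import shutil
import tempfile

# Create a throwaway source file standing in for in_file.
fd, in_file = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("hello")
out_file = in_file + ".copy"

# The whole read/write dance above collapses to one call:
shutil.copyfile(in_file, out_file)
assert open(out_file).read() == "hello"

os.remove(in_file)
os.remove(out_file)
```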
2081bf6de4e032f1f9d0a3bf558e292b89055291 | 461 | py | Python | addonpayments/api/common/responses.py | javisenberg/addonpayments-Python-SDK | 8c7b60fd4d245dd588d9f230c17ffde4e8ed33ac | [
"MIT"
] | 2 | 2018-04-11T13:53:38.000Z | 2018-12-09T13:10:18.000Z | addonpayments/api/common/responses.py | javisenberg/addonpayments-Python-SDK | 8c7b60fd4d245dd588d9f230c17ffde4e8ed33ac | [
"MIT"
] | 2 | 2019-03-28T12:49:16.000Z | 2019-03-28T12:52:09.000Z | addonpayments/api/common/responses.py | javisenberg/addonpayments-Python-SDK | 8c7b60fd4d245dd588d9f230c17ffde4e8ed33ac | [
"MIT"
] | 8 | 2017-07-10T13:32:23.000Z | 2021-08-23T10:55:52.000Z | # -*- encoding: utf-8 -*-
from __future__ import absolute_import, unicode_literals

from addonpayments.responses import SdkResponse


class ApiResponse(SdkResponse):
    """
    Class representing the API response.

    You can consult the specific documentation of all API response fields on the
    website https://desarrolladores.addonpayments.com
    """

    hash_fields = ['timestamp', 'merchantid', 'orderid', 'result', 'message', 'pasref', 'authcode']
2081e6615943f8a86ea6dcb14ff82a37b5c3e2da | 2,532 | py | Python | config/scripts/sf_rotate_mysql_passwords.py | earlren1014/RedHat-Software-Factory | dd50eba4e353945886ebceb5dd608179d608b956 | [
"Apache-2.0"
] | null | null | null | config/scripts/sf_rotate_mysql_passwords.py | earlren1014/RedHat-Software-Factory | dd50eba4e353945886ebceb5dd608179d608b956 | [
"Apache-2.0"
] | null | null | null | config/scripts/sf_rotate_mysql_passwords.py | earlren1014/RedHat-Software-Factory | dd50eba4e353945886ebceb5dd608179d608b956 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
import yaml
import uuid
import subprocess

# from:
# http://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data  # flake8: noqa
def should_use_block(value):
    for c in u"\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029":
        if c in value:
            return True
    return False


def my_represent_scalar(self, tag, value, style=None):
    if style is None:
        if should_use_block(value):
            style = '|'
        else:
            style = self.default_style

    node = yaml.representer.ScalarNode(tag, value, style=style)
    if self.alias_key is not None:
        self.represented_objects[self.alias_key] = node
    return node


yaml.representer.BaseRepresenter.represent_scalar = my_represent_scalar
# end from


# from: http://pyyaml.org/ticket/64
class MyDumper(yaml.Dumper):
    def increase_indent(self, flow=False, indentless=False):
        return super(MyDumper, self).increase_indent(flow, False)
# end from


creds = yaml.load(open("/etc/puppet/hiera/sf/sfcreds.yaml").read())
sqls = []
fqdn = yaml.load(open("/etc/puppet/hiera/sf/sfconfig.yaml").read())['fqdn']

for user in ('redmine', 'gerrit', 'nodepool', 'etherpad', 'lodgeit', 'graphite',
             'grafana', 'cauth', 'managesf'):
    key = "creds_%s_sql_pwd" % user
    pwd = str(uuid.uuid4())
    # Allow connection from remote services
    if user == 'redmine':
        sqls.append("SET PASSWORD FOR '%s'@'redmine.%s' = PASSWORD('%s');" % (
            user, fqdn, pwd
        ))
    elif user == 'gerrit':
        sqls.append("SET PASSWORD FOR '%s'@'gerrit.%s' = PASSWORD('%s');" % (
            user, fqdn, pwd
        ))
    elif user == 'nodepool':
        sqls.append("SET PASSWORD FOR '%s'@'nodepool.%s' = PASSWORD('%s');" % (
            user, fqdn, pwd
        ))
    elif user in ('grafana', 'gnocchi'):
        sqls.append("SET PASSWORD FOR '%s'@'statsd.%s' = PASSWORD('%s');" % (
            user, fqdn, pwd
        ))
    # Always allow connection from managesf for all-in-one compatibility
    sqls.append("SET PASSWORD FOR '%s'@'managesf.%s' = PASSWORD('%s');" % (
        user, fqdn, pwd
    ))
    creds[key] = pwd

open("/etc/puppet/hiera/sf/sfcreds.yaml", "w").write(
    yaml.dump(creds, default_flow_style=False, Dumper=MyDumper))

ret = subprocess.Popen(["mysql", "-uroot", "-p%s" % creds['creds_mysql_root_pwd'],
                        "-e", " ".join(sqls)]).wait()
if ret:
    print "Error: Couldn't update database passwords... (rc: %d)" % ret

exit(subprocess.Popen(["sfconfig.sh"]).wait())
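The block-style check in the script hinges on a small set of newline-family characters; a standalone restatement makes the behavior easy to verify without PyYAML installed.

```python
# Any character from this newline family forces PyYAML's literal block
# style ('|') in the script above.
BLOCK_CHARS = u"\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029"


def should_use_block(value):
    # Equivalent to the loop in the script: True if any trigger char appears.
    return any(c in value for c in BLOCK_CHARS)


assert should_use_block("line one\nline two")
assert not should_use_block("single-line-value")
```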
# ===== tests/valid/circular_extension.py (anthem-ai/fhir-types, Apache-2.0) =====

from fhir_types import FHIR_Extension
p: FHIR_Extension = {
    "url": "http://hl7.org/fhir/StructureDefinition/patient-birthPlace",
    "valueBoolean": True,
    "valueAddress": {"city": "Springfield", "state": "Massachusetts", "country": "US"},
    "_valueTime": {
        "id": "123",
    },
}
# ===== init_db.py (mengguoru/yourRobot, MIT) =====

import pickle
data = {
    'aa': 'blabla',
}

if __name__ == '__main__':
    with open('data.pkl', 'wb') as db:
        pickle.dump(data, db)
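A complementary read-back (not part of init_db.py) shows how a consumer would load the pickled dict; a temp directory stands in for the script's working directory:

```python
import os
import pickle
import tempfile

# Recreate what init_db.py writes, then load it back.
data = {'aa': 'blabla'}
path = os.path.join(tempfile.mkdtemp(), 'data.pkl')
with open(path, 'wb') as db:
    pickle.dump(data, db)
with open(path, 'rb') as db:
    loaded = pickle.load(db)
print(loaded == data)  # True
```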
# ===== tests/test_is_host_available.py (c-pher/PyWinOS, MIT) =====

from pywinos import WinOSClient
def test_is_host_available_remote():
    tool = WinOSClient('8.8.8.8')
    response = tool.is_host_available(port=53)
    assert response, 'Response is not True'


def test_is_host_unavailable_remote():
    tool = WinOSClient('8.8.8.8')
    response = tool.is_host_available(port=22)
    assert not response, 'Response is not False'


def test_is_host_available_locally():
    """Execute method locally. Must be True"""
    tool = WinOSClient(host='')
    response = tool.is_host_available()
    assert response, 'Local host is available always'
# ===== main/migrations/0006_auto_20201231_0458.py (AyushHazard/Samskritam, MIT) =====

# Generated by Django 3.1.4 on 2020-12-31 04:58
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0005_auto_20201230_1633'),
    ]

    operations = [
        migrations.AlterField(
            model_name='competition',
            name='end_time',
            field=models.DateTimeField(null=True),
        ),
        migrations.AlterField(
            model_name='competition',
            name='start_time',
            field=models.DateTimeField(null=True),
        ),
    ]
# ===== python-dsl/buck_parser/struct.py (lakshmi2005/buck, Apache-2.0) =====

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import with_statement

import copy
import json


class StructEncoder(json.JSONEncoder):
    """Extends built-in JSONEncoder to support Struct serialization."""

    def default(self, o):
        if isinstance(o, Struct):
            return o._asdict()
        # Defer to the base implementation (which raises TypeError) instead
        # of silently returning None for unsupported types.
        return super(StructEncoder, self).default(o)


class Struct(object):
    """
    An immutable container using the keyword arguments as attributes.

    __setattr__ makes sure that fields can be mutated only during initialization.
    __getattr__ delegates attribute reads to internal dictionary.
    """

    _KWARGS_ATTRIBUTE_NAME = "__kwargs"

    def __init__(self, **kwargs):
        super(Struct, self).__setattr__(self._KWARGS_ATTRIBUTE_NAME, kwargs)

    def _get_kwargs(self):
        return super(Struct, self).__getattribute__(self._KWARGS_ATTRIBUTE_NAME)

    def __getattr__(self, item):
        """Handles retrieval of attributes not explicitly defined in this instance."""
        try:
            return dict.__getitem__(self._get_kwargs(), item)
        except KeyError as e:
            raise AttributeError(e)

    def __setattr__(self, key, value):
        """Handles attribute writes on this instance.

        All writes fail to ensure immutability.
        """
        raise AttributeError("Mutation of struct attributes (%r) is not allowed." % key)

    def to_json(self):
        """Creates a JSON string representation of this struct instance."""
        return json.dumps(self, cls=StructEncoder, separators=(",", ":"), sort_keys=True)

    def _asdict(self):
        """Converts this struct into dict."""
        return self._get_kwargs()

    def __deepcopy__(self, memodict=None):
        """Returns a deep copy of this instance."""
        return Struct(**copy.deepcopy(self._get_kwargs(), memo=memodict or {}))

    def __eq__(self, other):
        return isinstance(other, Struct) and self._get_kwargs() == other._get_kwargs()

    def __repr__(self):
        # .items() works on both Python 2 and 3 (.iteritems() is Python 2 only).
        return "struct(" + ".".join(
            [str(key) + "=" + repr(value) for key, value in self._get_kwargs().items()]
        ) + ")"


def struct(**kwargs):
    """Creates an immutable container using the keyword arguments as attributes.

    It can be used to group multiple values and/or functions together. Example:
        def _my_function():
            return 3
        s = struct(x = 2, foo = _my_function)
        return s.x + s.foo()  # returns 5
    """
    return Struct(**kwargs)
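A stand-alone condensation of the same immutable-container pattern (a hypothetical mini-version for illustration, not the Buck code), showing the docstring example evaluate as expected:

```python
class MiniStruct(object):
    """Minimal immutable kwargs container, mirroring the Struct idea above."""

    def __init__(self, **kwargs):
        # object.__setattr__ bypasses our own __setattr__ during init.
        object.__setattr__(self, "_kwargs", kwargs)

    def __getattr__(self, item):
        # Only called when normal attribute lookup fails.
        try:
            return object.__getattribute__(self, "_kwargs")[item]
        except KeyError as e:
            raise AttributeError(e)

    def __setattr__(self, key, value):
        raise AttributeError("Mutation of struct attributes is not allowed.")


def _my_function():
    return 3

s = MiniStruct(x=2, foo=_my_function)
print(s.x + s.foo())  # 5
```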
# ===== env/lib/python3.8/site-packages/_plotly_utils/importers.py (acrucetta/Chicago_COVI_WebApp, MIT/Unlicense) =====

import importlib
def relative_import(parent_name, rel_modules=(), rel_classes=()):
    """
    Helper function to import submodules lazily in Python 3.7+

    Parameters
    ----------
    rel_modules: list of str
        list of submodules to import, of the form .submodule
    rel_classes: list of str
        list of submodule classes/variables to import, of the form ._submodule.Foo

    Returns
    -------
    tuple
        Tuple that should be assigned to __all__, __getattr__ in the caller
    """
    module_names = {rel_module.split(".")[-1]: rel_module for rel_module in rel_modules}
    class_names = {rel_path.split(".")[-1]: rel_path for rel_path in rel_classes}

    def __getattr__(import_name):
        # In Python 3.7+, lazy import submodules

        # Check for submodule
        if import_name in module_names:
            rel_import = module_names[import_name]
            return importlib.import_module(rel_import, parent_name)

        # Check for submodule class
        if import_name in class_names:
            rel_path_parts = class_names[import_name].split(".")
            rel_module = ".".join(rel_path_parts[:-1])
            class_name = import_name
            class_module = importlib.import_module(rel_module, parent_name)
            return getattr(class_module, class_name)

        raise AttributeError(
            "module {__name__!r} has no attribute {name!r}".format(
                name=import_name, __name__=parent_name
            )
        )

    __all__ = list(module_names) + list(class_names)

    def __dir__():
        return __all__

    return __all__, __getattr__, __dir__
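The returned `__getattr__` is meant to be assigned at module level (PEP 562, Python 3.7+) so names resolve on first access. A self-contained sketch of the same resolution logic, with the stdlib `collections` package standing in as the parent (your own package and submodule names would go in its place):

```python
import importlib

# What relative_import("collections", [".abc"], [".abc.Mapping"]) builds:
module_names = {"abc": ".abc"}              # submodule name -> relative path
class_names = {"Mapping": ".abc.Mapping"}   # class name -> relative path

def lazy_getattr(name):
    if name in module_names:
        # Relative import resolved against the parent package.
        return importlib.import_module(module_names[name], "collections")
    if name in class_names:
        parts = class_names[name].split(".")
        mod = importlib.import_module(".".join(parts[:-1]), "collections")
        return getattr(mod, parts[-1])
    raise AttributeError(name)

print(lazy_getattr("abc").__name__)      # collections.abc
print(lazy_getattr("Mapping").__name__)  # Mapping
```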
# ===== renderchan/contrib/ffmpeg.py (ave4/RenderChan, BSD-3-Clause) =====
__author__ = 'Konstantin Dmitriev'
from renderchan.module import RenderChanModule
from renderchan.utils import which
import subprocess
import os
import random
class RenderChanFfmpegModule(RenderChanModule):
    def __init__(self):
        RenderChanModule.__init__(self)
        self.conf['binary'] = self.findBinary("ffmpeg")
        self.conf["packetSize"] = 0

    def getInputFormats(self):
        return ["mov", "avi", "mpg", "mp4"]

    def getOutputFormats(self):
        return ["png"]

    def render(self, filename, outputPath, startFrame, endFrame, format, updateCompletion, extraParams={}):
        updateCompletion(0.0)
        if not os.path.exists(outputPath):
            os.mkdir(outputPath)
        # TODO: Progress callback
        commandline = [self.conf['binary'], "-i", filename, os.path.join(outputPath, "output_%04d.png")]
        subprocess.check_call(commandline)
        updateCompletion(1.0)
# ===== webdev/users/urls.py (h-zanetti/jewelry-manager, MIT) =====

from django.urls import path
from webdev.users import views
from django.contrib.auth import views as auth_views

urlpatterns = [
    path('login/', views.login_view, name='login'),
    path('logout/', auth_views.LogoutView.as_view(), name='logout'),
    path('password-reset/', auth_views.PasswordResetView.as_view(template_name='users/base_form.html'), name='password_reset'),
    path('password-reset/done/', auth_views.PasswordResetDoneView.as_view(template_name='users/password_reset_done.html'), name='password_reset_done'),
    path('password-reset-confirm/<uidb64>/<token>/', auth_views.PasswordResetConfirmView.as_view(template_name='users/base_form.html'), name='password_reset_confirm'),
    # PasswordResetCompleteView (not PasswordResetView) matches the
    # 'password_reset_complete' name and template used here.
    path('password-reset-complete/', auth_views.PasswordResetCompleteView.as_view(template_name='users/password_reset_complete.html'), name='password_reset_complete'),
]
# ===== qlib/rl/reward.py (SunsetWolf/qlib, MIT) =====

# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.

from __future__ import annotations

from typing import Generic, Any, TypeVar, TYPE_CHECKING

from qlib.typehint import final

if TYPE_CHECKING:
    from .utils.env_wrapper import EnvWrapper

SimulatorState = TypeVar("SimulatorState")


class Reward(Generic[SimulatorState]):
    """
    Reward calculation component that takes a single argument: state of simulator. Returns a real number: reward.

    Subclass should implement ``reward(simulator_state)`` to implement their own reward calculation recipe.
    """

    env: EnvWrapper | None = None

    @final
    def __call__(self, simulator_state: SimulatorState) -> float:
        return self.reward(simulator_state)

    def reward(self, simulator_state: SimulatorState) -> float:
        """Implement this method for your own reward."""
        raise NotImplementedError("Implement reward calculation recipe in `reward()`.")

    def log(self, name, value):
        self.env.logger.add_scalar(name, value)


class RewardCombination(Reward):
    """Combination of multiple reward."""

    def __init__(self, rewards: dict[str, tuple[Reward, float]]):
        self.rewards = rewards

    def reward(self, simulator_state: Any) -> float:
        total_reward = 0.0
        for name, (reward_fn, weight) in self.rewards.items():
            rew = reward_fn(simulator_state) * weight
            total_reward += rew
            self.log(name, rew)
        return total_reward


# TODO:
# reward_factory is disabled for now

# _RegistryConfigReward = RegistryConfig[REWARDS]
#
#
# @configclass
# class _WeightedRewardConfig:
#     weight: float
#     reward: _RegistryConfigReward
#
#
# RewardConfig = Union[_RegistryConfigReward, Dict[str, Union[_RegistryConfigReward, _WeightedRewardConfig]]]
#
#
# def reward_factory(reward_config: RewardConfig) -> Reward:
#     """
#     Use this factory to instantiate the reward from config.
#     Simply using ``reward_config.build()`` might not work because reward can have complex combinations.
#     """
#     if isinstance(reward_config, dict):
#         # as reward combination
#         rewards = {}
#         for name, rew in reward_config.items():
#             if not isinstance(rew, _WeightedRewardConfig):
#                 # default weight is 1.
#                 rew = _WeightedRewardConfig(weight=1., rew=rew)
#             # no recursive build in this step
#             rewards[name] = (rew.reward.build(), rew.weight)
#         return RewardCombination(rewards)
#     else:
#         # single reward
#         return reward_config.build()
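The weighted sum computed by `RewardCombination.reward` can be illustrated without qlib; `ConstReward` below is a hypothetical stand-in for a `Reward` subclass:

```python
class ConstReward:
    """Hypothetical reward that ignores the simulator state."""
    def __init__(self, value):
        self.value = value

    def __call__(self, simulator_state):
        return self.value

# name -> (reward callable, weight), as RewardCombination expects.
rewards = {"pnl": (ConstReward(2.0), 0.5), "cost": (ConstReward(-1.0), 1.0)}

total_reward = 0.0
for name, (reward_fn, weight) in rewards.items():
    total_reward += reward_fn(None) * weight

print(total_reward)  # 2.0 * 0.5 + (-1.0) * 1.0 = 0.0
```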
# ===== 7dof_arm_reaching/environment.py (YilunZhou/RoCUS, MIT) =====
import numpy as np
from pybulletgym_rocus.envs.roboschool.envs.manipulation.panda_reacher_env import PandaReacherEnv
class PandaEnv(PandaReacherEnv):
    '''
    a modified environment that runs for a maximum of 500 time steps, but terminates as soon as the target is reached.
    also supports calculating the log probability of a particular environment configuration (i.e. target location) under the prior.
    '''
    def __init__(self, shelf=True, timelimit=500, target_threshold=0.03):
        super(PandaEnv, self).__init__(shelf=shelf)
        self.timelimit = timelimit
        self.target_threshold = target_threshold

    def reset(self, **kwargs):
        self.cur_time = 0
        # return the initial observation so the standard gym API is respected
        return super().reset(**kwargs)

    def step(self, a):
        s, r, done, info = super().step(a)
        self.cur_time += 1
        done = done or self.cur_time == self.timelimit or np.linalg.norm(s[19:22]) <= self.target_threshold
        return s, r, done, info

    def render(self, mode='human', **kwargs):
        super().render(mode=mode, **kwargs)

    def s(self):
        return self.robot.calc_state()

    def log_prior(self, target_loc):
        assert -0.5 <= target_loc[0] <= -0.05 or 0.05 <= target_loc[0] <= 0.5
        assert -0.3 <= target_loc[1] <= 0.2 and 0.65 <= target_loc[2] <= 1
        return 0
# ===== api/logging_utils.py (idrissneumann/devoxx2021-redhat-amadeus-contest, Apache-2.0) =====

import sys
from datetime import datetime

from common_utils import get_env_var


def eprint(*args, **kwargs):
    print(*args, file=sys.stderr, **kwargs)


def is_error_log(log_level):
    return any(log_level.upper() == l for l in ['ERROR', 'FATAL'])


LOG_LEVEL = get_env_var('LOG_LEVEL', 'INFO').upper()


def is_log_enable(log_level):
    return LOG_LEVEL == 'DEBUG' or any(l == log_level.upper() for l in ['INFO', 'WARN', 'ERROR', 'FATAL'])


def log_msg(log_level, function_name, msg):
    log_tpl = "{} - [{}][{}] {}"
    if is_log_enable(log_level):
        msg = log_tpl.format(datetime.now().isoformat(), log_level, function_name, msg)
        if is_error_log(log_level):
            eprint(msg)
        else:
            print(msg)
# ===== dl/layers/activation.py (nuka137/DeepLearningFramework, MIT) =====

import numpy as np
from .layer_base import LayerBase


class ReluLayer(LayerBase):
    def __init__(self):
        super().__init__()
        self.cache = {}

    def id(self):
        return "Relu"

    def forward(self, x):
        y = np.maximum(x, 0)
        self.cache["is_negative"] = (x < 0)
        return y

    def backward(self, dy):
        is_negative = self.cache["is_negative"]
        dx = dy
        dx[is_negative] = 0
        return dx


class SigmoidLayer(LayerBase):
    def __init__(self):
        super().__init__()
        self.cache = {}

    def id(self):
        return "Sigmoid"

    def forward(self, x):
        y = 1 / (1 + np.exp(-x))
        self.cache["y"] = y
        return y

    def backward(self, dy):
        y = self.cache["y"]
        dx = y * (1 - y) * dy
        return dx


class TanhLayer(LayerBase):
    def __init__(self):
        super().__init__()
        self.cache = {}

    def id(self):
        return "Tanh"

    def forward(self, x):
        y = np.tanh(x)
        self.cache["y"] = y
        return y

    def backward(self, dy):
        y = self.cache["y"]
        dx = (1 - np.power(y, 2)) * dy
        return dx


class SoftmaxWithLossLayer(LayerBase):
    def __init__(self):
        super().__init__()
        self.cache = {}

    def id(self):
        return "SoftmaxWithLoss"

    def forward(self, x, target):
        batch_size = target.shape[0]
        c = np.max(x)
        y = np.exp(x - c) / np.sum(np.exp(x - c), axis=1, keepdims=True)
        loss = -np.sum(np.sum(target * np.log(y), axis=1)) / batch_size
        self.cache["target"] = target.copy()
        self.cache["y"] = y.copy()
        return loss

    def backward(self, dy=1):
        y = self.cache["y"].copy()
        target = self.cache["target"].copy()
        batch_size = target.shape[0]
        dx = dy * (y - target) / batch_size
        return dx
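The caching trick `ReluLayer` uses (remember in `forward` which inputs were negative, zero those slots in `backward`) in a dependency-free sketch, list-based here rather than NumPy arrays:

```python
def relu_forward(xs):
    mask = [x < 0 for x in xs]       # like self.cache["is_negative"]
    ys = [max(x, 0.0) for x in xs]
    return ys, mask

def relu_backward(dys, mask):
    # Gradient passes through where the input was non-negative.
    return [0.0 if negative else dy for dy, negative in zip(dys, mask)]

ys, mask = relu_forward([-2.0, 3.0, 0.5])
dxs = relu_backward([1.0, 1.0, 1.0], mask)
print(ys)   # [0.0, 3.0, 0.5]
print(dxs)  # [0.0, 1.0, 1.0]
```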
# ===== ifpi/Anos em cachorro.py (AlexCaprian/Python, MIT) =====

# The variable 'i' receives a value typed by the user on the keyboard.
i = int(input('How old are you? '))
# The variable 'c' receives the result of the calculation between i and 7.
c = i * 7
# Prints 'If you were a dog, you would be {c} years old.'
print(f'If you were a dog, you would be {c} years old.')
# ===== main/migrations/0003_auto_20210630_0138.py (diegovasconcelo/FusionAI, MIT) =====

# Generated by Django 3.2 on 2021-06-30 01:38
import ckeditor_uploader.fields
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('post', '0002_alter_article_content'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='article',
            options={'verbose_name': 'article', 'verbose_name_plural': 'articles'},
        ),
        migrations.AlterModelOptions(
            name='category',
            options={'verbose_name': 'category', 'verbose_name_plural': 'categories'},
        ),
        migrations.AlterField(
            model_name='article',
            name='category',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='post.category', verbose_name='category'),
        ),
        migrations.AlterField(
            model_name='article',
            name='content',
            field=ckeditor_uploader.fields.RichTextUploadingField(verbose_name='content'),
        ),
        migrations.AlterField(
            model_name='article',
            name='cover_page',
            field=models.BooleanField(default=False, verbose_name='cover page?'),
        ),
        migrations.AlterField(
            model_name='article',
            name='image',
            field=models.ImageField(upload_to='entry', verbose_name='image'),
        ),
        migrations.AlterField(
            model_name='article',
            name='in_home',
            field=models.BooleanField(default=False, verbose_name='On the home page?'),
        ),
        migrations.AlterField(
            model_name='article',
            name='public',
            field=models.BooleanField(default=False, verbose_name='public?'),
        ),
        migrations.AlterField(
            model_name='article',
            name='title',
            field=models.CharField(max_length=200, verbose_name='title'),
        ),
        migrations.AlterField(
            model_name='article',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='user'),
        ),
        migrations.AlterField(
            model_name='category',
            name='description',
            field=models.CharField(max_length=200, unique=True, verbose_name='description'),
        ),
        migrations.AlterField(
            model_name='category',
            name='name',
            field=models.CharField(max_length=30, unique=True, verbose_name='name'),
        ),
    ]
# ===== coding_challenge/ship_manager/apps.py (jojacobsen/coding_challenge, MIT) =====

from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _


class ShipManagerConfig(AppConfig):
    """
    Register Ship Manager app in project.
    """

    default_auto_field = "django.db.models.BigAutoField"
    name = "coding_challenge.ship_manager"
    verbose_name = _("Ship Manager")
# ===== app/api/cms/schema/__init__.py (zyjImmortal/lin-cms-flask, MIT) =====

from typing import List, Optional
from lin import BaseModel, ParameterError
from pydantic import EmailStr, Field, validator


class EmailSchema(BaseModel):
    email: Optional[str] = Field(description="User email")

    @validator("email")
    def check_email(cls, v, values, **kwargs):
        return EmailStr.validate(v) if v else ""


class ResetPasswordSchema(BaseModel):
    new_password: str = Field(description="New password", min_length=6, max_length=22)
    confirm_password: str = Field(description="Confirm password", min_length=6, max_length=22)

    @validator("confirm_password")
    def passwords_match(cls, v, values, **kwargs):
        if v != values["new_password"]:
            raise ParameterError("The two passwords do not match; please enter the same password")
        return v


class GroupIdListSchema(BaseModel):
    group_ids: List[int] = Field(description="List of user group IDs")

    @validator("group_ids", each_item=True)
    def check_group_id(cls, v, values, **kwargs):
        if v <= 0:
            raise ParameterError("User group ID must be greater than 0")
        return v
# ===== setup.py (hamhaingaurav/Cypher, MIT) =====

import setuptools
setuptools.setup(
    name = 'cypher',
    version = '1.0',
    author = 'shashi',
    author_email = 'skssunny30@gmail.com',
    description = 'Password encryptor that suggests whether a password is strong or not',
    long_description_content_type = 'text/markdown',
    url = 'https://github.com/walkershashi/Cypher',
    packages = setuptools.find_packages(),
    classifiers = [
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ]
)
459f489516bb7118fe1f7af1a23c4fa4c32e22bb | 315 | py | Python | sorting_visualizer/variables.py | debdutgoswami/sorting-visualizer | e39e805acf22339b8ee06f8c8cd483e9c03ba3a4 | [
"MIT"
] | 3 | 2020-01-07T15:47:32.000Z | 2020-09-13T14:05:32.000Z | sorting_visualizer/variables.py | debdutgoswami/sorting-visualizer | e39e805acf22339b8ee06f8c8cd483e9c03ba3a4 | [
"MIT"
] | 3 | 2020-10-04T18:03:36.000Z | 2020-10-08T07:13:40.000Z | sorting_visualizer/variables.py | debdutgoswami/sorting-visualizer | e39e805acf22339b8ee06f8c8cd483e9c03ba3a4 | [
"MIT"
] | 3 | 2020-10-04T18:15:54.000Z | 2021-01-20T19:43:49.000Z |
sortname = {
    'bubblesort': 'Bubble Sort O(n\N{SUPERSCRIPT TWO})', 'insertionsort': 'Insertion Sort O(n\N{SUPERSCRIPT TWO})',
    'selectionsort': 'Selection Sort O(n\N{SUPERSCRIPT TWO})', 'mergesort': 'Merge Sort O(n log n)',
    'quicksort': 'Quick Sort O(n log n)', 'heapsort': 'Heap Sort O(n log n)'
}
45b6bbeb3cf897842dbb59970794a0753263b183 | 410 | py | Python | osbot_jira/lambdas/elk_to_slack.py | pbx-gs/OSBot-jira | 7677afee1f80398ddcccd6b45423bf6adc20b970 | [
"Apache-2.0"
] | 1 | 2021-04-02T05:58:23.000Z | 2021-04-02T05:58:23.000Z | osbot_jira/lambdas/elk_to_slack.py | pbx-gs/OSBot-jira | 7677afee1f80398ddcccd6b45423bf6adc20b970 | [
"Apache-2.0"
] | 1 | 2021-09-03T09:55:39.000Z | 2021-09-03T09:55:39.000Z | osbot_jira/lambdas/elk_to_slack.py | filetrust/OSBot-jira | d753fff59cf938cf94a51bf8bc7981691524b686 | [
"Apache-2.0"
] | 2 | 2021-04-02T05:58:29.000Z | 2021-09-03T09:43:29.000Z |
from osbot_aws.Dependencies import load_dependency
from osbot_aws.helpers.Lambda_Helpers import log_to_elk


def run(event, context):
    try:
        load_dependency("elastic")
        from osbot_jira.api.elk.Elk_To_Slack import ELK_to_Slack
        return ELK_to_Slack().handle_lambda_event(event)
    except Exception as error:
        log_to_elk("[elk_to_slack][Error]: {0}".format(error), level='error')
45c064d0dfd0fe78f2ae6705a1e636119d95f20a | 406 | py | Python | rsa_demo/app/forms.py | henocdz/cryptography_escom | 67b619d8e59687e8ca42ce355458000415b0111d | [
"MIT"
] | 1 | 2016-06-03T12:39:23.000Z | 2016-06-03T12:39:23.000Z | rsa_demo/app/forms.py | henocdz/cryptography_escom | 67b619d8e59687e8ca42ce355458000415b0111d | [
"MIT"
] | null | null | null | rsa_demo/app/forms.py | henocdz/cryptography_escom | 67b619d8e59687e8ca42ce355458000415b0111d | [
"MIT"
] | null | null | null |
from flask_wtf import Form  # the flask.ext.* namespace was removed in Flask 1.0
from wtforms import StringField, BooleanField, TextAreaField
from wtforms.validators import Required


class EncryptForm(Form):
    e = StringField('e')
    n = StringField('n')
    url = StringField('URL public key in b64')
    from_url = BooleanField('PubKey Base64', default=True)
    message = TextAreaField('Message to encrypt', validators=[Required()])
45c9f149c34ac07ae881545640d3fb333592273a | 13,604 | py | Python | tests/test_commands.py | orest-d/liquer | 7a5b5a69cf673b4a849dd2da3050ccd75081e454 | [
"MIT"
] | 3 | 2019-12-10T10:22:36.000Z | 2019-12-12T16:36:11.000Z | tests/test_commands.py | orest-d/liquer | 7a5b5a69cf673b4a849dd2da3050ccd75081e454 | [
"MIT"
] | null | null | null | tests/test_commands.py | orest-d/liquer | 7a5b5a69cf673b4a849dd2da3050ccd75081e454 | [
"MIT"
] | 2 | 2019-11-14T16:26:52.000Z | 2021-07-26T04:53:54.000Z |
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Unit tests for LiQuer argument parser and command registry.
"""
import pytest
from liquer.commands import *
from liquer.state import State
class TestArgumentParser:
    def test_argument_parser(self):
        parser = GENERIC_AP + INT_AP + FLOAT_AP + BOOLEAN_AP
        metadata = [None, None, None, None]
        assert parser.parse(metadata, ["abc", "123", "23.4", "T"]) == (
            ["abc", 123, 23.4, True],
            [],
        )
        assert parser.parse(metadata, ["abc", "123", "23.4", "T", "xxx"]) == (
            ["abc", 123, 23.4, True],
            ["xxx"],
        )

    def test_argument_parse_meta(self):
        parser = GENERIC_AP + INT_AP + FLOAT_AP + BOOLEAN_AP
        metadata = [111, 222, 333, 444]
        assert parser.parse_meta(metadata, ["abc", "123", "23.4", "T", "xxx"]) == (
            ["abc", 123, 23.4, True],
            [("abc", 111), (123, 222), (23.4, 333), (True, 444)],
            ["xxx"],
        )

    def test_argument_parse_meta_with_context(self):
        parser = GENERIC_AP + CONTEXT_AP + INT_AP + FLOAT_AP + BOOLEAN_AP
        metadata = [111, "ctx", 222, 333, 444]

        class MyContext:
            raw_query = "query"

        context = MyContext()
        assert parser.parse_meta(
            metadata, ["abc", "123", "23.4", "T", "xxx"], context=context
        ) == (
            ["abc", context, 123, 23.4, True],
            [("abc", 111), (context, "ctx"), (123, 222), (23.4, 333), (True, 444)],
            ["xxx"],
        )

    def test_argument_parser_add(self):
        parser = GENERIC_AP + INT_AP
        parser += FLOAT_AP
        parser += BOOLEAN_AP
        metadata = [None, None, None, None]
        assert parser.parse(metadata, ["abc", "123", "23.4", "T"]) == (
            ["abc", 123, 23.4, True],
            [],
        )

    def test_list_argument_parser(self):
        parser = GENERIC_AP + INT_AP + FLOAT_AP + BOOLEAN_AP + LIST_AP
        metadata = [None, None, None, None, None]
        assert parser.parse(metadata, ["abc", "123", "23.4", "T"]) == (
            ["abc", 123, 23.4, True, []],
            [],
        )
        assert parser.parse(metadata, ["abc", "123", "23.4", "T", "1234", "2345"]) == (
            ["abc", 123, 23.4, True, ["1234", "2345"]],
            [],
        )
class TestCommands:
    def test_from_callable(self):
        def test_callable(state, a, b: bool, c=123):
            pass

        meta = command_metadata_from_callable(test_callable)
        assert meta.name == "test_callable"
        assert meta.arguments[0]["name"] == "a"
        assert meta.arguments[0]["type"] == None
        assert meta.arguments[1]["name"] == "b"
        assert meta.arguments[1]["type"] == "bool"
        assert meta.arguments[2]["name"] == "c"
        assert meta.arguments[2]["type"] == "int"

    def test_parser_from_callable(self):
        def test_callable(state, a, b: bool, c=123):
            pass

        meta = command_metadata_from_callable(test_callable)
        parser = argument_parser_from_command_metadata(meta)
        assert parser.parse(meta.arguments, ["abc", "T", "234"]) == (
            ["abc", True, 234],
            [],
        )

    def test_command(self):
        reset_command_registry()

        @command
        def test_callable(state, a: int, b=123):  # has state as a first argument
            return a + b

        result = command_registry().executables["root"]["test_callable"](State(), "1")
        assert result.get() == 124

    def test_first_command(self):
        reset_command_registry()

        @first_command
        def test_callable(a: int, b=123):
            return a + b

        result = command_registry().executables["root"]["test_callable"](State(), "1")
        assert result.get() == 124

    # def test_evaluate_command(self):
    #     reset_command_registry()
    #     @command
    #     def test_callable(state, a: int, b=123): # has state as a first argument
    #         return a+b
    #     cmd = ["test_callable", "1"]
    #     result = command_registry().evaluate_command(
    #         State(), cmd)
    #     assert result.get() == 124
    #     assert result.metadata["commands"][-1] == cmd

    # def test_evaluate_command_with_attributes(self):
    #     reset_command_registry()
    #     @command(ABC="def")
    #     def test_callable(state, a: int, b=123): # has state as a first argument
    #         return a+b
    #     cmd = ["test_callable", "1"]
    #     result = command_registry().evaluate_command(
    #         State(), cmd)
    #     assert result.get() == 124
    #     assert result.metadata["commands"][-1] == cmd
    #     assert result.metadata["attributes"]["ABC"] == "def"

    # def test_evaluate_chaining_attributes(self):
    #     reset_command_registry()
    #     @command(ABC="def")
    #     def test_callable1(state, a: int, b=123): # has state as a first argument
    #         return a+b
    #     @command
    #     def test_callable2(state): # has state as a first argument
    #         return state
    #     cmd1 = ["test_callable1", "1"]
    #     cmd2 = ["test_callable2"]
    #     state1 = command_registry().evaluate_command(
    #         State(), cmd1)
    #     assert state1.get() == 124
    #     assert state1.metadata["attributes"]["ABC"] == "def"
    #     state2 = command_registry().evaluate_command(
    #         state1, cmd2)
    #     assert state2.get() == 124
    #     assert state2.metadata["attributes"]["ABC"] == "def"
    def test_evaluate_chaining_exceptions(self):
        import importlib
        from liquer import evaluate
        import liquer.ext.basic
        from liquer.commands import reset_command_registry

        reset_command_registry()  # prevent double-registration
        # Hack to enforce registering of the commands
        importlib.reload(liquer.ext.basic)

        @command(ABC="def", ns="testns", context_menu="menu")
        def test_callable1(state, a: int, b=123):  # has state as a first argument
            return a + b

        @command
        def test_callable2(state):  # has state as a first argument
            return state

        state1 = evaluate("ns-testns/test_callable1-1")
        assert state1.get() == 124
        assert state1.metadata["attributes"]["ABC"] == "def"
        assert state1.metadata["attributes"]["ns"] == "testns"
        assert state1.metadata["attributes"]["context_menu"] == "menu"
        state2 = evaluate("ns-testns/test_callable1-1/test_callable2")
        assert state2.get() == 124
        assert state2.metadata["attributes"]["ABC"] == "def"
        assert state2.metadata["attributes"].get("ns") != "testns"
        assert "context_menu" not in state2.metadata["attributes"]

    def test_evaluate_chaining_volatile(self):
        import importlib
        from liquer import evaluate
        import liquer.ext.basic
        from liquer.commands import reset_command_registry

        reset_command_registry()  # prevent double-registration
        # Hack to enforce registering of the commands
        importlib.reload(liquer.ext.basic)

        @first_command(volatile=True)
        def test_volatile():
            return 123

        @first_command
        def test_nonvolatile():
            return 234

        @command
        def test_callable2(x):  # has state as a first argument
            return x * 10

        state1 = evaluate("test_volatile/test_callable2")
        assert state1.get() == 1230
        assert state1.is_volatile()
        state2 = evaluate("test_nonvolatile/test_callable2")
        assert state2.get() == 2340
        assert not state2.is_volatile()
    # def test_state_command(self):
    #     reset_command_registry()
    #     @command
    #     def statecommand(state): # has state as a first argument
    #         assert isinstance(state, State)
    #         return 123+state.get()
    #     assert command_registry().evaluate_command(
    #         State().with_data(1), ["statecommand"]).get() == 124

    # def test_nonstate_command(self):
    #     reset_command_registry()
    #     @command
    #     def nonstatecommand(x: int): # has state as a first argument
    #         assert x == 1
    #         return 123+x
    #     assert command_registry().evaluate_command(
    #         State().with_data(1), ["nonstatecommand"]).get() == 124

    def test_as_dict(self):
        reset_command_registry()

        @command
        def somecommand(x: int):  # has state as a first argument
            return 123 + x

        assert "somecommand" in command_registry().as_dict()["root"]

    def test_duplicate_registration(self):
        reset_command_registry()

        def somecommand(x: int):  # has state as a first argument
            return 123 + x

        command(somecommand)
        command(somecommand)
        assert "somecommand" in command_registry().as_dict()["root"]

    def test_changing_attributes(self):
        reset_command_registry()

        def somecommand(x: int):  # has state as a first argument
            return 123 + x

        command(somecommand)
        assert (
            "abc" not in command_registry().metadata["root"]["somecommand"].attributes
        )
        command(abc="def")(somecommand)
        assert (
            "def"
            == command_registry().metadata["root"]["somecommand"].attributes["abc"]
        )

    def test_registration_modification(self):
        import importlib
        import liquer.ext.basic
        from liquer.commands import reset_command_registry

        reset_command_registry()  # prevent double-registration
        # Hack to enforce registering of the commands
        importlib.reload(liquer.ext.basic)
        assert "flag" in command_registry().as_dict()["root"]
        try:
            @command
            def flag(name):
                return f"New definition of flag called with {name}"
            redefined = True
        except:
            redefined = False
        assert not redefined
        try:
            @first_command(modify_command=True)
            def flag(name):
                return f"New definition of flag called with {name}"
            redefined = True
        except:
            redefined = False
        assert redefined
        # Cleanup
        reset_command_registry()  # prevent double-registration
        # Hack to enforce registering of the commands
        importlib.reload(liquer.ext.basic)

    def test_registration_namespace(self):
        import importlib
        from liquer import evaluate
        import liquer.ext.basic
        from liquer.commands import reset_command_registry

        reset_command_registry()  # prevent double-registration
        # Hack to enforce registering of the commands
        importlib.reload(liquer.ext.basic)
        assert "flag" in command_registry().as_dict()["root"]
        try:
            @command(ns="new")
            def flag(state, name):
                return f"New definition of flag called with {name}"
            redefined = True
        except:
            redefined = False
        assert redefined
        # assert evaluate("/flag-test-f/flag-test/state_variable-test").get() == True
        # assert evaluate("/flag-test-f/ns-root/flag-test/state_variable-test").get() == True
        assert (
            evaluate("/flag-test-f/ns-new/flag-test/state_variable-test").get() == False
        )
        # Cleanup
        reset_command_registry()  # prevent double-registration
        # Hack to enforce registering of the commands
        importlib.reload(liquer.ext.basic)
class TestRemoteCommandsRegistry:
    def test_encode_decode_registration(self):
        def f(x):
            return x * 102

        metadata = command_metadata_from_callable(
            f, has_state_argument=False, attributes={}
        )
        b = RemoteCommandRegistry.encode_registration(f, metadata)
        assert b[0] == b"B"[0]
        r_f, r_metadata, r_modify = RemoteCommandRegistry.decode_registration(b)
        assert r_f(3) == 306

    def test_encode_decode_registration_base64(self):
        def f(x):
            return x * 102

        metadata = command_metadata_from_callable(
            f, has_state_argument=False, attributes={}
        )
        b = RemoteCommandRegistry.encode_registration_base64(f, metadata)
        assert b[0] == b"E"[0]
        r_f, r_metadata, r_modify = RemoteCommandRegistry.decode_registration(b)
        assert r_f(3) == 306

    def test_serialized_registration(self):
        from liquer import evaluate

        reset_command_registry()

        def f(x: int):
            return x * 102

        metadata = command_metadata_from_callable(
            f, has_state_argument=False, attributes={}
        )
        b = command_registry().encode_registration(f, metadata)
        enable_remote_registration()
        command_registry().register_remote_serialized(b)
        assert evaluate("f-3").get() == 306
        reset_command_registry()
        disable_remote_registration()
45cfb31451b7ccd0b5486b47b34bfd8bfeee9e8a | 347 | py | Python | problem6.py | edpark13/euler | f16b5688164bdeaffbd1541efd14b84b09cce067 | [
"MIT"
] | null | null | null | problem6.py | edpark13/euler | f16b5688164bdeaffbd1541efd14b84b09cce067 | [
"MIT"
] | null | null | null | problem6.py | edpark13/euler | f16b5688164bdeaffbd1541efd14b84b09cce067 | [
"MIT"
] | null | null | null | def sum_square_difference(n):
"""
Find the difference between the sum of the numbers 1 to n squared and the
numbers 1 to n squared added
"""
square = 0
sum = 0
for i in xrange(1, n + 1):
square += i**2
sum += i
return sum**2 - square
if __name__ == '__main__':
print sum_square_difference(100)
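For reference, the loop can be avoided entirely with the classic closed-form identities; this is an alternative sketch, not part of the original solution:

```python
def sum_square_difference_closed_form(n):
    # 1 + 2 + ... + n         = n(n + 1)/2
    # 1^2 + 2^2 + ... + n^2   = n(n + 1)(2n + 1)/6
    sum_1_to_n = n * (n + 1) // 2
    sum_of_squares = n * (n + 1) * (2 * n + 1) // 6
    # Project Euler 6 asks for (square of the sum) - (sum of the squares)
    return sum_1_to_n ** 2 - sum_of_squares


print(sum_square_difference_closed_form(10))  # → 2640
```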
45d4dcd79eeb74e6b5a6245fe2982ac8c5b70227 | 781 | py | Python | CodeForces/Round553Div2/C.py | takaaki82/Java-Lessons | c4f11462bf84c091527dde5f25068498bfb2cc49 | [
"MIT"
] | 1 | 2018-11-25T04:15:45.000Z | 2018-11-25T04:15:45.000Z | CodeForces/Round553Div2/C.py | takaaki82/Java-Lessons | c4f11462bf84c091527dde5f25068498bfb2cc49 | [
"MIT"
] | null | null | null | CodeForces/Round553Div2/C.py | takaaki82/Java-Lessons | c4f11462bf84c091527dde5f25068498bfb2cc49 | [
"MIT"
] | 2 | 2018-08-08T13:01:14.000Z | 2018-11-25T12:38:36.000Z |
l, r = map(int, input().split())
mod = 10 ** 9 + 7


def f(x):
    if x == 0:
        return 0
    res = 1
    cnt = 2
    flag = 1  # renamed from `f`, which shadowed the function name
    b_s = 2
    e_s = 4
    b_f = 3
    e_f = 9
    x -= 1
    while x > 0:
        if flag:
            res += cnt * (b_s + e_s) // 2
            b_s = e_s + 2
            e_s = e_s + 2 * (4 * cnt)
        else:
            res += cnt * (b_f + e_f) // 2
            b_f = e_f + 2
            e_f = e_f + 2 * (4 * cnt)
        x -= cnt
        if x < 0:
            if flag:
                b_s -= 2
                res -= abs(x) * (b_s + b_s - abs(x + 1) * 2) // 2
            else:
                b_f -= 2
                res -= abs(x) * (b_f + b_f - abs(x + 1) * 2) // 2
        cnt *= 2
        flag = 1 - flag
    return res


print((f(r) - f(l - 1)) % mod)
45df7a9e90a8e22a5d2f412a2546ad4c08e905f1 | 630 | py | Python | lessons/my_mysql.py | TinyZzh/note-python | 7698f01ece80d6670d2af5707c32504f49eba36c | [
"Apache-2.0"
] | null | null | null | lessons/my_mysql.py | TinyZzh/note-python | 7698f01ece80d6670d2af5707c32504f49eba36c | [
"Apache-2.0"
] | null | null | null | lessons/my_mysql.py | TinyZzh/note-python | 7698f01ece80d6670d2af5707c32504f49eba36c | [
"Apache-2.0"
] | null | null | null |
#!/usr/bin/python
# -*- coding: UTF-8 -*-

import MySQLdb

config = {'host': '127.0.0.1', 'db': 'fps_ins_master', 'user': 'root', 'passwd': 'wooduan'}

try:
    conn = MySQLdb.connect(host=config['host'], db=config['db'], user=config['user'], passwd=config['passwd'])
    conn.set_character_set('utf8')
    cur = conn.cursor()
    cur.execute("show databases;")
    # select
    cur.execute("SELECT * FROM `t_character` LIMIT 0,1;")
    rows = cur.fetchall()
    for row in rows:
        print(row)
    cur.close()
    conn.close()
except MySQLdb.Error as e:
    print("Mysql Error %d: %s" % (e.args[0], e.args[1]))

print("endl")
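The same fetch loop can be written so that the connection and cursor are closed even when a query raises. In this self-contained sketch `sqlite3` stands in for `MySQLdb` (the table and row are made up for the demo):

```python
import sqlite3
from contextlib import closing

# contextlib.closing guarantees .close() on exit, success or failure
with closing(sqlite3.connect(":memory:")) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute("CREATE TABLE t_character (id INTEGER, name TEXT)")
        cur.execute("INSERT INTO t_character VALUES (1, 'alice')")
        cur.execute("SELECT * FROM t_character LIMIT 0,1")
        rows = cur.fetchall()
        for row in rows:
            print(row)  # → (1, 'alice')
```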
afe1c12b684b6cb0c6b6e172109e21e99d78f73e | 293 | py | Python | app_account/admin.py | bogdandrienko/chrysotile-minerals | 47a4097e29ee40f2606807e28b2da466dfd7f3f4 | [
"MIT"
] | 1 | 2021-02-13T08:40:51.000Z | 2021-02-13T08:40:51.000Z | app_account/admin.py | bogdandrienko/chrysotile-minerals | 47a4097e29ee40f2606807e28b2da466dfd7f3f4 | [
"MIT"
] | null | null | null | app_account/admin.py | bogdandrienko/chrysotile-minerals | 47a4097e29ee40f2606807e28b2da466dfd7f3f4 | [
"MIT"
] | null | null | null |
from django.contrib import admin

admin.site.site_header = 'Control panel'  # default: "Django Administration"
admin.site.index_title = 'Site administration'  # default: "Site administration"
admin.site.site_title = 'Admin'  # default: "Django site admin"
aff5a6502482469561a1efb845851b55a167ed6a | 225 | py | Python | seaborn2.py | tobiasaditya/datascience_beginner | fa6868073951259e0a5f8a702de0bcc17c13d295 | [
"MIT"
] | null | null | null | seaborn2.py | tobiasaditya/datascience_beginner | fa6868073951259e0a5f8a702de0bcc17c13d295 | [
"MIT"
] | null | null | null | seaborn2.py | tobiasaditya/datascience_beginner | fa6868073951259e0a5f8a702de0bcc17c13d295 | [
"MIT"
] | null | null | null |
import matplotlib.pyplot as plt
import seaborn as sb

database = sb.load_dataset("tips")
print(database)

sb.jointplot(x='tip', y='total_bill', data=database)
plt.show()
b30383e1e0812334365d6d00fc5de29d0adbcd14 | 450 | py | Python | dags/api_exports/s3_path_helper.py | AgentIQ/aiq-airflow | e4463e00602dcdae26334d252502781534feeac8 | [
"Apache-2.0"
] | null | null | null | dags/api_exports/s3_path_helper.py | AgentIQ/aiq-airflow | e4463e00602dcdae26334d252502781534feeac8 | [
"Apache-2.0"
] | 12 | 2020-04-03T17:05:53.000Z | 2021-12-01T22:55:39.000Z | dags/api_exports/s3_path_helper.py | AgentIQ/aiq-airflow | e4463e00602dcdae26334d252502781534feeac8 | [
"Apache-2.0"
] | null | null | null |
from airflow.models import Variable
from tools.utils.file_util import append_date_to_path
from os import path

ENV = Variable.get('ENVIRONMENT')


def get_exports_bucket_name():
    return 'exports-api'


def invalid_data_dir_name():
    return 'invalid-data'


def get_s3_invalid_data_subfolder_path():
    return append_date_to_path(path.join(ENV, invalid_data_dir_name()))
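`append_date_to_path` lives in `tools.utils.file_util` and is not shown in this file; a plausible standalone sketch of such a helper (assumed behavior, not the real implementation) might look like:

```python
from datetime import date
from os import path


def append_date_to_path(p, when=None):
    """Hypothetical sketch: join an ISO date (YYYY-MM-DD) onto the path."""
    when = when or date.today()
    return path.join(p, when.isoformat())


print(append_date_to_path("prod/invalid-data", date(2021, 1, 15)))
```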
b306da6d5f950c8b54bd072e16269a79509ad8f3 | 759 | py | Python | app/config.py | gnstaxo/vlaskola | cc6c4b7f85fa7c484c03b80732cdc0a6d66e42d3 | [
"MIT"
] | null | null | null | app/config.py | gnstaxo/vlaskola | cc6c4b7f85fa7c484c03b80732cdc0a6d66e42d3 | [
"MIT"
] | null | null | null | app/config.py | gnstaxo/vlaskola | cc6c4b7f85fa7c484c03b80732cdc0a6d66e42d3 | [
"MIT"
] | null | null | null |
import os


class Config(object):
    DATABASE = {
        'name': 'vlaskola',
        'engine': 'peewee.PostgresqlDatabase',
        'user': 'postgres'
    }
    SECRET_KEY = os.environ.get("SECRET_KEY")
    SECURITY_REGISTERABLE = True
    SECURITY_SEND_REGISTER_EMAIL = False
    SECURITY_PASSWORD_SALT = os.environ.get("SECURITY_PASSWORD_SALT")
    SECURITY_FLASH_MESSAGES = False
    SECURITY_URL_PREFIX = '/api/accounts'
    SECURITY_REDIRECT_BEHAVIOR = "spa"
    SECURITY_CSRF_PROTECT_MECHANISMS = ["session", "basic"]
    SECURITY_CSRF_IGNORE_UNAUTH_ENDPOINTS = True
    SECURITY_CSRF_COOKIE = {"key": "XSRF-TOKEN"}
    WTF_CSRF_CHECK_DEFAULT = False
    WTF_CSRF_TIME_LIMIT = None
b3186f1e4992e9372f7e2b537bf6791464dc0c9f | 284 | py | Python | src/contnext_viewer/constants.py | ContNeXt/web_app | 0ace1077ee07902cadca684e4e06b3e91cea437f | [
"MIT"
] | 3 | 2022-01-14T11:56:08.000Z | 2022-01-14T12:36:42.000Z | src/contnext_viewer/constants.py | ContNeXt/web_app | 0ace1077ee07902cadca684e4e06b3e91cea437f | [
"MIT"
] | null | null | null | src/contnext_viewer/constants.py | ContNeXt/web_app | 0ace1077ee07902cadca684e4e06b3e91cea437f | [
"MIT"
] | null | null | null |
CONTEXT = {'cell_line': 'Cell Line',
           'cellline': 'Cell Line',
           'cell_type': 'Cell Type',
           'celltype': 'Cell Type',
           'tissue': 'Tissue',
           'interactome': 'Interactome'
           }

HIDDEN_FOLDER = '.contnext'
ZENODO_URL = 'https://zenodo.org/record/5831786/files/data.zip?download=1'
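A small hypothetical helper (not in the original module) showing how `CONTEXT` maps both key spellings of a context type to one display label:

```python
CONTEXT = {'cell_line': 'Cell Line', 'cellline': 'Cell Line',
           'cell_type': 'Cell Type', 'celltype': 'Cell Type',
           'tissue': 'Tissue', 'interactome': 'Interactome'}


def context_label(raw_key):
    # Fall back to the raw key when the context type is unknown
    return CONTEXT.get(raw_key.lower(), raw_key)


print(context_label("cell_line"))  # → Cell Line
print(context_label("CellLine"))   # → Cell Line
```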
b322432a799a1d8bfebdcca1a1d23a0b8419147c | 869 | py | Python | src/curt/curt/modules/smarthome/base_provider.py | sanyaade-teachings/cep | 59e22b148c3a95eff521ce75cf4eacbcfb074115 | [
"MIT"
] | 108 | 2021-08-09T17:10:39.000Z | 2022-03-21T21:59:03.000Z | src/curt/curt/modules/smarthome/base_provider.py | sanyaade-teachings/cep | 59e22b148c3a95eff521ce75cf4eacbcfb074115 | [
"MIT"
] | 15 | 2021-09-19T01:25:25.000Z | 2022-03-28T18:47:49.000Z | src/curt/curt/modules/smarthome/base_provider.py | sanyaade-teachings/cep | 59e22b148c3a95eff521ce75cf4eacbcfb074115 | [
"MIT"
] | 14 | 2021-08-10T04:42:17.000Z | 2022-03-28T16:30:34.000Z |
"""
Copyright (C) Cortic Technology Corp. - All Rights Reserved
Written by Michael Ng <michaelng@cortic.ca>, 2021
"""

from abc import abstractmethod


class BaseProvider:
    def __init__(self):
        self.token = ""

    @abstractmethod
    def config_control_handler(self, params):
        pass

    def command(self, params):
        data = params["ready_data"][0]
        if data["control_type"] == "get_devices":
            return self.get_devices(data)
        elif data["control_type"] == "light":
            return self.control_light(data)
        elif data["control_type"] == "media_player":
            return self.control_media_player(data)

    @abstractmethod
    def get_devices(self, data):
        pass

    @abstractmethod
    def control_light(self, data):
        pass

    @abstractmethod
    def control_media_player(self, data):
        pass
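The `command` method dispatches on `control_type` to whichever handler a subclass supplies. This self-contained sketch mirrors that dispatch with a minimal stand-in base class and a hypothetical `EchoProvider` subclass (both names invented for the demo):

```python
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    """Minimal stand-in mirroring the dispatch logic of the real BaseProvider."""

    def command(self, params):
        data = params["ready_data"][0]
        if data["control_type"] == "get_devices":
            return self.get_devices(data)
        elif data["control_type"] == "light":
            return self.control_light(data)
        elif data["control_type"] == "media_player":
            return self.control_media_player(data)

    @abstractmethod
    def get_devices(self, data): ...

    @abstractmethod
    def control_light(self, data): ...

    @abstractmethod
    def control_media_player(self, data): ...


class EchoProvider(BaseProvider):
    """Hypothetical provider that just reports which handler ran."""

    def get_devices(self, data):
        return "get_devices"

    def control_light(self, data):
        return "light:" + data.get("state", "")

    def control_media_player(self, data):
        return "media_player"


provider = EchoProvider()
print(provider.command({"ready_data": [{"control_type": "light", "state": "on"}]}))  # → light:on
```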
b3257a07a0498038daca551c93dd11773e3cde75 | 261 | py | Python | react-python/leadmanager/leadmanager/urls.py | tienduy-nguyen/django-web | 4b4078042b9c388dd00907a09d10e2a7889f4dea | [
"MIT"
] | null | null | null | react-python/leadmanager/leadmanager/urls.py | tienduy-nguyen/django-web | 4b4078042b9c388dd00907a09d10e2a7889f4dea | [
"MIT"
] | 6 | 2021-04-08T20:20:47.000Z | 2022-02-13T20:21:29.000Z | react-python/leadmanager/leadmanager/urls.py | tienduy-nguyen/django-web | 4b4078042b9c388dd00907a09d10e2a7889f4dea | [
"MIT"
] | null | null | null |
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('', include('frontend.urls')),  # home page react
    path('', include('leads.urls')),  # leads page api
    path('', include('accounts.urls')),  # accounts page api
]
b32c54c34d8c482d4cbe8eb7df6bc337529fc1cb | 13,751 | py | Python | tools/moduletests/unit/test_arpignore.py | ketanbhut/aws-ec2rescue-linux | 3a4c096f31005ea3b3c36bd8e6f840d457ccc937 | [
"Apache-2.0"
] | null | null | null | tools/moduletests/unit/test_arpignore.py | ketanbhut/aws-ec2rescue-linux | 3a4c096f31005ea3b3c36bd8e6f840d457ccc937 | [
"Apache-2.0"
] | null | null | null | tools/moduletests/unit/test_arpignore.py | ketanbhut/aws-ec2rescue-linux | 3a4c096f31005ea3b3c36bd8e6f840d457ccc937 | [
"Apache-2.0"
] | null | null | null |
# Copyright 2016-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""
Unit tests for the arpignore module
"""
import os
import subprocess
import sys
import unittest
import mock
import moduletests.src.arpignore
try:
    # Python 2.x
    from cStringIO import StringIO
except ImportError:
    # Python 3.x
    from io import StringIO

if sys.hexversion >= 0x3040000:
    # contextlib.redirect_stdout was introduced in Python 3.4
    import contextlib
else:
    # contextlib2 is a backport of contextlib from Python 3.5 and is compatible with Python2/3
    import contextlib2 as contextlib

# builtins was named __builtin__ in Python 2 so accommodate the change for the purposes of mocking the open call
if sys.version_info >= (3,):
    builtins_name = "builtins"
else:
    builtins_name = "__builtin__"
class Testarpignore(unittest.TestCase):
    config_file_path = "/etc/sysctl.d/55-arp-ignore.conf"

    def setUp(self):
        self.output = StringIO()

    def tearDown(self):
        self.output.close()

    @mock.patch("subprocess.check_output")
    def test_detect_noproblem(self, check_output_mock):
        """Test that no problem is detected with expected-good output."""
        check_output_mock.return_value = "arp_ignore = 0"
        self.assertFalse(moduletests.src.arpignore.detect())
        self.assertTrue(check_output_mock.called)

    @mock.patch("subprocess.check_output")
    def test_detect_problem(self, check_output_mock):
        """Test that the problem is detected with expected-bad output."""
        check_output_mock.return_value = "arp_ignore = 1"
        self.assertTrue(moduletests.src.arpignore.detect())
        self.assertTrue(check_output_mock.called)

    @mock.patch("subprocess.check_output", side_effect=["net.ipv4.conf.all.arp_ignore = 1",
                                                        subprocess.CalledProcessError(1, "test")])
    def test_fix_sysctlfail(self, check_output_mock):
        with contextlib.redirect_stdout(self.output):
            self.assertRaises(subprocess.CalledProcessError, moduletests.src.arpignore.fix, self.config_file_path)
        self.assertTrue(check_output_mock.called)
        self.assertTrue(self.output.getvalue().endswith(
            "[UNFIXED] net.ipv4.conf.all.arp_ignore=0 failed for running system\n"))

    @mock.patch("subprocess.check_output")
    @mock.patch("moduletests.src.arpignore.os.path.exists", side_effect=[False])
    @mock.patch("moduletests.src.arpignore.open", side_effect=IOError)
    def test_fix_write_new_fail(self, open_mock, exists_mock, check_output_mock):
        check_output_mock.return_value = "net.ipv4.conf.lo.arp_announce = 0\nnet.ipv4.conf.all.arp_ignore = 1"
        with contextlib.redirect_stdout(self.output):
            self.assertRaises(IOError, moduletests.src.arpignore.fix, self.config_file_path)
        self.assertTrue(open_mock.called)
        self.assertTrue(exists_mock.called)
        self.assertTrue(check_output_mock.called)
        self.assertTrue(self.output.getvalue().endswith(
            "[UNFIXED] Unable to open /etc/sysctl.d/55-arp-ignore.conf and write to it.\n"))

    @mock.patch("subprocess.check_output")
@mock.patch("moduletests.src.arpignore.os.path.exists", side_effect=[False])
@mock.patch("moduletests.src.arpignore.open", mock.mock_open())
def test_fix_write_new_success(self, exists_mock, check_output_mock):
check_output_mock.return_value = "net.ipv4.conf.lo.arp_announce = 0\nnet.ipv4.conf.all.arp_ignore = 1"
with contextlib.redirect_stdout(self.output):
self.assertTrue(moduletests.src.arpignore.fix(self.config_file_path))
self.assertTrue(self.output.getvalue().endswith("[FIXED] /etc/sysctl.d/55-arp-ignore.conf written.\n"))
self.assertTrue(exists_mock.called)
self.assertTrue(check_output_mock.called)
@mock.patch("subprocess.check_output")
@mock.patch("moduletests.src.arpignore.os.path.exists", side_effect=[True])
def test_fix_success(self, exists_mock, check_output_mock):
check_output_mock.return_value = "net.ipv4.conf.all.arp_ignore = 1\nsome_other = 0"
open_mock = mock.mock_open(read_data="#comment\n"
"net.ipv4.conf.all.arp_ignore = 1\n"
"net.ipv4.conf.lo.arp_ignore = 0\n"
"garbage\n")
# mock_open does not have support for iteration so it must be added manually
# readline() until a blank line is reached (the sentinel)
def iter_func(self):
return iter(self.readline, "")
open_mock.return_value.__iter__ = iter_func
def py3_next_func(self):
return next(iter(self.readline, ""))
if sys.hexversion >= 0x3000000:
open_mock.return_value.__next__ = py3_next_func
with mock.patch("moduletests.src.arpignore.open", open_mock):
with contextlib.redirect_stdout(self.output):
self.assertTrue(moduletests.src.arpignore.fix(self.config_file_path))
self.assertTrue(self.output.getvalue().endswith("[FIXED] /etc/sysctl.d/55-arp-ignore.conf written.\n"))
self.assertEqual(str(open_mock.mock_calls), "[call('/etc/sysctl.d/55-arp-ignore.conf', 'r'),\n"
" call().__enter__(),\n call().readlines(),\n"
" call().__exit__(None, None, None),\n"
" call('/etc/sysctl.d/55-arp-ignore.conf', 'w'),\n"
" call().__enter__(),\n"
" call().write('#comment\\nnet.ipv4.conf.lo.arp_ignore = 0'),\n"
" call().write('\\n'),\n"
" call().write('net.ipv4.conf.all.arp_ignore = 0'),\n"
" call().write('\\n'),\n"
" call().__exit__(None, None, None)]")
self.assertTrue(exists_mock.called)
self.assertTrue(check_output_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict", return_value=dict())
@mock.patch("moduletests.src.arpignore.detect", return_value=False)
def test_run_success(self, detect_mock, config_mock):
with contextlib.redirect_stdout(self.output):
self.assertTrue(moduletests.src.arpignore.run())
self.assertEqual(self.output.getvalue(), "Determining if any interfaces are set to ignore arp requests\n"
"[SUCCESS] arp ignore is disabled for all interfaces.\n")
self.assertTrue(detect_mock.called)
self.assertTrue(config_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict")
@mock.patch("moduletests.src.arpignore.detect", return_value=True)
def test_run_no_remediate(self, detect_mock, config_mock):
config_mock.return_value = {"BACKUP_DIR": "/var/tmp/ec2rl",
"LOG_DIR": "/var/tmp/ec2rl",
"BACKED_FILES": dict(),
"REMEDIATE": False,
"SUDO": True}
with contextlib.redirect_stdout(self.output):
self.assertFalse(moduletests.src.arpignore.run())
self.assertTrue("[UNFIXED] Remediation impossible without sudo and --remediate.\n"
"-- Running as root/sudo: True\n"
"-- Required --remediate flag specified: False\n"
"[FAILURE] arp ignore is enabled for one or more interfaces. Please see the module log\n"
in self.output.getvalue())
self.assertTrue(detect_mock.called)
self.assertTrue(config_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict")
@mock.patch("moduletests.src.arpignore.detect", return_value=True)
@mock.patch("moduletests.src.arpignore.os.path.isfile", return_value=True)
@mock.patch("moduletests.src.arpignore.backup", return_value=True)
@mock.patch("moduletests.src.arpignore.fix", return_value=True)
@mock.patch("moduletests.src.arpignore.restore", return_value=True)
def test_run_failure_isfile(self,
restore_mock,
fix_mock,
backup_mock,
isfile_mock,
detect_mock,
config_mock):
config_mock.return_value = {"BACKUP_DIR": "/var/tmp/ec2rl",
"LOG_DIR": "/var/tmp/ec2rl",
"BACKED_FILES": {self.config_file_path: "/some/path"},
"REMEDIATE": True,
"SUDO": True}
with contextlib.redirect_stdout(self.output):
self.assertFalse(moduletests.src.arpignore.run())
self.assertTrue("[FAILURE] arp ignore is enabled for one or more interfaces. "
"Please see the module log"
in self.output.getvalue())
self.assertTrue(restore_mock.called)
self.assertTrue(fix_mock.called)
self.assertTrue(backup_mock.called)
self.assertTrue(isfile_mock.called)
self.assertTrue(detect_mock.called)
self.assertTrue(config_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict")
@mock.patch("moduletests.src.arpignore.detect", return_value=True)
@mock.patch("moduletests.src.arpignore.os.path.isfile", return_value=False)
@mock.patch("moduletests.src.arpignore.fix", return_value=True)
def test_run_failure(self, fix_mock, isfile_mock, detect_mock, config_mock):
config_mock.return_value = {"BACKUP_DIR": "/var/tmp/ec2rl",
"LOG_DIR": "/var/tmp/ec2rl",
"BACKED_FILES": dict(),
"REMEDIATE": True,
"SUDO": True}
with contextlib.redirect_stdout(self.output):
self.assertFalse(moduletests.src.arpignore.run())
self.assertTrue("[FAILURE] arp ignore is enabled for one or more interfaces. "
"Please see the module log"
in self.output.getvalue())
self.assertTrue(fix_mock.called)
self.assertTrue(isfile_mock.called)
self.assertTrue(detect_mock.called)
self.assertTrue(config_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict")
@mock.patch("moduletests.src.arpignore.detect", side_effect=(True, False))
@mock.patch("moduletests.src.arpignore.os.path.isfile", return_value=False)
@mock.patch("moduletests.src.arpignore.fix", return_value=True)
def test_run_fix(self, fix_mock, isfile_mock, detect_mock, config_mock):
config_mock.return_value = {"BACKUP_DIR": "/var/tmp/ec2rl",
"LOG_DIR": "/var/tmp/ec2rl",
"BACKED_FILES": dict(),
"REMEDIATE": True,
"SUDO": True}
with contextlib.redirect_stdout(self.output):
self.assertTrue(moduletests.src.arpignore.run())
self.assertEqual(self.output.getvalue(), "Determining if any interfaces are set to ignore arp requests\n"
"[SUCCESS] arp ignore is disabled for all interfaces "
"after remediation.\n")
self.assertTrue(fix_mock.called)
self.assertTrue(isfile_mock.called)
self.assertTrue(detect_mock.called)
self.assertTrue(config_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict")
@mock.patch("moduletests.src.arpignore.detect", side_effect=Exception)
@mock.patch("moduletests.src.arpignore.restore", return_value=True)
def test_run_exception(self, restore_mock, detect_mock, config_mock):
config_mock.return_value = {"BACKUP_DIR": "/var/tmp/ec2rl",
"LOG_DIR": "/var/tmp/ec2rl",
"BACKED_FILES": {self.config_file_path: "/some/path"},
"REMEDIATE": True,
"SUDO": True}
with contextlib.redirect_stdout(self.output):
self.assertFalse(moduletests.src.arpignore.run())
self.assertTrue(restore_mock.called)
self.assertTrue(detect_mock.called)
self.assertTrue(config_mock.called)
@mock.patch("moduletests.src.arpignore.get_config_dict", side_effect=IOError)
def test_run_failure_config_exception(self, config_mock):
with contextlib.redirect_stdout(self.output):
self.assertFalse(moduletests.src.arpignore.run())
self.assertTrue(self.output.getvalue().endswith("Review the logs to determine the cause of the issue.\n"))
self.assertTrue(config_mock.called)
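The `test_fix_success` case above attaches an `__iter__` built from `iter(self.readline, "")` because `mock_open` handles historically lacked iteration support. A minimal standalone sketch of that sentinel idiom (file name and sample data are illustrative):

```python
from unittest import mock

# mock_open's handle supports read()/readline(), but iterating it line by
# line was not always supported. The workaround used in the tests above is
# the two-argument iter(): call readline() repeatedly until it returns the
# empty-string sentinel that marks end of file.
open_mock = mock.mock_open(read_data="alpha\nbeta\ngamma\n")

fh = open_mock("fake.txt")            # the mocked file handle
lines = list(iter(fh.readline, ""))   # readline() until the "" sentinel

assert lines == ["alpha\n", "beta\n", "gamma\n"]
```

The same callable can be wrapped in a function and assigned to the handle's `__iter__`, which is exactly what `iter_func` does in `test_fix_success`.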
| 52.888462 | 116 | 0.61661 | 1,592 | 13,751 | 5.146985 | 0.157035 | 0.076886 | 0.117891 | 0.078594 | 0.749085 | 0.719185 | 0.696241 | 0.664633 | 0.625946 | 0.605077 | 0 | 0.008993 | 0.272198 | 13,751 | 259 | 117 | 53.092664 | 0.809752 | 0.080794 | 0 | 0.552885 | 0 | 0.019231 | 0.256288 | 0.146394 | 0 | 0 | 0.001428 | 0 | 0.269231 | 1 | 0.081731 | false | 0 | 0.052885 | 0.009615 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b3410b7be2d9e7a7ca55f4f09e61f02b2e5dc114 | 15,379 | py | Python | radical_translations/core/tests/test_models.py | kingsdigitallab/radical_translations | c18ca1ccc0ab2d88ae472dc2eda58e2ff9dcc76a | [
"MIT"
] | 3 | 2022-02-08T18:03:44.000Z | 2022-03-18T18:10:43.000Z | radical_translations/core/tests/test_models.py | kingsdigitallab/radical_translations | c18ca1ccc0ab2d88ae472dc2eda58e2ff9dcc76a | [
"MIT"
] | 19 | 2020-05-11T15:36:35.000Z | 2022-02-08T11:26:40.000Z | radical_translations/core/tests/test_models.py | kingsdigitallab/radical_translations | c18ca1ccc0ab2d88ae472dc2eda58e2ff9dcc76a | [
"MIT"
] | null | null | null | from collections import defaultdict
from typing import Dict
import pytest
from radical_translations.agents.models import Organisation, Person
from radical_translations.core.models import (
Classification,
Contribution,
Resource,
ResourceRelationship,
Title,
)
from radical_translations.utils.models import Date
pytestmark = pytest.mark.django_db
class TestTitle:
def test___str__(self):
t = Title(main_title="main")
assert "main" in str(t)
assert ":" not in str(t)
t = Title(main_title="main", subtitle="sub")
assert "main" in str(t)
assert ":" in str(t)
assert "sub" in str(t)
def test_get_or_create(self):
assert Title.get_or_create(None, None) is None
mt = "hello"
title = Title.get_or_create(mt)
assert title is not None
assert mt == title.main_title
for mt in ["untitled", "translation"]:
title = Title.get_or_create(mt)
assert title is not None
assert mt == title.main_title
assert 1 == title.subtitle
title = Title.get_or_create(mt)
assert title is not None
assert mt == title.main_title
assert 2 == title.subtitle
@pytest.mark.usefixtures("vocabulary")
class TestClassification:
@pytest.mark.usefixtures("resource")
def test_get_or_create(self, resource: Resource):
assert Classification.get_or_create(None, None) is None
assert Classification.get_or_create(resource, None) is None
assert Classification.get_or_create(None, "adapted") is None
assert Classification.get_or_create(resource, "source-text") is not None
assert Classification.get_or_create(resource, "unknown") is not None
assert Classification.get_or_create(resource, "adapted") is not None
@pytest.mark.usefixtures("vocabulary")
class TestContribution:
@pytest.mark.usefixtures("entry_original", "person", "resource")
def test_from_gsx_entry(
self,
resource: Resource,
entry_original: Dict[str, Dict[str, str]],
person: Person,
):
assert Contribution.from_gsx_entry(None, None, None, None) is None
assert Contribution.from_gsx_entry(resource, None, None, None) is None
assert Contribution.from_gsx_entry(None, entry_original, None, None) is None
contributions = Contribution.from_gsx_entry(
resource, entry_original, "authors", "author"
)
assert contributions is not None
assert len(contributions) == 1
entry_original["gsx$authors"]["$t"] = person.name
contributions = Contribution.from_gsx_entry(
resource, entry_original, "authors", "author"
)
assert contributions is not None
assert len(contributions) == 1
@pytest.mark.usefixtures("person", "resource")
def test_get_or_create(self, person: Person, resource: Resource):
role = "tester"
assert Contribution.get_or_create(None, None, None) is None
assert Contribution.get_or_create(resource, None, None) is None
assert Contribution.get_or_create(None, person, None) is None
assert Contribution.get_or_create(resource, person, None) is not None
assert (
Contribution.get_or_create(resource, person, None, "Pseudo Nym") is not None
)
c = Contribution.get_or_create(resource, person, role)
assert c is not None
assert role in c.roles.first().label.lower()
@pytest.mark.usefixtures("person", "resource")
def test___str__(self, person: Person, resource: Resource):
c = Contribution.get_or_create(resource, person, None, None)
assert person.name in str(c)
pseudonym = "Pseudo Nym"
c = Contribution.get_or_create(resource, person, None, pseudonym)
assert pseudonym in str(c)
assert person.name in str(c)
@pytest.mark.usefixtures("vocabulary")
class TestResource:
@pytest.mark.usefixtures("resource")
def test_has_date_radical(self, resource):
assert not resource.has_date_radical()
resource.date = Date(date_radical="year 1")
resource.date.save()
assert resource.has_date_radical()
@pytest.mark.usefixtures("entry_original", "entry_translation")
def test_is_original(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
):
resource = Resource.from_gsx_entry(entry_original)
assert resource.is_original() is True
resource = Resource.from_gsx_entry(entry_translation)
assert resource.is_original() is False
@pytest.mark.usefixtures("entry_original")
def test_get_paratext(self, entry_original: Dict[str, Dict[str, str]]):
resource = Resource.from_gsx_entry(entry_original)
assert resource.get_paratext().count() == 1
paratext = Resource.paratext_from_gsx_entry(entry_original, resource)
assert paratext.get_paratext().count() == 0
@pytest.mark.usefixtures("entry_original")
def test_paratext_of(self, entry_original: Dict[str, Dict[str, str]]):
resource = Resource.from_gsx_entry(entry_original)
assert resource.is_paratext() is False
assert resource.paratext_of() is None
paratext = Resource.paratext_from_gsx_entry(entry_original, resource)
assert paratext.is_paratext() is True
paratext_of = paratext.paratext_of()
assert paratext_of is not None
assert paratext_of.id == resource.id
@pytest.mark.usefixtures("entry_original")
def test_get_date(self, entry_original: Dict[str, Dict[str, str]]):
resource = Resource.from_gsx_entry(entry_original)
assert resource.date is not None
assert resource.get_date() is not None
paratext = Resource.paratext_from_gsx_entry(entry_original, resource)
assert paratext.date is None
assert paratext.get_date() is not None
assert paratext.get_date() == resource.get_date()
@pytest.mark.usefixtures("entry_original", "entry_translation", "entry_edition")
def test_is_translation(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
entry_edition: Dict[str, Dict[str, str]],
):
Resource.from_gsx_entry(entry_original)
resource = Resource.relationships_from_gsx_entry(entry_original)
assert resource.is_translation() is False
Resource.from_gsx_entry(entry_translation)
resource = Resource.relationships_from_gsx_entry(entry_translation)
assert resource.is_translation() is True
Resource.from_gsx_entry(entry_edition)
resource = Resource.relationships_from_gsx_entry(entry_edition)
assert resource.is_translation() is True
@pytest.mark.usefixtures("entry_original", "entry_edition")
def test_get_authors(
self,
entry_original: Dict[str, Dict[str, str]],
entry_edition: Dict[str, Dict[str, str]],
):
resource = Resource.from_gsx_entry(entry_original)
assert "Constantin" in resource.get_authors()
resource = Resource.from_gsx_entry(entry_edition)
authors = resource.get_authors()
assert "Samson" in authors
assert ";" in authors
assert "Dalila" in authors
@pytest.mark.usefixtures("entry_original", "entry_translation")
def test_get_authors_source_text(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
):
resource = Resource.from_gsx_entry(entry_original)
assert resource.get_authors_source_text() is None
Resource.from_gsx_entry(entry_translation)
resource = Resource.relationships_from_gsx_entry(entry_translation)
authors = resource.get_authors_source_text()
assert authors is not None
assert "Constantin" in authors[0].name
@pytest.mark.usefixtures("entry_original", "entry_translation")
def test_get_classification_edition(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
):
resource = Resource.from_gsx_entry(entry_original)
assert resource.get_classification_edition().lower() == "source-text"
resource = Resource.from_gsx_entry(entry_translation)
assert resource.get_classification_edition().lower() == "integral"
@pytest.mark.usefixtures("entry_original", "entry_translation")
def test_get_language_names(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
):
resource = Resource.from_gsx_entry(entry_original)
assert resource.get_language_names() == "French"
resource = Resource.from_gsx_entry(entry_translation)
assert resource.get_language_names() == "English"
@pytest.mark.usefixtures("entry_original", "entry_translation")
def test_get_place_names(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
):
resource = Resource.from_gsx_entry(entry_original)
assert resource.get_place_names() == "Paris"
resource = Resource.from_gsx_entry(entry_translation)
assert resource.get_place_names() == "London"
@pytest.mark.usefixtures(
"entry_original",
"entry_translation",
"entry_edition",
"organisation",
"person",
)
def test_from_gsx_entry(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
entry_edition: Dict[str, Dict[str, str]],
organisation: Organisation,
person: Person,
):
assert Resource.from_gsx_entry(None) is None
entry = defaultdict(defaultdict)
entry["gsx$title"]["$t"] = ""
assert Resource.from_gsx_entry(entry) is None
entry["gsx$title"]["$t"] = "Work 1"
assert Resource.from_gsx_entry(entry) is not None
entry["gsx$authors"]["$t"] = f"{person.name}"
resource = Resource.from_gsx_entry(entry)
assert resource.contributions.first().agent.name == person.name
entry["gsx$language"]["$t"] = "French [fr]; English [en]"
resource = Resource.from_gsx_entry(entry)
assert "French" in resource.get_language_names()
entry_original["gsx$authors"]["$t"] = ""
entry_original["gsx$organisation"]["$t"] = ""
resource = Resource.from_gsx_entry(entry_original)
assert resource is not None
assert "ruines" in resource.title.main_title
assert resource.date.date_display == "1791"
assert "Paris" in resource.places.first().place.address
entry_original["gsx$authors"]["$t"] = f"{person.name}"
resource = Resource.from_gsx_entry(entry_original)
assert resource.contributions.count() == 1
entry_original["gsx$organisation"]["$t"] = f"{organisation.name}"
resource = Resource.from_gsx_entry(entry_original)
assert resource.contributions.count() == 2
@pytest.mark.usefixtures("resource")
def test_languages_from_gsx_entry(self, resource: Resource):
entry = defaultdict(defaultdict)
entry["gsx$language"]["$t"] = "French [fr]"
assert Resource.languages_from_gsx_entry(None, None) is None
assert Resource.languages_from_gsx_entry(resource, None) is None
assert Resource.languages_from_gsx_entry(None, entry) is None
languages = Resource.languages_from_gsx_entry(resource, entry)
assert languages is not None
assert len(languages) == 1
entry["gsx$language"]["$t"] = "French [fr]; English [en]"
languages = Resource.languages_from_gsx_entry(resource, entry)
assert languages is not None
assert len(languages) == 2
@pytest.mark.usefixtures("resource")
def test_subjects_from_gsx_entry(self, resource: Resource):
entry = defaultdict(defaultdict)
entry["gsx$genre"]["$t"] = "essay"
assert Resource.subjects_from_gsx_entry(None, None) is None
assert Resource.subjects_from_gsx_entry(resource, None) is None
assert Resource.subjects_from_gsx_entry(None, entry) is None
subjects = Resource.subjects_from_gsx_entry(resource, entry)
assert subjects is not None
assert len(subjects) == 1
entry["gsx$genre"]["$t"] = "essay; letter"
subjects = Resource.subjects_from_gsx_entry(resource, entry)
assert subjects is not None
assert len(subjects) == 2
@pytest.mark.usefixtures("entry_original", "resource")
def test_paratext_from_gsx_entry(
self,
entry_original: Dict[str, Dict[str, str]],
resource: Resource,
):
assert Resource.paratext_from_gsx_entry(None, None) is None
assert Resource.paratext_from_gsx_entry(entry_original, None) is None
assert Resource.paratext_from_gsx_entry(None, resource) is None
paratext = Resource.paratext_from_gsx_entry(entry_original, resource)
assert paratext is not None
assert paratext.title == resource.title
assert paratext.summary is not None
assert paratext.notes is not None
assert paratext.relationships.count() == 1
@pytest.mark.usefixtures("entry_original", "entry_translation", "entry_edition")
def test_relationships_from_gsx_entry(
self,
entry_original: Dict[str, Dict[str, str]],
entry_translation: Dict[str, Dict[str, str]],
entry_edition: Dict[str, Dict[str, str]],
):
assert Resource.relationships_from_gsx_entry(None) is None
Resource.from_gsx_entry(entry_original)
resource = Resource.relationships_from_gsx_entry(entry_original)
assert resource is not None
assert resource.relationships.count() == 0
Resource.from_gsx_entry(entry_translation)
resource = Resource.relationships_from_gsx_entry(entry_translation)
assert resource is not None
assert resource.relationships.count() == 1
entry = defaultdict(defaultdict)
entry["gsx$title"]["$t"] = "Discours sur le gouvernement"
Resource.from_gsx_entry(entry)
entry = defaultdict(defaultdict)
entry["gsx$title"]["$t"] = "Discourses concerning government"
Resource.from_gsx_entry(entry)
Resource.from_gsx_entry(entry_edition)
resource = Resource.relationships_from_gsx_entry(entry_edition)
assert resource is not None
assert resource.relationships.count() == 2
@pytest.mark.usefixtures("vocabulary")
class TestResourceRelationship:
@pytest.mark.usefixtures("resource")
def test_get_or_create(self, resource: Resource):
assert ResourceRelationship.get_or_create(None, None, None) is None
assert ResourceRelationship.get_or_create(resource, None, resource) is None
assert (
ResourceRelationship.get_or_create(resource, "child of", resource) is None
)
rr = ResourceRelationship.get_or_create(resource, "related to", resource)
assert rr is not None
assert resource.relationships.count() == 1
| 38.066832 | 88 | 0.675597 | 1,845 | 15,379 | 5.406504 | 0.069919 | 0.047018 | 0.080602 | 0.071579 | 0.806516 | 0.713383 | 0.667469 | 0.59589 | 0.544762 | 0.446917 | 0 | 0.002013 | 0.224657 | 15,379 | 403 | 89 | 38.16129 | 0.834535 | 0 | 0 | 0.48318 | 0 | 0 | 0.076143 | 0 | 0 | 0 | 0 | 0 | 0.348624 | 1 | 0.070336 | false | 0 | 0.018349 | 0 | 0.103976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b34659616a666e655e69acc258aeb8b0540e5cba | 498 | py | Python | Bionic/core/Text.py | TarunavBA/Bionic | 1b8981e9f4615db28a2a6800b5f1e0f0b83beed5 | [
"MIT"
] | 4 | 2021-11-11T09:47:57.000Z | 2021-11-11T14:38:50.000Z | Bionic/core/Text.py | TarunavBA/Bionic | 1b8981e9f4615db28a2a6800b5f1e0f0b83beed5 | [
"MIT"
] | 2 | 2021-11-11T09:41:18.000Z | 2021-11-11T14:12:23.000Z | Bionic/core/Text.py | TarunavBA/Bionic | 1b8981e9f4615db28a2a6800b5f1e0f0b83beed5 | [
"MIT"
] | 1 | 2021-11-11T14:02:00.000Z | 2021-11-11T14:02:00.000Z | def Text(Text="", classname="", Type="h1", TextStyle="", Id="0"):
if TextStyle != "":
if classname != "":
return f"""<{Type} id={Id} class='{classname}' style='{TextStyle}'>{Text}</{Type}>"""
else:
return f"""<{Type} id={Id} style='{TextStyle}'>{Text}</{Type}>"""
else:
if classname != "":
return f"""<{Type} id={Id} class='{classname}'>{Text}</{Type}>"""
else:
return f"""<{Type} id={Id} >{Text}</{Type}>"""
| 38.307692 | 97 | 0.463855 | 54 | 498 | 4.277778 | 0.259259 | 0.121212 | 0.190476 | 0.225108 | 0.753247 | 0.580087 | 0.580087 | 0.580087 | 0.34632 | 0 | 0 | 0.005391 | 0.25502 | 498 | 12 | 98 | 41.5 | 0.617251 | 0 | 0 | 0.454545 | 0 | 0 | 0.417671 | 0.210843 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
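A quick usage sketch of the helper; it is restated in condensed form under the illustrative name `text_tag` so the snippet is self-contained:

```python
# Condensed restatement of the Text() helper above, inlined only so this
# demo runs on its own; the emitted attribute strings follow the same rules.
def text_tag(text="", classname="", tag="h1", style="", id_="0"):
    attrs = f"id={id_}"
    if classname != "":
        attrs += f" class='{classname}'"
    if style != "":
        attrs += f" style='{style}'"
    return f"<{tag} {attrs}>{text}</{tag}>"

html = text_tag("Hello", classname="title", tag="p", style="color: red", id_="5")
assert html == "<p id=5 class='title' style='color: red'>Hello</p>"
```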
b348724e3c00bf807ec5e719cacee4f0dcc38030 | 1,964 | py | Python | gf2_mul.py | fast-crypto-lab/Frobenius_AFFT | 219893fbc5a37edb541cfb47cd46071bb79d3835 | [
"MIT"
] | 11 | 2018-02-17T04:08:58.000Z | 2021-12-14T02:17:46.000Z | gf2_mul.py | fast-crypto-lab/Frobenius_AFFT | 219893fbc5a37edb541cfb47cd46071bb79d3835 | [
"MIT"
] | null | null | null | gf2_mul.py | fast-crypto-lab/Frobenius_AFFT | 219893fbc5a37edb541cfb47cd46071bb79d3835 | [
"MIT"
] | 3 | 2018-03-12T06:17:36.000Z | 2021-05-25T11:15:48.000Z | # Copyright (C) 2017 Ming-Shing Chen
def gf2_mul( a , b ):
return a&b
# gf4 := gf2[x]/x^2+x+1
# 4 and , 3 xor
def gf4_mul( a , b ):
a0 = a&1
a1 = (a>>1)&1
b0 = b&1
b1 = (b>>1)&1
ab0 = a0&b0
ab1 = (a1&b0)^(a0&b1)
ab2 = a1&b1
ab0 ^= ab2
ab1 ^= ab2
ab0 ^= (ab1<<1)
return ab0
# gf16 := gf4[y]/y^2+y+x
# gf16 mul: xor: 18 ,and: 12
def gf16_mul( a , b ):
a0 = a&3
a1 = (a>>2)&3
b0 = b&3
b1 = (b>>2)&3
a0b0 = gf4_mul( a0 , b0 )
a1b1 = gf4_mul( a1 , b1 )
a0a1xb0b1_a0b0 = gf4_mul( a0^a1 , b0^b1 ) ^ a0b0
rd0 = gf4_mul( 2 , a1b1 )
a0b0 ^= rd0
return a0b0|(a0a1xb0b1_a0b0<<2)
# gf256 := gf16[x]/x^2 + x + 0x8
def gf256_mul( a , b ):
a0 = a&15
a1 = (a>>4)&15
b0 = b&15
b1 = (b>>4)&15
ab0 = gf16_mul( a0 , b0 )
ab1 = gf16_mul( a1 , b0 ) ^ gf16_mul( a0 , b1 )
ab2 = gf16_mul( a1 , b1 )
ab0 ^= gf16_mul( ab2 , 8 )
ab1 ^= ab2
ab0 ^= (ab1<<4)
return ab0
#382 bit operations
def gf216_mul( a , b ):
a0 = a&0xff
a1 = (a>>8)&0xff
b0 = b&0xff
b1 = (b>>8)&0xff
a0b0 = gf256_mul( a0 , b0 )
a1b1 = gf256_mul( a1 , b1 )
#a0b1_a1b0 = gf16_mul( a0^a1 , b0^b1 ) ^ a0b0 ^ a1b1 ^a1b1
a0b1_a1b0 = gf256_mul( a0^a1 , b0^b1 ) ^ a0b0
rd0 = gf256_mul( a1b1 , 0x80 )
return (a0b1_a1b0<<8)|(rd0^a0b0)
#gf65536 := gf256[x]/x^2 + x + 0x80
def gf65536_mul( a , b ):
    a0 = a&0xff
    a1 = (a>>8)&0xff
    b0 = b&0xff
    b1 = (b>>8)&0xff
    ab0 = gf256_mul( a0 , b0 )
    ab2 = gf256_mul( a1 , b1 )
    ab1 = gf256_mul( a0^a1 , b0^b1 )^ab0
    return (ab1<<8)^(ab0^gf256_mul(ab2,0x80))
#gf832 := gf65536[x]/x^2 + x + 0x8000
def gf832_mul( a , b ):
    a0 = a&0xffff
    a1 = (a>>16)&0xffff
    b0 = b&0xffff
    b1 = (b>>16)&0xffff
    ab0 = gf65536_mul( a0 , b0 )
    ab2 = gf65536_mul( a1 , b1 )
    ab1 = gf65536_mul( a0^a1 , b0^b1 )^ab0
    return (ab1<<16)^(ab0^gf65536_mul(ab2,0x8000))
def gf832_inv(a) :
r = a
for i in range(2,32):
r = gf832_mul(r,r)
r = gf832_mul(r,a)
return gf832_mul(r,r)
| 21.11828 | 64 | 0.539715 | 378 | 1,964 | 2.703704 | 0.156085 | 0.053816 | 0.034247 | 0.041096 | 0.204501 | 0.17319 | 0.148728 | 0.113503 | 0.068493 | 0.068493 | 0 | 0.261954 | 0.265275 | 1,964 | 92 | 65 | 21.347826 | 0.446292 | 0.153259 | 0 | 0.056338 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042373 | 0 | 0 | 1 | 0.112676 | false | 0 | 0 | 0.014085 | 0.225352 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
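The field tower above bottoms out in `gf4_mul`; a quick way to gain confidence in it is to compare against the well-known GF(4) multiplication table (elements 0, 1, x→2, x+1→3). A self-contained check, with `gf4_mul` repeated inline so the snippet runs on its own:

```python
def gf4_mul(a, b):
    # GF(4) = GF(2)[x]/(x^2 + x + 1), elements packed as 2-bit integers.
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    ab0 = a0 & b0
    ab1 = (a1 & b0) ^ (a0 & b1)
    ab2 = a1 & b1                 # x^2 term, reduced via x^2 = x + 1
    return (ab0 ^ ab2) | ((ab1 ^ ab2) << 1)

# Full multiplication table; rows/columns are 0, 1, x(=2), x+1(=3).
table = [[gf4_mul(a, b) for b in range(4)] for a in range(4)]
assert table == [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],   # x*x = x+1, x*(x+1) = 1
    [0, 3, 1, 2],   # (x+1)*(x+1) = x
]
```

Every nonzero row contains a 1, so every nonzero element has an inverse, as required of a field.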
b3510d12c4a5ad5e4ba7e1619068d9f611a5a69e | 1,190 | py | Python | config/atari/env_wrapper.py | PeppaCat/EfficientZero | b0e98197abfc36ab34faac043ecea9b756b11d54 | [
"MIT"
] | 1 | 2021-12-12T09:04:34.000Z | 2021-12-12T09:04:34.000Z | config/atari/env_wrapper.py | PeppaCat/EfficientZero | b0e98197abfc36ab34faac043ecea9b756b11d54 | [
"MIT"
] | null | null | null | config/atari/env_wrapper.py | PeppaCat/EfficientZero | b0e98197abfc36ab34faac043ecea9b756b11d54 | [
"MIT"
] | null | null | null | import numpy as np
from core.game import Game
from core.utils import arr_to_str
class AtariWrapper(Game):
def __init__(self, env, discount: float, cvt_string=True):
"""Atari Wrapper
Parameters
----------
env: Any
another env wrapper
discount: float
discount of env
cvt_string: bool
True -> convert the observation into string in the replay buffer
"""
super().__init__(env, env.action_space.n, discount)
self.cvt_string = cvt_string
def legal_actions(self):
return [_ for _ in range(self.env.action_space.n)]
def step(self, action):
observation, reward, done, info = self.env.step(action)
observation = observation.astype(np.uint8)
if self.cvt_string:
observation = arr_to_str(observation)
return observation, reward, done, info
def reset(self, **kwargs):
observation = self.env.reset(**kwargs)
observation = observation.astype(np.uint8)
if self.cvt_string:
observation = arr_to_str(observation)
return observation
def close(self):
self.env.close()
| 27.045455 | 76 | 0.617647 | 142 | 1,190 | 5 | 0.387324 | 0.076056 | 0.033803 | 0.042254 | 0.273239 | 0.273239 | 0.273239 | 0.273239 | 0.273239 | 0.273239 | 0 | 0.002367 | 0.289916 | 1,190 | 43 | 77 | 27.674419 | 0.83787 | 0.159664 | 0 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.130435 | 0.043478 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b35491db9849a7903d7123c4ca5120ff9a494fc3 | 524 | py | Python | src/chat/views.py | rehhg/chat-app | 07e3a1a3b222e18905bfa0f74597cfa5f055e1b0 | [
"Apache-2.0"
] | null | null | null | src/chat/views.py | rehhg/chat-app | 07e3a1a3b222e18905bfa0f74597cfa5f055e1b0 | [
"Apache-2.0"
] | 6 | 2019-12-05T09:30:43.000Z | 2021-09-22T18:03:06.000Z | src/chat/views.py | rehhg/chat-app | 07e3a1a3b222e18905bfa0f74597cfa5f055e1b0 | [
"Apache-2.0"
] | null | null | null | from django.contrib.auth import get_user_model
from django.shortcuts import get_object_or_404
from .models import Chat, Contact
User = get_user_model()
def get_last_10_messages(chat_id):
chat = get_object_or_404(Chat, id=chat_id)
return reversed(chat.messages.order_by('-timestamp').all()[:10])
def get_user_contact(username):
user = get_object_or_404(User, username=username)
return get_object_or_404(Contact, user=user)
def get_current_chat(chat_id):
return get_object_or_404(Chat, id=chat_id)
| 24.952381 | 68 | 0.778626 | 86 | 524 | 4.383721 | 0.325581 | 0.095491 | 0.145889 | 0.185676 | 0.206897 | 0.137931 | 0.137931 | 0.137931 | 0 | 0 | 0 | 0.041485 | 0.125954 | 524 | 20 | 69 | 26.2 | 0.781659 | 0 | 0 | 0 | 0 | 0 | 0.019084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.083333 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
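`get_last_10_messages` orders newest-first, slices ten, then uses `reversed()` to restore chronological order. A plain-Python sketch of the same idiom, using an illustrative in-memory list in place of a Django queryset:

```python
# Take the 10 newest items, then flip them back to oldest-to-newest order,
# mirroring reversed(chat.messages.order_by('-timestamp').all()[:10]).
messages = [{"id": i, "timestamp": i} for i in range(1, 16)]  # 15 messages

newest_first = sorted(messages, key=lambda m: m["timestamp"], reverse=True)
last_10 = list(reversed(newest_first[:10]))   # chronological last 10

assert [m["id"] for m in last_10] == [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```

Slicing before reversing keeps the database (or list) work bounded to ten rows while still presenting messages in reading order.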
b36066fa60b3b8a3a79900f3ccf72771d6fcc450 | 180 | py | Python | functions.py | iamnicoj/pythonplay | f847038524c59a5fe658712a2cf4f904ad52401e | [
"MIT"
] | null | null | null | functions.py | iamnicoj/pythonplay | f847038524c59a5fe658712a2cf4f904ad52401e | [
"MIT"
] | 6 | 2021-03-02T21:28:15.000Z | 2021-03-17T23:35:44.000Z | functions.py | iamnicoj/pythonplay | f847038524c59a5fe658712a2cf4f904ad52401e | [
"MIT"
] | null | null | null | import math
def addition(x, y):
    return x + y

def circle_area(r):
    return math.pi * (r ** 2)

print(f"add 1 + 2 = {addition(1, 2)}")
print(f"area 2 = {circle_area(2)}")
| 15 | 38 | 0.577778 | 33 | 180 | 3.090909 | 0.484848 | 0.039216 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050725 | 0.233333 | 180 | 11 | 39 | 16.363636 | 0.688406 | 0 | 0 | 0 | 0 | 0 | 0.294444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.285714 | 0.714286 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
2fa91e4679e20e123785be90b3eabd53aba8c4b5 | 699 | py | Python | x_rebirth_station_calculator/station_data/ol__technology_megacomplex.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | 1 | 2016-04-17T11:00:22.000Z | 2016-04-17T11:00:22.000Z | x_rebirth_station_calculator/station_data/ol__technology_megacomplex.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | null | null | null | x_rebirth_station_calculator/station_data/ol__technology_megacomplex.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | null | null | null | from x_rebirth_station_calculator.station_data import modules
from x_rebirth_station_calculator.station_data.station_base import Station
names = {'L044': 'Technology Megacomplex',
         'L049': 'Techno-Megaplex'}

smodules = [modules.QTubeYFab(production_method='ar', efficiency=174),
            modules.QTubeYFab(production_method='ar', efficiency=164),
            modules.PPPPlant(production_method='ar', efficiency=157),
            modules.PPPPlant(production_method='ar', efficiency=164),
            modules.EMFacTower(production_method='ar', efficiency=158),
            modules.WarheadForge(production_method='ar', efficiency=158)]

OL_TechnologyMegacomplex = Station(names, smodules)
| 46.6 | 74 | 0.738197 | 75 | 699 | 6.666667 | 0.413333 | 0.192 | 0.216 | 0.336 | 0.658 | 0.534 | 0.16 | 0 | 0 | 0 | 0 | 0.040404 | 0.150215 | 699 | 14 | 75 | 49.928571 | 0.801347 | 0 | 0 | 0 | 0 | 0 | 0.081545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2fae73d2344876612b9d4f846c244a1e9cfde325 | 179 | py | Python | python_BJ/10872_recursion_factorial.py | DongUk-Park/im_coma | b5c1acdea3957dead6d884f5d4b257df3ae931aa | [
"MIT"
] | null | null | null | python_BJ/10872_recursion_factorial.py | DongUk-Park/im_coma | b5c1acdea3957dead6d884f5d4b257df3ae931aa | [
"MIT"
] | 1 | 2022-03-22T02:30:53.000Z | 2022-03-22T02:31:43.000Z | python_BJ/10872_recursion_factorial.py | DongUk-Park/im_coma | b5c1acdea3957dead6d884f5d4b257df3ae931aa | [
"MIT"
] | null | null | null | def factorial(n):
    if n == 0:
        return 1
    elif n == 1:
        return 1
    else:
        return factorial(n-1) * n

example = int(input())
print(factorial(example)) | 14.916667 | 33 | 0.530726 | 25 | 179 | 3.8 | 0.52 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042373 | 0.340782 | 179 | 12 | 34 | 14.916667 | 0.762712 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.444444 | 0.111111 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2fd1561b101a01e25ce5c2b3bb703bdaebce6934 | 2,071 | py | Python | api/python/src/tgdb/impl/atomics.py | TIBCOSoftware/tgdb-client | 92d0c8401003420cacdd90de56db877eae2d02d1 | [
"Apache-2.0"
] | 19 | 2018-01-22T19:45:34.000Z | 2018-11-21T20:51:17.000Z | api/python/src/tgdb/impl/atomics.py | TIBCOSoftware/tgdb-client | 92d0c8401003420cacdd90de56db877eae2d02d1 | [
"Apache-2.0"
] | null | null | null | api/python/src/tgdb/impl/atomics.py | TIBCOSoftware/tgdb-client | 92d0c8401003420cacdd90de56db877eae2d02d1 | [
"Apache-2.0"
] | 8 | 2016-10-29T17:41:06.000Z | 2021-02-25T19:18:49.000Z | """
* Copyright 2019 TIBCO Software Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); You may not use this file except
* in compliance with the License.
* A copy of the License is included in the distribution package with this file.
* You also may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* File name :attr.py
* Created on: 5/15/2019
* Created by: suresh
*
* SVN Id: $Id: atomics.py 3256 2019-06-10 03:31:30Z ssubrama $
*
* This file encapsulates basic Atomic functions needed
"""
from multiprocessing import *
"""
* typecode is one of the following
* 'c' = char
* 'u' = wchar
* 'b' = byte
* 'B' = ubyte
* 'h' = short
* 'H' = ushort
* 'i' = int
* 'I' = uint
* 'l' = long
* 'L' = ulong
* 'q' = longlong
* 'Q' = ulonglong
* 'f' = float
* 'd' = double
"""
class AtomicReference(object):
    ref: Value = None

    def __init__(self, typecode, initial):
        self.ref = Value(typecode, initial, lock=True)

    def increment(self):
        with self.ref.get_lock():
            self.ref.value += 1
            return self.ref.value

    def decrement(self):
        with self.ref.get_lock():
            self.ref.value -= 1
            return self.ref.value

    def get(self):
        with self.ref.get_lock():
            return self.ref.value

    def set(self, v):
        with self.ref.get_lock():
            oldv = self.ref.value
            self.ref.value = v
            return oldv

    # @property
    # def value(self):
    #     with self.ref.get_lock():
    #         return self.ref
    #
    # @value.setter
    # def value(self, value):
    #     with self.ref.get_lock():
    #         self.ref = value
| 24.364706 | 99 | 0.610333 | 285 | 2,071 | 4.4 | 0.501754 | 0.089314 | 0.095694 | 0.066986 | 0.220893 | 0.177033 | 0.177033 | 0.177033 | 0.15311 | 0.15311 | 0 | 0.023364 | 0.276678 | 2,071 | 84 | 100 | 24.654762 | 0.813752 | 0.512796 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.047619 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
2fd621bcac24989af51f35582819e2cd47078101 | 2,398 | py | Python | clima/password_store.py | d3rp/fissle | 770a140e42e6d8f7d55b3211a6ba691d2a915a2d | [
"Apache-2.0"
] | 1 | 2021-05-21T12:54:32.000Z | 2021-05-21T12:54:32.000Z | clima/password_store.py | d3rp/fissle | 770a140e42e6d8f7d55b3211a6ba691d2a915a2d | [
"Apache-2.0"
] | 4 | 2020-03-24T17:37:35.000Z | 2020-12-03T13:22:35.000Z | clima/password_store.py | d3rp/fissle | 770a140e42e6d8f7d55b3211a6ba691d2a915a2d | [
"Apache-2.0"
] | null | null | null | import glob
from pathlib import Path
from subprocess import check_output
PW_STORE_PATH = str(Path.home() / ".password-store")
def get_gpg_id(rel_p, mapped_gpg_ids):
    # print(f'looking for gpg-id in {rel_p}')
    if rel_p not in mapped_gpg_ids:
        if len(rel_p) > 2:
            return get_gpg_id(str(Path(rel_p).parent), mapped_gpg_ids)
        else:
            print(f'Could not find gpg-id for {rel_p}')
    else:
        return mapped_gpg_ids[rel_p]

def map_gpg_id():
    mapped_gpg_ids = {}
    for f in glob.glob(f'{PW_STORE_PATH}/**/.gpg-id', recursive=True):
        p = Path(f)
        with open(f, 'r', encoding='UTF-8') as id_file:
            mapped_gpg_ids.update({str(Path(get_rel_p(f)).parent): id_file.read().strip()})
    return mapped_gpg_ids

def get_rel_p(p):
    return str(p).replace(PW_STORE_PATH + '/', '')

CACHED_IDS = map_gpg_id()

def test_keymapping():
    mapped_gpg_ids = map_gpg_id()
    for f in glob.glob(f'{PW_STORE_PATH}/**/*.gpg', recursive=True):
        rp = get_rel_p(f)
        print(f'{rp} id -> {get_gpg_id(rp, mapped_gpg_ids)}')

def decrypt_file_with_id(gpg_file, gpg_id) -> str:
    cmd = ['gpg', '--quiet', '--recipient', gpg_id, '--decrypt', gpg_file]
    result = ''
    try:
        result = check_output(cmd, universal_newlines=True)
    except:
        pass
    return result.strip()

def decrypt(keyname):
    result = ''
    for f in glob.glob(f'{PW_STORE_PATH}/**/*.gpg', recursive=True):
        this_name = Path(f).stem
        if this_name == keyname:
            rp = get_rel_p(f)
            gpg_id = get_gpg_id(rp, CACHED_IDS)
            result = decrypt_file_with_id(f, gpg_id)
            break
    return result

def get_secrets(configuration_tuple):
    params = [t for t in configuration_tuple._asdict()]
    secrets = {}
    try:
        for p in params:
            secret = decrypt(p)
            if len(secret) > 0:
                secrets.update({p: secret})
    except:
        # TODO: Wanted to keep this lean, as one might not have gpg installed and what not..
        # Need to provide for example a configuration key 'use gpg' that the user can use to
        # enable this feature. This way it won't get in the way of other use cases.
        pass
    return secrets

def test_decrypt():
    keyname = 'pace_eden_lookup_signid'
    print(decrypt(keyname))

# test_keymapping()
# test_decrypt()
| 26.065217 | 92 | 0.619683 | 365 | 2,398 | 3.827397 | 0.30137 | 0.050107 | 0.077309 | 0.021475 | 0.112384 | 0.080888 | 0.080888 | 0.080888 | 0.080888 | 0.080888 | 0 | 0.001684 | 0.257298 | 2,398 | 91 | 93 | 26.351648 | 0.782706 | 0.130108 | 0 | 0.237288 | 0 | 0 | 0.108225 | 0.046657 | 0 | 0 | 0 | 0.010989 | 0 | 1 | 0.135593 | false | 0.050847 | 0.050847 | 0.016949 | 0.305085 | 0.050847 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
2fda5ea88b8dc238f7d09610ce06d4c604e53aa9 | 690 | py | Python | plugins/auth/fps_auth/config.py | jtpio/jupyverse | 2419c7d2cee618b791908f8a63db9f7b18cab78e | [
"MIT"
] | null | null | null | plugins/auth/fps_auth/config.py | jtpio/jupyverse | 2419c7d2cee618b791908f8a63db9f7b18cab78e | [
"MIT"
] | null | null | null | plugins/auth/fps_auth/config.py | jtpio/jupyverse | 2419c7d2cee618b791908f8a63db9f7b18cab78e | [
"MIT"
] | null | null | null | from fps.config import PluginModel, get_config # type: ignore
from fps.hooks import register_config, register_plugin_name # type: ignore
from pydantic import SecretStr
from typing import Literal
class AuthConfig(PluginModel):
    client_id: str = ""
    client_secret: SecretStr = SecretStr("")
    redirect_uri: str = ""
    mode: Literal["noauth", "token", "user"] = "token"
    cookie_secure: bool = (
        False  # FIXME: should default to True, and set to False for tests
    )
    clear_users: bool = False
    login_url: str = "/login_page"

def get_auth_config():
    return get_config(AuthConfig)

c = register_config(AuthConfig)
n = register_plugin_name("authenticator")
| 27.6 | 75 | 0.711594 | 88 | 690 | 5.386364 | 0.590909 | 0.029536 | 0.059072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192754 | 690 | 24 | 76 | 28.75 | 0.850987 | 0.12029 | 0 | 0 | 0 | 0 | 0.072968 | 0 | 0 | 0 | 0 | 0.041667 | 0 | 1 | 0.055556 | false | 0 | 0.222222 | 0.055556 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
2fdcf38e782e03a1735de2215b700a65d3019698 | 61 | py | Python | kmeans_practice/__init__.py | Miaotaizhou/Rush | 22c4883bc6b3ca74cfe4f19374245b0feeca9242 | [
"Unlicense"
] | 1 | 2019-06-02T07:35:04.000Z | 2019-06-02T07:35:04.000Z | matplot/__init__.py | Miaotaizhou/Rush | 22c4883bc6b3ca74cfe4f19374245b0feeca9242 | [
"Unlicense"
] | null | null | null | matplot/__init__.py | Miaotaizhou/Rush | 22c4883bc6b3ca74cfe4f19374245b0feeca9242 | [
"Unlicense"
] | null | null | null | #!/usr/bin/python3
# -*- coding: UTF-8 -*-
__author__ = "Rush" | 20.333333 | 22 | 0.606557 | 8 | 61 | 4.125 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037736 | 0.131148 | 61 | 3 | 23 | 20.333333 | 0.584906 | 0.622951 | 0 | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2fe3e8b0d9b68720308882034355271028be5908 | 511 | py | Python | Others/code_festival/code-formula-2014-quala/b.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | 2 | 2020-06-12T09:54:23.000Z | 2021-05-04T01:34:07.000Z | Others/code_festival/code-formula-2014-quala/b.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | 961 | 2020-06-23T07:26:22.000Z | 2022-03-31T21:34:52.000Z | Others/code_festival/code-formula-2014-quala/b.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
def main():
    a, b = map(int, input().split())
    pins = ['x' for _ in range(10)]
    p = list(map(int, input().split()))

    for pi in p:
        pins[pi - 1] = '.'

    if b > 0:
        q = list(map(int, input().split()))

        for qi in q:
            pins[qi - 1] = 'o'

    print(' '.join(map(str, pins[6:])))
    print(' '.join(map(str, pins[3:6])))
    print(' '.join(map(str, pins[1:3])))
    print(' '.join(map(str, pins[0])))

if __name__ == '__main__':
    main()
| 18.925926 | 43 | 0.457926 | 77 | 511 | 2.922078 | 0.415584 | 0.16 | 0.213333 | 0.266667 | 0.551111 | 0.382222 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.295499 | 511 | 26 | 44 | 19.653846 | 0.591667 | 0.041096 | 0 | 0 | 0 | 0 | 0.030738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.0625 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2ff03c06b3c15b5db327599412c4e9192c9280a3 | 2,900 | py | Python | mnist_models/models.py | kevinnowland/mnist-models | 34c3d5c1aa5a6b394b26bf828fd955e77010ab28 | [
"MIT"
] | 1 | 2021-10-12T19:53:56.000Z | 2021-10-12T19:53:56.000Z | mnist_models/models.py | kevinnowland/mnist-models | 34c3d5c1aa5a6b394b26bf828fd955e77010ab28 | [
"MIT"
] | null | null | null | mnist_models/models.py | kevinnowland/mnist-models | 34c3d5c1aa5a6b394b26bf828fd955e77010ab28 | [
"MIT"
] | null | null | null | """module of classes holding models, pretrained or otherwise. The
SVMModel and LogisticModel classes do not admit any hyperparameter
changes, so any change requires using the basic MnistModel class.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from typing import TypeVar
Model = TypeVar('Model')
class MnistModel:
    """Base class from which model specific classes will be built.

    :param model: an untrained model object to wrap into this class. must
        have fit(X, y) and predict(y) functions.
    :type: sklearn style model object
    """

    def __init__(self, model) -> None:
        """constructor method
        """
        self.__model = model
        self.__accuracy = None
        self.__is_trained = False

    @property
    def model(self) -> Model:
        """the model the class contains

        :return: the model
        :rtype: model object
        """
        return self.__model

    @property
    def accuracy(self) -> float:
        """accuracy on test set once the model has been fit

        :return: test set accuracy
        :rtype: float or None
        """
        return self.__accuracy

    @property
    def is_trained(self) -> bool:
        """whether the model has been trained or not

        :return: whether the model has been trained or not
        :rtype: bool
        """
        return self.__is_trained

    def fit(self, X_train: np.ndarray, y_train: np.ndarray,
            X_test: np.ndarray, y_test: np.ndarray) -> None:
        """fit the model and record accuracy from test set."""
        if self.model is not None:
            self.model.fit(X_train, y_train)
            y_pred = self.model.predict(X_test)
            self.__accuracy = accuracy_score(y_test, y_pred)
            self.__is_trained = True

    def predict(self, X: np.ndarray) -> np.ndarray:
        """predict with the model

        :param X: set to predict on
        :type X: numpy array of form (num_samples, num_features)
        :return: predictions
        :rtype: numpy.array or None if model is not trained
        """
        if self.is_trained:
            return self.model.predict(X)

class SVMModel(MnistModel):
    """class for the SVM model """

    def __init__(self) -> None:
        """constructor
        """
        super().__init__(SVC())

    def __str__(self) -> str:
        """string representation
        """
        return "SVMModel(is_trained={})".format(self.is_trained)

class LogisticModel(MnistModel):
    """class for logistic regression model """

    def __init__(self) -> None:
        """constructor
        """
        super().__init__(LogisticRegression(max_iter=1000))

    def __str__(self) -> str:
        """string representation
        """
        return "LogisticModel(is_trained={})".format(self.is_trained)
| 27.102804 | 73 | 0.621724 | 358 | 2,900 | 4.843575 | 0.307263 | 0.046713 | 0.044983 | 0.025952 | 0.16263 | 0.16263 | 0.130334 | 0.085352 | 0 | 0 | 0 | 0.001918 | 0.281034 | 2,900 | 106 | 74 | 27.358491 | 0.829736 | 0.381724 | 0 | 0.175 | 0 | 0 | 0.035264 | 0.032116 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
2ff8683d54f0baf2f067b5ed99fb95d4bfc93daa | 1,552 | py | Python | sentiment_analyser/tests/task.py | gorkemyontem/SWE-573-2020 | 6a9ca57d294066fcc0db640f45d38d7341754a68 | [
"MIT"
] | null | null | null | sentiment_analyser/tests/task.py | gorkemyontem/SWE-573-2020 | 6a9ca57d294066fcc0db640f45d38d7341754a68 | [
"MIT"
] | 35 | 2020-11-02T17:06:35.000Z | 2021-03-10T07:56:03.000Z | sentiment_analyser/tests/task.py | gorkemyontem/SWE-573-2020 | 6a9ca57d294066fcc0db640f45d38d7341754a68 | [
"MIT"
] | 1 | 2021-02-02T14:38:27.000Z | 2021-02-02T14:38:27.000Z | from django.test import TestCase
from accounts.models import CustomUser
from analyser.models import SentenceAnalysis, SubmissionAnalysis, CommentAnalysis, TagMeAnalysis, TagMeSentenceAnalysis
from analyser.tasks import one_time_schedules, polarity_analysis_submission_task, polarity_analysis_comment_task, tagme_analysis_sentences_task
from scraper.models import Subreddit, AuthorRedditor, Submission, Comments
from scraper.service import RedditAuth, ScraperService, RedditModelService
from django_q.tasks import async_task
from django_q.models import Schedule
from django.utils import timezone
import datetime
import pprint
class TestTask(TestCase):
    def test_one_time_schedules(self):
        one_time_schedules()
        schedule1 = Schedule.objects.get(func='scraper.tasks.crawl_subreddits')
        schedule2 = Schedule.objects.get(func='analyser.tasks.polarity_analysis_submission_task')
        schedule3 = Schedule.objects.get(func='analyser.tasks.polarity_analysis_comment_task')
        self.assertEqual(str(schedule1), "scraper.tasks.crawl_subreddits")
        self.assertEqual(str(schedule2), "analyser.tasks.polarity_analysis_submission_task")
        self.assertEqual(str(schedule3), "analyser.tasks.polarity_analysis_comment_task")

    def test_polarity_analysis_submission_task(self):
        polarity_analysis_submission_task(1, 1)

    def test_polarity_analysis_comment_task(self):
        polarity_analysis_comment_task(1, 1)

    def test_tagme_analysis_sentences_task(self):
        tagme_analysis_sentences_task(1, 1)
| 47.030303 | 143 | 0.808634 | 185 | 1,552 | 6.491892 | 0.297297 | 0.133222 | 0.108243 | 0.124896 | 0.257286 | 0.174854 | 0.084929 | 0.084929 | 0 | 0 | 0 | 0.008811 | 0.122423 | 1,552 | 32 | 144 | 48.5 | 0.872981 | 0 | 0 | 0 | 0 | 0 | 0.158607 | 0.158607 | 0 | 0 | 0 | 0 | 0.115385 | 1 | 0.153846 | false | 0 | 0.423077 | 0 | 0.615385 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
6425f2c944c2381e7dfa5825c89f2ef3d64a2cae | 176 | py | Python | Strings/task1.py | poojavaibhavsahu/Pooja_Python | 58122bfa8586883145042b11fe1cc013c803ab4f | [
"bzip2-1.0.6"
] | null | null | null | Strings/task1.py | poojavaibhavsahu/Pooja_Python | 58122bfa8586883145042b11fe1cc013c803ab4f | [
"bzip2-1.0.6"
] | null | null | null | Strings/task1.py | poojavaibhavsahu/Pooja_Python | 58122bfa8586883145042b11fe1cc013c803ab4f | [
"bzip2-1.0.6"
] | null | null | null | mystr="Python is a multipurpose and simply learning langauge"
for i in mystr:
    print(i, end=" ")
print()
print(mystr.find("simply"))
print(mystr[0:11]+ " programming")
| 14.666667 | 62 | 0.681818 | 26 | 176 | 4.615385 | 0.692308 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.164773 | 176 | 11 | 63 | 16 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0.417143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
6436e8075fa6e700f03151fbf03a3ac9c13938d5 | 2,418 | py | Python | pirates/pvp/SiegeManagerBase.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 3 | 2021-02-25T06:38:13.000Z | 2022-03-22T07:00:15.000Z | pirates/pvp/SiegeManagerBase.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | null | null | null | pirates/pvp/SiegeManagerBase.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 1 | 2021-02-25T06:38:17.000Z | 2021-02-25T06:38:17.000Z | # uncompyle6 version 3.2.0
# Python bytecode 2.4 (62061)
# Decompiled from: Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)]
# Embedded file name: pirates.pvp.SiegeManagerBase
from pirates.piratesbase import PLocalizer
class SiegeManagerBase:
    __module__ = __name__
    ANNOUNCER_ZONE = 555

    def __init__(self):
        self._useIslandRegen = False
        self._useRepairSpots = False
        self._useRepairKit = False

    def setUseIslandRegen(self, useIslandRegen):
        if self._useIslandRegen:
            not useIslandRegen and self._disableUseIslandRegen()
        else:
            if not self._useIslandRegen and useIslandRegen:
                self._enableUseIslandRegen()
        self._useIslandRegen = useIslandRegen

    def _enableUseIslandRegen(self):
        messenger.send(self.getUseIslandRegenEvent(), [True])

    def _disableUseIslandRegen(self):
        messenger.send(self.getUseIslandRegenEvent(), [False])

    def getUseIslandRegenEvent(self):
        return 'useIslandRegen-%s' % self.doId

    def getUseIslandRegen(self):
        return self._useIslandRegen

    def setUseRepairSpots(self, useRepairSpots):
        if self._useRepairSpots:
            not useRepairSpots and self._disableRepairSpots()
        else:
            if not self._useRepairSpots and useRepairSpots:
                self._enableRepairSpots()
        self._useRepairSpots = useRepairSpots

    def _enableRepairSpots(self):
        messenger.send(self.getUseRepairSpotsEvent(), [True])

    def _disableRepairSpots(self):
        messenger.send(self.getUseRepairSpotsEvent(), [False])

    def getUseRepairSpotsEvent(self):
        return 'useRepairSpots-%s' % self.doId

    def getUseRepairSpots(self):
        return self._useRepairSpots

    def setUseRepairKit(self, useRepairKit):
        if self._useRepairKit:
            not useRepairKit and self._disableRepairKit()
        else:
            if not self._useRepairKit and useRepairKit:
                self._enableRepairKit()
        self._useRepairKit = useRepairKit

    def _enableRepairKit(self):
        messenger.send(self.getUseRepairKitEvent(), [True])

    def _disableRepairKit(self):
        messenger.send(self.getUseRepairKitEvent(), [False])

    def getUseRepairKitEvent(self):
        return 'useRepairKit-%s' % self.doId

    def getUseRepairKit(self):
        return self._useRepairKit | 32.675676 | 104 | 0.682796 | 224 | 2,418 | 7.191964 | 0.325893 | 0.067039 | 0.063315 | 0.078212 | 0.157666 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025876 | 0.232837 | 2,418 | 74 | 105 | 32.675676 | 0.842588 | 0.084367 | 0 | 0.056604 | 0 | 0 | 0.022172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.301887 | false | 0 | 0.018868 | 0.113208 | 0.490566 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
ff26022a0d5a9ce7ac9b056578febf2cfa016805 | 1,475 | py | Python | nixio/__init__.py | hkchekc/nixpy | 632dae8ab46a770111b529c2c8ee5c4acb87aaab | [
"BSD-3-Clause"
] | null | null | null | nixio/__init__.py | hkchekc/nixpy | 632dae8ab46a770111b529c2c8ee5c4acb87aaab | [
"BSD-3-Clause"
] | null | null | null | nixio/__init__.py | hkchekc/nixpy | 632dae8ab46a770111b529c2c8ee5c4acb87aaab | [
"BSD-3-Clause"
] | 1 | 2021-06-09T11:31:38.000Z | 2021-06-09T11:31:38.000Z | # -*- coding: utf-8 -*-
# Copyright © 2014, German Neuroinformatics Node (G-Node)
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted under the terms of the BSD License. See
# LICENSE file in the root of the Project.
# NIX object classes
from .file import File
from .block import Block
from .group import Group
from .data_array import DataArray
from .tag import Tag
from .multi_tag import MultiTag
from .source import Source
from .section import Section, S
from .property import Property, OdmlType
from .feature import Feature
from .data_frame import DataFrame
from .dimensions import SampledDimension, RangeDimension, SetDimension
from . import validator
# enums
from .file import FileMode
from .data_array import DataSliceMode
from .datatype import DataType
from .dimension_type import DimensionType
from .link_type import LinkType
from .compression import Compression
# version
from .info import VERSION
__all__ = ("File", "Block", "Group", "DataArray", "DataFrame", "Tag",
           "MultiTag", "Source", "Section", "S", "Feature", "Property",
           "OdmlType", "SampledDimension", "RangeDimension", "SetDimension",
           "FileMode", "DataSliceMode", "DataType", "DimensionType",
           "LinkType", "Compression", "validator")

__author__ = ('Christian Kellner, Adrian Stoewer, Andrey Sobolev, Jan Grewe, '
              'Balint Morvai, Achilleas Koutsou')
__version__ = VERSION
| 33.522727 | 78 | 0.738305 | 177 | 1,475 | 6.056497 | 0.485876 | 0.022388 | 0.026119 | 0.035448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004078 | 0.168814 | 1,475 | 43 | 79 | 34.302326 | 0.869494 | 0.208136 | 0 | 0 | 0 | 0 | 0.247405 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
ff3463474b813d6e888b77f0d45642a0ba6df6cd | 521 | py | Python | libtcd/compat.py | dairiki/python-libtcd | c9d1fd3f30e3088f125bf05fb592f30daf9de51d | [
"BSD-3-Clause"
] | null | null | null | libtcd/compat.py | dairiki/python-libtcd | c9d1fd3f30e3088f125bf05fb592f30daf9de51d | [
"BSD-3-Clause"
] | null | null | null | libtcd/compat.py | dairiki/python-libtcd | c9d1fd3f30e3088f125bf05fb592f30daf9de51d | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
"""
from __future__ import absolute_import
from six import text_type
try:
    from collections import OrderedDict  # noqa
except ImportError:  # pragma: NO COVER
    from ordereddict import OrderedDict  # noqa

def bytes_(s, encoding='latin-1', errors='strict'):
    """ If ``s`` is an instance of ``text_type``, return
    ``s.encode(encoding, errors)``, otherwise return ``s``"""
    if isinstance(s, text_type):
        return s.encode(encoding, errors)
    return s
| 26.05 | 61 | 0.639155 | 65 | 521 | 4.984615 | 0.553846 | 0.08642 | 0.12963 | 0.092593 | 0.216049 | 0.216049 | 0.216049 | 0 | 0 | 0 | 0 | 0.004975 | 0.228407 | 521 | 19 | 62 | 27.421053 | 0.800995 | 0.293666 | 0 | 0 | 0 | 0 | 0.037464 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.5 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
ff600fe0862f46bd480b7c108bdd6917a0602990 | 533 | py | Python | molsysmt/item/openmm_PDBFile/to_mdtraj_Topology.py | uibcdf/MolModMTs | 4f6b6f671a9fa3e73008d1e9c48686d5f20a6573 | [
"MIT"
] | null | null | null | molsysmt/item/openmm_PDBFile/to_mdtraj_Topology.py | uibcdf/MolModMTs | 4f6b6f671a9fa3e73008d1e9c48686d5f20a6573 | [
"MIT"
] | null | null | null | molsysmt/item/openmm_PDBFile/to_mdtraj_Topology.py | uibcdf/MolModMTs | 4f6b6f671a9fa3e73008d1e9c48686d5f20a6573 | [
"MIT"
] | null | null | null | from molsysmt._private.exceptions import *
from molsysmt._private.digestion import *
def to_mdtraj_Topology(item, atom_indices='all', syntaxis='MolSysMT'):

    if check:
        digest_item(item, 'openmm.PDBFile')
        atom_indices = digest_atom_indices(atom_indices)

    from .to_openmm_Topology import to_openmm_Topology
    from ..openmm_Topology import to_mdtraj_Topology

    tmp_item = to_openmm_Topology(item, atom_indices=atom_indices)
    tmp_item = openmm_Topology_to_mdtraj_Topology(tmp_item)

    return tmp_item
| 28.052632 | 70 | 0.772983 | 71 | 533 | 5.394366 | 0.309859 | 0.172324 | 0.125326 | 0.120104 | 0.120104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155722 | 533 | 18 | 71 | 29.611111 | 0.851111 | 0 | 0 | 0 | 0 | 0 | 0.046992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
ff84e0acd9eeee88b162afdeb1c68b61e29bbceb | 15,667 | py | Python | test_nmd.py | jessicalettes/nmd-exons | 2f014c952ec17b4b3936ff504b6406abc2cab4e4 | [
"MIT"
] | null | null | null | test_nmd.py | jessicalettes/nmd-exons | 2f014c952ec17b4b3936ff504b6406abc2cab4e4 | [
"MIT"
] | null | null | null | test_nmd.py | jessicalettes/nmd-exons | 2f014c952ec17b4b3936ff504b6406abc2cab4e4 | [
"MIT"
] | null | null | null | import gffutils
import pandas.util.testing as pdt
import pandas as pd
import pytest
@pytest.fixture
def database():
return '/Users/rhythmicstar/projects/exon_evolution//gencode.v19.' \
'annotation.outrigger.nmdtest.gtf.db'
@pytest.fixture
def exon_ids():
return ('exon:chr10:101510126-101510153:+',
'exon:chr7:33075546-33075600:-',
'exon:chr12:42778742-42778798:+',
'exon:chr8:29931393-29931571:-',
'exon:chr3:186502353-186502486:+',
'exon:chr3:42661508-42661535:+',
'exon:chr14:20785953-20786133:-')
@pytest.fixture
def single_exon_id():
return 'exon:chr3:186502751-186502890:+'
@pytest.fixture
def is_exon_nmd():
return 'Exclusion causes NMD (annotated)', \
'Inclusion causes NMD (annotated)', \
'Exclusion causes NMD (found stop codon)', \
'Inclusion causes NMD (found stop codon)', \
'Splicing not known to cause NMD', \
'Splicing not known to cause NMD', \
'Splicing not known to cause NMD'
@pytest.fixture()
def stop_codon_exon_ids():
return [('stop_codon:chr3:186506106-186506108:+',
'exon:chr3:186506099-186506205:+', True),
('stop_codon:chr17:44101535-44101537:+',
'exon:chr17:44101322-44101549:+', False)]
@pytest.fixture()
def parent_transcripts_of_exon():
return {'ENST00000323963.5', 'ENST00000498746.1', 'ENST00000440191.2',
'ENST00000425053.1'}
@pytest.fixture()
def all_transcripts_of_exon():
return {'ENST00000497177.1', 'ENST00000429589.1', 'ENST00000486805.1',
'ENST00000466362.1', 'ENST00000441007.1', 'ENST00000475653.1',
'ENST00000465792.1', 'ENST00000440191.2', 'ENST00000445596.1',
'ENST00000465222.1', 'ENST00000425053.1', 'ENST00000492144.1',
'ENST00000494445.1', 'ENST00000467585.1', 'ENST00000498746.1',
'ENST00000461021.1', 'ENST00000465032.1', 'ENST00000443963.1',
'ENST00000426808.1', 'ENST00000495049.1', 'ENST00000491473.1',
'ENST00000323963.5', 'ENST00000496382.1', 'ENST00000468362.1',
'ENST00000475409.1', 'ENST00000465267.1', 'ENST00000485101.1',
'ENST00000356531.5'}
@pytest.fixture()
def inc_trans_without_exon_ids():
    return ['start_codon:chr11:57480091-57480093:+',
            'CDS:chr11:57480091-57480279:+:0',
            'exon:chr11:57505080-57505140:+',
            'exon:chr11:57505826-57505902:+',
            'exon:chr11:57506136-57506242:+',
            'exon:chr11:57506446-57506511:+',
            'exon:chr11:57506603-57506732:+']


@pytest.fixture()
def inc_trans_with_exon_ids():
    return ['start_codon:chr11:57480091-57480093:+',
            'CDS:chr11:57480091-57480279:+:0',
            'exon:chr11:57505080-57505140:+',
            'exon:chr11:57505385-57505498:+',
            'exon:chr11:57505826-57505902:+',
            'exon:chr11:57506136-57506242:+',
            'exon:chr11:57506446-57506511:+',
            'exon:chr11:57506603-57506732:+']


@pytest.fixture()
def exc_trans_without_exon_ids():
    return ['start_codon:chr12:42729705-42729707:+',
            'CDS:chr12:42729705-42729776:+:0',
            'exon:chr12:42729685-42729776:+',
            'exon:chr12:42745687-42745851:+',
            'exon:chr12:42748963-42749024:+',
            'exon:chr12:42768665-42768876:+',
            'exon:chr12:42781258-42781337:+',
            'exon:chr12:42787372-42787491:+',
            'exon:chr12:42792656-42792796:+']


@pytest.fixture()
def exc_trans_with_exon_ids():
    return ['start_codon:chr12:42729705-42729707:+',
            'CDS:chr12:42729705-42729776:+:0',
            'exon:chr12:42729685-42729776:+',
            'exon:chr12:42745687-42745851:+',
            'exon:chr12:42748963-42749024:+',
            'exon:chr12:42768665-42768876:+',
            'exon:chr12:42778742-42778798:+',
            'exon:chr12:42781258-42781337:+',
            'exon:chr12:42787372-42787491:+',
            'exon:chr12:42792656-42792796:+']
@pytest.fixture()
def true_exc_nmd():
    return True


@pytest.fixture()
def true_inc_nmd():
    return True


@pytest.fixture()
def strand_true_exc_nmd():
    return '+'


@pytest.fixture()
def strand_true_inc_nmd():
    return '+'
@pytest.fixture()
def true_dict():
    return {
        'ENST00000443963.1': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501386-186501428:+',
            'exon:chr3:186502218-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505284-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506914-186507686:+'],
        'ENST00000441007.1': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186500994-186501139:+',
            'exon:chr3:186501237-186501428:+',
            'exon:chr3:186502221-186502266:+',
            'exon:chr3:186502353-186502448:+'],
        'ENST00000498746.1': [
            'start_codon:chr3:186502243-186502245:+',
            'CDS:chr3:186502243-186502266:+:0',
            'exon:chr3:186501992-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186502751-186502890:+',
            'exon:chr3:186503672-186503702:+'],
        'ENST00000429589.1': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501366-186501428:+',
            'exon:chr3:186502221-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505268-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506914-186506929:+'],
        'ENST00000323963.5': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501336-186501428:+',
            'exon:chr3:186502221-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186502751-186502890:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505284-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506914-186507689:+'],
        'ENST00000445596.1': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501094-186501428:+',
            'exon:chr3:186502221-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186502751-186502836:+'],
        'ENST00000425053.1': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501366-186501428:+',
            'exon:chr3:186502221-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186502751-186502890:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505284-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506099-186506205:+',
            'exon:chr3:186506914-186507670:+'],
        'ENST00000440191.2': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501366-186501428:+',
            'exon:chr3:186502218-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186502751-186502890:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505284-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506914-186507686:+'],
        'ENST00000426808.1': [
            'start_codon:chr3:186501400-186501402:+',
            'CDS:chr3:186501400-186501428:+:0',
            'exon:chr3:186501361-186501428:+',
            'exon:chr3:186502221-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505284-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506914-186507686:+'],
        'ENST00000356531.5': [
            'start_codon:chr3:186502423-186502425:+',
            'CDS:chr3:186502423-186502485:+:0',
            'exon:chr3:186501386-186501428:+',
            'exon:chr3:186502218-186502266:+',
            'exon:chr3:186502353-186502485:+',
            'exon:chr3:186503672-186503840:+',
            'exon:chr3:186503953-186504062:+',
            'exon:chr3:186504291-186504434:+',
            'exon:chr3:186504916-186505053:+',
            'exon:chr3:186505284-186505373:+',
            'exon:chr3:186505592-186505671:+',
            'exon:chr3:186506914-186507683:+']}
class TestNMDExons(object):

    @pytest.fixture
    def nmd_exons(self, database, exon_ids):
        from nmd import NMDExons
        return NMDExons(database, exon_ids)

    @pytest.fixture
    def inc_trans_without_exon(self, nmd_exons, inc_trans_without_exon_ids):
        return [nmd_exons.db[x] for x in inc_trans_without_exon_ids]

    @pytest.fixture
    def inc_trans_with_exon(self, nmd_exons, inc_trans_with_exon_ids):
        return [nmd_exons.db[x] for x in inc_trans_with_exon_ids]

    @pytest.fixture
    def exc_trans_without_exon(self, nmd_exons, exc_trans_without_exon_ids):
        return [nmd_exons.db[x] for x in exc_trans_without_exon_ids]

    @pytest.fixture
    def exc_trans_with_exon(self, nmd_exons, exc_trans_with_exon_ids):
        return [nmd_exons.db[x] for x in exc_trans_with_exon_ids]

    def test___init__(self, nmd_exons, exon_ids):
        assert isinstance(nmd_exons.db, gffutils.FeatureDB)
        pdt.assert_equal(nmd_exons.exon_ids, exon_ids)

    def test_find_nmd_exons(self, nmd_exons, is_exon_nmd, exon_ids):
        test = nmd_exons.find_nmd_exons()
        true = pd.Series(is_exon_nmd, index=exon_ids)
        pdt.assert_series_equal(test, true)

    def test__is_this_exon_nmd(self, is_exon_nmd, exon_ids, nmd_exons):
        for exon_id, true in zip(exon_ids, is_exon_nmd):
            test = nmd_exons._is_this_exon_nmd(exon_id)
            assert test == true

    def test__get_transcripts_with_exon(self, database, single_exon_id,
                                        nmd_exons,
                                        parent_transcripts_of_exon):
        db = gffutils.FeatureDB(database)
        exon = db[single_exon_id]
        test = nmd_exons._get_transcripts_with_exon(exon)
        true = parent_transcripts_of_exon
        assert test == true

    def test__get_all_transcripts_overlapping_exon(self, database,
                                                   single_exon_id, nmd_exons,
                                                   all_transcripts_of_exon):
        db = gffutils.FeatureDB(database)
        exon = db[single_exon_id]
        test = nmd_exons._get_all_transcripts_overlapping_exon(exon)
        true = all_transcripts_of_exon
        assert test == true

    def test__create_dict(self, all_transcripts_of_exon, strand_true_exc_nmd,
                          nmd_exons, true_dict):
        test = nmd_exons._get_exons_from_transcripts(all_transcripts_of_exon,
                                                     strand_true_exc_nmd)
        test = dict((key, [v.id for v in values])
                    for key, values in test.items())
        true = true_dict
        pdt.assert_dict_equal(test, true)

    def test__inclusion_nmd(self, inc_trans_without_exon, inc_trans_with_exon,
                            strand_true_inc_nmd, nmd_exons, true_inc_nmd):
        test = nmd_exons._inclusion_nmd(inc_trans_without_exon,
                                        inc_trans_with_exon,
                                        strand_true_inc_nmd)
        true = true_inc_nmd
        assert test == true

    def test__exclusion_nmd(self, exc_trans_without_exon, exc_trans_with_exon,
                            strand_true_exc_nmd, nmd_exons, true_exc_nmd):
        test = nmd_exons._exclusion_nmd(exc_trans_without_exon,
                                        exc_trans_with_exon,
                                        strand_true_exc_nmd)
        true = true_exc_nmd
        assert test == true

    def test_nmd_stop_codon_nmd(self, database, stop_codon_exon_ids,
                                nmd_exons):
        for stop_codon_exon_pair in stop_codon_exon_ids:
            db = gffutils.FeatureDB(database)
            stop = stop_codon_exon_pair[0]
            stop_codon = db[stop]
            exon = db[stop_codon_exon_pair[1]]
            true = stop_codon_exon_pair[2]
            test = nmd_exons.stop_codon_nmd(exon, stop_codon)
            assert test == true
import os
import functools

from mako.lookup import TemplateLookup

DIR = os.path.dirname(os.path.abspath(__file__))
templateDirs = [os.path.join(DIR, 'templates')]
templateLookup = TemplateLookup(directories=templateDirs)


def serveTemplate(path):
    def deco(func):
        @functools.wraps(func)
        def _serve(*args, **kwargs):
            templ = templateLookup.get_template(path)
            return templ.render(**func(*args, **kwargs))
        return _serve
    return deco
# This file will load only if OPi.GPIO fails because of a Dev environment.
# The basic idea is that when a pin is made HIGH or LOW it is written into a
# file, and then when the input is checked it reads the file.

from . import extendJSON as JSON

# Values
LOW = 0
HIGH = 1

# Modes
BCM = 11
BOARD = 10

# Pull
PUD_OFF = 20
PUD_DOWN = 21
PUD_UP = 22

# Edges
RISING = 31
FALLING = 32
BOTH = 33

# Functions
OUT = 0
IN = 1
SERIAL = 40
SPI = 41
I2C = 42
HARD_PWM = 43
UNKNOWN = -1


def setwarnings(a): pass


def setmode(a): pass


def getmode(): return BCM


def setup(channel, state, initial=0, pull_up_down=None): pass


def output(channel, state):
    """
    To set the output state of a GPIO pin:
    :param channel:
    :return:
    """
    # Try to open the JSON file containing the pin dict, falling back to an
    # empty dict if the file is missing or unreadable.
    try:
        pins = JSON.getJSONfile('pins.json')
        pins = {int(k): v for k, v in pins.items()}
    except EnvironmentError:
        pins = {}
    pins[channel] = state
    JSON.writeJSONfile('pins.json', pins)
    return state


def input(channel):
    """
    To read the value of a GPIO pin:
    :param channel:
    :return:
    """
    # Try to open the JSON file containing the pin dict.
    try:
        pins = JSON.getJSONfile('pins.json')
        pins = {int(k): v for k, v in pins.items()}
    except EnvironmentError:
        pins = {}
    if channel not in pins:
        return LOW
    return pins[channel]


def cleanup(a=None): pass


def wait_for_edge(channel, edge): pass


def add_event_detect(channel, edge, callback=None, bouncetime=None): pass


def add_event_callback(channel, callback=None): pass


def remove_event_detect(channel): pass


def event_detected(channel): return False
def gpio_function(channel): return OUT
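The header comment above describes the mechanism: pin states round-trip through a JSON file so code can run without real GPIO hardware. The sketch below reimplements that idea standalone with only the standard library; `write_pin`, `read_pin`, and `PIN_FILE` are hypothetical names for illustration, not part of this module (which delegates to `extendJSON` instead).

```python
import json
import os
import tempfile

# Hypothetical standalone demo of the file-backed pin-state mechanism.
PIN_FILE = os.path.join(tempfile.gettempdir(), 'pins_demo.json')


def _load_pins():
    """Read the pin dict from disk, defaulting to empty on any failure."""
    try:
        with open(PIN_FILE) as f:
            return {int(k): v for k, v in json.load(f).items()}
    except (OSError, ValueError):
        return {}


def write_pin(channel, state):
    """Record a pin's state in the JSON file, like output() above."""
    pins = _load_pins()
    pins[channel] = state
    with open(PIN_FILE, 'w') as f:
        json.dump(pins, f)
    return state


def read_pin(channel):
    """Read a pin's state back, defaulting to 0 (LOW), like input() above."""
    return _load_pins().get(channel, 0)
```

A quick round-trip (`write_pin(7, 1)` then `read_pin(7)`) shows why JSON keys must be converted back to `int`: `json.dump` stringifies dict keys, so without the `int(k)` conversion a written channel would never be found on read.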
'''
Created on 1.12.2016
@author: Darren
'''
'''
Given a binary search tree and a node in it, find the in-order successor of that node in the BST.
Note: If the given node has no in-order successor in the tree, return null.
'''
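The file ends after the problem statement, with no implementation shown here. A minimal sketch of the standard approach follows: walk down from the root, remembering the last node whose value exceeds the target's, which takes O(h) time for a tree of height h. The `TreeNode` class is an assumed helper, not part of the original file.

```python
class TreeNode:
    """Assumed binary-tree node definition."""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None


def inorder_successor(root, p):
    """Return the in-order successor of node p in the BST, or None."""
    succ = None
    node = root
    while node:
        if p.val < node.val:
            # node is a candidate successor; try to find a smaller one.
            succ = node
            node = node.left
        else:
            # node.val <= p.val, so the successor must be to the right.
            node = node.right
    return succ
```

For the tree 2/(1,3), the successor of 1 is 2, and 3 (the maximum) has no successor, so the function returns None for it.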
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for emboss.front_end.symbol_resolver."""
import unittest
from compiler.front_end import glue
from compiler.front_end import symbol_resolver
from compiler.front_end import test_util
from compiler.util import error
_HAPPY_EMB = """
struct Foo:
  0 [+4] UInt uint_field
  4 [+4] Bar bar_field
  8 [+16] UInt[4] array_field

struct Bar:
  0 [+4] Qux bar

enum Qux:
  ABC = 1
  DEF = 2

struct FieldRef:
  n-4 [+n] UInt:8[n] data
  offset-4 [+offset] UInt:8[offset] data2
  0 [+4] UInt offset (n)

struct VoidLength:
  0 [+10] UInt:8[] ten_bytes

enum Quux:
  ABC = 1
  DEF = ABC

struct UsesParameter(x: UInt:8):
  0 [+x] UInt:8[] block
"""
class ResolveSymbolsTest(unittest.TestCase):
  """Tests for symbol_resolver.resolve_symbols()."""

  def _construct_ir_multiple(self, file_dict, primary_emb_name):
    ir, unused_debug_info, errors = glue.parse_emboss_file(
        primary_emb_name,
        test_util.dict_file_reader(file_dict),
        stop_before_step="resolve_symbols")
    assert not errors
    return ir

  def _construct_ir(self, emb_text, name="happy.emb"):
    return self._construct_ir_multiple({name: emb_text}, name)
  def test_struct_field_atomic_type_resolution(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    struct_ir = ir.module[0].type[0].structure
    atomic_field1_reference = struct_ir.field[0].type.atomic_type.reference
    self.assertEqual(atomic_field1_reference.canonical_name.object_path,
                     ["UInt"])
    self.assertEqual(atomic_field1_reference.canonical_name.module_file, "")
    atomic_field2_reference = struct_ir.field[1].type.atomic_type.reference
    self.assertEqual(atomic_field2_reference.canonical_name.object_path,
                     ["Bar"])
    self.assertEqual(atomic_field2_reference.canonical_name.module_file,
                     "happy.emb")

  def test_struct_field_enum_type_resolution(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    struct_ir = ir.module[0].type[1].structure
    atomic_field_reference = struct_ir.field[0].type.atomic_type.reference
    self.assertEqual(atomic_field_reference.canonical_name.object_path,
                     ["Qux"])
    self.assertEqual(atomic_field_reference.canonical_name.module_file,
                     "happy.emb")

  def test_struct_field_array_type_resolution(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    array_field_type = ir.module[0].type[0].structure.field[2].type.array_type
    array_field_reference = array_field_type.base_type.atomic_type.reference
    self.assertEqual(array_field_reference.canonical_name.object_path,
                     ["UInt"])
    self.assertEqual(array_field_reference.canonical_name.module_file, "")

  def test_inner_type_resolution(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    array_field_type = ir.module[0].type[0].structure.field[2].type.array_type
    array_field_reference = array_field_type.base_type.atomic_type.reference
    self.assertEqual(array_field_reference.canonical_name.object_path,
                     ["UInt"])
    self.assertEqual(array_field_reference.canonical_name.module_file, "")
  def test_struct_field_resolution_in_expression_in_location(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    struct_ir = ir.module[0].type[3].structure
    field0_loc = struct_ir.field[0].location
    abbreviation_reference = field0_loc.size.field_reference.path[0]
    self.assertEqual(abbreviation_reference.canonical_name.object_path,
                     ["FieldRef", "offset"])
    self.assertEqual(abbreviation_reference.canonical_name.module_file,
                     "happy.emb")
    field0_start_left = field0_loc.start.function.args[0]
    nested_abbreviation_reference = field0_start_left.field_reference.path[0]
    self.assertEqual(nested_abbreviation_reference.canonical_name.object_path,
                     ["FieldRef", "offset"])
    self.assertEqual(nested_abbreviation_reference.canonical_name.module_file,
                     "happy.emb")
    field1_loc = struct_ir.field[1].location
    direct_reference = field1_loc.size.field_reference.path[0]
    self.assertEqual(direct_reference.canonical_name.object_path,
                     ["FieldRef", "offset"])
    self.assertEqual(direct_reference.canonical_name.module_file, "happy.emb")
    field1_start_left = field1_loc.start.function.args[0]
    nested_direct_reference = field1_start_left.field_reference.path[0]
    self.assertEqual(nested_direct_reference.canonical_name.object_path,
                     ["FieldRef", "offset"])
    self.assertEqual(nested_direct_reference.canonical_name.module_file,
                     "happy.emb")
  def test_struct_field_resolution_in_expression_in_array_length(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    struct_ir = ir.module[0].type[3].structure
    field0_array_type = struct_ir.field[0].type.array_type
    field0_array_element_count = field0_array_type.element_count
    abbreviation_reference = field0_array_element_count.field_reference.path[0]
    self.assertEqual(abbreviation_reference.canonical_name.object_path,
                     ["FieldRef", "offset"])
    self.assertEqual(abbreviation_reference.canonical_name.module_file,
                     "happy.emb")
    field1_array_type = struct_ir.field[1].type.array_type
    direct_reference = field1_array_type.element_count.field_reference.path[0]
    self.assertEqual(direct_reference.canonical_name.object_path,
                     ["FieldRef", "offset"])
    self.assertEqual(direct_reference.canonical_name.module_file, "happy.emb")

  def test_struct_parameter_resolution(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    struct_ir = ir.module[0].type[6].structure
    size_ir = struct_ir.field[0].location.size
    self.assertTrue(size_ir.HasField("field_reference"))
    self.assertEqual(
        size_ir.field_reference.path[0].canonical_name.object_path,
        ["UsesParameter", "x"])
  def test_enum_value_resolution_in_expression_in_enum_field(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    enum_ir = ir.module[0].type[5].enumeration
    value_reference = enum_ir.value[1].value.constant_reference
    self.assertEqual(value_reference.canonical_name.object_path,
                     ["Quux", "ABC"])
    self.assertEqual(value_reference.canonical_name.module_file, "happy.emb")

  def test_symbol_resolution_in_expression_in_void_array_length(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    struct_ir = ir.module[0].type[4].structure
    array_type = struct_ir.field[0].type.array_type
    # The symbol resolver should ignore void fields.
    self.assertEqual("automatic", array_type.WhichOneof("size"))

  def test_name_definitions_have_correct_canonical_names(self):
    ir = self._construct_ir(_HAPPY_EMB)
    self.assertEqual([], symbol_resolver.resolve_symbols(ir))
    foo_name = ir.module[0].type[0].name
    self.assertEqual(foo_name.canonical_name.object_path, ["Foo"])
    self.assertEqual(foo_name.canonical_name.module_file, "happy.emb")
    uint_field_name = ir.module[0].type[0].structure.field[0].name
    self.assertEqual(uint_field_name.canonical_name.object_path,
                     ["Foo", "uint_field"])
    self.assertEqual(uint_field_name.canonical_name.module_file, "happy.emb")
    foo_name = ir.module[0].type[2].name
    self.assertEqual(foo_name.canonical_name.object_path, ["Qux"])
    self.assertEqual(foo_name.canonical_name.module_file, "happy.emb")
  def test_duplicate_type_name(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+4] UInt field\n"
                            "struct Foo:\n"
                            "  0 [+4] UInt bar\n", "duplicate_type.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    self.assertEqual([
        [error.error("duplicate_type.emb",
                     ir.module[0].type[1].name.source_location,
                     "Duplicate name 'Foo'"),
         error.note("duplicate_type.emb",
                    ir.module[0].type[0].name.source_location,
                    "Original definition")]
    ], errors)

  def test_duplicate_field_name_in_struct(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+4] UInt field\n"
                            "  4 [+4] UInt field\n", "duplicate_field.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    struct = ir.module[0].type[0].structure
    self.assertEqual([[
        error.error("duplicate_field.emb",
                    struct.field[1].name.source_location,
                    "Duplicate name 'field'"),
        error.note("duplicate_field.emb",
                   struct.field[0].name.source_location,
                   "Original definition")
    ]], errors)

  def test_duplicate_abbreviation_in_struct(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+4] UInt field1 (f)\n"
                            "  4 [+4] UInt field2 (f)\n",
                            "duplicate_field.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    struct = ir.module[0].type[0].structure
    self.assertEqual([[
        error.error("duplicate_field.emb",
                    struct.field[1].abbreviation.source_location,
                    "Duplicate name 'f'"),
        error.note("duplicate_field.emb",
                   struct.field[0].abbreviation.source_location,
                   "Original definition")
    ]], errors)

  def test_abbreviation_duplicates_field_name_in_struct(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+4] UInt field\n"
                            "  4 [+4] UInt field2 (field)\n",
                            "duplicate_field.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    struct = ir.module[0].type[0].structure
    self.assertEqual([[
        error.error("duplicate_field.emb",
                    struct.field[1].abbreviation.source_location,
                    "Duplicate name 'field'"),
        error.note("duplicate_field.emb",
                   struct.field[0].name.source_location,
                   "Original definition")
    ]], errors)

  def test_field_name_duplicates_abbreviation_in_struct(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+4] UInt field (field2)\n"
                            "  4 [+4] UInt field2\n", "duplicate_field.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    struct = ir.module[0].type[0].structure
    self.assertEqual([[
        error.error("duplicate_field.emb",
                    struct.field[1].name.source_location,
                    "Duplicate name 'field2'"),
        error.note("duplicate_field.emb",
                   struct.field[0].abbreviation.source_location,
                   "Original definition")
    ]], errors)

  def test_duplicate_value_name_in_enum(self):
    ir = self._construct_ir("enum Foo:\n"
                            "  BAR = 1\n"
                            "  BAR = 1\n", "duplicate_enum.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    self.assertEqual([[
        error.error(
            "duplicate_enum.emb",
            ir.module[0].type[0].enumeration.value[1].name.source_location,
            "Duplicate name 'BAR'"),
        error.note(
            "duplicate_enum.emb",
            ir.module[0].type[0].enumeration.value[0].name.source_location,
            "Original definition")
    ]], errors)
  def test_ambiguous_name(self):
    # struct UInt will be ambiguous with the external UInt in the prelude.
    ir = self._construct_ir("struct UInt:\n"
                            "  0 [+4] Int:8[4] field\n"
                            "struct Foo:\n"
                            "  0 [+4] UInt bar\n", "ambiguous.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    # Find the UInt definition in the prelude.
    for type_ir in ir.module[1].type:
      if type_ir.name.name.text == "UInt":
        prelude_uint = type_ir
        break
    ambiguous_type_ir = ir.module[0].type[1].structure.field[0].type.atomic_type
    self.assertEqual([[
        error.error("ambiguous.emb",
                    ambiguous_type_ir.reference.source_name[0].source_location,
                    "Ambiguous name 'UInt'"),
        error.note("", prelude_uint.name.source_location,
                   "Possible resolution"),
        error.note("ambiguous.emb", ir.module[0].type[0].name.source_location,
                   "Possible resolution")
    ]], errors)

  def test_missing_name(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+4] Bar field\n",
                            "missing.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    missing_type_ir = ir.module[0].type[0].structure.field[0].type.atomic_type
    self.assertEqual([
        [error.error("missing.emb",
                     missing_type_ir.reference.source_name[0].source_location,
                     "No candidate for 'Bar'")]
    ], errors)
  def test_missing_leading_name(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+Num.FOUR] UInt field\n", "missing.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    missing_expr_ir = ir.module[0].type[0].structure.field[0].location.size
    self.assertEqual([
        [error.error(
            "missing.emb",
            missing_expr_ir.constant_reference.source_name[0].source_location,
            "No candidate for 'Num'")]
    ], errors)

  def test_missing_trailing_name(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+Num.FOUR] UInt field\n"
                            "enum Num:\n"
                            "  THREE = 3\n", "missing.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    missing_expr_ir = ir.module[0].type[0].structure.field[0].location.size
    self.assertEqual([
        [error.error(
            "missing.emb",
            missing_expr_ir.constant_reference.source_name[1].source_location,
            "No candidate for 'FOUR'")]
    ], errors)

  def test_missing_middle_name(self):
    ir = self._construct_ir("struct Foo:\n"
                            "  0 [+Num.NaN.FOUR] UInt field\n"
                            "enum Num:\n"
                            "  FOUR = 4\n", "missing.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    missing_expr_ir = ir.module[0].type[0].structure.field[0].location.size
    self.assertEqual([
        [error.error(
            "missing.emb",
            missing_expr_ir.constant_reference.source_name[1].source_location,
            "No candidate for 'NaN'")]
    ], errors)
  def test_inner_resolution(self):
    ir = self._construct_ir(
        "struct OuterStruct:\n"
        "\n"
        "  struct InnerStruct2:\n"
        "    0 [+1] InnerStruct.InnerEnum inner_enum\n"
        "\n"
        "  struct InnerStruct:\n"
        "    enum InnerEnum:\n"
        "      ONE = 1\n"
        "\n"
        "    0 [+1] InnerEnum inner_enum\n"
        "\n"
        "  0 [+InnerStruct.InnerEnum.ONE] InnerStruct.InnerEnum inner_enum\n",
        "nested.emb")
    errors = symbol_resolver.resolve_symbols(ir)
    self.assertFalse(errors)
    outer_struct = ir.module[0].type[0]
    inner_struct = outer_struct.subtype[1]
    inner_struct_2 = outer_struct.subtype[0]
    inner_enum = inner_struct.subtype[0]
    self.assertEqual(["OuterStruct", "InnerStruct"],
                     list(inner_struct.name.canonical_name.object_path))
    self.assertEqual(["OuterStruct", "InnerStruct", "InnerEnum"],
                     list(inner_enum.name.canonical_name.object_path))
    self.assertEqual(["OuterStruct", "InnerStruct2"],
                     list(inner_struct_2.name.canonical_name.object_path))
    outer_field = outer_struct.structure.field[0]
    outer_field_end_ref = outer_field.location.size.constant_reference
    self.assertEqual(
        ["OuterStruct", "InnerStruct", "InnerEnum", "ONE"],
        list(outer_field_end_ref.canonical_name.object_path))
    self.assertEqual(
        ["OuterStruct", "InnerStruct", "InnerEnum"],
        list(outer_field.type.atomic_type.reference.canonical_name.object_path))
    inner_field_2_type = inner_struct_2.structure.field[0].type.atomic_type
    self.assertEqual(
        ["OuterStruct", "InnerStruct", "InnerEnum"],
        list(inner_field_2_type.reference.canonical_name.object_path))
  def test_resolution_against_anonymous_bits(self):
    ir = self._construct_ir("struct Struct:\n"
                            "  0 [+1] bits:\n"
                            "    7 [+1] Flag last_packet\n"
                            "    5 [+2] enum inline_inner_enum:\n"
                            "      AA = 0\n"
                            "      BB = 1\n"
                            "      CC = 2\n"
                            "      DD = 3\n"
                            "    0 [+5] UInt header_size (h)\n"
                            "  0 [+h] UInt:8[] header_bytes\n"
                            "\n"
                            "struct Struct2:\n"
                            "  0 [+1] Struct.InlineInnerEnum value\n",
                            "anonymity.emb")
    errors = symbol_resolver.resolve_symbols(ir)
    self.assertFalse(errors)
    struct1 = ir.module[0].type[0]
    struct1_bits_field = struct1.structure.field[0]
    struct1_bits_field_type = struct1_bits_field.type.atomic_type.reference
    struct1_byte_field = struct1.structure.field[4]
    inner_bits = struct1.subtype[0]
    inner_enum = struct1.subtype[1]
    self.assertTrue(inner_bits.HasField("structure"))
    self.assertTrue(inner_enum.HasField("enumeration"))
    self.assertTrue(inner_bits.name.is_anonymous)
    self.assertFalse(inner_enum.name.is_anonymous)
    self.assertEqual(["Struct", "InlineInnerEnum"],
                     list(inner_enum.name.canonical_name.object_path))
    self.assertEqual(
        ["Struct", "InlineInnerEnum", "AA"],
        list(inner_enum.enumeration.value[0].name.canonical_name.object_path))
    self.assertEqual(
        list(inner_bits.name.canonical_name.object_path),
        list(struct1_bits_field_type.canonical_name.object_path))
    self.assertEqual(2, len(inner_bits.name.canonical_name.object_path))
    self.assertEqual(
        ["Struct", "header_size"],
        list(struct1_byte_field.location.size.field_reference.path[0]
             .canonical_name.object_path))
  def test_duplicate_name_in_different_inline_bits(self):
    ir = self._construct_ir(
        "struct Struct:\n"
        "  0 [+1] bits:\n"
        "    7 [+1] Flag a\n"
        "  1 [+1] bits:\n"
        "    0 [+1] Flag a\n", "duplicate_in_anon.emb")
    errors = error.filter_errors(symbol_resolver.resolve_symbols(ir))
    supertype = ir.module[0].type[0]
    self.assertEqual([[
        error.error(
            "duplicate_in_anon.emb",
            supertype.structure.field[3].name.source_location,
            "Duplicate name 'a'"),
        error.note(
            "duplicate_in_anon.emb",
            supertype.structure.field[1].name.source_location,
            "Original definition")
    ]], errors)

  def test_duplicate_name_in_same_inline_bits(self):
    ir = self._construct_ir(
        "struct Struct:\n"
        "  0 [+1] bits:\n"
        "    7 [+1] Flag a\n"
        "    0 [+1] Flag a\n", "duplicate_in_anon.emb")
    errors = symbol_resolver.resolve_symbols(ir)
    supertype = ir.module[0].type[0]
    self.assertEqual([[
        error.error(
            "duplicate_in_anon.emb",
            supertype.structure.field[2].name.source_location,
            "Duplicate name 'a'"),
        error.note(
            "duplicate_in_anon.emb",
            supertype.structure.field[1].name.source_location,
            "Original definition")
    ]], error.filter_errors(errors))
  def test_import_type_resolution(self):
    importer = ('import "ed.emb" as ed\n'
                "struct Ff:\n"
                "  0 [+1] ed.Gg gg\n")
    imported = ("struct Gg:\n"
                "  0 [+1] UInt qq\n")
    ir = self._construct_ir_multiple({"ed.emb": imported, "er.emb": importer},
                                     "er.emb")
    errors = symbol_resolver.resolve_symbols(ir)
    self.assertEqual([], errors)

  def test_duplicate_import_name(self):
    importer = ('import "ed.emb" as ed\n'
                'import "ed.emb" as ed\n'
                "struct Ff:\n"
                "  0 [+1] ed.Gg gg\n")
    imported = ("struct Gg:\n"
                "  0 [+1] UInt qq\n")
    ir = self._construct_ir_multiple({"ed.emb": imported, "er.emb": importer},
                                     "er.emb")
    errors = symbol_resolver.resolve_symbols(ir)
    # Note: the error is on import[2] duplicating import[1] because the
    # implicit prelude import is import[0].
    self.assertEqual([
        [error.error("er.emb",
                     ir.module[0].foreign_import[2].local_name.source_location,
                     "Duplicate name 'ed'"),
         error.note("er.emb",
                    ir.module[0].foreign_import[1].local_name.source_location,
                    "Original definition")]
    ], errors)

  def test_import_enum_resolution(self):
    importer = ('import "ed.emb" as ed\n'
                "struct Ff:\n"
                "  if ed.Gg.GG == ed.Gg.GG:\n"
                "    0 [+1] UInt gg\n")
    imported = ("enum Gg:\n"
                "  GG = 0\n")
    ir = self._construct_ir_multiple({"ed.emb": imported, "er.emb": importer},
                                     "er.emb")
    errors = symbol_resolver.resolve_symbols(ir)
    self.assertEqual([], errors)
def test_that_double_import_names_are_syntactically_invalid(self):
# There are currently no checks in resolve_symbols that it is not possible
# to get to symbols imported by another module, because it is syntactically
# invalid. This may change in the future, in which case this test should be
# fixed by adding an explicit check to resolve_symbols and checking the
# error message here.
importer = ('import "ed.emb" as ed\n'
"struct Ff:\n"
" 0 [+1] ed.ed2.Gg gg\n")
imported = 'import "ed2.emb" as ed2\n'
imported2 = ("struct Gg:\n"
" 0 [+1] UInt qq\n")
unused_ir, unused_debug_info, errors = glue.parse_emboss_file(
"er.emb",
test_util.dict_file_reader({"ed.emb": imported,
"ed2.emb": imported2,
"er.emb": importer}),
stop_before_step="resolve_symbols")
assert errors
def test_no_error_when_inline_name_aliases_outer_name(self):
# The inline enum's complete type should be Foo.Foo. During parsing, the
# name is set to just "Foo", but symbol resolution should a) select the
# correct Foo, and b) not complain that multiple Foos could match.
ir = self._construct_ir(
"struct Foo:\n"
" 0 [+1] enum foo:\n"
" BAR = 0\n")
errors = symbol_resolver.resolve_symbols(ir)
self.assertEqual([], errors)
field = ir.module[0].type[0].structure.field[0]
self.assertEqual(
["Foo", "Foo"],
list(field.type.atomic_type.reference.canonical_name.object_path))
def test_no_error_when_inline_name_in_anonymous_bits_aliases_outer_name(self):
# There is an extra layer of complexity when an inline type appears inside
# of an inline bits.
ir = self._construct_ir(
"struct Foo:\n"
" 0 [+1] bits:\n"
" 0 [+4] enum foo:\n"
" BAR = 0\n")
errors = symbol_resolver.resolve_symbols(ir)
self.assertEqual([], error.filter_errors(errors))
field = ir.module[0].type[0].subtype[0].structure.field[0]
self.assertEqual(
["Foo", "Foo"],
list(field.type.atomic_type.reference.canonical_name.object_path))
class ResolveFieldReferencesTest(unittest.TestCase):
"""Tests for symbol_resolver.resolve_field_references()."""
def _construct_ir_multiple(self, file_dict, primary_emb_name):
ir, unused_debug_info, errors = glue.parse_emboss_file(
primary_emb_name,
test_util.dict_file_reader(file_dict),
stop_before_step="resolve_field_references")
assert not errors
return ir
def _construct_ir(self, emb_text, name="happy.emb"):
return self._construct_ir_multiple({name: emb_text}, name)
def test_subfield_resolution(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg gg\n"
" 1 [+gg.qq] UInt:8[] data\n"
"struct Gg:\n"
" 0 [+1] UInt qq\n", "subfield.emb")
errors = symbol_resolver.resolve_field_references(ir)
self.assertFalse(errors)
ff = ir.module[0].type[0]
location_end_path = ff.structure.field[1].location.size.field_reference.path
self.assertEqual(["Ff", "gg"],
list(location_end_path[0].canonical_name.object_path))
self.assertEqual(["Gg", "qq"],
list(location_end_path[1].canonical_name.object_path))
def test_aliased_subfield_resolution(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg real_gg\n"
" 1 [+gg.qq] UInt:8[] data\n"
" let gg = real_gg\n"
"struct Gg:\n"
" 0 [+1] UInt real_qq\n"
" let qq = real_qq", "subfield.emb")
errors = symbol_resolver.resolve_field_references(ir)
self.assertFalse(errors)
ff = ir.module[0].type[0]
location_end_path = ff.structure.field[1].location.size.field_reference.path
self.assertEqual(["Ff", "gg"],
list(location_end_path[0].canonical_name.object_path))
self.assertEqual(["Gg", "qq"],
list(location_end_path[1].canonical_name.object_path))
def test_aliased_aliased_subfield_resolution(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg really_real_gg\n"
" 1 [+gg.qq] UInt:8[] data\n"
" let gg = real_gg\n"
" let real_gg = really_real_gg\n"
"struct Gg:\n"
" 0 [+1] UInt qq\n", "subfield.emb")
errors = symbol_resolver.resolve_field_references(ir)
self.assertFalse(errors)
ff = ir.module[0].type[0]
location_end_path = ff.structure.field[1].location.size.field_reference.path
self.assertEqual(["Ff", "gg"],
list(location_end_path[0].canonical_name.object_path))
self.assertEqual(["Gg", "qq"],
list(location_end_path[1].canonical_name.object_path))
def test_subfield_resolution_fails(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg gg\n"
" 1 [+gg.rr] UInt:8[] data\n"
"struct Gg:\n"
" 0 [+1] UInt qq\n", "subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
self.assertEqual([
[error.error("subfield.emb", ir.module[0].type[0].structure.field[
1].location.size.field_reference.path[1].source_name[
0].source_location, "No candidate for 'rr'")]
], errors)
def test_subfield_resolution_failure_shortcuts_further_resolution(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg gg\n"
" 1 [+gg.rr.qq] UInt:8[] data\n"
"struct Gg:\n"
" 0 [+1] UInt qq\n", "subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
self.assertEqual([
[error.error("subfield.emb", ir.module[0].type[0].structure.field[
1].location.size.field_reference.path[1].source_name[
0].source_location, "No candidate for 'rr'")]
], errors)
def test_subfield_resolution_failure_with_aliased_name(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg gg\n"
" 1 [+gg.gg] UInt:8[] data\n"
"struct Gg:\n"
" 0 [+1] UInt qq\n", "subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
self.assertEqual([
[error.error("subfield.emb", ir.module[0].type[0].structure.field[
1].location.size.field_reference.path[1].source_name[
0].source_location, "No candidate for 'gg'")]
], errors)
def test_subfield_resolution_failure_with_array(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg[1] gg\n"
" 1 [+gg.qq] UInt:8[] data\n"
"struct Gg:\n"
" 0 [+1] UInt qq\n", "subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
self.assertEqual([
[error.error("subfield.emb", ir.module[0].type[0].structure.field[
1].location.size.field_reference.path[0].source_name[
0].source_location, "Cannot access member of array 'gg'")]
], errors)
def test_subfield_resolution_failure_with_int(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] UInt gg_source\n"
" 1 [+gg.qq] UInt:8[] data\n"
" let gg = gg_source + 1\n",
"subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
error_field = ir.module[0].type[0].structure.field[1]
error_reference = error_field.location.size.field_reference
error_location = error_reference.path[0].source_name[0].source_location
self.assertEqual([
[error.error("subfield.emb", error_location,
"Cannot access member of noncomposite field 'gg'")]
], errors)
def test_subfield_resolution_failure_with_int_no_cascade(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] UInt gg_source\n"
" 1 [+qqx] UInt:8[] data\n"
" let gg = gg_source + 1\n"
" let yy = gg.no_field\n"
" let qqx = yy.x\n"
" let qqy = yy.y\n",
"subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
error_field = ir.module[0].type[0].structure.field[3]
error_reference = error_field.read_transform.field_reference
error_location = error_reference.path[0].source_name[0].source_location
self.assertEqual([
[error.error("subfield.emb", error_location,
"Cannot access member of noncomposite field 'gg'")]
], errors)
def test_subfield_resolution_failure_with_abbreviation(self):
ir = self._construct_ir(
"struct Ff:\n"
" 0 [+1] Gg gg\n"
" 1 [+gg.q] UInt:8[] data\n"
"struct Gg:\n"
" 0 [+1] UInt qq (q)\n", "subfield.emb")
errors = error.filter_errors(symbol_resolver.resolve_field_references(ir))
self.assertEqual([
# TODO(bolms): Make the error message clearer, in this case.
[error.error("subfield.emb", ir.module[0].type[0].structure.field[
1].location.size.field_reference.path[1].source_name[
0].source_location, "No candidate for 'q'")]
], errors)
if __name__ == "__main__":
unittest.main()
# --- File: progressivis/storage/base.py (jdfekete/progressivis, BSD-2-Clause) ---
from abc import ABCMeta, abstractmethod, abstractproperty
from contextlib import contextmanager
class StorageObject(metaclass=ABCMeta):
@abstractproperty
def name(self):
pass
@abstractproperty
def attrs(self):
pass
@abstractproperty
def __len__(self):
pass
class Attribute(metaclass=ABCMeta):
@abstractmethod
def __getitem__(self, name):
pass
@abstractmethod
def __setitem__(self, name, value):
pass
@abstractmethod
def __delitem__(self, name):
pass
@abstractmethod
def __len__(self):
pass
@abstractmethod
def __iter__(self):
pass
@abstractmethod
def __contains__(self, name):
pass
class DatasetFactory(StorageObject):
@abstractmethod
def create_dataset(self, name, shape=None, dtype=None, data=None, **kwds):
pass
@abstractmethod
def require_dataset(self, name, shape, dtype, exact=False, **kwds):
pass
@abstractmethod
def __getitem__(self, name):
pass
@abstractmethod
def __delitem__(self, name):
pass
@abstractmethod
def __contains__(self, name):
pass
class Group(DatasetFactory):
default = None
default_internal = None
@abstractmethod
def create_dataset(self, name, shape=None, dtype=None, data=None, **kwds):
pass
@abstractmethod
def require_dataset(self, name, shape, dtype, exact=False, **kwds):
pass
@abstractmethod
def require_group(self, name):
pass
@abstractmethod
def __getitem__(self, name):
pass
@abstractmethod
def __delitem__(self, name):
pass
@abstractmethod
def __contains__(self, name):
pass
def close_all():
pass
class Dataset(StorageObject):
@abstractproperty
def shape(self):
pass
@abstractproperty
def dtype(self):
pass
@abstractproperty
def maxshape(self):
pass
@abstractproperty
def fillvalue(self):
pass
@abstractproperty
def chunks(self):
pass
@abstractmethod
def resize(self, size, axis=None):
pass
@abstractproperty
def size(self):
pass
@abstractmethod
def __getitem__(self, args):
pass
@abstractmethod
def __setitem__(self, args, val):
pass
def read_direct(self, dest, source_sel=None, dest_sel=None):
dest[dest_sel] = self[source_sel]
class StorageEngine(Group):
_engines = dict()
default = None
def __init__(self, name, create_dataset_kwds=None):
# print('# creating storage engine %s'% name)
# import pdb; pdb.set_trace()
assert name not in StorageEngine._engines
self._name = name
StorageEngine._engines[name] = self
if StorageEngine.default is None:
StorageEngine.default = self.name
self._create_dataset_kwds = create_dataset_kwds or {}
@staticmethod
@contextmanager
def default_engine(engine):
if engine not in StorageEngine._engines:
raise ValueError('Unknown storage engine %s', engine)
saved = StorageEngine.default
try:
StorageEngine.default = engine
yield saved
finally:
StorageEngine.default = saved
@property
def create_dataset_kwds(self):
return self._create_dataset_kwds
@property
def name(self):
return self._name
def open(self, name, flags, **kwds):
pass
def close(self, name, flags, **kwds):
pass
def flush(self):
pass
@staticmethod
def lookup(engine):
        default = StorageEngine._engines.get(StorageEngine.default)
return StorageEngine._engines.get(engine, default)
@staticmethod
def engines():
return StorageEngine._engines
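`default_engine` above is a reusable pattern: a `contextmanager` that swaps a class-level default and restores it in a `finally` block even if the body raises. A standalone sketch of the same pattern (the `Registry` name and engine strings are illustrative, not part of progressivis):

```python
from contextlib import contextmanager

class Registry:
    # Illustrative stand-in for StorageEngine: a class-level default plus
    # a context manager that swaps it temporarily and always restores it.
    default = "memory"

    @staticmethod
    @contextmanager
    def default_engine(engine):
        saved = Registry.default
        try:
            Registry.default = engine
            yield saved          # hand back the previous default, like the original
        finally:
            Registry.default = saved  # restored even on exception

with Registry.default_engine("hdf5") as previous:
    assert Registry.default == "hdf5" and previous == "memory"
assert Registry.default == "memory"
```

The `try`/`finally` around the `yield` is what makes the swap exception-safe; without it, an error inside the `with` body would leave the class default pointing at the temporary engine.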
# --- File: tcex/api/tc/v3/security/users/user_filter.py (GShepherdTC/tcex, Apache-2.0) ---
"""User TQL Filter"""
# standard library
from enum import Enum
# first-party
from tcex.api.tc.v3.api_endpoints import ApiEndpoints
from tcex.api.tc.v3.filter_abc import FilterABC
from tcex.api.tc.v3.tql.tql_type import TqlType
class UserFilter(FilterABC):
"""Filter Object for Users"""
@property
def _api_endpoint(self) -> str:
"""Return the API endpoint."""
return ApiEndpoints.USERS.value
def first_name(self, operator: Enum, first_name: str) -> None:
"""Filter First Name based on **firstName** keyword.
Args:
operator: The operator enum for the filter.
first_name: The first name of the user.
"""
self._tql.add_filter('firstName', operator, first_name, TqlType.STRING)
def group_id(self, operator: Enum, group_id: int) -> None:
"""Filter groupID based on **groupId** keyword.
Args:
operator: The operator enum for the filter.
group_id: The ID of the group the user belongs to.
"""
self._tql.add_filter('groupId', operator, group_id, TqlType.INTEGER)
def id(self, operator: Enum, id: int) -> None: # pylint: disable=redefined-builtin
"""Filter ID based on **id** keyword.
Args:
operator: The operator enum for the filter.
id: The ID of the user.
"""
self._tql.add_filter('id', operator, id, TqlType.INTEGER)
def last_name(self, operator: Enum, last_name: str) -> None:
"""Filter Last Name based on **lastName** keyword.
Args:
operator: The operator enum for the filter.
last_name: The last name of the user.
"""
self._tql.add_filter('lastName', operator, last_name, TqlType.STRING)
def user_name(self, operator: Enum, user_name: str) -> None:
"""Filter User Name based on **userName** keyword.
Args:
operator: The operator enum for the filter.
user_name: The user name of the user.
"""
self._tql.add_filter('userName', operator, user_name, TqlType.STRING)
# --- File: processes/views.py (cybrvybe/AU7OMA7A-BI, MIT) ---
from django.shortcuts import render
from rest_framework import viewsets
from .models import Process
from .serializers import ProcessSerializer
# Create your views here.
class ProcessView(viewsets.ModelViewSet):
serializer_class = ProcessSerializer
queryset = Process.objects.all()
# --- File: tests/contrib/django/test_cache_wrapper.py (tophatmonocle/dd-trace-py, BSD-3-Clause) ---
# 3rd party
from nose.tools import eq_, ok_, assert_raises
from django.core.cache import caches
# testing
from .utils import DjangoTraceTestCase
class DjangoCacheTest(DjangoTraceTestCase):
"""
Ensures that the tracing doesn't break the Django
cache framework
"""
def test_wrapper_get_and_set(self):
# get the default cache
cache = caches['default']
value = cache.get('missing_key')
eq_(value, None)
cache.set('a_key', 50)
value = cache.get('a_key')
eq_(value, 50)
def test_wrapper_add(self):
# get the default cache
cache = caches['default']
cache.add('a_key', 50)
value = cache.get('a_key')
eq_(value, 50)
# add should not update a key if it's present
cache.add('a_key', 40)
value = cache.get('a_key')
eq_(value, 50)
def test_wrapper_delete(self):
# get the default cache
cache = caches['default']
cache.set('a_key', 50)
cache.delete('a_key')
value = cache.get('a_key')
eq_(value, None)
def test_wrapper_incr_safety(self):
# get the default cache
cache = caches['default']
# it should fail not because of our wrapper
with assert_raises(ValueError) as ex:
cache.incr('missing_key')
# the error is not caused by our tracer
eq_(ex.exception.args[0], "Key 'missing_key' not found")
# an error trace must be sent
spans = self.tracer.writer.pop()
eq_(len(spans), 2)
span = spans[0]
eq_(span.resource, 'incr')
eq_(span.name, 'django.cache')
eq_(span.span_type, 'cache')
eq_(span.error, 1)
def test_wrapper_incr(self):
# get the default cache
cache = caches['default']
cache.set('value', 0)
value = cache.incr('value')
eq_(value, 1)
value = cache.get('value')
eq_(value, 1)
def test_wrapper_decr_safety(self):
# get the default cache
cache = caches['default']
# it should fail not because of our wrapper
with assert_raises(ValueError) as ex:
cache.decr('missing_key')
# the error is not caused by our tracer
eq_(ex.exception.args[0], "Key 'missing_key' not found")
# an error trace must be sent
spans = self.tracer.writer.pop()
eq_(len(spans), 3)
span = spans[0]
eq_(span.resource, 'decr')
eq_(span.name, 'django.cache')
eq_(span.span_type, 'cache')
eq_(span.error, 1)
def test_wrapper_decr(self):
# get the default cache
cache = caches['default']
cache.set('value', 0)
value = cache.decr('value')
eq_(value, -1)
value = cache.get('value')
eq_(value, -1)
def test_wrapper_get_many(self):
# get the default cache
cache = caches['default']
cache.set('a_key', 50)
cache.set('another_key', 60)
values = cache.get_many(['a_key', 'another_key'])
ok_(isinstance(values, dict))
eq_(values['a_key'], 50)
eq_(values['another_key'], 60)
def test_wrapper_set_many(self):
# get the default cache
cache = caches['default']
cache.set_many({'a_key': 50, 'another_key': 60})
eq_(cache.get('a_key'), 50)
eq_(cache.get('another_key'), 60)
def test_wrapper_delete_many(self):
# get the default cache
cache = caches['default']
cache.set('a_key', 50)
cache.set('another_key', 60)
cache.delete_many(['a_key', 'another_key'])
eq_(cache.get('a_key'), None)
eq_(cache.get('another_key'), None)
# --- File: PyPoll/main.py (laquita44/python-challenge, CNRI-Python/RSA-MD) ---
import os
import csv
import collections
from collections import Counter
# variables in list
candidate_votes = []
candidate_selection = []
# set File path
file_path = os.path.join("..", "Resources", "election_data.csv")
# set path for reader
with open(file_path) as csvfile:
csv_reader = csv.reader(csvfile, delimiter=",")
# read the header row first
csv_header = next(csv_reader)
# loop through each row
for row in csv_reader:
candidate_votes.append(row[2])
    # sort the votes in ascending order (avoid shadowing the built-in `list`)
    sorted_votes = sorted(candidate_votes)
    # arrange the sorted list
    arrange_list = sorted_votes
# count votes per candidate in ascending order to append to list
candidate_count = Counter (arrange_list)
candidate_selection.append(candidate_count.most_common())
# calculate the percentage of votes per candidate
for choice in candidate_selection:
holder_one = format((choice[0][1])*100/(sum(candidate_count.values())),'.3f')
holder_two = format((choice[1][1])*100/(sum(candidate_count.values())),'.3f')
holder_three = format((choice[2][1])*100/(sum(candidate_count.values())),'.3f')
holder_four = format((choice[3][1])*100/(sum(candidate_count.values())),'.3f')
# Print values, sum, and percentages position
print("Election Results")
print("-------------------------")
print(f"Total Votes: {sum(candidate_count.values())}")
print("-------------------------")
print(f"{candidate_selection[0][0][0]}: {holder_one}% ({candidate_selection[0][0][1]})")
print(f"{candidate_selection[0][1][0]}: {holder_two}% ({candidate_selection[0][1][1]})")
print(f"{candidate_selection[0][2][0]}: {holder_three}% ({candidate_selection[0][2][1]})")
print(f"{candidate_selection[0][3][0]}: {holder_four}% ({candidate_selection[0][3][1]})")
print("-------------------------")
print(f"Winner: {candidate_selection[0][0][0]}")
print("-------------------------")
# -->> Export results to text file
voter_info_file = os.path.join("out_pypoll.txt")
with open(voter_info_file, "w") as outfile:
outfile.write("Election Results\n")
outfile.write("-------------------------\n")
outfile.write(f"Total Votes: {sum(candidate_count.values())}\n")
outfile.write("-------------------------\n")
outfile.write(f"{candidate_selection[0][0][0]}: {holder_one}% ({candidate_selection[0][0][1]})\n")
outfile.write(f"{candidate_selection[0][1][0]}: {holder_two}% ({candidate_selection[0][1][1]})\n")
outfile.write(f"{candidate_selection[0][2][0]}: {holder_three}% ({candidate_selection[0][2][1]})\n")
outfile.write(f"{candidate_selection[0][3][0]}: {holder_four}% ({candidate_selection[0][3][1]})\n")
outfile.write("-------------------------\n")
outfile.write(f"Winner: {candidate_selection[0][0][0]}\n")
outfile.write("-------------------------\n")
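The four `holder_*` assignments above apply the same percentage formula candidate by candidate; the calculation generalizes to any number of candidates by looping over `most_common()`. A sketch with made-up tallies (the names and counts below are hypothetical, not from election_data.csv):

```python
from collections import Counter

# Hypothetical vote tallies, just to illustrate the formula.
votes = Counter({"Aaa": 6, "Bbb": 3, "Ccc": 1})
total = sum(votes.values())

# One line per candidate: percentage to three decimals, then the raw count.
for name, count in votes.most_common():
    print(f"{name}: {count * 100 / total:.3f}% ({count})")
```

This replaces the fixed `holder_one` through `holder_four` variables with a loop, so the script would not need editing if the ballot gained a fifth candidate.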
# --- File: binding.gyp (epsitec-sa/node-darwin-ipc, MIT) ---
{
"targets": [
{
"target_name": "sharedMemory",
"include_dirs": [
"<!(node -e \"require('napi-macros')\")"
],
"sources": [ "./src/sharedMemory.cpp" ],
"libraries": [],
},
{
"target_name": "messaging",
"include_dirs": [
"<!(node -e \"require('napi-macros')\")"
],
"sources": [ "./src/messaging.cpp" ],
"libraries": [],
}
]
}
# --- File: Usefull_python2/protectedSample.py (innacroft/Python_codes, MIT) ---
def protected(func):
def wrapper(password):
        if password == 'platzi':
            return func()
        else:
            print('Invalid password')
return wrapper
@protected
def protected_func():
    print('Your password is correct')
if __name__ == "__main__":
    password = str(raw_input('Enter your password: '))
# wrapper=protected(protected_func)
# wrapper(password)
protected_func(password)
# --- File: Python/PythonIntro/Exercises.py (IQSS/workshops, CC-BY-4.0) ---
# ---
# jupyter:
# jupytext:
# formats: ipynb,md,py:light
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.3'
# jupytext_version: 0.8.6
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Python exercise solutions
# ## Exercise: Reading text from a file and splitting
# *Alice's Adventures in Wonderland* is full of memorable characters. The main characters from the story are listed, one-per-line, in the file named `Characters.txt`.
#
# NOTE: we will not always explicitly demonstrate everything you need to know in order to complete an exercise. Instead we focus on teaching you how to discover available methods and how use the help function to learn how to use them. It is expected that you will spend some time during the exercises looking for appropriate methods and perhaps reading documentation.
# 1. Open the `Characters.txt` file and read its contents.
# 2. Split text on newlines to produce a list with one element per line.
# Store the result as "alice_characters".
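A minimal sketch of steps 1 and 2, using an inline stand-in string so it runs without the data file (the real exercise opens `Characters.txt`):

```python
# Stand-in for the contents of "Characters.txt" (one name per line).
sample_text = "Alice\nThe White Rabbit\nThe Queen of Hearts\n"

# In the real exercise, read the file first:
#   with open("Characters.txt") as infile:
#       sample_text = infile.read()

# splitlines() splits on newlines without leaving a trailing empty element.
alice_characters = sample_text.splitlines()
print(alice_characters)
```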
# ## Exercise: count the number of main characters
# So far we've learned that there are 12 chapters, around 830 paragraphs, and about 26 thousand words in *Alice's Adventures in Wonderland*. Along the way we've also learned how to open a file and read its contents, split strings, calculate the length of objects, discover methods for string and list objects, and index/subset lists in Python. Now it is time for you to put these skills to use to learn something about the main characters in the story.
# 1. Count the number of main characters in the story (i.e., get the length
# of the list you created in previous exercise).
# 2. Extract and print just the first character from the list you created in
# the previous exercise.
# 3. (BONUS, optional): Sort the list you created in step 2 alphabetically, and then extract the last element.
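A sketch of the three steps above, again with a stand-in list in place of the one built in the previous exercise:

```python
# Stand-in list; in the exercise this comes from splitting Characters.txt.
alice_characters = ["Alice", "The White Rabbit", "The Queen of Hearts"]

num_characters = len(alice_characters)            # 1. count the characters
first_character = alice_characters[0]             # 2. first element
last_alphabetical = sorted(alice_characters)[-1]  # 3. BONUS: last after sorting
print(num_characters, first_character, last_alphabetical)
```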
# ## Exercise: Iterating and counting things
# Now that we know how to iterate using for-loops and list comprehensions the possibilities really start to open up. For example, we can use these techniques to count the number of times each character appears in the story.
# +
# 1. Make sure you have both the text and the list of characters.
#
# Open and read both "Alice_in_wonderland.txt" and
# "Characters.txt" if you have not already done so.
# +
# 2. Which chapter has the most words?
#
# Split the text into chaptes (i.e., split on "CHAPTER ")
# and use a for-loop or list comprehension to iterate over
# the chapters. For each chapter, split it into words and
# calculate the length.
# +
# 3. How many times is each character mentioned in the text?
# Iterate over the list of characters using a for-loop or
# list comprehension. For each character, call the count method
# with that character as the argument.
# +
# 4. (BONUS, optional): Put the character counts computed
# above in a dictionary with character names as the keys and
# counts as the values.
# +
# 5. (BONUS, optional): Use a nested for-loop or nested comprehension
# to calculate the number of times each character is mentioned
# in each chapter.
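A sketch of steps 3 and 4 with stand-in data (the real text and character list come from the two files opened in step 1):

```python
# Stand-in text and character list; the real ones come from the two files.
alice_txt = "Alice met the Queen. Alice bowed. The Queen frowned."
characters = ["Alice", "Queen", "Rabbit"]

# Step 3 with the BONUS dictionary of step 4: name -> number of mentions.
counts = {name: alice_txt.count(name) for name in characters}
print(counts)
```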
# --- File: py/data.py (JuliaTagBot/Faceless.jl, MIT) ---
import h5py
import numpy as np
from os.path import expanduser
def load_data(filename=expanduser("~/data/CK/dataset_10708.h5")):
    """Load the 'dataset' array from an HDF5 file as a NumPy array."""
    with h5py.File(filename, 'r') as hf:
        data = hf.get('dataset')
        np_data = np.array(data)
    return np_data
| 21.666667 | 65 | 0.669231 | 40 | 260 | 4.25 | 0.6 | 0.070588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039024 | 0.211538 | 260 | 11 | 66 | 23.636364 | 0.790244 | 0 | 0 | 0 | 0 | 0 | 0.131783 | 0.100775 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
922365e16b4f6b9c4ba62a86e1a0fde777a967ac | 2,484 | py | Python | faker/providers/phone_number/ro_RO/__init__.py | jeffwright13/faker | 9192d5143d5f1b832cc0f44b3f7ee89ca28c975a | [
"MIT"
] | 12,077 | 2015-01-01T18:30:07.000Z | 2022-03-31T23:22:01.000Z | faker/providers/phone_number/ro_RO/__init__.py | jeffwright13/faker | 9192d5143d5f1b832cc0f44b3f7ee89ca28c975a | [
"MIT"
] | 1,306 | 2015-01-03T05:18:55.000Z | 2022-03-31T02:43:04.000Z | env/lib/python3.9/site-packages/faker/providers/phone_number/ro_RO/__init__.py | simotwo/AbileneParadox-ddd | c85961efb37aba43c0d99ed1c36d083507e2b2d3 | [
"MIT"
] | 1,855 | 2015-01-08T14:20:10.000Z | 2022-03-25T17:23:32.000Z | from .. import Provider as PhoneNumberProvider
class Provider(PhoneNumberProvider):
formats = (
'021 ### ####',
'0231 ### ###',
'0232 ### ###',
'0233 ### ###',
'0234 ### ###',
'0235 ### ###',
'0236 ### ###',
'0237 ### ###',
'0238 ### ###',
'0239 ### ###',
'0240 ### ###',
'0241 ### ###',
'0242 ### ###',
'0243 ### ###',
'0244 ### ###',
'0245 ### ###',
'0246 ### ###',
'0247 ### ###',
'0248 ### ###',
'0249 ### ###',
'0250 ### ###',
'0251 ### ###',
'0252 ### ###',
'0253 ### ###',
'0254 ### ###',
'0255 ### ###',
'0256 ### ###',
'0257 ### ###',
'0258 ### ###',
'0259 ### ###',
'0260 ### ###',
'0261 ### ###',
'0262 ### ###',
'0263 ### ###',
'0264 ### ###',
'0265 ### ###',
'0266 ### ###',
'0267 ### ###',
'0268 ### ###',
'0269 ### ###',
'0786 ### ###',
'0760 ### ###',
'0761 ### ###',
'0762 ### ###',
'0763 ### ###',
'0764 ### ###',
'0765 ### ###',
'0766 ### ###',
'0767 ### ###',
'0785 ### ###',
'0768 ### ###',
'0769 ### ###',
'0784 ### ###',
'0770 ### ###',
'0772 ### ###',
'0771 ### ###',
'0749 ### ###',
'0750 ### ###',
'0751 ### ###',
'0752 ### ###',
'0753 ### ###',
'0754 ### ###',
'0755 ### ###',
'0756 ### ###',
'0757 ### ###',
'0758 ### ###',
'0759 ### ###',
'0748 ### ###',
'0747 ### ###',
'0746 ### ###',
'0740 ### ###',
'0741 ### ###',
'0742 ### ###',
'0743 ### ###',
'0744 ### ###',
'0745 ### ###',
'0711 ### ###',
'0727 ### ###',
'0725 ### ###',
'0724 ### ###',
'0726 ### ###',
'0723 ### ###',
'0722 ### ###',
'0721 ### ###',
'0720 ### ###',
'0728 ### ###',
'0729 ### ###',
'0730 ### ###',
'0739 ### ###',
'0738 ### ###',
'0737 ### ###',
'0736 ### ###',
'0735 ### ###',
'0734 ### ###',
'0733 ### ###',
'0732 ### ###',
'0731 ### ###',
'0780 ### ###',
'0788 ### ###',
)
| 23.433962 | 46 | 0.190419 | 108 | 2,484 | 4.37963 | 0.981481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.287691 | 0.447262 | 2,484 | 105 | 47 | 23.657143 | 0.05681 | 0 | 0 | 0 | 0 | 0 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009709 | 0 | 0.029126 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
922369ae6a13292571a8462a372cd8effcdf3fba | 590 | py | Python | 5oop-data-type.py | syedmurad1/OOP-Python | d09627269c12ce901677ec1053bdf565861030d7 | [
"MIT"
] | null | null | null | 5oop-data-type.py | syedmurad1/OOP-Python | d09627269c12ce901677ec1053bdf565861030d7 | [
"MIT"
] | null | null | null | 5oop-data-type.py | syedmurad1/OOP-Python | d09627269c12ce901677ec1053bdf565861030d7 | [
"MIT"
] | null | null | null | # Text Type: str
# Numeric Types: int, float, complex
# Sequence Types: list, tuple, range
# Mapping Type: dict
# Set Types: set, frozenset
# Boolean Type: bool
# Binary Types: bytes, bytearray, memoryview
x = float(1) # x will be 1.0
y = float(2.8) # y will be 2.8
z = float("3") # z will be 3.0
w = float("4.2") # w will be 4.2
print("--------------------------------------------------------")
num = 6
print(num)
print(str(num) + " is my num")
# print(num + "my num")  # will not work: cannot concatenate int and str (raises TypeError)
print("--------------------------------------------------------")
| 23.6 | 65 | 0.49661 | 85 | 590 | 3.447059 | 0.482353 | 0.081911 | 0.068259 | 0.088737 | 0.105802 | 0.105802 | 0 | 0 | 0 | 0 | 0 | 0.031056 | 0.181356 | 590 | 24 | 66 | 24.583333 | 0.575569 | 0.522034 | 0 | 0.222222 | 0 | 0 | 0.475472 | 0.422642 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.444444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
923b5d8d8b333f5ff110aed7e719352a8d17e1a9 | 1,054 | py | Python | vectorhub/encoders/text/base.py | vector-ai/vectorhub | 17c2f342cef2ff7bcc02c8f3914e79ad92071a9e | [
"Apache-2.0"
] | 385 | 2020-10-26T13:12:11.000Z | 2021-10-07T15:14:48.000Z | vectorhub/encoders/text/base.py | vector-ai/vectorhub | 17c2f342cef2ff7bcc02c8f3914e79ad92071a9e | [
"Apache-2.0"
] | 24 | 2020-10-29T13:16:31.000Z | 2021-08-31T06:47:33.000Z | vectorhub/encoders/text/base.py | vector-ai/vectorhub | 17c2f342cef2ff7bcc02c8f3914e79ad92071a9e | [
"Apache-2.0"
] | 45 | 2020-10-29T15:25:19.000Z | 2021-09-05T21:50:57.000Z | """
Base Text2Vec Model
"""
import warnings
from ...base import Base2Vec
from abc import ABC, abstractmethod
from typing import Union, List, Dict
class BaseText2Vec(Base2Vec, ABC):
def read(self, text: str):
"""An abstract method to specify the read method to read the data.
"""
pass
@property
def test_word(self):
return "dummy word"
@property
def vector_length(self):
"""
Set the vector length of the model.
"""
if hasattr(self, "_vector_length"):
return self._vector_length
else:
            print(f"The vector length is not explicitly stated so we are inferring "
                  f"from our test word - {self.test_word}.")
setattr(self, "_vector_length", len(self.encode(self.test_word)))
return self._vector_length
@vector_length.setter
def vector_length(self, value):
self._vector_length = value
@abstractmethod
def encode(self, words: Union[List[str]]):
pass | 27.736842 | 88 | 0.606262 | 127 | 1,054 | 4.905512 | 0.448819 | 0.192616 | 0.128411 | 0.060995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005435 | 0.301708 | 1,054 | 38 | 89 | 27.736842 | 0.841033 | 0.121442 | 0 | 0.24 | 0 | 0 | 0.157418 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.08 | 0.16 | 0.04 | 0.52 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
9253e004856f00085511368daf084cddb829b4f8 | 992 | py | Python | bm25/inverted_index.py | nlp-greyfoss/nlp-algorithms | 7fc269dbab03b8712ab4d1a9fbc9fcb1e375c749 | [
"Apache-2.0"
] | null | null | null | bm25/inverted_index.py | nlp-greyfoss/nlp-algorithms | 7fc269dbab03b8712ab4d1a9fbc9fcb1e375c749 | [
"Apache-2.0"
] | null | null | null | bm25/inverted_index.py | nlp-greyfoss/nlp-algorithms | 7fc269dbab03b8712ab4d1a9fbc9fcb1e375c749 | [
"Apache-2.0"
] | null | null | null | import collections
class Dictionary(dict):
'''
data structure for InvertedIndex
word -> postings(doc_id->word_count)
'''
def __missing__(self, key):
postings = collections.defaultdict(int)
self[key] = postings
return postings
class InvertedIndex:
def __init__(self):
self.dictionary = Dictionary()
def add(self, doc_id, doc):
for word in doc:
postings = self.dictionary[word]
postings[doc_id] += 1
def __contains__(self, word):
return word in self.dictionary
def __getitem__(self, word):
return self.dictionary[word]
def __len__(self):
'''
:return: word count in corpus
'''
return len(self.dictionary)
def get_doc_frequency(self, word):
'''
get number of docs word occurs in
:param word:
:return:
'''
return len(self.dictionary[word])
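The `__missing__` hook above is what lets `self.dictionary[word]` create an empty postings map on first access. A compact standalone sketch of the same auto-vivifying pattern, built from nested `defaultdict`s (the toy documents are hypothetical):

```python
import collections

# Nested auto-vivifying map: word -> (doc_id -> count),
# equivalent in effect to the Dictionary.__missing__ hook above.
dictionary = collections.defaultdict(lambda: collections.defaultdict(int))

docs = {0: ["the", "quick", "fox"], 1: ["the", "lazy", "dog"]}
for doc_id, doc in docs.items():
    for word in doc:
        dictionary[word][doc_id] += 1

doc_frequency = len(dictionary["the"])  # 2: "the" occurs in both documents
```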
| 22.545455 | 48 | 0.561492 | 104 | 992 | 5.105769 | 0.346154 | 0.158192 | 0.101695 | 0.06403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001538 | 0.344758 | 992 | 43 | 49 | 23.069767 | 0.815385 | 0.15625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.047619 | 0.095238 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
9254c4f956bff0d116a8b3ed0549f4b3aa0b7f63 | 190 | py | Python | dash/app.py | jotavaladouro/motorwat_dashboard | 92539807b8524dcded0d2658815a155eac61a27f | [
"MIT"
] | null | null | null | dash/app.py | jotavaladouro/motorwat_dashboard | 92539807b8524dcded0d2658815a155eac61a27f | [
"MIT"
] | null | null | null | dash/app.py | jotavaladouro/motorwat_dashboard | 92539807b8524dcded0d2658815a155eac61a27f | [
"MIT"
] | null | null | null | """
Create dasp app using Flask
"""
import dash
from flask import Flask
server = Flask(__name__)
app = dash.Dash(__name__, server=server)
app.config.suppress_callback_exceptions = True
| 13.571429 | 46 | 0.757895 | 26 | 190 | 5.153846 | 0.576923 | 0.164179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147368 | 190 | 13 | 47 | 14.615385 | 0.82716 | 0.142105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
926f9d5782274124e53c746d6d36c2b55224def1 | 485 | py | Python | idb/cli/commands/kill.py | sergey-plevako-badoo/FBSimulatorControl | 117af8508ba7405bdbacd29ec95a0523b3926ad3 | [
"MIT"
] | 1 | 2019-06-12T16:46:25.000Z | 2019-06-12T16:46:25.000Z | idb/cli/commands/kill.py | BalestraPatrick/idb | 9deac2af129e7595c303c121944034c556202454 | [
"MIT"
] | null | null | null | idb/cli/commands/kill.py | BalestraPatrick/idb | 9deac2af129e7595c303c121944034c556202454 | [
"MIT"
] | 1 | 2021-08-20T08:04:16.000Z | 2021-08-20T08:04:16.000Z | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
from argparse import Namespace
from idb.cli.commands.base import BaseCommand
from idb.client.client import IdbClient
class KillCommand(BaseCommand):
@property
def description(self) -> str:
return "Kill the idb daemon"
@property
def name(self) -> str:
return "kill"
async def _run_impl(self, args: Namespace) -> None:
await IdbClient.kill()
| 23.095238 | 71 | 0.692784 | 62 | 485 | 5.387097 | 0.693548 | 0.041916 | 0.077844 | 0.101796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002618 | 0.212371 | 485 | 20 | 72 | 24.25 | 0.871728 | 0.187629 | 0 | 0.166667 | 0 | 0 | 0.058673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
927432564b3ddaf25d9edbbff2e5c01710c29b1f | 55 | py | Python | dallinger/version.py | istresearch/Dallinger | 47e4967ded9e01edbc8c1ae7132c9ec30a87f116 | [
"MIT"
] | 1 | 2020-01-29T04:13:26.000Z | 2020-01-29T04:13:26.000Z | dallinger/version.py | jcpeterson/Dallinger | 55bf00efddb19ab8b7201b65c461996793edf6f4 | [
"MIT"
] | null | null | null | dallinger/version.py | jcpeterson/Dallinger | 55bf00efddb19ab8b7201b65c461996793edf6f4 | [
"MIT"
] | 1 | 2019-02-07T14:16:39.000Z | 2019-02-07T14:16:39.000Z | """Dallinger version number."""
__version__ = "4.0.0"
| 13.75 | 31 | 0.654545 | 7 | 55 | 4.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.127273 | 55 | 3 | 32 | 18.333333 | 0.604167 | 0.454545 | 0 | 0 | 0 | 0 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
92777d7e1071268c9dcf7c21200e7017df67f42d | 11,915 | py | Python | RestPy/ixnetwork_restpy/testplatform/sessions/ixnetwork/vport/protocols/pimsm/router/interface/crprange/crprange.py | ralfjon/IxNetwork | c0c834fbc465af69c12fd6b7cee4628baba7fff1 | [
"MIT"
] | null | null | null | RestPy/ixnetwork_restpy/testplatform/sessions/ixnetwork/vport/protocols/pimsm/router/interface/crprange/crprange.py | ralfjon/IxNetwork | c0c834fbc465af69c12fd6b7cee4628baba7fff1 | [
"MIT"
] | null | null | null | RestPy/ixnetwork_restpy/testplatform/sessions/ixnetwork/vport/protocols/pimsm/router/interface/crprange/crprange.py | ralfjon/IxNetwork | c0c834fbc465af69c12fd6b7cee4628baba7fff1 | [
"MIT"
] | null | null | null |
# Copyright 1997 - 2018 by IXIA Keysight
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from ixnetwork_restpy.base import Base
from ixnetwork_restpy.files import Files
class CrpRange(Base):
"""The CrpRange class encapsulates a user managed crpRange node in the ixnetwork hierarchy.
An instance of the class can be obtained by accessing the CrpRange property from a parent instance.
The internal properties list will be empty when the property is accessed and is populated from the server using the find method.
The internal properties list can be managed by the user by using the add and remove methods.
"""
_SDM_NAME = 'crpRange'
def __init__(self, parent):
super(CrpRange, self).__init__(parent)
@property
def AdvertisementHoldTime(self):
"""The time interval (in seconds) between two consecutive Candidate RP advertisements.
Returns:
number
"""
return self._get_attribute('advertisementHoldTime')
@AdvertisementHoldTime.setter
def AdvertisementHoldTime(self, value):
self._set_attribute('advertisementHoldTime', value)
@property
def BackOffInterval(self):
"""The back off time interval for the C-RP-Adv messages.
Returns:
number
"""
return self._get_attribute('backOffInterval')
@BackOffInterval.setter
def BackOffInterval(self, value):
self._set_attribute('backOffInterval', value)
@property
def CrpAddress(self):
"""Start address of the set of candidate RPs to be simulated.
Returns:
str
"""
return self._get_attribute('crpAddress')
@CrpAddress.setter
def CrpAddress(self, value):
self._set_attribute('crpAddress', value)
@property
def Enabled(self):
"""Enables/disables a Candidate RP range on the fly. The default is disabled.
Returns:
bool
"""
return self._get_attribute('enabled')
@Enabled.setter
def Enabled(self, value):
self._set_attribute('enabled', value)
@property
def GroupAddress(self):
"""Starting group address of the group range for which the candidate RP will advertise candidacy.
Returns:
str
"""
return self._get_attribute('groupAddress')
@GroupAddress.setter
def GroupAddress(self, value):
self._set_attribute('groupAddress', value)
@property
def GroupCount(self):
"""Number of groups in the range.
Returns:
number
"""
return self._get_attribute('groupCount')
@GroupCount.setter
def GroupCount(self, value):
self._set_attribute('groupCount', value)
@property
def GroupMaskLen(self):
"""Mask width (prefix length in bits) for the group range.
Returns:
number
"""
return self._get_attribute('groupMaskLen')
@GroupMaskLen.setter
def GroupMaskLen(self, value):
self._set_attribute('groupMaskLen', value)
@property
def MeshingType(self):
"""It indicates if the mappings for groups and RP addresses are Fully-Meshed or One-To-One.
Returns:
str(fullyMeshed|oneToOne)
"""
return self._get_attribute('meshingType')
@MeshingType.setter
def MeshingType(self, value):
self._set_attribute('meshingType', value)
@property
def PeriodicAdvertisementInterval(self):
"""Rate controlling variable indicating how many C-RP-Adv messages can be sent in the specified time interval.
Returns:
number
"""
return self._get_attribute('periodicAdvertisementInterval')
@PeriodicAdvertisementInterval.setter
def PeriodicAdvertisementInterval(self, value):
self._set_attribute('periodicAdvertisementInterval', value)
@property
def PriorityChangeInterval(self):
"""Time interval after which priority of all the RPs get changed, if priority type is incremental or random.
Returns:
number
"""
return self._get_attribute('priorityChangeInterval')
@PriorityChangeInterval.setter
def PriorityChangeInterval(self, value):
self._set_attribute('priorityChangeInterval', value)
@property
def PriorityType(self):
"""It indicates the type of priority to be held by the candidate RPs (CRPs). The options are Same, Incremental, and Random.
Returns:
str(same|incremental|random)
"""
return self._get_attribute('priorityType')
@PriorityType.setter
def PriorityType(self, value):
self._set_attribute('priorityType', value)
@property
def PriorityValue(self):
"""Value of priority field sent in candidate RP advertisement messages.
Returns:
number
"""
return self._get_attribute('priorityValue')
@PriorityValue.setter
def PriorityValue(self, value):
self._set_attribute('priorityValue', value)
@property
def RouterCount(self):
"""Total number of candidate RPs to be simulated starting from C-RP Address. A contiguous address range is used for this RP range simulation.
Returns:
number
"""
return self._get_attribute('routerCount')
@RouterCount.setter
def RouterCount(self, value):
self._set_attribute('routerCount', value)
@property
def TriggeredCrpMessageCount(self):
"""The number of times CRP advertisements is sent to the newly elected Bootstrap Router.
Returns:
number
"""
return self._get_attribute('triggeredCrpMessageCount')
@TriggeredCrpMessageCount.setter
def TriggeredCrpMessageCount(self, value):
self._set_attribute('triggeredCrpMessageCount', value)
def add(self, AdvertisementHoldTime=None, BackOffInterval=None, CrpAddress=None, Enabled=None, GroupAddress=None, GroupCount=None, GroupMaskLen=None, MeshingType=None, PeriodicAdvertisementInterval=None, PriorityChangeInterval=None, PriorityType=None, PriorityValue=None, RouterCount=None, TriggeredCrpMessageCount=None):
"""Adds a new crpRange node on the server and retrieves it in this instance.
Args:
AdvertisementHoldTime (number): The time interval (in seconds) between two consecutive Candidate RP advertisements.
BackOffInterval (number): The back off time interval for the C-RP-Adv messages.
CrpAddress (str): Start address of the set of candidate RPs to be simulated.
Enabled (bool): Enables/disables a Candidate RP range on the fly. The default is disabled.
GroupAddress (str): Starting group address of the group range for which the candidate RP will advertise candidacy.
GroupCount (number): Number of groups in the range.
GroupMaskLen (number): Mask width (prefix length in bits) for the group range.
MeshingType (str(fullyMeshed|oneToOne)): It indicates if the mappings for groups and RP addresses are Fully-Meshed or One-To-One.
PeriodicAdvertisementInterval (number): Rate controlling variable indicating how many C-RP-Adv messages can be sent in the specified time interval.
PriorityChangeInterval (number): Time interval after which priority of all the RPs get changed, if priority type is incremental or random.
PriorityType (str(same|incremental|random)): It indicates the type of priority to be held by the candidate RPs (CRPs). The options are Same, Incremental, and Random.
PriorityValue (number): Value of priority field sent in candidate RP advertisement messages.
RouterCount (number): Total number of candidate RPs to be simulated starting from C-RP Address. A contiguous address range is used for this RP range simulation.
TriggeredCrpMessageCount (number): The number of times CRP advertisements is sent to the newly elected Bootstrap Router.
Returns:
self: This instance with all currently retrieved crpRange data using find and the newly added crpRange data available through an iterator or index
Raises:
ServerError: The server has encountered an uncategorized error condition
"""
return self._create(locals())
def remove(self):
"""Deletes all the crpRange data in this instance from server.
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
self._delete()
def find(self, AdvertisementHoldTime=None, BackOffInterval=None, CrpAddress=None, Enabled=None, GroupAddress=None, GroupCount=None, GroupMaskLen=None, MeshingType=None, PeriodicAdvertisementInterval=None, PriorityChangeInterval=None, PriorityType=None, PriorityValue=None, RouterCount=None, TriggeredCrpMessageCount=None):
"""Finds and retrieves crpRange data from the server.
All named parameters support regex and can be used to selectively retrieve crpRange data from the server.
By default the find method takes no parameters and will retrieve all crpRange data from the server.
Args:
AdvertisementHoldTime (number): The time interval (in seconds) between two consecutive Candidate RP advertisements.
BackOffInterval (number): The back off time interval for the C-RP-Adv messages.
CrpAddress (str): Start address of the set of candidate RPs to be simulated.
Enabled (bool): Enables/disables a Candidate RP range on the fly. The default is disabled.
GroupAddress (str): Starting group address of the group range for which the candidate RP will advertise candidacy.
GroupCount (number): Number of groups in the range.
GroupMaskLen (number): Mask width (prefix length in bits) for the group range.
MeshingType (str(fullyMeshed|oneToOne)): It indicates if the mappings for groups and RP addresses are Fully-Meshed or One-To-One.
PeriodicAdvertisementInterval (number): Rate controlling variable indicating how many C-RP-Adv messages can be sent in the specified time interval.
PriorityChangeInterval (number): Time interval after which priority of all the RPs get changed, if priority type is incremental or random.
PriorityType (str(same|incremental|random)): It indicates the type of priority to be held by the candidate RPs (CRPs). The options are Same, Incremental, and Random.
PriorityValue (number): Value of priority field sent in candidate RP advertisement messages.
RouterCount (number): Total number of candidate RPs to be simulated starting from C-RP Address. A contiguous address range is used for this RP range simulation.
TriggeredCrpMessageCount (number): The number of times CRP advertisements is sent to the newly elected Bootstrap Router.
Returns:
self: This instance with matching crpRange data retrieved from the server available through an iterator or index
Raises:
ServerError: The server has encountered an uncategorized error condition
"""
return self._select(locals())
def read(self, href):
"""Retrieves a single instance of crpRange data from the server.
Args:
href (str): An href to the instance to be retrieved
Returns:
self: This instance with the crpRange data from the server available through an iterator or index
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
return self._read(href)
| 41.515679 | 324 | 0.754343 | 1,532 | 11,915 | 5.819843 | 0.180809 | 0.019067 | 0.020413 | 0.034545 | 0.616196 | 0.566958 | 0.53275 | 0.516487 | 0.516487 | 0.516487 | 0 | 0.000816 | 0.176836 | 11,915 | 286 | 325 | 41.660839 | 0.908238 | 0.65833 | 0 | 0.142857 | 0 | 0 | 0.111548 | 0.050275 | 0 | 0 | 0 | 0 | 0 | 1 | 0.336735 | false | 0 | 0.020408 | 0 | 0.55102 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
927f700fa8c1d0afea3e7c9f34840323c68915d7 | 630 | py | Python | koosli/decorators.py | Koosli/koosli.org | 56d8276bb4f68727df95419222429379ec5be302 | [
"MIT"
] | null | null | null | koosli/decorators.py | Koosli/koosli.org | 56d8276bb4f68727df95419222429379ec5be302 | [
"MIT"
] | null | null | null | koosli/decorators.py | Koosli/koosli.org | 56d8276bb4f68727df95419222429379ec5be302 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from functools import update_wrapper
from flask import g, request, redirect, url_for, current_app, abort
def admin_required(user):
'''Ensure that the currently logged in user is an admin.
If user is not logged in, redirect to login page.
'''
def decorator(fn):
def wrapped_function(*args, **kwargs):
if not user.is_authenticated():
return redirect(url_for('user.login'))
elif not user.is_admin():
abort(403)
return fn(*args, **kwargs)
return update_wrapper(wrapped_function, fn)
return decorator
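A standalone sketch of the same decorator-factory pattern, with a stub user object in place of Flask's redirect/abort machinery (the stub class and return strings are assumptions for illustration, not the real Flask behavior):

```python
from functools import update_wrapper

class StubUser:
    """Stand-in for a Flask-Login user object."""
    def __init__(self, authenticated, admin):
        self._auth, self._admin = authenticated, admin
    def is_authenticated(self):
        return self._auth
    def is_admin(self):
        return self._admin

def admin_required(user):
    def decorator(fn):
        def wrapped_function(*args, **kwargs):
            if not user.is_authenticated():
                return "redirect:login"   # stands in for redirect(url_for('user.login'))
            elif not user.is_admin():
                return "abort:403"        # stands in for abort(403)
            return fn(*args, **kwargs)
        return update_wrapper(wrapped_function, fn)
    return decorator

@admin_required(StubUser(authenticated=True, admin=False))
def dashboard():
    return "secret admin data"

print(dashboard())  # logged in but not an admin -> abort:403
```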
| 27.391304 | 67 | 0.626984 | 81 | 630 | 4.753086 | 0.54321 | 0.062338 | 0.072727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008753 | 0.274603 | 630 | 22 | 68 | 28.636364 | 0.833698 | 0.2 | 0 | 0 | 0 | 0 | 0.02045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.166667 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
9281dad69f90ff3e460784ee907241649c3dd3e4 | 350 | py | Python | wechat/download.py | dadoyi/wechat_worker | d932219e44a74358815682157b29243fb18fbb83 | [
"MIT"
] | null | null | null | wechat/download.py | dadoyi/wechat_worker | d932219e44a74358815682157b29243fb18fbb83 | [
"MIT"
] | null | null | null | wechat/download.py | dadoyi/wechat_worker | d932219e44a74358815682157b29243fb18fbb83 | [
"MIT"
] | null | null | null | import urllib.request
name = 'icon/icon18_1.png'
# URL of the image on the web
img_src = 'http://www.htmlsucai.com/demo/2018/07/%e4%bb%bf%e5%be%ae%e4%bf%a1%e7%bd%91%e9%a1%b5%e7%89%88%e8%81%8a%e5%a4%a9%e7%95%8c%e9%9d%a2%e6%a8%a1%e6%9d%bf/images/'+name
# Download the remote data to a local file; the second argument is the local filename to save to
urllib.request.urlretrieve(img_src,'D:/project/wechat/public/static/images/'+name)
| 31.818182 | 171 | 0.737143 | 70 | 350 | 3.642857 | 0.742857 | 0.101961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129129 | 0.048571 | 350 | 10 | 172 | 35 | 0.636637 | 0.105714 | 0 | 0 | 0 | 0.25 | 0.677419 | 0.125806 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
928d944ebda78c94fdd823261090c9ef2c57c3af | 3,465 | py | Python | sqlpuzzle/_queries/delete.py | Dundee/python-sqlpuzzle | 260524922a0645c9bf94a9779195f93ef2c78cba | [
"MIT"
] | 8 | 2015-03-19T11:25:32.000Z | 2020-09-02T11:30:10.000Z | sqlpuzzle/_queries/delete.py | Dundee/python-sqlpuzzle | 260524922a0645c9bf94a9779195f93ef2c78cba | [
"MIT"
] | 7 | 2015-03-23T14:34:28.000Z | 2022-02-21T12:36:01.000Z | sqlpuzzle/_queries/delete.py | Dundee/python-sqlpuzzle | 260524922a0645c9bf94a9779195f93ef2c78cba | [
"MIT"
] | 4 | 2018-11-28T21:59:27.000Z | 2020-01-05T01:50:08.000Z | from sqlpuzzle.exceptions import ConfirmDeleteAllException
from sqlpuzzle._queries.options import Options
from sqlpuzzle._queryparts import Tables, Where
from .query import Query
__all__ = ('Delete',)
class DeleteOptions(Options):
_definition_of_options = {
'ignore': {
'off': '',
'on': 'IGNORE',
},
}
def ignore(self, allow=True):
self._options['ignore'] = 'on' if allow else 'off'
class Delete(Query):
"""
Example:
.. code-block:: python
>>> sqlpuzzle.delete_from('t').where(id=1)
<Delete: DELETE FROM "t" WHERE "id" = 1>
"""
_queryparts = {
'delete_options': DeleteOptions,
'tables': Tables,
'references': Tables,
'where': Where,
}
_query_template = 'DELETE%(delete_options)s%(tables)s FROM%(references)s%(where)s'
def __init__(self, *tables):
super().__init__()
self._allow_delete_all = False
self._tables.set(*tables)
def __str__(self):
if not self._where.is_set and not self._allow_delete_all:
raise ConfirmDeleteAllException()
return super().__str__()
def allow_delete_all(self):
"""
Allow query without ``WHERE`` condition.
By default delete without condition will raise exception
:py:class:`ConfirmDeleteAllException <sqlpuzzle.exceptions.ConfirmDeleteAllException>`.
If you want really delete all rows without condition, allow it by calling
this method.
.. code-block:: python
>>> sqlpuzzle.delete_from('t')
Traceback (most recent call last):
...
ConfirmDeleteAllException: Are you sure, that you want delete all records?
>>> sqlpuzzle.delete_from('t').allow_delete_all()
<Delete: DELETE FROM "t">
"""
self._allow_delete_all = True
return self
def forbid_delete_all(self):
"""
Forbid query without WHERE condition.
By default delete without condition will raise exception
:py:class:`ConfirmDeleteAllException <sqlpuzzle.exceptions.ConfirmDeleteAllException>`.
It can be allowed by calling method :py:meth:`~.allow_delete_all`. If
you want to again forbid it, call this method.
"""
self._allow_delete_all = False
return self
def delete(self, *tables):
self._tables.set(*tables)
return self
def from_(self, *args, **kwds):
self._references.set(*args, **kwds)
return self
def from_table(self, table, alias=None):
self._references.set((table, alias))
return self
def from_tables(self, *args, **kwds):
self.from_(*args, **kwds)
return self
def join(self, table):
self._references.join(table)
return self
def inner_join(self, table):
self._references.inner_join(table)
return self
def left_join(self, table):
self._references.left_join(table)
return self
def right_join(self, table):
self._references.right_join(table)
return self
def on(self, *args, **kwds):
self._references.on(*args, **kwds)
return self
def where(self, *args, **kwds):
self._where.where(*args, **kwds)
return self
# DELETE OPTIONS
def ignore(self, allow=True):
self._delete_options.ignore(allow)
return self
| 27.283465 | 95 | 0.612121 | 389 | 3,465 | 5.251928 | 0.22108 | 0.063632 | 0.069995 | 0.035242 | 0.395497 | 0.220754 | 0.182085 | 0.147822 | 0.147822 | 0.147822 | 0 | 0.000801 | 0.279076 | 3,465 | 126 | 96 | 27.5 | 0.817054 | 0.285137 | 0 | 0.275362 | 0 | 0 | 0.057381 | 0.026719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.231884 | false | 0 | 0.057971 | 0 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
92a9dd245291f088114d2e3fa89af75f96e4c592 | 342 | py | Python | docs/examples_src/raw_query_usage/parse_with_unset_default.py | dynalz/odmantic | f20f08f8ab1768534c1e743f7539bfe4f8c73bdd | [
"0BSD"
] | 486 | 2020-10-19T05:33:53.000Z | 2022-03-30T12:54:57.000Z | docs/examples_src/raw_query_usage/parse_with_unset_default.py | dynalz/odmantic | f20f08f8ab1768534c1e743f7539bfe4f8c73bdd | [
"0BSD"
] | 183 | 2020-10-19T18:15:25.000Z | 2022-03-31T04:59:21.000Z | docs/examples_src/raw_query_usage/parse_with_unset_default.py | dynalz/odmantic | f20f08f8ab1768534c1e743f7539bfe4f8c73bdd | [
"0BSD"
] | 53 | 2020-10-19T09:35:01.000Z | 2022-03-31T20:39:51.000Z | from bson import ObjectId
from odmantic import Model
class Player(Model):
name: str
level: int = 1
document = {"name": "Leeroy", "_id": ObjectId("5f8352a87a733b8b18b0cb27")}
user = Player.parse_doc(document)
print(repr(user))
#> Player(
#> id=ObjectId("5f8352a87a733b8b18b0cb27"),
#> name="Leeroy",
#> level=1,
#> )
| 17.1 | 74 | 0.660819 | 38 | 342 | 5.894737 | 0.578947 | 0.089286 | 0.303571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121429 | 0.181287 | 342 | 19 | 75 | 18 | 0.678571 | 0.269006 | 0 | 0 | 0 | 0 | 0.15102 | 0.097959 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.625 | 0.125 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
92ae2c7f2b20ff4f228a10bfadebba1e54547e2f | 106 | py | Python | 6/pizza.py | liuhanyu200/pygame | 38a68e779e6b0a63edb1758fca98ebbf40bb0444 | [
"BSD-3-Clause"
] | null | null | null | 6/pizza.py | liuhanyu200/pygame | 38a68e779e6b0a63edb1758fca98ebbf40bb0444 | [
"BSD-3-Clause"
] | null | null | null | 6/pizza.py | liuhanyu200/pygame | 38a68e779e6b0a63edb1758fca98ebbf40bb0444 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding:utf-8 -*-
pizza = {
'crust': 'thick',
'toppings': ['mushrooms', 'extra cheese'],
}
| 13.25 | 46 | 0.5 | 10 | 106 | 5.3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0.226415 | 106 | 7 | 47 | 15.142857 | 0.634146 | 0.188679 | 0 | 0 | 0 | 0 | 0.46988 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |