hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3a1b3de82b0cb02451c59c3a93b30506f022268a | 188 | py | Python | config/urls.py | laactech/django-security-headers-example | 86ea0b7209f8871c32100ada31fe00aa4a8e9f63 | [
"BSD-3-Clause"
] | 1 | 2019-10-09T22:08:27.000Z | 2019-10-09T22:08:27.000Z | config/urls.py | laactech/django-security-headers-example | 86ea0b7209f8871c32100ada31fe00aa4a8e9f63 | [
"BSD-3-Clause"
] | 7 | 2020-06-05T23:45:57.000Z | 2022-02-10T10:40:54.000Z | config/urls.py | laactech/django-security-headers-example | 86ea0b7209f8871c32100ada31fe00aa4a8e9f63 | [
"BSD-3-Clause"
] | null | null | null | from django.urls import path
from django_security_headers_example.core.views import LandingPageView
urlpatterns = [
path("", view=LandingPageView.as_view(), name="landing_page"),
]
| 20.888889 | 70 | 0.776596 | 23 | 188 | 6.130435 | 0.73913 | 0.141844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117021 | 188 | 8 | 71 | 23.5 | 0.849398 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
28b3ccf3dc2b42165b93b678794b29f7f7e89aa1 | 545 | py | Python | calchas_datamodel/idExpression.py | s-i-newton/calchas-datamodel | eda5e2de37849d6d4766cd680bc75fec8e923f85 | [
"Apache-2.0"
] | null | null | null | calchas_datamodel/idExpression.py | s-i-newton/calchas-datamodel | eda5e2de37849d6d4766cd680bc75fec8e923f85 | [
"Apache-2.0"
] | 2 | 2017-06-01T14:14:09.000Z | 2017-06-20T10:01:13.000Z | calchas_datamodel/idExpression.py | s-i-newton/calchas | 13472f837605eff26010a28af9981ba8750e9af9 | [
"Apache-2.0"
] | null | null | null | from .abstractExpression import AbstractExpression
from typing import TypeVar
T = TypeVar('T')
class IdExpression(AbstractExpression):
def __init__(self, id_: T):
self.id = id_
def __repr__(self) -> str:
return str(self.id)
def __eq__(self, other) -> bool:
if self is other:
return True
if type(other) == IdExpression:
return self.id == other.id
return False
def __hash__(self):
return hash(self.id)
def get_id(self) -> T:
return self.id
| 20.961538 | 50 | 0.60367 | 67 | 545 | 4.626866 | 0.373134 | 0.116129 | 0.058065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.300917 | 545 | 25 | 51 | 21.8 | 0.813648 | 0 | 0 | 0 | 0 | 0 | 0.001835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.111111 | 0.166667 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
28d465fa30e0433de2c84494a0e0f5ac46a9f8f7 | 1,978 | py | Python | src/adage/decorators.py | lukasheinrich/dagger | 353c15cd97ff5150eff128f34cf1666c78826524 | [
"MIT"
] | 31 | 2018-07-12T10:33:39.000Z | 2021-12-01T22:49:42.000Z | src/adage/decorators.py | lukasheinrich/dagger | 353c15cd97ff5150eff128f34cf1666c78826524 | [
"MIT"
] | 10 | 2021-02-15T20:13:43.000Z | 2022-02-03T00:48:34.000Z | src/adage/decorators.py | lukasheinrich/dagger | 353c15cd97ff5150eff128f34cf1666c78826524 | [
"MIT"
] | 3 | 2019-05-31T18:04:15.000Z | 2021-08-23T12:00:18.000Z | import functools
def adageop(func):
"""
Decorator that adds a 's' attribute to a function
The attribute can be used to partially define
the function call, except for the 'adageobj'
keyword argument, the return value is a
single-argument ('adageobj') function
"""
def partial(*args,**kwargs):
return functools.partial(func,*args,**kwargs)
func.s = partial
return func
class Rule(object):
def __init__(self,predicate,body):
self.predicate = predicate
self.body = body
def applicable(self,adageobj):
return self.predicate(adageobj)
def apply(self,adageobj):
return self.body(adageobj = adageobj)
def adagetask(func):
"""
Decorator that adds a 's' attribute to a function
The attribute can be used to fully define a function
call to be executed at a later time. The result
will be a zero-argument callable
"""
try:
from celery import shared_task
func.celery = shared_task(func)
except ImportError:
pass
def partial(*args,**kwargs):
return functools.partial(func,*args,**kwargs)
func.s = partial
return func
def callbackrule(after = None):
"""
A decorator that creates a adage Rule from a callback function
The after argument is expected to container a dictionary
of node identifiers. The callback is expected have two arguments
A dictionary with the same keys as in after as keys, and the
corresponding nodes as values, as well as the adajeobj will be
passed to the callback
"""
after = after or {}
def decorator(func):
def predicate(adageobj):
return all([adageobj.dag.getNode(node).successful() for node in after.values()])
def body(adageobj):
depnodes = {k:adageobj.dag.getNode(v) for k,v in after.items()}
func(depnodes = depnodes, adageobj = adageobj)
return Rule(predicate,body)
return decorator
| 30.90625 | 92 | 0.664307 | 262 | 1,978 | 4.992366 | 0.358779 | 0.030581 | 0.025994 | 0.03211 | 0.220183 | 0.220183 | 0.220183 | 0.220183 | 0.220183 | 0.220183 | 0 | 0 | 0.255308 | 1,978 | 63 | 93 | 31.396825 | 0.887984 | 0.371082 | 0 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.323529 | false | 0.029412 | 0.088235 | 0.147059 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
e904784d726457730b96e531625f80ef01e860f9 | 906 | py | Python | test/top.py | persianpros/transmissionrpc | 5e6a8487ca7684459ef9d3b375b207535ae2b9dd | [
"MIT"
] | null | null | null | test/top.py | persianpros/transmissionrpc | 5e6a8487ca7684459ef9d3b375b207535ae2b9dd | [
"MIT"
] | null | null | null | test/top.py | persianpros/transmissionrpc | 5e6a8487ca7684459ef9d3b375b207535ae2b9dd | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# 2013-03, Erik Svensson <erik.public@gmail.com>
# Licensed under the MIT license.
import unittest
import transmissionrpc
class TopTest(unittest.TestCase):
def testConstants(self):
self.assertTrue(isinstance(transmissionrpc.__author__, str))
self.assertTrue(isinstance(transmissionrpc.__version_major__, int))
self.assertTrue(isinstance(transmissionrpc.__version_minor__, int))
self.assertTrue(isinstance(transmissionrpc.__version__, str))
self.assertTrue(isinstance(transmissionrpc.__copyright__, str))
self.assertTrue(isinstance(transmissionrpc.__license__, str))
self.assertEqual('{0}.{1}'.format(transmissionrpc.__version_major__, transmissionrpc.__version_minor__), transmissionrpc.__version__)
def suite():
suite = unittest.TestLoader().loadTestsFromTestCase(TopTest)
return suite
| 39.391304 | 142 | 0.738411 | 87 | 906 | 7.229885 | 0.471264 | 0.133545 | 0.228935 | 0.372019 | 0.36725 | 0.155803 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0.155629 | 906 | 22 | 143 | 41.181818 | 0.810458 | 0.110375 | 0 | 0 | 0 | 0 | 0.008974 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e9225ac8234cba226c9c33772de98e2d065d77b6 | 349 | py | Python | chapter2/bandit.py | mtrazzi/understanding-rl | 83a9b7608c805189a39b4ef81893f6ebe982f9e1 | [
"MIT"
] | 95 | 2020-04-26T12:36:07.000Z | 2020-05-02T13:23:47.000Z | chapter2/bandit.py | 3outeille/rl-book-challenge | b02595b0aec3e9632ef5d9814e925384931089bd | [
"MIT"
] | 2 | 2020-09-24T20:29:29.000Z | 2021-11-27T11:17:45.000Z | chapter2/bandit.py | 3outeille/rl-book-challenge | b02595b0aec3e9632ef5d9814e925384931089bd | [
"MIT"
] | 15 | 2020-04-27T04:10:02.000Z | 2020-04-30T21:42:04.000Z | import numpy as np
class Bandit:
def __init__(self, k=10, mean=0):
self.k = k
self.q = np.random.randn(k) + mean
self.old_q = [q_val for q_val in self.q] # copy
def max_action(self):
return np.argmax(self.q)
def reward(self, action):
return np.random.normal(self.q[action])
def reset(self):
self.q = self.old_q
| 19.388889 | 52 | 0.638968 | 63 | 349 | 3.396825 | 0.444444 | 0.116822 | 0.074766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011111 | 0.226361 | 349 | 17 | 53 | 20.529412 | 0.781481 | 0.011461 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.083333 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3a8e21c35da0565b1474e19643e2481a81691a35 | 14,317 | py | Python | utils/lists.py | luciano1337/legion-bot | 022d1ef9eb77a26b57929f800dd55770206f8852 | [
"MIT"
] | null | null | null | utils/lists.py | luciano1337/legion-bot | 022d1ef9eb77a26b57929f800dd55770206f8852 | [
"MIT"
] | null | null | null | utils/lists.py | luciano1337/legion-bot | 022d1ef9eb77a26b57929f800dd55770206f8852 | [
"MIT"
] | null | null | null | pozehug = [
'https://media1.tenor.com/images/4d89d7f963b41a416ec8a55230dab31b/tenor.gif?itemid=5166500',
'https://media1.tenor.com/images/c7efda563983124a76d319813155bd8e/tenor.gif?itemid=15900664',
'https://media1.tenor.com/images/daffa3b7992a08767168614178cce7d6/tenor.gif?itemid=15249774',
'https://media1.tenor.com/images/7e30687977c5db417e8424979c0dfa99/tenor.gif?itemid=10522729',
'https://media1.tenor.com/images/5ccc34d0e6f1dccba5b1c13f8539db77/tenor.gif?itemid=17694740'
]
raspunsuri = [
'Da', 'Nu', 'Ghiceste..', 'Absolut.',
'Desigur.', 'Fara indoiala fratimiu.',
'Cel mai probabil.', 'Daca vreau eu',
'Ajutor acest copil are iq-ul scazut!',
'https://i.imgur.com/9x18D5m.png',
'Sa speram', 'Posibil.',
'Ce vorbesti sampist cordit',
'Se prea poate', 'Atata poti cumetre',
'Daca doresc', 'Teapa cumetre',
'Milsugi grav', 'https://www.youtube.com/watch?v=1MwqNFO_rM4',
'Nu stiu ca nu sunt creativa', 'Nu stiu', 'Asa te-ai nascut bai asta', 'Yamete Kudasaiii.. ^_^', 'E prost dal in morti lui!',
'Nu il poti judeca.'
]
lovitura = [
'https://media1.tenor.com/images/9ea4fb41d066737c0e3f2d626c13f230/tenor.gif?itemid=7355956',
'https://media1.tenor.com/images/612e257ab87f30568a9449998d978a22/tenor.gif?itemid=16057834',
'https://media1.tenor.com/images/528ff731635b64037fab0ba6b76d8830/tenor.gif?itemid=17078255',
'https://media1.tenor.com/images/153b2f1bfd3c595c920ce60f1553c5f7/tenor.gif?itemid=10936993',
'https://media1.tenor.com/images/f9f121a46229ea904209a07cae362b3e/tenor.gif?itemid=7859254',
'https://media1.tenor.com/images/477821d58203a6786abea01d8cf1030e/tenor.gif?itemid=7958720'
]
pisica = [
'https://media1.tenor.com/images/730c85cb58041d4345404a67641fcd46/tenor.gif?itemid=4351869',
'https://media1.tenor.com/images/f78e68053fcaf23a6ba7fbe6b0b6cff2/tenor.gif?itemid=10614631',
'https://media1.tenor.com/images/8ab88b79885ab587f84cbdfbc3b87835/tenor.gif?itemid=15917800',
'https://media1.tenor.com/images/fea93362cd765a15b5b2f45fc6fca068/tenor.gif?itemid=14715148',
'https://media1.tenor.com/images/fb22e08583263754816e910f6a6ae4bd/tenor.gif?itemid=15310654',
'https://media1.tenor.com/images/9596d3118ddd5c600806a44da90c4863/tenor.gif?itemid=16014629',
'https://media1.tenor.com/images/ce038ac1010fa9514bb40d07c2dfed7b/tenor.gif?itemid=14797681',
'https://media1.tenor.com/images/4fbe2ab9d22992d0a42da37804f227e8/tenor.gif?itemid=9606395',
'https://media1.tenor.com/images/f6fe8d1d0463f4e51b6367bbecf56a3e/tenor.gif?itemid=6198981',
'https://media1.tenor.com/images/a862d2cb92bfbe6213e298871b1e8a9a/tenor.gif?itemid=15805236'
]
caini = [
''
]
pozehentai = [
'https://i.alexflipnote.dev/500ce4.gif',
'https://media1.tenor.com/images/832c34c525cc3b7dae850ce5e7ee451c/tenor.gif?itemid=9714277',
'https://media1.tenor.com/images/1169d1ab96669e13062c1b23ce5b9b01/tenor.gif?itemid=9035033',
'https://media1.tenor.com/images/583d46f95740b8dde76b47585d78f3a4/tenor.gif?itemid=19369487',
'https://media1.tenor.com/images/01b39c35fd1ce4bb6ce8be232c26d423/tenor.gif?itemid=12342539',
'https://media1.tenor.com/images/bd39500869eeedd72d94274282fd14f2/tenor.gif?itemid=9252323',
'https://media1.tenor.com/images/65c92e3932d7617146c7faab53e1063b/tenor.gif?itemid=11098571',
'https://media1.tenor.com/images/c344d38d1a2b799db53478b8ec302f9e/tenor.gif?itemid=14057537'
]
pozekiss = [
'https://media1.tenor.com/images/ef4a0bcb6e42189dc12ee55e0d479c54/tenor.gif?itemid=12143127',
'https://media1.tenor.com/images/f102a57842e7325873dd980327d39b39/tenor.gif?itemid=12392648',
'https://media1.tenor.com/images/3d56f6ef81e5c01241ff17c364b72529/tenor.gif?itemid=13843260',
'https://media1.tenor.com/images/503bb007a3c84b569153dcfaaf9df46a/tenor.gif?itemid=17382412',
'https://media1.tenor.com/images/6bd9c3ba3c06556935a452f0a3679ccf/tenor.gif?itemid=13387677',
'https://media1.tenor.com/images/f1dd2c4bade57949f49daeedbe3a4b86/tenor.gif?itemid=17092948'
]
lick = [
'https://media1.tenor.com/images/2ca4ca0d787ca3af4e27cdf71bb9796f/tenor.gif?itemid=15900645'
]
love = [
'https://media1.tenor.com/images/cf20ebeadcadcd54e6778dac16357644/tenor.gif?itemid=10805514'
]
pozegift = [
'https://i.imgur.com/xnHDSIb.jpg',
'https://i.imgur.com/uTrZDlC.jpg',
'https://i.imgur.com/fMgEDlZ.jpg',
'https://i.imgur.com/HZVKaYK.jpg',
'https://i.imgur.com/HvQnLpj.jpg',
'https://i.imgur.com/qRLPalh.jpg',
'https://i.imgur.com/fQaCCNF.jpg',
'https://i.imgur.com/BM8CoqI.jpg',
'https://i.imgur.com/bSTgzZj.jpg',
'https://i.imgur.com/bZOpa6H.jpg',
'https://i.imgur.com/xjHCbLq.jpg',
'https://i.imgur.com/pFn1b1H.jpg',
'https://i.imgur.com/wxA6Yhm.jpg',
'https://i.imgur.com/jw3ohim.jpg',
'https://i.imgur.com/cZOCcvO.jpg',
'https://i.imgur.com/dpDKiNh.jpg',
'https://i.imgur.com/MSmQjc2.jpg',
'https://i.imgur.com/8LXrQmy.jpg',
]
glumemafia = [
'bagameas pulan mata pleci la scoala cu 10lei an buzunar 5lei de drum 5 lei detigari trantimias pulan mata si ai figuri k ai jordani fake din targ si tricou armani luat de la turci k daca iti deschid sifonieru joak turci cu chinezi barbut',
'Cum plm sa iti ia mata telefonu adica dai un capac sa te stie de jupan',
'te lauzi ca stai la oras da tu stai in ultimu sat uitat de lume ciobanoaia cu 3 case si 2 se darama pisamas pe tn',
'Esti mare diva si ai 10k followeri pe insta da cand deschizi picioarele intreaba lumea cine a deschis punga de lotto cu cascaval',
'te dai mare fumator ca fumezi la narghilea si ai vape dar cand ti am zis de davidoff ai zis ca e ala cu ochelari din migos',
'Flexezi un tricou bape luat din obor cu 10 yang da il contactezi pe premieru chinei daca pui urechea la eticheta in rasa mati de saracie',
'cum frt nai auzit de adrian toma cel mai bun giungel wannabe de pa eune frt gen esti nub? :))))',
'gen cum morti mati sa te joci fortnite mai bine iesi afara si ti construiesti o casa ca si asa stai in pubela de gunoi :)))))))))',
'pui story ca mananci la restaurant meniuri scumpe si esti cu gagicatu mancati bn dar tie cand ti-am aratat prima oara pizza ai zis ca au scos astia de la rolex ceasuri cu salam pe el',
'ce corp ai zici ca e blenderu de facea reclama pe taraf la el',
'cand te am dus prima oara la kfc ai comandat parizer mentolat cu sos de lamaie',
'dai share la parazitii spui dalea cu cand soarele rasare am ochii injectati sau muie garda si dai share la poze cu maria si jointuri bai nebunule sa cada mie tot ce am pe casa de nu fumezi in spate la bloc cu batu ca daca afla mata aia descentrata iti fute o palma de singurul lucru pe care o sa il mai bagi in vene e perfuzia fututi morti mati ))))))',
'ho fa terminato cu fitele astea ca atunci cand te-am dus prima data la mc ai intrebat daca se poate manca cu mana',
'fa proasto te dai mare bad bici dar cand ti-am aratat h&m m-ai intrebat pe unde poti taia lemne',
'te crezi mare diva si iti faci poze pe masini si pe garduri da sa moara chilotii lu nelson daca te vede mata ca esti asa rebela iti fute un telefon nokia in cap de nu mai vezi orgoliul vreo 3 ani',
'fa andreio tiam dat felu 2 al meu la grădinița sa mănânci ca tiera foame siacu ai aruncat trandafiri fututen gura de stoarfa',
'Eu, Lorin Fortuna combatant ezoteric complex și corect privind din punct de vedere ezoteric prin rangul ezoteric precum și prin distincţiile ezoterice care mi-au fost conferite de către conducători supremi abilitaţi, blestem ezoteric la nivelul maxim posibil la care dau dreptul rangul și distinctiile ezoterice care mi-au conferite menţionate anterior. Blestem fără sfârşit temporar în mod direct împotriva fiinţei colective superioare de tip civilizaţie virtuală numită: civilizaţia virtuală arahnidica tarantulara, androgina, neagră, emoţional agresională civilizațională condusă la nivel de conducător suprem de către fiinţa superioară androgină alcătuită din: ființa individuală superioară de gen masculin numită Satanos și fiinţa individuală superioară de gen feminin numită Geea, pentru răul existenţial comis împotriva grupării de civilizaţie virtuale de tip gorilian individual neagresională civilizațional și autentic băștinașe în cadrul lumilor planetare ale planetei al cărei lume planetare medie sunt integrate existenţial cu precizarea că, răul existenţial pentru care blestem civilizaţia virtuală pe care am numit-o anterior ultim ca civilizaţie agresională civilizațional a fost comis în perioada temporală specifică calendarului planetar cuprins între data de început în care s-a dat în funcțiune oficial prima bază civilizațională planetară în cadrul zonei existenţiale a planetei a cărei lume planetară medie sunt integrate existențial aferentă și mă refer la zona existențial în cauză și la concret la baza existențială civilizațională virtuală planetară în cauza deci aferentă civilizației virtuale pe care o blestem și până în prezent.',
'fututi morti mati te dai mare smeker faci paneluri de samp da kand tiam zis de error_log ziceai sefu scuzama nam facut eu asa cv fututi morti mati olteanuadv',
'te dai mare futacios si mare fuckboy da singura fata careti zice so futi e reclama depe xnxx cu maria carei in apropierea ta',
'te dai bodybuilder ca tu faci sala sa pui pe tine da sami bag singur pulan cur ca dacati pui mana in sold zici ca esti cupa uefa esti nebun',
'cum sa te desparti de gagicata gen la inima mai ars dar tot nam sa te las',
'te dai mare smecher prin cluburi da cand era pe tv shaolin soudaun iti puneai manusa lu tac tu de sudura pe cap si ziceai ca e pumnu lu tedigong',
'Te dai mare ITst haker pula mea da nai mai trimis ss la nimeni de când ți ai spart ecranu la tlf că ți era rușine să nu se vadă damiaș drumu n pipota matii',
'pai daten mm de pizda proasta, pui ss cu samsung health la instastory si ne arati cati pasi ai facut tu de la shaormerie pana acasa sau din pat pana la frigider, si te lauzi ca faci sport? sport e cand o sugi si nuti oboseste gura.',
'sa o fut pe mata in gura pana ii perforez laringele',
'Cum sati fie frica de fantome gen scubi dubi du unde esti tu',
'cand ti am aratat prima oara narghileaua ai crezut ca e pompa si ai scos mingea so umflam pt diseara la fotbal',
'ce nas ai zici ca e racheta lu Trump cu care bombardeaza Siria',
'daca esti scunda si folosesti expresia "sunt mai aproape de iad", nu daten mortii mati esti mai aproape sa-mi faci masaj la prostata cu gura din picioare',
'BAGAMIAS PULAN MORTI TAI DITULE AI CORPU ALA ZICI CAI AMBALAJ DE LA IKEA',
'cum sa nu sti nimic despre masini gen am creieru tdi cu motor de 6 mii ))))))',
'sa vedeti cioroilor, azi dimineata stateam linistit in pat si il gadilam pe fratimio in talpa, la care mama "macar asteapta pana se naste", gen cplm nu pot sa ma joc cu el',
'pray pt toti cioroi care lea fost inima ranita de puicute stei strong barbati mei',
'Ho nebunule ca groapa marianelor si mata sunt cele mai adanci puncte de pe planeta',
'te dai mare diva figuri de buftea cu iph 6 da daca conectez castile la buricu tau se aude balada foamei bass boosted',
'cum pulamea sa nadormi vere gen noapte buna somn usor sapte purici pun picior',
'comentezi de bataie dar te sponsorizeaza croco cu corpu ala fmm de stick',
'buna ziua muie la bozgori si o seara cat mai linistita sa aveti oameni buni',
'Baganeam pula în profii de matematică o vezi pe Ionela ca are curu mare și îi pui 8 fututen gura si mie 5 luates in pula cu chelia ta',
'MAMA TA E ASA DE GRASA INCAT THANOS A BATUT DE 2 ORI DIN DEGETE SA O STEARGA DE PE PLANETA',
'esti frumoasa andreea da fara machiaj te striga lumea andrei pe strada',
'te dai mare smecher ca ai bani tu da dormi cu fratii tai pe rand in soba ca e frig afara pisa m as pe tn de sarantoc',
'vezi sa nu cazi in pumn baiatul meu ca poate te omori',
'Sa te fut in gura mort ca viu dai din picioare',
'Coaie te lauzi ca esti orasean ai vazut tranvaiu ai zis ca a fatat trenu copilu matii',
'ESTI ATAT DE URAT INCAT ATUNCI CAND PLANGI LACRIMILE SE INTALNESC LA CEAFA SA ITI OCOLEASCA FATA',
'Te dai mare culturist gen coaie ce spate am rup da sati scuipe unchiu Fester in ciorba mati de nu esti mai cocosat decat Rammus din lol in morti tai de ghertoi mars inapoi in papusoi',
'ma-ta aia curva cand imi vede pula zice "My precious" ca Gollum futu-ti rasa si neamu ma-tii de mamaligar',
'daca esti profesor si in timpul unei lucrari muti colegu ala mai bun din banca astfel incat ala mai prost sa nu poata copia meriti sa se prabuseasca pe tn si pe mata toate pulele pe care le a supt fieta sasi ia jordani la 3 milioane de pe olx',
'cand te am dus prima oara la pull&bear m ai intrebat unde i ursu',
'puneti poze cu bani pistoale si adidasi de la zanotti si valentino dar voi intrati in foame daca va scoate oferta de 5 lei combo de la mec',
'fmm ca te au dus astia la restaurant ca ai comandat ciorba si mancai cu furculita',
'am o dilema coaie, daca sperma are doar 7 calorii mata dc e obeza',
'Coaie ce prosti sunt straini cum plm sati dai 500-1000 eur pe un tricou guci cand in romania sunt la 10 sute 3 bucati ))))',
'Te lauzi ca tu ai geaca monclăr daia scumpa si nu ai ca saraci tai de colegi de la pull and bear dar ai uitat ca anu trecut venei cu hanorac de la decathlon cu pelerina de ploaie fmm de nemancata',
'cand te-am dus prima data la orange m-ai intrebat unde-s portocalele fmm de agricultor',
'cand ti am aratat prima oara o shaorma ai zis ca de ce mananc clatite cu carne si cartofi',
'Te dai mare gigolo dar ti se scoala pula cand se apleaca ma-ta',
'ia st ma sami pun la instastory o poza cu bautura gen sa vada urmaritori mei ca ma respect beau vin la 9,5 lei ca pana atunci singurul alcol care lai gustat a fost tuica de la pomana cand sa imbatat mata de ai duso cu roaba acasa luavas in pula mari smecheri ca puneti 5 inji 10 sute sa beti bohoarca',
'Am facut o lista cu aia cu care nu sa futut mata:',
'dai check-in zi de zi la cinema pui descriere "Another day another movie" da sa moara Toni Montana daca te mint ca acasa inca mai ai antena lu tactu mare de la tara si prinzi 5 canale de tvr 1 in 5 stiluri diferite',
'te dai mare gamerita esti tot cu #pcmasterrace dar cand mai vazut ca ma joc fifa mai intrebat unde a disparut digisport de sus din colt a dracu ascilopata',
'usor cu atitudinea de babygirl pe net ca in realitate ai trezit krakenu cu ragaitu ala posedato',
'coaie cum sa nu sti cum sa ai grija de o tarantula gen lol pela coaie pela pula'
]
| 91.775641 | 1,661 | 0.777607 | 2,339 | 14,317 | 4.758444 | 0.415562 | 0.03558 | 0.051752 | 0.061456 | 0.120934 | 0.014106 | 0.007907 | 0 | 0 | 0 | 0 | 0.089014 | 0.145491 | 14,317 | 155 | 1,662 | 92.367742 | 0.820745 | 0 | 0 | 0 | 0 | 0.233766 | 0.938255 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3a8feafe3391c0ddd2f78fb39a9371d4374c0a73 | 1,441 | py | Python | netlog_viewer/netlog_viewer_build/netlog_viewer_dev_server_config.py | tingshao/catapult | a8fe19e0c492472a8ed5710be9077e24cc517c5c | [
"BSD-3-Clause"
] | 2,151 | 2020-04-18T07:31:17.000Z | 2022-03-31T08:39:18.000Z | netlog_viewer/netlog_viewer_build/netlog_viewer_dev_server_config.py | tingshao/catapult | a8fe19e0c492472a8ed5710be9077e24cc517c5c | [
"BSD-3-Clause"
] | 4,640 | 2015-07-08T16:19:08.000Z | 2019-12-02T15:01:27.000Z | netlog_viewer/netlog_viewer_build/netlog_viewer_dev_server_config.py | tingshao/catapult | a8fe19e0c492472a8ed5710be9077e24cc517c5c | [
"BSD-3-Clause"
] | 698 | 2015-06-02T19:18:35.000Z | 2022-03-29T16:57:15.000Z | # Copyright (c) 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import json
import os
import netlog_viewer_project
import webapp2
from webapp2 import Route
def _RelPathToUnixPath(p):
return p.replace(os.sep, '/')
class TestListHandler(webapp2.RequestHandler):
def get(self, *args, **kwargs): # pylint: disable=unused-argument
project = netlog_viewer_project.NetlogViewerProject()
test_relpaths = ['/' + _RelPathToUnixPath(x)
for x in project.FindAllTestModuleRelPaths()]
tests = {'test_relpaths': test_relpaths}
tests_as_json = json.dumps(tests)
self.response.content_type = 'application/json'
return self.response.write(tests_as_json)
class NetlogViewerDevServerConfig(object):
def __init__(self):
self.project = netlog_viewer_project.NetlogViewerProject()
def GetName(self):
return 'netlog_viewer'
def GetRunUnitTestsUrl(self):
return '/netlog_viewer/tests.html'
def AddOptionstToArgParseGroup(self, g):
pass
def GetRoutes(self, args): # pylint: disable=unused-argument
return [
Route('/netlog_viewer/tests', TestListHandler),
]
def GetSourcePaths(self, args): # pylint: disable=unused-argument
return list(self.project.source_paths)
def GetTestDataPaths(self, args): # pylint: disable=unused-argument
return []
| 26.2 | 72 | 0.727967 | 170 | 1,441 | 6.029412 | 0.476471 | 0.070244 | 0.074146 | 0.105366 | 0.207805 | 0.12 | 0.12 | 0 | 0 | 0 | 0 | 0.005887 | 0.174879 | 1,441 | 54 | 73 | 26.685185 | 0.856182 | 0.199167 | 0 | 0 | 0 | 0 | 0.077661 | 0.021815 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.030303 | 0.151515 | 0.181818 | 0.69697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3aac3f6414867b633dea9c7d45394cdd79f87b50 | 39 | py | Python | cont.py | peterkimutai/continue1 | fe6dd88f6beeb0a93a41deef942d753b0d914cbc | [
"Unlicense"
] | null | null | null | cont.py | peterkimutai/continue1 | fe6dd88f6beeb0a93a41deef942d753b0d914cbc | [
"Unlicense"
] | null | null | null | cont.py | peterkimutai/continue1 | fe6dd88f6beeb0a93a41deef942d753b0d914cbc | [
"Unlicense"
] | null | null | null |
i="meee"
u="you"
print(i," and ",u)
| 5.571429 | 18 | 0.487179 | 8 | 39 | 2.375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 39 | 6 | 19 | 6.5 | 0.612903 | 0 | 0 | 0 | 0 | 0 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3ab8056bbcfb11d7cb0f257beb6843279b5e80cf | 86 | py | Python | reliabilipy/__init__.py | rafaelvalero/omegapy | 5cc6288f9b0d6101de87229ce0f3a392ff3d1e8a | [
"MIT"
] | 1 | 2022-01-08T20:46:43.000Z | 2022-01-08T20:46:43.000Z | reliabilipy/__init__.py | rafaelvalero/omegapy | 5cc6288f9b0d6101de87229ce0f3a392ff3d1e8a | [
"MIT"
] | null | null | null | reliabilipy/__init__.py | rafaelvalero/omegapy | 5cc6288f9b0d6101de87229ce0f3a392ff3d1e8a | [
"MIT"
] | null | null | null |
from ._reliabili import reliability_analysis

__all__ = [
    "reliability_analysis",
] | 21.5 | 44 | 0.77907 | 8 | 86 | 7.5 | 0.75 | 0.633333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 86 | 4 | 45 | 21.5 | 0.810811 | 0 | 0 | 0 | 0 | 0 | 0.229885 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3aba1d78d37e1173705ea143efeb8730018f6cb1 | 1,296 | py | Python | pynasqm/trajectories/get_reference_job.py | PotentialParadox/pynasqm | 1bd51299b6ca7f8229d8a15428515d53a358903c | [
"MIT"
] | 1 | 2020-03-13T22:34:03.000Z | 2020-03-13T22:34:03.000Z | pynasqm/trajectories/get_reference_job.py | PotentialParadox/pynasqm | 1bd51299b6ca7f8229d8a15428515d53a358903c | [
"MIT"
] | null | null | null | pynasqm/trajectories/get_reference_job.py | PotentialParadox/pynasqm | 1bd51299b6ca7f8229d8a15428515d53a358903c | [
"MIT"
] | null | null | null |
from functools import singledispatch

from pynasqm.trajectories.fluorescence import Fluorescence
from pynasqm.trajectories.absorption import Absorption


@singledispatch
def get_reference_job(traj_data):
    raise NotImplementedError(f"traj_data type not supported by get_refer\n"
                              f"{traj_data}")


@get_reference_job.register(Fluorescence)
def _(traj_data):
    return "qmexcited"


@get_reference_job.register(Absorption)
def _(traj_data):
    return "qmground"


@singledispatch
def get_n_trajs_of_reference(traj_data):
    raise NotImplementedError(f"traj_data type not supported by get_ntrajs_of_reference\n"
                              f"{traj_data}")


@get_n_trajs_of_reference.register(Fluorescence)
def _(traj_data):
    return traj_data.user_input.n_snapshots_ex


@get_n_trajs_of_reference.register(Absorption)
def _(traj_data):
    return traj_data.user_input.n_snapshots_qmground


@singledispatch
def get_n_ref_runs(traj_data):
    raise NotImplementedError(f"traj_data type not supported by get_nref_runs\n"
                              f"{traj_data}")


@get_n_ref_runs.register(Fluorescence)
def _(traj_data):
    return traj_data.user_input.n_exc_runs


@get_n_ref_runs.register(Absorption)
def _(traj_data):
    return traj_data.user_input.n_qmground_runs
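The snippet above dispatches on the runtime type of `traj_data`. A minimal standalone sketch of that `@singledispatch` pattern, using stand-in classes rather than the real pynasqm trajectory types:

```python
from functools import singledispatch

# Illustrative stand-ins for the pynasqm Fluorescence/Absorption classes.
class Fluorescence:
    pass

class Absorption:
    pass

@singledispatch
def get_reference_job(traj_data):
    # Fallback for unregistered types.
    raise NotImplementedError(f"unsupported type: {type(traj_data)}")

@get_reference_job.register(Fluorescence)
def _(traj_data):
    return "qmexcited"

@get_reference_job.register(Absorption)
def _(traj_data):
    return "qmground"
```

Each `register` call attaches an implementation keyed by the first argument's class; calling the generic function picks the most specific registered overload.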
| 35.027027 | 91 | 0.765432 | 178 | 1,296 | 5.179775 | 0.213483 | 0.164859 | 0.058568 | 0.110629 | 0.71692 | 0.603037 | 0.455531 | 0.455531 | 0.455531 | 0.455531 | 0 | 0 | 0.157407 | 1,296 | 36 | 92 | 36 | 0.844322 | 0 | 0 | 0.363636 | 0 | 0 | 0.152006 | 0.01929 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0.181818 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3ac1c768c158dc838dfae0e6094464208e94f0f5 | 434 | py | Python | feature_generation/normalize/rolling_mean.py | s0lvang/ideal-pancake | f7a55f622b02b03a987d74cfdff1c51288bfb657 | [
"MIT"
] | 6 | 2020-09-22T06:54:51.000Z | 2021-03-25T05:38:05.000Z | feature_generation/normalize/rolling_mean.py | s0lvang/ideal-pancake | f7a55f622b02b03a987d74cfdff1c51288bfb657 | [
"MIT"
] | 12 | 2020-09-21T13:20:49.000Z | 2021-04-07T08:01:12.000Z | feature_generation/normalize/rolling_mean.py | s0lvang/ideal-pancake | f7a55f622b02b03a987d74cfdff1c51288bfb657 | [
"MIT"
] | null | null | null |
def rolling_mean(data):
    return [take_rolling_mean(df) for df in data]


def take_rolling_mean(df):
    window = 20
    columns_to_take_rolling_mean = [
        "pupil_diameter",
        "saccade_duration",
        "duration",
        "saccade_length",
    ]
    for column in columns_to_take_rolling_mean:
        df[f"{column}_rolling"] = df[column].rolling(window).mean()
    # index < window is nan
    return df.iloc[window:]
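The function above leans on pandas `Series.rolling(window).mean()` and then drops the leading NaN rows with `df.iloc[window:]`. A dependency-free sketch of the same rolling-mean idea on a plain list (window size and data are illustrative):

```python
def rolling_mean_1d(values, window):
    # Mean over each full window of `window` consecutive values,
    # skipping the incomplete leading windows (roughly what
    # rolling(window).mean() followed by iloc[window:] yields).
    return [sum(values[i - window:i]) / window
            for i in range(window, len(values) + 1)]
```

For example, `rolling_mean_1d([1, 2, 3, 4], 2)` averages each adjacent pair.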
| 25.529412 | 67 | 0.645161 | 57 | 434 | 4.614035 | 0.421053 | 0.209125 | 0.228137 | 0.193916 | 0.18251 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006098 | 0.24424 | 434 | 16 | 68 | 27.125 | 0.795732 | 0.048387 | 0 | 0 | 0 | 0 | 0.16545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0 | 0.076923 | 0.307692 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3aef79b3128b41665021c29aae4c84fa02130963 | 246 | py | Python | tests/test-config.py | kjdoyle/elyra | bfb79a8e84c85b7d0f39bb168224aed69dbbd808 | [
"Apache-2.0"
] | 2 | 2020-05-23T11:21:31.000Z | 2020-06-03T22:52:09.000Z | tests/test-config.py | kjdoyle/elyra | bfb79a8e84c85b7d0f39bb168224aed69dbbd808 | [
"Apache-2.0"
] | null | null | null | tests/test-config.py | kjdoyle/elyra | bfb79a8e84c85b7d0f39bb168224aed69dbbd808 | [
"Apache-2.0"
] | 1 | 2020-05-17T15:19:13.000Z | 2020-05-17T15:19:13.000Z |
c.Session.debug = True
c.LabApp.token = 'test'
c.LabApp.open_browser = False
c.NotebookApp.port_retries = 0
c.LabApp.workspaces_dir = './build/cypress-tests'
c.FileContentsManager.root_dir = './build/cypress-tests'
c.LabApp.quit_button = False
| 30.75 | 56 | 0.764228 | 37 | 246 | 4.945946 | 0.621622 | 0.153005 | 0.163934 | 0.218579 | 0.229508 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004484 | 0.093496 | 246 | 7 | 57 | 35.142857 | 0.816144 | 0 | 0 | 0 | 0 | 0 | 0.186992 | 0.170732 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
aafb1cf5d24f222fa6f06ff100c89a778fe48350 | 179 | py | Python | accounts/urls.py | Julmgc/Course-Organizer | b383f2845474314186a2ac6589885af890889da8 | [
"MIT"
] | null | null | null | accounts/urls.py | Julmgc/Course-Organizer | b383f2845474314186a2ac6589885af890889da8 | [
"MIT"
] | null | null | null | accounts/urls.py | Julmgc/Course-Organizer | b383f2845474314186a2ac6589885af890889da8 | [
"MIT"
] | null | null | null |
from django.urls import path
from .views import UserLogin, UserRegister

urlpatterns = [
    path("accounts/", UserRegister.as_view()),
    path("login/", UserLogin.as_view()),
]
| 22.375 | 46 | 0.709497 | 21 | 179 | 5.952381 | 0.619048 | 0.096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145251 | 179 | 7 | 47 | 25.571429 | 0.816993 | 0 | 0 | 0 | 0 | 0 | 0.083799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
aafefeb23bc5b42fc6c15fe4bf23b8763287d5b4 | 357 | py | Python | carbontracker/predictor.py | leondz/carbontracker | f8b4542f4a0f803d053401b53a3cc367281b31a9 | [
"MIT"
] | 186 | 2020-05-02T20:51:48.000Z | 2022-03-30T09:33:44.000Z | carbontracker/predictor.py | johnjdailey/carbontracker | 1c9307b5fc2a408667f3a19c12c2b45be08354b2 | [
"MIT"
] | 43 | 2020-05-10T12:44:26.000Z | 2022-03-09T11:12:11.000Z | carbontracker/predictor.py | johnjdailey/carbontracker | 1c9307b5fc2a408667f3a19c12c2b45be08354b2 | [
"MIT"
] | 10 | 2020-05-04T11:20:04.000Z | 2022-02-16T03:02:39.000Z |
import numpy as np


# TODO: Do advanced prediction based on profiling work.
def predict_energy(total_epochs, epoch_energy_usages):
    avg_epoch_energy = np.mean(epoch_energy_usages)
    return total_epochs * avg_epoch_energy


def predict_time(total_epochs, epoch_times):
    avg_epoch_time = np.mean(epoch_times)
    return total_epochs * avg_epoch_time
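Both predictors above are a single extrapolation: mean per-epoch cost times the total epoch count. The same estimate can be sketched without NumPy, with `statistics.mean` standing in for `np.mean` (values are illustrative):

```python
from statistics import mean

def predict_energy(total_epochs, epoch_energy_usages):
    # Average energy per epoch so far, extrapolated over the full run.
    return total_epochs * mean(epoch_energy_usages)
```

E.g. two measured epochs at 2.0 J and 4.0 J extrapolate a 10-epoch run to 30.0 J.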
| 27.461538 | 55 | 0.792717 | 54 | 357 | 4.87037 | 0.444444 | 0.1673 | 0.121673 | 0.152091 | 0.190114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148459 | 357 | 12 | 56 | 29.75 | 0.865132 | 0.148459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
c909851fe73dcfad421fb6354ea395215029d6a8 | 689 | py | Python | tests/test-vext-pth.py | NomAnor/vext | adea4b593ae4c82da0965ec1addaa1cd6d5b396c | [
"MIT"
] | 62 | 2015-03-25T15:56:38.000Z | 2021-01-07T21:32:27.000Z | tests/test-vext-pth.py | NomAnor/vext | adea4b593ae4c82da0965ec1addaa1cd6d5b396c | [
"MIT"
] | 73 | 2015-02-13T16:02:31.000Z | 2021-01-17T19:35:10.000Z | tests/test-vext-pth.py | NomAnor/vext | adea4b593ae4c82da0965ec1addaa1cd6d5b396c | [
"MIT"
] | 8 | 2016-01-24T16:16:46.000Z | 2020-09-23T17:56:47.000Z |
import os
import unittest

from vext.install import DEFAULT_PTH_CONTENT


class TestVextPTH(unittest.TestCase):
    # Preliminary test, that verifies that
    def test_can_exec_pth_content(self):
        # Stub test, verify lines starting with 'import' in the pth can
        # be exec'd and doesn't raise any exceptions.
        # TODO, mock file.write and get content directly from create_pth
        # instead of getting it directly from DEFAULT_PTH_CONTENT
        lines = DEFAULT_PTH_CONTENT.splitlines()
        for line in lines:
            if line.startswith("import ") or line.startswith("import\t"):
                exec(line)


if __name__ == "__main__":
    unittest.main()
| 28.708333 | 73 | 0.683599 | 92 | 689 | 4.913043 | 0.586957 | 0.088496 | 0.112832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245283 | 689 | 23 | 74 | 29.956522 | 0.869231 | 0.37881 | 0 | 0 | 0 | 0 | 0.054502 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
c916bd42a9f49b86089b3c70e101b95ec26db97d | 198 | py | Python | Lecture 28/Lecture28HWAssignment4.py | AtharvaJoshi21/PythonPOC | 6b95eb5bab7b28e9811e43b39e863faf2ee7565b | [
"MIT"
] | 1 | 2019-04-27T15:37:04.000Z | 2019-04-27T15:37:04.000Z | Lecture 28/Lecture28HWAssignment4.py | AtharvaJoshi21/PythonPOC | 6b95eb5bab7b28e9811e43b39e863faf2ee7565b | [
"MIT"
] | null | null | null | Lecture 28/Lecture28HWAssignment4.py | AtharvaJoshi21/PythonPOC | 6b95eb5bab7b28e9811e43b39e863faf2ee7565b | [
"MIT"
] | 1 | 2020-08-14T06:57:08.000Z | 2020-08-14T06:57:08.000Z |
# WAP to accept a filename from user and print all words starting with capital letters.
def main():
    inputFilePath = input("Please enter file name: ")
    with open(inputFilePath) as inputFile:
        for word in inputFile.read().split():
            if word[0].isupper():
                print(word)


if __name__ == "__main__":
    main() | 24.75 | 87 | 0.686869 | 27 | 198 | 4.740741 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 198 | 8 | 88 | 24.75 | 0.831169 | 0.429293 | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c9195aa10c6d748883a1b2125a3a031fa6170f06 | 1,380 | py | Python | deluca/envs/lung/__init__.py | AlexanderJYu/deluca | 9e8b0d84d2eb0a58ff82a951b42881bdb2dc9f00 | [
"Apache-2.0"
] | null | null | null | deluca/envs/lung/__init__.py | AlexanderJYu/deluca | 9e8b0d84d2eb0a58ff82a951b42881bdb2dc9f00 | [
"Apache-2.0"
] | null | null | null | deluca/envs/lung/__init__.py | AlexanderJYu/deluca | 9e8b0d84d2eb0a58ff82a951b42881bdb2dc9f00 | [
"Apache-2.0"
] | null | null | null |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO
# - interp smh
import jax.numpy as jnp
from deluca import JaxObject
DEFAULT_PRESSURE_RANGE = (5.0, 35.0)
DEFAULT_KEYPOINTS = [1e-8, 1.0, 1.5, 3.0]
class BreathWaveform(JaxObject):
    """Waveform generator with shape /‾\_"""

    def __init__(self, range=None, keypoints=None):
        self.lo, self.hi = range or DEFAULT_PRESSURE_RANGE
        self.xp = jnp.asarray([0] + (keypoints or DEFAULT_KEYPOINTS))
        self.fp = jnp.asarray([self.lo, self.hi, self.hi, self.lo, self.lo])
        self.period = self.xp[-1]

    def at(self, t):
        # return jnp.interp(t, self.xp, self.fp, period=self.period)
        return jnp.interp(t, self.xp, self.fp, period=3)

    def phase(self, t):
        return jnp.searchsorted(self.xp, t % self.period, side="right")
__all__ = ["BreathWaveform"]
| 32.857143 | 76 | 0.695652 | 212 | 1,380 | 4.462264 | 0.5 | 0.063425 | 0.042283 | 0.033827 | 0.071882 | 0.071882 | 0.071882 | 0.071882 | 0.071882 | 0 | 0 | 0.021505 | 0.191304 | 1,380 | 41 | 77 | 33.658537 | 0.825269 | 0.478986 | 0 | 0 | 0 | 0 | 0.027221 | 0 | 0 | 0 | 0 | 0.02439 | 0 | 1 | 0.2 | false | 0 | 0.133333 | 0.133333 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
c93112ec790fae5b416d3ab6e0ee349a48489f55 | 49,239 | py | Python | FBDParser/charmaps/symbols.py | jonix6/fbdparser | 617a79bf9062092e4fa971bbd66da02cd9d45124 | [
"MIT"
] | 7 | 2021-03-15T08:43:56.000Z | 2022-01-09T11:56:43.000Z | FBDParser/charmaps/symbols.py | jonix6/fbdparser | 617a79bf9062092e4fa971bbd66da02cd9d45124 | [
"MIT"
] | null | null | null | FBDParser/charmaps/symbols.py | jonix6/fbdparser | 617a79bf9062092e4fa971bbd66da02cd9d45124 | [
"MIT"
] | 3 | 2021-09-07T09:40:16.000Z | 2022-01-11T10:32:23.000Z |
# -*- coding: utf-8 -*-
def gb2unicode_simple(x):
    a, b = (x & 0xFF00) >> 8, x & 0x00FF
    if 0xAA <= a <= 0xAF and 0xA1 <= b <= 0xFE:
        return 0xE000 + (a - 0xAA) * 0x5E + b - 0xA1
    elif 0xA1 <= a <= 0xA7 and (0x40 <= b <= 0x7E or 0x80 <= b <= 0xA0):
        return 0xE4C6 + (a - 0xA1) * 0x60 + (0x3F + b - 0x80 if b >= 0x80 else b - 0x40)
    return ord(bytearray([a, b]).decode('gb18030'))


def _unichr(x):
    if x <= 0xFFFF:
        return x
    # U+10000 ~ U+10FFFF
    return bytearray([
        0xF0 | (x >> 18 & 0x7), 0x80 | (x >> 12 & 0x3F),
        0x80 | (x >> 6 & 0x3F), 0x80 | (x & 0x3F)]).decode('utf-8')
class UnicodeMap(dict):
    def __str__(self):
        return 'unicode map contains {0} symbols'.format(len(self))

    def update(self, hashmap):
        for a, b in filter(lambda x: x[0] != x[1], hashmap.items()):
            if a != b:
                self[gb2unicode_simple(a)] = _unichr(b)
"A库符号"
symbolsA = UnicodeMap()
_update = symbolsA.update
# Area A1
_update({
0xA140: 0xA140, # 带括弧的小写罗马数字1((ⅰ))
0xA141: 0xA141, # 带括弧的小写罗马数字2((ⅱ))
0xA142: 0xA142, # 带括弧的小写罗马数字3((ⅲ))
0xA143: 0xA143, # 带括弧的小写罗马数字4((ⅳ))
0xA144: 0xA144, # 带括弧的小写罗马数字5((ⅴ))
0xA145: 0xA145, # 带括弧的小写罗马数字6((ⅵ))
0xA146: 0xA146, # 带括弧的小写罗马数字7((ⅶ))
0xA147: 0xA147, # 带括弧的小写罗马数字8((ⅷ))
0xA148: 0xA148, # 带括弧的小写罗马数字9((ⅸ))
0xA149: 0xA149, # 带括弧的小写罗马数字10((ⅹ))
0xA14A: 0xA14A, # 带括弧的小写罗马数字11((ⅺ))
0xA14B: 0xA14B, # 带括弧的小写罗马数字12((ⅻ))
0xA14C: 0x003D, # 三分宽等号 = =
0xA14D: 0x2212, # 三分宽减号 = −
0xA14E: 0x2215, # 三分宽斜线(除号) = ∕
0xA14F: 0x1D7CE, # 𝟎
0xA150: 0x1D7CF, # 𝟏
0xA151: 0x1D7D0, # 𝟐
0xA152: 0x1D7D1, # 𝟑
0xA153: 0x1D7D2, # 𝟒
0xA154: 0x1D7D3, # 𝟓
0xA155: 0x1D7D4, # 𝟔
0xA156: 0x1D7D5, # 𝟕
0xA157: 0x1D7D6, # 𝟖
0xA158: 0x1D7D7, # 𝟗
0xA159: 0x2664, # ♤
0xA15A: 0x2667, # ♧
0xA15B: 0x00B6, # ¶
0xA15C: 0x26BE, # ⚾
0xA15D: 0x263E, # 上1/4月亮 = ☾
0xA15E: 0x263D, # 下1/4月亮 = ☽
0xA15F: 0x263A, # 笑脸 = ☺
0xA160: 0x1F31C, # 半脸 = 🌜
0xA161: 0x1F31B, # 半脸 = 🌛
0xA162: 0x3036, # 〶
0xA163: 0x2252, # 近似符等号 = ≒
0xA164: 0xA164, # 吨号(T + S)
0xA165: 0x002B, # 三分宽加号 = +
0xA166: 0x223C, # 三分宽减号 = ∼
0xA167: 0x00A9, # ©
0xA168: 0x24D2, # ⓒ
0xA169: 0x24B8, # Ⓒ
0xA16A: 0x00AE, # ®
0xA16B: 0x24C7, # Ⓡ
0xA16D: 0x203E, # 上横线 = ‾
0xA16E: 0x005F, # 下横线 = _
0xA16F: 0x25E2, # ◢
0xA170: 0x25E3, # ◣
0xA171: 0x25E5, # ◥
0xA172: 0x25E4, # ◤
0xA173: 0x256D, # ╭
0xA174: 0x256E, # ╮
0xA175: 0x2570, # ╰
0xA176: 0x256F, # ╯
0xA177: 0x2550, # 双横线 = ═
0xA178: 0x2551, # 双竖线 = ║
0xA179: 0x2223, # 分开、绝对值 = ∣
0xA17A: 0x2926, # ⤦
0xA17B: 0x2924, # ⤤
0xA17C: 0x2923, # ⤣
0xA17D: 0x293E, # ⤾
0xA17E: 0x293F, # ⤿
0xA180: 0x21E7, # ⇧
0xA181: 0x21E9, # ⇩
0xA182: 0xA182, # 数字阳框码0(□ + 0)
0xA183: 0xA183, # 数字阳框码1(□ + 1)
0xA184: 0xA184, # 数字阳框码2(□ + 2)
0xA185: 0xA185, # 数字阳框码3(□ + 3)
0xA186: 0xA186, # 数字阳框码4(□ + 4)
0xA187: 0xA187, # 数字阳框码5(□ + 5)
0xA188: 0xA188, # 数字阳框码6(□ + 6)
0xA189: 0xA189, # 数字阳框码7(□ + 7)
0xA18A: 0xA18A, # 数字阳框码8(□ + 8)
0xA18B: 0xA18B, # 数字阳框码9(□ + 9)
0xA18C: 0xA18C, # 数字阴框码0(0️⃣)
0xA18D: 0xA18D, # 数字阴框码1(1️⃣)
0xA18E: 0xA18E, # 数字阴框码2(2️⃣)
0xA18F: 0xA18F, # 数字阴框码3(3️⃣)
0xA190: 0xA190, # 数字阴框码4(4️⃣)
0xA191: 0xA191, # 数字阴框码5(5️⃣)
0xA192: 0xA192, # 数字阴框码6(6️⃣)
0xA193: 0xA193, # 数字阴框码7(7️⃣)
0xA194: 0xA194, # 数字阴框码8(8️⃣)
0xA195: 0xA195, # 数字阴框码9(9️⃣)
0xA196: 0x1F6AD, # 🚭
0xA197: 0x1F377, # 🍷
0xA198: 0x26A0, # ⚠
0xA199: 0x2620, # ☠
0xA19A: 0xA19A, # (🚫 + 🔥)
0xA19B: 0x2B4D, # ⭍
0xA19C: 0x21B7, # ↷
0xA19D: 0x293A, # ⤺
0xA19E: 0x2716, # ✖
0xA19F: 0x003F, # 问号 = ?
0xA1A0: 0x0021 # 外文感叹号 = !
})
# Area A2
_update({
0xA240: 0x231C, # ⌜
0xA241: 0x231F, # ⌟
0xA242: 0xA242, # (empty ⌜)
0xA243: 0xA243, # (empty ⌟)
0xA244: 0x231D, # ⌝
0xA245: 0x231E, # ⌞
0xA246: 0xA246, # (empty ⌝)
0xA247: 0xA247, # (empty ⌞)
0xA248: 0xFF1C, # <
0xA249: 0xFF1E, # >
0xA24A: 0x2AA1, # ⪡
0xA24B: 0x2AA2, # ⪢
0xA24C: 0xA24C, # (vertical ”)
0xA24D: 0xA24D, # (vertical “)
0xA24E: 0x201E, # „
0xA24F: 0xA24F, # 斜感叹号(italic !)
0xA250: 0xA250, # 斜问号(italic ?)
0xA251: 0xA76C, # ❬
0xA252: 0xA76D, # ❭
0xA253: 0xA253, # (reversed 「)
0xA254: 0xA254, # (reversed 」)
0xA255: 0xA255, # (reversed 『)
0xA256: 0xA256, # (reversed 』)
0xA257: 0x203C, # 双叹号 = ‼
0xA258: 0xA258, # 斜双叹号(italic ‼)
0xA259: 0x2047, # 双问号 = ⁇
0xA25A: 0xA25A, # 斜双问号(italic ⁇)
0xA25B: 0x2048, # 疑问感叹号 = ⁈
0xA25C: 0xA25C, # 斜疑问感叹号(italic ⁈)
0xA25D: 0x2049, # 感叹疑问号 = ⁉
0xA25E: 0xA25E, # 斜感叹疑问号(italic ⁉)
0xA25F: 0xA25F, # 竖排小数点(vertical .)
0xA260: 0x03D6, # 希腊文符号PI = ϖ
0xA261: 0x2116, # №
0xA262: 0x0142, # 多国外文:带笔画的小写字母l = ł
0xA263: 0x0131, # 多国外文:无点的小写字母I = ı
0xA264: 0x014B, # 多国外文:小写字母eng = ŋ
0xA265: 0x0327, # 下加符 = ̧
0xA266: 0x00BF, # 倒置问号 = ¿
0xA267: 0x00A1, # 倒置感叹号 = ¡
0xA268: 0x00D8, # 多国外文:带笔画的大写字母O = Ø
0xA269: 0x00F8, # 多国外文:带笔画的小写字母o = ø
0xA26A: 0x0087, # 二重剑标 = ‡
0xA26B: 0x0086, # 短剑标 = †
0xA26C: 0x014A, # 多国外文:大写字母ENG = Ŋ
0xA26D: 0xFB00, # 多国外文 = ff
0xA26E: 0xFB01, # 多国外文 = fi
0xA26F: 0xFB02, # 多国外文 = fl
0xA270: 0xFB03, # 多国外文 = ffi
0xA271: 0xFB04, # 多国外文 = ffl
0xA272: 0x0141, # 多国外文 = Ł
0xA273: 0x00C7, # 多国外文 = Ç
0xA274: 0x00C6, # 多国外文 = Æ
0xA275: 0x00E6, # 多国外文 = æ
0xA276: 0x008C, # 多国外文 = Œ
0xA277: 0x009C, # 多国外文 = œ
0xA278: 0x00DF, # 多国外文 = ß
0xA279: 0x0083, # 多国外文 = ƒ
0xA27A: 0x00E5, # 多国外文 = å
0xA27B: 0x00E2, # 多国外文 = â
0xA27C: 0x00E4, # 多国外文 = ä
0xA27D: 0x0101, # 多国外文 = ā
0xA27E: 0x00E1, # 多国外文 = á
0xA280: 0x01CE, # 多国外文 = ǎ
0xA281: 0x00E0, # 多国外文 = à
0xA282: 0x00E3, # 多国外文 = ã
0xA283: 0x00EB, # 多国外文 = ë
0xA284: 0x1EBD, # 多国外文 = ẽ
0xA285: 0x00EE, # 多国外文 = î
0xA286: 0x00EF, # 多国外文 = ï
0xA287: 0x00F5, # 多国外文 = õ
0xA288: 0x00F4, # 多国外文 = ô
0xA289: 0x00F6, # 多国外文 = ö
0xA28A: 0x00FB, # 多国外文 = û
0xA28B: 0x00F1, # 多国外文 = ñ
0xA28C: 0x009A, # 多国外文 = š
0xA28D: 0x015D, # 多国外文 = ŝ
0xA28E: 0x011D, # 多国外文 = ĝ
0xA28F: 0x00FF, # 多国外文 = ÿ
0xA290: 0x009E, # 多国外文 = ž
0xA291: 0x1E91, # 多国外文 = ẑ
0xA292: 0x0109, # 多国外文 = ĉ
0xA293: 0x00E7, # 多国外文 = ç
0xA294: 0xA294, # 多国外文(ê̄)
0xA295: 0x1EBF, # 多国外文 = ế
0xA296: 0xA296, # 多国外文(ê̌)
0xA297: 0x1EC1, # 多国外文 = ề
0xA29A: 0x0307, # 组合用发音符 = ̇
0xA29B: 0x030A, # 组合用发音符 = ̊
0xA29C: 0x0303, # 组合用发音符 = ̃
0xA29D: 0x20F0, # 组合用发音符 = ⃰
0xA29E: 0x0306, # 组合用发音符 = ̆
0xA29F: 0x002C, # 外文逗号 = ,
0xA2A0: 0x0085, # 外文三点省略号,外文三连点 = …
0xA2AB: 0x217A, # 小写罗马数字11 = ⅺ
0xA2AC: 0x217B, # 小写罗马数字12 = ⅻ
0xA2AD: 0xA2AD, # 小写罗马数字13(ⅹⅲ)
0xA2AE: 0xA2AE, # 小写罗马数字14(ⅹⅳ)
0xA2AF: 0xA2AF, # 小写罗马数字15(ⅹⅴ)
0xA2B0: 0xA2B0, # 小写罗马数字16(ⅹⅵ)
0xA2EF: 0xA2EF, # 大写罗马数字15(ⅩⅤ)
0xA2F0: 0xA2F0, # 大写罗马数字16(ⅩⅥ)
0xA2FD: 0xA2FD, # 大写罗马数字13(ⅩⅢ)
0xA2FE: 0xA2FE, # 大写罗马数字14(ⅩⅣ)
})
# Area A3
_update({
0xA340: 0xA340, # 带括号的大写罗马数字1((Ⅰ))
0xA341: 0xA341, # 带括号的大写罗马数字2((Ⅱ))
0xA342: 0xA342, # 带括号的大写罗马数字3((Ⅲ))
0xA343: 0xA343, # 带括号的大写罗马数字4((Ⅳ))
0xA344: 0xA344, # 带括号的大写罗马数字5((Ⅴ))
0xA345: 0xA345, # 带括号的大写罗马数字6((Ⅵ))
0xA346: 0xA346, # 带括号的大写罗马数字7((Ⅶ))
0xA347: 0xA347, # 带括号的大写罗马数字8((Ⅷ))
0xA348: 0xA348, # 带括号的大写罗马数字9((Ⅸ))
0xA349: 0xA349, # 带括号的大写罗马数字10((Ⅹ))
0xA34A: 0xA34A, # 带括号的大写罗马数字11((Ⅺ))
0xA34B: 0xA34B, # 带括号的大写罗马数字12((Ⅻ))
0xA34C: 0x24FF, # 数字阴圈码0 = ⓿
0xA34D: 0x2776, # 数字阴圈码1 = ❶
0xA34E: 0x2777, # 数字阴圈码2 = ❷
0xA34F: 0x2778, # 数字阴圈码3 = ❸
0xA350: 0x2779, # 数字阴圈码4 = ❹
0xA351: 0x277A, # 数字阴圈码5 = ❺
0xA352: 0x277B, # 数字阴圈码6 = ❻
0xA353: 0x277C, # 数字阴圈码7 = ❼
0xA354: 0x277D, # 数字阴圈码8 = ❽
0xA355: 0x277E, # 数字阴圈码9 = ❾
0xA356: 0x24B6, # 字母阳圈码A = Ⓐ
0xA357: 0x24B7, # 字母阳圈码B = Ⓑ
0xA358: 0x24B8, # 字母阳圈码C = Ⓒ
0xA359: 0x24B9, # 字母阳圈码D = Ⓓ
0xA35A: 0x24BA, # 字母阳圈码E = Ⓔ
0xA35B: 0x24BB, # 字母阳圈码F = Ⓕ
0xA35C: 0x24BC, # 字母阳圈码G = Ⓖ
0xA35D: 0x24BD, # 字母阳圈码H = Ⓗ
0xA35E: 0x24BE, # 字母阳圈码I = Ⓘ
0xA35F: 0x24BF, # 字母阳圈码J = Ⓙ
0xA360: 0x1F110, # 圆括号码A = 🄐
0xA361: 0x1F111, # 圆括号码B = 🄑
0xA362: 0x1F112, # 圆括号码C = 🄒
0xA363: 0x1F113, # 圆括号码D = 🄓
0xA364: 0x1F114, # 圆括号码E = 🄔
0xA365: 0x1F115, # 圆括号码F = 🄕
0xA366: 0x1F116, # 圆括号码G = 🄖
0xA367: 0x1F117, # 圆括号码H = 🄗
0xA368: 0x1F118, # 圆括号码I = 🄘
0xA369: 0x1F119, # 圆括号码J = 🄙
0xA36A: 0x24D0, # 阳圈码a = ⓐ
0xA36B: 0x24D1, # 阳圈码b = ⓑ
0xA36C: 0x24D2, # 阳圈码c = ⓒ
0xA36D: 0x24D3, # 阳圈码d = ⓓ
0xA36E: 0x24D4, # 阳圈码e = ⓔ
0xA36F: 0x24D5, # 阳圈码f = ⓕ
0xA370: 0x24D6, # 阳圈码g = ⓖ
0xA371: 0x24D7, # 阳圈码h = ⓗ
0xA372: 0x24D8, # 阳圈码i = ⓘ
0xA373: 0x24D9, # 阳圈码j = ⓙ
0xA374: 0x249C, # 圆括号码a = ⒜
0xA375: 0x249D, # 圆括号码b = ⒝
0xA376: 0x249E, # 圆括号码c = ⒞
0xA377: 0x249F, # 圆括号码d = ⒟
0xA378: 0x24A0, # 圆括号码e = ⒠
0xA379: 0x24A1, # 圆括号码f = ⒡
0xA37A: 0x24A2, # 圆括号码g = ⒢
0xA37B: 0x24A3, # 圆括号码h = ⒣
0xA37C: 0x24A4, # 圆括号码i = ⒤
0xA37D: 0x24A5, # 圆括号码j = ⒥
0xA37E: 0x3396, # 单位符号:毫升 = ㎖
0xA380: 0x3397, # ㎗
0xA381: 0x33CB, # 单位符号:百帕 = ㏋
0xA382: 0x3398, # 单位符号:立升 = ㎘
0xA383: 0x33A0, # 单位符号:平方厘米 = ㎠
0xA384: 0x33A4, # 单位符号:立方厘米 = ㎤
0xA385: 0x33A5, # 单位符号:立方米 = ㎥
0xA386: 0x33A2, # 单位符号:平方公里 = ㎢
0xA387: 0x33BE, # 单位符号:千瓦 = ㎾
0xA388: 0x33C4, # ㏄
0xA389: 0x3383, # 单位符号:毫安 = ㎃
0xA38A: 0x33C2, # ㏂
0xA38B: 0x33D8, # ㏘
0xA38C: 0x33CD, # ㏍
0xA38D: 0x33D7, # ㏗
0xA38E: 0x33DA, # ㏚
0xA38F: 0x339C, # ㎜
0xA390: 0x339D, # ㎝
0xA391: 0x339E, # ㎞
0xA392: 0x33CE, # 单位符号:公里 = ㏎
0xA393: 0x338E, # 单位符号:毫克 = ㎎
0xA394: 0x338F, # 单位符号:千克(公斤) = ㎏
0xA395: 0x33A1, # 单位符号:平方米 = ㎡
0xA396: 0x33D2, # ㏒
0xA397: 0x33D1, # ㏑
0xA398: 0x33C4, # ㏄
0xA399: 0x33D5, # ㏕
0xA39A: 0xAB36, # ꬶ
0xA39B: 0x2113, # ℓ
0xA39C: 0x006D, # m
0xA39D: 0x0078, # x
0xA39E: 0x1EFF, # ỿ
0xA39F: 0x0028, # 左开圆括号 = (
0xA3A0: 0x0029, # 右闭圆括号 = )
})
# Area A4
_update({
0xA440: 0xA440, # BD语言注解:四分空(◯ + ¼)
0xA441: 0xA441, # BD语言注解:二分空(◯ + ½)
0xA442: 0xA442, # BD语言注解:六分空(◯ + ⅙)
0xA443: 0xA443, # BD语言注解:八分空(◯ + ⅙)
0xA444: 0xA444, # (◇ + ◼ + ⬦)
0xA445: 0xA445, # (◇ + ◻)
0xA446: 0xA446, # (☐ + ◆ + ◻)
0xA447: 0xA447, # (⏹ + ⬦)
0xA448: 0x29C8, # ⧈
0xA449: 0x1F79C, # 🞜
0xA44A: 0xA44A, # (◆ + ◻)
0xA44B: 0xA44B, # (◇ + ◼)
0xA44C: 0xA44C, # (☐ + ◆)
0xA44D: 0x26CB, # ⛋
0xA44E: 0x2756, # ❖
0xA44F: 0xA44F, # (negative ❖)
0xA450: 0xA450, # (5-black-square cross, like ⸭)
0xA451: 0xA451, # (5-white-square cross, like ⌘)
0xA452: 0x2795, # ➕
0xA453: 0x271A, # ✚
0xA454: 0x23FA, # ⏺
0xA455: 0x2704, # ✄
0xA456: 0x25C9, # ◉
0xA457: 0x2A00, # ⨀
0xA458: 0x2740, # ❀
0xA459: 0x273F, # ✿
0xA45A: 0x2668, # ♨
0xA45B: 0x2669, # ♩
0xA45C: 0x266A, # ♪
0xA45D: 0x266C, # ♬
0xA45E: 0x2B57, # ⭗
0xA45F: 0x26BE, # ⚾
0xA460: 0x260E, # ☎
0xA461: 0x2025, # ‥
0xA462: 0x261C, # ☜
0xA463: 0x261E, # ☞
0xA464: 0x3021, # 杭州记数标记“一” = 〡
0xA465: 0x3022, # 杭州记数标记“二” = 〢
0xA466: 0x3023, # 杭州记数标记“三” = 〣
0xA467: 0x3024, # 杭州记数标记“四” = 〤
0xA468: 0x3025, # 杭州记数标记“五” = 〥
0xA469: 0x3026, # 杭州记数标记“六” = 〦
0xA46A: 0x3027, # 杭州记数标记“七” = 〧
0xA46B: 0x3028, # 杭州记数标记“八” = 〨
0xA46C: 0x3029, # 杭州记数标记“九” = 〩
0xA46D: 0x3038, # 杭州记数标记“十” = 〸
0xA46E: 0x3039, # 杭州记数标记“廿” = 〹
0xA46F: 0x303A, # 杭州记数标记“卅” = 〺
0xA470: 0x25A2, # ▢
0xA471: 0x00AE, # ®
0xA472: 0x25CF, # ●
0xA473: 0x25CB, # ○
0xA474: 0x2661, # ♡
0xA475: 0x25CA, # ◊
0xA476: 0xA476, # (▽ + ▿)
0xA477: 0x2236, # ∶
0xA478: 0xA478, # 毫米(m/m)
0xA479: 0xA479, # 厘米(c/m)
0xA47A: 0xA47A, # 分米(d/m)
0xA47B: 0x2105, # ℅
0xA47D: 0xA47D, # (circled ™)
0xA47E: 0x2122, # ™
0xA480: 0xAB65, # ꭥ
0xA481: 0x026E, # ɮ
0xA482: 0x02A7, # ʧ
0xA483: 0x01EB, # ǫ
0xA484: 0x03C5, # υ
0xA485: 0xA7AC, # Ɡ
0xA486: 0x1D93, # ᶓ
0xA487: 0x1D74, # ᵴ
0xA488: 0x1D92, # ᶒ
0xA489: 0x1D95, # ᶕ
0xA48A: 0x02AE, # ʮ
0xA48B: 0x1D8B, # ᶋ
0xA48C: 0x0119, # ę
0xA48D: 0x01BE, # ƾ
0xA48E: 0x1D97, # ᶗ
0xA48F: 0x0293, # ʓ
0xA490: 0xA490, # (hɥ)
0xA491: 0x0253, # ɓ
0xA492: 0x0287, # ʇ
0xA493: 0x01AB, # ƫ
0xA494: 0x028D, # ʍ
0xA495: 0x1D8D, # ᶍ
0xA496: 0x0269, # ɩ
0xA497: 0x025C, # ɜ
0xA498: 0x02A5, # ʥ
0xA499: 0x019E, # ƞ
0xA49A: 0x01AA, # ƪ
0xA49B: 0x0250, # ɐ
0xA49C: 0x0286, # ʆ
0xA49D: 0x01BB, # ƻ
0xA49E: 0x00D8, # Ø
0xA4F4: 0xA4F4, # 三叹号(!!!)
0xA4F5: 0xA4F5, # 斜三叹号(italic !!!)
0xA4F6: 0x32A3, # 带圈汉字:正 = ㊣
0xA4F7: 0x329E, # 带圈汉字:印 = ㊞
0xA4F8: 0x32A4, # 带圈汉字:上 = ㊤
0xA4F9: 0x32A5, # 带圈汉字:中 = ㊥
0xA4FA: 0x32A6, # 带圈汉字:下 = ㊦
0xA4FB: 0x32A7, # 带圈汉字:左 = ㊧
0xA4FC: 0x32A8, # 带圈汉字:右 = ㊨
0xA4FD: 0xA4FD, # 带圈汉字:大(◯ + 大)
0xA4FE: 0xA4FE, # 带圈汉字:小(◯ + 小)
})
# Area A5
_update({
0xA540: 0x0111, # đ
0xA541: 0x1D80, # ᶀ
0xA542: 0x1D81, # ᶁ
0xA543: 0x0252, # ɒ
0xA544: 0xA544, # (ŋ + ʷ)
0xA545: 0x026B, # ɫ
0xA546: 0x1D88, # ᶈ
0xA547: 0x1D82, # ᶂ
0xA548: 0x02A6, # ʦ
0xA549: 0x025F, # ɟ
0xA54A: 0x00FE, # þ
0xA54B: 0x0257, # ɗ
0xA54C: 0xAB67, # ꭧ
0xA54D: 0x0260, # ɠ
0xA54E: 0x0242, # ɂ
0xA54F: 0x02AF, # ʯ
0xA550: 0xA550, # (ʯ)
0xA551: 0x0241, # Ɂ
0xA552: 0x025A, # ɚ
0xA553: 0x1D8A, # ᶊ
0xA554: 0x0296, # ʖ
0xA555: 0x1D8C, # ᶌ
0xA556: 0x1D75, # ᵵ
0xA557: 0x1D6D, # ᵭ
0xA558: 0x027D, # ɽ
0xA559: 0x027A, # ɺ
0xA55A: 0x01BA, # ƺ
0xA55B: 0xA55B, # (turned ɰ)
0xA55C: 0x0273, # ɳ
0xA55D: 0xA795, # ꞕ
0xA55E: 0x01B0, # ư
0xA55F: 0x1D85, # ᶅ
0xA560: 0x0260, # ɠ
0xA561: 0x1D86, # ᶆ
0xA562: 0x0277, # ɷ
0xA563: 0x02A4, # ʤ
0xA564: 0x02A3, # ʣ
0xA565: 0x1D87, # ᶇ
0xA566: 0x1D7C, # ᵼ
0xA567: 0x02A8, # ʨ
0xA568: 0x1D8F, # ᶏ
0xA569: 0x029A, # ʚ
0xA56A: 0x1D9A, # ᶚ
0xA56B: 0xA727, # ꜧ
0xA56C: 0x1D83, # ᶃ
0xA56D: 0xA56D, # (italic ŋ)
0xA56E: 0x029E, # ʞ
0xA56F: 0x0195, # ƕ
0xA570: 0x1D76, # ᵶ
0xA571: 0x027E, # ɾ
0xA572: 0x1D8E, # ᶎ
0xA573: 0x1D89, # ᶉ
0xA574: 0x027C, # ɼ
0xA575: 0x0279, # ɹ
0xA576: 0x018D, # ƍ
0xA577: 0x03C9, # ω
0xA578: 0x025D, # ɝ
0xA579: 0x03C3, # σ
0xA57A: 0x027B, # ɻ
0xA57B: 0x026D, # ɭ
0xA57C: 0x0267, # ɧ
0xA57D: 0x025A, # ɚ
0xA57E: 0xAB66, # ꭦ
0xA580: 0x5F02, # 异
0xA581: 0x28473, # 𨑳
0xA582: 0x5194, # 冔
0xA583: 0x247A3, # 𤞣
0xA584: 0x2896D, # 𨥭
0xA585: 0x5642, # 噂
0xA586: 0x7479, # 瑹
0xA587: 0x243B9, # 𤎹
0xA588: 0x723F, # 爿
0xA589: 0x9D56, # 鵖
0xA58A: 0x4D29, # 䴩
0xA58B: 0x20779, # 𠝹
0xA58C: 0x210F1, # 𡃱
0xA58D: 0x2504C, # 𥁌
0xA58E: 0x233CC, # 𣏌
0xA58F: 0x032F, # 下加符 = ̯
0xA590: 0x0312, # 下加符 = ̒
0xA591: 0x030D, # 下加符 = ̍
0xA592: 0x0314, # 下加符 = ̔
0xA593: 0x0313, # 下加符 = ̓
0xA594: 0x2F83B, # 吆
0xA595: 0x25EC0, # 𥻀
0xA596: 0x445B, # 䑛
0xA597: 0x21D3E, # 𡴾
0xA598: 0x0323, # 下加符 = ̣
0xA599: 0x0325, # 下加符 = ̥
0xA59A: 0x0331, # 下加符 = ̱
0xA59B: 0x032A, # 下加符 = ̪
0xA59C: 0x032C, # 下加符 = ̬
0xA59D: 0x032B, # 下加符 = ̫
0xA59E: 0x0329, # 下加符 = ̩
0xA59F: 0xFF5B, # 左开花括号 = {
0xA5A0: 0xFF5D, # 右闭花括号 = }
0xA5F7: 0x3016, # 左空方圆括号 = 〖
0xA5F8: 0x3017, # 右空方圆括号 = 〗
0xA5F9: 0x29DB, # ⧛
0xA5FA: 0xA5FA, # (vertical ⧛)
0xA5FB: 0x534D, # 卍
0xA5FC: 0xFE47, # 竖排上方括号 = ﹇
0xA5FD: 0xFE48, # 竖排下方括号 = ﹈
0xA5FE: 0x2571, # 斜线 = ╱
})
# Area A6
_update({
0xA640: 0x00C5, # 多国外文 = Å
0xA641: 0x0100, # 多国外文 = Ā
0xA642: 0x00C1, # 多国外文 = Á
0xA643: 0x01CD, # 多国外文 = Ǎ
0xA644: 0x00C0, # 多国外文 = À
0xA645: 0x00C2, # 多国外文 = Â
0xA646: 0x00C4, # 多国外文 = Ä
0xA647: 0x00C3, # 多国外文 = Ã
0xA648: 0x0112, # 多国外文 = Ē
0xA649: 0x00C9, # 多国外文 = É
0xA64A: 0x011A, # 多国外文 = Ě
0xA64B: 0x00C8, # 多国外文 = È
0xA64C: 0x00CA, # 多国外文 = Ê
0xA64D: 0x00CB, # 多国外文 = Ë
0xA64E: 0x1EBC, # 多国外文 = Ẽ
0xA64F: 0x012A, # 多国外文 = Ī
0xA650: 0x00CD, # 多国外文 = Í
0xA651: 0x01CF, # 多国外文 = Ǐ
0xA652: 0x00CC, # 多国外文 = Ì
0xA653: 0x00CE, # 多国外文 = Î
0xA654: 0x00CF, # 多国外文 = Ï
0xA655: 0x014C, # 多国外文 = Ō
0xA656: 0x00D3, # 多国外文 = Ó
0xA657: 0x01D1, # 多国外文 = Ǒ
0xA658: 0x00D2, # 多国外文 = Ò
0xA659: 0x00D4, # 多国外文 = Ô
0xA65A: 0x00D6, # 多国外文 = Ö
0xA65B: 0x00D5, # 多国外文 = Õ
0xA65C: 0x016A, # 多国外文 = Ū
0xA65D: 0x00DA, # 多国外文 = Ú
0xA65E: 0x01D3, # 多国外文 = Ǔ
0xA65F: 0x00D9, # 多国外文 = Ù
0xA660: 0x00DB, # 多国外文 = Û
0xA661: 0x00DC, # 多国外文 = Ü
0xA662: 0x01D5, # 多国外文 = Ǖ
0xA663: 0x01D7, # 多国外文 = Ǘ
0xA664: 0x01D9, # 多国外文 = Ǚ
0xA665: 0x01DB, # 多国外文 = Ǜ
0xA666: 0xA666, # 多国外文(Ü̂)
0xA667: 0x0108, # 多国外文 = Ĉ
0xA668: 0x011C, # 多国外文 = Ĝ
0xA669: 0x0124, # 多国外文 = Ĥ
0xA66A: 0x0134, # 多国外文 = Ĵ
0xA66B: 0x0160, # 多国外文 = Š
0xA66C: 0x015C, # 多国外文 = Ŝ
0xA66D: 0x0178, # 多国外文 = Ÿ
0xA66E: 0x017D, # 多国外文 = Ž
0xA66F: 0x1E90, # 多国外文 = Ẑ
0xA670: 0x0125, # 多国外文 = ĥ
0xA671: 0x0135, # 多国外文 = ĵ
0xA672: 0x00D1, # 多国外文 = Ñ
0xA673: 0x00E1, # á
0xA674: 0x00E9, # é
0xA675: 0x00ED, # í
0xA676: 0x00F3, # ó
0xA677: 0x00FA, # ú
0xA678: 0x2339D, # 𣎝
0xA679: 0x29F15, # 𩼕
0xA67A: 0x23293, # 𣊓
0xA67B: 0x3CA0, # 㲠
0xA67C: 0x2F922, # 牐
0xA67D: 0x24271, # 𤉱
0xA67E: 0x2720F, # 𧈏
0xA680: 0x00C1, # Á
0xA681: 0x0403, # Ѓ
0xA682: 0x00C9, # É
0xA683: 0x040C, # Ќ
0xA684: 0x00D3, # Ó
0xA685: 0x00FD, # ý
0xA686: 0xA686, # (Ы́)
0xA687: 0xA687, # (Э́)
0xA688: 0x04EC, # Ӭ
0xA689: 0xA689, # (Ю́)
0xA68A: 0xA68A, # (Я́)
0xA68B: 0xA68B, # (ѣ́)
0xA68C: 0xA68C, # (Ѣ́)
0xA68D: 0xA68D, # (И́)
0xA68E: 0x27E1B, # 𧸛
0xA68F: 0x910B, # 鄋
0xA690: 0x29F14, # 𩼔
0xA691: 0x2A0DF, # 𪃟
0xA692: 0x20270, # 𠉰
0xA693: 0x203F1, # 𠏱
0xA694: 0x211AB, # 𡆫
0xA695: 0x211E5, # 𡇥
0xA696: 0x21290, # 𡊐
0xA697: 0x363E, # 㘾
0xA698: 0x212DF, # 𡋟
0xA699: 0x57D7, # 埗
0xA69A: 0x2165F, # 𡙟
0xA69B: 0x248C2, # 𤣂
0xA69C: 0x22288, # 𢊈
0xA69D: 0x23C62, # 𣱢
0xA69E: 0x24276, # 𤉶
0xA69F: 0xFF1A, # 冒号 = :
0xA6A0: 0xFF1B, # 分号 = ;
0xA6B9: 0x2202, # 小写希腊字母 = ∂
0xA6BA: 0x03F5, # 小写希腊字母 = ϵ
0xA6BB: 0x03D1, # 小写希腊字母 = ϑ
0xA6BC: 0x03D5, # 小写希腊字母 = ϕ
0xA6BD: 0x03C6, # 小写希腊字母 = φ
0xA6BE: 0x03F0, # 小写希腊字母 = ϰ
0xA6BF: 0x03F1, # 小写希腊字母 = ϱ
0xA6C0: 0x03C2, # 小写希腊字母 = ς
0xA6D9: 0xFE10, # 竖排逗号 = ︐
0xA6DA: 0xFE12, # 竖排句号 = ︒
0xA6DB: 0xFE11, # 竖排顿号 = ︑
0xA6DC: 0xFE13, # 竖排冒号 = ︓
0xA6DD: 0xFE14, # 竖排分号 = ︔
0xA6DE: 0xFE15, # 竖排感叹号 = ︕
0xA6DF: 0xFE16, # 竖排问号 = ︖
0xA6EC: 0xFE17, # 竖排上空方圆括号 = ︗
0xA6ED: 0xFE18, # 竖排下空方圆括号 = ︘
0xA6F3: 0xFE19, # 竖排三点省略号 = ︙
0xA6F6: 0x00B7, # 居中间隔点 = ·
0xA6F7: 0xA6F7, # 居中逗号(middle ,)
0xA6F8: 0xA6F8, # 居中句号(middle 。)
0xA6F9: 0xA6F9, # 居中顿号(middle 、)
0xA6FA: 0xA6FA, # 居中冒号(middle :)
0xA6FB: 0xA6FB, # 居中分号(middle ;)
0xA6FC: 0xA6FC, # 居中感叹号(middle !)
0xA6FD: 0xA6FD, # 居中问号(middle ?)
0xA6FE: 0xA6FE # ( ͘)
})
# Area A7
_update({
0xA740: 0x24235, # 𤈵
0xA741: 0x2431A, # 𤌚
0xA742: 0x2489B, # 𤢛
0xA743: 0x4B63, # 䭣
0xA744: 0x25581, # 𥖁
0xA745: 0x25BB0, # 𥮰
0xA746: 0x7C06, # 簆
0xA747: 0x23388, # 𣎈
0xA748: 0x26A40, # 𦩀
0xA749: 0x26F16, # 𦼖
0xA74A: 0x2717F, # 𧅿
0xA74B: 0x22A98, # 𢪘
0xA74C: 0x3005, # 々
0xA74D: 0x22F7E, # 𢽾
0xA74E: 0x27BAA, # 𧮪
0xA74F: 0x20242, # 𠉂
0xA750: 0x23C5D, # 𣱝
0xA751: 0x22650, # 𢙐
0xA752: 0x247EF, # 𤟯
0xA753: 0x26221, # 𦈡
0xA754: 0x29A02, # 𩨂
0xA755: 0x45EA, # 䗪
0xA756: 0x26B4C, # 𦭌
0xA757: 0x26D9F, # 𦶟
0xA758: 0x26ED8, # 𦻘
0xA759: 0x359E, # 㖞
0xA75A: 0x20E01, # 𠸁
0xA75B: 0x20F90, # 𠾐
0xA75C: 0x3A18, # 㨘
0xA75D: 0x241A2, # 𤆢
0xA75E: 0x3B74, # 㭴
0xA75F: 0x43F2, # 䏲
0xA760: 0x40DA, # 䃚
0xA761: 0x3FA6, # 㾦
0xA762: 0x24ECA, # 𤻊
0xA763: 0x28C3E, # 𨰾
0xA764: 0x28C47, # 𨱇
0xA765: 0x28C4D, # 𨱍
0xA766: 0x28C4F, # 𨱏
0xA767: 0x28C4E, # 𨱎
0xA768: 0x28C54, # 𨱔
0xA769: 0x28C53, # 𨱓
0xA76A: 0x25128, # 𥄨
0xA76B: 0x251A7, # 𥆧
0xA76C: 0x45AC, # 䖬
0xA76D: 0x26A2D, # 𦨭
0xA76E: 0x41F2, # 䇲
0xA76F: 0x26393, # 𦎓
0xA770: 0x29F7C, # 𩽼
0xA771: 0x29F7E, # 𩽾
0xA772: 0x29F83, # 𩾃
0xA773: 0x29F87, # 𩾇
0xA774: 0x29F8C, # 𩾌
0xA775: 0x27785, # 𧞅
0xA776: 0x2775E, # 𧝞
0xA777: 0x28EE7, # 𨻧
0xA778: 0x290AF, # 𩂯
0xA779: 0x2070E, # 𠜎
0xA77A: 0x22AC1, # 𢫁
0xA77B: 0x20CED, # 𠳭
0xA77C: 0x3598, # 㖘
0xA77D: 0x220C7, # 𢃇
0xA77E: 0x22B43, # 𢭃
0xA780: 0x4367, # 䍧
0xA781: 0x20CD3, # 𠳓
0xA782: 0x20CAC, # 𠲬
0xA783: 0x36E2, # 㛢
0xA784: 0x35CE, # 㗎
0xA785: 0x3B39, # 㬹
0xA786: 0x44EA, # 䓪
0xA787: 0x20E96, # 𠺖
0xA788: 0x20E4C, # 𠹌
0xA789: 0x35ED, # 㗭
0xA78A: 0x20EF9, # 𠻹
0xA78B: 0x24319, # 𤌙
0xA78C: 0x267CC, # 𦟌
0xA78D: 0x28056, # 𨁖
0xA78E: 0x28840, # 𨡀
0xA78F: 0x20F90, # 𠾐
0xA790: 0x21014, # 𡀔
0xA791: 0x236DC, # 𣛜
0xA792: 0x28A17, # 𨨗
0xA793: 0x28879, # 𨡹
0xA794: 0x4C9E, # 䲞
0xA795: 0x20410, # 𠐐
0xA796: 0x40DF, # 䃟
0xA797: 0x210BF, # 𡂿
0xA798: 0x22E0B, # 𢸋
0xA799: 0x4312, # 䌒
0xA79A: 0x233AB, # 𣎫
0xA79B: 0x2812E, # 𨄮
0xA79C: 0x4A31, # 䨱
0xA79D: 0x27B48, # 𧭈
0xA79E: 0x29EAC, # 𩺬
0xA79F: 0x23822, # 𣠢
0xA7A0: 0x244CB, # 𤓋
0xA7C2: 0x0409, # 大写俄文字母LJE = Љ
0xA7C3: 0x040A, # 大写俄文字母NJE = Њ
0xA7C4: 0x040F, # 大写俄文字母DZHE = Џ
0xA7C5: 0x04AE, # 大写俄文字母 = Ү
0xA7C6: 0x0402, # 俄文字母 = Ђ
0xA7C7: 0x040B, # 俄文字母 = Ћ
0xA7C8: 0x0474, # 俄文字母 = Ѵ
0xA7C9: 0x0462, # 俄文字母 = Ѣ
0xA7CA: 0x0463, # 俄文字母 = ѣ
0xA7CB: 0x04E8, # 俄文字母 = Ө
0xA7CC: 0x0459, # 俄文字母 = љ
0xA7CD: 0x045A, # 俄文字母 = њ
0xA7CE: 0x045F, # 俄文字母 = џ
0xA7CF: 0x04AF, # 俄文字母 = ү
0xA7F2: 0x00E1, # 俄文字母 = á
0xA7F3: 0x00E9, # 俄文字母 = é
0xA7F4: 0xA7F4, # 俄文字母(и́)
0xA7F5: 0x00F3, # 俄文字母 = ó
0xA7F6: 0x00FD, # 俄文字母 = ý
0xA7F7: 0xA7F7, # 俄文字母(ы́)
0xA7F8: 0xA7F8, # 俄文字母(э́)
0xA7F9: 0xA7F9, # 俄文字母(ю́)
0xA7FA: 0xA7FA, # 俄文字母(я́)
0xA7FB: 0x0452, # 俄文字母 = ђ
0xA7FC: 0x045B, # 俄文字母 = ћ
0xA7FD: 0x0475, # 俄文字母 = ѵ
0xA7FE: 0x04E9 # 俄文字母 = ө
})
# Area A8
_update({
0xA8BC: 0x1E3F, # 汉语拼音(ḿ) = ḿ
0xA8C1: 0xA8C1, # 中文阴圈码十(⏺ + 十)
0xA8C2: 0xA8C2, # 中文阴圈码廿(⏺ + 廿)
0xA8C3: 0xA8C3, # 中文阴圈码卅(⏺ + 卅)
0xA8C4: 0x4E00, # 注音符号— = 一
0xA8EA: 0xA8EA, # 中文阴框码一(⏹ + 一)
0xA8EB: 0xA8EB, # 中文阴框码二(⏹ + 二)
0xA8EC: 0xA8EC, # 中文阴框码三(⏹ + 三)
0xA8ED: 0xA8ED, # 中文阴框码四(⏹ + 四)
0xA8EE: 0xA8EE, # 中文阴框码五(⏹ + 五)
0xA8EF: 0xA8EF, # 中文阴框码六(⏹ + 六)
0xA8F0: 0xA8F0, # 中文阴框码七(⏹ + 七)
0xA8F1: 0xA8F1, # 中文阴框码八(⏹ + 八)
0xA8F2: 0xA8F2, # 中文阴框码九(⏹ + 九)
0xA8F3: 0xA8F3, # 中文阴框码十(⏹ + 十)
0xA8F4: 0xA8F4, # 中文阴框码廿(⏹ + 廿)
0xA8F5: 0xA8F5, # 中文阴框码卅(⏹ + 卅)
0xA8F6: 0xA8F6, # 中文阴圈码一(⏺ + 一)
0xA8F7: 0xA8F7, # 中文阴圈码二(⏺ + 二)
0xA8F8: 0xA8F8, # 中文阴圈码三(⏺ + 三)
0xA8F9: 0xA8F9, # 中文阴圈码四(⏺ + 四)
0xA8FA: 0xA8FA, # 中文阴圈码五(⏺ + 五)
0xA8FB: 0xA8FB, # 中文阴圈码六(⏺ + 六)
0xA8FC: 0xA8FC, # 中文阴圈码七(⏺ + 七)
0xA8FD: 0xA8FD, # 中文阴圈码八(⏺ + 八)
0xA8FE: 0xA8FE # 中文阴圈码九(⏺ + 九)
})
# Area A9
_update({
0xA9A1: 0xA9A1, # (╪)
0xA9A2: 0xA9A2, # (╡)
0xA9F0: 0x21E8, # 空心向右箭头 = ⇨
0xA9F1: 0x21E6, # 空心向左箭头 = ⇦
0xA9F2: 0x2B06, # 实心向上箭头 = ⬆
0xA9F3: 0x2B07, # 实心向下箭头 = ⬇
0xA9F4: 0x27A1, # 实心向右箭头 = ➡
0xA9F5: 0x2B05, # 实心向左箭头 = ⬅
0xA9F6: 0x2B62, # 箭头-无翅向右 = ⭢
0xA9F7: 0x2B60, # 箭头-无翅向左 = ⭠
0xA9F8: 0x2B61, # 箭头-无翅向上 = ⭡
0xA9F9: 0x2B63, # 箭头-无翅向下 = ⭣
0xA9FA: 0x21C1, # 箭头-下单翅向右 = ⇁
0xA9FB: 0x21BD, # 箭头-下单翅向左 = ↽
0xA9FC: 0xA9FC, # 箭头-双向向内(ꜜ͎)
0xA9FD: 0x2195, # 箭头-双向向外 = ↕
0xA9FE: 0x2B65, # 箭头-无翅双向向外 = ⭥
})
# Area AA
_update({
0xAAA1: 0xAAA1, # BD语言注解:盘外符开弧(⸨)
0xAAA2: 0xAAA2, # BD语言注解:盘外符标记()→)
0xAAA3: 0xAAA3, # BD语言注解:盘外符闭弧(⸩)
0xAAA4: 0xAAA4, # BD语言注解:换行符(⇙)
0xAAA5: 0xAAA5, # BD语言注解:换段符(↙)
0xAAA6: 0xAAA6, # BD语言注解:小样文件结束(Ω)
0xAAA7: 0xAAA7, # BD语言注解:数学态标记(◯ + ﹩)
0xAAA8: 0xAAA8, # BD语言注解:自定义参数(◯ + ﹠)
0xAAA9: 0xAAA9, # BD语言注解:盒子开弧(⦃)
0xAAAA: 0xAAAA, # BD语言注解:盒子闭弧(⦄)
0xAAAB: 0xAAAB, # BD语言注解:转字体标记(ⓩ)
0xAAAC: 0xAAAC, # BD语言注解:上标(⤊)
0xAAAD: 0xAAAD, # BD语言注解:下标(⤋)
0xAAB0: 0x002C, # 千分撇 = ,
0xAAB1: 0x002E, # 小数点 = .
0xAAB2: 0x2010, # 半字线 = ‒
0xAAB3: 0x002A, # 六角星号、呼应号 = *
0xAAB4: 0x0021, # 阶乘 = !
0xAAB5: 0x2202, # 偏导数 = ∂
0xAAB6: 0x2211, # 和 = ∑
0xAAB7: 0x220F, # 积 = ∏
0xAAB8: 0x2AEE, # 非因子号 = ⫮
0xAAB9: 0x2031, # 万分号 = ‱
0xAABA: 0x227B, # 前继 = ≻
0xAABB: 0x227A, # 后继 = ≺
0xAABC: 0x2282, # 包含于 = ⊂
0xAABD: 0x2283, # 包含 = ⊃
0xAABE: 0x225C, # Delta等于 = ≜
0xAABF: 0x00AC, # 否定 = ¬
0xAAC0: 0x22CD, # ⋍
0xAAC1: 0x2286, # 包含于 = ⊆
0xAAC2: 0x2287, # 包含 = ⊇
0xAAC3: 0x225C, # ≜
0xAAC4: 0x2243, # 近似符号 = ≃
0xAAC5: 0x2265, # 大于等于 = ≥
0xAAC6: 0x2264, # 小于等于 = ≤
0xAAC7: 0x2214, # 穆勒连分符号、集合合 = ∔
0xAAC8: 0x2238, # 算术差 = ∸
0xAAC9: 0x2A30, # 直积号 = ⨰
0xAACA: 0x2271, # 不大于等于 = ≱
0xAACB: 0x2270, # 不小于等于 = ≰
0xAACC: 0x2AB0, # ⪰
0xAACD: 0x2AAF, # ⪯
0xAACE: 0x5350, # 卐
0xAACF: 0x212A, # 绝对温度单位 = K
0xAAD0: 0x2200, # 全称量词 = ∀
0xAAD1: 0x21D1, # ⇑
0xAAD2: 0x21E7, # ⇧
0xAAD3: 0x21BE, # ↾
0xAAD4: 0x21D3, # ⇓
0xAAD5: 0x21E9, # ⇩
0xAAD6: 0x21C3, # ⇃
0xAAD7: 0x2935, # ⤵
0xAAD8: 0x21E5, # ⇥
0xAAD9: 0x22F0, # 对角三连点 = ⋰
0xAADA: 0x21D4, # 等价 = ⇔
0xAADB: 0x21C6, # ⇆
0xAADC: 0x2194, # ↔
0xAADD: 0x21D2, # 推断 = ⇒
0xAADE: 0x21E8, # ⇨
0xAADF: 0x21C0, # ⇀
0xAAE0: 0x27F6, # ⟶
0xAAE1: 0x21D0, # ⇐
0xAAE2: 0x21E6, # ⇦
0xAAE3: 0x21BC, # ↼
0xAAE4: 0x27F5, # ⟵
0xAAE5: 0x2196, # ↖
0xAAE6: 0x2199, # ↙
0xAAE7: 0x2198, # ↘
0xAAE8: 0x2197, # ↗
0xAAE9: 0x22D5, # 平行等于 = ⋕
0xAAEA: 0x2AC5, # 包含于 = ⫅
0xAAEB: 0x2AC6, # 包含 = ⫆
0xAAEC: 0x29CB, # 相当于 = ⧋
0xAAED: 0x226B, # 远大于 = ≫
0xAAEE: 0x226A, # 远小于 = ≪
0xAAEF: 0x2A72, # 加或等于 = ⩲
0xAAF0: 0x22BB, # ⊻
0xAAF1: 0x2AE8, # 垂直等于 = ⫨
0xAAF2: 0x2277, # 大于或小于 = ≷
0xAAF3: 0x227D, # ≽
0xAAF4: 0x227C, # ≼
0xAAF5: 0x2109, # 华氏度 = ℉
0xAAF6: 0x2203, # 存在量词 = ∃
0xAAF7: 0x22F1, # 对角三连点 = ⋱
0xAAF9: 0x2241, # ≁
0xAAFA: 0x2244, # ≄
0xAAFB: 0x2276, # ≶
0xAAFC: 0x2209, # 不属于 = ∉
0xAAFD: 0x2267, # ≧
0xAAFE: 0x2266 # ≦
})
# Area AB
_update({
0xABA1: 0x224B, # ≋
0xABA2: 0x2262, # 不恒等于 = ≢
0xABA3: 0x2251, # 近似值号 = ≑
0xABA4: 0x2284, # 不包含于 = ⊄
0xABA5: 0x2285, # 不包含 = ⊅
0xABA6: 0x2259, # 相当于、等角的、估算 = ≙
0xABA7: 0x2205, # 空集 = ∅
0xABA8: 0x2207, # 微分算符 = ∇
0xABA9: 0x2A01, # 直和 = ⨁
0xABAA: 0x2A02, # 重积 = ⨂
0xABAB: 0x03F9, # 组合 = Ϲ
0xABAC: 0xABAC, # 对角六连点(⋰ + ⋰)
0xABAD: 0x263C, # ☼
0xABAE: 0xABAE, # (⚬ + ↑)
0xABAF: 0x2247, # 不近似等于 = ≇
0xABB0: 0x2249, # 不近似等于 = ≉
0xABB1: 0x2278, # 不小于大于 = ≸
0xABB2: 0x22F6, # 不属于 = ⋶
0xABB3: 0x2AFA, # 大于等于 = ⫺
0xABB4: 0x2AF9, # 小于等于 = ⫹
0xABB5: 0x2245, # 近似等于、接近 = ≅
0xABB6: 0x2267, # 大于等于 = ≧
0xABB7: 0x2250, # 近似等于 = ≐
0xABB8: 0x2266, # 小于等于 = ≦
0xABB9: 0x2A26, # 加或差 = ⨦
0xABBA: 0x2213, # 负或正、减或加 = ∓
0xABBB: 0x233F, # ⌿
0xABBC: 0x30FC, # 日文符号 = ー
0xABBD: 0xABBD, # 近似值号(· + ≈)
0xABBE: 0x2288, # 不包含于 = ⊈
0xABBF: 0x2289, # 不包含 = ⊉
0xABC0: 0x225A, # 角相等 = ≚
0xABC1: 0x2205, # 空集 = ∅
0xABC2: 0xABC2, # (diagonal 卐)
0xABC3: 0x0024, # $
0xABC4: 0x2709, # ✉
0xABC5: 0x272E, # ✮
0xABC6: 0x272F, # ✯
0xABC7: 0x2744, # ❄
0xABC8: 0x211E, # 处方符号 = ℞
0xABC9: 0x1D110, # 𝄐
0xABCA: 0x2034, # 三次微分 = ‴
0xABCB: 0xABCB, # 对角六连点(⋱ + ⋱)
0xABCC: 0x2ACB, # 真包含于 = ⫋
0xABCD: 0x2ACC, # 真包含 = ⫌
0xABCE: 0x2A63, # ⩣
0xABCF: 0xABCF, # 约数0(0 + \)
0xABD0: 0xABD0, # 约数1(1 + \)
0xABD1: 0xABD1, # 约数2(2 + \)
0xABD2: 0xABD2, # 约数3(3 + \)
0xABD3: 0xABD3, # 约数4(4 + \)
0xABD4: 0xABD4, # 约数5(5 + \)
0xABD5: 0xABD5, # 约数6(6 + \)
0xABD6: 0xABD6, # 约数7(7 + \)
0xABD7: 0xABD7, # 约数8(8 + \)
0xABD8: 0xABD8, # 约数9(9 + \)
0xABD9: 0x216C, # 罗马数字50 = Ⅼ
0xABDA: 0x216D, # 罗马数字100 = Ⅽ
0xABDB: 0x216E, # 罗马数字500 = Ⅾ
0xABDC: 0x216F, # 罗马数字1000 = Ⅿ
0xABDD: 0x2295, # 圈加 = ⊕
0xABDE: 0xABDE, # 圈加减(◯ + ±)
0xABDF: 0x2296, # 圈减 = ⊖
0xABE0: 0xABE0, # 圈点减(◯ + ∸)
0xABE1: 0x2297, # 圈乘 = ⊗
0xABE2: 0x2A38, # 圈除 = ⨸
0xABE3: 0x229C, # 圈等于 = ⊜
0xABE4: 0xABE4, # 交流电机(◯ + ∼)
0xABE5: 0xABE5, # 圈大于等于(◯ + ≥)
0xABE6: 0xABE6, # 圈小于等于(◯ + ≤)
0xABE7: 0x224A, # 近似等于 = ≊
0xABE8: 0xABE8, # (> + >)
0xABE9: 0xABE9, # (< + <)
0xABEA: 0x22DB, # 大于等于小于 = ⋛
0xABEB: 0x22DA, # 小于等于大于 = ⋚
0xABEC: 0x2A8C, # 大于等于小于 = ⪌
0xABED: 0x2A8B, # 小于等于大于 = ⪋
0xABEE: 0x2273, # ≳
0xABEF: 0x2272, # ≲
0xABF0: 0x29A5, # ⦥
0xABF1: 0x29A4, # ⦤
0xABF2: 0x2660, # 黑桃 = ♠
0xABF3: 0x2394, # 正六边形 = ⎔
0xABF4: 0x2B20, # 正五边形 = ⬠
0xABF5: 0x23E2, # 梯形 = ⏢
0xABF6: 0x2663, # 梅花 = ♣
0xABF7: 0x25B1, # 平行四边形 = ▱
0xABF8: 0x25AD, # 矩形 = ▭
0xABF9: 0x25AF, # 矩形 = ▯
0xABFA: 0x2665, # 红桃 = ♥
0xABFB: 0x2666, # 方块 = ♦
0xABFC: 0x25C1, # 三角形(向左) = ◁
0xABFD: 0x25BD, # 三角形(向下) = ▽
0xABFE: 0x25B7 # 三角形(向右) = ▷
})
# Area AC
_update({
0xACA1: 0x25C0, # 实三角形(向左) = ◀
0xACA2: 0x25BC, # 实三角形(向下) = ▼
0xACA3: 0x25B6, # 实三角形(向右) = ▶
0xACA4: 0x25FA, # 直角三角形 = ◺
0xACA5: 0x22BF, # 直角三角形 = ⊿
0xACA6: 0x25B3, # △
0xACA7: 0x27C1, # ⟁
0xACA8: 0x2BCE, # ⯎
0xACA9: 0x2B2F, # ⬯
0xACAA: 0xACAA, # (⬯ + ∥)
0xACAB: 0x2B2E, # ⬮
0xACAC: 0x2279, # 不大于小于 = ≹
0xACAD: 0x1D10B, # 𝄋
0xACAE: 0x2218, # 圈乘 = ∘
0xACAF: 0xACAF, # (vertical ≈)
0xACB2: 0xACB2, # (F-like symbol)
0xACB3: 0x22A6, # ⊦
0xACB4: 0x22A7, # ⊧
0xACB5: 0x22A8, # ⊨
0xACB6: 0x29FA, # 强阳二值 = ⧺
0xACB7: 0x29FB, # 强阳三值 = ⧻
0xACB8: 0xACB8, # 强阳四值(++++)
0xACB9: 0x291A, # ⤚
0xACBA: 0xACBA, # (⤙ + _)
0xACBB: 0xACBB, # (⤚ + _)
0xACBC: 0x2713, # 勾 = ✓
0xACBD: 0x22CE, # ⋎
0xACBE: 0xACBE, # (V + \)
0xACBF: 0xACBF, # (ˇ + | + ꞈ)
0xACC0: 0x224E, # 相当于、等值于 = ≎
0xACC1: 0x224F, # 间差 = ≏
0xACC2: 0x23D3, # ⏓
0xACC3: 0xACC3, # (◡ + _)
0xACC4: 0xACC4, # (◡ + _ + /)
0xACC5: 0x2715, # ✕
0xACC6: 0xACC6, # (✕ + •)
0xACC8: 0xACC8, # (∩ + ˜)
0xACC9: 0xACC9, # (∪ + ˜)
0xACCA: 0xACCA, # (V̰)
0xACCB: 0xACCB, # (V̱)
0xACCC: 0xACCC, # (V̱̰)
0xACCD: 0x2126, # Ω
0xACCE: 0x221D, # 成正比 = ∝
0xACCF: 0x29A0, # 角 = ⦠
0xACD0: 0x2222, # 角 = ∢
0xACD1: 0x2AAC, # 小于等于 = ⪬
0xACD2: 0x2239, # 差 = ∹
0xACD3: 0x223A, # ∺
0xACD4: 0x2135, # ℵ
0xACD5: 0xACD5, # (⊃ + ᐣ)
0xACD6: 0xACD6, # (⊃ + ᐣ + /)
0xACD7: 0x21CC, # ⇌
0xACD8: 0x274B, # ❋
0xACD9: 0x2B01, # ⬁
0xACDA: 0x2B03, # ⬃
0xACDB: 0x2B02, # ⬂
0xACDC: 0x2B00, # ⬀
0xACDD: 0xACDD, # (△ + ▾)
0xACDE: 0xACDE, # (▲ + ▿)
0xACDF: 0xACDF, # (( + —)
0xACE0: 0xACE0, # ([ + —)
0xACE1: 0xACE1, # ([ + —)
0xACE2: 0xACE2, # () + —)
0xACE3: 0xACE3, # (] + —)
0xACE4: 0xACE4, # (] + —)
0xACE5: 0xACE5, # (] + — + ₙ)
0xACE6: 0xACE6, # (] + — + ₘ)
0xACE7: 0xACE7, # (] + — + ₓ)
0xACE8: 0xACE8, # () + — + ₙ)
0xACE9: 0x2233, # 逆时针环积分 = ∳
0xACEA: 0x2232, # 顺时针环积分 = ∲
0xACEB: 0x222C, # 二重积分 = ∬
0xACEC: 0x222F, # 二重环积分 = ∯
0xACED: 0x222D, # 三重积分 = ∭
0xACEE: 0x2230, # 三重环积分 = ∰
0xACEF: 0x0421, # 组合符号 = С
0xACF0: 0x2019, # 所有格符 = ’
0xACF1: 0x0027, # 重音节符号 = '
0xACF2: 0x03A3, # 和(正文态) = Σ
0xACF3: 0x03A0, # 积(正文态) = Π
0xACF4: 0x02C7, # 注音符号 = ˇ
0xACF5: 0x02CB, # 注音符号 = ˋ
0xACF6: 0x02CA, # 注音符号 = ˊ
0xACF7: 0x02D9, # 注音符号 = ˙
0xACF8: 0x29F72, # 𩽲
0xACF9: 0x362D, # 㘭
0xACFA: 0x3A52, # 㩒
0xACFB: 0x3E74, # 㹴
0xACFC: 0x27741, # 𧝁
0xACFD: 0x30FC, # 日文长音记号 = ー
0xACFE: 0x2022 # 注音符号 = •
})
# Area AD
_update({
0xADA1: 0x3280, # 中文阳圈码一 = ㊀
0xADA2: 0x3281, # 中文阳圈码二 = ㊁
0xADA3: 0x3282, # 中文阳圈码三 = ㊂
0xADA4: 0x3283, # 中文阳圈码四 = ㊃
0xADA5: 0x3284, # 中文阳圈码五 = ㊄
0xADA6: 0x3285, # 中文阳圈码六 = ㊅
0xADA7: 0x3286, # 中文阳圈码七 = ㊆
0xADA8: 0x3287, # 中文阳圈码八 = ㊇
0xADA9: 0x3288, # 中文阳圈码九 = ㊈
0xADAA: 0xADAA, # 中文阳圈码一零(◯ + 一〇)
0xADAB: 0xADAB, # 中文阳圈码一一(◯ + 一一)
0xADAC: 0xADAC, # 中文阳圈码一二(◯ + 一二)
0xADAD: 0xADAD, # 中文阳圈码一三(◯ + 一三)
0xADAE: 0xADAE, # 中文阳圈码一四(◯ + 一四)
0xADAF: 0xADAF, # 中文阳圈码一五(◯ + 一五)
0xADB0: 0xADB0, # 中文阳圈码一六(◯ + 一六)
0xADB1: 0xADB1, # 中文阳圈码一七(◯ + 一七)
0xADB2: 0xADB2, # 中文阳圈码一八(◯ + 一八)
0xADB3: 0xADB3, # 中文阳圈码一九(◯ + 一九)
0xADB4: 0xADB4, # 中文阳圈码二零(◯ + 二〇)
0xADB5: 0x24EA, # 数字阳圈码0 = ⓪
0xADB6: 0x2018, # 外文左单引号 = ‘
0xADB7: 0x201C, # 外文左双引号 = “
0xADB8: 0x2019, # 外文右单引号 = ’
0xADB9: 0x201D, # 外文右双引号 = ”
0xADBA: 0x025B, # 国际音标 = ɛ
0xADBB: 0x0251, # 国际音标 = ɑ
0xADBC: 0x0259, # 国际音标 = ə
0xADBD: 0x025A, # 国际音标 = ɚ
0xADBE: 0x028C, # 国际音标 = ʌ
0xADBF: 0x0254, # 国际音标 = ɔ
0xADC0: 0x0283, # 国际音标 = ʃ
0xADC1: 0x02D1, # 国际音标 = ˑ
0xADC2: 0x02D0, # 国际音标 = ː
0xADC3: 0x0292, # 国际音标 = ʒ
0xADC4: 0x0261, # 国际音标 = ɡ
0xADC5: 0x03B8, # 国际音标 = θ
0xADC6: 0x00F0, # 国际音标 = ð
0xADC7: 0x014B, # 国际音标 = ŋ
0xADC8: 0x0264, # 国际音标 = ɤ
0xADC9: 0x0258, # 国际音标 = ɘ
0xADCA: 0x026A, # 国际音标 = ɪ
0xADCB: 0x0268, # 国际音标 = ɨ
0xADCC: 0x027F, # 国际音标 = ɿ
0xADCD: 0x0285, # 国际音标 = ʅ
0xADCE: 0x028A, # 国际音标 = ʊ
0xADCF: 0x00F8, # 国际音标 = ø
0xADD0: 0x0275, # 国际音标 = ɵ
0xADD1: 0x026F, # 国际音标 = ɯ
0xADD2: 0x028F, # 国际音标 = ʏ
0xADD3: 0x0265, # 国际音标 = ɥ
0xADD4: 0x0289, # 国际音标 = ʉ
0xADD5: 0x0278, # 国际音标 = ɸ
0xADD6: 0x0288, # 国际音标 = ʈ
0xADD7: 0x0290, # 国际音标 = ʐ
0xADD8: 0x0256, # 国际音标 = ɖ
0xADD9: 0x0282, # 国际音标 = ʂ
0xADDA: 0x0272, # 国际音标 = ɲ
0xADDB: 0x0271, # 国际音标 = ɱ
0xADDC: 0x03B3, # 国际音标 = γ
0xADDD: 0x0221, # 国际音标 = ȡ
0xADDE: 0x0255, # 国际音标 = ɕ
0xADDF: 0x0235, # 国际音标 = ȵ
0xADE0: 0x0291, # 国际音标 = ʑ
0xADE1: 0x0236, # 国际音标 = ȶ
0xADE2: 0x026C, # 国际音标 = ɬ
0xADE3: 0x028E, # 国际音标 = ʎ
0xADE4: 0x1D84, # 国际音标 = ᶄ
0xADE5: 0xAB53, # 国际音标 = ꭓ
0xADE6: 0x0127, # 国际音标 = ħ
0xADE7: 0x0263, # 国际音标 = ɣ
0xADE8: 0x0281, # 国际音标 = ʁ
0xADE9: 0x0294, # 国际音标 = ʔ
0xADEA: 0x0295, # 国际音标 = ʕ
0xADEB: 0x0262, # 国际音标 = ɢ
0xADEC: 0x0266, # 国际音标 = ɦ
0xADED: 0x4C7D, # 䱽
0xADEE: 0x24B6D, # 𤭭
0xADEF: 0x00B8, # 新蒙文 = ¸
0xADF0: 0x02DB, # 新蒙文 = ˛
0xADF1: 0x04D8, # 新蒙文 = Ә
0xADF2: 0x04BA, # 新蒙文 = Һ
0xADF3: 0x0496, # 新蒙文 = Җ
0xADF4: 0x04A2, # 新蒙文 = Ң
0xADF5: 0x2107B, # 𡁻
0xADF6: 0x2B62C, # 𫘬
0xADF7: 0x04D9, # 新蒙文 = ә
0xADF8: 0x04BB, # 新蒙文 = һ
0xADF9: 0x0497, # 新蒙文 = җ
0xADFA: 0x04A3, # 新蒙文 = ң
0xADFB: 0x40CE, # 䃎
0xADFC: 0x04AF, # 新蒙文 = ү
0xADFD: 0x02CC, # 次重音符号 = ˌ
0xADFE: 0xFF40 # 次重音符号 = ｀
})
# Area F8
_update({
0xF8A1: 0x5C2A, # 尪
0xF8A2: 0x97E8, # 韨
0xF8A3: 0x5F67, # 彧
0xF8A4: 0x672E, # 朮
0xF8A5: 0x4EB6, # 亶
0xF8A6: 0x53C6, # 叆
0xF8A7: 0x53C7, # 叇
0xF8A8: 0x8BBB, # 讻
0xF8A9: 0x27BAA, # 𧮪
0xF8AA: 0x8BEA, # 诪
0xF8AB: 0x8C09, # 谉
0xF8AC: 0x8C1E, # 谞
0xF8AD: 0x5396, # 厖
0xF8AE: 0x9EE1, # 黡
0xF8AF: 0x533D, # 匽
0xF8B0: 0x5232, # 刲
0xF8B1: 0x6706, # 朆
0xF8B2: 0x50F0, # 僰
0xF8B3: 0x4F3B, # 伻
0xF8B4: 0x20242, # 𠉂
0xF8B5: 0x5092, # 傒
0xF8B6: 0x5072, # 偲
0xF8B7: 0x8129, # 脩
0xF8B8: 0x50DC, # 僜
0xF8B9: 0x90A0, # 邠
0xF8BA: 0x9120, # 鄠
0xF8BB: 0x911C, # 鄜
0xF8BC: 0x52BB, # 劻
0xF8BD: 0x52F7, # 勷
0xF8BE: 0x6C67, # 汧
0xF8BF: 0x6C9A, # 沚
0xF8C0: 0x6C6D, # 汭
0xF8C1: 0x6D34, # 洴
0xF8C2: 0x6D50, # 浐
0xF8C3: 0x6D49, # 浉
0xF8C4: 0x6DA2, # 涢
0xF8C5: 0x6D65, # 浥
0xF8C6: 0x6DF4, # 淴
0xF8C7: 0x6EEA, # 滪
0xF8C8: 0x6E87, # 溇
0xF8C9: 0x6EC9, # 滉
0xF8CA: 0x6FBC, # 澼
0xF8CB: 0x6017, # 怗
0xF8CC: 0x22650, # 𢙐
0xF8CD: 0x6097, # 悗
0xF8CE: 0x60B0, # 悰
0xF8CF: 0x60D3, # 惓
0xF8D0: 0x6153, # 慓
0xF8D1: 0x5BAC, # 宬
0xF8D2: 0x5EBC, # 庼
0xF8D3: 0x95EC, # 闬
0xF8D4: 0x95FF, # 闿
0xF8D5: 0x9607, # 阇
0xF8D6: 0x9613, # 阓
0xF8D7: 0x961B, # 阛
0xF8D8: 0x631C, # 挜
0xF8D9: 0x630C, # 挌
0xF8DA: 0x63AF, # 掯
0xF8DB: 0x6412, # 搒
0xF8DC: 0x63F3, # 揳
0xF8DD: 0x6422, # 搢
0xF8DE: 0x5787, # 垇
0xF8DF: 0x57B5, # 垵
0xF8E0: 0x57BD, # 垽
0xF8E1: 0x57FC, # 埼
0xF8E2: 0x56AD, # 嚭
0xF8E3: 0x26B4C, # 𦭌
0xF8E4: 0x8313, # 茓
0xF8E5: 0x8359, # 荙
0xF8E6: 0x82F3, # 苳
0xF8E7: 0x8399, # 莙
0xF8E8: 0x44D6, # 䓖
0xF8E9: 0x841A, # 萚
0xF8EA: 0x83D1, # 菑
0xF8EB: 0x84C2, # 蓂
0xF8EC: 0x8439, # 萹
0xF8ED: 0x844E, # 葎
0xF8EE: 0x8447, # 葇
0xF8EF: 0x84DA, # 蓚
0xF8F0: 0x26D9F, # 𦶟
0xF8F1: 0x849F, # 蒟
0xF8F2: 0x84BB, # 蒻
0xF8F3: 0x850A, # 蔊
0xF8F4: 0x26ED8, # 𦻘
0xF8F5: 0x85A2, # 薢
0xF8F6: 0x85B8, # 薸
0xF8F7: 0x85E8, # 藨
0xF8F8: 0x8618, # 蘘
0xF8F9: 0x596D, # 奭
0xF8FA: 0x546F, # 呯
0xF8FB: 0x54A5, # 咥
0xF8FC: 0x551D, # 唝
0xF8FD: 0x5536, # 唶
0xF8FE: 0x556F # 啯
})
# Area F9
_update({
0xF9A1: 0x5621, # 嘡
0xF9A2: 0x20E01, # 𠸁
0xF9A3: 0x20F90, # 𠾐
0xF9A4: 0x360E, # 㘎
0xF9A5: 0x56F7, # 囷
0xF9A6: 0x5E21, # 帡
0xF9A7: 0x5E28, # 帨
0xF9A8: 0x5CA8, # 岨
0xF9A9: 0x5CE3, # 峣
0xF9AA: 0x5D5A, # 嵚
0xF9AB: 0x5D4E, # 嵎
0xF9AC: 0x5D56, # 嵖
0xF9AD: 0x5DC2, # 巂
0xF9AE: 0x8852, # 衒
0xF9AF: 0x5FAF, # 徯
0xF9B0: 0x5910, # 夐
0xF9B1: 0x7330, # 猰
0xF9B2: 0x247EF, # 𤟯
0xF9B3: 0x734F, # 獏
0xF9B4: 0x9964, # 饤
0xF9B5: 0x9973, # 饳
0xF9B6: 0x997E, # 饾
0xF9B7: 0x9982, # 馂
0xF9B8: 0x9989, # 馉
0xF9B9: 0x5C43, # 屃
0xF9BA: 0x5F36, # 弶
0xF9BB: 0x5B56, # 孖
0xF9BC: 0x59EE, # 姮
0xF9BD: 0x5AEA, # 嫪
0xF9BE: 0x7ED6, # 绖
0xF9BF: 0x7F0A, # 缊
0xF9C0: 0x7E34, # 縴
0xF9C1: 0x7F1E, # 缞
0xF9C2: 0x26221, # 𦈡
0xF9C3: 0x9A8E, # 骎
0xF9C4: 0x29A02, # 𩨂
0xF9C5: 0x9A95, # 骕
0xF9C6: 0x9AA6, # 骦
0xF9C7: 0x659D, # 斝
0xF9C8: 0x241A2, # 𤆢
0xF9C9: 0x712E, # 焮
0xF9CA: 0x7943, # 祃
0xF9CB: 0x794E, # 祎
0xF9CC: 0x7972, # 祲
0xF9CD: 0x7395, # 玕
0xF9CE: 0x73A0, # 玠
0xF9CF: 0x7399, # 玙
0xF9D0: 0x73B1, # 玱
0xF9D1: 0x73F0, # 珰
0xF9D2: 0x740E, # 琎
0xF9D3: 0x742F, # 琯
0xF9D4: 0x7432, # 琲
0xF9D5: 0x67EE, # 柮
0xF9D6: 0x6812, # 栒
0xF9D7: 0x3B74, # 㭴
0xF9D8: 0x6872, # 桲
0xF9D9: 0x68BC, # 梼
0xF9DA: 0x68B9, # 梹
0xF9DB: 0x68C1, # 棁
0xF9DC: 0x696F, # 楯
0xF9DD: 0x69A0, # 榠
0xF9DE: 0x69BE, # 榾
0xF9DF: 0x69E5, # 槥
0xF9E0: 0x6A9E, # 檞
0xF9E1: 0x69DC, # 槜
0xF9E2: 0x6B95, # 殕
0xF9E3: 0x80FE, # 胾
0xF9E4: 0x89F1, # 觱
0xF9E5: 0x74FB, # 瓻
0xF9E6: 0x7503, # 甃
0xF9E7: 0x80D4, # 胔
0xF9E8: 0x22F7E, # 𢽾
0xF9E9: 0x668D, # 暍
0xF9EA: 0x9F12, # 鼒
0xF9EB: 0x6F26, # 漦
0xF9EC: 0x8D51, # 赑
0xF9ED: 0x8D52, # 赒
0xF9EE: 0x8D57, # 赗
0xF9EF: 0x7277, # 牷
0xF9F0: 0x7297, # 犗
0xF9F1: 0x23C5D, # 𣱝
0xF9F2: 0x8090, # 肐
0xF9F3: 0x43F2, # 䏲
0xF9F4: 0x6718, # 朘
0xF9F5: 0x8158, # 腘
0xF9F6: 0x81D1, # 臑
0xF9F7: 0x7241, # 牁
0xF9F8: 0x7242, # 牂
0xF9F9: 0x7A85, # 窅
0xF9FA: 0x7A8E, # 窎
0xF9FB: 0x7ABE, # 窾
0xF9FC: 0x75A2, # 疢
0xF9FD: 0x75AD, # 疭
0xF9FE: 0x75CE # 痎
})
# Area FA
_update({
0xFAA1: 0x3FA6, # 㾦
0xFAA2: 0x7604, # 瘄
0xFAA3: 0x7606, # 瘆
0xFAA4: 0x7608, # 瘈
0xFAA5: 0x24ECA, # 𤻊
0xFAA6: 0x88C8, # 裈
0xFAA7: 0x7806, # 砆
0xFAA8: 0x7822, # 砢
0xFAA9: 0x7841, # 硁
0xFAAA: 0x7859, # 硙
0xFAAB: 0x785A, # 硚
0xFAAC: 0x7875, # 硵
0xFAAD: 0x7894, # 碔
0xFAAE: 0x40DA, # 䃚
0xFAAF: 0x790C, # 礌
0xFAB0: 0x771C, # 眜
0xFAB1: 0x251A7, # 𥆧
0xFAB2: 0x7786, # 瞆
0xFAB3: 0x778B, # 瞋
0xFAB4: 0x7564, # 畤
0xFAB5: 0x756C, # 畬
0xFAB6: 0x756F, # 畯
0xFAB7: 0x76C9, # 盉
0xFAB8: 0x76DD, # 盝
0xFAB9: 0x28C3E, # 𨰾
0xFABA: 0x497A, # 䥺
0xFABB: 0x94D3, # 铓
0xFABC: 0x94E6, # 铦
0xFABD: 0x9575, # 镵
0xFABE: 0x9520, # 锠
0xFABF: 0x9527, # 锧
0xFAC0: 0x28C4F, # 𨱏
0xFAC1: 0x9543, # 镃
0xFAC2: 0x953D, # 锽
0xFAC3: 0x28C4E, # 𨱎
0xFAC4: 0x28C54, # 𨱔
0xFAC5: 0x28C53, # 𨱓
0xFAC6: 0x9574, # 镴
0xFAC7: 0x79FE, # 秾
0xFAC8: 0x7A16, # 稖
0xFAC9: 0x415F, # 䅟
0xFACA: 0x7A5E, # 穞
0xFACB: 0x9E30, # 鸰
0xFACC: 0x9E34, # 鸴
0xFACD: 0x9E27, # 鸧
0xFACE: 0x9E2E, # 鸮
0xFACF: 0x9E52, # 鹒
0xFAD0: 0x9E53, # 鹓
0xFAD1: 0x9E59, # 鹙
0xFAD2: 0x9E56, # 鹖
0xFAD3: 0x9E61, # 鹡
0xFAD4: 0x9E6F, # 鹯
0xFAD5: 0x77DE, # 矞
0xFAD6: 0x76B6, # 皶
0xFAD7: 0x7F91, # 羑
0xFAD8: 0x7F93, # 羓
0xFAD9: 0x26393, # 𦎓
0xFADA: 0x7CA6, # 粦
0xFADB: 0x43AC, # 䎬
0xFADC: 0x8030, # 耰
0xFADD: 0x8064, # 聤
0xFADE: 0x8985, # 覅
0xFADF: 0x9892, # 颒
0xFAE0: 0x98A3, # 颣
0xFAE1: 0x8683, # 蚃
0xFAE2: 0x86B2, # 蚲
0xFAE3: 0x45AC, # 䖬
0xFAE4: 0x8705, # 蜅
0xFAE5: 0x8730, # 蜰
0xFAE6: 0x45EA, # 䗪
0xFAE7: 0x8758, # 蝘
0xFAE8: 0x7F4D, # 罍
0xFAE9: 0x7B4A, # 筊
0xFAEA: 0x41F2, # 䇲
0xFAEB: 0x7BF0, # 篰
0xFAEC: 0x7C09, # 簉
0xFAED: 0x7BEF, # 篯
0xFAEE: 0x7BF2, # 篲
0xFAEF: 0x7C20, # 簠
0xFAF0: 0x26A2D, # 𦨭
0xFAF1: 0x8C68, # 豨
0xFAF2: 0x8C6D, # 豭
0xFAF3: 0x8DF6, # 跶
0xFAF4: 0x8E04, # 踄
0xFAF5: 0x8E26, # 踦
0xFAF6: 0x8E16, # 踖
0xFAF7: 0x8E27, # 踧
0xFAF8: 0x8E53, # 蹓
0xFAF9: 0x8E50, # 蹐
0xFAFA: 0x8C90, # 貐
0xFAFB: 0x9702, # 霂
0xFAFC: 0x9F81, # 龁
0xFAFD: 0x9F82, # 龂
0xFAFE: 0x9C7D # 鱽
})
# Area FB
_update({
0xFBA1: 0x9C8A, # 鲊
0xFBA2: 0x9C80, # 鲀
0xFBA3: 0x9C8F, # 鲏
0xFBA4: 0x4C9F, # 䲟
0xFBA5: 0x9C99, # 鲙
0xFBA6: 0x9C97, # 鲗
0xFBA7: 0x29F7C, # 𩽼
0xFBA8: 0x9C96, # 鲖
0xFBA9: 0x29F7E, # 𩽾
0xFBAA: 0x29F83, # 𩾃
0xFBAB: 0x29F87, # 𩾇
0xFBAC: 0x9CC1, # 鳁
0xFBAD: 0x9CD1, # 鳑
0xFBAE: 0x9CDB, # 鳛
0xFBAF: 0x9CD2, # 鳒
0xFBB0: 0x29F8C, # 𩾌
0xFBB1: 0x9CE3, # 鳣
0xFBB2: 0x977A, # 靺
0xFBB3: 0x97AE, # 鞮
0xFBB4: 0x97A8, # 鞨
0xFBB5: 0x9B4C, # 魌
0xFBB6: 0x9B10, # 鬐
0xFBB7: 0x9B18, # 鬘
0xFBB8: 0x9E80, # 麀
0xFBB9: 0x9E95, # 麕
0xFBBA: 0x9E91, # 麑
})
# B库符号(部分非组合用字符)
symbolsB = UnicodeMap()
symbolsB.update({
0x8940: 0x1E37, # 国际音标 = ḷ
0x8941: 0x1E43, # 国际音标 = ṃ
0x8942: 0x1E47, # 国际音标 = ṇ
0x8943: 0x015E, # 国际音标 = Ş
0x8944: 0x015F, # 国际音标 = ş
0x8945: 0x0162, # 国际音标 = Ţ
0x8946: 0x0163, # 国际音标 = ţ
0x94C0: 0x2654, # 国际象棋白格白子-王 = ♔
0x94C1: 0x2655, # 国际象棋白格白子-后 = ♕
0x94C2: 0x2656, # 国际象棋白格白子-车 = ♖
0x94C3: 0x2658, # 国际象棋白格白子-马 = ♘
0x94C4: 0x2657, # 国际象棋白格白子-相 = ♗
0x94C5: 0x2659, # 国际象棋白格白子-卒 = ♙
0x94C6: 0x265A, # 国际象棋白格黑子-王 = ♚
0x94C7: 0x265B, # 国际象棋白格黑子-后 = ♛
0x94C8: 0x265C, # 国际象棋白格黑子-车 = ♜
0x94C9: 0x265E, # 国际象棋白格黑子-马 = ♞
0x94CA: 0x265D, # 国际象棋白格黑子-相 = ♝
0x94CB: 0x265F, # 国际象棋白格黑子-卒 = ♟
0x94EC: 0x2660, # 桥牌-黑桃 = ♠
0x94ED: 0x2665, # 桥牌-红桃 = ♥
0x94EE: 0x2666, # 桥牌-方框 = ♦
0x94EF: 0x2663, # 桥牌-梅花 = ♣
0x95F1: 0x1FA67, # 中国象棋黑子-将 = 🩧
0x95F2: 0x1FA64, # 中国象棋红子-车 = 🩤
0x95F3: 0x1FA63, # 中国象棋红子-马 = 🩣
0x95F4: 0x1FA65, # 中国象棋红子-炮 = 🩥
0x95F5: 0x1FA66, # 中国象棋红子-兵 = 🩦
0x95F6: 0x1FA62, # 中国象棋红子-相 = 🩢
0x95F7: 0x1FA61, # 中国象棋红子-士 = 🩡
0x95F8: 0x1FA60, # 中国象棋红子-帅 = 🩠
0x95F9: 0x1FA6B, # 中国象棋黑子-车 = 🩫
0x95FA: 0x1FA6A, # 中国象棋黑子-马 = 🩪
0x95FB: 0x1FA6C, # 中国象棋黑子-炮 = 🩬
0x95FC: 0x1FA6D, # 中国象棋黑子-卒 = 🩭
0x95FD: 0x1FA68, # 中国象棋黑子-士 = 🩨
0x95FE: 0x1FA69, # 中国象棋黑子-象 = 🩩
0x968F: 0x1D11E, # 其他符号 = 𝄞
0x97A0: 0x4DC0, # 八卦符号 = ䷀
0x97A1: 0x4DC1, # 八卦符号 = ䷁
0x97A2: 0x4DC2, # 八卦符号 = ䷂
0x97A3: 0x4DC3, # 八卦符号 = ䷃
0x97A4: 0x4DC4, # 八卦符号 = ䷄
0x97A5: 0x4DC5, # 八卦符号 = ䷅
0x97A6: 0x4DC6, # 八卦符号 = ䷆
0x97A7: 0x4DC7, # 八卦符号 = ䷇
0x97A8: 0x4DC8, # 八卦符号 = ䷈
0x97A9: 0x4DC9, # 八卦符号 = ䷉
0x97AA: 0x4DCA, # 八卦符号 = ䷊
0x97AB: 0x4DCB, # 八卦符号 = ䷋
0x97AC: 0x4DCC, # 八卦符号 = ䷌
0x97AD: 0x4DCD, # 八卦符号 = ䷍
0x97AE: 0x4DCE, # 八卦符号 = ䷎
0x97AF: 0x4DCF, # 八卦符号 = ䷏
0x97B0: 0x4DD0, # 八卦符号 = ䷐
0x97B1: 0x4DD1, # 八卦符号 = ䷑
0x97B2: 0x4DD2, # 八卦符号 = ䷒
0x97B3: 0x4DD3, # 八卦符号 = ䷓
0x97B4: 0x4DD4, # 八卦符号 = ䷔
0x97B5: 0x4DD5, # 八卦符号 = ䷕
0x97B6: 0x4DD6, # 八卦符号 = ䷖
0x97B7: 0x4DD7, # 八卦符号 = ䷗
0x97B8: 0x4DD8, # 八卦符号 = ䷘
0x97B9: 0x4DD9, # 八卦符号 = ䷙
0x97BA: 0x4DDA, # 八卦符号 = ䷚
0x97BB: 0x4DDB, # 八卦符号 = ䷛
0x97BC: 0x4DDC, # 八卦符号 = ䷜
0x97BD: 0x4DDD, # 八卦符号 = ䷝
0x97BE: 0x4DDE, # 八卦符号 = ䷞
0x97BF: 0x4DDF, # 八卦符号 = ䷟
0x97C0: 0x4DE0, # 八卦符号 = ䷠
0x97C1: 0x4DE1, # 八卦符号 = ䷡
0x97C2: 0x4DE2, # 八卦符号 = ䷢
0x97C3: 0x4DE3, # 八卦符号 = ䷣
0x97C4: 0x4DE4, # 八卦符号 = ䷤
0x97C5: 0x4DE5, # 八卦符号 = ䷥
0x97C6: 0x4DE6, # 八卦符号 = ䷦
0x97C7: 0x4DE7, # 八卦符号 = ䷧
0x97C8: 0x4DE8, # 八卦符号 = ䷨
0x97C9: 0x4DE9, # 八卦符号 = ䷩
0x97CA: 0x4DEA, # 八卦符号 = ䷪
0x97CB: 0x4DEB, # 八卦符号 = ䷫
0x97CC: 0x4DEC, # 八卦符号 = ䷬
0x97CD: 0x4DED, # 八卦符号 = ䷭
0x97CE: 0x4DEE, # 八卦符号 = ䷮
0x97CF: 0x4DEF, # 八卦符号 = ䷯
0x97D0: 0x4DF0, # 八卦符号 = ䷰
0x97D1: 0x4DF1, # 八卦符号 = ䷱
0x97D2: 0x4DF2, # 八卦符号 = ䷲
0x97D3: 0x4DF3, # 八卦符号 = ䷳
0x97D4: 0x4DF4, # 八卦符号 = ䷴
0x97D5: 0x4DF5, # 八卦符号 = ䷵
0x97D6: 0x4DF6, # 八卦符号 = ䷶
0x97D7: 0x4DF7, # 八卦符号 = ䷷
0x97D8: 0x4DF8, # 八卦符号 = ䷸
0x97D9: 0x4DF9, # 八卦符号 = ䷹
0x97DA: 0x4DFA, # 八卦符号 = ䷺
0x97DB: 0x4DFB, # 八卦符号 = ䷻
0x97DC: 0x4DFC, # 八卦符号 = ䷼
0x97DD: 0x4DFD, # 八卦符号 = ䷽
0x97DE: 0x4DFE, # 八卦符号 = ䷾
0x97DF: 0x4DFF, # 八卦符号 = ䷿
0x97E0: 0x2630, # 八卦符号 = ☰
0x97E1: 0x2637, # 八卦符号 = ☷
0x97E2: 0x2633, # 八卦符号 = ☳
0x97E3: 0x2634, # 八卦符号 = ☴
0x97E4: 0x2635, # 八卦符号 = ☵
0x97E5: 0x2632, # 八卦符号 = ☲
0x97E6: 0x2636, # 八卦符号 = ☶
0x97E7: 0x2631, # 八卦符号 = ☱
0x97EF: 0x2A0D, # 积分主值 = ⨍
0x97F0: 0x0274, # 国际音标 = ɴ
0x97F1: 0x0280, # 国际音标 = ʀ
0x97F2: 0x97F2, # 国际音标(ɔ̃)
0x97F3: 0x97F3, # 国际音标(ɛ̃)
0xA080: 0x00B7, # 外文间隔点 = ·
0xA08E: 0x2039, # 外文左单书名号 = ‹
0xA08F: 0x203A, # 外文右单书名号 = ›
0xA090: 0x00AB, # 外文左双书名号 = «
0xA091: 0x00BB, # 外文右双书名号 = »
0xBD8A: 0x2201, # 补集 = ∁
0xBD8B: 0x2115, # 集合符号N = ℕ
0xBD8C: 0x2124, # 集合符号Z = ℤ
0xBD8D: 0x211A, # 集合符号Q = ℚ
0xBD8E: 0x211D, # 集合符号R = ℝ
0xBD8F: 0x2102, # 集合符号C = ℂ
0xBD90: 0x00AC, # 否定符号 = ¬
0xBD93: 0xBD93, # 不属于(∈ + \)
0xBD94: 0xBD94, # 不属于(∈ + |)
0xBD95: 0x220B, # 属于 = ∋
0xBD96: 0x220C, # 不属于 = ∌
0xBD97: 0xBD97, # 不属于(∋ + |)
0xBD98: 0xBD98, # 不属于(∌ + \)
0xBD99: 0x22FD, # 不属于 = ⋽
0xBD9A: 0xBD9A, # 不等于(= + \)
0xBD9B: 0x1D463 # 𝑣
})
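# The tables above follow one convention: each 16-bit source code maps to a
# Unicode code point, and characters with no Unicode equivalent map to
# themselves. A minimal, self-contained decoding sketch; `decode_codes` and
# `sample` are illustrative names, not part of this module:

```python
# Illustrative sketch only: `decode_codes` and `sample` are hypothetical
# names, not part of this module. Given a code -> Unicode table like the
# areas above, decoding reduces to a dict lookup; codes absent from the
# table pass through unchanged (mirroring the self-mapping convention).
def decode_codes(codes, table):
    """Translate each 16-bit code through `table` and join the result."""
    return "".join(chr(table.get(code, code)) for code in codes)

# Two rows borrowed from Area A6: fullwidth colon and partial derivative.
sample = {0xA69F: 0xFF1A, 0xA6B9: 0x2202}
print(decode_codes([0xA69F, 0xA6B9], sample))  # ：∂
```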
"MIT"
] | null | null | null | pypxl/errors.py | Kile/pypxl | 0aabe5492386bffc1e246100cb55448bbac521ec | [
"MIT"
] | null | null | null | class PxlapiException(Exception):
"""
The base exception for anything related to pypxl
"""
pass
class InvalidFlag(PxlapiException):
pass
class InvalidFilter(PxlapiException):
pass
class InvalidEyes(PxlapiException):
pass
class TooManyCharacters(PxlapiException):
pass
class InvalidSafety(PxlapiException):
pass
class PxlObjectError(PxlapiException):
"""
	A class from which all errors originating from using the PxlObject derive
"""
pass
class InvalidBytes(PxlObjectError):
pass | 18.551724 | 75 | 0.728625 | 52 | 538 | 7.538462 | 0.519231 | 0.160714 | 0.306122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204461 | 538 | 29 | 76 | 18.551724 | 0.915888 | 0.223048 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
6e5f8bfb8859c97984af510e67f81278396d3ad6 | 277 | py | Python | 1 ano/logica-de-programacao/list-telefone-lucio.py | ThiagoPereira232/tecnico-informatica | 6b55ecf34501b38052943acf1b37074e3472ce6e | [
"MIT"
] | 1 | 2021-09-24T16:26:04.000Z | 2021-09-24T16:26:04.000Z | 1 ano/logica-de-programacao/list-telefone-lucio.py | ThiagoPereira232/tecnico-informatica | 6b55ecf34501b38052943acf1b37074e3472ce6e | [
"MIT"
] | null | null | null | 1 ano/logica-de-programacao/list-telefone-lucio.py | ThiagoPereira232/tecnico-informatica | 6b55ecf34501b38052943acf1b37074e3472ce6e | [
"MIT"
] | null | null | null | n = [0,0,0,0,0,0,0,0,0,0]
t = [0,0,0,0,0,0,0,0,0,0]
c=0
while(c<10):
n[c]=input("Digite o nome")
t[c]=input("Digite o telefone")
c+=1
const=""
while(const!="fim"):
cons=input("Digite nome a consultar")
if(n[c]==const):
print(f"TEl: {t[c]}")
c+=1 | 21.307692 | 41 | 0.516245 | 62 | 277 | 2.306452 | 0.33871 | 0.251748 | 0.335664 | 0.391608 | 0.13986 | 0.13986 | 0.13986 | 0.13986 | 0.13986 | 0.13986 | 0 | 0.113122 | 0.202166 | 277 | 13 | 42 | 21.307692 | 0.533937 | 0 | 0 | 0.153846 | 0 | 0 | 0.241007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6e710c139901b3edb6aaa6a1f60ac54de8da8353 | 209 | py | Python | mrq_monitor.py | HyokaChen/violet | b89ddb4f909c2a40e76d89b665949e55086a7a80 | [
"Apache-2.0"
] | 1 | 2020-07-29T15:49:35.000Z | 2020-07-29T15:49:35.000Z | mrq_monitor.py | HyokaChen/violet | b89ddb4f909c2a40e76d89b665949e55086a7a80 | [
"Apache-2.0"
] | 1 | 2019-12-19T10:19:57.000Z | 2019-12-19T11:15:28.000Z | mrq_monitor.py | EmptyChan/violet | b89ddb4f909c2a40e76d89b665949e55086a7a80 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created with IntelliJ IDEA.
Description:
User: jinhuichen
Date: 3/28/2018 4:17 PM
Description:
"""
from mrq.dashboard.app import main
if __name__ == '__main__':
main() | 16.076923 | 34 | 0.650718 | 28 | 209 | 4.571429 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065476 | 0.196172 | 209 | 13 | 35 | 16.076923 | 0.696429 | 0.569378 | 0 | 0 | 0 | 0 | 0.103896 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6e74495ac01d11fb500db642fc48819334b6af0a | 140 | py | Python | k8s/the-project/kubeless/ok-func.py | cjimti/mk | b303e147da77776baf5fee337e356ebeccbe2c01 | [
"MIT"
] | 1 | 2019-04-18T09:52:48.000Z | 2019-04-18T09:52:48.000Z | k8s/the-project/kubeless/ok-func.py | cjimti/mk | b303e147da77776baf5fee337e356ebeccbe2c01 | [
"MIT"
] | null | null | null | k8s/the-project/kubeless/ok-func.py | cjimti/mk | b303e147da77776baf5fee337e356ebeccbe2c01 | [
"MIT"
] | null | null | null | import requests
def ok(event, context):
url = "http://ok:8080/"
response = requests.request("GET", url)
return response.text
| 15.555556 | 43 | 0.65 | 18 | 140 | 5.055556 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036036 | 0.207143 | 140 | 8 | 44 | 17.5 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0.128571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6e824c90d5cc97b09e96bf2d9fa8d40cff2f3778 | 1,797 | py | Python | goatools/gosubdag/utils.py | camiloaruiz/goatools | 3da97251ccb6c5e90b616c3f625513f8aba5aa10 | [
"BSD-2-Clause"
] | null | null | null | goatools/gosubdag/utils.py | camiloaruiz/goatools | 3da97251ccb6c5e90b616c3f625513f8aba5aa10 | [
"BSD-2-Clause"
] | null | null | null | goatools/gosubdag/utils.py | camiloaruiz/goatools | 3da97251ccb6c5e90b616c3f625513f8aba5aa10 | [
"BSD-2-Clause"
] | null | null | null | """Small lightweight utilities used frequently in GOATOOLS."""
__copyright__ = "Copyright (C) 2016-2018, DV Klopfenstein, H Tang, All rights reserved."
__author__ = "DV Klopfenstein"
def extract_kwargs(args, exp_keys, exp_elems):
"""Return user-specified keyword args in a dictionary and a set (for True/False items)."""
arg_dict = {} # For arguments that have values
arg_set = set() # For arguments that are True or False (present in set if True)
for key, val in args.items():
if exp_keys is not None and key in exp_keys and val:
arg_dict[key] = val
elif exp_elems is not None and key in exp_elems and val:
arg_set.add(key)
return {'dict':arg_dict, 'set':arg_set}
def get_kwargs_set(args, exp_elem2dflt):
"""Return user-specified keyword args in a dictionary and a set (for True/False items)."""
arg_set = set() # For arguments that are True or False (present in set if True)
# Add user items if True
for key, val in args.items():
if exp_elem2dflt is not None and key in exp_elem2dflt and val:
arg_set.add(key)
# Add defaults if needed
for key, dfltval in exp_elem2dflt.items():
if dfltval and key not in arg_set:
arg_set.add(key)
return arg_set
def get_kwargs(args, exp_keys, exp_elems):
"""Return user-specified keyword args in a dictionary and a set (for True/False items)."""
arg_dict = {} # For arguments that have values
for key, val in args.items():
if exp_keys is not None and key in exp_keys and val:
arg_dict[key] = val
elif exp_elems is not None and key in exp_elems and val:
arg_dict[key] = True
return arg_dict
# Copyright (C) 2016-2018, DV Klopfenstein, H Tang, All rights reserved.
| 41.790698 | 94 | 0.668893 | 289 | 1,797 | 4.010381 | 0.207612 | 0.041415 | 0.038827 | 0.051769 | 0.788611 | 0.735979 | 0.712683 | 0.695427 | 0.695427 | 0.695427 | 0 | 0.014793 | 0.247635 | 1,797 | 42 | 95 | 42.785714 | 0.842456 | 0.342237 | 0 | 0.571429 | 0 | 0 | 0.079654 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6e82b8d1720684c00d864fb512765fbff3379ce5 | 309 | py | Python | nicos_ess/ymir/setups/forwarder.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 1 | 2021-03-26T10:30:45.000Z | 2021-03-26T10:30:45.000Z | nicos_ess/ymir/setups/forwarder.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 91 | 2020-08-18T09:20:26.000Z | 2022-02-01T11:07:14.000Z | nicos_ess/ymir/setups/forwarder.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 3 | 2020-08-04T18:35:05.000Z | 2021-04-16T11:22:08.000Z | description = 'Monitors the status of the Forwarder'
devices = dict(
KafkaForwarder=device(
'nicos_ess.devices.forwarder.EpicsKafkaForwarder',
description='Monitors the status of the Forwarder',
statustopic='UTGARD_forwarderStatus',
brokers=['172.30.242.20:9092']),
)
| 30.9 | 59 | 0.68932 | 32 | 309 | 6.59375 | 0.6875 | 0.180095 | 0.208531 | 0.265403 | 0.398104 | 0.398104 | 0.398104 | 0 | 0 | 0 | 0 | 0.056911 | 0.203884 | 309 | 9 | 60 | 34.333333 | 0.800813 | 0 | 0 | 0 | 0 | 0 | 0.514563 | 0.223301 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6e942a1e8c0fd4f03d779fd36629d8f97651ff14 | 364 | py | Python | tests/tfgraph/utils/test_datasets.py | tfgraph/tfgraph | 19ae968b3060275c631dc601757646abaf1f58a1 | [
"Apache-2.0"
] | 4 | 2017-07-23T13:48:35.000Z | 2021-12-03T18:11:50.000Z | tests/tfgraph/utils/test_datasets.py | tfgraph/tfgraph | 19ae968b3060275c631dc601757646abaf1f58a1 | [
"Apache-2.0"
] | 21 | 2017-07-23T13:15:20.000Z | 2020-09-28T02:13:11.000Z | tests/tfgraph/utils/test_datasets.py | tfgraph/tfgraph | 19ae968b3060275c631dc601757646abaf1f58a1 | [
"Apache-2.0"
] | 1 | 2017-07-28T10:28:04.000Z | 2017-07-28T10:28:04.000Z | import tfgraph
def test_data_sets_naive_4():
assert tfgraph.DataSets.naive_4().shape == (8, 2)
def test_data_sets_naive_6():
assert tfgraph.DataSets.naive_6().shape == (9, 2)
def test_data_sets_compose():
assert tfgraph.DataSets.compose_from_path("./datasets/wiki-Vote/wiki-Vote.csv",
True).shape == (65499, 2)
| 24.266667 | 81 | 0.653846 | 51 | 364 | 4.372549 | 0.45098 | 0.09417 | 0.147982 | 0.201794 | 0.255605 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.211538 | 364 | 14 | 82 | 26 | 0.728223 | 0 | 0 | 0 | 0 | 0 | 0.093407 | 0.093407 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.375 | true | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ea886c1faad67e0969ccc2de41ff81ea08b3480 | 196 | py | Python | app/forms.py | haibincoder/DjangoTensorflow | 7fc606fa5121f0c48d7c8e649775094d86e6387a | [
"MIT"
] | 17 | 2018-07-21T04:14:09.000Z | 2022-03-09T08:32:49.000Z | app/forms.py | haibincoder/DjangoTensorflow | 7fc606fa5121f0c48d7c8e649775094d86e6387a | [
"MIT"
] | 24 | 2020-01-28T22:11:42.000Z | 2022-03-11T23:47:43.000Z | app/forms.py | haibincoder/DjangoTensorflow | 7fc606fa5121f0c48d7c8e649775094d86e6387a | [
"MIT"
] | 7 | 2018-12-13T08:55:07.000Z | 2021-06-26T08:08:01.000Z | from django import forms
#from app.models import Image
# class ImageForm(forms.ModelForm):
# class Meta:
# model = Image
# name = ['name']
# location = ['location']
| 17.818182 | 35 | 0.591837 | 21 | 196 | 5.52381 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.290816 | 196 | 10 | 36 | 19.6 | 0.834532 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6ecdc7cb0a885b814b6a6f30cd78f9066a128b3b | 381 | py | Python | flask_miracle/__init__.py | tdpsk/flask-miracle-acl | 426a9845854678d00108cf5f91ada9323968b524 | [
"BSD-2-Clause"
] | 2 | 2018-01-17T15:57:38.000Z | 2018-02-06T00:03:16.000Z | flask_miracle/__init__.py | tdpsk/flask-miracle-acl | 426a9845854678d00108cf5f91ada9323968b524 | [
"BSD-2-Clause"
] | null | null | null | flask_miracle/__init__.py | tdpsk/flask-miracle-acl | 426a9845854678d00108cf5f91ada9323968b524 | [
"BSD-2-Clause"
] | null | null | null | '''
flask_miracle
-------------
This module provides a fabric layer between the Flask framework and the
Miracle ACL library.
:copyright: (c) 2017 by Timo Puschkasch.
:license: BSD, see LICENSE for more details.
'''
from .base import Acl
from .functions import check_all, check_any, set_current_roles
from .decorators import macl_check_any, macl_check_all
| 27.214286 | 75 | 0.716535 | 53 | 381 | 4.981132 | 0.716981 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.191601 | 381 | 13 | 76 | 29.307692 | 0.844156 | 0.543307 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
6ecf4bd8dbec5f43c3a5dbb66ff367208ec1e14c | 73 | py | Python | virustotal_intelligence/__init__.py | elastic/opencti-connector-vti | 52bd6e8c40a8b96f34316b87d4550f308844abbe | [
"Apache-2.0"
] | 1 | 2022-02-11T13:36:11.000Z | 2022-02-11T13:36:11.000Z | virustotal_intelligence/__init__.py | elastic/opencti-connector-vti | 52bd6e8c40a8b96f34316b87d4550f308844abbe | [
"Apache-2.0"
] | null | null | null | virustotal_intelligence/__init__.py | elastic/opencti-connector-vti | 52bd6e8c40a8b96f34316b87d4550f308844abbe | [
"Apache-2.0"
] | null | null | null | __version__ = "5.1.3"
LOGGER_NAME = "connector.virustotal_intelligence"
| 18.25 | 49 | 0.780822 | 9 | 73 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.09589 | 73 | 3 | 50 | 24.333333 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0.520548 | 0.452055 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ed16212d719203dc9c8b385ee044edff5accf55 | 205 | py | Python | html/semantics/scripting-1/the-script-element/module/resources/delayed-modulescript.py | ziransun/wpt | ab8f451eb39eb198584d547f5d965ef54df2a86a | [
"BSD-3-Clause"
] | 8 | 2019-04-09T21:13:05.000Z | 2021-11-23T17:25:18.000Z | html/semantics/scripting-1/the-script-element/module/resources/delayed-modulescript.py | ziransun/wpt | ab8f451eb39eb198584d547f5d965ef54df2a86a | [
"BSD-3-Clause"
] | 21 | 2021-03-31T19:48:22.000Z | 2022-03-12T00:24:53.000Z | html/semantics/scripting-1/the-script-element/module/resources/delayed-modulescript.py | ziransun/wpt | ab8f451eb39eb198584d547f5d965ef54df2a86a | [
"BSD-3-Clause"
] | 11 | 2019-04-12T01:20:16.000Z | 2021-11-23T17:25:02.000Z | import time
def main(request, response):
delay = float(request.GET.first("ms", 500))
    time.sleep(delay / 1E3)
return [("Content-type", "text/javascript")], "export let delayedLoaded = true;"
| 25.625 | 84 | 0.663415 | 26 | 205 | 5.230769 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02924 | 0.165854 | 205 | 7 | 85 | 29.285714 | 0.766082 | 0 | 0 | 0 | 0 | 0 | 0.297561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6ed9b7c639af766ee8c222459a703935071b14fd | 1,625 | py | Python | shuttl/Storage.py | shuttl-io/shuttl-cms | 50c85db0de42e901c371561270be6425cc65eccc | [
"MIT"
] | 2 | 2017-06-26T18:06:58.000Z | 2017-10-11T21:45:29.000Z | shuttl/Storage.py | shuttl-io/shuttl-cms | 50c85db0de42e901c371561270be6425cc65eccc | [
"MIT"
] | null | null | null | shuttl/Storage.py | shuttl-io/shuttl-cms | 50c85db0de42e901c371561270be6425cc65eccc | [
"MIT"
] | null | null | null | import boto3 as aws
import botocore
from shuttl import app
## Class for AWS S3 storage
class Storage:
bucket = None ##< the bucket the file belongs to
s3 = aws.resource("s3") ##< The s3 instance
@classmethod
def GetBucket(cls, bucketName):
try:
cls.bucket = cls.s3.Bucket(bucketName)
pass
except botocore.exceptions.NoCredentialsError:
pass
pass
@classmethod
def Upload(cls, fileObj):
if app.config["TESTING"]:
return
if cls.bucket is None:
cls.GetBucket("shuttl.io")
pass
try:
return cls.bucket.upload_file(fileObj.filePath, fileObj.filePath)
except botocore.exceptions.NoCredentialsError:
pass
pass
@classmethod
def Delete(cls, fileObj, bucketName="shuttl.io"):
if app.config["TESTING"]:
return
try:
obj = cls.s3.Object(bucketName, fileObj.filePath)
return obj.delete()
		except (botocore.exceptions.ClientError, botocore.exceptions.NoCredentialsError):
pass
pass
@classmethod
def Download(cls, fileObj, bucketName="shuttl.io"):
if app.config["TESTING"]:
return
try:
obj = cls.s3.Object(bucketName, fileObj.filePath)
return obj.download_file(fileObj.filePath)
except botocore.exceptions.ClientError:
raise FileNotFoundError("No such file or directory: {}".format(fileObj.filePath))
except botocore.exceptions.NoCredentialsError:
pass
pass
| 28.508772 | 93 | 0.603692 | 167 | 1,625 | 5.862275 | 0.305389 | 0.110317 | 0.122574 | 0.163432 | 0.546476 | 0.482125 | 0.482125 | 0.42288 | 0.210419 | 0.210419 | 0 | 0.007143 | 0.310769 | 1,625 | 56 | 94 | 29.017857 | 0.866964 | 0.044923 | 0 | 0.604167 | 0 | 0 | 0.051133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.208333 | 0.0625 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
6edbba4a74356991de5aa46330579ce20ab0026e | 245 | py | Python | Controller/hone_control.py | pupeng/hone | 8fb2618a51478049c73158f1d54e7165a37dffcf | [
"BSD-3-Clause"
] | 5 | 2017-02-18T12:39:13.000Z | 2021-03-29T09:21:58.000Z | Controller/hone_control.py | pupeng/hone | 8fb2618a51478049c73158f1d54e7165a37dffcf | [
"BSD-3-Clause"
] | null | null | null | Controller/hone_control.py | pupeng/hone | 8fb2618a51478049c73158f1d54e7165a37dffcf | [
"BSD-3-Clause"
] | 7 | 2015-08-12T10:08:21.000Z | 2018-08-30T12:55:25.000Z | # Copyright (c) 2011-2013 Peng Sun. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the COPYRIGHT file.
# hone_control.py
# a placeholder file for any control jobs HONE runtime generates
| 35 | 72 | 0.767347 | 42 | 245 | 4.452381 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039801 | 0.179592 | 245 | 6 | 73 | 40.833333 | 0.890547 | 0.95102 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6e17097d88bd49914581f2dfe02ed8fa34bee9d4 | 254 | py | Python | backend/authentication/admin.py | jklewis99/hypertriviation | e12be87e978505fb3a73f4fc606173f41a3aee81 | [
"MIT"
] | 1 | 2022-03-27T19:39:07.000Z | 2022-03-27T19:39:07.000Z | backend/authentication/admin.py | jklewis99/hypertriviation | e12be87e978505fb3a73f4fc606173f41a3aee81 | [
"MIT"
] | 5 | 2022-03-27T19:32:54.000Z | 2022-03-31T23:25:44.000Z | backend/authentication/admin.py | jklewis99/hypertriviation | e12be87e978505fb3a73f4fc606173f41a3aee81 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import HypertriviationUser
class HypertriviationUserAdmin(admin.ModelAdmin):
model = HypertriviationUser
# Register your models here.
admin.site.register(HypertriviationUser, HypertriviationUserAdmin) | 25.4 | 66 | 0.838583 | 24 | 254 | 8.875 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106299 | 254 | 10 | 66 | 25.4 | 0.938326 | 0.102362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
6e18dbf82c0ab208ca098975575465ec97248c7b | 269 | py | Python | backend/validators/authorization_val.py | NelsonM9/senaSoft | d72b5ed32b86a53aac962ec440d84ecce4555780 | [
"Apache-2.0"
] | null | null | null | backend/validators/authorization_val.py | NelsonM9/senaSoft | d72b5ed32b86a53aac962ec440d84ecce4555780 | [
"Apache-2.0"
] | null | null | null | backend/validators/authorization_val.py | NelsonM9/senaSoft | d72b5ed32b86a53aac962ec440d84ecce4555780 | [
"Apache-2.0"
] | null | null | null | from marshmallow import validate, fields, Schema
class AuthorizationVal(Schema):
    id_auth = fields.Str(required=True, validate=validate.Length(max=10))
    id_o = fields.Str(required=True, validate=validate.Length(max=10))
file_a = fields.Raw(required=True)
| 38.428571 | 74 | 0.758364 | 37 | 269 | 5.432432 | 0.567568 | 0.179104 | 0.169154 | 0.208955 | 0.487562 | 0.487562 | 0.487562 | 0.487562 | 0.487562 | 0 | 0 | 0.016949 | 0.122677 | 269 | 6 | 75 | 44.833333 | 0.834746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6e2ec7ad4cbde5fb55995e9127da176c9b74eb60 | 167 | py | Python | app/config.py | akabbeke/sd44_server | 7755567c7b273a5ac23b2aacc52477dd4a11d290 | [
"MIT"
] | null | null | null | app/config.py | akabbeke/sd44_server | 7755567c7b273a5ac23b2aacc52477dd4a11d290 | [
"MIT"
] | null | null | null | app/config.py | akabbeke/sd44_server | 7755567c7b273a5ac23b2aacc52477dd4a11d290 | [
"MIT"
] | null | null | null | import yaml
import os
config_file = os.path.join(os.path.dirname(__file__), "config/config.yml")
with open(config_file, 'r') as stream:
CONFIG = yaml.load(stream) | 27.833333 | 74 | 0.736527 | 27 | 167 | 4.333333 | 0.555556 | 0.17094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11976 | 167 | 6 | 75 | 27.833333 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6e362218fdee0a3ed3f2a33dd6f1acddc1fd9111 | 106 | py | Python | native_shortuuid/apps.py | foundertherapy/django-nativeshortuuidfield | 47e5a5d5c0f4caedbadb88ed6ac279f513ae522a | [
"MIT"
] | 5 | 2020-09-30T00:21:05.000Z | 2022-01-10T08:56:47.000Z | native_shortuuid/apps.py | foundertherapy/django-nativeshortuuidfield | 47e5a5d5c0f4caedbadb88ed6ac279f513ae522a | [
"MIT"
] | 1 | 2020-03-11T15:39:44.000Z | 2020-03-11T15:39:44.000Z | native_shortuuid/apps.py | foundertherapy/django-nativeshortuuidfield | 47e5a5d5c0f4caedbadb88ed6ac279f513ae522a | [
"MIT"
] | 1 | 2021-03-03T12:49:52.000Z | 2021-03-03T12:49:52.000Z | from django.apps import AppConfig
class NativeShortuuidConfig(AppConfig):
name = 'native_shortuuid'
| 17.666667 | 39 | 0.792453 | 11 | 106 | 7.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141509 | 106 | 5 | 40 | 21.2 | 0.912088 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6e3ac431c3e1e4eb2271fa87cec379de652a2355 | 588 | py | Python | tests/tests/test_analysis/test_utils.py | klavinslab/coral | 17f59591211562a59a051f474cd6cecba4829df9 | [
"MIT"
] | 34 | 2015-12-26T22:13:51.000Z | 2021-11-17T11:46:37.000Z | tests/tests/test_analysis/test_utils.py | klavinslab/coral | 17f59591211562a59a051f474cd6cecba4829df9 | [
"MIT"
] | 13 | 2015-09-11T23:27:51.000Z | 2018-06-25T20:44:28.000Z | tests/tests/test_analysis/test_utils.py | klavinslab/coral | 17f59591211562a59a051f474cd6cecba4829df9 | [
"MIT"
] | 14 | 2015-10-08T17:08:48.000Z | 2022-02-22T04:25:54.000Z | '''
Tests for utils submodule of the analysis module.
'''
from nose.tools import assert_equal, assert_raises
from coral import analysis, DNA, RNA, Peptide
def test_utils():
test_DNA = DNA('ATAGCGATACGAT')
test_RNA = RNA('AUGCGAUAGCGAU')
test_peptide = Peptide('msvkkkpvqg')
test_str = 'msvkkkpvgq'
assert_equal(analysis.utils.sequence_type(test_DNA), 'dna')
assert_equal(analysis.utils.sequence_type(test_RNA), 'rna')
assert_equal(analysis.utils.sequence_type(test_peptide), 'peptide')
assert_raises(Exception, analysis.utils.sequence_type, test_str)
| 29.4 | 71 | 0.748299 | 77 | 588 | 5.467532 | 0.376623 | 0.104513 | 0.199525 | 0.23753 | 0.353919 | 0.285036 | 0.285036 | 0 | 0 | 0 | 0 | 0 | 0.139456 | 588 | 19 | 72 | 30.947368 | 0.832016 | 0.083333 | 0 | 0 | 0 | 0 | 0.111321 | 0 | 0 | 0 | 0 | 0 | 0.454545 | 1 | 0.090909 | false | 0 | 0.181818 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6e4d07aff129e622dcf6a63b48636b52ecc07cc1 | 74 | py | Python | python/828.unique-letter-string.py | stavanmehta/leetcode | 1224e43ce29430c840e65daae3b343182e24709c | [
"Apache-2.0"
] | null | null | null | python/828.unique-letter-string.py | stavanmehta/leetcode | 1224e43ce29430c840e65daae3b343182e24709c | [
"Apache-2.0"
] | null | null | null | python/828.unique-letter-string.py | stavanmehta/leetcode | 1224e43ce29430c840e65daae3b343182e24709c | [
"Apache-2.0"
] | null | null | null | class Solution:
def uniqueLetterString(self, S: str) -> int:
| 18.5 | 48 | 0.608108 | 8 | 74 | 5.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283784 | 74 | 3 | 49 | 24.666667 | 0.849057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6e4dee90bdd936152cb862e03942c4be61d9a3e5 | 249 | py | Python | 2.datatype/1.number_typecasting.py | Tazri/Python | f7ca625800229c8a7e20b64810d6e162ccb6b09f | [
"DOC"
] | null | null | null | 2.datatype/1.number_typecasting.py | Tazri/Python | f7ca625800229c8a7e20b64810d6e162ccb6b09f | [
"DOC"
] | null | null | null | 2.datatype/1.number_typecasting.py | Tazri/Python | f7ca625800229c8a7e20b64810d6e162ccb6b09f | [
"DOC"
] | null | null | null | number_int = int("32");
number_float= float(32);
number_complex = complex(3222342332432435435345324435324523423);
print(type(number_int),": ",number_int);
print(type(number_float),": ",number_float);
print(type(number_complex),": ",number_complex); | 35.571429 | 64 | 0.767068 | 30 | 249 | 6.066667 | 0.266667 | 0.148352 | 0.247253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174468 | 0.056225 | 249 | 7 | 65 | 35.571429 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0.032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
6e535cb6e52945115eb6d7ac8b6103b52efc86b8 | 92 | py | Python | app_kasir/apps.py | rizkyarwn/projectkasir | 6524a052bcb52534524db1c5fba05d31a0f0d801 | [
"MIT"
] | 2 | 2018-06-28T10:52:47.000Z | 2018-06-28T10:52:48.000Z | app_kasir/apps.py | rizkyarwn/projectkasir | 6524a052bcb52534524db1c5fba05d31a0f0d801 | [
"MIT"
] | null | null | null | app_kasir/apps.py | rizkyarwn/projectkasir | 6524a052bcb52534524db1c5fba05d31a0f0d801 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class AppKasirConfig(AppConfig):
name = 'app_kasir'
| 15.333333 | 33 | 0.76087 | 11 | 92 | 6.272727 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163043 | 92 | 5 | 34 | 18.4 | 0.896104 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
2819b274258b4f59c03325199e582718bece2d5e | 536 | py | Python | edinet_baseline_hourly_module/edinet_models/pyEMIS/ConsumptionModels/__init__.py | BeeGroup-cimne/module_edinet | 0cda52e9d6222a681f85567e9bf0f7e5885ebf5e | [
"MIT"
] | null | null | null | edinet_baseline_hourly_module/edinet_models/pyEMIS/ConsumptionModels/__init__.py | BeeGroup-cimne/module_edinet | 0cda52e9d6222a681f85567e9bf0f7e5885ebf5e | [
"MIT"
] | 13 | 2021-03-25T22:24:38.000Z | 2022-03-12T00:56:45.000Z | edinet_baseline_hourly_module/edinet_models/pyEMIS/ConsumptionModels/__init__.py | BeeGroup-cimne/module_edinet | 0cda52e9d6222a681f85567e9bf0f7e5885ebf5e | [
"MIT"
] | 1 | 2019-03-13T09:49:56.000Z | 2019-03-13T09:49:56.000Z | from constantMonthlyModel import ConstantMonthlyModel
from constantModel import ConstantModel
from twoParameterModel import TwoParameterModel
from threeParameterModel import ThreeParameterModel
from anyModel import AnyModelFactory
from schoolModel import SchoolModel, SchoolModelFactory
from recurrentModel import RecurrentModel, RecurrentModelFactory
from weeklyModel import WeeklyModel, WeeklyModelFactory
from monthlyModel import MonthlyModel, MonthlyModelFactory
from nanModel import NanModel
from profile import ConsumptionProfile
| 44.666667 | 64 | 0.902985 | 48 | 536 | 10.083333 | 0.395833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089552 | 536 | 11 | 65 | 48.727273 | 0.991803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
284932efb61177d76bc830e6f9381821ff06ec7e | 997 | py | Python | classes/migrations/0007_auto_20201206_1223.py | henrylameck/school_management_system | 38c270977d001d28f2338eb90fffc3e8c2598d06 | [
"MIT"
] | null | null | null | classes/migrations/0007_auto_20201206_1223.py | henrylameck/school_management_system | 38c270977d001d28f2338eb90fffc3e8c2598d06 | [
"MIT"
] | 3 | 2021-06-05T00:01:48.000Z | 2021-09-22T19:39:12.000Z | classes/migrations/0007_auto_20201206_1223.py | henrylameck/school_management_system | 38c270977d001d28f2338eb90fffc3e8c2598d06 | [
"MIT"
] | null | null | null | # Generated by Django 3.1 on 2020-12-06 09:23
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('classes', '0006_auto_20201205_2224'),
]
operations = [
migrations.RemoveField(
model_name='classsyllabus',
name='components',
),
migrations.AddField(
model_name='classsyllabus',
name='assignment',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='classsyllabus',
name='practical',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='classsyllabus',
name='project',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='classsyllabus',
name='theory',
field=models.BooleanField(default=False),
),
]
| 26.236842 | 53 | 0.563691 | 83 | 997 | 6.674699 | 0.46988 | 0.081227 | 0.198556 | 0.234657 | 0.570397 | 0.50722 | 0.427798 | 0.427798 | 0.427798 | 0.427798 | 0 | 0.044709 | 0.326981 | 997 | 37 | 54 | 26.945946 | 0.780924 | 0.043129 | 0 | 0.580645 | 1 | 0 | 0.143908 | 0.02416 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032258 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2853e0d7d747d6c3288b88732191d861e6eecd97 | 427 | py | Python | scipy/ndimage/tests/__init__.py | Ennosigaeon/scipy | 2d872f7cf2098031b9be863ec25e366a550b229c | [
"BSD-3-Clause"
] | 9,095 | 2015-01-02T18:24:23.000Z | 2022-03-31T20:35:31.000Z | scipy/ndimage/tests/__init__.py | Ennosigaeon/scipy | 2d872f7cf2098031b9be863ec25e366a550b229c | [
"BSD-3-Clause"
] | 11,500 | 2015-01-01T01:15:30.000Z | 2022-03-31T23:07:35.000Z | scipy/ndimage/tests/__init__.py | Ennosigaeon/scipy | 2d872f7cf2098031b9be863ec25e366a550b229c | [
"BSD-3-Clause"
] | 5,838 | 2015-01-05T11:56:42.000Z | 2022-03-31T23:21:19.000Z |
from __future__ import annotations
from typing import List, Type
import numpy
# list of numarray data types
integer_types: List[Type] = [
numpy.int8, numpy.uint8, numpy.int16, numpy.uint16,
numpy.int32, numpy.uint32, numpy.int64, numpy.uint64]
float_types: List[Type] = [numpy.float32, numpy.float64]
complex_types: List[Type] = [numpy.complex64, numpy.complex128]
types: List[Type] = integer_types + float_types
| 26.6875 | 63 | 0.754098 | 59 | 427 | 5.305085 | 0.457627 | 0.127796 | 0.166134 | 0.172524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.138173 | 427 | 15 | 64 | 28.466667 | 0.788043 | 0.063232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
285a8bab289bfb8c666439b93d30129bb0e1ff4e | 2,076 | py | Python | src/network/topology.py | joelwanner/smtax | 7d46f02cb3f15f2057022c574e0f3a8e5236d647 | [
"MIT"
] | null | null | null | src/network/topology.py | joelwanner/smtax | 7d46f02cb3f15f2057022c574e0f3a8e5236d647 | [
"MIT"
] | null | null | null | src/network/topology.py | joelwanner/smtax | 7d46f02cb3f15f2057022c574e0f3a8e5236d647 | [
"MIT"
] | null | null | null | from network.route import *
class Host(object):
def __init__(self, name, r, s, a=1):
self.name = name
self.receiving_cap = r
self.sending_cap = s
self.amp_factor = a
self.links = []
def add_link(self, l):
self.links.append(l)
def __str__(self):
if self.amp_factor == 1:
return "%s(%d,%d)" % (self.name, self.receiving_cap, self.sending_cap)
else:
return "%s(%d,%d,%d)" % (self.name, self.receiving_cap, self.sending_cap, self.amp_factor)
def __repr__(self):
return self.name
class Server(Host):
def __init__(self, name, r, s, a):
super().__init__(name, r, s, a)
def __str__(self):
return "_" + super().__str__()
class Router(Server):
def __init__(self, name, r, s):
super().__init__(name, r, s, 1)
class Link(object):
def __init__(self, h1, h2, c):
self.h1 = h1
self.h2 = h2
self.capacity = c
def neighbor(self, h):
if h == self.h1:
return self.h2
elif h == self.h2:
return self.h1
else:
return None
def __repr__(self):
return "%s--%s" % (self.h1.name, self.h2.name)
def __str__(self):
return "%s:%d" % (self.__repr__(), self.capacity)
class Topology(object):
def __init__(self, hosts, links):
self.hosts = hosts
self.links = links
self.__routes = None
for l in links:
l.h1.add_link(l)
l.h2.add_link(l)
def get_routes(self):
if not self.__routes:
self.__routes = RoutingTable(self)
return self.__routes
def __str__(self):
host_str = ",\n\t".join([str(h) for h in self.hosts])
link_str = ",\n\t".join([str(l) for l in self.links])
return "hosts {\n\t%s\n}\nlinks {\n\t%s\n}" % (host_str, link_str)
@classmethod
def from_string(cls, s):
return parser.parse_network(s)
# TODO: remove workaround for circular dependencies
import interface.parse as parser
| 23.590909 | 102 | 0.560694 | 292 | 2,076 | 3.684932 | 0.226027 | 0.052045 | 0.051115 | 0.047398 | 0.173792 | 0.123606 | 0.107807 | 0.074349 | 0.074349 | 0.074349 | 0 | 0.01174 | 0.302505 | 2,076 | 87 | 103 | 23.862069 | 0.731354 | 0.023603 | 0 | 0.131148 | 0 | 0 | 0.038025 | 0 | 0 | 0 | 0 | 0.011494 | 0 | 1 | 0.245902 | false | 0 | 0.032787 | 0.081967 | 0.557377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
285ef9691cdce93a606e382f9fdd9a1cebb1c5b6 | 232 | py | Python | views.py | Rexypoo/shortnsweet | e773f01f2fdd6630b8d649232b48a753aa387c4f | [
"Apache-2.0"
] | null | null | null | views.py | Rexypoo/shortnsweet | e773f01f2fdd6630b8d649232b48a753aa387c4f | [
"Apache-2.0"
] | null | null | null | views.py | Rexypoo/shortnsweet | e773f01f2fdd6630b8d649232b48a753aa387c4f | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import get_object_or_404, redirect, render
from .models import ShortURL
def redirect_alias(request, short_name):
shorturl = get_object_or_404(ShortURL, alias=short_name)
return redirect(shorturl.url)
| 29 | 64 | 0.806034 | 33 | 232 | 5.393939 | 0.575758 | 0.101124 | 0.123596 | 0.157303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029557 | 0.125 | 232 | 7 | 65 | 33.142857 | 0.847291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
286195dbc7f21dde0f07a4dbc6375c32996ea510 | 561 | py | Python | oppadc/osutimingpoint.py | jamuwu/oppadc.py | 3faca744143575f0a4f12f213745b0f311973526 | [
"MIT"
] | 8 | 2019-11-01T00:03:52.000Z | 2021-01-02T18:33:31.000Z | oppadc/osutimingpoint.py | jamuwu/oppadc.py | 3faca744143575f0a4f12f213745b0f311973526 | [
"MIT"
] | 7 | 2019-12-16T16:29:07.000Z | 2021-02-22T01:01:22.000Z | oppadc/osutimingpoint.py | jamuwu/oppadc.py | 3faca744143575f0a4f12f213745b0f311973526 | [
"MIT"
] | 9 | 2019-12-16T21:58:21.000Z | 2022-02-02T12:18:45.000Z | class OsuTimingPoint(object):
"""
	represents a timing point in osu
if change is False:
ms_per_beat = -100.0 * bpm_multiplier
"""
def __init__(self, starttime:float or str=0.0, ms_per_beat:float or str=-100.0, change:bool=False):
self.starttime:float = float(starttime)
self.ms_per_beat:float = float(ms_per_beat)
self.change:bool = bool(change)
def __str__(self):
return self.__repr__()
def __repr__(self):
return f"<{self.__class__.__name__} {self.starttime}ms mspb={round(self.ms_per_beat, 1)}{' [change]' if self.change else ''}>"
| 29.526316 | 128 | 0.71836 | 87 | 561 | 4.229885 | 0.402299 | 0.067935 | 0.122283 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022774 | 0.139037 | 561 | 18 | 129 | 31.166667 | 0.73913 | 0.178253 | 0 | 0 | 0 | 0.111111 | 0.256637 | 0.121681 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.222222 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
286ba9afaaf93ad96524d8cf507a1bf2ad30a104 | 2,862 | py | Python | port_mapping.py | sbalasa/CiscoFMC | 024c9b6df3513e1e4a8e3e3f976a0c67b58c1909 | [
"MIT"
] | 1 | 2021-11-09T03:56:29.000Z | 2021-11-09T03:56:29.000Z | port_mapping.py | sbalasa/CiscoFMC | 024c9b6df3513e1e4a8e3e3f976a0c67b58c1909 | [
"MIT"
] | null | null | null | port_mapping.py | sbalasa/CiscoFMC | 024c9b6df3513e1e4a8e3e3f976a0c67b58c1909 | [
"MIT"
] | 1 | 2021-11-09T03:56:06.000Z | 2021-11-09T03:56:06.000Z | ports = {
"ssh": {"type": "PortLiteral", "port": "22", "protocol": "6",},
"udp/netbios-dgm": {"type": "PortLiteral", "port": "138", "protocol": "17",},
"udp/netbios-ns": {"type": "PortLiteral", "port": "137", "protocol": "17",},
"tcp/ssh": {"type": "PortLiteral", "port": "22", "protocol": "6",},
"tcp": {"type": "PortLiteral", "protocol": "6",},
"esp": {"type": "PortLiteral", "protocol": "50",},
"ah": {"type": "PortLiteral", "protocol": "51",},
"udp": {"type": "PortLiteral", "protocol": "17",},
"snmp": [
{"type": "PortLiteral", "port": "161", "protocol": "17",},
{"type": "PortLiteral", "port": "162", "protocol": "17",},
],
"udp/snmp": [
{"type": "PortLiteral", "port": "161", "protocol": "17",},
{"type": "PortLiteral", "port": "162", "protocol": "6",},
{"type": "PortLiteral", "port": "162", "protocol": "17",},
],
"udp/snmptrap": {"type": "PortLiteral", "port": "162", "protocol": "6",},
"snmptrap": [
{"type": "PortLiteral", "port": "162", "protocol": "6",},
{"type": "PortLiteral", "port": "162", "protocol": "17",},
],
"https": [
{"type": "PortLiteral", "port": "443", "protocol": "6",},
{"type": "PortLiteral", "port": "443", "protocol": "17",},
],
"tcp/https": {"type": "PortLiteral", "port": "443", "protocol": "6",},
"netbios-ssn": {"type": "PortLiteral", "port": "139", "protocol": "6",},
"tcp/netbios-ssn": {"type": "PortLiteral", "port": "139", "protocol": "6",},
"ntp": {"type": "PortLiteral", "port": "123", "protocol": "17",},
"udp/ntp": {"type": "PortLiteral", "port": "123", "protocol": "17",},
"tcp/tacacs": {"type": "PortLiteral", "port": "49", "protocol": "6",},
"udp/tacacs": {"type": "PortLiteral", "port": "49", "protocol": "17",},
"tcp/www": {"type": "PortLiteral", "port": "80", "protocol": "6",},
"udp/www": {"type": "PortLiteral", "port": "80", "protocol": "17",},
"tcp/http": {"type": "PortLiteral", "port": "80", "protocol": "6",},
"ldaps": {"type": "PortLiteral", "port": "636", "protocol": "6",},
"tcp/ldaps": {"type": "PortLiteral", "port": "636", "protocol": "6",},
"ldap": {"type": "PortLiteral", "port": "389", "protocol": "6",},
"tcp/ldap": {"type": "PortLiteral", "port": "389", "protocol": "6",},
"tcp/syslog": {"type": "PortLiteral", "port": "514", "protocol": "6",},
"udp/syslog": {"type": "PortLiteral", "port": "514", "protocol": "17",},
"tcp/domain": {"type": "PortLiteral", "port": "53", "protocol": "6", },
"udp/domain": {"type": "PortLiteral", "port": "53", "protocol": "17",},
"tcp/rsh": {"type": "PortLiteral", "port": "514", "protocol": "6",},
"icmp": {"type": "ICMPv4PortLiteral", "protocol": "1", "icmpType": "Any",},
"any": [],
}
| 57.24 | 82 | 0.490217 | 279 | 2,862 | 5.028674 | 0.175627 | 0.395581 | 0.4469 | 0.094084 | 0.776907 | 0.755524 | 0.513899 | 0.275125 | 0.162509 | 0.162509 | 0 | 0.062904 | 0.189029 | 2,862 | 49 | 83 | 58.408163 | 0.541577 | 0 | 0 | 0.22449 | 0 | 0 | 0.50551 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
28846fa1c1e9c7ab3ae95eddc73455be7f366a02 | 196 | py | Python | Source Code/web/backend/files_app/fileapi/urls.py | creosB/Virtual-pdf-library | edb334b16dfd0d3c616683f6fbb370e54f489560 | [
"CC0-1.0"
] | 11 | 2021-12-20T01:51:56.000Z | 2022-01-01T10:17:47.000Z | Source Code/web/backend/files_app/fileapi/urls.py | creosB/Virtual-pdf-library | edb334b16dfd0d3c616683f6fbb370e54f489560 | [
"CC0-1.0"
] | null | null | null | Source Code/web/backend/files_app/fileapi/urls.py | creosB/Virtual-pdf-library | edb334b16dfd0d3c616683f6fbb370e54f489560 | [
"CC0-1.0"
] | 1 | 2021-12-21T08:47:56.000Z | 2021-12-21T08:47:56.000Z | from django.urls import path
from .views import FileList, FileDetail
urlpatterns = [
path('',FileList.as_view()),
path('<int:pk>/',FileDetail.as_view()), # individual items from django
] | 24.5 | 74 | 0.704082 | 25 | 196 | 5.44 | 0.6 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147959 | 196 | 8 | 75 | 24.5 | 0.814371 | 0.142857 | 0 | 0 | 0 | 0 | 0.053892 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
288fb0e62147ed4c6a19e3faeb3476a5404525aa | 270 | py | Python | rasterio/errors.py | clembou/rasterio | 57169c31dae04e1319b4c4b607345475a7122910 | [
"BSD-3-Clause"
] | null | null | null | rasterio/errors.py | clembou/rasterio | 57169c31dae04e1319b4c4b607345475a7122910 | [
"BSD-3-Clause"
] | null | null | null | rasterio/errors.py | clembou/rasterio | 57169c31dae04e1319b4c4b607345475a7122910 | [
"BSD-3-Clause"
] | null | null | null | """A module of errors."""
class RasterioIOError(IOError):
"""A failure to open a dataset using the presently registered drivers."""
class RasterioDriverRegistrationError(ValueError):
"""To be raised when, eg, _gdal.GDALGetDriverByName("MEM") returns NULL"""
| 27 | 78 | 0.733333 | 31 | 270 | 6.354839 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144444 | 270 | 9 | 79 | 30 | 0.852814 | 0.577778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
95419b554583bc803a43eac8c57ec34a913022a2 | 626 | py | Python | app/models/encryption.py | janaSunrise/ZeroCOM | 7197684ce708f080fe215b0a6e57c12836e4c0ab | [
"Apache-2.0"
] | 6 | 2021-03-27T08:58:04.000Z | 2021-05-23T17:07:09.000Z | app/models/encryption.py | janaSunrise/ZeroCOM | 7197684ce708f080fe215b0a6e57c12836e4c0ab | [
"Apache-2.0"
] | 2 | 2021-05-30T08:06:53.000Z | 2021-06-02T17:02:06.000Z | app/models/encryption.py | janaSunrise/ZeroCOM | 7197684ce708f080fe215b0a6e57c12836e4c0ab | [
"Apache-2.0"
] | null | null | null | import rsa
class RSA:
@classmethod
def generate_keys(cls, size: int = 512) -> tuple:
return rsa.newkeys(size)
@classmethod
def export_key_pkcs1(cls, public_key: rsa.PublicKey, format: str = "PEM") -> bytes:
return rsa.PublicKey.save_pkcs1(public_key, format=format)
@classmethod
def load_key_pkcs1(cls, public_key_pem: bytes) -> rsa.PublicKey:
return rsa.PublicKey.load_pkcs1(public_key_pem)
@classmethod
def sign_message(cls, message: bytes, private_key: rsa.PrivateKey, algorithm: str = "SHA-1") -> bytes:
return rsa.sign(message, private_key, algorithm)
| 31.3 | 106 | 0.693291 | 83 | 626 | 5.036145 | 0.385542 | 0.133971 | 0.052632 | 0.08134 | 0.095694 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015936 | 0.198083 | 626 | 19 | 107 | 32.947368 | 0.816733 | 0 | 0 | 0.285714 | 1 | 0 | 0.01278 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.071429 | 0.285714 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
9554dabbb9a81e2fbde331f2e40edcaa0f221585 | 805 | py | Python | bslparloursite/videolibrary/models.py | natfarleydev/thebslparlour | ebb2588282cdb2a977ec6c5f8d82cec4e8fd1f99 | [
"CC0-1.0"
] | 1 | 2016-01-06T23:13:11.000Z | 2016-01-06T23:13:11.000Z | bslparloursite/videolibrary/models.py | natfarleydev/thebslparlour | ebb2588282cdb2a977ec6c5f8d82cec4e8fd1f99 | [
"CC0-1.0"
] | 4 | 2021-03-18T20:15:04.000Z | 2021-06-10T17:52:31.000Z | bslparloursite/videolibrary/models.py | natfarleydev/thebslparlour | ebb2588282cdb2a977ec6c5f8d82cec4e8fd1f99 | [
"CC0-1.0"
] | null | null | null | from django.db import models
from django.utils import timezone
from sizefield.models import FileSizeField
# Create your models here.
class Video(models.Model):
sha224 = models.CharField(max_length=56, unique=True)
filename = models.CharField(max_length=200)
dropbox_directory = models.CharField(max_length=200)
mime_type = models.CharField(max_length=200)
date_added = models.DateTimeField(default=timezone.now, editable=False)
size = FileSizeField()
class Meta:
abstract = True
def __str__(self):
return self.filename or self.sha224_id
class SourceVideo(Video):
vimeo_uri = models.IntegerField()
youtube_id = models.CharField(max_length=30, blank=True)
def __str__(self):
return self.filename+" ("+str(self.vimeo_uri)+")"
| 29.814815 | 75 | 0.720497 | 102 | 805 | 5.490196 | 0.5 | 0.133929 | 0.160714 | 0.214286 | 0.258929 | 0.114286 | 0.114286 | 0 | 0 | 0 | 0 | 0.028832 | 0.181366 | 805 | 26 | 76 | 30.961538 | 0.820941 | 0.029814 | 0 | 0.105263 | 0 | 0 | 0.003851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.157895 | 0.105263 | 0.947368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
95583195ca817a2531ead6462fb4ef3915b9a847 | 12,140 | py | Python | src/awkward1/operations/reducers.py | martindurant/awkward-1.0 | a3221ee1bab6551dd01d5dd07a1d2dc24fd02c38 | [
"BSD-3-Clause"
] | null | null | null | src/awkward1/operations/reducers.py | martindurant/awkward-1.0 | a3221ee1bab6551dd01d5dd07a1d2dc24fd02c38 | [
"BSD-3-Clause"
] | null | null | null | src/awkward1/operations/reducers.py | martindurant/awkward-1.0 | a3221ee1bab6551dd01d5dd07a1d2dc24fd02c38 | [
"BSD-3-Clause"
] | null | null | null | # BSD 3-Clause License; see https://github.com/jpivarski/awkward-1.0/blob/master/LICENSE
from __future__ import absolute_import
import numpy
import awkward1._util
import awkward1._connect._numpy
import awkward1.layout
import awkward1.operations.convert
def count(array, axis=None, keepdims=False, maskidentity=False):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 1:
return xs[0]
else:
return xs[0] + reduce(xs[1:])
return reduce([numpy.size(x) for x in awkward1._util.completely_flatten(layout)])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.count(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.count_nonzero)
def count_nonzero(array, axis=None, keepdims=False, maskidentity=False):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 1:
return xs[0]
else:
return xs[0] + reduce(xs[1:])
return reduce([numpy.count_nonzero(x) for x in awkward1._util.completely_flatten(layout)])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.count_nonzero(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.sum)
def sum(array, axis=None, keepdims=False, maskidentity=False):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 1:
return xs[0]
else:
return xs[0] + reduce(xs[1:])
return reduce([numpy.sum(x) for x in awkward1._util.completely_flatten(layout)])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.sum(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.prod)
def prod(array, axis=None, keepdims=False, maskidentity=False):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 1:
return xs[0]
else:
return xs[0] * reduce(xs[1:])
return reduce([numpy.prod(x) for x in awkward1._util.completely_flatten(layout)])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.prod(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.any)
def any(array, axis=None, keepdims=False, maskidentity=False):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 1:
return xs[0]
else:
return xs[0] or reduce(xs[1:])
return reduce([numpy.any(x) for x in awkward1._util.completely_flatten(layout)])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.any(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.all)
def all(array, axis=None, keepdims=False, maskidentity=False):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 1:
return xs[0]
else:
return xs[0] and reduce(xs[1:])
return reduce([numpy.all(x) for x in awkward1._util.completely_flatten(layout)])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.all(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.min)
def min(array, axis=None, keepdims=False, maskidentity=True):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 0:
return None
elif len(xs) == 1:
return xs[0]
else:
x, y = xs[0], reduce(xs[1:])
return x if x < y else y
tmp = awkward1._util.completely_flatten(layout)
return reduce([numpy.min(x) for x in tmp if len(x) > 0])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.min(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
@awkward1._connect._numpy.implements(numpy.max)
def max(array, axis=None, keepdims=False, maskidentity=True):
layout = awkward1.operations.convert.tolayout(array, allowrecord=False, allowother=False)
if axis is None:
def reduce(xs):
if len(xs) == 0:
return None
elif len(xs) == 1:
return xs[0]
else:
x, y = xs[0], reduce(xs[1:])
return x if x > y else y
tmp = awkward1._util.completely_flatten(layout)
return reduce([numpy.max(x) for x in tmp if len(x) > 0])
else:
behavior = awkward1._util.behaviorof(array)
return awkward1._util.wrap(layout.max(axis=axis, mask=maskidentity, keepdims=keepdims), behavior)
### The following are not strictly reducers, but are defined in terms of reducers and ufuncs.
def moment(x, n, weight=None, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
if weight is None:
sumw = count(x, axis=axis, keepdims=keepdims)
sumwxn = sum(x**n, axis=axis, keepdims=keepdims)
else:
sumw = sum(x*0 + weight, axis=axis, keepdims=keepdims)
sumwxn = sum((x*weight)**n, axis=axis, keepdims=keepdims)
return numpy.true_divide(sumwxn, sumw)
@awkward1._connect._numpy.implements(numpy.mean)
def mean(x, weight=None, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
if weight is None:
sumw = count(x, axis=axis, keepdims=keepdims)
sumwx = sum(x, axis=axis, keepdims=keepdims)
else:
sumw = sum(x*0 + weight, axis=axis, keepdims=keepdims)
sumwx = sum(x*weight, axis=axis, keepdims=keepdims)
return numpy.true_divide(sumwx, sumw)
@awkward1._connect._numpy.implements(numpy.var)
def var(x, weight=None, ddof=0, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
xmean = mean(x, weight=weight, axis=axis, keepdims=keepdims)
if weight is None:
sumw = count(x, axis=axis, keepdims=keepdims)
sumwxx = sum((x - xmean)**2, axis=axis, keepdims=keepdims)
else:
sumw = sum(x*0 + weight, axis=axis, keepdims=keepdims)
sumwxx = sum((x - xmean)**2 * weight, axis=axis, keepdims=keepdims)
if ddof != 0:
return numpy.true_divide(sumwxx, sumw) * numpy.true_divide(sumw, sumw - ddof)
else:
return numpy.true_divide(sumwxx, sumw)
@awkward1._connect._numpy.implements(numpy.std)
def std(x, weight=None, ddof=0, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
return numpy.sqrt(var(x, weight=weight, ddof=ddof, axis=axis, keepdims=keepdims))
def covar(x, y, weight=None, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
xmean = mean(x, weight=weight, axis=axis, keepdims=keepdims)
ymean = mean(y, weight=weight, axis=axis, keepdims=keepdims)
if weight is None:
sumw = count(x, axis=axis, keepdims=keepdims)
sumwxy = sum((x - xmean)*(y - ymean), axis=axis, keepdims=keepdims)
else:
sumw = sum(x*0 + weight, axis=axis, keepdims=keepdims)
sumwxy = sum((x - xmean)*(y - ymean)*weight, axis=axis, keepdims=keepdims)
return numpy.true_divide(sumwxy, sumw)
def corr(x, y, weight=None, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
xmean = mean(x, weight=weight, axis=axis, keepdims=keepdims)
ymean = mean(y, weight=weight, axis=axis, keepdims=keepdims)
xdiff = x - xmean
ydiff = y - ymean
if weight is None:
sumwxx = sum(xdiff**2, axis=axis, keepdims=keepdims)
sumwyy = sum(ydiff**2, axis=axis, keepdims=keepdims)
sumwxy = sum(xdiff*ydiff, axis=axis, keepdims=keepdims)
else:
sumwxx = sum((xdiff**2)*weight, axis=axis, keepdims=keepdims)
sumwyy = sum((ydiff**2)*weight, axis=axis, keepdims=keepdims)
sumwxy = sum((xdiff*ydiff)*weight, axis=axis, keepdims=keepdims)
return numpy.true_divide(sumwxy, numpy.sqrt(sumwxx * sumwyy))
def linearfit(x, y, weight=None, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
if weight is None:
sumw = count(x, axis=axis, keepdims=keepdims)
sumwx = sum(x, axis=axis, keepdims=keepdims)
sumwy = sum(y, axis=axis, keepdims=keepdims)
sumwxx = sum(x**2, axis=axis, keepdims=keepdims)
sumwxy = sum(x*y, axis=axis, keepdims=keepdims)
else:
sumw = sum(x*0 + weight, axis=axis, keepdims=keepdims)
sumwx = sum(x*weight, axis=axis, keepdims=keepdims)
sumwy = sum(y*weight, axis=axis, keepdims=keepdims)
sumwxx = sum((x**2)*weight, axis=axis, keepdims=keepdims)
sumwxy = sum(x*y*weight, axis=axis, keepdims=keepdims)
delta = (sumw*sumwxx) - (sumwx*sumwx)
intercept = numpy.true_divide(((sumwxx*sumwy) - (sumwx*sumwxy)), delta)
slope = numpy.true_divide(((sumw*sumwxy) - (sumwx*sumwy)), delta)
intercept_error = numpy.sqrt(numpy.true_divide(sumwxx, delta))
slope_error = numpy.sqrt(numpy.true_divide(sumw, delta))
intercept = awkward1.operations.convert.tolayout(intercept, allowrecord=True, allowother=True)
slope = awkward1.operations.convert.tolayout(slope, allowrecord=True, allowother=True)
intercept_error = awkward1.operations.convert.tolayout(intercept_error, allowrecord=True, allowother=True)
slope_error = awkward1.operations.convert.tolayout(slope_error, allowrecord=True, allowother=True)
scalar = not isinstance(intercept, awkward1.layout.Content) and not isinstance(slope, awkward1.layout.Content) and not isinstance(intercept_error, awkward1.layout.Content) and not isinstance(slope_error, awkward1.layout.Content)
if not isinstance(intercept, (awkward1.layout.Content, awkward1.layout.Record)):
intercept = awkward1.layout.NumpyArray(numpy.array([intercept]))
if not isinstance(slope, (awkward1.layout.Content, awkward1.layout.Record)):
slope = awkward1.layout.NumpyArray(numpy.array([slope]))
if not isinstance(intercept_error, (awkward1.layout.Content, awkward1.layout.Record)):
intercept_error = awkward1.layout.NumpyArray(numpy.array([intercept_error]))
if not isinstance(slope_error, (awkward1.layout.Content, awkward1.layout.Record)):
slope_error = awkward1.layout.NumpyArray(numpy.array([slope_error]))
out = awkward1.layout.RecordArray([intercept, slope, intercept_error, slope_error], ["intercept", "slope", "intercept_error", "slope_error"])
out.setparameter("__record__", "LinearFit")
if scalar:
out = out[0]
return awkward1._util.wrap(out, awkward1._util.behaviorof(x, y))
def softmax(x, axis=None, keepdims=False):
with numpy.errstate(invalid="ignore"):
expx = numpy.exp(x)
denom = sum(expx, axis=axis, keepdims=keepdims)
return numpy.true_divide(expx, denom)
__all__ = [x for x in list(globals()) if not x.startswith("_") and x not in ("collections", "numpy", "awkward1")]
| 48.174603 | 236 | 0.647117 | 1,520 | 12,140 | 5.099342 | 0.082895 | 0.04851 | 0.080506 | 0.120759 | 0.8506 | 0.809186 | 0.720165 | 0.660173 | 0.607018 | 0.593601 | 0 | 0.013476 | 0.229819 | 12,140 | 251 | 237 | 48.366534 | 0.815508 | 0.014498 | 0 | 0.515695 | 0 | 0 | 0.011038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107623 | false | 0 | 0.026906 | 0 | 0.327354 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9579e725a92b212adbfbee1f939f56455d5e30da | 22 | py | Python | nextfeed/settings/__init__.py | Nurdok/nextfeed | 197818310bbf7134badc2ef5ed11ab5ede7fdb35 | [
"MIT"
] | 1 | 2015-08-09T10:42:04.000Z | 2015-08-09T10:42:04.000Z | nextfeed/settings/__init__.py | Nurdok/nextfeed | 197818310bbf7134badc2ef5ed11ab5ede7fdb35 | [
"MIT"
] | null | null | null | nextfeed/settings/__init__.py | Nurdok/nextfeed | 197818310bbf7134badc2ef5ed11ab5ede7fdb35 | [
"MIT"
] | null | null | null | __author__ = 'Rachum'
| 11 | 21 | 0.727273 | 2 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9581c71bce4ce0b38517044c9d5a2c496d783a78 | 585 | py | Python | find_nb.py | DemetriusStorm/100daysofcode | ce87a596b565c5740ae3c48adac91cba779b3833 | [
"MIT"
] | null | null | null | find_nb.py | DemetriusStorm/100daysofcode | ce87a596b565c5740ae3c48adac91cba779b3833 | [
"MIT"
] | null | null | null | find_nb.py | DemetriusStorm/100daysofcode | ce87a596b565c5740ae3c48adac91cba779b3833 | [
"MIT"
] | null | null | null | """
Your task is to construct a building which will be a pile of n cubes.
The cube at the bottom will have a volume of n^3, the cube above will have volume of (n-1)^3 and so on until the top
which will have a volume of 1^3.
You are given the total volume m of the building. Being given m can you find the number n of cubes you will have to
build?
The parameter of the function findNb (find_nb, find-nb, findNb) will be an integer m and you have to return the integer
n such as n^3 + (n-1)^3 + ... + 1^3 = m if such a n exists or -1 if there is no such n.
"""
def find_nb(m):
    # The pile's total volume is 1**3 + 2**3 + ... + n**3, so accumulate cube
    # volumes until the running total reaches or exceeds m.
    total = 0
    n = 0
    while total < m:
        n += 1
        total += n ** 3
    return n if total == m else -1 | 45 | 119 | 0.711111 | 129 | 585 | 3.209302 | 0.418605 | 0.077295 | 0.043478 | 0.072464 | 0.082126 | 0 | 0 | 0 | 0 | 0 | 0.024123 | 0.220513 | 585 | 13 | 120 | 45 | 0.883772 | 0.940171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
958ba96c16c5793bb5abfd2bf23b7c56685312b0 | 615 | py | Python | src/models.py | mchuck/tiny-ssg | 52998288daea9fe592b8e6ce769eca782db591cd | [
"MIT"
] | null | null | null | src/models.py | mchuck/tiny-ssg | 52998288daea9fe592b8e6ce769eca782db591cd | [
"MIT"
] | null | null | null | src/models.py | mchuck/tiny-ssg | 52998288daea9fe592b8e6ce769eca782db591cd | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from typing import List, Dict, Any
@dataclass
class WebsitePage:
title: str
body: str
tags: List[str]
created_at: str
url: str
slug: str
meta: Dict
@dataclass
class WebsiteTag:
name: str
slug: str
pages: List[WebsitePage]
@dataclass
class WebsiteCollection:
name: str
pages: List[WebsitePage]
tags: List[WebsiteTag]
@dataclass
class Website:
collections: Dict[str, WebsiteCollection]
meta: Dict
@dataclass
class Templates:
# TODO: Remove Any
index_template: Any
page_template: Any
tag_template: Any
| 16.184211 | 45 | 0.689431 | 74 | 615 | 5.675676 | 0.432432 | 0.166667 | 0.047619 | 0.104762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24065 | 615 | 37 | 46 | 16.621622 | 0.899358 | 0.026016 | 0 | 0.433333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
959cbddc7a775bd66392c574ba57d0e444a033d9 | 736 | py | Python | backend-service/users-service/app/app/models/user.py | abhishek70/python-petclinic-microservices | e15a41a668958f35f1b962487cd2360c5c150f0b | [
"MIT"
] | 2 | 2021-05-19T07:21:59.000Z | 2021-09-15T17:30:08.000Z | backend-service/users-service/app/app/models/user.py | abhishek70/python-petclinic-microservices | e15a41a668958f35f1b962487cd2360c5c150f0b | [
"MIT"
] | null | null | null | backend-service/users-service/app/app/models/user.py | abhishek70/python-petclinic-microservices | e15a41a668958f35f1b962487cd2360c5c150f0b | [
"MIT"
] | null | null | null | from typing import TYPE_CHECKING
from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import relationship
from app.db.base_class import Base
if TYPE_CHECKING:
from .pet import Pet # noqa: F401
class User(Base):
    id = Column(Integer, primary_key=True, index=True, autoincrement=True, nullable=False)
    first_name = Column(String(20), index=True, nullable=False)
    last_name = Column(String(20), index=True, nullable=False)
    email = Column(String, unique=True, index=True, nullable=False)
    hashed_password = Column(String, nullable=False)
    is_active = Column(Boolean(), default=True)
    is_superuser = Column(Boolean(), default=False)
    pets = relationship("Pet", back_populates="owner")
| 38.736842 | 90 | 0.744565 | 99 | 736 | 5.434343 | 0.454545 | 0.120818 | 0.126394 | 0.122677 | 0.148699 | 0.148699 | 0.148699 | 0.148699 | 0 | 0 | 0 | 0.011164 | 0.148098 | 736 | 18 | 91 | 40.888889 | 0.84689 | 0.013587 | 0 | 0 | 0 | 0 | 0.01105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.333333 | 0 | 0.933333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
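The model stores `hashed_password` rather than a plaintext password. A stdlib-only sketch of that idea using PBKDF2 (the project itself would typically use a helper such as passlib or werkzeug's password utilities, not reproduced here):

```python
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes = None):
    # PBKDF2-HMAC-SHA256: deliberately slow, with a random per-user salt.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)


salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```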
95a3853b501cce7a1c286e558ccff9a6692b3e3f | 171 | py | Python | Ekeopara_Praise/Phase 2/LIST/Day43 Tasks/Task3.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Ekeopara_Praise/Phase 2/LIST/Day43 Tasks/Task3.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Ekeopara_Praise/Phase 2/LIST/Day43 Tasks/Task3.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | '''3. Write a Python program to split a list into different variables. '''
universalList = [(1, 2, 3), ('w', 'e', 's')]
lst1, lst2 = universalList
print(lst1)
print(lst2) | 28.5 | 74 | 0.654971 | 26 | 171 | 4.307692 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.157895 | 171 | 6 | 75 | 28.5 | 0.722222 | 0.391813 | 0 | 0 | 0 | 0 | 0.030612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
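The split-into-variables idiom above generalizes with starred unpacking, which handles lists of any length:

```python
universalList = [(1, 2, 3), ('w', 'e', 's'), ('x', 'y')]
first, *rest = universalList   # first element, then everything else as a list
print(first)  # (1, 2, 3)
print(rest)   # [('w', 'e', 's'), ('x', 'y')]
a, b, c = first  # tuples unpack element-wise too
print(c)  # 3
```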
95b6aab732ea16915f09231a8049e60f6f242ea6 | 593 | py | Python | flaskr/commands.py | aicioara-old/flask_tutorial2 | acb5c6fa2743f2f060ad6a3a26cc7eef56b6490b | [
"MIT"
] | null | null | null | flaskr/commands.py | aicioara-old/flask_tutorial2 | acb5c6fa2743f2f060ad6a3a26cc7eef56b6490b | [
"MIT"
] | null | null | null | flaskr/commands.py | aicioara-old/flask_tutorial2 | acb5c6fa2743f2f060ad6a3a26cc7eef56b6490b | [
"MIT"
] | null | null | null | import os
import datetime
import click
from flask.cli import with_appcontext
from werkzeug.security import generate_password_hash
def init_app(app):
    app.cli.add_command(init_db_command)


@click.command('init-db')
@with_appcontext
def init_db_command():
    """Clear the existing data and create new tables."""
    init_db()
    click.echo('Initialized the database.')


def init_db():
    from . import models
    models.db.create_all()
    user = models.User(username='admin', password=generate_password_hash('admin'))
    models.db.session.add(user)
    models.db.session.commit()
| 20.448276 | 82 | 0.735245 | 84 | 593 | 5.011905 | 0.452381 | 0.071259 | 0.095012 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153457 | 593 | 28 | 83 | 21.178571 | 0.838645 | 0.077572 | 0 | 0 | 1 | 0 | 0.077922 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.111111 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
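The `init_app` pattern above registers a named command on the application's CLI. A hypothetical stdlib-only sketch of that registration-and-dispatch idea, without Flask or Click (the `App` class and command name here are invented for illustration):

```python
class App:
    """Minimal stand-in for an application object with a command registry."""

    def __init__(self):
        self.commands = {}

    def add_command(self, name, fn):
        self.commands[name] = fn

    def run(self, name):
        # Dispatch a registered command by name.
        return self.commands[name]()


def init_db_command():
    return "Initialized the database."


app = App()
app.add_command("init-db", init_db_command)
print(app.run("init-db"))  # Initialized the database.
```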
95c1052429e03206d9d42e4ca673e5f48a3f3906 | 35,774 | py | Python | bridge_sim/internal/make/ps_question.py | jerbaroo/bridge-sim | c4ec1c18a07a78462ccf3b970a99a1bd7efcc2af | [
"MIT"
] | 2 | 2020-05-12T11:41:49.000Z | 2020-08-10T15:00:58.000Z | bridge_sim/internal/make/ps_question.py | barischrooneyj/bridge-sim | c4ec1c18a07a78462ccf3b970a99a1bd7efcc2af | [
"MIT"
] | 48 | 2020-05-11T23:58:22.000Z | 2020-09-18T20:28:52.000Z | bridge_sim/internal/make/ps_question.py | jerbaroo/bridge-sim | c4ec1c18a07a78462ccf3b970a99a1bd7efcc2af | [
"MIT"
] | 1 | 2020-05-27T12:43:37.000Z | 2020-05-27T12:43:37.000Z | import os
from copy import deepcopy
import matplotlib.pyplot as plt
import numpy as np
from bridge_sim import model, sim, temperature, traffic, plot, util
from bridge_sim.model import Config, Point, Bridge
from bridge_sim.plot.util import equal_lims
from bridge_sim.sim.responses import without
from bridge_sim.util import print_i, print_w
from bridge_sim.internal.plot import axis_cmap_r
def plot_year_effects(config: Config, x: float, z: float, num_years: int):
"""Plot all effects over a single year and 100 years at a point."""
install_day = 37
year = 2018
weather = temperature.load("holly-springs-18")
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), 60 * 10
)
(
ll_responses,
ps_responses,
temp_responses,
shrinkage_responses,
creep_responses,
) = np.repeat(None, 5)
start_day, end_day = None, None
def set_responses(n):
nonlocal weather, start_day, end_day
weather["temp"] = temperature.resize(weather["temp"], year=year)
weather = temperature.repeat(config, "holly-springs-18", weather, n)
start_date, end_date = (
weather["datetime"].iloc[0].strftime(temperature.f_string),
weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
start_day, end_day = install_day, 365 * n
nonlocal ll_responses, ps_responses, temp_responses, shrinkage_responses, creep_responses
(
ll_responses,
ps_responses,
temp_responses,
shrinkage_responses,
creep_responses,
) = sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=model.RT.YTrans,
with_creep=True,
weather=weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
ret_all=True,
)
# from sklearn.decomposition import FastICA, PCA
# ica = FastICA(n_components=3)
# try_ = ica.fit_transform((ll_responses + temp_responses + creep_responses + shrinkage_responses).T)
# plt.plot(try_)
# plt.show()
plt.landscape()
lw = 2
def legend():
leg = plt.legend(
facecolor="white",
loc="upper right",
framealpha=1,
fancybox=False,
borderaxespad=0,
)
for legobj in leg.legendHandles:
legobj.set_linewidth(lw)
plt.subplot(1, 2, 1)
set_responses(1)
xax = np.interp(
np.arange(len(traffic_array)), [0, len(traffic_array) - 1], [start_day, end_day]
)
plt.plot(xax, ll_responses[0] * 1e3, c="green", label="traffic", lw=lw)
plt.plot(xax, temp_responses[0] * 1e3, c="red", label="temperature")
plt.plot(xax, shrinkage_responses[0] * 1e3, c="blue", label="shrinkage", lw=lw)
plt.plot(xax, creep_responses[0] * 1e3, c="black", label="creep", lw=lw)
legend()
plt.ylabel("Y translation (mm)")
plt.xlabel("Time (days)")
plt.subplot(1, 2, 2)
end_day = 365 * num_years
set_responses(num_years)
xax = (
np.interp(
np.arange(len(traffic_array)),
[0, len(traffic_array) - 1],
[start_day, end_day],
)
/ 365
)
plt.plot(xax, ll_responses[0] * 1e3, c="green", label="traffic", lw=lw)
plt.plot(xax, temp_responses[0] * 1e3, c="red", label="temperature")
plt.plot(xax, shrinkage_responses[0] * 1e3, c="blue", label="shrinkage", lw=lw)
plt.plot(xax, creep_responses[0] * 1e3, c="black", label="creep", lw=lw)
legend()
plt.ylabel("Y translation (mm)")
plt.xlabel("Time (years)")
equal_lims("y", 1, 2)
plt.suptitle(f"Y translation at X = {x} m, Z = {z} m")
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.savefig(config.get_image_path("classify/ps", f"year-effect-{x}-{z}.png"))
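The x-axis construction used throughout this function linearly maps sample indices onto a day range via `np.interp`. The mapping in isolation, with a small sample count for readability:

```python
import numpy as np

start_day, end_day = 37, 365
num_samples = 5
# Map indices 0..num_samples-1 linearly onto [start_day, end_day].
xax = np.interp(np.arange(num_samples), [0, num_samples - 1], [start_day, end_day])
print(xax)  # [ 37. 119. 201. 283. 365.]
```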
def plot_sensor_placement(config: Config, num_years: int):
all_points = [
model.Point(x=x, z=z)
for x in np.linspace(config.bridge.x_min, config.bridge.x_max, 300)
for z in np.linspace(config.bridge.z_min, config.bridge.z_max, 100)
]
response_type = model.ResponseType.YTrans
install_day = 37
year = 2018
weather = temperature.load("holly-springs-18")
config.sensor_freq = 1
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), 10
)
weather["temp"] = temperature.resize(weather["temp"], year=year)
weather = temperature.repeat(config, "holly-springs-18", weather, num_years)
start_date, end_date = (
weather["datetime"].iloc[0].strftime(temperature.f_string),
weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
start_day, end_day = install_day, 365 * num_years
for pier in [9]:
pier_centre = model.Point(
x=config.bridge.supports[pier].x, z=config.bridge.supports[pier].z,
)
points = [p for p in all_points if pier_centre.distance(p) < 7]
ps = model.PierSettlement(pier=pier, settlement=5 / 1e3)
(
_0,
_1,
temp_responses,
shrinkage_responses,
creep_responses,
) = sim.responses.to(
config=config,
points=points,
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
weather=weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
ret_all=True,
)
ps_responses = sim.responses.to_pier_settlement(
config=config,
points=points,
responses_array=_0,
response_type=response_type,
pier_settlement=[(ps, ps)],
).T[-1]
ps_responses += sim.responses.to_creep(
config=config,
points=points,
responses_array=_0,
response_type=response_type,
pier_settlement=[(ps, ps)],
install_pier_settlement=[ps],
install_day=install_day,
start_day=start_day,
end_day=end_day,
).T[-1]
long_term_responses = (
temp_responses.T[-1] + shrinkage_responses.T[-1] + creep_responses.T[-1]
)
############
# Plotting #
############
plt.landscape()
plt.subplot(3, 1, 1)
responses = sim.model.Responses(
response_type=response_type,
responses=list(zip(abs(long_term_responses) * 1e3, points)),
)
plot.contour_responses(config, responses, levels=30, interp=(200, 60))
plot.top_view_bridge(config.bridge, piers=True)
plt.subplot(3, 1, 2)
responses = sim.model.Responses(
response_type=response_type,
responses=list(zip(abs(ps_responses) * 1e3, points)),
)
plot.contour_responses(config, responses, levels=30, interp=(200, 60))
plot.top_view_bridge(config.bridge, piers=True)
plt.subplot(3, 1, 3)
responses = sim.model.Responses(
response_type=response_type,
responses=list(
zip((abs(ps_responses) - abs(long_term_responses)) * 1e3, points)
),
)
plot.contour_responses(config, responses, levels=30, interp=(200, 60))
plot.top_view_bridge(config.bridge, piers=True)
plt.savefig(config.get_image_path("classify/ps", "placement.pdf"))
def plot_removal(config: Config, x: float, z: float):
response_type = model.RT.YTrans
weather = temperature.load("holly-springs-18")
weather["temp"] = temperature.resize(weather["temp"], year=2018)
start_date, end_date = (
weather["datetime"].iloc[0].strftime(temperature.f_string),
weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
install_day = 37
start_day, end_day = install_day, install_day + 365
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), time=60
)
responses = (
sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
weather=weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
# ret_all=True,
)[0]
* 1e3
)
def legend():
return plt.legend(
facecolor="white",
loc="upper right",
framealpha=1,
fancybox=False,
borderaxespad=0,
)
plt.landscape()
plt.subplot(2, 2, 1)
xax = np.interp(
np.arange(len(weather)), [0, len(weather) - 1], [start_day, end_day]
)
plt.plot(xax, weather["temp"], c="red")
plt.ylabel("Temperature °C")
plt.xlabel("Days since T_0")
plt.title("Temperature in 2018")
plt.subplot(2, 2, 2)
xax = np.interp(
np.arange(len(responses)), [0, len(responses) - 1], [start_day, end_day]
)
plt.plot(xax, responses)
plt.ylabel("Y translation (mm)")
plt.xlabel("Days since T_0")
plt.title("Y translation in 2018")
plt.subplot(2, 2, 3)
num_samples = 365 * 24
temps = util.apply(weather["temp"], np.arange(num_samples))
rs = util.apply(responses, np.arange(num_samples))
lr, _ = temperature.regress_and_errors(temps, rs)
lr_x = np.linspace(min(temps), max(temps), 100)
y = lr.predict(lr_x.reshape((-1, 1)))
plt.plot(lr_x, y, lw=2, c="red", label="linear fit")
plt.scatter(temps, rs, s=2, alpha=0.5, label="hourly samples")
leg = legend()
leg.legendHandles[1]._sizes = [30]
plt.ylabel("Y translation (mm)")
plt.xlabel("Temperature °C")
plt.title("Linear model from 2018 data")
#############
# 2019 data #
#############
weather_2019 = temperature.load("holly-springs")
weather_2019["temp"] = temperature.resize(weather_2019["temp"], year=2019)
start_date, end_date = (
weather_2019["datetime"].iloc[0].strftime(temperature.f_string),
weather_2019["datetime"].iloc[-1].strftime(temperature.f_string),
)
start_day, end_day = install_day + 365, install_day + (2 * 365)
responses_2019 = (
sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
weather=weather_2019,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
)[0]
* 1e3
)
plt.subplot(2, 2, 4)
xax_responses = np.interp(
np.arange(len(responses_2019)),
[0, len(responses_2019) - 1],
[start_day, end_day],
)
plt.plot(xax_responses, responses_2019, label="2019 responses")
temps_2019 = util.apply(weather_2019["temp"], xax_responses)
y = lr.predict(temps_2019.reshape((-1, 1)))
plt.plot(xax_responses, y, label="prediction")
plt.ylabel("Y translation (mm)")
plt.xlabel("Days since T_0")
plt.title("Y translation in 2019")
for legobj in legend().legendHandles:
legobj.set_linewidth(2.0)
plt.suptitle(f"Predicting long-term effect at X = {x} m, Z = {z} m")
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.savefig(config.get_image_path("classify/ps", "regress.pdf"))
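The regression step above (fit on 2018 temperature/response pairs, then predict 2019) reduces to ordinary least squares on one variable. A stdlib sketch with fabricated, perfectly linear data; `temperature.regress_and_errors` in the library wraps an equivalent fit:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx


temps = [0.0, 10.0, 20.0, 30.0]   # °C, fabricated
y_trans = [5.0, 25.0, 45.0, 65.0]  # mm, fabricated (slope 2, intercept 5)
a, b = fit_line(temps, y_trans)
print(a, b)           # 2.0 5.0
print(a * 25.0 + b)   # 55.0 — prediction for an unseen temperature
```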
def plot_removal_2(config: Config, x: float, z: float):
response_type = model.RT.YTrans
weather_2018 = temperature.load("holly-springs-18")
weather_2018["temp"] = temperature.resize(weather_2018["temp"], year=2018)
start_date, end_date = (
weather_2018["datetime"].iloc[0].strftime(temperature.f_string),
weather_2018["datetime"].iloc[-1].strftime(temperature.f_string),
)
install_day = 37
start_day, end_day = install_day, install_day + 365
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), time=60
)
responses_2018 = (
sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
weather=weather_2018,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
# ret_all=True,
)[0]
* 1e3
)
num_samples = 365 * 24
temps = util.apply(weather_2018["temp"], np.arange(num_samples))
rs = util.apply(responses_2018, np.arange(num_samples))
lr, err = temperature.regress_and_errors(temps, rs)
def legend():
plt.legend(
facecolor="white",
loc="lower left",
framealpha=1,
fancybox=False,
borderaxespad=0,
labelspacing=0.02,
)
##############################
# Iterate through each year. #
##############################
plt.landscape()
weather_2019 = temperature.load("holly-springs")
weather_2019["temp"] = temperature.resize(weather_2019["temp"], year=2019)
start_date, end_date = (
weather_2019["datetime"].iloc[0].strftime(temperature.f_string),
weather_2019["datetime"].iloc[-1].strftime(temperature.f_string),
)
for y_i, year in enumerate([2019, 2024, 2039]):
plt.subplot(3, 1, y_i + 1)
start_day = install_day + ((year - 2018) * 365)
end_day = start_day + 365
responses_2019 = (
sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
weather=weather_2019,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
)[0]
* 1e3
)
# Plot actual values.
xax = np.interp(
np.arange(len(responses_2019)), [0, len(responses_2019) - 1], [0, 364]
)
plt.plot(xax, responses_2019, label="responses in year", lw=2)
# Daily prediction.
xax_responses = np.arange(365)
temps_2019 = util.apply(weather_2019["temp"], xax_responses)
y_daily = lr.predict(temps_2019.reshape((-1, 1)))
y_2_week = [
np.mean(y_daily[max(0, i - 14) : min(i + 14, len(y_daily))])
for i in range(len(y_daily))
]
for percentile, alpha in [(100, 20), (75, 40), (50, 60), (25, 100)]:
err = np.percentile(err, percentile)
p = percentile / 100
plt.fill_between(
xax_responses,
y_2_week + (err * p),
y_2_week - (err * p),
color="orange",
alpha=alpha / 100,
label=f"{percentile}% of regression error",
)
plt.plot(xax_responses, y_daily, color="black", lw=2, label="daily prediction")
plt.plot(
xax_responses, y_2_week, color="red", lw=2, label="2 week sliding window"
)
plt.ylabel("Y. trans (mm)")
plt.title(f"Year {year}")
if y_i == 0:
legend()
if y_i == 2:
plt.xlabel("Days in year")
else:
plt.tick_params("x", bottom=False, labelbottom=False)
equal_lims("y", 3, 1)
plt.suptitle(f"Predicting long-term effects at X = {x} m, Z = {z} m")
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.savefig(config.get_image_path("classify/ps", "regress-2.pdf"))
def plot_removal_3(config: Config, x: float, z: float):
# First calculate the linear model.
response_type = model.RT.YTrans
weather_2018 = temperature.load("holly-springs-18")
weather_2018["temp"] = temperature.resize(weather_2018["temp"], year=2018)
start_date, end_date = (
weather_2018["datetime"].iloc[0].strftime(temperature.f_string),
weather_2018["datetime"].iloc[-1].strftime(temperature.f_string),
)
install_day = 37
start_day, end_day = install_day, install_day + 365
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), time=60
)
responses_2018 = (
sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
weather=weather_2018,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
)[0]
* 1e3
)
num_samples = 365 * 24
temps = util.apply(weather_2018["temp"], np.arange(num_samples))
rs = util.apply(responses_2018, np.arange(num_samples))
lr, _ = temperature.regress_and_errors(temps, rs)
# Calculate long-term weather.
NUM_YEARS = 5
PIER = 5
long_weather = deepcopy(weather_2018)
long_weather["temp"] = temperature.resize(long_weather["temp"], year=2019)
print_i(f"Repeating {NUM_YEARS} years of weather data")
long_weather = temperature.repeat(
config, "holly-springs-18", long_weather, NUM_YEARS
)
print_i(f"Repeated {NUM_YEARS} years of weather data")
start_date, end_date = (
long_weather["datetime"].iloc[0].strftime(temperature.f_string),
long_weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
start_day = install_day + 365
end_day = start_day + 365 * NUM_YEARS
MAX_PS = 20
THRESHES = np.arange(0, MAX_PS, 1)
acc_mat = np.zeros((MAX_PS, len(THRESHES)))
fp_mat = np.zeros(acc_mat.shape)
fn_mat = np.zeros(acc_mat.shape)
tp_mat = np.zeros(acc_mat.shape)
tn_mat = np.zeros(acc_mat.shape)
for p_i, ps in enumerate(range(MAX_PS)):
print_i(f"Using pier settlement = {ps} mm")
long_responses = sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
pier_settlement=[
(
model.PierSettlement(pier=PIER, settlement=0.00001),
model.PierSettlement(pier=PIER, settlement=ps / 1e3),
)
],
install_pier_settlement=[],
weather=long_weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
ret_all=False,
ignore_pier_creep=True,
)
healthy_responses = sim.responses.to(
config=config,
points=[model.Point(x=x, z=z)],
traffic_array=traffic_array,
response_type=response_type,
with_creep=True,
pier_settlement=[],
install_pier_settlement=None,
weather=long_weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
ret_all=False,
ignore_pier_creep=True,
)
plt.plot(healthy_responses[0] * 1e3, label="healthy")
plt.plot(long_responses[0] * 1e3, label="pier settlement")
plt.legend()
plt.savefig(config.get_image_path("hello", f"q3-{p_i}.png"))
plt.close()
for t_i, thresh in enumerate(THRESHES):
thresh *= -1
print(thresh)
print(max(healthy_responses[0]))
print(min(healthy_responses[0]))
print(max(long_responses[0]))
print(min(long_responses[0]))
fp = len([x for x in healthy_responses[0] * 1e3 if x <= thresh])
tp = len([x for x in long_responses[0] * 1e3 if x <= thresh])
tn = len([x for x in healthy_responses[0] * 1e3 if x > thresh])
fn = len([x for x in long_responses[0] * 1e3 if x > thresh])
acc_mat[p_i][t_i] = (tp + tn) / (tp + tn + fp + fn)
fp_mat[p_i][t_i] = fp
tp_mat[p_i][t_i] = tp
fn_mat[p_i][t_i] = fn
tn_mat[p_i][t_i] = tn
##################
# Save matrices. #
##################
plt.imshow(acc_mat, cmap=axis_cmap_r)
plt.savefig(config.get_image_path("hello", "mat.png"))
plt.close()
plt.imshow(fp_mat, cmap=axis_cmap_r)
plt.savefig(config.get_image_path("hello", f"mat-fp.png"))
plt.close()
plt.imshow(fn_mat, cmap=axis_cmap_r)
plt.savefig(config.get_image_path("hello", f"mat-fn.png"))
plt.close()
plt.imshow(tp_mat, cmap=axis_cmap_r)
plt.savefig(config.get_image_path("hello", f"mat-tp.png"))
plt.close()
plt.imshow(tn_mat, cmap=axis_cmap_r)
plt.savefig(config.get_image_path("hello", f"mat-tn.png"))
plt.close()
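The threshold sweep above fills the accuracy and false-positive/negative matrices from four counts per (settlement, threshold) cell. The core classification arithmetic in isolation, on fabricated response samples:

```python
healthy = [-0.5, -1.0, -1.5, -2.0]   # mm, fabricated healthy-bridge responses
settled = [-2.5, -3.0, -1.8, -4.0]   # mm, fabricated pier-settlement responses
thresh = -2.0  # flag "settlement" when a response is <= this threshold

fp = sum(1 for r in healthy if r <= thresh)  # healthy wrongly flagged
tp = sum(1 for r in settled if r <= thresh)  # settlement correctly flagged
tn = sum(1 for r in healthy if r > thresh)   # healthy correctly passed
fn = sum(1 for r in settled if r > thresh)   # settlement missed
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(fp, tp, tn, fn, accuracy)  # 1 3 3 1 0.75
```

Sweeping `thresh` trades false positives against false negatives, which is exactly what the saved `fp`/`fn` matrices visualize.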
def support_with_points(bridge: Bridge, delta_x: float):
    for support in bridge.supports:
        if support.x < bridge.length / 2:
            s_x = support.x - ((support.length / 2) + delta_x)
        else:
            s_x = support.x + ((support.length / 2) + delta_x)
        support.point = Point(x=s_x, z=support.z)
        for support_2 in bridge.supports:
            if support_2.z == support.z and np.isclose(
                support_2.x, bridge.length - support.x
            ):
                support.opposite_support = support_2
        print_w(f"Support sensor at X = {support.point.x}, Z = {support.point.z}")
        if not hasattr(support, "opposite_support"):
            raise ValueError("No opposite support")
    return bridge.supports
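The pairing logic above matches each support with its mirror image across the bridge midspan (x maps to length - x), using a tolerant float comparison. A stdlib-only sketch of the same idea, with hypothetical coordinates that are not from the bridge model:

```python
import math

length = 100.0
xs = [10.0, 30.0, 70.0, 90.0]  # hypothetical support positions along the bridge

# For each support, find the one mirrored across the midspan: x -> length - x.
opposite = {}
for x in xs:
    for x2 in xs:
        if math.isclose(x2, length - x):
            opposite[x] = x2
print(opposite)  # {10.0: 90.0, 30.0: 70.0, 70.0: 30.0, 90.0: 10.0}
```

`math.isclose` (like `np.isclose` above) avoids exact float equality, which would fail whenever the mirrored coordinate carries rounding error.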
def plot_min_diff(config: Config, num_years: int, delta_x: float = 0.5):
plt.landscape()
log_path = config.get_image_path("classify/q1", "min-thresh.txt")
if os.path.exists(log_path):
os.remove(log_path)
install_day = 37
start_day, end_day = install_day, 365 * num_years
year = 2018
weather = temperature.load("holly-springs-18")
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), 60 * 10
)
weather["temp"] = temperature.resize(weather["temp"], year=year)
# weather = temperature.repeat(config, "holly-springs-18", weather, num_years)
start_date, end_date = (
weather["datetime"].iloc[0].strftime(temperature.f_string),
weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
# For each support load the responses to traffic and assign to "Support".
for s_i, support in enumerate(support_with_points(config.bridge, delta_x=delta_x)):
support.responses = (
sim.responses.to_traffic_array(
config=config,
points=[support.point],
traffic_array=traffic_array,
response_type=model.RT.YTrans,
# with_creep=True,
# weather=weather,
# start_date=start_date,
# end_date=end_date,
# install_day=install_day,
# start_day=start_day,
# end_day=end_day,
)[0]
* 1e3
)
# Determine max difference for each sensor pair.
for s_i, support in enumerate(config.bridge.supports):
min1, max1 = min(support.responses), max(support.responses)
min2, max2 = (
min(support.opposite_support.responses),
max(support.opposite_support.responses),
)
delta_1, delta_2 = abs(min1 - max2), abs(min2 - max1)
# max_delta = max(abs(support.responses - support.opposite_support.responses))
support.max_delta = max(delta_1, delta_2)
to_write = f"Max delta {support.max_delta} for support {s_i}, sensor at X = {support.point.x}, Z = {support.point.z}"
with open(log_path, "a") as f:
f.write(to_write)
# Bridge supports.
plot.top_view_bridge(config.bridge, lanes=True, piers=True, units="m")
for s_i, support in enumerate(config.bridge.supports):
if s_i % 4 == 0:
support.max_delta = max(
support.max_delta, config.bridge.supports[s_i + 3].max_delta
)
elif s_i % 4 == 1:
support.max_delta = max(
support.max_delta, config.bridge.supports[s_i + 1].max_delta
)
elif s_i % 4 == 2:
support.max_delta = max(
support.max_delta, config.bridge.supports[s_i - 1].max_delta
)
elif s_i % 4 == 3:
support.max_delta = max(
support.max_delta, config.bridge.supports[s_i - 3].max_delta
)
plt.scatter([support.point.x], [support.point.z], c="red")
plt.annotate(
f"{np.around(support.max_delta, 2)} mm",
xy=(support.point.x - 3, support.point.z + 2),
color="b",
size="large",
)
plt.title("Maximum difference between symmetric sensors")
plt.tight_layout()
plt.savefig(config.get_image_path("classify/q1", "min-thresh.pdf"))
def plot_contour_q2(config: Config, num_years: int, delta_x: float = 0.5):
# Select points: over the deck and the sensors!
points = [
Point(x=x, z=z)
for x in np.linspace(config.bridge.x_min, config.bridge.x_max, 100)
for z in np.linspace(config.bridge.z_min, config.bridge.z_max, 30)
]
sensor_points = [
s.point for s in support_with_points(config.bridge, delta_x=delta_x)
]
points += sensor_points
install_day = 37
start_day, end_day = install_day, 365 * num_years
year = 2018
weather = temperature.load("holly-springs-18")
# Responses aren't much from traffic, and we are getting the maximum from 4
# sensors, so short traffic data doesn't really matter.
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), 10
)
weather["temp"] = temperature.resize(weather["temp"], year=year)
# weather = temperature.repeat(config, "holly-springs-18", weather, num_years)
start_date, end_date = (
weather["datetime"].iloc[0].strftime(temperature.f_string),
weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
# Generate the data!
responses = (
sim.responses.to(
config=config,
points=points,
traffic_array=traffic_array,
response_type=model.RT.YTrans,
with_creep=True,
weather=weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
)
* 1e3
)
# Convert to Responses, determining maximum response per point.
max_responses = [min(rs) for rs in responses]
sensor_responses = max_responses[-len(sensor_points) :]
responses = sim.model.Responses(
response_type=model.RT.YTrans,
responses=[(r, p) for r, p in zip(max_responses, points)],
units="mm",
).without(without.edges(config, 2))
# Adjust maximum responses per sensor so they are symmetric!
for s_i, support in enumerate(support_with_points(config.bridge, delta_x=delta_x)):
support.max_response = sensor_responses[s_i]
for support in support_with_points(config.bridge, delta_x=delta_x):
support.max_response = min(
support.max_response, support.opposite_support.max_response
)
for s_i, support in enumerate(support_with_points(config.bridge, delta_x=delta_x)):
if s_i % 4 == 0:
support.max_response = max(
support.max_response, config.bridge.supports[s_i + 3].max_response
)
elif s_i % 4 == 1:
support.max_response = max(
support.max_response, config.bridge.supports[s_i + 1].max_response
)
elif s_i % 4 == 2:
support.max_response = max(
support.max_response, config.bridge.supports[s_i - 1].max_response
)
elif s_i % 4 == 3:
support.max_response = max(
support.max_response, config.bridge.supports[s_i - 3].max_response
)
plt.landscape()
plot.contour_responses(config, responses, interp=(200, 60), levels=20)
plot.top_view_bridge(config.bridge, lanes=True, piers=True, units="m")
for s_i, support in enumerate(support_with_points(config.bridge, delta_x=delta_x)):
plt.scatter([support.point.x], [support.point.z], c="black")
plt.annotate(
f"{np.around(support.max_response, 2)}",
xy=(support.point.x - 3, support.point.z + 2),
color="black",
size="large",
)
plt.title(
f"Minimum Y translation over {num_years} years \n from traffic, temperature, shrinkage & creep"
)
plt.tight_layout()
plt.savefig(config.get_image_path("classify/q2", "q2-contour.pdf"))
plt.close()
def plot_min_ps_1(config: Config, num_years: int, delta_x: float = 0.5):
THRESH = 2 # Pier settlement from question 1.
plt.landscape()
log_path = config.get_image_path("classify/q1b", "min-ps.txt")
if os.path.exists(log_path): # Start with fresh logfile.
os.remove(log_path)
install_day = 37
start_day, end_day = install_day, 365 * num_years
year = 2018
weather = temperature.load("holly-springs-18")
_0, _1, traffic_array = traffic.load_traffic(
config, traffic.normal_traffic(config), 60 * 10
)
weather["temp"] = temperature.resize(weather["temp"], year=year)
# weather = temperature.repeat(config, "holly-springs-18", weather, num_years)
start_date, end_date = (
weather["datetime"].iloc[0].strftime(temperature.f_string),
weather["datetime"].iloc[-1].strftime(temperature.f_string),
)
# For each support..
for s_i, support in enumerate(support_with_points(config.bridge, delta_x=delta_x)):
# ..increase pier settlement until threshold triggered.
for settlement in np.arange(0, 10, 0.1):
responses = (
sim.responses.to(
config=config,
points=[support.point, support.opposite_support.point],
traffic_array=traffic_array,
response_type=model.RT.YTrans,
with_creep=True,
weather=weather,
start_date=start_date,
end_date=end_date,
install_day=install_day,
start_day=start_day,
end_day=end_day,
pier_settlement=[
(
model.PierSettlement(pier=s_i, settlement=0),
model.PierSettlement(pier=s_i, settlement=settlement / 1e3),
)
],
skip_weather_interp=True,
)
* 1e3
)
delta = max(abs(responses[0] - responses[1]))
to_write = f"Max delta {delta} for settlement {settlement} mm for support {s_i}, sensor at X = {support.point.x}, Z = {support.point.z}"
print_w(to_write)
# Because of "abs", "delta" will be positive.
if delta > THRESH:
break
# Write the minimum settlement value for this support to a file.
with open(log_path, "a") as f:
f.write(to_write)
# Annotate the support with the minimum settlement value.
plt.scatter([support.point.x], [support.point.z], c="red")
plt.annotate(
f"{np.around(settlement, 2)} mm",
xy=(support.point.x - 3, support.point.z + 2),
color="b",
size="large",
)
# Plot the results.
plot.top_view_bridge(config.bridge, lanes=True, piers=True, units="m")
plt.title("Minimum pier settlement detected (Question 1B)")
plt.tight_layout()
plt.savefig(config.get_image_path("classify/q1b", "q1b-min-ps.pdf"))
plt.close()


def plot_min_ps_2(config: Config, num_years: int, delta_x: float = 0.5):
    THRESH = 6  # Pier settlement from question 1.
    plt.landscape()
    log_path = config.get_image_path("classify/q2b", "2b-min-ps.txt")
    if os.path.exists(log_path):  # Start with a fresh logfile.
        os.remove(log_path)
    install_day = 37
    start_day, end_day = install_day, 365 * num_years
    year = 2018
    weather = temperature.load("holly-springs-18")
    _0, _1, traffic_array = traffic.load_traffic(
        config, traffic.normal_traffic(config), 60 * 10
    )
    weather["temp"] = temperature.resize(weather["temp"], year=year)
    # weather = temperature.repeat(config, "holly-springs-18", weather, num_years)
    start_date, end_date = (
        weather["datetime"].iloc[0].strftime(temperature.f_string),
        weather["datetime"].iloc[-1].strftime(temperature.f_string),
    )
    for s_i, support in enumerate(support_with_points(config.bridge, delta_x=delta_x)):
        # Increase pier settlement until the threshold is triggered.
        for settlement in np.arange(0, 10, 0.1):
            responses = (
                sim.responses.to(
                    config=config,
                    points=[support.point],
                    traffic_array=traffic_array,
                    response_type=model.RT.YTrans,
                    with_creep=True,
                    weather=weather,
                    start_date=start_date,
                    end_date=end_date,
                    install_day=install_day,
                    start_day=start_day,
                    end_day=end_day,
                    pier_settlement=[
                        (
                            model.PierSettlement(pier=s_i, settlement=0),
                            model.PierSettlement(pier=s_i, settlement=settlement / 1e3),
                        )
                    ],
                    skip_weather_interp=True,
                )
                * 1e3
            )
            # Determine the minimum response for this level of settlement.
            min_r = min(responses[0])
            to_write = f"Min {min_r} for settlement {settlement} mm for support {s_i}, sensor at X = {support.point.x}, Z = {support.point.z}"
            print_w(to_write)
            if min_r < -THRESH:
                break
        # Write the minimum response and settlement for this support to a file.
        with open(log_path, "a") as f:
            f.write(to_write + "\n")
        plt.scatter([support.point.x], [support.point.z], c="red")
        plt.annotate(
            f"{np.around(settlement, 2)} mm",
            xy=(support.point.x - 3, support.point.z + 2),
            color="b",
            size="large",
        )
    plot.top_view_bridge(config.bridge, lanes=True, piers=True, units="m")
    plt.title("Minimum pier settlement detected (Question 2B)")
    plt.tight_layout()
    plt.savefig(config.get_image_path("classify/q2b", "q2b-min-ps.pdf"))
    plt.close()
# generated by datamodel-codegen:
# filename: openapi.yaml
# timestamp: 2021-12-31T02:50:50+00:00
from __future__ import annotations
from datetime import datetime
from enum import Enum
from typing import Annotated, Any, List, Optional
from pydantic import BaseModel, Extra, Field
class ResourceNotFoundException(BaseModel):
__root__: Any
class InvalidRequestException(ResourceNotFoundException):
pass
class InternalFailureException(ResourceNotFoundException):
pass
class ServiceUnavailableException(ResourceNotFoundException):
pass
class ThrottlingException(ResourceNotFoundException):
pass
class CancelPipelineReprocessingResponse(BaseModel):
pass
class ServiceManagedChannelS3Storage(CancelPipelineReprocessingResponse):
"""
Used to store channel data in an S3 bucket managed by IoT Analytics. You can't change the choice of S3 storage after the data store is created.
"""
pass
class UnlimitedRetentionPeriod(BaseModel):
__root__: bool
class RetentionPeriodInDays(BaseModel):
__root__: Annotated[int, Field(ge=1.0)]
class ResourceAlreadyExistsException(ResourceNotFoundException):
pass
class LimitExceededException(ResourceNotFoundException):
pass
class UnlimitedVersioning(UnlimitedRetentionPeriod):
pass
class MaxVersions(BaseModel):
__root__: Annotated[int, Field(ge=1.0, le=1000.0)]
class ServiceManagedDatastoreS3Storage(CancelPipelineReprocessingResponse):
"""
Used to store data in an Amazon S3 bucket managed by IoT Analytics. You can't change the choice of Amazon S3 storage after your data store is created.
"""
pass
class JsonConfiguration(CancelPipelineReprocessingResponse):
"""
Contains the configuration information of the JSON format.
"""
pass
class RoleArn(BaseModel):
__root__: Annotated[str, Field(max_length=2048, min_length=20)]
class LoggingLevel(Enum):
ERROR = 'ERROR'
class LoggingEnabled(UnlimitedRetentionPeriod):
pass
class MessagePayload(BaseModel):
__root__: str
class TagResourceResponse(CancelPipelineReprocessingResponse):
pass
class UntagResourceResponse(CancelPipelineReprocessingResponse):
pass
class TagKey(BaseModel):
__root__: Annotated[str, Field(max_length=256, min_length=1)]
class ActivityBatchSize(MaxVersions):
pass
class ActivityName(BaseModel):
__root__: Annotated[str, Field(max_length=128, min_length=1)]
class AttributeNameMapping(BaseModel):
    pass

    class Config:
        extra = Extra.allow
class AttributeName(TagKey):
pass
class AttributeNames(BaseModel):
__root__: Annotated[List[AttributeName], Field(max_items=50, min_items=1)]
class MessageId(BaseModel):
__root__: Annotated[str, Field(max_length=128, min_length=1, regex='\\p{ASCII}*')]
class ErrorCode(MessagePayload):
pass
class ErrorMessage(MessagePayload):
pass
class ChannelName(BaseModel):
__root__: Annotated[
str, Field(max_length=128, min_length=1, regex='(^(?!_{2}))(^[a-zA-Z0-9_]+$)')
]
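For reference, the `ChannelName` constraints above (1-128 characters, alphanumerics and underscores, not starting with a double underscore) can be checked directly with the stdlib `re` module. `is_valid_channel_name` is a hypothetical helper, not part of the generated models:

```python
import re

# Pattern copied from the ChannelName field above: the negative lookahead
# rejects names that begin with two underscores.
_CHANNEL_NAME_RE = re.compile(r"(^(?!_{2}))(^[a-zA-Z0-9_]+$)")


def is_valid_channel_name(name: str) -> bool:
    """Return True if `name` satisfies the ChannelName field constraints."""
    return 1 <= len(name) <= 128 and _CHANNEL_NAME_RE.match(name) is not None
```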
class BucketKeyExpression(BaseModel):
__root__: Annotated[
str, Field(max_length=255, min_length=1, regex="^[a-zA-Z0-9!_.*'()/{}:-]*$")
]
class BucketName(BaseModel):
__root__: Annotated[
str, Field(max_length=255, min_length=3, regex='^[a-zA-Z0-9.\\-_]*$')
]
class PipelineName(ChannelName):
pass
class ReprocessingId(MessagePayload):
pass
class CancelPipelineReprocessingRequest(BaseModel):
pass
class ChannelArn(MessagePayload):
pass
class ChannelStatus(Enum):
CREATING = 'CREATING'
ACTIVE = 'ACTIVE'
DELETING = 'DELETING'
class RetentionPeriod(BaseModel):
"""
How long, in days, message data is kept.
"""
unlimited: Optional[UnlimitedRetentionPeriod] = None
numberOfDays: Optional[RetentionPeriodInDays] = None
class Timestamp(BaseModel):
__root__: datetime
class ServiceManagedChannelS3StorageSummary(CancelPipelineReprocessingResponse):
"""
Used to store channel data in an S3 bucket managed by IoT Analytics.
"""
pass
class ColumnName(BaseModel):
__root__: Annotated[
str,
Field(
max_length=255,
min_length=1,
regex='^[A-Za-z_]([A-Za-z0-9]*|[A-Za-z0-9][A-Za-z0-9_]*)$',
),
]
class ColumnDataType(BaseModel):
__root__: Annotated[str, Field(max_length=131072, min_length=1)]
class Column(BaseModel):
"""
Contains information about a column that stores your data.
"""
name: ColumnName
type: ColumnDataType
class Columns(BaseModel):
__root__: List[Column]
class ComputeType(Enum):
ACU_1 = 'ACU_1'
ACU_2 = 'ACU_2'
class Image(BaseModel):
__root__: Annotated[str, Field(max_length=255)]
class DatasetName(ChannelName):
pass
class DatasetContentVersion(BaseModel):
__root__: Annotated[str, Field(max_length=36, min_length=7)]
class CreateDatasetContentRequest(BaseModel):
versionId: Optional[DatasetContentVersion] = None
class VersioningConfiguration(BaseModel):
"""
Information about the versioning of dataset contents.
"""
unlimited: Optional[UnlimitedVersioning] = None
maxVersions: Optional[MaxVersions] = None
class DatasetArn(MessagePayload):
pass
class DatastoreName(ChannelName):
pass
class DatastoreArn(MessagePayload):
pass
class PipelineArn(MessagePayload):
pass
class S3KeyPrefix(BaseModel):
__root__: Annotated[
str, Field(max_length=255, min_length=1, regex="^[a-zA-Z0-9!_.*'()/{}:-]*/$")
]
class CustomerManagedDatastoreS3StorageSummary(BaseModel):
"""
Contains information about the data store that you manage.
"""
bucket: Optional[BucketName] = None
keyPrefix: Optional[S3KeyPrefix] = None
roleArn: Optional[RoleArn] = None
class DatasetActionName(BaseModel):
__root__: Annotated[
str, Field(max_length=128, min_length=1, regex='^[a-zA-Z0-9_]+$')
]
class DatasetActionType(Enum):
QUERY = 'QUERY'
CONTAINER = 'CONTAINER'
class EntryName(MessagePayload):
pass
class DatasetContentState(Enum):
CREATING = 'CREATING'
SUCCEEDED = 'SUCCEEDED'
FAILED = 'FAILED'
class Reason(MessagePayload):
pass
class DatasetContentStatus(BaseModel):
"""
The state of the dataset contents and the reason they are in this state.
"""
state: Optional[DatasetContentState] = None
reason: Optional[Reason] = None
class DatasetContentSummary(BaseModel):
"""
Summary information about dataset contents.
"""
version: Optional[DatasetContentVersion] = None
status: Optional[DatasetContentStatus] = None
creationTime: Optional[Timestamp] = None
scheduleTime: Optional[Timestamp] = None
completionTime: Optional[Timestamp] = None
class DatasetContentSummaries(BaseModel):
__root__: List[DatasetContentSummary]
class DatasetContentVersionValue(BaseModel):
"""
The dataset whose latest contents are used as input to the notebook or application.
"""
datasetName: DatasetName
class PresignedURI(MessagePayload):
pass
class TriggeringDataset(BaseModel):
"""
Information about the dataset whose content generation triggers the new dataset content generation.
"""
name: DatasetName
class IotSiteWiseCustomerManagedDatastoreS3Storage(BaseModel):
"""
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. You can't change the choice of Amazon S3 storage after your data store is created.
"""
bucket: BucketName
keyPrefix: Optional[S3KeyPrefix] = None
class IotSiteWiseCustomerManagedDatastoreS3StorageSummary(BaseModel):
"""
Contains information about the data store that you manage, which stores data used by IoT SiteWise.
"""
bucket: Optional[BucketName] = None
keyPrefix: Optional[S3KeyPrefix] = None
class DatastoreIotSiteWiseMultiLayerStorageSummary(BaseModel):
"""
Contains information about the data store that you manage, which stores data used by IoT SiteWise.
"""
customerManagedS3Storage: Optional[
IotSiteWiseCustomerManagedDatastoreS3StorageSummary
] = None
class ServiceManagedDatastoreS3StorageSummary(CancelPipelineReprocessingResponse):
"""
Contains information about the data store that is managed by IoT Analytics.
"""
pass
class DatastoreStorageSummary(BaseModel):
"""
Contains information about your data store.
"""
serviceManagedS3: Optional[ServiceManagedDatastoreS3StorageSummary] = None
customerManagedS3: Optional[CustomerManagedDatastoreS3StorageSummary] = None
iotSiteWiseMultiLayerStorage: Optional[
DatastoreIotSiteWiseMultiLayerStorageSummary
] = None
class FileFormatType(Enum):
JSON = 'JSON'
PARQUET = 'PARQUET'
class DeleteChannelRequest(BaseModel):
pass
class DeleteDatasetContentRequest(BaseModel):
pass
class DeleteDatasetRequest(BaseModel):
pass
class DeleteDatastoreRequest(BaseModel):
pass
class DeletePipelineRequest(BaseModel):
pass
class OffsetSeconds(BaseModel):
__root__: int
class TimeExpression(MessagePayload):
pass
class DeltaTime(BaseModel):
"""
Used to limit data to that which has arrived since the last execution of the action.
"""
offsetSeconds: OffsetSeconds
timeExpression: TimeExpression
class SessionTimeoutInMinutes(BaseModel):
__root__: Annotated[int, Field(ge=1.0, le=60.0)]
class DeltaTimeSessionWindowConfiguration(BaseModel):
"""
<p>A structure that contains the configuration information of a delta time session window.</p> <p> <a href="https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DeltaTime.html"> <code>DeltaTime</code> </a> specifies a time interval. You can use <code>DeltaTime</code> to create dataset contents with data that has arrived in the data store since the last execution. For an example of <code>DeltaTime</code>, see <a href="https://docs.aws.amazon.com/iotanalytics/latest/userguide/automate-create-dataset.html#automate-example6"> Creating a SQL dataset with a delta window (CLI)</a> in the <i>IoT Analytics User Guide</i>.</p>
"""
timeoutInMinutes: SessionTimeoutInMinutes
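As the docstring above describes, `DeltaTime` limits a query to data that arrived since the last execution, with `offsetSeconds` pushing the window's end back so late-arriving messages are picked up by a later run instead of being missed. A hypothetical stdlib sketch of that window arithmetic:

```python
from datetime import datetime, timedelta


def delta_time_window(last_run: datetime, now: datetime, offset_seconds: int):
    """Return the (start, end) window of data to read on this execution.

    Ending the window `offset_seconds` before now leaves room for messages
    still in flight; the next execution's window starts where this one ends.
    """
    return last_run, now - timedelta(seconds=offset_seconds)
```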
class IncludeStatisticsFlag(UnlimitedRetentionPeriod):
pass
class DescribeChannelRequest(BaseModel):
pass
class DescribeDatasetRequest(BaseModel):
pass
class DescribeDatastoreRequest(BaseModel):
pass
class DescribeLoggingOptionsRequest(BaseModel):
pass
class LoggingOptions(BaseModel):
"""
Information about logging options.
"""
roleArn: RoleArn
level: LoggingLevel
enabled: LoggingEnabled
class DescribePipelineRequest(BaseModel):
pass
class DoubleValue(BaseModel):
__root__: float
class EndTime(Timestamp):
pass
class SizeInBytes(DoubleValue):
pass
class FilterExpression(TagKey):
pass
class GetDatasetContentRequest(BaseModel):
pass
class GlueTableName(BaseModel):
__root__: Annotated[str, Field(max_length=150, min_length=1)]
class GlueDatabaseName(GlueTableName):
pass
class GlueConfiguration(BaseModel):
"""
Configuration information for coordination with Glue, a fully managed extract, transform and load (ETL) service.
"""
tableName: GlueTableName
databaseName: GlueDatabaseName
class IotEventsInputName(BaseModel):
__root__: Annotated[
str, Field(max_length=128, min_length=1, regex='^[a-zA-Z][a-zA-Z0-9_]*$')
]
class LambdaName(BaseModel):
__root__: Annotated[
str, Field(max_length=64, min_length=1, regex='^[a-zA-Z0-9_-]+$')
]
class LateDataRuleName(DatasetActionName):
pass
class LateDataRuleConfiguration(BaseModel):
"""
The information needed to configure a delta time session window.
"""
deltaTimeSessionWindowConfiguration: Optional[
DeltaTimeSessionWindowConfiguration
] = None
class NextToken(MessagePayload):
pass
class MaxResults(BaseModel):
__root__: Annotated[int, Field(ge=1.0, le=250.0)]
class ListChannelsRequest(BaseModel):
pass
class ListDatasetContentsRequest(BaseModel):
pass
class ListDatasetsRequest(BaseModel):
pass
class ListDatastoresRequest(BaseModel):
pass
class ListPipelinesRequest(BaseModel):
pass
class ResourceArn(RoleArn):
pass
class ListTagsForResourceRequest(BaseModel):
pass
class LogResult(MessagePayload):
pass
class MathExpression(TagKey):
pass
class MaxMessages(BaseModel):
__root__: Annotated[int, Field(ge=1.0, le=10.0)]
class MessagePayloads(BaseModel):
__root__: Annotated[List[MessagePayload], Field(max_items=10, min_items=1)]
class OutputFileName(BaseModel):
__root__: Annotated[str, Field(regex='[\\w\\.-]{1,255}')]
class OutputFileUriValue(BaseModel):
"""
The value of the variable as a structure that specifies an output file URI.
"""
fileName: OutputFileName
class SchemaDefinition(BaseModel):
"""
Information needed to define a schema.
"""
columns: Optional[Columns] = None
class PartitionAttributeName(DatasetActionName):
pass
class PutLoggingOptionsRequest(BaseModel):
loggingOptions: LoggingOptions
class QueryFilter(BaseModel):
"""
Information that is used to filter message data, to segregate it according to the timeframe in which it arrives.
"""
deltaTime: Optional[DeltaTime] = None
class QueryFilters(BaseModel):
__root__: Annotated[List[QueryFilter], Field(max_items=1, min_items=0)]
class ReprocessingStatus(Enum):
RUNNING = 'RUNNING'
SUCCEEDED = 'SUCCEEDED'
CANCELLED = 'CANCELLED'
FAILED = 'FAILED'
class ReprocessingSummary(BaseModel):
"""
Information about pipeline reprocessing.
"""
id: Optional[ReprocessingId] = None
status: Optional[ReprocessingStatus] = None
creationTime: Optional[Timestamp] = None
class VolumeSizeInGB(BaseModel):
__root__: Annotated[int, Field(ge=1.0, le=50.0)]
class S3PathChannelMessage(BaseModel):
__root__: Annotated[
str,
Field(
max_length=1024,
min_length=1,
regex="^[a-zA-Z0-9/_!'(){}\\*\\s\\.\\-\\=\\:]+$",
),
]
class StartTime(Timestamp):
pass
class SampleChannelDataRequest(BaseModel):
pass
class ScheduleExpression(MessagePayload):
pass
class SqlQuery(MessagePayload):
pass
class StringValue(BaseModel):
__root__: Annotated[str, Field(max_length=1024, min_length=0)]
class TagValue(TagKey):
pass
class TagKeyList(BaseModel):
__root__: Annotated[List[TagKey], Field(max_items=50, min_items=1)]
class TimestampFormat(BaseModel):
__root__: Annotated[
str, Field(max_length=50, min_length=1, regex="^[a-zA-Z0-9\\s\\[\\]_,.'/:-]*$")
]
class UntagResourceRequest(BaseModel):
pass
class VariableName(TagKey):
pass
class Variable(BaseModel):
"""
An instance of a variable to be passed to the <code>containerAction</code> execution. Each variable must have a name and a value given by one of <code>stringValue</code>, <code>datasetContentVersionValue</code>, or <code>outputFileUriValue</code>.
"""
name: VariableName
stringValue: Optional[StringValue] = None
doubleValue: Optional[DoubleValue] = None
datasetContentVersionValue: Optional[DatasetContentVersionValue] = None
outputFileUriValue: Optional[OutputFileUriValue] = None
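The `Variable` docstring above requires exactly one of the value fields to be set, but the generated model marks them all optional and does not enforce that on its own. A hypothetical helper a caller could use before submitting a variable:

```python
def exactly_one_value(*values) -> bool:
    """Return True when exactly one of the candidate values is not None."""
    return sum(v is not None for v in values) == 1
```

For example, `exactly_one_value(string_value, dataset_content_version_value, output_file_uri_value)` should be true for a well-formed variable.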
class Message(BaseModel):
"""
Information about a message.
"""
messageId: MessageId
payload: MessagePayload
class CreateChannelResponse(BaseModel):
channelName: Optional[ChannelName] = None
channelArn: Optional[ChannelArn] = None
retentionPeriod: Optional[RetentionPeriod] = None
class CustomerManagedChannelS3Storage(BaseModel):
"""
Used to store channel data in an S3 bucket that you manage. If customer-managed storage is selected, the <code>retentionPeriod</code> parameter is ignored. You can't change the choice of S3 storage after the data store is created.
"""
bucket: BucketName
keyPrefix: Optional[S3KeyPrefix] = None
roleArn: RoleArn
class Tag(BaseModel):
"""
A set of key-value pairs that are used to manage the resource.
"""
key: TagKey
value: TagValue
class CreateDatasetResponse(BaseModel):
datasetName: Optional[DatasetName] = None
datasetArn: Optional[DatasetArn] = None
retentionPeriod: Optional[RetentionPeriod] = None
class LateDataRule(BaseModel):
"""
A structure that contains the name and configuration information of a late data rule.
"""
ruleName: Optional[LateDataRuleName] = None
ruleConfiguration: LateDataRuleConfiguration
class CreateDatasetContentResponse(BaseModel):
versionId: Optional[DatasetContentVersion] = None
class CreateDatastoreResponse(BaseModel):
datastoreName: Optional[DatastoreName] = None
datastoreArn: Optional[DatastoreArn] = None
retentionPeriod: Optional[RetentionPeriod] = None
class CustomerManagedDatastoreS3Storage(CustomerManagedChannelS3Storage):
"""
    Stores data in an Amazon S3 bucket that you manage. When you choose customer-managed storage, the <code>retentionPeriod</code> parameter is ignored. You can't change the choice of Amazon S3 storage after your data store is created.
"""
pass
class DatastoreIotSiteWiseMultiLayerStorage(BaseModel):
"""
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. You can't change the choice of Amazon S3 storage after your data store is created.
"""
customerManagedS3Storage: IotSiteWiseCustomerManagedDatastoreS3Storage
class ParquetConfiguration(BaseModel):
"""
Contains the configuration information of the Parquet format.
"""
schemaDefinition: Optional[SchemaDefinition] = None
class CreatePipelineResponse(BaseModel):
pipelineName: Optional[PipelineName] = None
pipelineArn: Optional[PipelineArn] = None
class DescribeLoggingOptionsResponse(BaseModel):
loggingOptions: Optional[LoggingOptions] = None
class ListDatasetContentsResponse(BaseModel):
datasetContentSummaries: Optional[DatasetContentSummaries] = None
nextToken: Optional[NextToken] = None
class RunPipelineActivityResponse(BaseModel):
payloads: Optional[MessagePayloads] = None
logResult: Optional[LogResult] = None
class ChannelActivity(BaseModel):
"""
The activity that determines the source of the messages to be processed.
"""
name: ActivityName
channelName: ChannelName
next: Optional[ActivityName] = None
class LambdaActivity(BaseModel):
"""
An activity that runs a Lambda function to modify the message.
"""
name: ActivityName
lambdaName: LambdaName
batchSize: ActivityBatchSize
next: Optional[ActivityName] = None
class DatastoreActivity(BaseModel):
"""
The datastore activity that specifies where to store the processed data.
"""
name: ActivityName
datastoreName: DatastoreName
class AddAttributesActivity(BaseModel):
"""
An activity that adds other attributes based on existing attributes in the message.
"""
name: ActivityName
attributes: AttributeNameMapping
next: Optional[ActivityName] = None
class RemoveAttributesActivity(BaseModel):
"""
An activity that removes attributes from a message.
"""
name: ActivityName
attributes: AttributeNames
next: Optional[ActivityName] = None
class SelectAttributesActivity(RemoveAttributesActivity):
"""
Used to create a new message using only the specified attributes from the original message.
"""
pass
class FilterActivity(BaseModel):
"""
An activity that filters a message based on its attributes.
"""
name: ActivityName
filter: FilterExpression
next: Optional[ActivityName] = None
class MathActivity(BaseModel):
"""
An activity that computes an arithmetic expression using the message's attributes.
"""
name: ActivityName
attribute: AttributeName
math: MathExpression
next: Optional[ActivityName] = None
class DeviceRegistryEnrichActivity(BaseModel):
"""
An activity that adds data from the IoT device registry to your message.
"""
name: ActivityName
attribute: AttributeName
thingName: AttributeName
roleArn: RoleArn
next: Optional[ActivityName] = None
class DeviceShadowEnrichActivity(DeviceRegistryEnrichActivity):
"""
An activity that adds information from the IoT Device Shadow service to a message.
"""
pass
class SampleChannelDataResponse(BaseModel):
payloads: Optional[MessagePayloads] = None
class StartPipelineReprocessingResponse(BaseModel):
reprocessingId: Optional[ReprocessingId] = None
class S3PathChannelMessages(BaseModel):
__root__: Annotated[List[S3PathChannelMessage], Field(max_items=100, min_items=1)]
class BatchPutMessageErrorEntry(BaseModel):
"""
    Contains information about errors.
"""
messageId: Optional[MessageId] = None
errorCode: Optional[ErrorCode] = None
errorMessage: Optional[ErrorMessage] = None
class BatchPutMessageErrorEntries(BaseModel):
__root__: List[BatchPutMessageErrorEntry]
class Messages(BaseModel):
__root__: List[Message]
class BatchPutMessageRequest(BaseModel):
channelName: ChannelName
messages: Messages
class ChannelStorage(BaseModel):
"""
    Where channel data is stored. You can choose <code>serviceManagedS3</code> or <code>customerManagedS3</code> storage. If not specified, the default is <code>serviceManagedS3</code>. This can't be changed after creation of the channel.
"""
serviceManagedS3: Optional[ServiceManagedChannelS3Storage] = None
customerManagedS3: Optional[CustomerManagedChannelS3Storage] = None
class Channel(BaseModel):
"""
A collection of data from an MQTT topic. Channels archive the raw, unprocessed messages before publishing the data to a pipeline.
"""
name: Optional[ChannelName] = None
storage: Optional[ChannelStorage] = None
arn: Optional[ChannelArn] = None
status: Optional[ChannelStatus] = None
retentionPeriod: Optional[RetentionPeriod] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
lastMessageArrivalTime: Optional[Timestamp] = None
class ChannelMessages(BaseModel):
"""
Specifies one or more sets of channel messages.
"""
s3Paths: Optional[S3PathChannelMessages] = None
class EstimatedResourceSize(BaseModel):
"""
The estimated size of the resource.
"""
estimatedSizeInBytes: Optional[SizeInBytes] = None
estimatedOn: Optional[Timestamp] = None
class ChannelStatistics(BaseModel):
"""
Statistics information about the channel.
"""
size: Optional[EstimatedResourceSize] = None
class CustomerManagedChannelS3StorageSummary(CustomerManagedDatastoreS3StorageSummary):
"""
Used to store channel data in an S3 bucket that you manage.
"""
pass
class ChannelStorageSummary(BaseModel):
"""
Where channel data is stored.
"""
serviceManagedS3: Optional[ServiceManagedChannelS3StorageSummary] = None
customerManagedS3: Optional[CustomerManagedChannelS3StorageSummary] = None
class ChannelSummary(BaseModel):
"""
A summary of information about a channel.
"""
channelName: Optional[ChannelName] = None
channelStorage: Optional[ChannelStorageSummary] = None
status: Optional[ChannelStatus] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
lastMessageArrivalTime: Optional[Timestamp] = None
class ChannelSummaries(BaseModel):
__root__: List[ChannelSummary]
class ResourceConfiguration(BaseModel):
"""
The configuration of the resource used to execute the <code>containerAction</code>.
"""
computeType: ComputeType
volumeSizeInGB: VolumeSizeInGB
class Variables(BaseModel):
__root__: Annotated[List[Variable], Field(max_items=50, min_items=0)]
class ContainerDatasetAction(BaseModel):
"""
Information required to run the <code>containerAction</code> to produce dataset contents.
"""
image: Image
executionRoleArn: RoleArn
resourceConfiguration: ResourceConfiguration
variables: Optional[Variables] = None
class TagList(BaseModel):
__root__: Annotated[List[Tag], Field(max_items=50, min_items=1)]
class CreateChannelRequest(BaseModel):
channelName: ChannelName
channelStorage: Optional[ChannelStorage] = None
retentionPeriod: Optional[RetentionPeriod] = None
tags: Optional[TagList] = None
class LateDataRules(BaseModel):
__root__: Annotated[List[LateDataRule], Field(max_items=1, min_items=1)]
class DatastoreStorage(BaseModel):
"""
    Where data in a data store is stored. You can choose <code>serviceManagedS3</code> storage, <code>customerManagedS3</code> storage, or <code>iotSiteWiseMultiLayerStorage</code> storage. The default is <code>serviceManagedS3</code>. You can't change the choice of Amazon S3 storage after your data store is created.
"""
serviceManagedS3: Optional[ServiceManagedDatastoreS3Storage] = None
customerManagedS3: Optional[CustomerManagedDatastoreS3Storage] = None
iotSiteWiseMultiLayerStorage: Optional[DatastoreIotSiteWiseMultiLayerStorage] = None
class FileFormatConfiguration(BaseModel):
"""
<p>Contains the configuration information of file formats. IoT Analytics data stores support JSON and <a href="https://parquet.apache.org/">Parquet</a>.</p> <p>The default file format is JSON. You can specify only one format.</p> <p>You can't change the file format after you create the data store.</p>
"""
jsonConfiguration: Optional[JsonConfiguration] = None
parquetConfiguration: Optional[ParquetConfiguration] = None
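Per the `FileFormatConfiguration` docstring above, JSON is the default and only one format may be specified. A hypothetical resolution helper illustrating that rule:

```python
def resolve_file_format(json_configuration=None, parquet_configuration=None) -> str:
    """Return the effective file format name, defaulting to JSON."""
    if json_configuration is not None and parquet_configuration is not None:
        raise ValueError("Only one file format may be specified")
    return "PARQUET" if parquet_configuration is not None else "JSON"
```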
class SqlQueryDatasetAction(BaseModel):
"""
The SQL query to modify the message.
"""
sqlQuery: SqlQuery
filters: Optional[QueryFilters] = None
class DatasetActionSummary(BaseModel):
"""
Information about the action that automatically creates the dataset's contents.
"""
actionName: Optional[DatasetActionName] = None
actionType: Optional[DatasetActionType] = None
class DatasetActionSummaries(BaseModel):
__root__: Annotated[List[DatasetActionSummary], Field(max_items=1, min_items=1)]
class IotEventsDestinationConfiguration(BaseModel):
"""
Configuration information for delivery of dataset contents to IoT Events.
"""
inputName: IotEventsInputName
roleArn: RoleArn
class S3DestinationConfiguration(BaseModel):
"""
Configuration information for delivery of dataset contents to Amazon Simple Storage Service (Amazon S3).
"""
bucket: BucketName
key: BucketKeyExpression
glueConfiguration: Optional[GlueConfiguration] = None
roleArn: RoleArn
class DatasetContentDeliveryDestination(BaseModel):
"""
The destination to which dataset contents are delivered.
"""
iotEventsDestinationConfiguration: Optional[
IotEventsDestinationConfiguration
] = None
s3DestinationConfiguration: Optional[S3DestinationConfiguration] = None
class DatasetEntry(BaseModel):
"""
The reference to a dataset entry.
"""
entryName: Optional[EntryName] = None
dataURI: Optional[PresignedURI] = None
class DatasetEntries(BaseModel):
__root__: List[DatasetEntry]
class Schedule(BaseModel):
"""
The schedule for when to trigger an update.
"""
expression: Optional[ScheduleExpression] = None
class Partition(BaseModel):
"""
A partition dimension defined by an attribute.
"""
attributeName: PartitionAttributeName
class TimestampPartition(BaseModel):
"""
A partition dimension defined by a timestamp attribute.
"""
attributeName: PartitionAttributeName
timestampFormat: Optional[TimestampFormat] = None
class DatastorePartition(BaseModel):
"""
A single dimension to partition a data store. The dimension must be an <code>AttributePartition</code> or a <code>TimestampPartition</code>.
"""
attributePartition: Optional[Partition] = None
timestampPartition: Optional[TimestampPartition] = None
class DatastoreStatistics(ChannelStatistics):
"""
Statistical information about the data store.
"""
pass
class ReprocessingSummaries(BaseModel):
__root__: List[ReprocessingSummary]
class PipelineSummary(BaseModel):
"""
A summary of information about a pipeline.
"""
pipelineName: Optional[PipelineName] = None
reprocessingSummaries: Optional[ReprocessingSummaries] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
class StartPipelineReprocessingRequest(BaseModel):
startTime: Optional[StartTime] = None
endTime: Optional[EndTime] = None
channelMessages: Optional[ChannelMessages] = None
class TagResourceRequest(BaseModel):
tags: TagList
class UpdateChannelRequest(BaseModel):
channelStorage: Optional[ChannelStorage] = None
retentionPeriod: Optional[RetentionPeriod] = None
class UpdateDatastoreRequest(BaseModel):
retentionPeriod: Optional[RetentionPeriod] = None
datastoreStorage: Optional[DatastoreStorage] = None
fileFormatConfiguration: Optional[FileFormatConfiguration] = None
class BatchPutMessageResponse(BaseModel):
batchPutMessageErrorEntries: Optional[BatchPutMessageErrorEntries] = None
class DatasetAction(BaseModel):
"""
A <code>DatasetAction</code> object that specifies how dataset contents are automatically created.
"""
actionName: Optional[DatasetActionName] = None
queryAction: Optional[SqlQueryDatasetAction] = None
containerAction: Optional[ContainerDatasetAction] = None
class DatasetTrigger(BaseModel):
"""
The <code>DatasetTrigger</code> that specifies when the dataset is automatically updated.
"""
schedule: Optional[Schedule] = None
dataset: Optional[TriggeringDataset] = None
class DatasetContentDeliveryRule(BaseModel):
"""
    When dataset contents are created, they are delivered to the destination specified here.
"""
entryName: Optional[EntryName] = None
destination: DatasetContentDeliveryDestination
class Partitions(BaseModel):
__root__: Annotated[List[DatastorePartition], Field(max_items=25, min_items=0)]
class PipelineActivity(BaseModel):
"""
An activity that performs a transformation on a message.
"""
channel: Optional[ChannelActivity] = None
lambda_: Annotated[Optional[LambdaActivity], Field(alias='lambda')] = None
datastore: Optional[DatastoreActivity] = None
addAttributes: Optional[AddAttributesActivity] = None
removeAttributes: Optional[RemoveAttributesActivity] = None
selectAttributes: Optional[SelectAttributesActivity] = None
filter: Optional[FilterActivity] = None
math: Optional[MathActivity] = None
deviceRegistryEnrich: Optional[DeviceRegistryEnrichActivity] = None
deviceShadowEnrich: Optional[DeviceShadowEnrichActivity] = None
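`PipelineActivity` above exposes the wire field `lambda` as `lambda_` because `lambda` is a Python keyword; `Field(alias='lambda')` maps it back on (de)serialization. The same keyword-aliasing idea in a minimal stdlib sketch (`to_python_field`/`to_wire_field` are hypothetical helpers, not part of the generated module):

```python
import keyword


def to_python_field(wire_name: str) -> str:
    """Append an underscore when a wire name collides with a Python keyword."""
    return wire_name + "_" if keyword.iskeyword(wire_name) else wire_name


def to_wire_field(python_name: str) -> str:
    """Invert to_python_field for serialization back to the wire format."""
    stripped = python_name.rstrip("_")
    return stripped if keyword.iskeyword(stripped) else python_name
```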
class DescribeChannelResponse(BaseModel):
channel: Optional[Channel] = None
statistics: Optional[ChannelStatistics] = None
class GetDatasetContentResponse(BaseModel):
entries: Optional[DatasetEntries] = None
timestamp: Optional[Timestamp] = None
status: Optional[DatasetContentStatus] = None
class ListChannelsResponse(BaseModel):
channelSummaries: Optional[ChannelSummaries] = None
nextToken: Optional[NextToken] = None
class ListTagsForResourceResponse(BaseModel):
tags: Optional[TagList] = None
class DatasetActions(BaseModel):
__root__: Annotated[List[DatasetAction], Field(max_items=1, min_items=1)]
class DatasetTriggers(BaseModel):
__root__: Annotated[List[DatasetTrigger], Field(max_items=5, min_items=0)]
class DatasetContentDeliveryRules(BaseModel):
__root__: Annotated[
List[DatasetContentDeliveryRule], Field(max_items=20, min_items=0)
]
class CreateDatasetRequest(BaseModel):
datasetName: DatasetName
actions: DatasetActions
triggers: Optional[DatasetTriggers] = None
contentDeliveryRules: Optional[DatasetContentDeliveryRules] = None
retentionPeriod: Optional[RetentionPeriod] = None
versioningConfiguration: Optional[VersioningConfiguration] = None
tags: Optional[TagList] = None
lateDataRules: Optional[LateDataRules] = None
class DatastorePartitions(BaseModel):
"""
Contains information about the partition dimensions in a data store.
"""
partitions: Optional[Partitions] = None
class CreateDatastoreRequest(BaseModel):
datastoreName: DatastoreName
datastoreStorage: Optional[DatastoreStorage] = None
retentionPeriod: Optional[RetentionPeriod] = None
tags: Optional[TagList] = None
fileFormatConfiguration: Optional[FileFormatConfiguration] = None
datastorePartitions: Optional[DatastorePartitions] = None
class PipelineActivities(BaseModel):
__root__: Annotated[List[PipelineActivity], Field(max_items=25, min_items=1)]
class CreatePipelineRequest(BaseModel):
pipelineName: PipelineName
pipelineActivities: PipelineActivities
tags: Optional[TagList] = None
class Dataset(BaseModel):
"""
Information about a dataset.
"""
name: Optional[DatasetName] = None
arn: Optional[DatasetArn] = None
actions: Optional[DatasetActions] = None
triggers: Optional[DatasetTriggers] = None
contentDeliveryRules: Optional[DatasetContentDeliveryRules] = None
status: Optional[ChannelStatus] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
retentionPeriod: Optional[RetentionPeriod] = None
versioningConfiguration: Optional[VersioningConfiguration] = None
lateDataRules: Optional[LateDataRules] = None
class DatasetSummary(BaseModel):
"""
A summary of information about a dataset.
"""
datasetName: Optional[DatasetName] = None
status: Optional[ChannelStatus] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
triggers: Optional[DatasetTriggers] = None
actions: Optional[DatasetActionSummaries] = None
class DatasetSummaries(BaseModel):
__root__: List[DatasetSummary]
class Datastore(BaseModel):
"""
Information about a data store.
"""
name: Optional[DatastoreName] = None
storage: Optional[DatastoreStorage] = None
arn: Optional[DatastoreArn] = None
status: Optional[ChannelStatus] = None
retentionPeriod: Optional[RetentionPeriod] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
lastMessageArrivalTime: Optional[Timestamp] = None
fileFormatConfiguration: Optional[FileFormatConfiguration] = None
datastorePartitions: Optional[DatastorePartitions] = None
class DatastoreSummary(BaseModel):
"""
A summary of information about a data store.
"""
datastoreName: Optional[DatastoreName] = None
datastoreStorage: Optional[DatastoreStorageSummary] = None
status: Optional[ChannelStatus] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
lastMessageArrivalTime: Optional[Timestamp] = None
fileFormatType: Optional[FileFormatType] = None
datastorePartitions: Optional[DatastorePartitions] = None
class DatastoreSummaries(BaseModel):
__root__: List[DatastoreSummary]
class Pipeline(BaseModel):
"""
Contains information about a pipeline.
"""
name: Optional[PipelineName] = None
arn: Optional[PipelineArn] = None
activities: Optional[PipelineActivities] = None
reprocessingSummaries: Optional[ReprocessingSummaries] = None
creationTime: Optional[Timestamp] = None
lastUpdateTime: Optional[Timestamp] = None
class PipelineSummaries(BaseModel):
__root__: List[PipelineSummary]
class RunPipelineActivityRequest(BaseModel):
pipelineActivity: PipelineActivity
payloads: MessagePayloads
class UpdateDatasetRequest(BaseModel):
actions: DatasetActions
triggers: Optional[DatasetTriggers] = None
contentDeliveryRules: Optional[DatasetContentDeliveryRules] = None
retentionPeriod: Optional[RetentionPeriod] = None
versioningConfiguration: Optional[VersioningConfiguration] = None
lateDataRules: Optional[LateDataRules] = None
class UpdatePipelineRequest(BaseModel):
pipelineActivities: PipelineActivities
class DescribeDatasetResponse(BaseModel):
dataset: Optional[Dataset] = None
class DescribeDatastoreResponse(BaseModel):
datastore: Optional[Datastore] = None
statistics: Optional[DatastoreStatistics] = None
class DescribePipelineResponse(BaseModel):
pipeline: Optional[Pipeline] = None
class ListDatasetsResponse(BaseModel):
datasetSummaries: Optional[DatasetSummaries] = None
nextToken: Optional[NextToken] = None
class ListDatastoresResponse(BaseModel):
datastoreSummaries: Optional[DatastoreSummaries] = None
nextToken: Optional[NextToken] = None
class ListPipelinesResponse(BaseModel):
pipelineSummaries: Optional[PipelineSummaries] = None
nextToken: Optional[NextToken] = None
# File: rylog/__init__.py (repo: Ryan-Holben/rylog, license: MIT)
"""
rylog
Logging happens across a 3-dimensional Cartesian product of:
1. The logging level: [debug, info, warn, error]
2. The logging category: e.g. software event, action, output
3. The detected function/method: e.g. my_class.class_method or foo
"""
from .misc import *
from .server import *
from .client import *
# File: configs.py (repo: rudyn2/visual-odometry, license: MIT)
import cv2
class StereoSGBMConfig:
min_disparity = 0
num_disparities = 16*10
sad_window_size = 3
uniqueness_ratio = 5
p1 = 16*sad_window_size*sad_window_size
p2 = 96*sad_window_size*sad_window_size
pre_filter_cap = 63
speckle_window_size = 0
speckle_range = 0
disp_max_diff = 1
mode = cv2.STEREO_SGBM_MODE_SGBM
class StereoSGBMConfig2:
pre_filter_cap = 63
sad_window_size = 3
p1 = sad_window_size * sad_window_size * 4
p2 = sad_window_size * sad_window_size * 32
min_disparity = 0
num_disparities = 128
uniqueness_ratio = 10
speckle_window_size = 100
speckle_range = 32
disp_max_diff = 1
full_dp = 1
mode = cv2.STEREO_SGBM_MODE_SGBM_3WAY
class MatcherConfig:
ransac = {
'max_iterations': 5,
'error_threshold': 50,
'min_consensus': 5
}
hough = {
'dxbin': 100,
'dangbin': 50,
'umbralvotos': 10
}
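Config classes like the ones above are typically consumed by collecting their class attributes into keyword arguments (e.g. for `cv2.StereoSGBM_create`). A minimal standalone sketch with no OpenCV dependency; `DemoConfig` and `config_to_kwargs` are illustrative, not part of this repo:

```python
class DemoConfig:
    # Mirrors the pattern above: derived values computed from a base setting.
    min_disparity = 0
    num_disparities = 128
    sad_window_size = 3
    p1 = sad_window_size * sad_window_size * 4
    p2 = sad_window_size * sad_window_size * 32

def config_to_kwargs(cfg):
    """Collect the public class attributes of a config class into a dict."""
    return {name: value for name, value in vars(cfg).items()
            if not name.startswith('_')}

kwargs = config_to_kwargs(DemoConfig)
print(kwargs['p1'], kwargs['p2'])  # 36 288
```

With OpenCV installed, the resulting dict could then be mapped onto the matching `cv2.StereoSGBM_create` parameters.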
# File: app/main.py (repo: athul/jimbru, license: MIT)
from fastapi import FastAPI
try:
    from routes import analytics, templates, auth
except ImportError:
    from .routes import analytics, templates, auth
app = FastAPI()
app.include_router(analytics.router)
app.include_router(templates.router)
app.include_router(auth.authr) | 21.75 | 48 | 0.800766 | 35 | 261 | 5.885714 | 0.371429 | 0.145631 | 0.23301 | 0.242718 | 0.368932 | 0.368932 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114943 | 261 | 12 | 49 | 21.75 | 0.891775 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
# File: policytools/master_list/actions_master_list_base.py (repo: samkeen/policy-tools, license: Apache-2.0)
import logging
from abc import ABC, abstractmethod
logger = logging.getLogger(__name__)
class ActionsMasterListBase(ABC):
"""
Base class meant to hold the entire Set of IAM resource actions.
It is up to a concrete class to implement a source document parser (parse_actions_source)
"""
def __init__(self, source_master):
"""
:param source_master:
:type source_master: str
"""
self._actions_set = self.parse_actions_source(source_master)
self._actions_set_case_insensitive_lookup = {resource_action.lower(): resource_action for resource_action in
self._actions_set}
super().__init__()
@abstractmethod
def parse_actions_source(self, source_master):
"""
:param source_master:
:type source_master: str
:return:
:rtype: set
"""
pass
@abstractmethod
def all_actions_for_resource(self, resource_name):
"""
This must return a sorted list of all actions for the given resource
:param resource_name:
:type resource_name: str
:return:
:rtype: list
"""
def all_actions_set(self, lower=False):
return set(item.lower() for item in self._actions_set) if lower else self._actions_set
def lookup_action(self, action):
"""
Case insensitive lookup for all known actions. Returned in PascalCase
:param action:
:type action: str
:return:
:rtype: str
"""
return self._actions_set_case_insensitive_lookup.get(action.lower())
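A concrete subclass only has to supply the two abstract methods. A self-contained sketch using a trimmed copy of the base class and a hypothetical newline-separated source format (`NewlineActionsList` is illustrative only):

```python
from abc import ABC, abstractmethod

class ActionsListBase(ABC):
    # Trimmed copy of ActionsMasterListBase for illustration.
    def __init__(self, source_master):
        self._actions_set = self.parse_actions_source(source_master)
        self._lookup = {a.lower(): a for a in self._actions_set}

    @abstractmethod
    def parse_actions_source(self, source_master):
        ...

    def lookup_action(self, action):
        # Case-insensitive lookup, returned in the original PascalCase.
        return self._lookup.get(action.lower())

class NewlineActionsList(ActionsListBase):
    """Hypothetical parser: one 'service:Action' per line."""
    def parse_actions_source(self, source_master):
        return {line.strip() for line in source_master.splitlines() if line.strip()}

    def all_actions_for_resource(self, resource_name):
        prefix = resource_name.lower() + ':'
        return sorted(a for a in self._actions_set if a.lower().startswith(prefix))

ml = NewlineActionsList("s3:GetObject\ns3:PutObject\nec2:RunInstances")
print(ml.lookup_action("S3:GETOBJECT"))   # s3:GetObject
print(ml.all_actions_for_resource("s3"))  # ['s3:GetObject', 's3:PutObject']
```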
# File: src/hydep/simplerom.py (repo: CORE-GATECH-GROUP/hydep, license: MIT)
"""
Simple reduced order solver.
More of a no-op, in that it doesn't actually
perform a flux solution
"""
import numpy
from hydep.internal.features import FeatureCollection
from hydep.internal import TransportResult
from .lib import ReducedOrderSolver
class SimpleROSolver(ReducedOrderSolver):
"""The simplest reduced order flux solution where nothing happens"""
needs = FeatureCollection()
def __init__(self):
self._flux = None
def processBOS(self, txResult, _timestep, _power):
"""Store flux from a high fidelity transport solution"""
self._flux = txResult.flux
def substepSolve(self, *args, **kwargs):
"""Return the beginning-of-step flux with no modifications
Returns
-------
hydep.internal.TransportResult
Transport result with the flux provided in :meth:`processBOS`
"""
return TransportResult(self._flux, [numpy.nan, numpy.nan], runTime=numpy.nan)
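The caching pattern above (store the beginning-of-step flux, hand it back unchanged at substeps) can be sketched without the hydep/numpy machinery; `NoOpROM` is an illustrative stand-in, not part of the package:

```python
class NoOpROM:
    """Minimal stand-in for SimpleROSolver's flux pass-through."""
    def __init__(self):
        self._flux = None

    def process_bos(self, flux):
        # Store the flux from a high-fidelity transport solution.
        self._flux = flux

    def substep_solve(self):
        # Return the beginning-of-step flux with no modifications.
        return self._flux

rom = NoOpROM()
rom.process_bos([1.0, 2.0, 3.0])
print(rom.substep_solve())  # [1.0, 2.0, 3.0]
```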
# File: python-lab-file/64_capitalizefirshchar.py (repo: zshashz/py1729, license: MIT)
# Program 64 : Capitalize the First Character of a String
#
# produce ttGenEvent with all necessary ingredients
#
from TopQuarkAnalysis.TopEventProducers.producers.TopInitSubset_cfi import *
from TopQuarkAnalysis.TopEventProducers.producers.TopDecaySubset_cfi import *
from TopQuarkAnalysis.TopEventProducers.producers.TtGenEvtProducer_cfi import *
makeGenEvtTask = cms.Task(
initSubset,
decaySubset,
genEvt
)
makeGenEvt = cms.Sequence(makeGenEvtTask)
| 28.0625 | 79 | 0.830735 | 43 | 449 | 8.604651 | 0.627907 | 0.162162 | 0.3 | 0.372973 | 0.297297 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109131 | 449 | 15 | 80 | 29.933333 | 0.925 | 0.109131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
c27e04a4ce8b186ea59a6dc9c61fb5fd29af829e | 134 | py | Python | python-lab-file/64_capitalizefirshchar.py | zshashz/py1729 | 3281ae2a20c665ebcc0d53840cc95143cbe6861b | [
"MIT"
] | 1 | 2021-01-22T09:03:59.000Z | 2021-01-22T09:03:59.000Z | python-lab-file/64_capitalizefirshchar.py | zshashz/py1729 | 3281ae2a20c665ebcc0d53840cc95143cbe6861b | [
"MIT"
] | null | null | null | python-lab-file/64_capitalizefirshchar.py | zshashz/py1729 | 3281ae2a20c665ebcc0d53840cc95143cbe6861b | [
"MIT"
] | 2 | 2021-05-04T11:29:38.000Z | 2021-11-03T13:09:48.000Z | # Program 64 : Capitalize the First Character of a String
my_string = input()
cap_string = my_string.capitalize()
print(cap_string) | 19.142857 | 57 | 0.768657 | 20 | 134 | 4.95 | 0.65 | 0.161616 | 0.282828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.149254 | 134 | 7 | 58 | 19.142857 | 0.850877 | 0.410448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
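Worth noting: `str.capitalize` also lowercases every character after the first, so mixed-case input is flattened:

```python
print("hello WORLD".capitalize())  # Hello world
print("covid".capitalize())        # Covid
```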
# File: covidatx/plot.py (repo: Mlograda/covidatx, license: MIT)
from .data import CovidData
import datetime as dt
from matplotlib.offsetbox import AnchoredText
import pandas as pd
import seaborn as sns
import geopandas as gpd
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def pan_duration(date):
"""Return the duration in days of the pandemic.
As calculated from the gov.uk API. It subtracts the first date entry
in the API data from the most recent date entry.
Args:
date (datetime): DataFrame column (i.e Series) containing date
field as downloaded from the gov.uk API by get_national_data()
method from CovidData Class.
Returns:
datetime: Duration of pandemic in days as datetime object.
"""
return (date[0] - date[-1]).days
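The subtraction relies on the API returning dates newest-first; a standalone check with hypothetical dates:

```python
import datetime as dt

def pan_duration(date):
    # Newest entry at index 0, oldest at index -1 (gov.uk API ordering).
    return (date[0] - date[-1]).days

dates = [dt.date(2020, 3, 10), dt.date(2020, 3, 5), dt.date(2020, 3, 1)]
print(pan_duration(dates))  # 9
```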
def validate_input(df):
"""Check that input into the plotting functions is of the correct type.
Args:
df (Pandas DataFrame): this is intended to be the plotting parameter
Raises:
TypeError: if parameter is not a DataFrame
"""
# if for_function == 'deaths' or for_function == 'cases':
# expected_cols = {'cases_cumulative', 'cases_demographics',
# 'cases_newDaily', 'case_rate', 'date',
# 'death_Demographics', 'name', 'vac_firstDose',
# 'vac_secondDose'}
if not isinstance(df, pd.DataFrame):
raise TypeError('Parameter must be DataFrame, use get_regional_data'
+ ' method from CovidData class.')
# if set(df.columns) != expected_cols:
# raise ValueError('Incorrect features. Expecting output from'
# + ' get_regional_data method from CovidData class')
def my_path():
"""Find correct path at module level for geo_data files.
Returns:
        pathlib.Path: path to the geo_data directory bundled with the package.
"""
from pathlib import Path
base = Path(__file__).resolve().parent / 'geo_data'
return base
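The same pattern in isolation: resolve the module's directory, then append the data sub-folder (the paths below are hypothetical):

```python
from pathlib import Path

def data_path(module_file):
    # Directory containing the module, plus the bundled data sub-folder.
    return Path(module_file).resolve().parent / 'geo_data'

p = data_path('/tmp/covidatx/plot.py')
print(p.parent.name, p.name)  # covidatx geo_data
```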
def daily_case_plot(df, pan_duration=pan_duration, save=False):
"""Create a matplotlib plot of case numbers in the UK.
Calculated over the duration of the pandemic.Display text information
giving the most recent daily number, the highest daily number and the
date recorded, the total cumulative
number of cases and the duration of the pandemic in days.
Args:
df (DataFrame): containing covid data retrieved from CovidData
class using get_national_data() or get_UK_data() method.
pan_duration (function, optional): Defaults to pan_duration.
save (bool, optional): set True to save plot. Defaults to False.
Returns:
- Matplotlib plot, styled using matplotlib template 'ggplot'
"""
# Create Variables we wish to plot
cases = df['case_newCases'].to_list()
date = df['date'].to_list()
cumulative = df['case_cumulativeCases'].to_list()
# Find date of highest number of daily cases
    high = max(cases)
    arg_high = cases.index(high)
high_date = date[arg_high].strftime('%d %b %Y')
duration = pan_duration(date=date)
# Create matplotlib figure and specify size
fig = plt.figure(figsize=(12, 10))
plt.style.use('ggplot')
ax = fig.add_subplot()
# Plot varibles
ax.plot(date, cases)
# Style and label plot
ax.set_xlabel('Date')
ax.set_ylabel('Cases')
ax.fill_between(date, cases,
alpha=0.3)
ax.set_title('Number of people who tested positive for Covid-19 (UK)',
fontsize=18)
at = AnchoredText(f"Most recent new cases\n{cases[0]:,.0f}\
\nMax new cases\n{high:,.0f}: {high_date}\
\nCumulative cases\n{cumulative[0]:,.0f}\
\nPandemic duration\n{duration} days",
prop=dict(size=16), frameon=True, loc='upper left')
at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2")
ax.add_artist(at)
ax.annotate('Source: gov.uk https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, 0.0175), xycoords='figure fraction',
fontsize=12, color='#555555')
plt.style.use('ggplot')
if save:
        plt.savefig(f"{date[0].strftime('%Y-%m-%d')}-case_numbers_plot")
plt.show()
def regional_plot_cases(save=False):
"""Plot regional case numbers on a map of the UK.
Function collects data using CovidData get_regional_data method.
Args:
save (bool, optional): If true will save plot. Defaults to False.
Returns:
Plot of regional case numbers on map of UK
"""
# Collect data
regions = CovidData().get_regional_data()
scotland = CovidData(nation='scotland').get_national_data()
wales = CovidData(nation='wales').get_national_data()
ni = CovidData(nation='northern ireland').get_national_data()
regions = regions.assign(case_newCases=regions['cases_newDaily'])
# Set date to plot
date_selector = regions['date'][0]
regions_date = regions.loc[regions['date'] == date_selector]
scotland_date = \
scotland.loc[scotland['date'] == date_selector,
['date', 'name', 'case_newCases']]
wales_date = wales.loc[wales['date'] == date_selector,
['date', 'name', 'case_newCases']]
ni_date = ni.loc[ni['date'] == date_selector,
['date', 'name', 'case_newCases']]
# Combine regional data into single dataframe
final_df = pd.concat([regions_date, scotland_date, wales_date, ni_date],
axis=0)
file_path = my_path() / 'NUTS_Level_1_(January_2018)_Boundaries.shp'
# Check required file exists
try:
# Read shape file
geo_df = gpd.read_file(file_path)
    except Exception:  # shapefile read failed, e.g. geo_data sub-folder missing
print('Ensure you have imported geo_data sub-folder')
geo_df['nuts118nm'] = \
geo_df['nuts118nm'].replace(['North East (England)',
'North West (England)',
'East Midlands (England)',
'West Midlands (England)',
'South East (England)',
'South West (England)'],
['North East', 'North West',
'East Midlands', 'West Midlands',
'South East', 'South West'])
merged = geo_df.merge(final_df, how='left', left_on="nuts118nm",
right_on="name")
# Column to plot
feature = 'case_newCases'
# Plot range
feature_min, feature_max = merged['case_newCases'].min(), \
merged['case_newCases'].max()
# Create plot
fig, ax = plt.subplots(1, figsize=(12, 10))
# Set style and labels
ax.axis('off')
ax.set_title(f'Number of new cases per region {date_selector}',
fontdict={'fontsize': '18', 'fontweight': '3'})
ax.annotate('Source: gov.uk'
+ ' https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, .05), xycoords='figure fraction',
fontsize=12, color='#555555')
# Create colorbar
sm = plt.cm.ScalarMappable(cmap='Reds',
norm=plt.Normalize(vmin=feature_min,
vmax=feature_max))
fig.colorbar(sm)
# Create map
    merged.plot(column=feature, cmap='Reds', linewidth=0.8, ax=ax,
                edgecolor='0.8')
plt.show()
if save:
        image = merged.plot(column=feature, cmap='Reds', linewidth=0.8,
                            ax=ax, edgecolor='0.8')
image.figure.savefig(f'{date_selector}-regional_cases_plot')
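The join onto the shapefile above is an ordinary left merge on region name; on a toy frame (synthetic data, not real case numbers):

```python
import pandas as pd

geo = pd.DataFrame({'nuts118nm': ['North East', 'Wales', 'Scotland']})
covid = pd.DataFrame({'name': ['North East', 'Wales'],
                      'case_newCases': [100, 50]})
merged = geo.merge(covid, how='left', left_on='nuts118nm', right_on='name')
# Regions without a matching covid row get NaN, rendering as gaps on the map.
print(merged['case_newCases'].tolist())  # [100.0, 50.0, nan]
```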
def regional_plot_rate(save=False):
"""Plot regional case rate per 100,000 on a map of the UK.
Function collects data using CovidData get_regional_data method.
Args:
save (bool, optional): If true will save plot. Defaults to False.
Returns:
Plot of regional case rate on map of UK.
"""
# Collect data
regions = CovidData().get_regional_data()
scotland = CovidData(nation='scotland').get_national_data()
wales = CovidData(nation='wales').get_national_data()
ni = CovidData(nation='northern ireland').get_national_data()
# Set date to plot
date_selector = regions['date'][5]
regions_date = regions.loc[regions['date'] == date_selector]
scotland_date = scotland.loc[scotland['date'] == date_selector,
['date', 'name', 'case_rate']]
wales_date = wales.loc[wales['date'] == date_selector,
['date', 'name', 'case_rate']]
ni_date = ni.loc[ni['date'] == date_selector,
['date', 'name', 'case_rate']]
# Combine regional data into single dataframe
final_df = pd.concat([regions_date, scotland_date, wales_date, ni_date],
axis=0)
file_path = my_path() / 'NUTS_Level_1_(January_2018)_Boundaries.shp'
# Check required file exists
try:
# Read shape file
geo_df = gpd.read_file(file_path)
    except Exception:  # shapefile read failed, e.g. geo_data sub-folder missing
print('Ensure you have imported geo_data sub-folder')
geo_df['nuts118nm'] = \
geo_df['nuts118nm'].replace(['North East (England)',
'North West (England)',
'East Midlands (England)',
'West Midlands (England)',
'South East (England)',
'South West (England)'],
['North East', 'North West',
'East Midlands', 'West Midlands',
'South East', 'South West'])
merged = geo_df.merge(final_df, how='left', left_on="nuts118nm",
right_on="name")
# Column to plot
feature = 'case_rate'
# Plot range
feature_min, feature_max = merged['case_rate'].min(),\
merged['case_rate'].max()
# Create plot
fig, ax = plt.subplots(1, figsize=(12, 10))
# Set style and labels
ax.axis('off')
ax.set_title('Regional rate per 100,000 (new cases)',
fontdict={'fontsize': '20', 'fontweight': '3'})
ax.annotate('Source: gov.uk'
+ ' https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, .05), xycoords='figure fraction',
fontsize=12, color='#555555')
# Create colorbar
sm = plt.cm.ScalarMappable(cmap='Reds',
norm=plt.Normalize(vmin=feature_min,
vmax=feature_max))
fig.colorbar(sm)
# Create map
    merged.plot(column=feature, cmap='Reds', linewidth=0.8, ax=ax,
                edgecolor='0.8')
plt.show()
if save:
        image = merged.plot(column=feature, cmap='Reds', linewidth=0.8,
                            ax=ax, edgecolor='0.8')
image.figure.savefig(f'{date_selector}-regional_rate_plot')
def heatmap_cases(df):
"""Create heatmap of case numbers for duration of pandemic.
Args:
df (DataFrame): Covid case data retrieved by calling CovidData
class method.
Returns:
Seaborn heatmap plot of case numbers for each day of the pandemic.
"""
# Variables to plot
cases = df['case_newCases'].to_list()
date = df['date'].to_list()
# Create new DataFrame containing two columns: date and case numbers
heat_df = pd.DataFrame({'date': date, 'cases': cases}, index=date)
# Separate out date into year month and day
heat_df['year'] = heat_df.index.year
heat_df["month"] = heat_df.index.month
heat_df['day'] = heat_df.index.day
# Use groupby to convert data to wide format for heatmap plot
x = heat_df.groupby(["year", "month", "day"])["cases"].sum()
df_wide = x.unstack()
# Plot data
sns.set(rc={"figure.figsize": (12, 10)})
# Reverse colormap so that dark colours represent higher numbers
cmap = sns.cm.rocket_r
ax = sns.heatmap(df_wide, cmap=cmap)
ax.set_title('Heatmap of daily cases since start of pandemic',
fontsize=20)
ax.annotate('Source: gov.uk https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, 0.01), xycoords='figure fraction',
fontsize=12, color='#555555')
plt.show()
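The long-to-wide reshape used above can be shown on a tiny synthetic frame:

```python
import pandas as pd

df = pd.DataFrame({'month': [3, 3, 4], 'day': [1, 2, 1],
                   'cases': [10, 20, 30]})
# Sum per (month, day), then move 'day' into the columns for the heatmap.
wide = df.groupby(['month', 'day'])['cases'].sum().unstack()
print(wide.shape)  # (2, 2)
```

Missing (month, day) combinations become NaN cells, which the heatmap leaves blank.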
def local_rate_plot(save=False):
"""Plot local case rate per 100,000 on a map of the UK.
Function collects data using CovidData get_regional_data method.
Args:
save (bool, optional): If true will save plot. Defaults to False.
Returns:
Plot of local case rate on map of UK
"""
# Find latest data
recent_date = CovidData().get_regional_data()
recent_date = recent_date['date'][5]
# Select latest data from local data
local = CovidData().get_local_data(date=recent_date)
date_selector = recent_date
local_date = local.loc[local['date'] == date_selector,
['date', 'name', 'case_rate']]
file_path = my_path() / "Local_Authority_Districts.shp"
# Check required file exists
try:
# Read shape file
geo_df = gpd.read_file(file_path)
    except Exception:  # shapefile read failed, e.g. geo_data sub-folder missing
print('Ensure you have imported geo_data sub-folder')
local_date['name'] = \
local_date['name'].replace(['Cornwall and Isles of Scilly'],
['Cornwall'])
merged = geo_df.merge(local_date, how='outer',
left_on="lad19nm", right_on="name")
# Column to plot
feature = 'case_rate'
# Plot range
vmin, vmax = merged['case_rate'].min(), merged['case_rate'].max()
# Create plot
fig, ax = plt.subplots(1, figsize=(12, 10))
# Set style and labels
ax.axis('off')
ax.set_title(f'Local rate per 100,000 {recent_date}',
fontdict={'fontsize': '20', 'fontweight': '3'})
ax.annotate('Source: gov.uk'
+ ' https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, .05), xycoords='figure fraction',
fontsize=12, color='#555555')
# Create colorbar
sm = plt.cm.ScalarMappable(cmap='Reds',
norm=plt.Normalize(vmin=vmin, vmax=vmax))
fig.colorbar(sm)
# Create map
merged.plot(column=feature, cmap='Reds', linewidth=0.2, ax=ax,
edgecolor='0.8')
plt.show()
if save:
        image = merged.plot(column=feature, cmap='Reds', linewidth=0.2,
                            ax=ax, edgecolor='0.8')
image.figure.savefig(f'{date_selector}-local_rate_plot')
def local_cases_plot(save=False):
"""Plot local case numbers on a map of the UK.
Function collects data using CovidData get_regional_data method.
Args:
        save (bool, optional): If true will save plot. Defaults to False.

    Returns:
        Plot of local case numbers on map of UK.
    """
# Find latest data
recent_date = CovidData().get_regional_data()
recent_date = recent_date['date'][0]
# Select latest data from local data
local = CovidData().get_local_data(date=recent_date)
date_selector = recent_date
local_date = local.loc[local['date'] == date_selector,
['date', 'name', 'case_newDaily']]
file_path = my_path() / "Local_Authority_Districts.shp"
# Check required file exists
try:
# Read shape file
geo_df = gpd.read_file(file_path)
    except Exception:  # shapefile read failed, e.g. geo_data sub-folder missing
print('Ensure you have imported geo_data sub-folder')
local_date['name'] = \
local_date['name'].replace(['Cornwall and Isles of Scilly'],
['Cornwall'])
merged = geo_df.merge(local_date, how='outer',
left_on="lad19nm", right_on="name")
# Column to plot
feature = 'case_newDaily'
# Plot range
vmin, vmax = merged['case_newDaily'].min(), \
merged['case_newDaily'].max()
# Create plot
fig, ax = plt.subplots(1, figsize=(12, 10))
# Set style and labels
ax.axis('off')
ax.set_title(f'Number of new cases by local authority {recent_date}',
fontdict={'fontsize': '20', 'fontweight': '3'})
ax.annotate('Source: gov.uk'
+ ' https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, .05), xycoords='figure fraction',
fontsize=12, color='#555555')
# Create colorbar
sm = plt.cm.ScalarMappable(cmap='Reds',
norm=plt.Normalize(vmin=vmin, vmax=vmax))
fig.colorbar(sm)
# Create map
merged.plot(column=feature, cmap='Reds', linewidth=0.2, ax=ax,
edgecolor='0.8')
plt.show()
if save:
        image = merged.plot(column=feature, cmap='Reds', linewidth=0.2,
                            ax=ax, edgecolor='0.8')
image.figure.savefig(f'{date_selector}-local_cases_plot')
def case_demographics(df):
"""Produce a plot of the age demographics of cases across England.
Args:
df (DataFrame): this must be the dataframe produced by the
get_regional_data method from the CovidData class
Returns:
Plot of case numbers broken down by age
"""
validate_input(df)
df_list = df.loc[:, ['cases_demographics', 'date']]
age_df = []
for i in range(df_list.shape[0]):
if df_list.iloc[i, 0]:
temp_df = pd.DataFrame(df_list.iloc[i, 0])
temp_df['date'] = df_list.iloc[i, 1]
temp_df = temp_df.pivot(values='rollingRate',
columns='age', index='date')
age_df.append(temp_df)
data = pd.concat(age_df)
data.index = pd.to_datetime(data.index)
data = \
data.assign(under_15=(data['00_04']+data['05_09']+data['10_14'])/3,
age_15_29=(data['15_19']+data['20_24']+data['25_29'])/3,
age_30_39=(data['30_34']+data['35_39'])/2,
age_40_49=(data['40_44']+data['45_49'])/2,
age_50_59=(data['50_54']+data['55_59'])/2)
data.drop(columns=['00_04', '00_59', '05_09', '10_14', '15_19', '20_24',
'25_29', '30_34', '35_39', '40_44', '45_49', '50_54',
'55_59', '60_64', '65_69', '70_74', '75_79', '80_84',
'85_89', '90+', 'unassigned'], inplace=True)
date = data.index[0].strftime('%d-%b-%y')
ready_df = data.resample('W').mean()
ready_df.plot(figsize=(15, 10), subplots=True, layout=(3, 3),
title=f'{date} - England case rate per 100,000 by age'
+ ' (weekly)')
plt.style.use('ggplot')
plt.show()
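Each API row carries the demographics as a list of dicts; the per-row pivot used above can be shown on one synthetic record:

```python
import pandas as pd

record = [{'age': '00_04', 'rollingRate': 1.5},
          {'age': '05_09', 'rollingRate': 2.5}]
tmp = pd.DataFrame(record)
tmp['date'] = '2021-01-01'
# One wide row per date, one column per age band.
wide = tmp.pivot(values='rollingRate', columns='age', index='date')
print(list(wide.columns))  # ['00_04', '05_09']
```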
def vaccine_demographics(df):
"""Plot of the age demographics of third vaccine uptake across England.
Args:
df ([DataFrame]): this must be the dataframe produced by the
get_regional_data method from the CovidData class
Returns:
Plot of cumulative third vaccination numbers broken down by age.
"""
validate_input(df)
df_list = df.loc[:, ['vac_demographics', 'date']]
age_df = []
for i in range(df_list.shape[0]):
if df_list.iloc[i, 0]:
temp_df = pd.DataFrame(df_list.iloc[i, 0])
temp_df['date'] = df_list.iloc[i, 1]
            temp_df = temp_df.pivot(
                values='cumVaccinationThirdInjectionUptakeByVaccinationDatePercentage',
                columns='age', index='date')
age_df.append(temp_df)
data = pd.concat(age_df)
data.index = pd.to_datetime(data.index)
date = data.index[0].strftime('%d-%b-%y')
ready_df = data.resample('W').mean()
ready_df.plot(figsize=(15, 10), subplots=True, layout=(6, 3),
title=f'{date} - England vaccine booster uptake (%) by age'
+ ' (weekly)')
plt.style.use('ggplot')
plt.show()
def death_demographics(df):
"""Plot of the age demographics of rate of deaths across England.
Args:
df (DataFrame): this must be the dataframe produced by the
get_regional_data method from the CovidData class
Returns:
Plot of death rate per 100,000 broken down by age.
"""
validate_input(df)
df_list = df.loc[:, ['death_Demographics', 'date']]
age_df = []
for i in range(df_list.shape[0]):
if df_list.iloc[i, 0]:
temp_df = pd.DataFrame(df_list.iloc[i, 0])
temp_df['date'] = df_list.iloc[i, 1]
temp_df = temp_df.pivot(values='rollingRate',
columns='age', index='date')
age_df.append(temp_df)
data = pd.concat(age_df)
data.index = pd.to_datetime(data.index)
data = \
data.assign(under_15=(data['00_04']+data['05_09']+data['10_14'])/3,
age_15_29=(data['15_19']+data['20_24']+data['25_29'])/3,
age_30_39=(data['30_34']+data['35_39'])/2,
age_40_49=(data['40_44']+data['45_49'])/2,
age_50_59=(data['50_54']+data['55_59'])/2)
data.drop(columns=['00_04', '00_59', '05_09', '10_14', '15_19', '20_24',
'25_29', '30_34', '35_39', '40_44', '45_49', '50_54',
'55_59', '60_64', '65_69', '70_74', '75_79', '80_84',
'85_89', '90+'], inplace=True)
date = data.index[0].strftime('%d-%b-%y')
ready_df = data.resample('W').mean()
ready_df.plot(figsize=(15, 10), subplots=True, layout=(3, 3),
title=f'{date} - England death rate per 100,000 by age'
+ ' (weekly)')
plt.style.use('ggplot')
plt.show()
def daily_deaths(df, pan_duration=pan_duration, save=False):
"""Plot number of people died per day within 28 days of 1st +ve test.
COVID-19 deaths over time, from the start of the pandemic March 2020.
Args:
df (DataFrame): requires data from get_uk_data method
pan_duration (function, optional): use pre specified pan_duration.
Defaults to pan_duration.
save (bool, optional): [description]. Defaults to False.
Returns:
Matplotlib plot, styled using matplotlib template 'ggplot'
"""
daily_deaths = df['death_dailyDeaths'].to_list()
date = df['date'].to_list()
# Find the date of the highest number of daily deaths
high, arg_high = max(daily_deaths), daily_deaths.index(max(daily_deaths))
high_date = date[arg_high].strftime('%d %b %Y')
# Compute the pandemic duration in days
duration = pan_duration(date=date)
# Create matplotlib figure and specify size
fig = plt.figure(figsize=(12, 10))
plt.style.use('ggplot')
ax = fig.add_subplot()
# Plot variables
ax.plot(date, daily_deaths)
# Style and label plot
ax.set_xlabel('Date')
ax.set_ylabel('Daily deaths')
ax.fill_between(date, daily_deaths,
alpha=0.3)
ax.set_title('Deaths within 28 days of positive test (UK)',
fontsize=18)
at = AnchoredText(f"Most recent daily deaths\n{daily_deaths[0]:,.0f}\
\nMax daily deaths\n{high:,.0f}: {high_date}\
\nPandemic duration\n{duration} days",
prop=dict(size=16), frameon=True, loc='upper left')
at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2")
ax.add_artist(at)
ax.annotate('Source: gov.uk https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, 0.0175), xycoords='figure fraction',
fontsize=12, color='#555555')
if save:
plt.savefig(f"casenumbers{date[0].strftime('%Y-%m-%d')}")
plt.show()
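`pan_duration` itself is defined elsewhere in this module; judging from its call sites (the date list is newest-first), it evidently returns the number of days the data spans. A hypothetical stand-in with that behaviour:

```python
import datetime

def pan_duration(date):
    """Days spanned by a newest-first list of dates.

    Hypothetical stand-in: the real pan_duration lives elsewhere in
    this module; this only mirrors how it is called above.
    """
    return abs((date[0] - date[-1]).days)

dates = [datetime.date(2021, 11, 1),
         datetime.date(2021, 10, 31),
         datetime.date(2020, 3, 1)]
print(pan_duration(date=dates))
```

Using `abs()` keeps the helper correct whether the list is sorted newest-first (as here) or oldest-first.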
def cumulative_deaths(df, pan_duration=pan_duration, save=False):
"""Plot cum number of people who died within 28 days of +ve test.
Total COVID-19 deaths over time, from the start of the
pandemic March 2020.
Args:
df (DataFrame): containing covid data retrieved from CovidData
pan_duration (function, optional): Defaults to pan_duration.
save (bool, optional): True to save plot. Defaults to False.
Returns:
Matplotlib plot, styled using matplotlib template 'ggplot'
"""
df = df.fillna(0)
cum_deaths = df["death_cumulativeDeaths"].to_list()
date = df['date'].to_list()
# Find the date of the highest cumulative death count
high, arg_high = max(cum_deaths), cum_deaths.index(max(cum_deaths))
high_date = date[arg_high].strftime('%d %b %Y')
# Compute the pandemic duration in days
duration = pan_duration(date=date)
# Create matplotlib figure and specify size
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot()
# Plot variables
ax.plot(date, cum_deaths)
# Style and label plot
ax.set_xlabel('Date')
ax.set_ylabel('Cumulative deaths')
ax.fill_between(date, cum_deaths,
alpha=0.3)
ax.set_title('Cumulative deaths within 28 days of positive test (UK)',
fontsize=18)
at = AnchoredText(f"Last cumulative deaths\n{high:,.0f}: {high_date}\
\nPandemic duration\n{duration} days",
prop=dict(size=16), frameon=True, loc='upper left')
at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2")
ax.add_artist(at)
ax.annotate('Source: gov.uk https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, 0.0175), xycoords='figure fraction',
fontsize=12, color='#555555')
plt.style.use('ggplot')
if save:
plt.savefig(f"casenumbers{date[0].strftime('%Y-%m-%d')}")
plt.show()
def regional_plot_death_rate(save=False):
"""Plot regional deaths rate per 100,000 on a map of the UK.
Function collects data using CovidData get_regional_data method.
Args:
save (bool, optional): True will save plot. Defaults to False.
Returns:
Plot of regional case rate on map of UK
"""
# Collect data
regions = CovidData().get_regional_data()
scotland = CovidData(nation='scotland').get_national_data()
wales = CovidData(nation='wales').get_national_data()
ni = CovidData(nation='northern ireland').get_national_data()
# Set date to plot
date_selector = regions['date'][7]
regions_date = regions.loc[regions['date'] == date_selector]
scotland_date = scotland.loc[scotland['date'] == date_selector,
['date', 'name', 'death_newDeathRate']]
wales_date = wales.loc[wales['date'] == date_selector,
['date', 'name', 'death_newDeathRate']]
ni_date = ni.loc[ni['date'] == date_selector,
['date', 'name', 'death_newDeathRate']]
# Combine regional data into single dataframe
final_df = pd.concat([regions_date, scotland_date, wales_date, ni_date],
axis=0)
file_path = my_path() / 'NUTS_Level_1_(January_2018)_Boundaries.shp'
# Check required file exists
try:
# Read shape file
geo_df = gpd.read_file(file_path)
except Exception:
# Abort cleanly rather than raising a NameError further down
print('Ensure you have imported the geo_data sub-folder')
return
geo_df['nuts118nm'] = \
geo_df['nuts118nm'].replace(['North East (England)',
'North West (England)',
'East Midlands (England)',
'West Midlands (England)',
'South East (England)',
'South West (England)'],
['North East', 'North West',
'East Midlands', 'West Midlands',
'South East', 'South West'])
merged = geo_df.merge(final_df, how='left', left_on="nuts118nm",
right_on="name")
# Column to plot
feature = 'death_newDeathRate'
# Plot range
feature_min, feature_max = merged['death_newDeathRate'].min(),\
merged['death_newDeathRate'].max()
# Create plot
fig, ax = plt.subplots(1, figsize=(12, 10))
# Set style and labels
ax.axis('off')
ax.set_title('Regional rate per 100,000 (new deaths)',
fontdict={'fontsize': '20', 'fontweight': '3'})
ax.annotate('Source: gov.uk \
https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, .05), xycoords='figure fraction',
fontsize=12, color='#555555')
# Create colorbar
sm = plt.cm.ScalarMappable(cmap='Reds',
norm=plt.Normalize(vmin=feature_min,
vmax=feature_max))
fig.colorbar(sm)
# Create map
merged.plot(column=feature, cmap='Reds', linewidth=0.8, ax=ax,
edgecolor='0.8')
plt.show()
if save:
image = merged.plot(column=feature, cmap='Reds', linewidth=0.8,
ax=ax, edgecolor='0.8')
image.figure.savefig(f'deathrates{date_selector}')
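The merge above only works because the NUTS boundary names ('North East (England)') are first normalized to the API's region names ('North East'). The function does this with an explicit `Series.replace` list; the same normalization can be sketched more compactly with a regex (illustrative names, not the full boundary set):

```python
import pandas as pd

# Boundary-file style names; Scotland has no '(England)' suffix to strip
raw = pd.Series(["North East (England)", "Scotland", "South West (England)"])
normalized = raw.str.replace(r" \(England\)$", "", regex=True)
print(normalized.tolist())
```

Either approach leaves non-English nations untouched, so the left join on `nuts118nm` matches every region the API reports.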
def regional_deaths_demo(save=False):
"""Plot number of deaths in the UK.
Plot by age category (>60 , <60). Function collects data using
CovidData get_regional_data method.
Args:
save (bool, optional): True will save plot. Defaults to False.
Returns:
Plot of regional deaths by age category (UK)
"""
CovidDataE = CovidData("england")
regional = CovidDataE.get_regional_data()
regional = \
regional.drop(regional.columns.difference(["date",
"death_Demographics"]), axis=1)
# remove empty lists in the 'death_Demographics' column
regional = regional[regional["death_Demographics"].astype(bool)]
# transform the regional dataframe to have 'age_categories' as columns
# with 'deaths' values and 'date' as rows
age_df = []
for i in range(regional.shape[0]):
if regional.iloc[i, 1]:
temp_df = pd.DataFrame(regional.iloc[i, 1])
temp_df['date'] = regional.iloc[i, 0]
temp_df = temp_df.pivot(values='deaths', columns=['age'],
index='date')
age_df.append(temp_df)
final_death_data = pd.concat(age_df)
# create a dataframe with columns 'age category' and 'number of deaths'
age_cat = ['00_04', '00_59', '05_09', '10_14', '15_19', '20_24', '25_29',
'30_34', '35_39', '40_44', '45_49', '50_54', '55_59', '60+',
'60_64', '65_69', '70_74', '75_79', '80_84', '85_89', '90+']
deaths = []
for ele in age_cat:
x = final_death_data[ele].sum()
deaths.append(x)
deaths_df = pd.DataFrame(list(zip(age_cat, deaths)),
columns=['age category', 'number of deaths'])
# group age categories to have only <60 old years and 60+
cat_1 = deaths_df.loc[deaths_df['age category'] == '00_59']
cat_2 = deaths_df.loc[deaths_df['age category'] == '60+']
below_60 = cat_1['number of deaths'].sum()
above_60 = cat_2['number of deaths'].sum()
lst1 = ['<60', '60+']
lst2 = [below_60, above_60]
final_deaths_age_cat = pd.DataFrame(list(zip(lst1, lst2)),
columns=['age category',
'number of deaths'])
# Plot a bar chart of number of deaths vs age category
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot()
# Plot variables
ax.bar(final_deaths_age_cat['age category'],
final_deaths_age_cat['number of deaths'])
# Style and label plot
ax.set_xlabel('Age category')
ax.set_ylabel('Number of deaths')
ax.fill_between(final_deaths_age_cat['age category'],
final_deaths_age_cat['number of deaths'],
alpha=0.3)
ax.set_title('Number of deaths per age category (England)',
fontsize=18)
at = AnchoredText(f"Number of deaths:\
\nAge <60: {below_60}\
\nAge >60: {above_60}",
prop=dict(size=16), frameon=True, loc='upper left')
at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2")
ax.add_artist(at)
ax.annotate('Source: gov.uk https://api.coronavirus.data.gov.uk/v1/data',
xy=(0.25, 0.0175), xycoords='figure fraction',
fontsize=12, color='#555555')
plt.style.use('ggplot')
plt.show()
if save:
date = dt.now()
plt.savefig(f"casenumbers{date.strftime('%Y-%m-%d')}")
def collect_hosp_data(country='england'):
"""Collect data for hosp and vac functions.
Args:
country (str, optional): Select country data. Defaults to 'england'.
Returns:
DataFrame: data in correct format for hosp and vac functions
"""
if country == 'england':
hosp_data = CovidData("england").get_national_data()
hosp_data["date"] = hosp_data["date"].astype('datetime64[ns]')
hosp_data = hosp_data.fillna(0)
return hosp_data
else:
hosp_uk = CovidData("england").get_uk_data()
hosp_uk["date"] = hosp_uk["date"].astype('datetime64[ns]')
hosp_uk = hosp_uk.fillna(0)
return hosp_uk
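The two branches of `collect_hosp_data` differ only in which CovidData method they call; the shared post-processing (datetime coercion plus NaN fill) can be illustrated on a toy frame:

```python
import pandas as pd

# Toy frame standing in for the CovidData output (illustrative values)
df = pd.DataFrame({"date": ["2021-01-02", "2021-01-01"],
                   "hosp_hospitalCases": [100.0, None]})
df["date"] = df["date"].astype("datetime64[ns]")
df = df.fillna(0)
print(df.dtypes["date"], df["hosp_hospitalCases"].tolist())
```

Filling missing counts with 0 keeps the downstream pivots and heatmaps free of NaN gaps, at the cost of plotting genuinely missing days as zero.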
def hosp_cases_plot():
"""Heatmap for the the daily number of hospital cases (England).
Args:
No args required, collects own data.
Returns :
Seaborn heatmap plot for the number of hospital cases
per day of the pandemic.
"""
hosp_data = collect_hosp_data()
hosp_cases_col = ["date", "hosp_hospitalCases"]
hosp_data1 = hosp_data.loc[:, hosp_cases_col]
hosp_data1.loc[:, ["Day"]] = hosp_data1["date"].apply(lambda x: x.day)
hosp_data1["date"] = hosp_data1.date.dt.strftime("%Y-%m")
newpivot = hosp_data1.pivot_table("hosp_hospitalCases", index="date",
columns="Day")
cmap = sns.cm.rocket_r
plt.figure(figsize=(16, 9))
hm2 = sns.heatmap(newpivot, cmap=cmap)
hm2.set_title("Heatmap of the daily number of hospital cases (England)",
fontsize=14)
hm2.set_xlabel("Day", fontsize=12)
hm2.set_ylabel("Month and Year", fontsize=12)
def hosp_newadmissions_plot():
"""Heatmap for the the daily number of new hospital admissions (England).
Args:
No args required, collects own data.
Returns :
Seaborn heatmap plot for the number of new hospital admissions per day
of the pandemic.
"""
hosp_data = collect_hosp_data()
hosp_cases_col = ["date", "hosp_newAdmissions"]
hosp_data2 = hosp_data.loc[:, hosp_cases_col]
hosp_data2["Day"] = hosp_data2.date.apply(lambda x: x.day)
hosp_data2["date"] = hosp_data2.date.dt.strftime("%Y-%m")
newpivot = hosp_data2.pivot_table("hosp_newAdmissions", index="date",
columns="Day")
cmap = sns.cm.rocket_r
plt.figure(figsize=(16, 9))
hm1 = sns.heatmap(newpivot, cmap=cmap)
hm1.set_title("Heatmap of the daily number of new hospital admissions"
+ " (England)", fontsize=14)
hm1.set_xlabel("Day", fontsize=12)
hm1.set_ylabel("Month and Year", fontsize=12)
def hosp_newadmissionschange_plot():
"""Change in hospital admissions (England).
Plot difference between the number of new hospital admissions
during the latest 7-day period and the previous non-overlapping week.
Args:
No args required, collects own data.
Returns :
Lineplot of this difference over the months.
"""
hosp_data = collect_hosp_data()
hosp_cases_col = ["date", "hosp_newAdmissionsChange"]
hosp_data3 = hosp_data.loc[:, hosp_cases_col]
x = hosp_data3["date"].dt.strftime("%Y-%m")
y = hosp_data3["hosp_newAdmissionsChange"]
fig, ax = plt.subplots(1, 1, figsize=(20, 3))
sns.lineplot(x=x, y=y, color="g")
ax.set_title("Daily new admissions change (England)", fontsize=14)
ax.invert_xaxis()
ax.set_xlabel("Date", fontsize=12)
ax.set_ylabel("New Admissions Change", fontsize=12)
def hosp_occupiedbeds_plot():
"""Plot daily number of COVID-19 patients in mechanical ventilator beds.
Plots information for England.
Args:
No args required, collects own data.
Returns :
- Lineplot of this difference over the months.
"""
hosp_data = collect_hosp_data()
hosp_cases_col = ["date", "hosp_covidOccupiedMVBeds"]
hosp_data4 = hosp_data.loc[:, hosp_cases_col]
fig, ax = plt.subplots(1, 1, figsize=(20, 3))
sns.lineplot(x=hosp_data4["date"].dt.strftime("%Y-%m"),
y=hosp_data4["hosp_covidOccupiedMVBeds"], ax=ax, color="b")
ax.set_title("Daily number of COVID occupied Mechanical Ventilator beds"
+ " (England)", fontsize=14)
ax.invert_xaxis()
ax.set_xlabel("Date", fontsize=12)
ax.set_ylabel("Number of occupied MV beds", fontsize=12)
def hosp_casesuk_plot():
"""Heatmap for the the daily number of hospital cases in UK.
Args:
No args required, collects own data.
Returns :
Seaborn heatmap plot for the number of hospital cases
per day of the pandemic.
"""
hosp_uk = collect_hosp_data(country='uk')
hosp_cases_col = ["date", "hosp_hospitalCases"]
hosp_data1 = hosp_uk.loc[:, hosp_cases_col]
hosp_data1["Day"] = hosp_data1["date"].apply(lambda x: x.day)
hosp_data1["date"] = hosp_data1.date.dt.strftime("%Y-%m")
newpivot = hosp_data1.pivot_table("hosp_hospitalCases", index="date",
columns="Day")
cmap = sns.cm.rocket_r
plt.figure(figsize=(16, 9))
hm2 = sns.heatmap(newpivot, cmap=cmap)
hm2.set_title("Heatmap of the daily number of hospital cases in the UK",
fontsize=14)
hm2.set_xlabel("Day", fontsize=12)
hm2.set_ylabel("Month and Year", fontsize=12)
def hosp_newadmissionsuk_plot():
"""Heatmap for the the daily number of new hospital admissions (UK).
Args:
No args required, collects own data.
Returns :
Seaborn heatmap plot for the number of new hospital admissions per day
of the pandemic (UK).
"""
hosp_uk = collect_hosp_data(country='uk')
hosp_cases_col = ["date", "hosp_newAdmissions"]
hosp_data2 = hosp_uk.loc[:, hosp_cases_col]
hosp_data2["Day"] = hosp_data2.date.apply(lambda x: x.day)
hosp_data2["date"] = hosp_data2.date.dt.strftime("%Y-%m")
newpivot = hosp_data2.pivot_table("hosp_newAdmissions", index="date",
columns="Day")
cmap = sns.cm.rocket_r
plt.figure(figsize=(16, 9))
hm1 = sns.heatmap(newpivot, cmap=cmap)
hm1.set_title("Heatmap of the daily number of new hospital admissions"
+ " in the UK", fontsize=14)
hm1.set_xlabel("Day", fontsize=12)
hm1.set_ylabel("Month and Year", fontsize=12)
def hosp_occupiedbedsuk_plot():
"""Plot daily number of COVID-19 patients in mechanical ventilator beds.
Plots information for UK.
Args:
No args required, collects own data.
Returns :
- Lineplot of this difference over the months.
"""
hosp_uk = collect_hosp_data(country='uk')
hosp_cases_col = ["date", "hosp_covidOccupiedMVBeds"]
hosp_data4 = hosp_uk.loc[:, hosp_cases_col]
fig, ax = plt.subplots(1, 1, figsize=(20, 3))
sns.lineplot(x=hosp_data4["date"].dt.strftime("%Y-%m"),
y=hosp_data4["hosp_covidOccupiedMVBeds"], ax=ax, color="b")
ax.set_title("Daily number of COVID occupied Mechanical Ventilator"
+ " beds in the UK", fontsize=14)
ax.invert_xaxis()
ax.set_xlabel("Date", fontsize=12)
ax.set_ylabel("Number of occupied MV beds", fontsize=12)
def vaccine_percentage(df):
"""Plot the percentage of the vaccinated population over time.
Args:
df (DataFrame): Requires data returned by get_uk_data
or get_national_data methods
Returns:
Plot of total percentage of population vaccinated
"""
df['date'] = df['date'].astype('datetime64[ns]')
plt.figure(figsize=(14, 7))
plot1 = sns.lineplot(x='date', y='vac_total_perc', data=df)
plt.ylim(0, 100)
plot1.set_xlabel("Covid pandemic, up to date", fontsize=12)
plot1.set_ylabel("Percentage", fontsize=12)
plot1.set_title('Percentage of the vaccinated population over time',
fontsize=14)
def vaccine_doses_plot(df):
"""Pllot both the first and second doses of vaccines.
Daily information.
Args:
df (DataFrame): Requires data returned by get_national_data
Returns:
Plots of first and second vaccine doses since start of pandemic
records
"""
df['date'] = df['date'].astype('datetime64[ns]')
keep_col = ['date', 'vac_first_dose', 'vac_second_dose']
vaccines_melted = df[keep_col]
vaccines_melted = vaccines_melted.melt('date', var_name="vaccine_doses",
value_name='count')
plt.figure(figsize=(14, 7))
plot = sns.lineplot(x='date', y='count', hue='vaccine_doses',
data=vaccines_melted)
plt.grid()
plt.ylim(0, 50000000)
plot.set_ylabel("count", fontsize=12)
plot.set_xlabel("Covid pandemic, up to date", fontsize=12)
plot.set_title('Daily amount of first and second doses' +
' of vaccination administered', fontsize=14)
# use hue = column to categorise the data
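`melt` turns the wide two-dose frame into the long format that seaborn's `hue` argument needs, one row per (date, dose) pair:

```python
import pandas as pd

wide = pd.DataFrame({
    "date": ["2021-01-01", "2021-01-02"],
    "vac_first_dose": [100, 150],    # illustrative counts, not real data
    "vac_second_dose": [40, 70],
})
# One row per (date, dose) combination
tidy = wide.melt("date", var_name="vaccine_doses", value_name="count")
print(tidy)
```

With `hue='vaccine_doses'`, seaborn then draws one line per dose from the single `count` column.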
def first_vaccination_hm(df):
"""Plot a heatmap of the first vaccine dose (daily).
Args:
df (DataFrame): Requires data returned by get_national_data
Returns:
Heatmap of first vaccine doses over time
"""
df['date'] = df['date'].astype('datetime64[ns]')
df = df.fillna(0)
keep_col_hm = ['date', 'vac_first_dose']
vaccines_hm = df.loc[:, keep_col_hm]
vaccines_hm["Day"] = vaccines_hm.date.apply(lambda x: x.strftime("%d"))
vaccines_hm.pivot_table(index="Day", columns="date",
values="vac_first_dose")
vaccines_hm.date = vaccines_hm.date.dt.strftime('%Y-%m')
keep_colu = ['date', 'Day', 'vac_first_dose']
vaccines_hm = vaccines_hm[keep_colu]
pivoted = vaccines_hm.pivot(columns='Day',
index='date',
values='vac_first_dose')
pivoted = pivoted.fillna(0)
plt.figure(figsize=(16, 9))
cmap = sns.cm.rocket_r
plot_hm1 = sns.heatmap(pivoted, cmap=cmap)
plot_hm1.set_title('Heatmap of the first vaccination dose' +
' administered daily', fontsize=14)
plot_hm1.set_ylabel('Year and month', fontsize=12)
def second_vaccination_hm(df):
"""Plot a heatmap of the second vaccine dose (daily).
Args:
df (DataFrame): Requires data returned by get_national_data
Returns:
Heatmap of second vaccine doses over time
"""
df['date'] = df['date'].astype('datetime64[ns]')
df = df.fillna(0)
keep_col_hm = ['date', 'vac_second_dose']
vaccines_hm = df.loc[:, keep_col_hm]
vaccines_hm["Day"] = vaccines_hm.date.apply(lambda x: x.strftime("%d"))
vaccines_hm.pivot_table(index="Day", columns="date",
values="vac_second_dose")
vaccines_hm.date = vaccines_hm.date.dt.strftime('%Y-%m')
keep_colu = ['date', 'Day', 'vac_second_dose']
vaccines_hm = vaccines_hm[keep_colu]
pivoted = vaccines_hm.pivot(columns='Day',
index='date',
values='vac_second_dose')
pivoted = pivoted.fillna(0)
plt.figure(figsize=(16, 9))
cmap = sns.cm.rocket_r
plot_hm2 = sns.heatmap(pivoted, cmap=cmap)
plot_hm2.set_title('Heatmap of the second vaccination dose' +
' administered daily', fontsize=14)
plot_hm2.set_ylabel('Year and month', fontsize=12)
def vaccines_across_regions(vaccines2):
"""Plot graph of the vaccination uptake percentage by English regions.
Args:
vaccines2 (DataFrame): data from get_regional_data required
Returns:
plot of vaccine uptake by regions in England
"""
keep_fd = ['date', 'name', 'vac_firstDose']
vaccines2['date'] = vaccines2['date'].astype('datetime64[ns]')
vaccines_fd = vaccines2.loc[:, keep_fd]
vaccines_fd.fillna(0, inplace=True)
plt.figure(figsize=(16, 9))
plot_fd = sns.lineplot(x='date', y='vac_firstDose', hue='name',
data=vaccines_fd)
plt.ylim(0, 100)
plt.grid()
plot_fd.set_ylabel("percentage", fontsize=12)
plot_fd.set_xlabel("Covid pandemic, up to date", fontsize=12)
plot_fd.set_title('Vaccination uptake by region', fontsize=14)
# File: src/allyoucanuse/etc/hashing.py (repo: kunlubrain/allyoucanuse, MIT license)
from typing import Union, Iterable
import hashlib
def hash_id(seeds: Union[str, Iterable], n: int = 32) -> str:
    """Build a stable hex ID from the seed string(s).

    SHA-256 is used rather than Python's built-in hash(), which is
    salted per process and therefore not reproducible between runs.
    """
    h = hashlib.sha256(''.join(seeds).encode('utf-8')).hexdigest()[:n]
    return h
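A quick check of the idea behind `hash_id`: note that `hashlib.sha256` accepts only bytes, so the joined seed string must be encoded before hashing. `short_id` below is an illustrative stand-in, not part of the library:

```python
import hashlib

def short_id(seeds, n=32):
    # Same idea as hash_id above: SHA-256 of the joined seeds,
    # truncated to n hex characters. sha256 needs bytes, hence encode().
    return hashlib.sha256("".join(seeds).encode("utf-8")).hexdigest()[:n]

a = short_id(["user", "42"])
b = short_id(["user", "42"])
print(a, len(a))
```

Because SHA-256 is deterministic, the same seeds always produce the same ID, which is exactly the property the per-process-salted built-in `hash()` lacks.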
# File: src/constants.py (repo: argho28/Translation, MIT license)
MODEL_PATH = "./model/model.pt"
# File: testspeed/__init__.py (repo: sc-1123/testspeed, MIT license)
name = "testspeed"
from time import time
from sys import argv
from os import system
tic = time()
system('python %s' % (argv[1]))
toc = time()
print('used %s seconds' % (toc - tic))
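The snippet above shells out with `os.system` and interpolates `argv[1]` unquoted, which breaks on paths containing spaces. A slightly safer sketch of the same idea uses `subprocess` with an argument list and `time.perf_counter` for the measurement (`time_command` is an illustrative name, not part of the package):

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command list and return its elapsed wall-clock seconds."""
    tic = time.perf_counter()
    subprocess.run(cmd, check=False)
    return time.perf_counter() - tic

elapsed = time_command([sys.executable, "-c", "pass"])
print('used %s seconds' % elapsed)
```

Passing a list avoids shell quoting entirely, and `sys.executable` guarantees the script is timed under the same interpreter that runs the timer.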
# File: renderer/console.py (repo: deeredman1991/CreepSmash, MIT license)
import tools.libtcod.libtcodpy as libtcod
class Console(object):
def __init__(self, x=[0,0], y=[0,0], parent_console=None):
self._settings = {
"x": x,
"y": y,
"Parent_Console": parent_console
}
self._settings["Width"] = int(max(self._settings["x"][1], self._settings["x"][0]) - min(self._settings["x"][1], self._settings["x"][0]))
self._settings["Height"] = int(max(self._settings["y"][1], self._settings["y"][0]) - min(self._settings["y"][1], self._settings["y"][0]))
self._settings["Console"] = libtcod.console_new(self._settings["Width"], self._settings["Height"])
@property
def x(self):
return min(self._settings["x"][1], self._settings["x"][0])
@property
def y(self):
return min(self._settings["y"][1], self._settings["y"][0])
@property
def height(self):
return self._settings["Height"]
@property
def width(self):
return self._settings["Width"]
@property
def console(self):
return self._settings["Console"]
@property
def parent_console(self):
return self._settings["Parent_Console"]
def blit(self, destination_console=None, foregroundAlpha=1.0, backgroundAlpha=1.0):
"""Blit this console onto destination_console (defaults to the parent
console's console) with the given normalized alpha transparencies."""
destination_console = destination_console or self.parent_console.console
libtcod.console_blit(self._settings["Console"], 0, 0, self.width, self.height, destination_console, self.x, self.y, foregroundAlpha, backgroundAlpha)
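`Console.__init__` accepts its x and y extents as two-element lists in either order and derives the width/height from `max(...) - min(...)`. That normalization is easy to check in isolation; `extent` is a hypothetical helper mirroring the arithmetic, no libtcod required:

```python
def extent(pair):
    # Width/height of a [start, end] pair given in either order,
    # mirroring the max(...) - min(...) arithmetic in Console.__init__.
    return int(max(pair) - min(pair))

w = extent([80, 10])   # reversed order still yields a positive width
h = extent([5, 45])
print(w, h)
```

Accepting the pair in either order means callers never have to pre-sort their coordinates before constructing a `Console`.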
# File: core/migrations/0001_initial.py (repo: vlafranca/stream_framework_example, MIT license)
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Item'
db.create_table(u'core_item', (
(u'id', self.gf('django.db.models.fields.AutoField')
(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')
(to=orm['auth.User'])),
('image', self.gf('django.db.models.fields.files.ImageField')
(max_length=100)),
('source_url', self.gf('django.db.models.fields.TextField')()),
('message', self.gf('django.db.models.fields.TextField')
(null=True, blank=True)),
))
db.send_create_signal(u'core', ['Item'])
# Adding model 'Board'
db.create_table(u'core_board', (
(u'id', self.gf('django.db.models.fields.AutoField')
(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')
(to=orm['auth.User'])),
('name', self.gf('django.db.models.fields.CharField')
(max_length=255)),
('description', self.gf('django.db.models.fields.TextField')
(null=True, blank=True)),
('slug', self.gf('django.db.models.fields.SlugField')
(max_length=50)),
))
db.send_create_signal(u'core', ['Board'])
# Adding model 'Pin'
db.create_table(u'core_pin', (
(u'id', self.gf('django.db.models.fields.AutoField')
(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')
(to=orm['auth.User'])),
('item', self.gf('django.db.models.fields.related.ForeignKey')
(to=orm['core.Item'])),
('board', self.gf('django.db.models.fields.related.ForeignKey')
(to=orm['core.Board'])),
('influencer', self.gf('django.db.models.fields.related.ForeignKey')
(related_name='influenced_pins', to=orm['auth.User'])),
('message', self.gf('django.db.models.fields.TextField')
(null=True, blank=True)),
))
db.send_create_signal(u'core', ['Pin'])
# Adding model 'Follow'
db.create_table(u'core_follow', (
(u'id', self.gf('django.db.models.fields.AutoField')
(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')
             (related_name='following_set', to=orm['auth.User'])),
            ('target', self.gf('django.db.models.fields.related.ForeignKey')(related_name='follower_set', to=orm['auth.User'])),
            ('deleted_at', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
        ))
        db.send_create_signal(u'core', ['Follow'])

    def backwards(self, orm):
        # Deleting model 'Item'
        db.delete_table(u'core_item')

        # Deleting model 'Board'
        db.delete_table(u'core_board')

        # Deleting model 'Pin'
        db.delete_table(u'core_pin')

        # Deleting model 'Follow'
        db.delete_table(u'core_follow')

    models = {
        u'auth.group': {
            'Meta': {'object_name': 'Group'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
            'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
        },
        u'auth.permission': {
            'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
            'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
        },
        u'auth.user': {
            'Meta': {'object_name': 'User'},
            'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
            'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
            'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
            'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
            'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
            'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
            'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
            'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
            'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
            'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
            'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
        },
        u'contenttypes.contenttype': {
            'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
            'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
        },
        u'core.board': {
            'Meta': {'object_name': 'Board'},
            'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
            'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
        },
        u'core.follow': {
            'Meta': {'object_name': 'Follow'},
            'deleted_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'target': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'follower_set'", 'to': u"orm['auth.User']"}),
            'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'following_set'", 'to': u"orm['auth.User']"})
        },
        u'core.item': {
            'Meta': {'object_name': 'Item'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'image': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'}),
            'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
            'source_url': ('django.db.models.fields.TextField', [], {}),
            'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
        },
        u'core.pin': {
            'Meta': {'object_name': 'Pin'},
            'board': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['core.Board']"}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'influencer': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'influenced_pins'", 'to': u"orm['auth.User']"}),
            'item': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['core.Item']"}),
            'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
            'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
        }
    }

    complete_apps = ['core']
| 56.509677 | 187 | 0.553488 | 965 | 8,759 | 4.918135 | 0.119171 | 0.109566 | 0.188791 | 0.269701 | 0.761694 | 0.714496 | 0.693847 | 0.655499 | 0.611252 | 0.5059 | 0 | 0.006595 | 0.22103 | 8,759 | 154 | 188 | 56.876623 | 0.688993 | 0.022034 | 0 | 0.285714 | 0 | 0 | 0.50713 | 0.299205 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015038 | false | 0.007519 | 0.030075 | 0 | 0.067669 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c2edaf37adb691a52b1dfd785bf639490dc75f3a | 3,309 | py | Python | desicos/abaqus/conecyl/__init__.py | saullocastro/desicos | 922db8ac4fb0fb4d09df18ce2a14011f207f6fa8 | [
"BSD-3-Clause"
] | 1 | 2020-10-22T22:15:24.000Z | 2020-10-22T22:15:24.000Z | desicos/abaqus/conecyl/__init__.py | saullocastro/desicos | 922db8ac4fb0fb4d09df18ce2a14011f207f6fa8 | [
"BSD-3-Clause"
] | 1 | 2020-10-09T12:42:02.000Z | 2020-10-09T12:42:02.000Z | desicos/abaqus/conecyl/__init__.py | saullocastro/desicos | 922db8ac4fb0fb4d09df18ce2a14011f207f6fa8 | [
"BSD-3-Clause"
] | 2 | 2020-07-14T07:45:31.000Z | 2020-12-29T00:22:41.000Z | r"""
===================================================
ConeCyl (:mod:`desicos.abaqus.conecyl`)
===================================================
.. currentmodule:: desicos.abaqus.conecyl
Cone/Cylinder Model
=====================
Figure 1 provides a schematic view of the typical model created using this
module. Two coordinate systems are defined: one rectangular with axes `X_1`,
`X_2`, `X_3` and a cylindrical with axes `R`, `Th`, `Z`.
.. _figure_conecyl:
.. figure:: ../../../figures/modules/abaqus/conecyl/conecyl_model.png
:width: 400
Figure 1: Cone/Cylinder Model
The complexity of the actual model created in Abaqus goes beyond the
simplification above.
Boundary Conditions
===================
Based on the coordinate systems shown in Figure 1 the following boundary
condition parameters can be controlled:
- constraint for radial and circumferential displacement (`u_R` and `v`) at
the bottom and top edges
- simply supported or clamped bottom and top edges, consisting of the
rotational constraint along the meridional coordinate, called `\phi_x`.
- use of resin rings as described in :ref:`the next section <resin_rings>`
- the use of distributed or concentrated load at the top edge will be
automatically determined depending on the attributes of the current
:class:`.ConeCyl` object
- application of shims at the top edge as detailed in
:meth:`.ImpConf.add_shim_top_edge`, following this example::
      from desicos.abaqus.conecyl import ConeCyl
      cc = ConeCyl()
      cc.from_DB('castro_2014_c02')
      cc.impconf.add_shim(thetadeg, thick, width)
- application of uneven top edges as detailed in
:meth:`.UnevenTopEdge.add_measured_u3s`, following this example::
      thetadegs = [0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5, 180.0,
                   202.5, 225.0, 247.5, 270.0, 292.5, 315.0, 337.5, 360.0]
      u3s = [0.0762, 0.0508, 0.1270, 0.0000, 0.0000, 0.0762, 0.2794, 0.1778,
             0.0000, 0.0000, 0.0762, 0.0000, 0.1016, 0.2032, 0.0381, 0.0000,
             0.0762]
      cc.impconf.add_measured_u3s_top_edge(thetadegs, u3s)
.. _resin_rings:
Resin Rings
===========
When resin rings are used the actual boundary condition will be determined by
the parameters defining the resin rings (cf. Figure 2), and therefore no clamped conditions
will be applied in the shell edges.
.. figure:: ../../../figures/modules/abaqus/conecyl/resin_rings.png
:width: 400
Figure 2: Resin Rings
Defining resin rings can be done following the example below, where each
attribute is detailed in the :class:`.ConeCyl` class description::
    from desicos.abaqus.conecyl import ConeCyl
    cc = ConeCyl()
    cc.from_DB('castro_2014_c02')
    cc.resin_add_BIR = False
    cc.resin_add_BOR = True
    cc.resin_add_TIR = False
    cc.resin_add_TOR = True
    cc.resin_E = 2454.5336
    cc.resin_nu = 0.3
    cc.resin_numel = 3
    cc.resin_bot_h = 25.4
    cc.resin_top_h = 25.4
    cc.resin_bir_w1 = 25.4
    cc.resin_bir_w2 = 25.4
    cc.resin_bor_w1 = 25.4
    cc.resin_bor_w2 = 25.4
    cc.resin_tir_w1 = 25.4
    cc.resin_tir_w2 = 25.4
    cc.resin_tor_w1 = 25.4
    cc.resin_tor_w2 = 25.4
The ConeCyl Class
=================
.. automodule:: desicos.abaqus.conecyl.conecyl
:members:
"""
from __future__ import absolute_import
from .conecyl import *
| 30.925234 | 91 | 0.676337 | 507 | 3,309 | 4.287968 | 0.370809 | 0.054738 | 0.020699 | 0.041398 | 0.168353 | 0.078197 | 0.078197 | 0.063477 | 0.063477 | 0.063477 | 0 | 0.083149 | 0.178604 | 3,309 | 106 | 92 | 31.216981 | 0.716703 | 0.977637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
c2f022d833125248ec963e921ce8841d3ad389bf | 1,230 | py | Python | rpython/jit/backend/ppc/regname.py | nanjekyejoannah/pypy | e80079fe13c29eda7b2a6b4cd4557051f975a2d9 | [
"Apache-2.0",
"OpenSSL"
] | 381 | 2018-08-18T03:37:22.000Z | 2022-02-06T23:57:36.000Z | rpython/jit/backend/ppc/regname.py | nanjekyejoannah/pypy | e80079fe13c29eda7b2a6b4cd4557051f975a2d9 | [
"Apache-2.0",
"OpenSSL"
] | 16 | 2018-09-22T18:12:47.000Z | 2022-02-22T20:03:59.000Z | rpython/jit/backend/ppc/regname.py | nanjekyejoannah/pypy | e80079fe13c29eda7b2a6b4cd4557051f975a2d9 | [
"Apache-2.0",
"OpenSSL"
] | 55 | 2015-08-16T02:41:30.000Z | 2022-03-20T20:33:35.000Z | class _R(int):
    def __repr__(self):
        return "r%s"%(super(_R, self).__repr__(),)
    __str__ = __repr__


class _F(int):
    def __repr__(self):
        return "fr%s"%(super(_F, self).__repr__(),)
    __str__ = __repr__


class _V(int):
    def __repr__(self):
        return "vr%s"%(super(_V, self).__repr__(),)
    __str__ = __repr__
r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, \
r13, r14, r15, r16, r17, r18, r19, r20, r21, r22, \
r23, r24, r25, r26, r27, r28, r29, r30, r31 = map(_R, range(32))
fr0, fr1, fr2, fr3, fr4, fr5, fr6, fr7, fr8, fr9, fr10, fr11, fr12, \
fr13, fr14, fr15, fr16, fr17, fr18, fr19, fr20, fr21, fr22, \
fr23, fr24, fr25, fr26, fr27, fr28, fr29, fr30, fr31 = map(_F, range(32))
vr0, vr1, vr2, vr3, vr4, vr5, vr6, vr7, vr8, vr9, vr10, vr11, vr12, vr13, \
vr14, vr15, vr16, vr17, vr18, vr19, vr20, vr21, vr22, vr23, vr24, vr25, \
vr26, vr27, vr28, vr29, vr30, vr31, vr32, vr33, vr34, vr35, vr36, vr37, \
vr38, vr39, vr40, vr41, vr42, vr43, vr44, vr45, vr46, vr47, vr48, \
vr49, vr50, vr51, vr52, vr53, vr54, vr55, vr56, vr57, vr58, vr59, vr60, \
vr61, vr62, vr63 = map(_V, range(64))
crf0, crf1, crf2, crf3, crf4, crf5, crf6, crf7 = range(8)
| 41 | 78 | 0.591057 | 195 | 1,230 | 3.435897 | 0.794872 | 0.026866 | 0.044776 | 0.062687 | 0.149254 | 0 | 0 | 0 | 0 | 0 | 0 | 0.253952 | 0.228455 | 1,230 | 29 | 79 | 42.413793 | 0.452055 | 0 | 0 | 0.24 | 0 | 0 | 0.008943 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0 | 0.12 | 0.48 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
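The `_R`/`_F`/`_V` wrappers in `regname.py` above are plain `int` subclasses whose only job is a register-style `repr`, so register numbers stay usable in arithmetic. A minimal stand-alone sketch of that behaviour (the class is duplicated here so the snippet runs on its own; it is not an extra definition in the repository):

```python
class _R(int):
    """Int subclass that prints as a PPC general-purpose register name."""
    def __repr__(self):
        return "r%s" % (super(_R, self).__repr__(),)
    __str__ = __repr__

r3 = _R(3)
print(repr(r3))  # -> r3 (register-style repr)
print(r3 + 1)    # -> 4 (arithmetic falls back to plain int)
```

Because `int.__add__` returns a plain `int`, the register-name formatting is lost after arithmetic, which is fine for a backend that only prints the original register objects.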
c2f186277f31c8ec4b6c844878711153981d3676 | 920 | py | Python | common/utilities/message_utilities.py | uk-gov-mirror/nhsconnect.integration-adaptor-mhs | bf090a17659da738401667997a10695d8b75b94b | [
"Apache-2.0"
] | 15 | 2019-08-06T16:08:12.000Z | 2021-05-24T13:14:39.000Z | common/utilities/message_utilities.py | uk-gov-mirror/nhsconnect.integration-adaptor-mhs | bf090a17659da738401667997a10695d8b75b94b | [
"Apache-2.0"
] | 75 | 2019-04-25T13:59:02.000Z | 2021-09-15T06:05:36.000Z | common/utilities/message_utilities.py | uk-gov-mirror/nhsconnect.integration-adaptor-mhs | bf090a17659da738401667997a10695d8b75b94b | [
"Apache-2.0"
] | 7 | 2019-11-12T15:26:34.000Z | 2021-04-11T07:23:56.000Z | import uuid
import datetime

import utilities.file_utilities as file_utilities

EBXML_TIMESTAMP_FORMAT = "%Y-%m-%dT%H:%M:%SZ"


def get_uuid():
    """Generate a UUID suitable for sending in messages to Spine.

    :return: A string representation of the UUID.
    """
    return str(uuid.uuid4()).upper()


def get_timestamp():
    """Generate a timestamp in a format suitable for sending in ebXML messages.

    :return: A string representation of the timestamp
    """
    current_utc_time = datetime.datetime.utcnow()
    return current_utc_time.strftime(EBXML_TIMESTAMP_FORMAT)


def load_test_data(message_dir, filename_without_extension):
    message = file_utilities.get_file_string(message_dir / (filename_without_extension + ".msg"))
    ebxml = file_utilities.get_file_string(message_dir / (filename_without_extension + ".ebxml"))
    message = message.replace("{{ebxml}}", ebxml)
    return message, ebxml
| 27.878788 | 97 | 0.738043 | 122 | 920 | 5.327869 | 0.393443 | 0.08 | 0.083077 | 0.115385 | 0.335385 | 0.283077 | 0.184615 | 0.184615 | 0.184615 | 0.184615 | 0 | 0.001295 | 0.16087 | 920 | 32 | 98 | 28.75 | 0.840674 | 0.248913 | 0 | 0 | 1 | 0 | 0.055891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.214286 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
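A stdlib-only sketch of the two UUID/timestamp helpers above (`load_test_data` and its `file_utilities` dependency are omitted because they need project files), illustrating the formats they produce: an upper-cased UUID string and an ebXML-style UTC timestamp:

```python
import datetime
import uuid

EBXML_TIMESTAMP_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

def get_uuid():
    # Spine expects UUIDs in upper case.
    return str(uuid.uuid4()).upper()

def get_timestamp():
    # e.g. "2019-11-12T15:26:34Z"
    return datetime.datetime.utcnow().strftime(EBXML_TIMESTAMP_FORMAT)

print(get_uuid())
print(get_timestamp())
```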
6c184b2174364e3e55c83631e166cd7d528e99e1 | 60 | py | Python | app/comic/container_exec/__init__.py | EYRA-Benchmark/grand-challenge.org | 8264c19fa1a30ffdb717d765e2aa2e6ceccaab17 | [
"Apache-2.0"
] | 2 | 2019-06-28T09:23:55.000Z | 2020-03-18T05:52:13.000Z | app/comic/container_exec/__init__.py | EYRA-Benchmark/comic | 8264c19fa1a30ffdb717d765e2aa2e6ceccaab17 | [
"Apache-2.0"
] | 112 | 2019-08-12T15:13:27.000Z | 2022-03-21T15:49:40.000Z | app/comic/container_exec/__init__.py | EYRA-Benchmark/grand-challenge.org | 8264c19fa1a30ffdb717d765e2aa2e6ceccaab17 | [
"Apache-2.0"
] | 1 | 2020-03-19T14:19:57.000Z | 2020-03-19T14:19:57.000Z | default_app_config = "comic.container_exec.apps.CoreConfig"
| 30 | 59 | 0.85 | 8 | 60 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 60 | 1 | 60 | 60 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0.6 | 0.6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6c1cda3f913ecea499264b75e17f95a35ff6a498 | 443 | py | Python | lantz/qt/widgets/__init__.py | mtsolmn/lantz-qt | 72cb16bd3aafe33caa1a822ac2ba98b3425d4420 | [
"BSD-3-Clause"
] | 1 | 2020-05-13T08:29:16.000Z | 2020-05-13T08:29:16.000Z | lantz/qt/widgets/__init__.py | mtsolmn/lantz-qt | 72cb16bd3aafe33caa1a822ac2ba98b3425d4420 | [
"BSD-3-Clause"
] | null | null | null | lantz/qt/widgets/__init__.py | mtsolmn/lantz-qt | 72cb16bd3aafe33caa1a822ac2ba98b3425d4420 | [
"BSD-3-Clause"
] | 3 | 2019-09-24T16:49:10.000Z | 2020-09-23T17:53:20.000Z | # -*- coding: utf-8 -*-
"""
    lantz.qt.widgets
    ~~~~~~~~~~~~~~~~

    PyQt widgets wrapped to work with lantz.

    :copyright: 2018 by Lantz Authors, see AUTHORS for more details.
    :license: BSD, see LICENSE for more details.
"""
from . import feat, nonnumeric, numeric
from .common import WidgetMixin, ChildrenWidgets
from .initialize import InitializeWindow, InitializeDialog
from .testgui import DriverTestWidget, SetupTestWidget | 27.6875 | 68 | 0.711061 | 50 | 443 | 6.3 | 0.7 | 0.044444 | 0.088889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013661 | 0.173815 | 443 | 16 | 69 | 27.6875 | 0.846995 | 0.471783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
6c6a90b147afe488a76460582fd0b95042612fc0 | 135 | py | Python | PySpace/using_sys.py | dralee/LearningRepository | 4324d3c5ac1a12dde17ae70c1eb7f3d36a047ba4 | [
"Apache-2.0"
] | null | null | null | PySpace/using_sys.py | dralee/LearningRepository | 4324d3c5ac1a12dde17ae70c1eb7f3d36a047ba4 | [
"Apache-2.0"
] | null | null | null | PySpace/using_sys.py | dralee/LearningRepository | 4324d3c5ac1a12dde17ae70c1eb7f3d36a047ba4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3
# Filename: using_sys.py
import sys
print('The command-line arguments are:')
for i in sys.argv:
    print(i)
print('\n\nThe Python path is:', sys.path, '\n') | 15 | 37 | 0.674074 | 24 | 135 | 3.75 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0.103704 | 135 | 9 | 37 | 15 | 0.735537 | 0.251852 | 0 | 0 | 0 | 0 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.6 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3
6c8d08da4457f70f71f8796a1ee31a832ff90488 | 190 | py | Python | day08/test04.py | jaywoong/python | 99daedd5a9418b72b2d5c3b800080e730eb9b3ea | [
"Apache-2.0"
] | null | null | null | day08/test04.py | jaywoong/python | 99daedd5a9418b72b2d5c3b800080e730eb9b3ea | [
"Apache-2.0"
] | null | null | null | day08/test04.py | jaywoong/python | 99daedd5a9418b72b2d5c3b800080e730eb9b3ea | [
"Apache-2.0"
] | null | null | null | from value import Account
acc1 = Account(10000, 3.2)
print(acc1)
acc1.__balance = 100000000
print(acc1)
print(acc1.getBalance())
print(acc1.getInterest())
acc1.setInterest(2.8)
print(acc1)
| 17.272727 | 26 | 0.763158 | 28 | 190 | 5.107143 | 0.535714 | 0.314685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151163 | 0.094737 | 190 | 10 | 27 | 19 | 0.680233 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.555556 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
665d3713837abc4149228da527c02f71d0d908ef | 1,151 | py | Python | tests/test_cli.py | joshbduncan/word-search-generator | 3c527f0371cbe4550a24403c660d1c6511b4cf79 | [
"MIT"
] | 4 | 2021-09-18T21:21:54.000Z | 2022-03-02T03:53:54.000Z | tests/test_cli.py | joshbduncan/word-search-generator | 3c527f0371cbe4550a24403c660d1c6511b4cf79 | [
"MIT"
] | 4 | 2021-09-18T21:50:33.000Z | 2022-03-22T04:29:33.000Z | tests/test_cli.py | joshbduncan/word-search-generator | 3c527f0371cbe4550a24403c660d1c6511b4cf79 | [
"MIT"
] | 1 | 2021-11-17T14:53:50.000Z | 2021-11-17T14:53:50.000Z | import os
import pathlib
import tempfile

TEMP_DIR = tempfile.TemporaryDirectory()


def test_entrypoint():
    exit_status = os.system("word-search --help")
    assert exit_status == 0


def test_no_words_provided():
    exit_status = os.system("word-search")
    assert os.WEXITSTATUS(exit_status) == 1


def test_just_words():
    exit_status = os.system("word-search some test words")
    assert exit_status == 0


def test_stdin():
    exit_status = os.system("echo computer robot soda | word-search")
    assert os.WEXITSTATUS(exit_status) == 0


def test_export_pdf():
    temp_path = TEMP_DIR.name + "/test.pdf"
    exit_status = os.system(f"word-search some test words -e pdf -o {temp_path}")
    assert exit_status == 0 and pathlib.Path(temp_path).exists()


def test_export_csv():
    temp_path = TEMP_DIR.name + "/test.csv"
    exit_status = os.system(f"word-search some test words -e csv -o {temp_path}")
    assert exit_status == 0 and pathlib.Path(temp_path).exists()


def test_invalid_export_location():
    exit_status = os.system("word-search some test words -e csv -o ~/RANDOMTESTLOC")
    assert os.WEXITSTATUS(exit_status) == 1
| 26.159091 | 84 | 0.709818 | 173 | 1,151 | 4.508671 | 0.248555 | 0.179487 | 0.107692 | 0.161538 | 0.711538 | 0.701282 | 0.482051 | 0.382051 | 0.371795 | 0.266667 | 0 | 0.00733 | 0.170287 | 1,151 | 43 | 85 | 26.767442 | 0.809424 | 0 | 0 | 0.222222 | 0 | 0 | 0.228497 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 1 | 0.259259 | false | 0 | 0.111111 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
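The CLI tests above mix two conventions: on POSIX, `os.system` returns a raw 16-bit wait status, and `os.WEXITSTATUS` extracts the child's exit code from it, so a bare `== 0` comparison only works because a zero exit code yields a zero status. A small POSIX-only sketch (the `exit 7` shell command is just an illustration):

```python
import os

# "exit 7" runs via the shell; on POSIX the raw status encodes 7 in the high byte.
status = os.system("exit 7")
print(status)                  # raw wait status (7 << 8 on POSIX)
print(os.WEXITSTATUS(status))  # decoded exit code: 7
```

This is why non-zero exit codes in the tests are checked through `os.WEXITSTATUS` rather than compared to the status directly.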
666d3c5b51416d64a4d8d00ca1cc2533f85b4bf8 | 296 | py | Python | venv/Lib/site-packages/IPython/terminal/ptshell.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | venv/Lib/site-packages/IPython/terminal/ptshell.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | venv/Lib/site-packages/IPython/terminal/ptshell.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | raise DeprecationWarning("""DEPRECATED:
After Popular request and decision from the BDFL:
`IPython.terminal.ptshell` has been moved back to `IPython.terminal.interactiveshell`
during the beta cycle (after IPython 5.0.beta3) Sorry about that.
This file will be removed in 5.0 rc or final.
""")
| 32.888889 | 85 | 0.777027 | 45 | 296 | 5.111111 | 0.844444 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.138514 | 296 | 8 | 86 | 37 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0.888514 | 0.206081 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
669f60ed987d448932641383a9784e17ffb52883 | 836 | py | Python | tests/scheduler_test.py | peng4217/scylla | aa5133d7c6d565c95651fc75b26ad605da0982cd | [
"Apache-2.0"
] | 3,556 | 2018-04-28T22:59:40.000Z | 2022-03-28T22:20:07.000Z | tests/scheduler_test.py | peng4217/scylla | aa5133d7c6d565c95651fc75b26ad605da0982cd | [
"Apache-2.0"
] | 120 | 2018-05-20T11:49:00.000Z | 2022-03-07T00:08:55.000Z | tests/scheduler_test.py | peng4217/scylla | aa5133d7c6d565c95651fc75b26ad605da0982cd | [
"Apache-2.0"
] | 518 | 2018-05-27T01:42:25.000Z | 2022-03-25T12:38:32.000Z | import pytest
from scylla.scheduler import Scheduler, cron_schedule


@pytest.fixture
def scheduler():
    return Scheduler()


def test_start(mocker, scheduler):
    process_start = mocker.patch('multiprocessing.Process.start')
    thread_start = mocker.patch('threading.Thread.start')

    scheduler.start()

    process_start.assert_called_once()
    thread_start.assert_called()


def test_cron_schedule(mocker, scheduler):
    feed_providers = mocker.patch('scylla.scheduler.Scheduler.feed_providers')

    cron_schedule(scheduler, only_once=True)

    feed_providers.assert_called_once()


def test_feed_providers(mocker, scheduler):
    pass
    # TODO: mock Queue.put or find other solutions
    # queue_put = mocker.patch('multiprocessing.Queue.put')
    #
    # scheduler.feed_providers()
    #
    # queue_put.assert_called()
| 23.885714 | 78 | 0.744019 | 100 | 836 | 5.98 | 0.34 | 0.108696 | 0.110368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155502 | 836 | 34 | 79 | 24.588235 | 0.847026 | 0.180622 | 0 | 0 | 0 | 0 | 0.135693 | 0.135693 | 0 | 0 | 0 | 0.029412 | 0.176471 | 1 | 0.235294 | false | 0.058824 | 0.117647 | 0.058824 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
66c4c0ab19cb9fa1cb71b15b0da8a32e24b51bb6 | 5,491 | py | Python | Linuxu.py | Jefferson-Hsu/Linuxu-shell | 2bbc42248e05ac01f8d3466479bb8106833c7ab1 | [
"MIT"
] | 1 | 2022-03-04T05:53:33.000Z | 2022-03-04T05:53:33.000Z | Linuxu.py | Jefferson-Hsu/Linuxu-shell | 2bbc42248e05ac01f8d3466479bb8106833c7ab1 | [
"MIT"
] | null | null | null | Linuxu.py | Jefferson-Hsu/Linuxu-shell | 2bbc42248e05ac01f8d3466479bb8106833c7ab1 | [
"MIT"
] | null | null | null | #library
import os
#string aphoto
print(" _ _ _ _ __ __ _ _ .____ .__ ")
print("| | | | ___| | | __\\ \\ / /__ _ __| | __| | | | |__| ____ __ _____ _____ __ ")
print("| |_| |/ _ \\ | |/ _ \\ \\ /\\ / / _ \\| '__| |/ _` | | | | |/ \| | \ \/ / | \ ")
print("| _ | __/ | | (_) \\ V V / (_) | | | | (_| | | |___| | | \ | /> <| | / ")
print("|_| |_|\\___|_|_|\\___/ \\_/\\_/ \\___/|_| |_|\\__,_| |_______ \__|___| /____//__/\_ \____/ ")
print(" ")
print(" ")
print(" ")
#password & user name
join_key=3
again_key=4
name="XuFaxin"
password="Xinxin080502"
print("--------------------------------------------------------------------------------------------------------------------------------------------")
input_name=input("Please type the user name: ")
print("--------------------------------------------------------------------------------------------------------------------------------------------")
input_password=input("Please type the password: ")
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("welcome to Linuxu system!!!")
print(" ")
while(join_key==3):
    if input_name=="XuFaxin" and input_password=="Xinxin080502":
        print(" ")
        print(" ")
    else:
        print("Bye,you are not user!")
        break
    #command shell
    command=input("XuFaxin@computer% ")
    #root command
    if(command=="root"):
        print(" ")
        print("you are rooter!")
        print(" ")
        print("But don't be happy too soon")
        print(" ")
        print("-----------------------------------------------------------------------------------------------------------------------------------")
        print("                                In the world of Linuxu XuFaxin is god!")
        print("-----------------------------------------------------------------------------------------------------------------------------------")
        print(" ")
    #Calculator command
    if(command=="math"):
        print("Develop by XuFaxin")
        counts=3
        while counts>0:
            str1=input("First number: ")
            str2=input("Second number:")
            X=int(str1)
            Y=int(str2)
            print(X+Y)
            print(X-Y)
            print(X*Y)
            print(X/Y)
            print(X**Y)
            print(X//Y)
            break
    #game command
    if(command=="game"):
        print(" ")
        print("Welcome to XuFaxin's guess number game!")
        print(" ")
        print("You have three chances")
        print(" ")
        print("Guess an integer between 1 and 10")
        print(" ")
        print("develop by XuFaxin")
        print(" ")
        print(" ")
        import random
        answer=random.randint(1,10)
        counts=3
        while counts>0:
            temp=input("Guess a number: ")
            guess=int(temp)
            if guess==answer:
                print(" ")
                print("Win")
                print(" ")
                print("Win!!! But no pay! HAHA!")
            else:
                if guess>answer:  # compare with the answer, not zero
                    print(" ")
                    print("Big!")
                    print(" ")
                else:
                    print(" ")
                    print("small!")
            counts=counts-1
    #clear command
    if(command=="clear"):
        os.system( 'cls' )
        os.system("clear")
    #list command
    if(command=="ls"):
        print("-------------------------------------------------------------------------------------------------------------------------------")
        print("                  ||game||                  ||math||       ")
        print("-------------------------------------------------------------------------------------------------------------------------------")
    #exit command
    if(command=="exit"):
        print(" ")
        print("See you again!")
        break | 44.282258 | 152 | 0.24094 | 275 | 5,491 | 4.396364 | 0.338182 | 0.198511 | 0.079404 | 0.049628 | 0.06617 | 0.034739 | 0.034739 | 0.034739 | 0.034739 | 0.034739 | 0 | 0.011095 | 0.491167 | 5,491 | 124 | 153 | 44.282258 | 0.421618 | 0.024039 | 0 | 0.410526 | 0 | 0.042105 | 0.626168 | 0.174953 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.031579 | 0.021053 | 0 | 0.021053 | 0.589474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3
dd1d3ef072d0bbe5516060cc303f3f2982632867 | 305 | py | Python | django_obfuscator/testobfuscator/models.py | vishnuc91/obfuscator-date | d4424cb7823dbf20543c5cc2bc0ce48d8d62a69a | [
"Apache-2.0"
] | null | null | null | django_obfuscator/testobfuscator/models.py | vishnuc91/obfuscator-date | d4424cb7823dbf20543c5cc2bc0ce48d8d62a69a | [
"Apache-2.0"
] | null | null | null | django_obfuscator/testobfuscator/models.py | vishnuc91/obfuscator-date | d4424cb7823dbf20543c5cc2bc0ce48d8d62a69a | [
"Apache-2.0"
] | null | null | null | from django.db import models
# Create your models here.
class MyModel(models.Model):
    aname = models.CharField(max_length=100, null=True, blank=True)
    anint = models.IntegerField(default=999)
    astring = models.CharField(max_length=50)
    date = models.DateField('Date', null=True, blank=True)
| 30.5 | 67 | 0.734426 | 42 | 305 | 5.285714 | 0.642857 | 0.135135 | 0.162162 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.147541 | 305 | 9 | 68 | 33.888889 | 0.823077 | 0.078689 | 0 | 0 | 0 | 0 | 0.014337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
dd27497c195aa372001cc5557855526c81484bc2 | 263 | py | Python | nameyourapp/routes.py | WhereWeCanShare/minipy | 485e9c4f122aa56ed8389d0ea7b5c16d59179aed | [
"BSD-3-Clause"
] | null | null | null | nameyourapp/routes.py | WhereWeCanShare/minipy | 485e9c4f122aa56ed8389d0ea7b5c16d59179aed | [
"BSD-3-Clause"
] | null | null | null | nameyourapp/routes.py | WhereWeCanShare/minipy | 485e9c4f122aa56ed8389d0ea7b5c16d59179aed | [
"BSD-3-Clause"
] | null | null | null | from flask import Blueprint
main = Blueprint('main', __name__)


@main.route('/')
def main_index():
    return '<div align="center"><img src="https://source.unsplash.com/1200x800/?technology,matrix,hacker,women"><p>Thanks Unsplash for nice photo</p></div>', 200
| 29.222222 | 161 | 0.714829 | 37 | 263 | 4.945946 | 0.810811 | 0.142077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042373 | 0.102662 | 263 | 8 | 162 | 32.875 | 0.733051 | 0 | 0 | 0 | 0 | 0.2 | 0.562738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
dd341ee91f7a33f3e372a0311daf89a77f9a9148 | 349 | py | Python | tests/test_license_screening.py | sthagen/python-scaling-tribble | 2bb2e41185ae2b0108f341751d0e4a2187909683 | [
"MIT"
] | null | null | null | tests/test_license_screening.py | sthagen/python-scaling-tribble | 2bb2e41185ae2b0108f341751d0e4a2187909683 | [
"MIT"
] | 18 | 2021-02-14T15:17:17.000Z | 2021-02-14T17:46:27.000Z | tests/test_license_screening.py | sthagen/python-scaling-tribble | 2bb2e41185ae2b0108f341751d0e4a2187909683 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# pylint: disable=missing-docstring,unused-import,reimported
import pytest  # type: ignore

import tests.context as ctx
import license_screening.license_screening as lis


def test_parse_ok_empty_string():
    assert lis.parse('') is NotImplemented


def test_parse_ok_known_tree():
    assert lis.main(["tests/data"]) == 0
| 21.8125 | 60 | 0.744986 | 49 | 349 | 5.102041 | 0.693878 | 0.128 | 0.096 | 0.112 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006645 | 0.137536 | 349 | 15 | 61 | 23.266667 | 0.82392 | 0.266476 | 0 | 0 | 0 | 0 | 0.039683 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | true | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
06bd33902db8a6c06128727e29c0eae037cf9894 | 378 | py | Python | Cartwheel/lib/Python26/Lib/site-packages/OpenGL/GL/NV/fragment_program.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | Cartwheel/lib/Python26/Lib/site-packages/OpenGL/GL/NV/fragment_program.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | Cartwheel/lib/Python26/Lib/site-packages/OpenGL/GL/NV/fragment_program.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | '''OpenGL extension NV.fragment_program
This module customises the behaviour of the
OpenGL.raw.GL.NV.fragment_program to provide a more
Python-friendly API
'''
from OpenGL import platform, constants, constant, arrays
from OpenGL import extensions, wrapper
from OpenGL.GL import glget
import ctypes
from OpenGL.raw.GL.NV.fragment_program import *
### END AUTOGENERATED SECTION | 31.5 | 56 | 0.812169 | 55 | 378 | 5.527273 | 0.6 | 0.131579 | 0.167763 | 0.085526 | 0.184211 | 0.184211 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121693 | 378 | 12 | 57 | 31.5 | 0.915663 | 0.481481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
06d39356c17a9b2e38d63f8d9aaf9f0140911fdf | 46 | py | Python | wafw00f/__init__.py | nofunofunofunofun/wafw00f | a1c3f3a045077d893cd9ed970f5d687b590abfa5 | [
"BSD-3-Clause"
] | 1 | 2022-03-22T09:15:04.000Z | 2022-03-22T09:15:04.000Z | wafw00f/__init__.py | nofunofunofunofun/wafw00f | a1c3f3a045077d893cd9ed970f5d687b590abfa5 | [
"BSD-3-Clause"
] | null | null | null | wafw00f/__init__.py | nofunofunofunofun/wafw00f | a1c3f3a045077d893cd9ed970f5d687b590abfa5 | [
"BSD-3-Clause"
] | 1 | 2021-01-11T17:26:14.000Z | 2021-01-11T17:26:14.000Z | #!/usr/bin/env python
__version__ = '0.9.6'
| 9.2 | 21 | 0.630435 | 8 | 46 | 3.125 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.152174 | 46 | 4 | 22 | 11.5 | 0.564103 | 0.434783 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
06da67599586f6f3ad731d357524246723f211dc | 331 | py | Python | python/demo-django/code/character/models.py | denisroldan/talentum-2015-examples | 4980f49dca56a55d26722f4f6d1fdd88e06f38dd | [
"MIT"
] | null | null | null | python/demo-django/code/character/models.py | denisroldan/talentum-2015-examples | 4980f49dca56a55d26722f4f6d1fdd88e06f38dd | [
"MIT"
] | null | null | null | python/demo-django/code/character/models.py | denisroldan/talentum-2015-examples | 4980f49dca56a55d26722f4f6d1fdd88e06f38dd | [
"MIT"
] | null | null | null | from django.db import models
from game.models import Game
class Character(models.Model):
name = models.CharField(max_length=250)
game = models.ManyToManyField(Game, related_name='characters')
def __unicode__(self):
return "{0}".format(self.name)
def __str__(self):
return "{0}".format(self.name) | 25.461538 | 66 | 0.694864 | 43 | 331 | 5.116279 | 0.55814 | 0.090909 | 0.1 | 0.154545 | 0.227273 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0.01845 | 0.181269 | 331 | 13 | 67 | 25.461538 | 0.793358 | 0 | 0 | 0.222222 | 0 | 0 | 0.048193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0.222222 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
06dc17868ef7177e219fa324d4ee2030cbe721d0 | 51 | py | Python | appengine/src/greenday_core/settings/__init__.py | meedan/montage | 4da0116931edc9af91f226876330645837dc9bcc | [
"Apache-2.0"
] | 6 | 2018-07-31T16:48:07.000Z | 2020-02-01T03:17:51.000Z | appengine/src/greenday_core/settings/__init__.py | meedan/montage | 4da0116931edc9af91f226876330645837dc9bcc | [
"Apache-2.0"
] | 41 | 2018-08-07T16:43:07.000Z | 2020-06-05T18:54:50.000Z | appengine/src/greenday_core/settings/__init__.py | meedan/montage | 4da0116931edc9af91f226876330645837dc9bcc | [
"Apache-2.0"
] | 1 | 2018-08-07T16:40:18.000Z | 2018-08-07T16:40:18.000Z | """
Modules to configure Django's settings
"""
| 12.75 | 42 | 0.647059 | 6 | 51 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215686 | 51 | 3 | 43 | 17 | 0.825 | 0.745098 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
06e51ad894ceaca1307a132e1efdb1fe4242fe80 | 17,008 | py | Python | batches.py | NRHelmi/ldbc_snb_data_converter | 42eb5dfbe8b46bcb4d8ad56e1fd988e7635deea3 | [
"Apache-2.0"
] | 2 | 2021-01-22T10:07:18.000Z | 2021-02-09T18:13:28.000Z | batches.py | NRHelmi/ldbc_snb_data_converter | 42eb5dfbe8b46bcb4d8ad56e1fd988e7635deea3 | [
"Apache-2.0"
] | 5 | 2021-02-11T23:12:05.000Z | 2021-05-21T12:16:29.000Z | batches.py | szarnyasg/ldbc-example-graph | 1fd52bc60d50cf5184ee1331369d754db2b8489f | [
"Apache-2.0"
] | 1 | 2022-03-24T20:02:23.000Z | 2022-03-24T20:02:23.000Z | import duckdb
from datetime import date
from dateutil.relativedelta import relativedelta
import os
import shutil
con = duckdb.connect(database='ldbc.duckdb', read_only=False)
# batches are selected from the [network_start_date, network_end_date) interval,
# each batch denotes a [batch_start_date, batch_end_date) where
# batch_end_date = batch_start_date + batch_size
network_start_date = date(2011, 1, 1)
network_end_date = date(2014, 1, 1)
batch_size = relativedelta(years=1)
if os.path.isdir("batches"):
shutil.rmtree("batches")
os.mkdir("batches")
batch_start_date = network_start_date
while batch_start_date < network_end_date:
batch_end_date = batch_start_date + batch_size
interval = [batch_start_date, batch_end_date]
print(f"Batch: {interval}")
######################################## cleanup #######################################
# clean insert tables
con.execute("DELETE FROM Person") # INS1
con.execute("DELETE FROM Person_hasInterest_Tag")
con.execute("DELETE FROM Person_studyAt_University")
con.execute("DELETE FROM Person_workAt_Company")
con.execute("DELETE FROM Person_likes_Post") # INS2
con.execute("DELETE FROM Person_likes_Comment") # INS3
con.execute("DELETE FROM Forum") # INS4
con.execute("DELETE FROM Forum_hasTag_Tag")
con.execute("DELETE FROM Forum_hasMember_Person") # INS5
con.execute("DELETE FROM Post") # INS6
con.execute("DELETE FROM Post_hasTag_Tag")
con.execute("DELETE FROM Comment") # INS7
con.execute("DELETE FROM Comment_hasTag_Tag") # INS8
con.execute("DELETE FROM Person_knows_Person")
# clean delete tables
con.execute("DELETE FROM Person_Delete_candidates") # DEL1
con.execute("DELETE FROM Person_likes_Post_Delete_candidates") # DEL2
con.execute("DELETE FROM Person_likes_Comment_Delete_candidates") # DEL3
con.execute("DELETE FROM Forum_Delete_candidates") # DEL4
con.execute("DELETE FROM Forum_hasMember_Person_Delete_candidates") # DEL5
con.execute("DELETE FROM Post_Delete_candidates") # DEL6
con.execute("DELETE FROM Comment_Delete_candidates") # DEL7
con.execute("DELETE FROM Person_knows_Person_Delete_candidates") # DEL8
######################################## inserts #######################################
insertion_params = [batch_start_date, batch_end_date, batch_end_date]
# INS1
con.execute("""
INSERT INTO Person
SELECT creationDate, id, firstName, lastName, gender, birthday, locationIP, browserUsed, isLocatedIn_City, speaks, email
FROM Raw_Person
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
con.execute("""
INSERT INTO Person_hasInterest_Tag
SELECT creationDate, id, hasInterest_Tag
FROM Raw_Person_hasInterest_Tag
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
con.execute("""
INSERT INTO Person_studyAt_University
SELECT creationDate, id, studyAt_University, classYear
FROM Raw_Person_studyAt_University
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
con.execute("""
INSERT INTO Person_workAt_Company
SELECT creationDate, id, workAt_Company, workFrom
FROM Raw_Person_workAt_Company
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS2
con.execute("""
INSERT INTO Person_likes_Post
SELECT creationDate, id, likes_Post
FROM Raw_Person_likes_Post
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS3
con.execute("""
INSERT INTO Person_likes_Comment
SELECT creationDate, id, likes_Comment
FROM Raw_Person_likes_Comment
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS4
con.execute("""
INSERT INTO Forum
SELECT creationDate, id, title, hasModerator_Person
FROM Raw_Forum
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
con.execute("""
INSERT INTO Forum_hasTag_Tag
SELECT creationDate, id, hasTag_Tag
FROM Raw_Forum_hasTag_Tag
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS5
con.execute("""
INSERT INTO Forum_hasMember_Person
SELECT creationDate, id, hasMember_Person
FROM Raw_Forum_hasMember_Person
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS6
con.execute("""
INSERT INTO Post
SELECT creationDate, id, imageFile, locationIP, browserUsed, language, content, length, hasCreator_Person, Forum_containerOf, isLocatedIn_Country
FROM Raw_Post
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
con.execute("""
INSERT INTO Post_hasTag_Tag
SELECT creationDate, id, hasTag_Tag
FROM Raw_Post_hasTag_Tag
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS7
con.execute("""
INSERT INTO Comment
SELECT creationDate, id, locationIP, browserUsed, content, length, hasCreator_Person, isLocatedIn_Country, replyOf_Post, replyOf_Comment
FROM Raw_Comment
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
con.execute("""
INSERT INTO Comment_hasTag_Tag
SELECT creationDate, id, hasTag_Tag
FROM Raw_Comment_hasTag_Tag
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
# INS8
con.execute("""
INSERT INTO Person_knows_Person
SELECT creationDate, Person1id, Person2id
FROM Raw_Person_knows_Person
WHERE creationDate >= ?
AND creationDate < ?
AND deletionDate >= ?
""",
insertion_params)
######################################## deletes #######################################
deletion_params = [batch_start_date, batch_start_date, batch_end_date]
# DEL1 (Persons are always explicitly deleted)
con.execute("""
INSERT INTO Person_Delete_candidates
SELECT deletionDate, id
FROM Raw_Person
WHERE creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL2
con.execute("""
INSERT INTO Person_likes_Post_Delete_candidates
SELECT deletionDate, id, likes_Post
FROM Raw_Person_likes_Post
WHERE explicitlyDeleted
AND creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL3
con.execute("""
INSERT INTO Person_likes_Comment_Delete_candidates
SELECT deletionDate, id, likes_Comment
FROM Raw_Person_likes_Comment
WHERE explicitlyDeleted
AND creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL4 (Forums are always explicitly deleted -- TODO: check in generated data for walls/albums/groups)
con.execute("""
INSERT INTO Forum_Delete_candidates
SELECT deletionDate, id
FROM Raw_Forum
WHERE creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL5
con.execute("""
INSERT INTO Forum_hasMember_Person_Delete_candidates
SELECT deletionDate, id, hasMember_Person
FROM Raw_Forum_hasMember_Person
WHERE explicitlyDeleted
AND creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL6
con.execute("""
INSERT INTO Post_Delete_candidates
SELECT deletionDate, id
FROM Raw_Post
WHERE explicitlyDeleted
AND creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL7
con.execute("""
INSERT INTO Comment_Delete_candidates
SELECT deletionDate, id
FROM Raw_Comment
WHERE explicitlyDeleted
AND creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
# DEL8
con.execute("""
INSERT INTO Person_knows_Person_Delete_candidates
SELECT deletionDate, Person1id, Person2id
FROM Raw_Person_knows_Person
WHERE explicitlyDeleted
AND creationDate < ?
AND deletionDate >= ?
AND deletionDate < ?
""",
deletion_params)
######################################## export ########################################
batch_dir = f"batches/{batch_start_date}"
os.mkdir(f"{batch_dir}")
os.mkdir(f"{batch_dir}/inserts")
os.mkdir(f"{batch_dir}/deletes")
# inserts
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, firstName, lastName, gender, birthday, locationIP, browserUsed, isLocatedIn_City, speaks, email FROM Person) TO 'batches/{batch_start_date}/inserts/Person.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS1
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, hasInterest_Tag FROM Person_hasInterest_Tag) TO 'batches/{batch_start_date}/inserts/Person_hasInterest_Tag.csv' (HEADER, FORMAT CSV, DELIMITER '|')")
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, studyAt_University, classYear FROM Person_studyAt_University) TO 'batches/{batch_start_date}/inserts/Person_studyAt_University.csv' (HEADER, FORMAT CSV, DELIMITER '|')")
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, workAt_Company, workFrom FROM Person_workAt_Company) TO 'batches/{batch_start_date}/inserts/Person_workAt_Company.csv' (HEADER, FORMAT CSV, DELIMITER '|')")
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, likes_Post FROM Person_likes_Post) TO 'batches/{batch_start_date}/inserts/Person_likes_Post.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS2
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, likes_Comment FROM Person_likes_Comment) TO 'batches/{batch_start_date}/inserts/Person_likes_Comment.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS3
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, title, hasModerator_Person FROM Forum) TO 'batches/{batch_start_date}/inserts/Forum.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS4
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, hasTag_Tag FROM Forum_hasTag_Tag) TO 'batches/{batch_start_date}/inserts/Forum_hasTag_Tag.csv' (HEADER, FORMAT CSV, DELIMITER '|')")
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, hasMember_Person FROM Forum_hasMember_Person) TO 'batches/{batch_start_date}/inserts/Forum_hasMember_Person.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS5
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, imageFile, locationIP, browserUsed, language, content, length, hasCreator_Person, Forum_containerOf, isLocatedIn_Country FROM Post) TO 'batches/{batch_start_date}/inserts/Post.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS6
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, hasTag_Tag FROM Post_hasTag_Tag) TO 'batches/{batch_start_date}/inserts/Post_hasTag_Tag.csv' (HEADER, FORMAT CSV, DELIMITER '|')")
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, locationIP, browserUsed, content, length, hasCreator_Person, isLocatedIn_Country, replyOf_Post, replyOf_Comment FROM Comment) TO 'batches/{batch_start_date}/inserts/Comment.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS7
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, id, hasTag_Tag FROM Comment_hasTag_Tag) TO 'batches/{batch_start_date}/inserts/Comment_hasTag_Tag.csv' (HEADER, FORMAT CSV, DELIMITER '|')")
con.execute(f"COPY (SELECT strftime(creationDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS creationDate, Person1id, Person2id FROM Person_knows_Person) TO 'batches/{batch_start_date}/inserts/Person_knows_Person.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #INS8
# deletes
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, id FROM Person_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Person.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL1
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, src, trg FROM Person_likes_Post_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Person_likes_Post.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL2
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, src, trg FROM Person_likes_Comment_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Person_likes_Comment.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL3
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, id FROM Forum_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Forum.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL4
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, src, trg FROM Forum_hasMember_Person_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Forum_hasMember_Person.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL5
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, id FROM Post_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Post.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL6
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, id FROM Comment_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Comment.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL7
con.execute(f"COPY (SELECT strftime(deletionDate, '%Y-%m-%dT%H:%M:%S.%g+00:00') AS deletionDate, src, trg FROM Person_knows_Person_Delete_candidates) TO 'batches/{batch_start_date}/deletes/Person_knows_Person.csv' (HEADER, FORMAT CSV, DELIMITER '|')") #DEL8
############################# set interval for next iteration ##########################
batch_start_date = batch_end_date
| 50.619048 | 350 | 0.596954 | 1,834 | 17,008 | 5.331516 | 0.082879 | 0.067498 | 0.047249 | 0.051544 | 0.859071 | 0.779096 | 0.688178 | 0.579464 | 0.487012 | 0.400491 | 0 | 0.012676 | 0.281044 | 17,008 | 335 | 351 | 50.770149 | 0.786964 | 0.039393 | 0 | 0.591912 | 0 | 0.088235 | 0.798039 | 0.23207 | 0 | 0 | 0 | 0.002985 | 0 | 1 | 0 | false | 0 | 0.018382 | 0 | 0.018382 | 0.003676 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
66436d46f2399420648d1ae5afad1941d5ac5dfe | 981 | py | Python | python/PythonForNetworkEngineers/Lesson3/exercise4.py | ModestTG/scripts | 91517b1238267185852816f73e7f7221012faa8b | [
"MIT"
] | null | null | null | python/PythonForNetworkEngineers/Lesson3/exercise4.py | ModestTG/scripts | 91517b1238267185852816f73e7f7221012faa8b | [
"MIT"
] | null | null | null | python/PythonForNetworkEngineers/Lesson3/exercise4.py | ModestTG/scripts | 91517b1238267185852816f73e7f7221012faa8b | [
"MIT"
] | null | null | null | from __future__ import print_function, unicode_literals, division
arp_table = [('10.220.88.1', '0062.ec29.70fe'),
('10.220.88.20', 'c89c.1dea.0eb6'),
('10.220.88.21', '1c6a.7aaf.576c'),
('10.220.88.28', '5254.aba8.9aea'),
('10.220.88.29', '5254.abbe.5b7b'),
('10.220.88.30', '5254.ab71.e119'),
('10.220.88.32', '5254.abc7.26aa'),
('10.220.88.33', '5254.ab3a.8d26'),
('10.220.88.35', '5254.abfb.af12'),
('10.220.88.37', '0001.00ff.0001'),
('10.220.88.38', '0002.00ff.0001'),
('10.220.88.39', '6464.9be8.08c8'),
('10.220.88.40', '001c.c4bf.826a'),
('10.220.88.41', '001b.7873.5634')]
i = 0
while i < len(arp_table):
output = ""
mac_parts = arp_table[i][1].split(".")
for element in mac_parts:
output += element[0:2].upper() + "." + element[2:4].upper() + "."
print(output[:-1])
i += 1 | 39.24 | 73 | 0.489297 | 142 | 981 | 3.302817 | 0.528169 | 0.149254 | 0.208955 | 0.055437 | 0.063966 | 0 | 0 | 0 | 0 | 0 | 0 | 0.336123 | 0.269113 | 981 | 25 | 74 | 39.24 | 0.317992 | 0 | 0 | 0 | 0 | 0 | 0.372709 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.086957 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |