hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fbbc5bc86a9eadd44aae5283b14050fa183a3292 | 118 | py | Python | frontend/backend/home/urls.py | OnlinePoliticalTransparency/Facebook-Political-Ad-Landscape-Notifications | e69637584ba89ef86256ff10906dde10dda5720d | [
"MIT"
] | null | null | null | frontend/backend/home/urls.py | OnlinePoliticalTransparency/Facebook-Political-Ad-Landscape-Notifications | e69637584ba89ef86256ff10906dde10dda5720d | [
"MIT"
] | 3 | 2021-06-05T00:09:26.000Z | 2021-09-08T03:00:03.000Z | frontend/backend/home/urls.py | OnlinePoliticalTransparency/Facebook-Political-Ad-Landscape-Notifications | e69637584ba89ef86256ff10906dde10dda5720d | [
"MIT"
] | null | null | null | from django.urls import re_path
from .views import home
urlpatterns = [
re_path(r"^.*?$", home, name="home"),
]
| 14.75 | 41 | 0.652542 | 17 | 118 | 4.411765 | 0.647059 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177966 | 118 | 7 | 42 | 16.857143 | 0.773196 | 0 | 0 | 0 | 0 | 0 | 0.076271 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
fbcb6f03ed76192bc196a020a804f1c7c87687b8 | 212 | py | Python | mailer.py | milles9393/deploy-python-openshift-s2i-tutorial | 790d1c966799bd5fb77d5499b9250d36dd683bd3 | [
"Apache-2.0"
] | null | null | null | mailer.py | milles9393/deploy-python-openshift-s2i-tutorial | 790d1c966799bd5fb77d5499b9250d36dd683bd3 | [
"Apache-2.0"
] | null | null | null | mailer.py | milles9393/deploy-python-openshift-s2i-tutorial | 790d1c966799bd5fb77d5499b9250d36dd683bd3 | [
"Apache-2.0"
] | null | null | null | x = 1
if x == 1:
# indented four spaces
print("x is ASDJASJDJAJSAJD JASJD ASJD JASJD JASDJ AJSDJ ASJD JASJD JASJD JASJD AJS DJASJD JASJD JASDJ AJSD JASJD JSADJ ASJD JASD AJSD JASJD JASD AJSDJ ASJD JA1.")
| 42.4 | 167 | 0.735849 | 36 | 212 | 4.333333 | 0.527778 | 0.025641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018072 | 0.216981 | 212 | 4 | 168 | 53 | 0.921687 | 0.09434 | 0 | 0 | 0 | 0.333333 | 0.810526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fbdacbed2b7247f89afd87222786ee4beeedec22 | 5,275 | py | Python | scripts/word_count_cli.py | b-cube/Response-Identification-Info | d2fa24c9f0d7db7d8bbf5cda937e1a9dd29a8f6e | [
"MIT"
] | null | null | null | scripts/word_count_cli.py | b-cube/Response-Identification-Info | d2fa24c9f0d7db7d8bbf5cda937e1a9dd29a8f6e | [
"MIT"
] | 1 | 2015-09-23T16:30:34.000Z | 2015-09-23T16:30:34.000Z | scripts/word_count_cli.py | b-cube/Response-Identification-Info | d2fa24c9f0d7db7d8bbf5cda937e1a9dd29a8f6e | [
"MIT"
] | 1 | 2020-03-25T09:41:03.000Z | 2020-03-25T09:41:03.000Z | # -*- coding: utf-8 -*-
from optparse import OptionParser
import sys
import re
import dateutil.parser as dateparser
from itertools import chain
from semproc.bag_parser import BagParser
from semproc.nlp_utils import *
import warnings
warnings.filterwarnings('ignore')
'''
a little cli for the bag of words
count to handle the race conditions
in the regex
'''
def convert_header_list(headers):
'''
convert from the list of strings, one string
per kvp, to a dict with keys normalized
'''
return dict(
(k.strip().lower(), v.strip()) for k, v in (
h.split(':', 1) for h in headers)
)
def strip_dates(text):
# this should still make it an invalid date
# text = text[3:] if text.startswith('NaN') else text
try:
d = dateparser.parse(text)
return ''
except ValueError:
return text
except OverflowError:
return text
def strip_filenames(text):
# we'll see
exts = ('png', 'jpg', 'hdf', 'xml', 'doc', 'pdf', 'txt', 'jar', 'nc', 'XSL', 'kml', 'xsd')
return '' if text.endswith(exts) else text
def strip_identifiers(texts):
# chuck any urns, urls, uuids
_pattern_set = [
('url', ur"""(?i)\b((?:https?:(?:/{1,3}|[a-z0-9%])|[a-z0-9.\-]+[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)/)(?:[^\s()<>{}\[\]]+|\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\))+(?:\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’])|(?:(?<!@)[a-z0-9]+(?:[.\-][a-z0-9]+)*[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)\b/?(?!@
)))"""),
# a urn that isn't a url
('urn', ur"(?![http://])(?![https://])(?![ftp://])(([a-z0-9.\S][a-z0-9-.\S]{0,}\S:{1,2}\S)+[a-z0-9()+,\-.=@;$_!*'%/?#]+)"),
('uuid', ur'([a-f\d]{8}(-[a-f\d]{4}){3}-[a-f\d]{12}?)'),
('doi', ur"(10[.][0-9]{4,}(?:[/][0-9]+)*/(?:(?![\"&\\'])\S)+)"),
('md5', ur"([a-f0-9]{32})")
]
for pattern_type, pattern in _pattern_set:
for m in re.findall(re.compile(pattern), texts):
m = max(m) if isinstance(m, tuple) else m
try:
texts = texts.replace(m, '')
except Exception as ex:
print ex
print m
files = ['cat_interop_urns.txt', 'mimetypes.txt', 'namespaces.txt']
for f in files:
texts = remove_tokens(f, texts)
return texts.split()
def clean(text):
text = strip_dates(text)
text = remove_numeric(text).strip()
text = remove_punctuation(text.strip()).strip()
text = strip_terminal_punctuation(text.strip()).strip()
text = strip_filenames(text).strip()
return text
def main():
op = OptionParser()
op.add_option('--file', '-f')
options, arguments = op.parse_args()
# get it from some file, sometimes the xml
# is too long for the args and i'm tired of
# quoting things.
if not options.file:
op.error('No xml file')
exclude_tags = ['schemaLocation', 'noNamespaceSchemaLocation']
with open(options.file, 'r') as f:
xml_as_string = f.read()
bp = BagParser(xml_as_string.encode('utf-8'), True, False)
if bp.parser.xml is None:
sys.stderr.write('Failed xml parse')
sys.exit(1)
stripped_text = [b[1].split() for b in bp.strip_text(exclude_tags) if b[1]]
stripped_text = list(chain.from_iterable(stripped_text))
cleaned_text = [clean(s) for s in stripped_text]
bow = strip_identifiers(' '.join([c for c in cleaned_text if c]))
print ' '.join([b.encode('utf-8') for b in bow if b]).replace("'", "\'")
if __name__ == '__main__':
main()
| 45.08547 | 2,016 | 0.590711 | 1,044 | 5,275 | 2.941571 | 0.444444 | 0.005861 | 0.007815 | 0.009118 | 0.413546 | 0.413546 | 0.391403 | 0.391403 | 0.391403 | 0.388147 | 0 | 0.009872 | 0.155071 | 5,275 | 116 | 2,017 | 45.474138 | 0.678708 | 0.052133 | 0 | 0.069444 | 0 | 0.055556 | 0.499374 | 0.436614 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fbdd78b74ab21c393da357de8b1428e7084fcddd | 3,507 | py | Python | apps/users_auth/models.py | matheuslins/cuzjobs | 0f46402d534fefaef394ccd09b454fe361bb36f2 | [
"MIT"
] | 1 | 2018-07-10T20:30:52.000Z | 2018-07-10T20:30:52.000Z | apps/users_auth/models.py | matheuslins/cuscuzjobs | 0f46402d534fefaef394ccd09b454fe361bb36f2 | [
"MIT"
] | 10 | 2019-04-25T00:01:29.000Z | 2021-04-08T18:52:52.000Z | apps/users_auth/models.py | matheuslins/cuzjobs | 0f46402d534fefaef394ccd09b454fe361bb36f2 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
from django.contrib.auth.models import (BaseUserManager, AbstractBaseUser,
PermissionsMixin)
from apps.company.models import Company
class UserManager(BaseUserManager):
use_in_migrations = True
def _create_user(self, email, password, **extra_fields):
"""
Creates and saves a User with the given email and password.
"""
email = self.normalize_email(email)
user = self.model(email=email, **extra_fields)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, email=None, password=None, **extra_fields):
extra_fields.setdefault('is_staff', False)
extra_fields.setdefault('is_superuser', False)
return self._create_user(email, password, **extra_fields)
def create_superuser(self, email, password, **extra_fields):
extra_fields.setdefault('is_staff', True)
extra_fields.setdefault('is_superuser', True)
if extra_fields.get('is_staff') is not True:
raise ValueError('Superuser must have is_staff=True.')
if extra_fields.get('is_superuser') is not True:
raise ValueError('Superuser must have is_superuser=True.')
return self._create_user(email, password, **extra_fields)
class DefaultUser(AbstractBaseUser, PermissionsMixin):
name = models.CharField('Nome', max_length=100, blank=True, null=True)
image = models.ImageField(
upload_to='users_auth/images',
verbose_name='Imagem',
null=True,
blank=True
)
username = models.CharField('Username', max_length=100, blank=True, null=True)
email = models.EmailField('E-mail', unique=True)
is_active = models.BooleanField('Está ativo?', blank=True, default=True)
is_staff = models.BooleanField('É da equipe?', blank=True, default=False)
date_joined = models.DateTimeField('Data de Entrada', auto_now_add=True)
USERNAME_FIELD = 'email'
EMAIL_FIELD = 'email'
REQUIRED_FIELDS = ['username', 'name']
objects = UserManager()
def __str__(self):
return self.name
def get_absolute_url(self):
return f"/users/{self.pk}/"
def get_full_name(self):
# The user is identified by their email address
return self.name
def get_short_name(self):
# The user is identified by their email address
return self.name
def __unicode__(self):
return u"name: %s " % (self.name or u'')
class Meta:
verbose_name = 'Usuário'
verbose_name_plural = 'Usuários'
class TechRecruiter(DefaultUser):
company = models.ForeignKey(
Company,
on_delete=models.CASCADE,
related_name="tech_recruiter_company",
verbose_name="Company"
)
def __str__(self):
return str(self.name or "[Not set]") + " -> " + self.email
def get_short_name(self):
return self.name
def get_full_name(self):
return str(self)
class Meta:
verbose_name = 'Tech Recruiter'
verbose_name_plural = 'Tech Recruiteis'
class Candidate(DefaultUser):
def __str__(self):
return str(self.name or "[Not set]") + " -> " + self.email
def get_short_name(self):
return self.name
def get_full_name(self):
return str(self)
class Meta:
verbose_name = 'Candidate'
verbose_name_plural = 'Candidates'
| 29.225 | 82 | 0.654691 | 424 | 3,507 | 5.191038 | 0.29717 | 0.059973 | 0.031804 | 0.038619 | 0.418901 | 0.345298 | 0.314403 | 0.252612 | 0.212631 | 0.173557 | 0 | 0.002258 | 0.242372 | 3,507 | 119 | 83 | 29.470588 | 0.82612 | 0.043342 | 0 | 0.283951 | 0 | 0 | 0.113213 | 0.006607 | 0 | 0 | 0 | 0 | 0 | 1 | 0.17284 | false | 0.074074 | 0.049383 | 0.135802 | 0.641975 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 3 |
fbde16f3927c21c313b52510a2af057e25f535cb | 9,548 | py | Python | mtda/client.py | hkoturap/mtda | 9cd55dfd40a2fcb14c5bd31daf86d40ca0011e43 | [
"MIT"
] | null | null | null | mtda/client.py | hkoturap/mtda | 9cd55dfd40a2fcb14c5bd31daf86d40ca0011e43 | [
"MIT"
] | null | null | null | mtda/client.py | hkoturap/mtda | 9cd55dfd40a2fcb14c5bd31daf86d40ca0011e43 | [
"MIT"
] | null | null | null | # ---------------------------------------------------------------------------
# MTDA Client
# ---------------------------------------------------------------------------
#
# This software is a part of MTDA.
# Copyright (c) Mentor, a Siemens business, 2017-2020
#
# ---------------------------------------------------------------------------
# SPDX-License-Identifier: MIT
# ---------------------------------------------------------------------------
import os
import random
import socket
import time
import zerorpc
from mtda.main import MentorTestDeviceAgent
import mtda.constants as CONSTS
class Client:
def __init__(self, host=None):
agent = MentorTestDeviceAgent()
agent.load_config(host)
if agent.remote is not None:
uri = "tcp://%s:%d" % (agent.remote, agent.ctrlport)
self._impl = zerorpc.Client(heartbeat=20, timeout=2*60)
self._impl.connect(uri)
else:
self._impl = agent
self._agent = agent
HOST = socket.gethostname()
USER = os.getenv("USER")
WORDS = "/usr/share/dict/words"
if os.path.exists(WORDS):
WORDS = open(WORDS).read().splitlines()
name = random.choice(WORDS)
if name.endswith("'s"):
name = name.replace("'s", "")
elif USER is not None and HOST is not None:
name = "%s@%s" % (USER, HOST)
else:
name = "mtda"
self._session = os.getenv('MTDA_SESSION', name)
def agent_version(self):
return self._impl.agent_version()
def console_prefix_key(self):
return self._agent.console_prefix_key()
def command(self, args):
return self._impl.command(args, self._session)
def console_clear(self):
return self._impl.console_clear(self._session)
def console_dump(self):
return self._impl.console_dump(self._session)
def console_flush(self):
return self._impl.console_flush(self._session)
def console_getkey(self):
return self._agent.console_getkey()
def console_init(self):
return self._agent.console_init()
def console_head(self):
return self._impl.console_head(self._session)
def console_lines(self):
return self._impl.console_lines(self._session)
def console_locked(self):
return self._impl.console_locked(self._session)
def console_print(self, data):
return self._impl.console_print(data, self._session)
def console_prompt(self, newPrompt=None):
return self._impl.console_prompt(newPrompt, self._session)
def console_remote(self, host):
return self._agent.console_remote(host)
def console_run(self, cmd):
return self._impl.console_run(cmd, self._session)
def console_send(self, data, raw=False):
return self._impl.console_send(data, raw, self._session)
def console_tail(self):
return self._impl.console_tail(self._session)
def env_get(self, name):
return self._impl.env_get(name, self._session)
def env_set(self, name, value):
return self._impl.env_set(name, value, self._session)
def keyboard_write(self, data):
return self._impl.keyboard_write(data, self._session)
def power_locked(self):
return self._impl.power_locked(self._session)
def storage_bytes_written(self):
return self._impl.storage_bytes_written(self._session)
def storage_close(self):
return self._impl.storage_close(self._session)
def storage_locked(self):
return self._impl.storage_locked(self._session)
def storage_mount(self, part=None):
return self._impl.storage_mount(part, self._session)
def storage_open(self):
tries = 60
while tries > 0:
tries = tries - 1
status = self._impl.storage_open(self._session)
if status is True:
return True
time.sleep(1)
return False
def storage_status(self):
return self._impl.storage_status(self._session)
def _storage_write(self, image, imgname, imgsize, callback=None):
# Copy loop
bytes_wanted = 0
data = image.read(self._agent.blksz)
dataread = len(data)
totalread = 0
while totalread < imgsize:
totalread += dataread
# Report progress via callback
if callback is not None:
callback(imgname, totalread, imgsize)
# Write block to shared storage device
bytes_wanted = self._impl.storage_write(data, self._session)
# Check what to do next
if bytes_wanted < 0:
break
elif bytes_wanted > 0:
# Read next block
data = image.read(bytes_wanted)
dataread = len(data)
else:
# Agent may continue without further data
data = b''
dataread = 0
# Close the local image
image.close()
# Wait for background writes to complete
while True:
status, writing, written = self._impl.storage_status(self._session)
if writing is False:
break
if callback is not None:
callback(imgname, totalread, imgsize)
time.sleep(0.5)
# Storage may be closed now
status = self.storage_close()
# Provide final update to specified callback
if status is True and callback is not None:
callback(imgname, totalread, imgsize)
# Make sure an error is reported if a write error was received
if bytes_wanted < 0:
status = False
return status
def storage_update(self, dest, src=None, callback=None):
path = dest if src is None else src
imgname = os.path.basename(path)
try:
st = os.stat(path)
imgsize = st.st_size
image = open(path, "rb")
except FileNotFoundError:
return False
status = self._impl.storage_update(dest, 0, self._session)
if status is False:
image.close()
return False
self._impl.storage_compression(CONSTS.IMAGE.RAW.value, self._session)
return self._storage_write(image, imgname, imgsize, callback)
def storage_write_image(self, path, callback=None):
# Get size of the (compressed) image
imgname = os.path.basename(path)
# Open the specified image
try:
st = os.stat(path)
imgsize = st.st_size
if path.endswith(".bz2"):
compression = CONSTS.IMAGE.BZ2.value
elif path.endswith(".gz"):
compression = CONSTS.IMAGE.GZ.value
elif path.endswith(".zst"):
compression = CONSTS.IMAGE.ZST.value
else:
compression = CONSTS.IMAGE.RAW.value
self._impl.storage_compression(compression, self._session)
image = open(path, "rb")
except FileNotFoundError:
return False
# Open the shared storage device
status = self.storage_open()
if status is False:
image.close()
return False
return self._storage_write(image, imgname, imgsize, callback)
def storage_to_host(self):
return self._impl.storage_to_host(self._session)
def storage_to_target(self):
return self._impl.storage_to_target(self._session)
def storage_swap(self):
return self._impl.storage_swap(self._session)
def start(self):
return self._agent.start()
def remote(self):
return self._agent.remote
def session(self):
return self._session
def target_lock(self, retries=0):
status = False
while status is False:
status = self._impl.target_lock(self._session)
if retries <= 0 or status is True:
break
retries = retries - 1
time.sleep(60)
return status
def target_locked(self):
return self._impl.target_locked(self._session)
def target_off(self):
return self._impl.target_off(self._session)
def target_on(self):
return self._impl.target_on(self._session)
def target_status(self):
return self._impl.target_status(self._session)
def target_toggle(self):
return self._impl.target_toggle(self._session)
def target_unlock(self):
return self._impl.target_unlock(self._session)
def toggle_timestamps(self):
return self._impl.toggle_timestamps()
def usb_find_by_class(self, className):
return self._impl.usb_find_by_class(className, self._session)
def usb_has_class(self, className):
return self._impl.usb_has_class(className, self._session)
def usb_off(self, ndx):
return self._impl.usb_off(ndx, self._session)
def usb_off_by_class(self, className):
return self._impl.usb_off_by_class(className, self._session)
def usb_on(self, ndx):
return self._impl.usb_on(ndx, self._session)
def usb_on_by_class(self, className):
return self._impl.usb_on_by_class(className, self._session)
def usb_ports(self):
return self._impl.usb_ports(self._session)
def usb_status(self, ndx):
return self._impl.usb_status(ndx, self._session)
def usb_toggle(self, ndx):
return self._impl.usb_toggle(ndx, self._session)
| 30.504792 | 79 | 0.601906 | 1,140 | 9,548 | 4.821053 | 0.177193 | 0.074236 | 0.10444 | 0.078603 | 0.341157 | 0.191776 | 0.134825 | 0.110444 | 0.050582 | 0.02147 | 0 | 0.004942 | 0.27943 | 9,548 | 312 | 80 | 30.602564 | 0.793895 | 0.090595 | 0 | 0.205607 | 0 | 0 | 0.008778 | 0.002426 | 0 | 0 | 0 | 0 | 0 | 1 | 0.252336 | false | 0 | 0.03271 | 0.224299 | 0.560748 | 0.009346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
8372496172d5eaf9434580da17e0cef12cfad314 | 293 | py | Python | hello.py | SaraWestWA/TwitOff_SW | 61ce9254aaffc3d180b6b85a9482ec5923f6062c | [
"MIT"
] | null | null | null | hello.py | SaraWestWA/TwitOff_SW | 61ce9254aaffc3d180b6b85a9482ec5923f6062c | [
"MIT"
] | null | null | null | hello.py | SaraWestWA/TwitOff_SW | 61ce9254aaffc3d180b6b85a9482ec5923f6062c | [
"MIT"
] | null | null | null | from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return 'Index Page'
@app.route('/hello')
def hello_world():
return 'Hello, World!'
@app.route('/new')
def new():
return 'Hello, Brave New World!'
if __name__ == "__main__":
app.run(debug=True) | 12.73913 | 36 | 0.627986 | 40 | 293 | 4.275 | 0.475 | 0.140351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194539 | 293 | 23 | 37 | 12.73913 | 0.724576 | 0 | 0 | 0 | 0 | 0 | 0.221088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.076923 | 0.230769 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
837e898a06ea09978f922db71a0f69d201906c48 | 12,327 | py | Python | autofile/data_types/name.py | lpratalimaffei/autofile | 3dae8777473272662acf68a7106d2e1bf4555a00 | [
"Apache-2.0"
] | null | null | null | autofile/data_types/name.py | lpratalimaffei/autofile | 3dae8777473272662acf68a7106d2e1bf4555a00 | [
"Apache-2.0"
] | null | null | null | autofile/data_types/name.py | lpratalimaffei/autofile | 3dae8777473272662acf68a7106d2e1bf4555a00 | [
"Apache-2.0"
] | 8 | 2019-12-24T20:23:39.000Z | 2021-01-04T19:30:37.000Z | """ file namers
"""
class Extension():
""" file extensions """
INFORMATION = '.yaml'
INPUT_LOG = '.inp'
OUTPUT_LOG = '.out'
PROJROT_LOG = '.prot'
TEMPLATE = '.temp'
SHELL_SCRIPT = '.sh'
ENERGY = '.ene'
GEOMETRY = '.xyz'
TRAJECTORY = '.t.xyz'
ZMATRIX = '.zmat'
VMATRIX = '.vmat'
TORS = '.tors'
RTORS = '.rtors'
GRADIENT = '.grad'
HESSIAN = '.hess'
CUBIC_FC = '.cubic'
QUARTIC_FC = '.quartic'
HARMONIC_ZPVE = '.hzpve'
ANHARMONIC_ZPVE = '.azpve'
HARMONIC_FREQUENCIES = '.hfrq'
ANHARMONIC_FREQUENCIES = '.afrq'
PROJECTED_FREQUENCIES = '.pfrq'
ANHARMONICITY_MATRIX = '.xmat'
VIBRO_ROT_MATRIX = '.vrmat'
CENTRIF_DIST_CONSTS = '.qcd'
LJ_EPSILON = '.eps'
LJ_SIGMA = '.sig'
EXTERNAL_SYMMETRY_FACTOR = '.esym'
INTERNAL_SYMMETRY_FACTOR = '.isym'
DIPOLE_MOMENT = '.dmom'
POLARIZABILITY = '.polar'
# Transformation files
REACTION = '.r.yaml'
# Instability Transformation files
INSTAB = '.yaml'
# Various VaReCoF files
VRC_TST = '.tst'
VRC_DIVSUR = '.divsur'
VRC_MOLP = '.molpro'
VRC_TML = '.tml'
VRC_STRUCT = '.struct'
VRC_POT = '.pot'
VRC_FLUX = '.flux'
JSON = '.json'
def information(file_name):
""" adds information extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.INFORMATION)
def input_file(file_name):
""" adds input file extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.INPUT_LOG)
def output_file(file_name):
""" adds output file extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.OUTPUT_LOG)
def instability(file_name):
""" adds instability extension, if missing
"""
return _add_extension(file_name, Extension.INSTAB)
def projrot_file(file_name):
""" adds projrot file extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.PROJROT_LOG)
def run_script(file_name):
""" adds run script extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.SHELL_SCRIPT)
def energy(file_name):
""" adds energy extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.ENERGY)
def geometry(file_name):
""" adds geometry extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.GEOMETRY)
def trajectory(file_name):
""" adds trajectory extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.TRAJECTORY)
def zmatrix(file_name):
""" adds zmatrix extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.ZMATRIX)
def vmatrix(file_name):
""" adds variable zmatrix extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VMATRIX)
def torsions(file_name):
""" adds variable torsions extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.TORS)
def ring_torsions(file_name):
""" adds variable torsions extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.RTORS)
def gradient(file_name):
""" adds gradient extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.GRADIENT)
def hessian(file_name):
""" adds hessian extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.HESSIAN)
def harmonic_zpve(file_name):
""" adds harmonic zpve extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.HARMONIC_ZPVE)
def anharmonic_zpve(file_name):
""" adds anharmonic zpve extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.ANHARMONIC_ZPVE)
def harmonic_frequencies(file_name):
""" adds harmonic frequencies extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.HARMONIC_FREQUENCIES)
def anharmonic_frequencies(file_name):
""" adds anharmonic frequencies extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.ANHARMONIC_FREQUENCIES)
def projected_frequencies(file_name):
""" adds projected frequencies extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.PROJECTED_FREQUENCIES)
def cubic_force_constants(file_name):
""" adds cubic force constants extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.CUBIC_FC)
def quartic_force_constants(file_name):
""" adds quartic force constants extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.QUARTIC_FC)
def anharmonicity_matrix(file_name):
""" adds anharmonicity maxtrix extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.ANHARMONICITY_MATRIX)
def vibro_rot_alpha_matrix(file_name):
""" adds vibro_rot_alpha maxtrix extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VIBRO_ROT_MATRIX)
def quartic_centrifugal_dist_consts(file_name):
""" adds quartic centrifugal distortion constants, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.CENTRIF_DIST_CONSTS)
def lennard_jones_epsilon(file_name):
""" adds lennard-jones epsilon extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.LJ_EPSILON)
def lennard_jones_sigma(file_name):
""" adds lennard-jones sigma extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.LJ_SIGMA)
def lennard_jones_input(file_name):
""" adds lennard-jones input file extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.INPUT_LOG)
def lennard_jones_elstruct(file_name):
""" adds lennard-jones sigma extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.TEMPLATE)
def external_symmetry_factor(file_name):
""" adds external symmetry number extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.EXTERNAL_SYMMETRY_FACTOR)
def internal_symmetry_factor(file_name):
""" adds internal symmetry number extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.INTERNAL_SYMMETRY_FACTOR)
def dipole_moment(file_name):
""" adds dipole moment extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.DIPOLE_MOMENT)
def polarizability(file_name):
""" adds dipole moment extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.POLARIZABILITY)
def reaction(file_name):
""" adds reaction extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.REACTION)
def vrctst_tst(file_name):
""" adds vrctst_tst extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_TST)
def vrctst_divsur(file_name):
""" adds vrctst_divsur extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_DIVSUR)
def vrctst_molpro(file_name):
""" adds vrctst_molpro extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_MOLP)
def vrctst_tml(file_name):
""" adds vrctst_tml extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_TML)
def vrctst_struct(file_name):
""" adds vrctst_struct extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_STRUCT)
def vrctst_pot(file_name):
""" adds vrctst_pot extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_POT)
def vrctst_flux(file_name):
""" adds vrctst_flux extension, if missing
:param file_name: name of file
:type file_name: str
:returns: file with extension added
:rtype: str
"""
return _add_extension(file_name, Extension.VRC_FLUX)
def _add_extension(file_name, ext):
if not str(file_name).endswith(ext):
file_name = '{}{}'.format(file_name, ext)
return file_name
| 24.506958 | 72 | 0.688813 | 1,602 | 12,327 | 5.074282 | 0.078652 | 0.16435 | 0.082667 | 0.103334 | 0.702423 | 0.683725 | 0.679419 | 0.679419 | 0.679419 | 0.679419 | 0 | 0 | 0.221708 | 12,327 | 502 | 73 | 24.555777 | 0.8473 | 0.479679 | 0 | 0.015625 | 0 | 0 | 0.040474 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.328125 | false | 0 | 0 | 0 | 0.984375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
838471e060a16f83d890f8cc7f90990275c19564 | 79 | py | Python | exercicios_curso_em_video/Exercicio 16.py | Sposigor/Caminho_do_Python | e84d74e9dc89c0966f931a94cb9ebe3ee4671b6d | [
"MIT"
] | 1 | 2021-01-13T18:07:46.000Z | 2021-01-13T18:07:46.000Z | exercicios_curso_em_video/Exercicio 16.py | Sposigor/Caminho_do_Python | e84d74e9dc89c0966f931a94cb9ebe3ee4671b6d | [
"MIT"
] | null | null | null | exercicios_curso_em_video/Exercicio 16.py | Sposigor/Caminho_do_Python | e84d74e9dc89c0966f931a94cb9ebe3ee4671b6d | [
"MIT"
] | null | null | null | N = float(input('Number: '))
print(f'The integer portion of the number {N} is {int(N)}')
8388818c8bceeff6a45eea8ba64a9e8b27daa733 | 959 | py | Python | src/spinnaker_ros_lsm/venv/lib/python2.7/site-packages/spinnman/messages/scp/impl/scp_read_adc_request.py | Roboy/LSM_SpiNNaker_MyoArm | 04fa1eaf78778edea3ba3afa4c527d20c491718e | [
"BSD-3-Clause"
] | 2 | 2020-11-01T13:22:11.000Z | 2020-11-01T13:22:20.000Z | src/spinnaker_ros_lsm/venv/lib/python2.7/site-packages/spinnman/messages/scp/impl/scp_read_adc_request.py | Roboy/LSM_SpiNNaker_MyoArm | 04fa1eaf78778edea3ba3afa4c527d20c491718e | [
"BSD-3-Clause"
] | null | null | null | src/spinnaker_ros_lsm/venv/lib/python2.7/site-packages/spinnman/messages/scp/impl/scp_read_adc_request.py | Roboy/LSM_SpiNNaker_MyoArm | 04fa1eaf78778edea3ba3afa4c527d20c491718e | [
"BSD-3-Clause"
] | null | null | null | """
ScpReadAdcRequest
"""
from spinnman.messages.scp.abstract_messages.abstract_scp_bmp_request import \
AbstractSCPBMPRequest
from spinnman.messages.scp.impl.scp_read_adc_response import SCPReadADCResponse
from spinnman.messages.scp.scp_command import SCPCommand
from spinnman.messages.scp.scp_request_header import SCPRequestHeader
from spinnman.messages.scp.scp_bmp_info_type import SCPBMPInfoType
class SCPReadADCRequest(AbstractSCPBMPRequest):
""" SCP Request for the data from the BMP including voltages and\
temperature.
"""
def __init__(self, board):
"""
:param board: which board to request the adc register from
:return:
"""
AbstractSCPBMPRequest.__init__(
self, board,
SCPRequestHeader(command=SCPCommand.CMD_BMP_INFO),
argument_1=SCPBMPInfoType.ADC)
def get_scp_response(self):
"""
"""
return SCPReadADCResponse()
| 29.96875 | 79 | 0.716371 | 102 | 959 | 6.480392 | 0.421569 | 0.090772 | 0.151286 | 0.173979 | 0.118003 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001318 | 0.208551 | 959 | 31 | 80 | 30.935484 | 0.869565 | 0.187696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
838906cc75735519a9d7bd46a8e6df6afa5400bc | 146 | py | Python | rainforest/apibits/api_endpoint.py | apibitsco/rainforestapp-python | 426245fdb93748d4c2a6f3f12bcce3af20320137 | [
"MIT"
] | 1 | 2016-10-25T18:47:55.000Z | 2016-10-25T18:47:55.000Z | rainforest/apibits/api_endpoint.py | apibitsco/rainforestapp-python | 426245fdb93748d4c2a6f3f12bcce3af20320137 | [
"MIT"
] | 6 | 2015-10-15T11:27:42.000Z | 2019-01-31T10:12:42.000Z | rainforest/apibits/api_endpoint.py | rainforestapp/rainforest-python | c0e5c5dba55c000ee6f92ae5fc9db5f90e415500 | [
"MIT"
] | null | null | null | class ApiEndpoint(object):
client = None
parent = None
def __init__(self, client, parent=None):
self.client = client
self.parent = parent
| 18.25 | 41 | 0.719178 | 19 | 146 | 5.315789 | 0.473684 | 0.19802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178082 | 146 | 7 | 42 | 20.857143 | 0.841667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
839613063e9eded6df6352827880762c52e31454 | 237 | py | Python | starter_kits/ml/SVM/train.py | johnnykwwang/Halite-III | dd16463f1f13d652e7172e82687136f2217bb427 | [
"MIT"
] | 2 | 2018-11-15T14:04:26.000Z | 2018-11-19T01:54:01.000Z | starter_kits/ml/SVM/train.py | johnnykwwang/Halite-III | dd16463f1f13d652e7172e82687136f2217bb427 | [
"MIT"
] | 5 | 2021-02-08T20:26:47.000Z | 2022-02-26T04:28:33.000Z | starter_kits/ml/SVM/train.py | johnnykwwang/Halite-III | dd16463f1f13d652e7172e82687136f2217bb427 | [
"MIT"
] | 1 | 2018-11-22T14:58:12.000Z | 2018-11-22T14:58:12.000Z | #!/usr/bin/env python3
import model
m = model.HaliteModel()
m.train_on_files('training', 'aggressive')
m.save(file_name='aggressive.svc')
m = model.HaliteModel()
m.train_on_files('training', 'passive')
m.save(file_name='passive.svc')
| 19.75 | 42 | 0.738397 | 36 | 237 | 4.694444 | 0.5 | 0.071006 | 0.201183 | 0.213018 | 0.449704 | 0.449704 | 0.449704 | 0.449704 | 0 | 0 | 0 | 0.004587 | 0.080169 | 237 | 11 | 43 | 21.545455 | 0.770642 | 0.088608 | 0 | 0.285714 | 0 | 0 | 0.269767 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.285714 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
83a53e469c6ba9bb86608d4d3d1a78b7573c7baf | 62 | py | Python | app/defaults/Bootstrapped_Thompson_Sampling/get_context.py | bartfrenk/streamingbandit | 4237a05b439c2c12912e813f0b76ccf8af382aef | [
"MIT"
] | 64 | 2017-05-21T06:08:57.000Z | 2022-01-25T14:44:54.000Z | app/defaults/Bootstrapped_Thompson_Sampling/get_context.py | bartfrenk/streamingbandit | 4237a05b439c2c12912e813f0b76ccf8af382aef | [
"MIT"
] | 76 | 2017-05-04T10:30:59.000Z | 2020-05-07T06:43:03.000Z | app/defaults/Bootstrapped_Thompson_Sampling/get_context.py | bartfrenk/streamingbandit | 4237a05b439c2c12912e813f0b76ccf8af382aef | [
"MIT"
] | 12 | 2017-05-04T13:10:23.000Z | 2020-02-22T17:12:49.000Z | self.context["customer"] = random.choice(["new", "returning"]) | 62 | 62 | 0.693548 | 7 | 62 | 6.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048387 | 62 | 1 | 62 | 62 | 0.728814 | 0 | 0 | 0 | 0 | 0 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
83a598cf7477bd6af446480fe6bb3abce321f289 | 279 | tac | Python | semantics_and_generate/samples/op10.tac | AHEADer/my_decaf_compiler | 42ba9f140c5fda3cd2b4fdb727745d2cfd39c923 | [
"MIT"
] | 1 | 2018-01-03T03:35:38.000Z | 2018-01-03T03:35:38.000Z | semantics_and_generate/samples/op10.tac | AHEADer/my_decaf_compiler | 42ba9f140c5fda3cd2b4fdb727745d2cfd39c923 | [
"MIT"
] | null | null | null | semantics_and_generate/samples/op10.tac | AHEADer/my_decaf_compiler | 42ba9f140c5fda3cd2b4fdb727745d2cfd39c923 | [
"MIT"
] | null | null | null | main:
BeginFunc 32 ;
_tmp0 = 5 ;
_tmp1 = 0 ;
_tmp2 = _tmp1 - _tmp0 ;
PushParam _tmp2 ;
LCall _PrintInt ;
PopParams 4 ;
_tmp3 = 9 ;
_tmp4 = 0 ;
_tmp5 = _tmp4 - _tmp3 ;
_tmp6 = 0 ;
_tmp7 = _tmp6 - _tmp5 ;
PushParam _tmp7 ;
LCall _PrintInt ;
PopParams 4 ;
EndFunc ;
| 15.5 | 24 | 0.620072 | 35 | 279 | 4.428571 | 0.571429 | 0.167742 | 0.283871 | 0.296774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124378 | 0.27957 | 279 | 17 | 25 | 16.411765 | 0.646766 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
83a5dbde21176916cb810826ae29aaf5f719e822 | 270 | py | Python | pyzshcomplete/adapters/base/__init__.py | marble/pyzshcomplete | c7896c8db3d753fb41fd1de403d9feaf2a3bae1e | [
"MIT"
] | 14 | 2020-05-23T01:52:53.000Z | 2021-09-21T16:41:01.000Z | pyzshcomplete/adapters/base/__init__.py | marble/pyzshcomplete | c7896c8db3d753fb41fd1de403d9feaf2a3bae1e | [
"MIT"
] | 35 | 2020-03-13T22:46:59.000Z | 2021-09-17T02:48:34.000Z | pyzshcomplete/adapters/base/__init__.py | marble/pyzshcomplete | c7896c8db3d753fb41fd1de403d9feaf2a3bae1e | [
"MIT"
] | 1 | 2021-09-10T09:25:23.000Z | 2021-09-10T09:25:23.000Z | '''
This package serves two purposes:
1. Define the interface for different parser adapters to implement.
2. Implement the completion generation logic based on the defined interface.
Other packages should subclass the supplied classes and implement the interface.
'''
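A hedged sketch of the pattern this docstring describes (the class and method names below are illustrative, not the package's real API): an abstract base class fixes the interface adapters must implement, and the completion-generation logic is written once against that interface.

```python
from abc import ABC, abstractmethod


class ParserAdapter(ABC):
    """Illustrative base class: subclasses adapt a concrete parser."""

    @abstractmethod
    def arguments(self):
        """Return the option strings known to the wrapped parser."""

    def completion_arguments(self):
        # Shared completion-generation logic, built only on the interface.
        return ' '.join(sorted(self.arguments()))


class ListAdapter(ParserAdapter):
    """Trivial adapter over a plain list of option strings."""

    def __init__(self, options):
        self._options = options

    def arguments(self):
        return self._options


print(ListAdapter(['--force', '-f']).completion_arguments())  # --force -f
```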
| 30 | 80 | 0.803704 | 37 | 270 | 5.864865 | 0.783784 | 0.110599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008734 | 0.151852 | 270 | 8 | 81 | 33.75 | 0.938865 | 0.966667 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
83e059243d00d20cac7b116e62ce3fb5e37f9565 | 3,179 | py | Python | integration-test/1813-landcover-landuse-zoom-9.py | rinnyB/vector-datasource | 024909ed8245a4ad4a25c908413ba3602de6c335 | [
"MIT"
] | null | null | null | integration-test/1813-landcover-landuse-zoom-9.py | rinnyB/vector-datasource | 024909ed8245a4ad4a25c908413ba3602de6c335 | [
"MIT"
] | 2 | 2021-03-31T20:22:37.000Z | 2021-12-13T20:50:11.000Z | integration-test/1813-landcover-landuse-zoom-9.py | rinnyB/vector-datasource | 024909ed8245a4ad4a25c908413ba3602de6c335 | [
"MIT"
] | null | null | null | # -*- encoding: utf-8 -*-
from . import FixtureTest
class LandcoverTest(FixtureTest):
def _starts_at(self, zoom, props):
import dsl
all_props = {'source': 'openstreetmap.org'}
all_props.update(props)
# biggest polygon possible - covering the whole world - should be
# visible at this zoom, and have a min zoom of this zoom.
world = dsl.tile_box(0, 0, 0)
self.generate_fixtures(dsl.way(1, world, all_props))
self.assert_has_feature(
zoom, 0, 0, 'landuse', {
'min_zoom': zoom,
})
def test_landuse_farmland(self):
# farmland (and farm), modify the min to be 9 instead of 10.
self._starts_at(9, {'landuse': 'farmland'})
self._starts_at(9, {'landuse': 'farm'})
def test_landuse_orchard(self):
# starts at 10 now, should be 9?
self._starts_at(9, {'landuse': 'orchard'})
def test_landuse_forest(self):
# do we need a custom `tier2_min_zoom` to show all at zoom 9?
self._starts_at(9, {'landuse': 'forest'})
def test_landuse_residential(self):
# okay to start at 10, because of NE overlap
self._starts_at(10, {'landuse': 'residential'})
def test_landuse_commercial(self):
# okay to start at 10, because of NE overlap
self._starts_at(10, {'landuse': 'commercial'})
def test_landuse_retail(self):
# okay to start at 10, because of NE overlap
self._starts_at(10, {'landuse': 'retail'})
def test_landuse_industrial(self):
# okay to start at 10, because of NE overlap
self._starts_at(10, {'landuse': 'industrial'})
def test_landuse_meadow(self):
# starts at 9, but throttled
self._starts_at(9, {'landuse': 'meadow'})
def test_landuse_vineyard(self):
# starts at 9, but throttled
self._starts_at(9, {'landuse': 'vineyard'})
def test_natural_wood(self):
# do we need a custom `tier2_min_zoom` to show all at zoom 9?
self._starts_at(9, {'natural': 'wood'})
def test_natural_sand(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'sand'})
def test_natural_scree(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'scree'})
def test_natural_shingle(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'shingle'})
def test_natural_bare_rock(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'bare_rock'})
def test_natural_heath(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'heath'})
def test_natural_grassland(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'grassland'})
def test_natural_scrub(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'scrub'})
def test_natural_wetland(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'wetland'})
def test_natural_mud(self):
# starts at 9, but throttled
self._starts_at(9, {'natural': 'mud'})
| 32.111111 | 73 | 0.614973 | 429 | 3,179 | 4.337995 | 0.207459 | 0.141859 | 0.206341 | 0.188608 | 0.495433 | 0.470177 | 0.455132 | 0.455132 | 0.455132 | 0.455132 | 0 | 0.025608 | 0.262976 | 3,179 | 98 | 74 | 32.438776 | 0.768673 | 0.258572 | 0 | 0 | 0 | 0 | 0.133676 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 1 | 0.392157 | false | 0 | 0.039216 | 0 | 0.45098 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
83e7094dabea9b50e718c6aebdb793ee58a18515 | 10,839 | py | Python | docs/build/Pygments/tests/examplefiles/output/linecontinuation.py | mjtamlyn/django-braces | 8adc9bc4f5139e3d032d4e38657bf86413388b78 | [
"BSD-3-Clause"
] | 1 | 2015-03-22T16:49:07.000Z | 2015-03-22T16:49:07.000Z | docs/build/Pygments/tests/examplefiles/output/linecontinuation.py | mjtamlyn/django-braces | 8adc9bc4f5139e3d032d4e38657bf86413388b78 | [
"BSD-3-Clause"
] | null | null | null | docs/build/Pygments/tests/examplefiles/output/linecontinuation.py | mjtamlyn/django-braces | 8adc9bc4f5139e3d032d4e38657bf86413388b78 | [
"BSD-3-Clause"
] | null | null | null | (lp1
(ccopy_reg
_reconstructor
p2
(cpygments.token
_TokenType
p3
c__builtin__
tuple
p4
(S'Name'
p5
ttRp6
(dp7
S'Function'
p8
g2
(g3
g4
(g5
g8
ttRp9
(dp10
S'subtypes'
p11
c__builtin__
set
p12
((ltRp13
sS'parent'
p14
g6
sbsS'Exception'
p15
g2
(g3
g4
(g5
g15
ttRp16
(dp17
g11
g12
((ltRp18
sg14
g6
sbsS'Tag'
p19
g2
(g3
g4
(g5
g19
ttRp20
(dp21
g11
g12
((ltRp22
sg14
g6
sbsS'Constant'
p23
g2
(g3
g4
(g5
g23
ttRp24
(dp25
g11
g12
((ltRp26
sg14
g6
sbsg14
g2
(g3
g4
(ttRp27
(dp28
S'Comment'
p29
g2
(g3
g4
(g29
ttRp30
(dp31
g14
g27
sS'Preproc'
p32
g2
(g3
g4
(g29
g32
ttRp33
(dp34
g11
g12
((ltRp35
sg14
g30
sbsS'Single'
p36
g2
(g3
g4
(g29
g36
ttRp37
(dp38
g11
g12
((ltRp39
sg14
g30
sbsS'Multiline'
p40
g2
(g3
g4
(g29
g40
ttRp41
(dp42
g11
g12
((ltRp43
sg14
g30
sbsg11
g12
((lp44
g2
(g3
g4
(g29
S'Special'
p45
ttRp46
(dp47
g11
g12
((ltRp48
sg14
g30
sbag33
ag37
ag41
atRp49
sg45
g46
sbsg5
g6
sS'Keyword'
p50
g2
(g3
g4
(g50
ttRp51
(dp52
g23
g2
(g3
g4
(g50
g23
ttRp53
(dp54
g11
g12
((ltRp55
sg14
g51
sbsg14
g27
sS'Namespace'
p56
g2
(g3
g4
(g50
g56
ttRp57
(dp58
g11
g12
((ltRp59
sg14
g51
sbsS'Pseudo'
p60
g2
(g3
g4
(g50
g60
ttRp61
(dp62
g11
g12
((ltRp63
sg14
g51
sbsS'Reserved'
p64
g2
(g3
g4
(g50
g64
ttRp65
(dp66
g11
g12
((ltRp67
sg14
g51
sbsS'Declaration'
p68
g2
(g3
g4
(g50
g68
ttRp69
(dp70
g11
g12
((ltRp71
sg14
g51
sbsS'Variable'
p72
g2
(g3
g4
(g50
g72
ttRp73
(dp74
g11
g12
((ltRp75
sg14
g51
sbsg11
g12
((lp76
g53
ag65
ag2
(g3
g4
(g50
S'Type'
p77
ttRp78
(dp79
g11
g12
((ltRp80
sg14
g51
sbag69
ag73
ag57
ag61
atRp81
sg77
g78
sbsS'Generic'
p82
g2
(g3
g4
(g82
ttRp83
(dp84
S'Prompt'
p85
g2
(g3
g4
(g82
g85
ttRp86
(dp87
g11
g12
((ltRp88
sg14
g83
sbsg14
g27
sS'Deleted'
p89
g2
(g3
g4
(g82
g89
ttRp90
(dp91
g11
g12
((ltRp92
sg14
g83
sbsS'Traceback'
p93
g2
(g3
g4
(g82
g93
ttRp94
(dp95
g11
g12
((ltRp96
sg14
g83
sbsS'Emph'
p97
g2
(g3
g4
(g82
g97
ttRp98
(dp99
g11
g12
((ltRp100
sg14
g83
sbsS'Output'
p101
g2
(g3
g4
(g82
g101
ttRp102
(dp103
g11
g12
((ltRp104
sg14
g83
sbsS'Subheading'
p105
g2
(g3
g4
(g82
g105
ttRp106
(dp107
g11
g12
((ltRp108
sg14
g83
sbsS'Error'
p109
g2
(g3
g4
(g82
g109
ttRp110
(dp111
g11
g12
((ltRp112
sg14
g83
sbsg11
g12
((lp113
g102
ag98
ag110
ag106
ag94
ag90
ag2
(g3
g4
(g82
S'Heading'
p114
ttRp115
(dp116
g11
g12
((ltRp117
sg14
g83
sbag2
(g3
g4
(g82
S'Inserted'
p118
ttRp119
(dp120
g11
g12
((ltRp121
sg14
g83
sbag2
(g3
g4
(g82
S'Strong'
p122
ttRp123
(dp124
g11
g12
((ltRp125
sg14
g83
sbag86
atRp126
sg122
g123
sg118
g119
sg114
g115
sbsS'Text'
p127
g2
(g3
g4
(g127
ttRp128
(dp129
g11
g12
((lp130
g2
(g3
g4
(g127
S'Symbol'
p131
ttRp132
(dp133
g11
g12
((ltRp134
sg14
g128
sbag2
(g3
g4
(g127
S'Whitespace'
p135
ttRp136
(dp137
g11
g12
((ltRp138
sg14
g128
sbatRp139
sg131
g132
sg135
g136
sg14
g27
sbsS'Punctuation'
p140
g2
(g3
g4
(g140
ttRp141
(dp142
g11
g12
((lp143
g2
(g3
g4
(g140
S'Indicator'
p144
ttRp145
(dp146
g11
g12
((ltRp147
sg14
g141
sbatRp148
sg144
g145
sg14
g27
sbsS'Token'
p149
g27
sS'Number'
p150
g2
(g3
g4
(S'Literal'
p151
g150
ttRp152
(dp153
S'Bin'
p154
g2
(g3
g4
(g151
g150
g154
ttRp155
(dp156
g11
g12
((ltRp157
sg14
g152
sbsS'Binary'
p158
g2
(g3
g4
(g151
g150
g158
ttRp159
(dp160
g11
g12
((ltRp161
sg14
g152
sbsg14
g2
(g3
g4
(g151
ttRp162
(dp163
S'String'
p164
g2
(g3
g4
(g151
g164
ttRp165
(dp166
S'Regex'
p167
g2
(g3
g4
(g151
g164
g167
ttRp168
(dp169
g11
g12
((ltRp170
sg14
g165
sbsS'Interpol'
p171
g2
(g3
g4
(g151
g164
g171
ttRp172
(dp173
g11
g12
((ltRp174
sg14
g165
sbsS'Regexp'
p175
g2
(g3
g4
(g151
g164
g175
ttRp176
(dp177
g11
g12
((ltRp178
sg14
g165
sbsg14
g162
sS'Heredoc'
p179
g2
(g3
g4
(g151
g164
g179
ttRp180
(dp181
g11
g12
((ltRp182
sg14
g165
sbsS'Double'
p183
g2
(g3
g4
(g151
g164
g183
ttRp184
(dp185
g11
g12
((ltRp186
sg14
g165
sbsg131
g2
(g3
g4
(g151
g164
g131
ttRp187
(dp188
g11
g12
((ltRp189
sg14
g165
sbsS'Escape'
p190
g2
(g3
g4
(g151
g164
g190
ttRp191
(dp192
g11
g12
((ltRp193
sg14
g165
sbsS'Character'
p194
g2
(g3
g4
(g151
g164
g194
ttRp195
(dp196
g11
g12
((ltRp197
sg14
g165
sbsS'Interp'
p198
g2
(g3
g4
(g151
g164
g198
ttRp199
(dp200
g11
g12
((ltRp201
sg14
g165
sbsS'Backtick'
p202
g2
(g3
g4
(g151
g164
g202
ttRp203
(dp204
g11
g12
((ltRp205
sg14
g165
sbsS'Char'
p206
g2
(g3
g4
(g151
g164
g206
ttRp207
(dp208
g11
g12
((ltRp209
sg14
g165
sbsg36
g2
(g3
g4
(g151
g164
g36
ttRp210
(dp211
g11
g12
((ltRp212
sg14
g165
sbsS'Other'
p213
g2
(g3
g4
(g151
g164
g213
ttRp214
(dp215
g11
g12
((ltRp216
sg14
g165
sbsS'Doc'
p217
g2
(g3
g4
(g151
g164
g217
ttRp218
(dp219
g11
g12
((ltRp220
sg14
g165
sbsg11
g12
((lp221
g214
ag2
(g3
g4
(g151
g164
S'Atom'
p222
ttRp223
(dp224
g11
g12
((ltRp225
sg14
g165
sbag184
ag207
ag199
ag218
ag180
ag203
ag172
ag187
ag176
ag168
ag210
ag195
ag191
atRp226
sg222
g223
sbsg14
g27
sg150
g152
sS'Scalar'
p227
g2
(g3
g4
(g151
g227
ttRp228
(dp229
g11
g12
((lp230
g2
(g3
g4
(g151
g227
S'Plain'
p231
ttRp232
(dp233
g11
g12
((ltRp234
sg14
g228
sbatRp235
sg14
g162
sg231
g232
sbsg213
g2
(g3
g4
(g151
g213
ttRp236
(dp237
g11
g12
((ltRp238
sg14
g162
sbsS'Date'
p239
g2
(g3
g4
(g151
g239
ttRp240
(dp241
g11
g12
((ltRp242
sg14
g162
sbsg11
g12
((lp243
g240
ag165
ag236
ag152
ag228
atRp244
sbsS'Decimal'
p245
g2
(g3
g4
(g151
g150
g245
ttRp246
(dp247
g11
g12
((ltRp248
sg14
g152
sbsS'Float'
p249
g2
(g3
g4
(g151
g150
g249
ttRp250
(dp251
g11
g12
((ltRp252
sg14
g152
sbsS'Hex'
p253
g2
(g3
g4
(g151
g150
g253
ttRp254
(dp255
g11
g12
((ltRp256
sg14
g152
sbsS'Integer'
p257
g2
(g3
g4
(g151
g150
g257
ttRp258
(dp259
g11
g12
((lp260
g2
(g3
g4
(g151
g150
g257
S'Long'
p261
ttRp262
(dp263
g11
g12
((ltRp264
sg14
g258
sbatRp265
sg261
g262
sg14
g152
sbsS'Octal'
p266
g2
(g3
g4
(g151
g150
g266
ttRp267
(dp268
g11
g12
((ltRp269
sg14
g152
sbsg11
g12
((lp270
g155
ag159
ag267
ag246
ag2
(g3
g4
(g151
g150
S'Oct'
p271
ttRp272
(dp273
g11
g12
((ltRp274
sg14
g152
sbag258
ag250
ag254
atRp275
sg271
g272
sbsg151
g162
sg213
g2
(g3
g4
(g213
ttRp276
(dp277
g11
g12
((ltRp278
sg14
g27
sbsg109
g2
(g3
g4
(g109
ttRp279
(dp280
g11
g12
((ltRp281
sg14
g27
sbsS'Operator'
p282
g2
(g3
g4
(g282
ttRp283
(dp284
g11
g12
((lp285
g2
(g3
g4
(g282
S'Word'
p286
ttRp287
(dp288
g11
g12
((ltRp289
sg14
g283
sbatRp290
sg286
g287
sg14
g27
sbsg11
g12
((lp291
g30
ag279
ag83
ag128
ag6
ag141
ag51
ag162
ag283
ag276
atRp292
sg164
g165
sbsg60
g2
(g3
g4
(g5
g60
ttRp293
(dp294
g11
g12
((ltRp295
sg14
g6
sbsS'Attribute'
p296
g2
(g3
g4
(g5
g296
ttRp297
(dp298
g11
g12
((ltRp299
sg14
g6
sbsS'Label'
p300
g2
(g3
g4
(g5
g300
ttRp301
(dp302
g11
g12
((ltRp303
sg14
g6
sbsS'Blubb'
p304
g2
(g3
g4
(g5
g304
ttRp305
(dp306
g11
g12
((ltRp307
sg14
g6
sbsS'Entity'
p308
g2
(g3
g4
(g5
g308
ttRp309
(dp310
g11
g12
((ltRp311
sg14
g6
sbsS'Builtin'
p312
g2
(g3
g4
(g5
g312
ttRp313
(dp314
g11
g12
((lp315
g2
(g3
g4
(g5
g312
g60
ttRp316
(dp317
g11
g12
((ltRp318
sg14
g313
sbatRp319
sg60
g316
sg14
g6
sbsg213
g2
(g3
g4
(g5
g213
ttRp320
(dp321
g11
g12
((ltRp322
sg14
g6
sbsS'Identifier'
p323
g2
(g3
g4
(g5
g323
ttRp324
(dp325
g11
g12
((ltRp326
sg14
g6
sbsg72
g2
(g3
g4
(g5
g72
ttRp327
(dp328
g14
g6
sS'Global'
p329
g2
(g3
g4
(g5
g72
g329
ttRp330
(dp331
g11
g12
((ltRp332
sg14
g327
sbsS'Instance'
p333
g2
(g3
g4
(g5
g72
g333
ttRp334
(dp335
g11
g12
((ltRp336
sg14
g327
sbsS'Anonymous'
p337
g2
(g3
g4
(g5
g72
g337
ttRp338
(dp339
g11
g12
((ltRp340
sg14
g327
sbsg11
g12
((lp341
g338
ag334
ag330
ag2
(g3
g4
(g5
g72
S'Class'
p342
ttRp343
(dp344
g11
g12
((ltRp345
sg14
g327
sbatRp346
sg342
g343
sbsg11
g12
((lp347
g2
(g3
g4
(g5
S'Decorator'
p348
ttRp349
(dp350
g11
g12
((ltRp351
sg14
g6
sbag297
ag24
ag293
ag2
(g3
g4
(g5
g56
ttRp352
(dp353
g11
g12
((ltRp354
sg14
g6
sbag324
ag313
ag327
ag320
ag305
ag309
ag9
ag2
(g3
g4
(g5
S'Property'
p355
ttRp356
(dp357
g11
g12
((ltRp358
sg14
g6
sbag301
ag20
ag16
ag2
(g3
g4
(g5
g342
ttRp359
(dp360
g11
g12
((ltRp361
sg14
g6
sbatRp362
sg355
g356
sg342
g359
sg348
g349
sg56
g352
sbVapple
p363
tp364
a(g283
V.
tp365
a(g6
Vfilter
p366
tp367
a(g141
V(
tp368
a(g6
Vx
tp369
a(g141
V,
tp370
a(g128
V
tp371
a(g6
Vy
tp372
a(g141
V)
tp373
a(g128
V\u000a
tp374
a(g6
Vapple
p375
tp376
a(g283
V.
tp377
a(g128
V\u005c\u000a
p378
tp379
a(g128
V
p380
tp381
a(g313
Vfilter
p382
tp383
a(g141
V(
tp384
a(g6
Vx
tp385
a(g141
V,
tp386
a(g128
V
tp387
a(g6
Vy
tp388
a(g141
V)
tp389
a(g128
V\u000a
tp390
a(g128
V\u000a
tp391
a(g258
V1
tp392
a(g128
V
tp393
a(g128
V\u005c\u000a
p394
tp395
a(g128
V
p396
tp397
a(g283
V.
tp398
a(g128
V
tp399
a(g128
V\u005c\u000a
p400
tp401
a(g128
V
p402
tp403
a(g6
V__str__
p404
tp405
a(g128
V\u000a
tp406
a(g128
V\u000a
tp407
a(g57
Vfrom
p408
tp409
a(g128
V
tp410
a(g352
Vos
p411
tp412
a(g128
V
tp413
a(g57
Vimport
p414
tp415
a(g128
V
tp416
a(g6
Vpath
p417
tp418
a(g128
V\u000a
tp419
a(g57
Vfrom
p420
tp421
a(g128
V \u005c\u000a
p422
tp423
a(g352
Vos
p424
tp425
a(g128
V \u005c\u000a
p426
tp427
a(g57
Vimport
p428
tp429
a(g128
V
tp430
a(g128
V\u005c\u000a
p431
tp432
a(g128
V
p433
tp434
a(g6
Vpath
p435
tp436
a(g128
V\u000a
tp437
a(g128
V\u000a
tp438
a(g57
Vimport
p439
tp440
a(g128
V
tp441
a(g352
Vos.path
p442
tp443
a(g128
V
tp444
a(g57
Vas
p445
tp446
a(g128
V
tp447
a(g352
Vsomething
p448
tp449
a(g128
V
tp450
a(g128
V\u000a
tp451
a(g128
V\u000a
tp452
a(g57
Vimport
p453
tp454
a(g128
V \u005c\u000a
p455
tp456
a(g352
Vos.path
p457
tp458
a(g128
V \u005c\u000a
p459
tp460
a(g57
Vas
p461
tp462
a(g128
V \u005c\u000a
p463
tp464
a(g352
Vsomething
p465
tp466
a(g128
V
tp467
a(g128
V\u000a
tp468
a(g128
V\u000a
tp469
a(g51
Vclass
p470
tp471
a(g128
V \u005c\u000a
p472
tp473
a(g359
VSpam
p474
tp475
a(g141
V:
tp476
a(g128
V\u000a
tp477
a(g128
V
p478
tp479
a(g51
Vpass
p480
tp481
a(g128
V\u000a
tp482
a(g128
V\u000a
tp483
a(g51
Vclass
p484
tp485
a(g128
V
tp486
a(g359
VSpam
p487
tp488
a(g141
V:
tp489
a(g128
V
tp490
a(g51
Vpass
p491
tp492
a(g128
V\u000a
tp493
a(g128
V\u000a
tp494
a(g51
Vclass
p495
tp496
a(g128
V
tp497
a(g359
VSpam
p498
tp499
a(g141
V(
tp500
a(g313
Vobject
p501
tp502
a(g141
V)
tp503
a(g141
V:
tp504
a(g128
V\u000a
tp505
a(g128
V
p506
tp507
a(g51
Vpass
p508
tp509
a(g128
V\u000a
tp510
a(g128
V\u000a
tp511
a(g51
Vclass
p512
tp513
a(g128
V \u005c\u000a
p514
tp515
a(g359
VSpam
p516
tp517
a(g128
V
tp518
a(g128
V\u005c\u000a
p519
tp520
a(g128
V
p521
tp522
a(g141
V(
tp523
a(g128
V\u000a
tp524
a(g128
V
p525
tp526
a(g313
Vobject
p527
tp528
a(g128
V\u000a
tp529
a(g128
V
tp530
a(g141
V)
tp531
a(g128
V
tp532
a(g128
V\u005c\u000a
p533
tp534
a(g128
V
tp535
a(g141
V:
tp536
a(g128
V\u000a
tp537
a(g128
V
tp538
a(g51
Vpass
p539
tp540
a(g128
V\u000a
tp541
a(g128
V\u000a
tp542
a(g128
V\u000a
tp543
a(g51
Vdef
p544
tp545
a(g128
V \u005c\u000a
p546
tp547
a(g9
Vspam
p548
tp549
a(g128
V
tp550
a(g128
V\u005c\u000a
p551
tp552
a(g128
V
p553
tp554
a(g141
V(
tp555
a(g128
V
tp556
a(g128
V\u005c\u000a
p557
tp558
a(g128
V
p559
tp560
a(g141
V)
tp561
a(g128
V
tp562
a(g128
V\u005c\u000a
p563
tp564
a(g128
V
p565
tp566
a(g141
V:
tp567
a(g128
V
tp568
a(g128
V\u005c\u000a
p569
tp570
a(g128
V
p571
tp572
a(g51
Vpass
p573
tp574
a(g128
V\u000a
tp575
a. | 5.833692 | 22 | 0.741581 | 2,150 | 10,839 | 3.731628 | 0.414419 | 0.043375 | 0.061324 | 0.0349 | 0.09398 | 0.009473 | 0.004986 | 0 | 0 | 0 | 0 | 0.444683 | 0.181936 | 10,839 | 1,858 | 23 | 5.833692 | 0.460133 | 0 | 0 | 0.533369 | 0 | 0 | 0.046863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002691 | 0.002153 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
83f970444a450fdb66171abff201d31113587318 | 499 | py | Python | password_policies/tests/views.py | myecoach/django-password-policies-iplweb | 7127424b31facc4b8df6dade7a9b938351a688a9 | [
"BSD-3-Clause"
] | 37 | 2015-02-10T21:39:58.000Z | 2021-02-25T01:26:43.000Z | password_policies/tests/views.py | myecoach/django-password-policies-iplweb | 7127424b31facc4b8df6dade7a9b938351a688a9 | [
"BSD-3-Clause"
] | 39 | 2015-01-24T14:24:24.000Z | 2021-01-02T19:29:33.000Z | password_policies/tests/views.py | myecoach/django-password-policies-iplweb | 7127424b31facc4b8df6dade7a9b938351a688a9 | [
"BSD-3-Clause"
] | 66 | 2015-01-21T18:56:26.000Z | 2021-07-19T04:09:53.000Z | from django.http import HttpResponse
from django.views.generic.base import View
from django.contrib.auth.decorators import login_required
from django.utils.decorators import method_decorator
class TestHomeView(View):
def get(self, request):
return HttpResponse('<html><head><title>Home</title></head><body><p>Welcome!</p></body></html>')
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super(TestHomeView, self).dispatch(*args, **kwargs)
| 35.642857 | 104 | 0.739479 | 64 | 499 | 5.703125 | 0.546875 | 0.109589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126253 | 499 | 13 | 105 | 38.384615 | 0.837156 | 0 | 0 | 0 | 0 | 0.1 | 0.146293 | 0.146293 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 3 |
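The `TestHomeView` above guards `dispatch` with Django's `method_decorator(login_required)`. A framework-free sketch of the same pattern (the dict standing in for the request, the `'user'` key, and the response strings are all made up; `method_decorator` exists in Django precisely because a plain function decorator does not account for `self`, which the wrapper here handles explicitly):

```python
# Framework-free sketch of guarding a class-based view's dispatch with a
# login check. The wrapper accepts `self` explicitly, which is the job
# Django's method_decorator performs for ordinary function decorators.
def login_required(func):
    def wrapper(self, request, *args, **kwargs):
        # A plain dict stands in for the request; 'user' marks it authenticated.
        if not request.get('user'):
            return '302 Found: /login'
        return func(self, request, *args, **kwargs)
    return wrapper

class HomeView:
    @login_required
    def dispatch(self, request):
        return self.get(request)

    def get(self, request):
        return '200 OK: Welcome!'

print(HomeView().dispatch({'user': 'alice'}))  # 200 OK: Welcome!
print(HomeView().dispatch({}))                 # 302 Found: /login
```

Decorating `dispatch` rather than `get` keeps the check in one place no matter which HTTP method the view ends up handling.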
83fff08f846096a24b5bba244ce1c6f0246bb3f8 | 212 | py | Python | doc/cellprofiler/RunImageJMacroWorkflow/test_ball_macro.py | karlduderstadt/pyimagej | 4585f42cdbf427e43b11bdcb5b39d850d8ba5226 | [
"Apache-2.0"
] | 27 | 2018-04-08T00:37:33.000Z | 2018-11-21T19:43:44.000Z | doc/cellprofiler/RunImageJMacroWorkflow/test_ball_macro.py | karlduderstadt/pyimagej | 4585f42cdbf427e43b11bdcb5b39d850d8ba5226 | [
"Apache-2.0"
] | 13 | 2018-06-07T15:40:39.000Z | 2018-11-27T15:18:08.000Z | doc/cellprofiler/RunImageJMacroWorkflow/test_ball_macro.py | karlduderstadt/pyimagej | 4585f42cdbf427e43b11bdcb5b39d850d8ba5226 | [
"Apache-2.0"
] | 8 | 2018-03-15T22:33:51.000Z | 2018-10-30T13:49:55.000Z | #@String directory
from ij import IJ
import os
im = IJ.open(os.path.join(directory, 'dummy.tiff'))
IJ.run(im, "Subtract Background...", "rolling=50")
IJ.saveAs(im,'tiff',os.path.join(directory,'backsub.tiff'))
| 23.555556 | 59 | 0.712264 | 34 | 212 | 4.441176 | 0.558824 | 0.10596 | 0.13245 | 0.251656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010363 | 0.089623 | 212 | 8 | 60 | 26.5 | 0.772021 | 0.080189 | 0 | 0 | 0 | 0 | 0.298969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
f7aed22c34587ace14e645533078a00fd5fbb42a | 638 | py | Python | generic_request_signer/backend.py | imtapps/generic-request-signer | 34c18856ffda6305bd4cd931bd20365bf161d1de | [
"BSD-2-Clause"
] | null | null | null | generic_request_signer/backend.py | imtapps/generic-request-signer | 34c18856ffda6305bd4cd931bd20365bf161d1de | [
"BSD-2-Clause"
] | 6 | 2016-01-07T19:16:43.000Z | 2022-02-09T19:22:29.000Z | generic_request_signer/backend.py | imtapps/generic-request-signer | 34c18856ffda6305bd4cd931bd20365bf161d1de | [
"BSD-2-Clause"
] | 5 | 2016-01-04T15:06:34.000Z | 2016-08-22T17:28:18.000Z | class BasicSettingsApiCredentialsBackend(object):
CLIENT_ERROR_MESSAGE = "Client implementations must define a `{0}` attribute"
CLIENT_SETTINGS_ERROR_MESSAGE = "Settings must contain a `{0}` attribute"
def __init__(self, client):
self.client = client
@property
def base_url(self):
return self.get_setting('domain_settings_name')
@property
def client_id(self):
return self.get_setting('client_id_settings_name')
@property
def private_key(self):
return self.get_setting('private_key_settings_name')
def get_setting(self, name):
raise NotImplementedError
| 27.73913 | 81 | 0.710031 | 75 | 638 | 5.72 | 0.4 | 0.09324 | 0.097902 | 0.118881 | 0.167832 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003945 | 0.205329 | 638 | 22 | 82 | 29 | 0.842209 | 0 | 0 | 0.1875 | 0 | 0 | 0.249216 | 0.075235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0 | 0 | 0.1875 | 0.6875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
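`BasicSettingsApiCredentialsBackend` above leaves `get_setting` abstract. A minimal sketch of a concrete subclass (the dict-backed settings and the URL value are hypothetical; the real backend presumably resolves names like `domain_settings_name` against a project's settings module):

```python
# Trimmed copy of the backend pattern plus a hypothetical subclass that
# resolves setting names from a plain dict instead of a settings module.
class BasicSettingsApiCredentialsBackend(object):
    CLIENT_SETTINGS_ERROR_MESSAGE = "Settings must contain a `{0}` attribute"

    def __init__(self, client):
        self.client = client

    @property
    def base_url(self):
        return self.get_setting('domain_settings_name')

    def get_setting(self, name):
        raise NotImplementedError


class DictSettingsBackend(BasicSettingsApiCredentialsBackend):
    SETTINGS = {'domain_settings_name': 'https://api.example.com'}

    def get_setting(self, name):
        try:
            return self.SETTINGS[name]
        except KeyError:
            raise AttributeError(self.CLIENT_SETTINGS_ERROR_MESSAGE.format(name))


backend = DictSettingsBackend(client=None)
print(backend.base_url)  # https://api.example.com
```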
f7b61528710fc037120f500c92118d78c6a1e5fb | 5,755 | py | Python | Urbanization/GOST_Urban/Urban/testingGEE.py | ramarty/GOST_PublicGoods | de9cf36e37208eaf69253e784833990ceeb1058a | [
"MIT"
] | 40 | 2018-03-13T13:47:36.000Z | 2022-02-17T13:17:32.000Z | Urbanization/GOST_Urban/Urban/testingGEE.py | ramarty/GOST_PublicGoods | de9cf36e37208eaf69253e784833990ceeb1058a | [
"MIT"
] | 11 | 2018-07-30T20:17:13.000Z | 2020-08-13T20:30:20.000Z | Urbanization/GOST_Urban/Urban/testingGEE.py | ramarty/GOST_PublicGoods | de9cf36e37208eaf69253e784833990ceeb1058a | [
"MIT"
] | 14 | 2018-08-14T07:47:56.000Z | 2021-11-23T11:07:56.000Z | ###TESTING GEE
import os, sys, inspect, argparse, logging, shutil, re, geojson
import ee
ee.Initialize() #Initialize the ee object, but this requires work the first time through
# https://developers.google.com/earth-engine/python_install
import geopandas as gpd
import pandas as pd
try:
from gbdxtools import Interface
from shapely.geometry import Point, LineString
from shapely.wkt import loads
except:
pass
#Import a number of existing GOST functions
GOSTRocks_folder = os.path.dirname(os.path.dirname(os.path.realpath(os.path.abspath(os.path.split(inspect.getfile( inspect.currentframe() ))[0]))))
GOST_GBDx_folder = os.path.join(os.path.dirname(GOSTRocks_folder), "GOST_GBDx")
GOST_Public_Goods = os.path.join(os.path.dirname(GOSTRocks_folder), "GOST_PublicGoods")
if GOSTRocks_folder not in sys.path:
sys.path.insert(0, GOSTRocks_folder)
if GOST_GBDx_folder not in sys.path:
sys.path.insert(0, GOST_GBDx_folder)
if GOST_Public_Goods not in sys.path:
sys.path.insert(0, GOST_Public_Goods)
from GOSTRocks import misc
try:
from Market_Access import OD
from Market_Access import OSMNX_POIs
from GOSTRocks import osmMisc
from GOSTRocks import GEE_zonalstats as gee
from GOST_GBDx_Tools import gbdxTasks
from GOST_GBDx_Tools import gbdxURL_misc
from GOST_GBDx_Tools import imagery_search
except:
pass
try:
import arcpy
from GOSTRocks.Urban.SummarizeGHSL import *
from GOSTRocks.arcpyMisc import createMapFromMXD
ARCPY_LOADED = True
except:
pass
logging.basicConfig(level=logging.INFO)
inputShapefile = r"Q:\WORKINGPROJECTS\Indonesia_GBDx\BalikPapan_AOI\DEBUG\BalikPapan_AOI_GRID_WGS84.shp"
baiImages = []
baiImages.append(ee.Image("LANDSAT/LC8_L1T_ANNUAL_BAI/%s" % 2017))
baiImages.append(ee.Image("LANDSAT/LE7_L1T_ANNUAL_BAI/%s" % 2010))
baiImages.append(ee.Image("LANDSAT/LE7_L1T_ANNUAL_BAI/%s" % 2000))
xx = gee.zonalStats(inputShapefile, baiImages, inputShapefile.replace(".shp", ".csv"))
print(xx)
'''
inputImages = baiImages
inD = gpd.read_file(inputShapefile)
allFeatures = []
idx = 0
subIdx = 0
vCount = 0
scale=50
#Loop through the features in the shapefile
inShapes = inD['geometry']
shp = inShapes[1]
for shp in inShapes[0:3]:
idx = idx + 1
subIdx = subIdx + 1
shpJSON = geojson.Feature(geometry=shp, properties={})
outputAttributes = {'index': subIdx}
for ndviImage in inputImages:
bandName = str(ndviImage.bandNames())
bName = re.search('id(.*)', bandName)
bName = bName.group(1).replace('"', '').replace(':', '').replace(' ', '')
outputAttributes[bName] = ndviImage.reduceRegion(
reducer = ee.Reducer.percentile([0,25,50,75,100], None, None, None, None),
#reducer=ee.Reducer.sum(),
geometry=ee.Geometry.Polygon(shpJSON['geometry']['coordinates']),
scale=scale,
maxPixels=10e15,
bestEffort=True)
allFeatures.append(ee.Feature(None, outputAttributes))
def fetchResults(allFeatures):
allFeaturesCollection = ee.FeatureCollection(allFeatures)
ndviDict = allFeaturesCollection.getInfo()
### NOTE: functions have to return things
return ndviDict
def extractResults(xx):
allRes = []
valNames = []
logVals = True
for x in xx['features']:
curRes = []
if x == xx['features'][0]:
#Write the header if this is the first feature
allHead = []
for xHead in x['properties'].iterkeys():
allHead.append(xHead)
for val in x['properties'].itervalues():
if type(val) is dict:
for y in val.values():
curRes.append(y)
if logVals:
valNames = val.keys()
logVals=False
else:
curRes.append(str(val))
allRes.append(curRes)
allNames = ['IDX']
for hIdx in allHead[1:]:
for vName in valNames:
allNames.append(hIdx + "_" + vName)
return {"Header":allNames, "Results": allRes}
ndviDict = fetchResults(allFeatures)
del final
allRes = []
for curFeat in ndviDict['features']:
curRes = pd.DataFrame(curFeat['properties'])
allRes.append(curRes.unstack())
x = pd.DataFrame(allRes)
x.columns = ['__'.join(col) for col in x.columns.values]
try:
final.append(curRes, axis=0)
except:
final = curRes
pd.DataFrame(ndviDict['features'][0]['properties'])
n1 = extractResults(ndviDict)
n0 = ndviDict
if idx > 100 or idx == inD.shape[0]:
logging.info("Processing %s of %s" % (subIdx, len(inShapes)))
vCount = vCount + 1
curOut = outputFile.replace(".csv", "_%s.csv" % vCount)
if not os.path.exists(curOut):
try:
ndviDict = fetchResults(allFeatures)
except:
logging.warning("***** Encountered GEE error, taking a break")
time.sleep(60)
ndviDict = fetchResults(allFeatures)
if ndviDict and writeTemp:
writeResults(ndviDict, curOut)
else:
pass
pRes = extractResults(ndviDict)
if idx == subIdx:
final = pRes['Results']
else:
for x in pRes['Results']:
final.append(x)
#Reset for next iteration
idx = 0
allFeatures = []
try:
finalPD = pd.DataFrame(final, columns=pRes['Header'])
#finalPD = finalPD.append(inD, axis=1)
finalPD.to_csv(outputFile)
#writeResults(final, outputFile)
return finalPD
except:
print final
''' | 32.514124 | 147 | 0.636664 | 679 | 5,755 | 5.318115 | 0.349043 | 0.016616 | 0.0144 | 0.00997 | 0.111327 | 0.075325 | 0.075325 | 0.075325 | 0.075325 | 0.024924 | 0 | 0.014873 | 0.252302 | 5,755 | 177 | 148 | 32.514124 | 0.824309 | 0.031798 | 0 | 0.195652 | 0 | 0 | 0.111354 | 0.093341 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.065217 | 0.391304 | null | null | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
f7beced02fb0365466d3ca8740264f8e4da98dbf | 339 | py | Python | devind_helpers/import_from_file/__init__.py | devind-team/devind-django-helpers | 5c64d46a12802bbe0b70e44aa9d19bf975511b6e | [
"MIT"
] | null | null | null | devind_helpers/import_from_file/__init__.py | devind-team/devind-django-helpers | 5c64d46a12802bbe0b70e44aa9d19bf975511b6e | [
"MIT"
] | 4 | 2022-02-18T09:24:05.000Z | 2022-03-31T16:46:29.000Z | devind_helpers/import_from_file/__init__.py | devind-team/devind-django-helpers | 5c64d46a12802bbe0b70e44aa9d19bf975511b6e | [
"MIT"
] | null | null | null | from .base_reader import BaseReader
from .csv_reader import CsvReader
from .excel_reader import ExcelReader
from .exceptions import HookException, HookItemException, ItemsException
from .import_from_file import Errors, ItemError, Relative, BeforeCreate, Created, ImportFromFile
from .json_reader import JsonReader
from .ratio import Ratio
| 42.375 | 96 | 0.855457 | 41 | 339 | 6.926829 | 0.560976 | 0.169014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103245 | 339 | 7 | 97 | 48.428571 | 0.934211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
f7cbb841a96dd53bacc653f43012ce0d113be20e | 190 | py | Python | vyc.py | yueiuieui/vpy_inside | d6c3ee6741f9fb10ee743fb8041a2fc649c263b4 | [
"MIT"
] | null | null | null | vyc.py | yueiuieui/vpy_inside | d6c3ee6741f9fb10ee743fb8041a2fc649c263b4 | [
"MIT"
] | null | null | null | vyc.py | yueiuieui/vpy_inside | d6c3ee6741f9fb10ee743fb8041a2fc649c263b4 | [
"MIT"
] | null | null | null | from standard import *
vout("------vy compiler ------")
cp=vin("> ")
with open(cp+'.vy','r') as f:
temp=f.read()
write_file('temp.py','w','from standard import *\n'+temp)
run('temp.py')
| 23.75 | 57 | 0.584211 | 31 | 190 | 3.548387 | 0.677419 | 0.218182 | 0.327273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121053 | 190 | 7 | 58 | 27.142857 | 0.658683 | 0 | 0 | 0 | 0 | 0 | 0.363158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f7d1aa5dc0cace475feaceadb6ba33f944eeecb2 | 128 | py | Python | util/text_utils.py | lpmi-13/pypobot | cc481039c0a933e8683342b654d1d319d4ca6687 | [
"MIT"
] | 1 | 2017-06-18T10:33:37.000Z | 2017-06-18T10:33:37.000Z | util/text_utils.py | lpmi-13/pypobot | cc481039c0a933e8683342b654d1d319d4ca6687 | [
"MIT"
] | 9 | 2017-06-21T19:13:11.000Z | 2018-08-10T21:42:04.000Z | util/text_utils.py | lpmi-13/pypobot | cc481039c0a933e8683342b654d1d319d4ca6687 | [
"MIT"
] | null | null | null | import re
def an_finder(text):
pattern = re.compile(' an an ')
result = pattern.search(text)
return result.group()
| 18.285714 | 35 | 0.65625 | 18 | 128 | 4.611111 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 128 | 6 | 36 | 21.333333 | 0.83 | 0 | 0 | 0 | 0 | 0 | 0.054688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
f7e117b4b50e223e09f1b60dfe0afcaaf667f617 | 137 | py | Python | AoC/dec2018/d1/a.py | zegabr/aoc | ac750781339bd50491d84617996a01dd0c4709bb | [
"MIT"
] | null | null | null | AoC/dec2018/d1/a.py | zegabr/aoc | ac750781339bd50491d84617996a01dd0c4709bb | [
"MIT"
] | null | null | null | AoC/dec2018/d1/a.py | zegabr/aoc | ac750781339bd50491d84617996a01dd0c4709bb | [
"MIT"
] | null | null | null | import sys
a=sys.stdin.read().split()
#print(a)
#f = open("i.txt",'r')
#a=f.read().split()
sum=0
for i in a:
sum+=int(i)
print (sum)
| 13.7 | 26 | 0.583942 | 29 | 137 | 2.758621 | 0.586207 | 0.225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008547 | 0.145985 | 137 | 9 | 27 | 15.222222 | 0.675214 | 0.343066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f7e6c12048a8e33ff8fce444fc0f763795d3c5b4 | 275 | py | Python | Python_Codes_for_BJ/stage07 문자열 사용하기/다이얼.py | ch96an/BaekJoonSolution | 25594fda5ba1c0c4d26ff0828ec8dcf2f6572d33 | [
"MIT"
] | null | null | null | Python_Codes_for_BJ/stage07 문자열 사용하기/다이얼.py | ch96an/BaekJoonSolution | 25594fda5ba1c0c4d26ff0828ec8dcf2f6572d33 | [
"MIT"
] | null | null | null | Python_Codes_for_BJ/stage07 문자열 사용하기/다이얼.py | ch96an/BaekJoonSolution | 25594fda5ba1c0c4d26ff0828ec8dcf2f6572d33 | [
"MIT"
] | null | null | null | dial = {'A': 2, 'B': 2, 'C': 2, 'D': 3, 'E': 3, 'F': 3, 'G': 4, 'H': 4, 'I': 4, 'J': 5, 'K': 5, 'L': 5, 'M': 6, 'N': 6, 'O': 6, 'P': 7, 'Q': 7, 'R': 7, 'T': 8, 'U': 8, 'V': 8, 'W': 9, 'X': 9, 'Y': 9, 'S': 7, 'Z': 9}
n, s = 0, input()
for c in s:
n += dial[c]
print(n+len(s)) | 55 | 215 | 0.316364 | 68 | 275 | 1.279412 | 0.602941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129808 | 0.243636 | 275 | 5 | 216 | 55 | 0.288462 | 0 | 0 | 0 | 0 | 0 | 0.094203 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f7f4d32b0d4c21102074690da7e79ca0ebfccf27 | 38 | py | Python | workflowV2/__init__.py | neal-p/workflow2.0 | 65a71754310051ddb3fd4338ff579c499608a483 | [
"MIT"
] | null | null | null | workflowV2/__init__.py | neal-p/workflow2.0 | 65a71754310051ddb3fd4338ff579c499608a483 | [
"MIT"
] | 1 | 2021-07-22T15:19:58.000Z | 2021-07-22T15:19:58.000Z | workflowV2/__init__.py | neal-p/workflowV2 | 65a71754310051ddb3fd4338ff579c499608a483 | [
"MIT"
] | null | null | null | __logfile__ = None
__logging__ = True
| 12.666667 | 18 | 0.789474 | 4 | 38 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 38 | 2 | 19 | 19 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f7ff2b88a5f2e3746635f10ada3446d842d5baea | 1,248 | py | Python | tests/test_openwebifpy.py | fbradyirl/openwebifpy | e40454fbf6e67568a032c67700818aaf6d8e81df | [
"MIT"
] | 5 | 2019-04-07T09:37:37.000Z | 2021-12-01T11:30:23.000Z | tests/test_openwebifpy.py | fbradyirl/openwebifpy | e40454fbf6e67568a032c67700818aaf6d8e81df | [
"MIT"
] | 6 | 2019-03-01T16:16:17.000Z | 2021-05-21T14:52:06.000Z | tests/test_openwebifpy.py | fbradyirl/openwebifpy | e40454fbf6e67568a032c67700818aaf6d8e81df | [
"MIT"
] | 1 | 2020-11-13T14:42:02.000Z | 2020-11-13T14:42:02.000Z | """
tests.test_api
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tests the api
Copyright (c) 2015 Finbarr Brady <https://github.com/fbradyirl>
Licensed under the MIT license.
"""
# pylint: disable=protected-access
import unittest
import openwebif.api
from openwebif.error import OpenWebIfError, MissingParamError
class TestAPI(unittest.TestCase):
""" Tests openwebif.api module. """
def test_create(self):
""" Test creating a new device. """
# Bogus config
self.assertRaises(MissingParamError, lambda: openwebif.api.CreateDevice())
# self.assertRaises(OpenWebIfError, lambda: openwebif.api.CreateDevice('10.10.10.4'))
def test_get_picon_name(self):
self.assertEqual(openwebif.api.CreateDevice.get_picon_name('RTÉ One'), "rteone")
# def test_status(self):
# """ Test getting version and status. """
# # Use this to test on real box
# client = openwebif.api.CreateDevice('vuduo2.local')
# self.assertEqual("OWIF 1.3.6", client.get_version())
# self.assertTrue(len(client.get_status_info()) > 8)
# # Test that an exception doesnt get thrown
# result = client.is_box_in_standby()
# self.assertTrue(result is True or result is False)
| 32 | 93 | 0.66266 | 150 | 1,248 | 5.42 | 0.573333 | 0.088561 | 0.118081 | 0.073801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01592 | 0.194712 | 1,248 | 38 | 94 | 32.842105 | 0.793035 | 0.610577 | 0 | 0 | 0 | 0 | 0.028698 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.375 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
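The `extractResults` helper in the commented-out block above flattens each GEE feature's properties — scalar values plus nested reducer dicts of percentiles — into one row. A pure-Python sketch of that flattening on made-up data (no Earth Engine required; the feature below is illustrative, not real GEE output):

```python
# Flatten one feature's properties the way extractResults does: nested
# reducer outputs (dicts of percentile values) expand in place, while
# scalar properties are stringified.
def flatten_properties(props):
    row = []
    for val in props.values():
        if isinstance(val, dict):
            row.extend(val.values())
        else:
            row.append(str(val))
    return row

feature = {'index': 1, 'BAI': {'p0': 0.1, 'p50': 0.4, 'p100': 0.9}}
print(flatten_properties(feature))  # ['1', 0.1, 0.4, 0.9]
```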
7900e4fba482c2349f7afd1bedc8ed327d0b15f8 | 4,326 | py | Python | GP/data_transformation.py | VirgiAgl/V_savigp | 310f31f789db34737313bf057ff1474e314d68fd | [
"Apache-2.0"
] | 7 | 2016-04-25T15:02:34.000Z | 2020-03-30T15:10:03.000Z | GP/data_transformation.py | VirgiAgl/V_savigp | 310f31f789db34737313bf057ff1474e314d68fd | [
"Apache-2.0"
] | null | null | null | GP/data_transformation.py | VirgiAgl/V_savigp | 310f31f789db34737313bf057ff1474e314d68fd | [
"Apache-2.0"
] | 5 | 2015-12-09T22:57:58.000Z | 2020-10-07T11:01:34.000Z | import numpy as np
from sklearn import preprocessing
class DataTransformation:
"""
A generic class for the transformation of data
"""
def __init__(self):
pass
def transform_X(self, X):
"""
transforms X
:param
X: Input X
:return
transformed X
"""
raise NotImplementedError()
def transform_Y(self, Y):
"""
transforms Y
:param
Y: Input Y
:return
transformed Y
"""
raise NotImplementedError()
def untransform_X(self, X):
"""
Untransforms X to its original values
:param
X: transformed X
:return
untransformed X
"""
raise NotImplementedError()
def untransform_Y(self, Y):
"""
Untransforms Y
:param
Y: transformed Y
:return
untransformed Y
"""
raise NotImplementedError()
def untransform_Y_var(self, Yvar):
raise NotImplementedError()
def untransform_NLPD(self, NLPD):
"""
Untransforms NLPD to the original Y space
:param
NLPD: transformed NLPD
:return
untransformed NLPD
"""
raise NotImplementedError()
class IdentityTransformation:
"""
Identity transformation. No transformation will be applied to data.
"""
def __init__(self):
pass
def transform_X(self, X):
return X
def transform_Y(self, Y):
return Y
def untransform_X(self, X):
return X
def untransform_Y(self, Y):
return Y
def untransform_Y_var(self, Yvar):
return Yvar
@staticmethod
def get_transformation(Y, X):
return IdentityTransformation()
def untransform_NLPD(self, NLPD):
return NLPD
class MeanTransformation(object, DataTransformation):
"""
Only transforms Y as follows:
transformed Y = untransformed Y - mean(Y)
"""
def __init__(self, mean):
super(MeanTransformation, self).__init__()
self.mean = mean
def transform_X(self, X):
return X
def transform_Y(self, Y):
return Y - self.mean
def untransform_X(self, X):
return X
def untransform_Y(self, Y):
return Y + self.mean
def untransform_Y_var(self, Yvar):
return Yvar
def untransform_NLPD(self, NLPD):
return NLPD
@staticmethod
def get_transformation(Y, X):
return MeanTransformation(Y.mean(axis=0))
class MeanStdYTransformation(object, DataTransformation):
"""
Transforms only Y in a way that the transformed Y has mean = 0 and std =1
"""
def __init__(self, scalar):
super(MeanStdYTransformation, self).__init__()
self.scalar = scalar
def transform_X(self, X):
return X
def transform_Y(self, Y):
return self.scalar.transform(Y)
def untransform_X(self, X):
return X
def untransform_Y(self, Y):
return self.scalar.inverse_transform(Y)
def untransform_Y_var(self, Yvar):
return Yvar
def untransform_NLPD(self, NLPD):
return NLPD + np.hstack((np.array([np.log(self.scalar.std_).sum()]), np.log(self.scalar.std_)))
@staticmethod
def get_transformation(Y, X):
return MeanStdYTransformation(preprocessing.StandardScaler().fit(Y))
class MinTransformation(object, DataTransformation):
"""
Transforms only Y.
transformed Y = (Y - min(Y)) / (max(Y) - min(Y)) - 0.5
"""
def __init__(self, min, max, offset):
super(MinTransformation, self).__init__()
self.min = min
self.max = max
self.offset = offset
def transform_X(self, X):
return X
def transform_Y(self, Y):
return (Y-self.min).astype('float')/(self.max-self.min) - self.offset
def untransform_X(self, X):
return X
def untransform_Y(self, Y):
return (Y+self.offset)*(self.max-self.min) + self.min
def untransform_Y_var(self, Yvar):
return Yvar * (self.max-self.min) ** 2
def untransform_NLPD(self, NLPD):
return NLPD + np.log(self.max - self.min)
@staticmethod
def get_transformation(Y, X):
return MinTransformation(Y.min(), Y.max(), 0.5)
| 21.63 | 103 | 0.596163 | 501 | 4,326 | 5 | 0.167665 | 0.111776 | 0.023952 | 0.038323 | 0.51976 | 0.396806 | 0.378443 | 0.3002 | 0.26986 | 0.251896 | 0 | 0.002665 | 0.306056 | 4,326 | 199 | 104 | 21.738693 | 0.831779 | 0.162275 | 0 | 0.670213 | 0 | 0 | 0.001531 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.414894 | false | 0.021277 | 0.021277 | 0.297872 | 0.787234 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
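`MinTransformation` above maps Y into `[-offset, 1-offset]` and back. A scalar round-trip check of those two formulas (the class operates on numpy arrays; plain floats are used here purely for illustration):

```python
# Scalar versions of MinTransformation.transform_Y / untransform_Y.
def transform_y(y, y_min, y_max, offset=0.5):
    return (y - y_min) / float(y_max - y_min) - offset

def untransform_y(t, y_min, y_max, offset=0.5):
    return (t + offset) * (y_max - y_min) + y_min

t = transform_y(7.0, y_min=2.0, y_max=12.0)   # (7-2)/10 - 0.5 = 0.0
assert untransform_y(t, 2.0, 12.0) == 7.0     # round-trip recovers Y
print(t)
```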
790cf0917728b0fd2c56d1cef1a5a4d45cb2cea1 | 20 | py | Python | tests/__init__.py | noamkatzir/palm-hand-reading | 1a405759c03218fc74d661805bced8e4f4a92e74 | [
"BSD-3-Clause"
] | 5 | 2018-10-21T13:07:36.000Z | 2021-11-26T17:01:47.000Z | tests/__init__.py | noamkatzir/palm-hand-reading | 1a405759c03218fc74d661805bced8e4f4a92e74 | [
"BSD-3-Clause"
] | null | null | null | tests/__init__.py | noamkatzir/palm-hand-reading | 1a405759c03218fc74d661805bced8e4f4a92e74 | [
"BSD-3-Clause"
] | 3 | 2019-06-08T07:04:36.000Z | 2019-11-27T03:12:53.000Z | __author__ = 'noam'
| 10 | 19 | 0.7 | 2 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.588235 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7918deeae964f3e7aeec689952c68e14507b214a | 1,188 | py | Python | test_servers.py | heicj/http-server | 12fd83197d0a0355a91490f0e653310d67fd20af | [
"MIT"
] | null | null | null | test_servers.py | heicj/http-server | 12fd83197d0a0355a91490f0e653310d67fd20af | [
"MIT"
] | null | null | null | test_servers.py | heicj/http-server | 12fd83197d0a0355a91490f0e653310d67fd20af | [
"MIT"
] | null | null | null | import unittest
from client import client
class TestIt(unittest.TestCase):
#test 1,2,3 work for echo server
"""
def test_1(self):
self.assertEqual(client("LF", "Test"), "Test")
def test_2(self):
self.assertEqual(client("LF", "This is a long string test"), "This is a long string test")
def test_3(self):
self.assertEqual(client("LF", ""), "")
"""
#tests for the step1 part of the assignment: getting an http response
#def test_4(self):
# """uses key phase testOK to test get a 200 ok response"""
# self.assertEqual(client("LF", "testOK"), 'HTTP/1.1 200 OK')
#def test_5(self):
# """uses key phrase testError to check error response works"""
# self.assertEqual(client("LF", "testError"), 'HTTP/1.1 500 Internal Server Error')
#def test_6(self):
# """test that request to server sends back the resource requested"""
# self.assertEqual(client("test"), "/")
#tests for step2, run one at a time
def test_7(self):
"""test that error is raised when request isn't a GET requset"""
self.assertEqual(client('test'), 'Not a GET request')
def test_8(self):
"""test that version is not http/1.1"""
self.assertEqual(client('test'), 'version not supported')
| 28.97561 | 92 | 0.677609 | 184 | 1,188 | 4.331522 | 0.391304 | 0.070263 | 0.21079 | 0.144291 | 0.154329 | 0.052698 | 0 | 0 | 0 | 0 | 0 | 0.028398 | 0.170034 | 1,188 | 40 | 93 | 29.7 | 0.779919 | 0.755051 | 0 | 0 | 0 | 0 | 0.164286 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
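The step-2 tests above expect `'Not a GET request'` and `'version not supported'` responses from the server. A sketch of the request-line validation such a server might perform (the real client/server modules are not shown in this file, so this parser is illustrative only):

```python
# Validate an HTTP request line the way the step-2 tests expect:
# only GET is accepted, and only protocol version HTTP/1.1.
def check_request_line(line):
    parts = line.split()
    if len(parts) != 3 or parts[0] != 'GET':
        return 'Not a GET request'
    _, uri, version = parts
    if version != 'HTTP/1.1':
        return 'version not supported'
    return uri

print(check_request_line('POST / HTTP/1.1'))      # Not a GET request
print(check_request_line('GET / HTTP/1.0'))       # version not supported
print(check_request_line('GET /index HTTP/1.1'))  # /index
```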
7927f507f1296d336b14e5a433dabc6791260e95 | 17,530 | py | Python | splunk_sdk/identity/gen_identity_and_access_control_api.py | ahallur/splunk-cloud-sdk-python | 27914e4cb624bcc67788408688553cfb99f64aa1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | splunk_sdk/identity/gen_identity_and_access_control_api.py | ahallur/splunk-cloud-sdk-python | 27914e4cb624bcc67788408688553cfb99f64aa1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | splunk_sdk/identity/gen_identity_and_access_control_api.py | ahallur/splunk-cloud-sdk-python | 27914e4cb624bcc67788408688553cfb99f64aa1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding: utf-8
# Copyright © 2019 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# [http://www.apache.org/licenses/LICENSE-2.0]
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
############# This file is auto-generated. Do not edit! #############
"""
SDC Service: Identity and Access Control
With the Splunk Cloud Identity and Access Control (IAC) Service, you can authenticate and authorize Splunk API users.
OpenAPI spec version: v2beta1.9
Generated by: https://openapi-generator.tech
"""
from requests import Response
from string import Template
from typing import List, Dict
from splunk_sdk.base_client import handle_response
from splunk_sdk.base_service import BaseService
from splunk_sdk.common.sscmodel import SSCModel, SSCVoidModel
from splunk_sdk.identity.gen_models import AddGroupMemberBody
from splunk_sdk.identity.gen_models import AddGroupRoleBody
from splunk_sdk.identity.gen_models import AddMemberBody
from splunk_sdk.identity.gen_models import CreateGroupBody
from splunk_sdk.identity.gen_models import CreateRoleBody
from splunk_sdk.identity.gen_models import Group
from splunk_sdk.identity.gen_models import GroupMember
from splunk_sdk.identity.gen_models import GroupRole
from splunk_sdk.identity.gen_models import Member
from splunk_sdk.identity.gen_models import Principal
from splunk_sdk.identity.gen_models import Role
from splunk_sdk.identity.gen_models import RolePermission
from splunk_sdk.identity.gen_models import ValidateInfo
class IdentityAndAccessControl(BaseService):
"""
Identity and Access Control
Version: v2beta1.9
With the Splunk Cloud Identity and Access Control (IAC) Service, you can authenticate and authorize Splunk API users.
"""
def __init__(self, base_client):
super().__init__(base_client)
def add_group_member(self, group: str, add_group_member_body: AddGroupMemberBody) -> GroupMember:
"""
Adds a member to a given group.
"""
query_params = {}
path_params = {
"group": group,
}
path = Template("/identity/v2beta1/groups/${group}/members").substitute(path_params)
url = self.base_client.build_url(path)
data = add_group_member_body.to_dict()
response = self.base_client.post(url, json=data, **query_params)
return handle_response(response, GroupMember)
def add_group_role(self, group: str, add_group_role_body: AddGroupRoleBody) -> GroupRole:
"""
Adds a role to a given group.
"""
query_params = {}
path_params = {
"group": group,
}
path = Template("/identity/v2beta1/groups/${group}/roles").substitute(path_params)
url = self.base_client.build_url(path)
data = add_group_role_body.to_dict()
        response = self.base_client.post(url, json=data, **query_params)
        return handle_response(response, GroupRole)

    def add_member(self, add_member_body: AddMemberBody) -> Member:
        """
        Adds a member to a given tenant.
        """
        query_params = {}
        path_params = {}
        path = Template("/identity/v2beta1/members").substitute(path_params)
        url = self.base_client.build_url(path)
        data = add_member_body.to_dict()
        response = self.base_client.post(url, json=data, **query_params)
        return handle_response(response, Member)

    def add_role_permission(self, role: str, body: str) -> RolePermission:
        """
        Adds permissions to a role in a given tenant.
        """
        query_params = {}
        path_params = {
            "role": role,
        }
        path = Template("/identity/v2beta1/roles/${role}/permissions").substitute(path_params)
        url = self.base_client.build_url(path)
        data = body
        response = self.base_client.post(url, json=data, **query_params)
        return handle_response(response, RolePermission)

    def create_group(self, create_group_body: CreateGroupBody) -> Group:
        """
        Creates a new group in a given tenant.
        """
        query_params = {}
        path_params = {}
        path = Template("/identity/v2beta1/groups").substitute(path_params)
        url = self.base_client.build_url(path)
        data = create_group_body.to_dict()
        response = self.base_client.post(url, json=data, **query_params)
        return handle_response(response, Group)

    def create_role(self, create_role_body: CreateRoleBody) -> Role:
        """
        Creates a new authorization role in a given tenant.
        """
        query_params = {}
        path_params = {}
        path = Template("/identity/v2beta1/roles").substitute(path_params)
        url = self.base_client.build_url(path)
        data = create_role_body.to_dict()
        response = self.base_client.post(url, json=data, **query_params)
        return handle_response(response, Role)

    def delete_group(self, group: str) -> SSCVoidModel:
        """
        Deletes a group in a given tenant.
        """
        query_params = {}
        path_params = {
            "group": group,
        }
        path = Template("/identity/v2beta1/groups/${group}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.delete(url, **query_params)
        return handle_response(response, )

    def delete_role(self, role: str) -> SSCVoidModel:
        """
        Deletes a defined role for a given tenant.
        """
        query_params = {}
        path_params = {
            "role": role,
        }
        path = Template("/identity/v2beta1/roles/${role}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.delete(url, **query_params)
        return handle_response(response, )

    def get_group(self, group: str) -> Group:
        """
        Returns information about a given group within a tenant.
        """
        query_params = {}
        path_params = {
            "group": group,
        }
        path = Template("/identity/v2beta1/groups/${group}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, Group)

    def get_group_member(self, group: str, member: str) -> GroupMember:
        """
        Returns information about a given member within a given group.
        """
        query_params = {}
        path_params = {
            "group": group,
            "member": member,
        }
        path = Template("/identity/v2beta1/groups/${group}/members/${member}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, GroupMember)

    def get_group_role(self, group: str, role: str) -> GroupRole:
        """
        Returns information about a given role within a given group.
        """
        query_params = {}
        path_params = {
            "group": group,
            "role": role,
        }
        path = Template("/identity/v2beta1/groups/${group}/roles/${role}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, GroupRole)

    def get_member(self, member: str) -> Member:
        """
        Returns a member of a given tenant.
        """
        query_params = {}
        path_params = {
            "member": member,
        }
        path = Template("/identity/v2beta1/members/${member}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, Member)

    def get_principal(self, principal: str) -> Principal:
        """
        Returns the details of a principal, including its tenant membership.
        """
        query_params = {}
        path_params = {
            "principal": principal,
        }
        path = Template("/system/identity/v2beta1/principals/${principal}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, Principal)

    def get_role(self, role: str) -> Role:
        """
        Returns a role for a given tenant.
        """
        query_params = {}
        path_params = {
            "role": role,
        }
        path = Template("/identity/v2beta1/roles/${role}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, Role)

    def get_role_permission(self, role: str, permission: str) -> RolePermission:
        """
        Gets a permission for the specified role.
        """
        query_params = {}
        path_params = {
            "role": role,
            "permission": permission,
        }
        path = Template("/identity/v2beta1/roles/${role}/permissions/${permission}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, RolePermission)

    def list_group_members(self, group: str) -> List[str]:
        """
        Returns a list of the members within a given group.
        """
        query_params = {}
        path_params = {
            "group": group,
        }
        path = Template("/identity/v2beta1/groups/${group}/members").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_group_roles(self, group: str) -> List[str]:
        """
        Returns a list of the roles that are attached to a group within a given tenant.
        """
        query_params = {}
        path_params = {
            "group": group,
        }
        path = Template("/identity/v2beta1/groups/${group}/roles").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_groups(self) -> List[str]:
        """
        List the groups that exist in a given tenant.
        """
        query_params = {}
        path_params = {}
        path = Template("/identity/v2beta1/groups").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_member_groups(self, member: str) -> List[str]:
        """
        Returns a list of groups that a member belongs to within a tenant.
        """
        query_params = {}
        path_params = {
            "member": member,
        }
        path = Template("/identity/v2beta1/members/${member}/groups").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_member_permissions(self, member: str) -> List[str]:
        """
        Returns a set of permissions granted to the member within the tenant.
        """
        query_params = {}
        path_params = {
            "member": member,
        }
        path = Template("/identity/v2beta1/members/${member}/permissions").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_member_roles(self, member: str) -> List[str]:
        """
        Returns a set of roles that a given member holds within the tenant.
        """
        query_params = {}
        path_params = {
            "member": member,
        }
        path = Template("/identity/v2beta1/members/${member}/roles").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_members(self) -> List[str]:
        """
        Returns a list of members in a given tenant.
        """
        query_params = {}
        path_params = {}
        path = Template("/identity/v2beta1/members").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_principals(self) -> List[str]:
        """
        Returns the list of principals known to IAC.
        """
        query_params = {}
        path_params = {}
        path = Template("/system/identity/v2beta1/principals").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_role_groups(self, role: str) -> List[str]:
        """
        Gets a list of groups for a role in a given tenant.
        """
        query_params = {}
        path_params = {
            "role": role,
        }
        path = Template("/identity/v2beta1/roles/${role}/groups").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_role_permissions(self, role: str) -> List[str]:
        """
        Gets the permissions for a role in a given tenant.
        """
        query_params = {}
        path_params = {
            "role": role,
        }
        path = Template("/identity/v2beta1/roles/${role}/permissions").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def list_roles(self) -> List[str]:
        """
        Returns all roles for a given tenant.
        """
        query_params = {}
        path_params = {}
        path = Template("/identity/v2beta1/roles").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, str)

    def remove_group_member(self, group: str, member: str) -> SSCVoidModel:
        """
        Removes the member from a given group.
        """
        query_params = {}
        path_params = {
            "group": group,
            "member": member,
        }
        path = Template("/identity/v2beta1/groups/${group}/members/${member}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.delete(url, **query_params)
        return handle_response(response, )

    def remove_group_role(self, group: str, role: str) -> SSCVoidModel:
        """
        Removes a role from a given group.
        """
        query_params = {}
        path_params = {
            "group": group,
            "role": role,
        }
        path = Template("/identity/v2beta1/groups/${group}/roles/${role}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.delete(url, **query_params)
        return handle_response(response, )

    def remove_member(self, member: str) -> SSCVoidModel:
        """
        Removes a member from a given tenant
        """
        query_params = {}
        path_params = {
            "member": member,
        }
        path = Template("/identity/v2beta1/members/${member}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.delete(url, **query_params)
        return handle_response(response, )

    def remove_role_permission(self, role: str, permission: str) -> SSCVoidModel:
        """
        Removes a permission from the role.
        """
        query_params = {}
        path_params = {
            "role": role,
            "permission": permission,
        }
        path = Template("/identity/v2beta1/roles/${role}/permissions/${permission}").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.delete(url, **query_params)
        return handle_response(response, )

    def validate_token(self, include: List[str] = None) -> ValidateInfo:
        """
        Validates the access token obtained from the authorization header and returns the principal name and tenant memberships.
        """
        query_params = {}
        if include is not None:
            query_params['include'] = include
        path_params = {}
        path = Template("/identity/v2beta1/validate").substitute(path_params)
        url = self.base_client.build_url(path)
        response = self.base_client.get(url, **query_params)
        return handle_response(response, ValidateInfo)
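Every method above builds its URL the same way: a `string.Template` path with `${...}` placeholders is substituted from a `path_params` dict before `base_client.build_url` is called. A standalone sketch of just that step (the `build_path` helper is mine, for illustration; only `string.Template` itself comes from the standard library):

```python
from string import Template


def build_path(template: str, **path_params) -> str:
    # Substitute ${...} placeholders the same way the client methods do;
    # Template.substitute raises KeyError if a placeholder has no value.
    return Template(template).substitute(path_params)


path = build_path("/identity/v2beta1/groups/${group}/roles/${role}",
                  group="admins", role="viewer")
print(path)  # /identity/v2beta1/groups/admins/roles/viewer
```

Note that `substitute` (unlike `safe_substitute`) fails loudly on a missing parameter, which is the behavior you want when a required path segment is absent.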
# File: backend/db/entities/staging/tunjangan_kinerja.py
# Repo: R-N/sistem_gaji_vue_thrift (MIT)
from sqlalchemy.orm import reconstructor
from sqlalchemy.ext.hybrid import hybrid_property

from .base import DbStagingEntity
from ..mixin import MxTunjanganKinerja, MxStagingLite
from .job_level import DbJobLevel
from .kinerja import DbKinerja


class DbTunjanganKinerja(MxStagingLite, MxTunjanganKinerja, DbStagingEntity):
    @hybrid_property
    def real_enabled(self):
        return self.job_level.real_enabled and self.kinerja_rel.real_enabled

    @real_enabled.expression
    def real_enabled(cls):
        return DbJobLevel.real_enabled & DbKinerja.real_enabled
# File: Python-Basics/12. While Exercise/05.Vending machine.py
# Repo: Xamaneone/SoftUni-Intro (MIT)
coins = 0
price = round(float(input()), 2)
while price != 0:
    if price >= 2:
        price -= 2
        coins += 1
    elif price >= 1:
        price -= 1
        coins += 1
    elif price >= 0.50:
        price -= 0.50
        coins += 1
    elif price >= 0.20:
        price -= 0.20
        coins += 1
    elif price >= 0.10:
        price -= 0.10
        coins += 1
    elif price >= 0.05:
        price -= 0.05
        coins += 1
    elif price >= 0.02:
        price -= 0.02
        coins += 1
    elif price >= 0.01:
        price -= 0.01
        coins += 1
    price = round(price, 2)
print(coins)
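The loop above is a greedy change count: it repeatedly subtracts the largest coin that still fits and rounds to two decimals to avoid float drift. The same algorithm as a reusable function (a sketch; the function name is mine, not part of the exercise):

```python
def count_coins(price: float) -> int:
    """Greedy coin count over the same denominations as the loop above."""
    coins = 0
    for denom in (2, 1, 0.50, 0.20, 0.10, 0.05, 0.02, 0.01):
        # Take as many of this denomination as the remaining price allows.
        while price >= denom:
            price = round(price - denom, 2)
            coins += 1
    return coins


print(count_coins(1.23))  # 4 coins: 1 + 0.20 + 0.02 + 0.01
```

The `round(..., 2)` after each subtraction is what keeps accumulated binary floating-point error from making a comparison like `0.01 >= 0.01` fail.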
# File: mycloud/drive/filesystem/file_version.py
# Repo: ThomasGassmann/swisscom-my-cloud-backup (MIT)
from abc import ABC, abstractmethod
from mycloud.common import sha256_file
from mycloud.constants import VERSION_HASH_LENGTH


class CalculatableVersion(ABC):
    @abstractmethod
    def calculate_version(self):
        raise NotImplementedError()


class BasicStringVersion(CalculatableVersion):
    def __init__(self, version: str):
        self._version = version

    def calculate_version(self):
        return self._version


class HashCalculatedVersion(CalculatableVersion):
    def __init__(self, local_file: str):
        self.local_file = local_file

    def calculate_version(self):
        return sha256_file(self.local_file)[:VERSION_HASH_LENGTH]

    def get_hash(self):
        return sha256_file(self.local_file)
# File: VetsApp/migrations/0001_initial.py
# Repo: Sabrinax3/Pet-Clinic-1 (MIT)
# Generated by Django 3.0.5 on 2020-04-10 10:01
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Vets',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('Name', models.CharField(max_length=100)),
                ('ContactNo', models.CharField(max_length=30)),
                ('Address', models.CharField(max_length=200)),
                ('University', models.CharField(max_length=100)),
                ('HighestDegree', models.CharField(max_length=50)),
                ('Image', models.CharField(max_length=1000)),
            ],
        ),
    ]
# File: Tests/Tests.py
# Repo: SkyLined/mDebugOutput (CC-BY-4.0)
import os, sys;
sModulePath = os.path.dirname(__file__);
sys.path = [sModulePath] + [sPath for sPath in sys.path if sPath.lower() != sModulePath.lower()];
from fTestDependencies import fTestDependencies;
fTestDependencies();
# I should add some tests here.
pass;
# File: 1_languages/python/src/types_tuple.py
# Repo: praisetompane/3_programming (MIT)
'''
immutable
'''
letters = ("a", "b", "c", "d", "c")
print(letters.count("c"))
print(letters.index("d"))
coordinates = (94, 6, 7)
x, y, z = coordinates
print(x, y, z)
# File: bokeh/models/canvas.py
# Repo: TheLaw1337/bokeh (BSD-3-Clause)
#-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2021, Anaconda, Inc., and Bokeh Contributors.
# All rights reserved.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
from __future__ import annotations
import logging # isort:skip
log = logging.getLogger(__name__)
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# Bokeh imports
from ..core.properties import Instance
from ..model import Model
from .ranges import DataRange1d, Range
from .scales import LinearScale, Scale
#-----------------------------------------------------------------------------
# Globals and constants
#-----------------------------------------------------------------------------
__all__ = (
"CoordinateMapping",
)
#-----------------------------------------------------------------------------
# General API
#-----------------------------------------------------------------------------
class CoordinateMapping(Model):
    """ A mapping between two coordinate systems. """

    x_source = Instance(Range, default=lambda: DataRange1d())
    y_source = Instance(Range, default=lambda: DataRange1d())

    x_scale = Instance(Scale, default=lambda: LinearScale())
    y_scale = Instance(Scale, default=lambda: LinearScale())

    x_target = Instance(Range)
    y_target = Instance(Range)
#-----------------------------------------------------------------------------
# Dev API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
# File: Creational/abstract_factory.py
# Repo: ofl/design-patterns-for-humans-python (CC-BY-4.0)
# Abstract Factory Pattern
from abc import ABCMeta, abstractmethod
import random


class AbstractProductX(metaclass=ABCMeta):
    @abstractmethod
    def get_description(self) -> None:
        pass


class AbstractProductY(metaclass=ABCMeta):
    @abstractmethod
    def get_description(self) -> None:
        pass


class ConcreteProductX1(AbstractProductX):
    def get_description(self) -> None:
        print('I am a product X1')


class ConcreteProductX2(AbstractProductX):
    def get_description(self) -> None:
        print('I am a product X2')


class ConcreteProductY1(AbstractProductY):
    def get_description(self) -> None:
        print('I am a product Y1')


class ConcreteProductY2(AbstractProductY):
    def get_description(self) -> None:
        print('I am a product Y2')


class AbstractFactory(metaclass=ABCMeta):
    @abstractmethod
    def make_product_1(self) -> AbstractProductX:
        pass

    @abstractmethod
    def make_product_2(self) -> AbstractProductY:
        pass


class ConcreteFactoryX(AbstractFactory):
    def make_product_1(self) -> AbstractProductX:
        return ConcreteProductX1()

    def make_product_2(self) -> AbstractProductY:
        return ConcreteProductY1()


class ConcreteFactoryY(AbstractFactory):
    def make_product_1(self) -> AbstractProductX:
        return ConcreteProductX2()

    def make_product_2(self) -> AbstractProductY:
        return ConcreteProductY2()


if random.randint(1, 2) == 1:
    factory = ConcreteFactoryX()
else:
    factory = ConcreteFactoryY()

product_1 = factory.make_product_1()
product_2 = factory.make_product_2()

product_1.get_description()
product_2.get_description()
# File: juego/Disparo.py
# Repo: rodrigo129/JuegoBulletHell (MIT)
"""
@author: Rodrigo Vargas
module in charge of the behaviour of standard shots
"""


class Disparo:
    """
    Class in charge of the behaviour of standard shots.

    Attributes:
        XY (list): (x, y) coordinates of the shot
        Velocidad (int): speed of the shot
        Index (int): sprite index of the shot
        VelocidadFinal (int): maximum speed of the shot
        Aceleracion (float): acceleration of the shot
        Daño (int): damage of the shot
        Tipo (str): type of sprite the shot has
        Continuo (bool): indicates whether the shot can produce
            further shots
        Fin (bool): indicates whether the shot has to be removed
        Alianza (str): side the shot belongs to
    """

    # remember to always declare self in an object method
    XY: list = None
    # Color = None
    Index: int = 0
    # Z = None  # not implemented yet
    Velocidad: int = -5
    Aceleracion: float = 0
    VelocidadFinal: int = 0
    # owner parameter to check collisions
    Daño: int = 300  # add to constructors
    Tipo: str = "Estatico"
    Continuo: bool = False
    Fin: bool = False

    def __init__(self, xy: list, velocidad: int, velocidadFinal: int,
                 aceleracion: float, alianza: str):
        """
        Args:
            xy: (x, y) coordinate list
            velocidad: movement speed
            aceleracion: speed change applied per cycle
            velocidadFinal: final movement speed
            alianza: side the shot belongs to
        :type xy: list
        :type velocidad: int
        :type velocidadFinal: int
        :type aceleracion: float
        :type alianza: str
        """
        # self.Daño = 10
        self.XY = xy
        self.Velocidad = velocidad
        self.Aceleracion = aceleracion
        self.VelocidadFinal = velocidadFinal
        # self.Z = z
        self.Alianza = alianza

    def EsContinuo(self) -> bool:
        """
        Indicates whether a shot can spawn new shot instances.

        :rtype: bool
        """
        aux = self.Continuo
        self.Continuo = False
        return aux

    def GetAlianza(self) -> str:
        """
        Indicates which side the entity belongs to.

        :rtype: str
        """
        return self.Alianza

    def GetX(self) -> int:
        """
        Returns:
            numeric value of XY[0]
        :rtype: int
        """
        return self.XY[0]

    def GetY(self) -> int:
        """
        Returns:
            numeric value of XY[1]
        :rtype: int
        """
        return self.XY[1]

    def SetX(self, x: int) -> None:
        """
        Args:
            x: new value for XY[0]
        :rtype: None
        """
        self.XY[0] = x

    def SetY(self, y: int) -> None:
        """
        Args:
            y: new value for XY[1]
        :rtype: None
        """
        self.XY[1] = y

    def GetTipo(self) -> str:
        """
        Returns:
            indicates whether the sprite of the shot can have rotations
        :rtype: str
        """
        return self.Tipo

    def GetDaño(self) -> int:
        """
        Returns:
            the amount of damage the shot deals
        :rtype: int
        """
        return self.Daño

    def GetVel(self) -> int:
        """
        Returns:
            numeric value of the speed
        :rtype: int
        """
        return self.Velocidad

    def EsFin(self) -> bool:
        """
        Returns:
            indicates whether a shot has to be removed
        :rtype: bool
        """
        return self.Fin

    def GetTodo(self) -> tuple:
        """
        Returns:
            coordinates used to place the sprite
        :rtype: tuple
        """
        return (self.XY[0], self.XY[1])

    def Actualizar(self) -> None:
        """
        Computes the physics of the shot for one update step.

        :rtype: None
        """
        self.SetY(self.GetY() - self.GetVel())

    def GetIndex(self) -> int:
        """
        Returns:
            index of the assigned sprite
        :rtype: int
        """
        return self.Index

    def GetXY(self) -> list:
        """
        :rtype: list
        :return: the coordinates as a list
        """
        return self.XY

    def Accelerar(self) -> None:
        """
        Computes the physics for accelerations and decelerations.

        :rtype: None
        """
        if self.Aceleracion == 0:
            pass
        elif self.Aceleracion < 0:
            if self.Velocidad + self.Aceleracion > self.VelocidadFinal:
                self.Velocidad = self.Velocidad + self.Aceleracion
            else:
                self.Velocidad = self.VelocidadFinal
        elif self.Aceleracion > 0:
            if self.Velocidad + self.Aceleracion < self.VelocidadFinal:
                self.Velocidad = self.Velocidad + self.Aceleracion
            else:
                self.Velocidad = self.VelocidadFinal
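The branching in `Accelerar` is a clamp: velocity moves by `Aceleracion` per tick until it would pass `VelocidadFinal`, then sticks there. The same rule as a standalone function (my restatement for illustration, not part of the game code):

```python
def accelerate(velocity: float, accel: float, final: float) -> float:
    """One tick of the Accelerar rule: step by accel, clamped at final."""
    if accel == 0:
        return velocity
    stepped = velocity + accel
    if accel < 0:
        # Decelerating toward a more negative final speed.
        return stepped if stepped > final else final
    # Accelerating toward a more positive final speed.
    return stepped if stepped < final else final


print(accelerate(-5, -1, -10))    # -6: still approaching the final speed
print(accelerate(-9.5, -1, -10))  # -10: clamped at the final speed
```

Note that shots in this game default to a negative speed (`Velocidad: int = -5`), so "accelerating" usually means moving further into the negatives, which is why both sign branches are needed.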
# File: MS5607.py
# Repo: llinear/MS5607 (MIT)
import time
import smbus
import math
bus = smbus.SMBus(1)
class MS5607:
"""
http://www.parallaxinc.com/sites/default/files/downloads/29124-APPNote_520_C_code.pdf
http://www.parallaxinc.com/sites/default/files/downloads/29124-MS5607-02BA03-Datasheet.pdf
Offset for humidity to provide better precision
"""
DEVICE_ADDRESS = 0x76
_CMD_RESET = 0x1E
_CMD_ADC_READ = 0x00
_CMD_PROM_RD = 0xA0
_CMD_ADC_CONV = 0x40
_CMD_ADC_D1 = 0x00
_CMD_ADC_D2 = 0x10
_CMD_ADC_256 = 0x00
_CMD_ADC_512 = 0x02
_CMD_ADC_1024 = 0x04
_CMD_ADC_2048 = 0x06
_CMD_ADC_4096 = 0x08
def __init__(self):
self.resetSensor()
self.coefficients = self.readCoefficients()
# Some utility methods
def read16U(self, register1, register2):
bytes = bus.read_i2c_block_data(self.DEVICE_ADDRESS, register1, 2)
return (bytes[0] << 8) + (bytes[1])
def read24U(self, register):
bytes = bus.read_i2c_block_data(self.DEVICE_ADDRESS, register, 3)
return (bytes[0] << 16) + (bytes[1] << 8) + bytes[2]
def hectoPascalToInHg(self, milliBar):
return milliBar * 29.5333727 / 100000
def inHgToHectoPascal(self, inHg):
return 100 * 1000 * inHg / 29.5333727
def getImperialAltitude(self, currentMilliBar, baseMilliBar):
return (1 - math.pow(currentMilliBar / baseMilliBar, .190284)) * 145366.45
def getMetricAltitude(self, currentMilliBar, baseMilliBar):
return 0.3048 * self.getImperialAltitude(currentMilliBar, baseMilliBar)
# Commands
def resetSensor(self):
bus.write_byte(self.DEVICE_ADDRESS, self._CMD_RESET)
time.sleep(0.003) # wait for the reset sequence timing
def readCoefficient(self, i):
return self.read16U(self._CMD_PROM_RD + 2 * i, self._CMD_PROM_RD + 2 * i + 1)
def readCoefficients(self):
coefficients = [0] * 6
for i in range(6):
coefficients[i] = self.readCoefficient(i + 1)
return coefficients
def readAdc(self, cmd):
# set conversion mode
bus.write_byte(self.DEVICE_ADDRESS, self._CMD_ADC_CONV + cmd)
sleepTime = {self._CMD_ADC_256: 0.0009, self._CMD_ADC_512: 0.003, self._CMD_ADC_1024: 0.004, self._CMD_ADC_2048: 0.006, self._CMD_ADC_4096: 0.010}
time.sleep(sleepTime[cmd & 0x0f])
return self.read24U(self._CMD_ADC_READ)
def getDigitalPressure(self):
return self.readAdc(self._CMD_ADC_D1 + self._CMD_ADC_4096)
def getDigitalTemperature(self):
return self.readAdc(self._CMD_ADC_D2 + self._CMD_ADC_4096)
def getTemperature(self):
dT = self.getDigitalTemperature() - self.coefficients[4] * math.pow(2, 8)
return (2000 + dT * self.coefficients[5] / math.pow(2, 23)) / 100
def convertPressureTemperature(self, pressure, temperature):
# Calculate 1st order pressure and temperature
dT = temperature - self.coefficients[4] * 256
# Offset at actual temperature
off = self.coefficients[1] * 4 + ((float(dT) / 2048) * (float(self.coefficients[3]) / 1024))
# Sensitivity at actual temperature
sens = self.coefficients[0] * 2 + ((float(dT) / 4096) * (float(self.coefficients[2]) / 1024))
# Temperature compensated pressure
press = (float(pressure) / 2048) * (float(sens) / 1024) - off
return press
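As a standalone sanity check of the pressure-altitude formula used by `getImperialAltitude` above (the constants come from the code; the sample pressures are illustrative):

```python
import math

def imperial_altitude(current_mbar, base_mbar):
    # Same formula as MS5607.getImperialAltitude: pressure altitude in feet.
    return (1 - math.pow(current_mbar / base_mbar, 0.190284)) * 145366.45

# Equal pressures give zero altitude; lower pressure gives positive altitude.
print(imperial_altitude(1013.25, 1013.25))      # 0.0
print(imperial_altitude(900.0, 1013.25) > 0.0)  # True
```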
| 41.243902 | 154 | 0.667948 | 434 | 3,382 | 4.990783 | 0.336406 | 0.055402 | 0.050785 | 0.019391 | 0.177285 | 0.161588 | 0.147738 | 0.119114 | 0.085873 | 0 | 0 | 0.101449 | 0.224719 | 3,382 | 81 | 155 | 41.753086 | 0.724638 | 0.133944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01795 | 0 | 0 | 1 | 0.241935 | false | 0 | 0.048387 | 0.112903 | 0.709677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
e3c00e1018e148e9c58ff6cb7973ea44ce3df215 | 871 | py | Python | behavioral/strategy.py | zhengxiaowai/python-javascript-design-patterns | 867ef54c34994752afd77fd0ac54b99567ccfa52 | [
"MIT"
] | 6 | 2019-03-13T03:09:36.000Z | 2021-06-05T03:05:02.000Z | behavioral/strategy.py | zhengxiaowai/python-javascript-design-patterns | 867ef54c34994752afd77fd0ac54b99567ccfa52 | [
"MIT"
] | null | null | null | behavioral/strategy.py | zhengxiaowai/python-javascript-design-patterns | 867ef54c34994752afd77fd0ac54b99567ccfa52 | [
"MIT"
] | 3 | 2017-05-08T08:09:37.000Z | 2018-12-16T09:16:39.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Define a family of algorithms for calculating the year-end bonus
bonus_strategy = {
    'S': lambda s: s * 4,
    'A': lambda s: s * 3,
    'B': lambda s: s * 2,
    # Add the S+ and C strategies
    'SP': lambda s: s * 5,
    'C': lambda s: s,
}
def calculate_bonus(name, strategy, salary):
    return '{name} get {salary}'.format(name=name, salary=strategy(salary))
if __name__ == '__main__':
    print(calculate_bonus('jack', bonus_strategy['S'], 10000))
    print(calculate_bonus('linda', bonus_strategy['A'], 10000))
    print(calculate_bonus('sean', bonus_strategy['B'], 10000))
    # The requirements have changed: add S+ as the highest grade and C as the lowest
    # jack moves from S to S+
    # sean moves from B to C
    print('After the requirements changed......')
    print(calculate_bonus('jack', bonus_strategy['SP'], 10000))
    print(calculate_bonus('linda', bonus_strategy['A'], 10000))
    print(calculate_bonus('sean', bonus_strategy['C'], 10000))
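The payoff of the dictionary-of-lambdas approach is that the calling code never changes when grades are remapped; only the table does. A minimal standalone sketch (the grade names here are illustrative):

```python
strategies = {
    'S': lambda s: s * 4,
    'C': lambda s: s,
}

def bonus(strategy, salary):
    # The caller is closed to modification: it only invokes the strategy.
    return strategy(salary)

# Introducing a new grade is a one-line change to the table.
strategies['D'] = lambda s: s // 2
print(bonus(strategies['S'], 10000))  # 40000
print(bonus(strategies['D'], 10000))  # 5000
```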
| 26.393939 | 75 | 0.615385 | 125 | 871 | 4.112 | 0.352 | 0.177043 | 0.22179 | 0.18677 | 0.466926 | 0.466926 | 0.326848 | 0.326848 | 0.326848 | 0.326848 | 0 | 0.050143 | 0.198622 | 871 | 32 | 76 | 27.21875 | 0.686246 | 0.149254 | 0 | 0.117647 | 0 | 0 | 0.106267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0.058824 | 0.117647 | 0.411765 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
e3c4d848521c652c8216dc5f8d6fc1ec0b15da4d | 147 | py | Python | solutions/090_solution_01.py | UFResearchComputing/py4ai | db7f80614f26274ec18556d56ea9f549c463165a | [
"CC-BY-4.0"
] | null | null | null | solutions/090_solution_01.py | UFResearchComputing/py4ai | db7f80614f26274ec18556d56ea9f549c463165a | [
"CC-BY-4.0"
] | null | null | null | solutions/090_solution_01.py | UFResearchComputing/py4ai | db7f80614f26274ec18556d56ea9f549c463165a | [
"CC-BY-4.0"
] | 1 | 2021-04-27T09:50:54.000Z | 2021-04-27T09:50:54.000Z | values = []
values.append(1)
values.append(3)
values.append(5)
print('first time:', values)
values = values[1:]
print('second time:', values) | 21 | 29 | 0.673469 | 21 | 147 | 4.714286 | 0.428571 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031496 | 0.136054 | 147 | 7 | 30 | 21 | 0.748032 | 0 | 0 | 0 | 0 | 0 | 0.161972 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e3d6287d7505de5a0acd34bbeec93abae5a3da30 | 48 | py | Python | smartpost/__init__.py | raoz/pysmartpost | 9b8fdad0b63b45b8d358f2674e7fffb7bd3b5fa2 | [
"MIT"
] | null | null | null | smartpost/__init__.py | raoz/pysmartpost | 9b8fdad0b63b45b8d358f2674e7fffb7bd3b5fa2 | [
"MIT"
] | 11 | 2019-09-25T04:05:04.000Z | 2022-01-06T03:07:28.000Z | smartpost/__init__.py | raoz/pysmartpost | 9b8fdad0b63b45b8d358f2674e7fffb7bd3b5fa2 | [
"MIT"
] | null | null | null | name = "smartpost-itella"
__version__ = "0.0.1"
| 16 | 25 | 0.6875 | 7 | 48 | 4.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.125 | 48 | 2 | 26 | 24 | 0.619048 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e3ed1057f5d0cf1b7bf15135485dbb7a836ce135 | 507 | py | Python | fdk_client/application/models/DiscountProperties.py | kavish-d/fdk-client-python | a1023eb530473322cb52e095fc4ceb226c1e6037 | [
"MIT"
] | null | null | null | fdk_client/application/models/DiscountProperties.py | kavish-d/fdk-client-python | a1023eb530473322cb52e095fc4ceb226c1e6037 | [
"MIT"
] | null | null | null | fdk_client/application/models/DiscountProperties.py | kavish-d/fdk-client-python | a1023eb530473322cb52e095fc4ceb226c1e6037 | [
"MIT"
] | null | null | null | """Application Models."""
from marshmallow import fields, Schema
from marshmallow.validate import OneOf
from ..enums import *
from ..models.BaseSchema import BaseSchema
class DiscountProperties(BaseSchema):
# Rewards swagger.json
absolute = fields.Float(required=False)
currency = fields.Str(required=False)
display_absolute = fields.Str(required=False)
display_percent = fields.Str(required=False)
percent = fields.Float(required=False)
| 14.911765 | 49 | 0.698225 | 53 | 507 | 6.641509 | 0.45283 | 0.184659 | 0.144886 | 0.1875 | 0.164773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213018 | 507 | 33 | 50 | 15.363636 | 0.882206 | 0.080868 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
e3f3a278d74472b5a10e342c8b2cd1a27b01a763 | 1,365 | py | Python | juriscraper/opinions/united_states/state/fladistctapp_3.py | EvandoBlanco/juriscraper | 3d16af258620d4ba1b4827f66ef69e8a2c5a0484 | [
"BSD-2-Clause"
] | 228 | 2015-01-23T04:41:39.000Z | 2022-03-30T09:52:20.000Z | juriscraper/opinions/united_states/state/fladistctapp_3.py | EvandoBlanco/juriscraper | 3d16af258620d4ba1b4827f66ef69e8a2c5a0484 | [
"BSD-2-Clause"
] | 331 | 2015-01-05T18:53:40.000Z | 2022-03-29T23:43:30.000Z | juriscraper/opinions/united_states/state/fladistctapp_3.py | EvandoBlanco/juriscraper | 3d16af258620d4ba1b4827f66ef69e8a2c5a0484 | [
"BSD-2-Clause"
] | 84 | 2015-01-03T01:19:21.000Z | 2022-03-01T08:09:32.000Z | """
Scraper for Florida 3rd District Court of Appeal
CourtID: flaapp3
Court Short Name: flaapp3
Contact: 3dca@flcourts.org, Angel Falero <faleroa@flcourts.org> (305-229-6743)
"""
from juriscraper.lib.string_utils import convert_date_string
from juriscraper.lib.html_utils import (
get_table_column_links,
get_table_column_text,
)
from juriscraper.OpinionSite import OpinionSite
class Site(OpinionSite):
def __init__(self, *args, **kwargs):
super(Site, self).__init__(*args, **kwargs)
self.court_id = self.__module__
# Browser url: https://www.3dca.flcourts.org/Opinions/Most-Recent-Opinion-Release
self.url = "https://www.3dca.flcourts.org/search/opinions/?sort=opinion/type%20desc,%20opinion/case_number%20asc&view=embed_custom&searchtype=opinions&recent_only=1&hide_search=1&hide_filters=1&limit=75&offset=0"
def _get_case_names(self):
return get_table_column_text(self.html, 4)
def _get_download_urls(self):
return get_table_column_links(self.html, 1)
def _get_case_dates(self):
date_strings = get_table_column_text(self.html, 7)
return [convert_date_string(ds) for ds in date_strings]
def _get_precedential_statuses(self):
return ["Published"] * len(self.case_dates)
def _get_docket_numbers(self):
return get_table_column_text(self.html, 3)
| 35.921053 | 220 | 0.740659 | 194 | 1,365 | 4.902062 | 0.479381 | 0.050473 | 0.088328 | 0.07571 | 0.182965 | 0.157729 | 0.07571 | 0.07571 | 0 | 0 | 0 | 0.027682 | 0.153114 | 1,365 | 37 | 221 | 36.891892 | 0.794983 | 0.183883 | 0 | 0 | 0 | 0.045455 | 0.188065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.136364 | 0.181818 | 0.681818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
5410e107e5244b85794a15a20ff3f6c3394e37f6 | 91 | py | Python | Code/Python2.7/Kattis/13taketwostones.py | nicholasz2510/General | e2783cad4da7f9b50c952c2b91ef311d22b1d56f | [
"MIT"
] | 1 | 2019-11-21T15:56:03.000Z | 2019-11-21T15:56:03.000Z | Code/Python2.7/Kattis/13taketwostones.py | nicholasz2510/General | e2783cad4da7f9b50c952c2b91ef311d22b1d56f | [
"MIT"
] | 12 | 2019-11-21T21:00:57.000Z | 2022-02-27T01:46:56.000Z | Code/Python2.7/Kattis/13taketwostones.py | nicholasz2510/General | e2783cad4da7f9b50c952c2b91ef311d22b1d56f | [
"MIT"
] | 1 | 2019-11-21T20:49:18.000Z | 2019-11-21T20:49:18.000Z | import sys
if int(sys.stdin.readline()) % 2 == 0:
print "Bob"
else:
print "Alice"
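The same parity argument, written as a Python 3 function for clarity (the helper name is illustrative): an even pile is a loss for the first player, so Bob wins; an odd pile is a win for Alice.

```python
def winner(stones):
    # Even piles are losses for the first player in this take-two-stones game.
    return "Bob" if stones % 2 == 0 else "Alice"

print(winner(4), winner(7))  # Bob Alice
```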
| 13 | 38 | 0.593407 | 14 | 91 | 3.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028986 | 0.241758 | 91 | 6 | 39 | 15.166667 | 0.753623 | 0 | 0 | 0 | 0 | 0 | 0.087912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.4 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
541ae21951bce72d005b560cd139366a9fec2885 | 43 | py | Python | smiview/link/__init__.py | Haros85/gios | bfde14ab61c3ac448864ebe200062155d8564cee | [
"MIT"
] | null | null | null | smiview/link/__init__.py | Haros85/gios | bfde14ab61c3ac448864ebe200062155d8564cee | [
"MIT"
] | null | null | null | smiview/link/__init__.py | Haros85/gios | bfde14ab61c3ac448864ebe200062155d8564cee | [
"MIT"
] | null | null | null | default_app_config = "link.apps.LinkConfig" | 43 | 43 | 0.837209 | 6 | 43 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 43 | 1 | 43 | 43 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5809c8d0c7beac98f65906661938b178d3ed7209 | 402 | py | Python | core/newintentobserver.py | Wil-Peters/HomeAutomation | ab4f78d9fad42093435732233e99003f12dca5e7 | [
"MIT"
] | 2 | 2020-04-09T20:29:15.000Z | 2021-01-20T09:21:02.000Z | core/newintentobserver.py | Wil-Peters/HomeAutomation | ab4f78d9fad42093435732233e99003f12dca5e7 | [
"MIT"
] | null | null | null | core/newintentobserver.py | Wil-Peters/HomeAutomation | ab4f78d9fad42093435732233e99003f12dca5e7 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from core.intent import Intent
class NewIntentObserver(ABC):
"""The NewIntentObserver & NewIntentSubject abstract base classes are used to have a clean
interface between the part of the code where the origin of the Intent lies, and the part
where the Intent is handled."""
@abstractmethod
def update(self, intent: Intent) -> str:
pass
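A concrete observer only needs to implement `update`. A minimal sketch (the `LoggingObserver` class and the plain-string intent are hypothetical; the real code passes a `core.intent.Intent` instance):

```python
from abc import ABC, abstractmethod

class NewIntentObserver(ABC):
    @abstractmethod
    def update(self, intent) -> str:
        ...

class LoggingObserver(NewIntentObserver):
    def update(self, intent) -> str:
        # A real observer would dispatch the intent to a device handler.
        return f"handled {intent}"

print(LoggingObserver().update("lights_on"))  # handled lights_on
```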
| 30.923077 | 94 | 0.733831 | 55 | 402 | 5.363636 | 0.636364 | 0.047458 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211443 | 402 | 12 | 95 | 33.5 | 0.930599 | 0.50995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
583ff3998e33d0e84f82b85e9355f48c89d34d02 | 429 | py | Python | psi/data/sinks/api.py | bburan/psiexperiment | 9b70f7f0b4a4379d8c3fc463e1df272153afd247 | [
"MIT"
] | 5 | 2016-05-26T13:46:00.000Z | 2020-03-03T13:07:47.000Z | psi/data/sinks/api.py | bburan/psiexperiment | 9b70f7f0b4a4379d8c3fc463e1df272153afd247 | [
"MIT"
] | 2 | 2018-04-17T15:06:35.000Z | 2019-03-25T18:13:10.000Z | psi/data/sinks/api.py | bburan/psiexperiment | 9b70f7f0b4a4379d8c3fc463e1df272153afd247 | [
"MIT"
] | 1 | 2016-05-28T19:36:38.000Z | 2016-05-28T19:36:38.000Z | import enaml
with enaml.imports():
from .bcolz_store import BColzStore
from .display_value import DisplayValue
from .event_log import EventLog
from .epoch_counter import EpochCounter, GroupedEpochCounter
from .preferences_store import PreferencesStore
from .table_store import TableStore
from .text_store import TextStore
from .trial_log import TrialLog
from .sdt_analysis import SDTAnalysis
| 33 | 64 | 0.785548 | 51 | 429 | 6.431373 | 0.568627 | 0.134146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 429 | 12 | 65 | 35.75 | 0.934473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
584a9360b94e07b9bed039e35476cf70ccd91d84 | 1,685 | py | Python | lsfd202201/settings.py | z-t-y/lsfd202201v3 | 5f751083bfd9bad8f56e7d1168957edb2be80b35 | [
"MIT"
] | 1 | 2020-07-24T11:38:13.000Z | 2020-07-24T11:38:13.000Z | lsfd202201/settings.py | z-t-y/LSFD202201V3 | 5f751083bfd9bad8f56e7d1168957edb2be80b35 | [
"MIT"
] | 2 | 2020-07-24T11:59:56.000Z | 2020-07-26T03:16:07.000Z | lsfd202201/settings.py | z-t-y/LSFD202201V3 | 5f751083bfd9bad8f56e7d1168957edb2be80b35 | [
"MIT"
] | null | null | null | # flake8: noqa
import os
from werkzeug.security import generate_password_hash
def generate_sqlite_file(file_name: str):
basedir = os.path.abspath(os.path.dirname(__file__))
return "sqlite:///" + os.path.join(basedir, f"{file_name}.sqlite3")
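On a POSIX system the helper above yields a four-slash SQLite URI (absolute path after the `sqlite:///` scheme). A sketch with an explicit base directory, since the real code derives it from `__file__` (the `/srv/app` path is illustrative):

```python
import os.path

def generate_sqlite_file(file_name, basedir="/srv/app"):
    # Mirrors the helper above, but takes the base directory as a parameter.
    return "sqlite:///" + os.path.join(basedir, f"{file_name}.sqlite3")

print(generate_sqlite_file("data"))  # sqlite:////srv/app/data.sqlite3 on POSIX
```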
class Base:
DEBUG = False
TESTING = False
SECRET_KEY = os.getenv("SECRET_KEY", "secret key")
ARTICLE_PASSWORD_HASH = generate_password_hash(
os.getenv("PASSWORD", "article-password")
)
ADMIN_PASSWORD_HASH = generate_password_hash(
os.getenv("ADMIN_PASSWORD", "admin-password")
)
SQLALCHEMY_TRACK_MODIFICATIONS = False
MAIL_SERVER = os.getenv("MAIL_SERVER", "smtp.office365.com")
MAIL_PORT = 587
MAIL_USE_TLS = True
MAIL_USERNAME = os.getenv("MAIL_USERNAME", "username@example.com")
MAIL_PASSWORD = os.getenv("MAIL_PASSWORD", "password")
MAIL_DEFAULT_SENDER = os.getenv("DEFAULT_EMAIL_SENDER", f"username {MAIL_USERNAME}")
ADMIN_ONE_EMAIL = os.getenv("ADMIN_ONE_EMAIL")
ADMIN_TWO_EMAIL = os.getenv("ADMIN_TWO_EMAIL")
ADMIN_EMAIL_LIST = [ADMIN_ONE_EMAIL, ADMIN_TWO_EMAIL]
class Production(Base):
FLASK_CONFIG = "production"
SQLALCHEMY_DATABASE_URI = os.getenv("DATABASE_URL", generate_sqlite_file("data"))
class Development(Base):
FLASK_CONFIG = "development"
SQLALCHEMY_DATABASE_URI = os.getenv("DATABASE_DEV", generate_sqlite_file("data-dev"))
DEBUG = True
MAIL_SUPPRESS_SEND = True
class Test(Base):
TESTING = True
WTF_CSRF_ENABLED = False
SQLALCHEMY_DATABASE_URI = "sqlite:///:memory:"
MAIL_SUPPRESS_SEND = True
config = {"production": Production, "development": Development, "testing": Test}
| 29.051724 | 89 | 0.719881 | 212 | 1,685 | 5.386792 | 0.330189 | 0.077058 | 0.052539 | 0.049037 | 0.180385 | 0.180385 | 0.070053 | 0 | 0 | 0 | 0 | 0.005678 | 0.163798 | 1,685 | 57 | 90 | 29.561404 | 0.804826 | 0.007122 | 0 | 0.051282 | 1 | 0 | 0.216038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0.153846 | 0.051282 | 0 | 0.846154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
5885818511e1bfabe8d22eb1788a8214c5572568 | 85 | py | Python | pluto/apps.py | rororobby/melody | afae3d0df9b494f85dc0a2dca91b5342350b5860 | [
"MIT"
] | 2 | 2020-11-28T19:16:01.000Z | 2021-01-16T15:47:48.000Z | pluto/apps.py | rororobby/melody | afae3d0df9b494f85dc0a2dca91b5342350b5860 | [
"MIT"
] | null | null | null | pluto/apps.py | rororobby/melody | afae3d0df9b494f85dc0a2dca91b5342350b5860 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class PlutoConfig(AppConfig):
name = 'pluto'
| 14.166667 | 33 | 0.741176 | 10 | 85 | 6.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 85 | 5 | 34 | 17 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
588a57fc4f2b2eab21ad8f89560dcc4347c11739 | 11,386 | py | Python | pylox/builtins/py_builtins.py | sco1/pylox | b4820828306c20cee3f8533c2547fafb92c6c1bd | [
"MIT"
] | 2 | 2021-12-18T01:52:50.000Z | 2022-01-17T19:41:52.000Z | pylox/builtins/py_builtins.py | sco1/pylox | b4820828306c20cee3f8533c2547fafb92c6c1bd | [
"MIT"
] | 18 | 2021-11-30T04:05:53.000Z | 2022-02-01T03:30:04.000Z | pylox/builtins/py_builtins.py | sco1/pylox | b4820828306c20cee3f8533c2547fafb92c6c1bd | [
"MIT"
] | null | null | null | import math
import re
import statistics
import time
import typing as t
from collections import deque
from pathlib import Path
from pylox.callable import LoxCallable, LoxInstance
from pylox.containers.array import LoxArray
from pylox.environment import Environment
from pylox.protocols.interpreter import SourceInterpreterProtocol
from pylox.tokens import Token, TokenType
NUMERIC = t.Union[float, int]
def _lox_arrayize(in_iter: t.Iterable) -> LoxArray:
"""Build a `LoxArray` instance to represent the input iterable."""
out_array = LoxArray(0)
out_array.fields = deque(in_iter)
return out_array
def _arrayize_match(in_match: re.Match | None) -> LoxArray:
"""
Expand the provided match object into a `LoxArray`.
If the match is not empty, the first element of the resulting array is the full match followed
by all group matches, if there was grouping in the pattern.
"""
if in_match:
return _lox_arrayize((in_match[0], *in_match.groups()))
else:
return LoxArray(0)
class BuiltinFunction(LoxCallable): # pragma: no cover
"""Base class for Lox's built-in functions."""
_shortname: str
def __str__(self) -> str:
return f"<builtin fn {self._shortname}>"
class Abs(BuiltinFunction):
_shortname = "abs"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[NUMERIC]) -> NUMERIC:
"""Return the absolute value of a number."""
return abs(arguments[0])
class Array(BuiltinFunction):
_shortname = "array"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[int]) -> LoxArray:
"""Initialize an n-sized Lox array of `None` values."""
return LoxArray(arguments[0])
class Ceil(BuiltinFunction):
_shortname = "ceil"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[NUMERIC]) -> int:
"""Return the smallest number greater than or equal to the input value."""
return math.ceil(arguments[0])
class Clock(BuiltinFunction):
_shortname = "clock"
@property
def arity(self) -> int:
return 0
def call(self, interpreter: SourceInterpreterProtocol, arguments: list) -> float:
"""Return the time in seconds since the epoch as a floating point number."""
return time.time()
class DivMod(BuiltinFunction):
_shortname = "divmod"
@property
def arity(self) -> int:
return 2
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[NUMERIC]) -> LoxArray:
"""Return a `LoxArray` with the quotient and remainder from integer division."""
return _lox_arrayize(divmod(arguments[0], arguments[1]))
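The wrapper delegates directly to Python's built-in `divmod`, which returns a (quotient, remainder) 2-tuple that `_lox_arrayize` then repackages:

```python
q, r = divmod(17, 5)
print(q, r)  # 3 2
# The invariant relating the two results to the inputs:
print(q * 5 + r == 17)  # True
```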
class Floor(BuiltinFunction):
_shortname = "floor"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[NUMERIC]) -> int:
"""Return the smallest number less than or equal to the input value."""
return math.floor(arguments[0])
class Input(BuiltinFunction):
_shortname = "input"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> str:
"""Prompt the user for a line of input using the provided prompt."""
return input(arguments[0])
class Len(BuiltinFunction):
_shortname = "len"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[t.Any]) -> int:
"""
Return the length of the specified object.
This assumes that the underlying Python object defines a `__str__` method.
"""
if isinstance(arguments[0], str):
return len(arguments[0])
if isinstance(arguments[0], LoxInstance):
return len(arguments[0])
raise NotImplementedError(f"Object of type '{type(arguments[0]).__name__}' has no length.")
class Max(BuiltinFunction):
_shortname = "max"
@property
def arity(self) -> int:
return 2
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[NUMERIC]) -> NUMERIC:
"""Return the maximum of the two values."""
return max(arguments)
class Mean(BuiltinFunction):
_shortname = "mean"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[LoxArray]) -> float:
"""Return the sample arithmetic mean of the data in the input `LoxArray`."""
return statistics.mean(arguments[0].fields)
class Median(BuiltinFunction):
_shortname = "median"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[LoxArray]) -> float:
"""
Return the median (middle value) of the input `LoxArray`.
If the input array contains an even number of data points, the median is interpolated by
taking the average of the two middle values.
"""
return statistics.median(arguments[0].fields)
class Min(BuiltinFunction):
_shortname = "min"
@property
def arity(self) -> int:
return 2
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[NUMERIC]) -> NUMERIC:
"""Return the minimum of the two values."""
return min(arguments)
class Mode(BuiltinFunction):
_shortname = "mode"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[LoxArray]) -> float:
"""Return the single most common data point from the input `LoxArray`."""
return statistics.mode(arguments[0].fields)
class Ord(BuiltinFunction):
_shortname = "ord"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> int:
"""Return an integer representing the Unicode code point of the input character."""
return ord(arguments[0])
class ReFindall(BuiltinFunction):
_shortname = "re_findall"
@property
def arity(self) -> int:
return 2
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> LoxArray:
"""
Return all non-overlapping matches of pattern in string, as an array of strings or arrays.
The string is scanned left-to-right, and matches are returned in the order found. Empty
matches are included in the result.
The result depends on the number of capturing groups in the pattern. If there are no groups,
return an array of strings matching the whole pattern. If there is exactly one group, return
an array of strings matching that group. If multiple groups are present, return an array of
arrays of strings matching the groups.
Non-capturing groups do not affect the form of the result.
"""
        matches = re.findall(arguments[0], arguments[1])
        # Guard against no matches: indexing matches[0] would raise IndexError.
        if matches and isinstance(matches[0], tuple):
            return _lox_arrayize((_lox_arrayize(groups) for groups in matches))
        else:
            return _lox_arrayize(matches)
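The branching above exists because the return shape of `re.findall` depends on how many capture groups the pattern has:

```python
import re

# No groups: a flat list of whole-pattern matches.
print(re.findall(r"\d+", "a=1 b=22"))          # ['1', '22']
# Two groups: one tuple of group matches per hit.
print(re.findall(r"(\w)=(\d+)", "a=1 b=22"))   # [('a', '1'), ('b', '22')]
```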
class ReMatch(BuiltinFunction):
_shortname = "re_match"
@property
def arity(self) -> int:
return 2
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> LoxArray:
"""
Match if the pattern matches zero or more characters at the beginning of the string.
The first value in the array will always correspond to `match.group(0)`; if the pattern
contains one or more groups then the array will match the output of `match.groups()`
"""
return _arrayize_match(re.match(arguments[0], arguments[1]))
class ReSearch(BuiltinFunction):
_shortname = "re_search"
@property
def arity(self) -> int:
return 2
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> LoxArray:
"""
Scan through string looking for the first location where the pattern produces a match.
The first value in the array will always correspond to `match.group(0)`; if the pattern
contains one or more groups then the array will match the output of `match.groups()`
"""
return _arrayize_match(re.search(arguments[0], arguments[1]))
class ReSub(BuiltinFunction):
_shortname = "re_sub"
@property
def arity(self) -> int:
return 3
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> str:
"""Replace the leftmost non-overlapping occurrences of the pattern in the given string."""
return re.sub(arguments[0], arguments[1], arguments[2])
class ReadText(BuiltinFunction):
_shortname = "read_text"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> str:
"""Return the contents of the specified file as a string."""
return Path(arguments[0]).read_text()
class Std(BuiltinFunction):
_shortname = "std"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[LoxArray]) -> float:
"""Return the sample standard deviation of the input `LoxArray`."""
return statistics.stdev(arguments[0].fields)
class Str2Num(BuiltinFunction):
_shortname = "str2num"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> NUMERIC:
"""Convert the provided string into an integer or float."""
try:
return int(arguments[0])
except ValueError:
try:
return float(arguments[0])
except ValueError:
pass
raise ValueError(f"Cannot convert '{arguments[0]}' to an integer or float.")
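The conversion order matters: trying `int` before `float` preserves integer inputs as integers. The nested fallback above is equivalent to this flat sketch:

```python
def str2num(s):
    # Try int first so "42" stays an int rather than becoming 42.0.
    try:
        return int(s)
    except ValueError:
        return float(s)  # may itself raise ValueError for non-numeric input

print(str2num("42"), str2num("3.5"))  # 42 3.5
```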
class StringArray(BuiltinFunction):
_shortname = "string_array"
@property
def arity(self) -> int:
return 1
def call(self, interpreter: SourceInterpreterProtocol, arguments: list[str]) -> LoxArray:
"""Break the input string into a `LoxArray` of individual characters."""
return _lox_arrayize(arguments[0])
BUILTIN_MAPPING = {
"abs": Abs(),
"array": Array(),
"ceil": Ceil(),
"clock": Clock(),
"divmod": DivMod(),
"floor": Floor(),
"input": Input(),
"len": Len(),
"max": Max(),
"mean": Mean(),
"median": Median(),
"min": Min(),
"mode": Mode(),
"ord": Ord(),
"re_findall": ReFindall(),
"re_match": ReMatch(),
"re_search": ReSearch(),
"re_sub": ReSub(),
"read_text": ReadText(),
"std": Std(),
"str2num": Str2Num(),
"string_array": StringArray(),
}
def load_builtins(global_environment: Environment) -> Environment:
"""Insert Lox's built-ins into the provided global environment."""
for name, func in BUILTIN_MAPPING.items():
token = Token(TokenType.FUN, name)
global_environment.define(token, func)
return global_environment
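The registration loop is just name-to-callable insertion into the global scope. With a plain dict standing in for pylox's `Environment` (a hypothetical simplification; the real class wraps `Token` keys):

```python
# Hypothetical stand-in: a dict-based environment instead of pylox's Environment.
BUILTIN_MAPPING = {"abs": abs, "max": max}

def load_builtins(env):
    for name, func in BUILTIN_MAPPING.items():
        env[name] = func
    return env

env = load_builtins({})
print(env["abs"](-3), env["max"](1, 2))  # 3 2
```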
| 28.898477 | 100 | 0.657913 | 1,341 | 11,386 | 5.52349 | 0.192394 | 0.031592 | 0.047523 | 0.059403 | 0.398407 | 0.36992 | 0.353989 | 0.345889 | 0.345889 | 0.326178 | 0 | 0.007272 | 0.239153 | 11,386 | 393 | 101 | 28.97201 | 0.847743 | 0.251449 | 0 | 0.392857 | 0 | 0 | 0.047969 | 0.003793 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0.004464 | 0.053571 | 0.102679 | 0.705357 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
5894f59e6a265fbcd2ec93ad68c5b02570c799f3 | 16,492 | py | Python | tests/test_simulation.py | PrincetonUniversity/ASPIRE-Python | 1bff8d3884183203bd77695a76bccb1efc909fd3 | [
"MIT"
] | 7 | 2018-11-07T16:45:35.000Z | 2020-01-10T16:54:26.000Z | tests/test_simulation.py | PrincetonUniversity/ASPIRE-Python | 1bff8d3884183203bd77695a76bccb1efc909fd3 | [
"MIT"
] | 1 | 2019-04-05T18:41:39.000Z | 2019-04-05T18:41:39.000Z | tests/test_simulation.py | PrincetonUniversity/ASPIRE-Python | 1bff8d3884183203bd77695a76bccb1efc909fd3 | [
"MIT"
] | 2 | 2019-06-04T17:01:53.000Z | 2019-07-08T19:01:40.000Z | import os.path
import tempfile
from unittest import TestCase
import numpy as np
from aspire.operators import IdentityFilter, RadialCTFFilter
from aspire.source.relion import RelionSource
from aspire.source.simulation import Simulation
from aspire.utils.types import utest_tolerance
from aspire.volume import Volume
DATA_DIR = os.path.join(os.path.dirname(__file__), "saved_test_data")
class SingleSimTestCase(TestCase):
"""Test we can construct a length 1 Sim."""
def setUp(self):
self.sim = Simulation(
n=1,
L=8,
)
def testImage(self):
"""Test we can get an Image from a length 1 Sim."""
_ = self.sim.images(0, 1)
class SimTestCase(TestCase):
def setUp(self):
self.sim = Simulation(
n=1024,
L=8,
unique_filters=[
RadialCTFFilter(defocus=d) for d in np.linspace(1.5e4, 2.5e4, 7)
],
seed=0,
noise_filter=IdentityFilter(),
dtype="single",
)
def tearDown(self):
pass
def testGaussianBlob(self):
blobs = self.sim.vols.asnumpy()
ref = np.load(os.path.join(DATA_DIR, "sim_blobs.npy"))
self.assertTrue(np.allclose(blobs, ref))
def testSimulationRots(self):
self.assertTrue(
np.allclose(
self.sim.rots[0, :, :],
np.array(
[
[0.91675498, 0.2587233, 0.30433956],
[0.39941773, -0.58404652, -0.70665065],
[-0.00507853, 0.76938412, -0.63876622],
]
),
)
)
def testSimulationImages(self):
images = self.sim.clean_images(0, 512).asnumpy()
self.assertTrue(
np.allclose(
images,
np.load(os.path.join(DATA_DIR, "sim_clean_images.npy")),
rtol=1e-2,
atol=utest_tolerance(self.sim.dtype),
)
)
def testSimulationImagesNoisy(self):
images = self.sim.images(0, 512).asnumpy()
self.assertTrue(
np.allclose(
images,
np.load(os.path.join(DATA_DIR, "sim_images_with_noise.npy")),
rtol=1e-2,
atol=utest_tolerance(self.sim.dtype),
)
)
def testSimulationImagesDownsample(self):
# The simulation already generates images of size 8 x 8; Downsampling to resolution 8 should thus have no effect
self.sim.downsample(8)
images = self.sim.clean_images(0, 512).asnumpy()
self.assertTrue(
np.allclose(
images,
np.load(os.path.join(DATA_DIR, "sim_clean_images.npy")),
rtol=1e-2,
atol=utest_tolerance(self.sim.dtype),
)
)
def testSimulationImagesShape(self):
# The 'images' method should be tolerant of bounds - here we ask for 1000 images starting at index 1000,
        # so we'll get back the remaining 24 images instead
        images = self.sim.images(1000, 1000)
        self.assertEqual(images.shape, (24, 8, 8))
def testSimulationImagesDownsampleShape(self):
self.sim.downsample(6)
first_image = self.sim.images(0, 1)[0]
self.assertEqual(first_image.shape, (6, 6))
def testSimulationEigen(self):
eigs_true, lambdas_true = self.sim.eigs()
self.assertTrue(
np.allclose(
eigs_true[0, :, :, 2],
np.array(
[
[
-1.67666201e-07,
-7.95741380e-06,
-1.49160041e-04,
-1.10151654e-03,
-3.11287888e-03,
-3.09157884e-03,
-9.91418026e-04,
-1.31673165e-04,
],
[
-1.15402077e-06,
-2.49849709e-05,
-3.51658906e-04,
-2.21575261e-03,
-7.83315487e-03,
-9.44795180e-03,
-4.07636259e-03,
-9.02186439e-04,
],
[
-1.88737249e-05,
-1.91418396e-04,
-1.09021540e-03,
-1.02020288e-03,
1.39411855e-02,
8.58035963e-03,
-5.54619730e-03,
-3.86377703e-03,
],
[
-1.21280536e-04,
-9.51461843e-04,
-3.22565017e-03,
-1.05731178e-03,
2.61375736e-02,
3.11595201e-02,
6.40814053e-03,
-2.31698658e-02,
],
[
-2.44067283e-04,
-1.40560151e-03,
-6.73082832e-05,
1.44160679e-02,
2.99893934e-02,
5.92632964e-02,
7.75623545e-02,
3.06570008e-02,
],
[
-1.53507499e-04,
-7.21709803e-04,
8.54929152e-04,
-1.27235036e-02,
-5.34382043e-03,
2.18879692e-02,
6.22706190e-02,
4.51998860e-02,
],
[
-3.00595184e-05,
-1.43038429e-04,
-2.15870258e-03,
-9.99002904e-02,
-7.79077187e-02,
-1.53395887e-02,
1.88777559e-02,
1.68759506e-02,
],
[
3.22692649e-05,
4.07977635e-03,
1.63959339e-02,
-8.68835449e-02,
-7.86240026e-02,
-1.75694861e-02,
3.24984640e-03,
1.95389288e-03,
],
]
),
)
)
def testSimulationMean(self):
mean_vol = self.sim.mean_true()
self.assertTrue(
np.allclose(
[
[
0.00000930,
0.00033866,
0.00490734,
0.01998369,
0.03874487,
0.04617764,
0.02970645,
0.00967604,
],
[
0.00003904,
0.00247391,
0.03818476,
0.12325402,
0.22278425,
0.25246665,
0.14093882,
0.03683474,
],
[
0.00014177,
0.01191146,
0.14421064,
0.38428235,
0.78645319,
0.86522675,
0.44862473,
0.16382280,
],
[
0.00066036,
0.03137806,
0.29226971,
0.97105378,
2.39410496,
2.17099857,
1.23595858,
0.49233940,
],
[
0.00271748,
0.05491289,
0.49955708,
2.05356097,
3.70941424,
3.01578689,
1.51441932,
0.52054572,
],
[
0.00584845,
0.06962635,
0.50568032,
1.99643707,
3.77415895,
2.76039767,
1.04602003,
0.20633197,
],
[
0.00539583,
0.06068972,
0.47008955,
1.17128026,
1.82821035,
1.18743944,
0.30667788,
0.04851476,
],
[
0.00246362,
0.04867788,
0.65284950,
0.65238875,
0.65745538,
0.37955678,
0.08053055,
0.01210055,
],
],
mean_vol[0, :, :, 4],
)
)
def testSimulationVolCoords(self):
coords, norms, inners = self.sim.vol_coords()
self.assertTrue(np.allclose([4.72837704, -4.72837709], coords, atol=1e-4))
self.assertTrue(np.allclose([8.20515764e-07, 1.17550184e-06], norms, atol=1e-4))
self.assertTrue(
np.allclose([3.78030562e-06, -4.20475816e-06], inners, atol=1e-4)
)
def testSimulationCovar(self):
covar = self.sim.covar_true()
result = [
[
-0.00000289,
-0.00005839,
-0.00018998,
-0.00124722,
-0.00003155,
+0.00743356,
+0.00798143,
+0.00303416,
],
[
-0.00000776,
+0.00018371,
+0.00448675,
-0.00794970,
-0.02988000,
-0.00185446,
+0.01786612,
+0.00685990,
],
[
+0.00001144,
+0.00324029,
+0.03364052,
-0.00272520,
-0.08976389,
-0.05404807,
+0.00268740,
-0.03081760,
],
[
+0.00003204,
+0.00909853,
+0.07859941,
+0.07254293,
-0.19365733,
-0.09007251,
-0.15731451,
-0.15690306,
],
[
-0.00040561,
+0.00685139,
+0.11074986,
+0.35207557,
+0.17264650,
-0.16662873,
-0.15010859,
-0.14292650,
],
[
-0.00107461,
-0.00497393,
+0.04630126,
+0.38048555,
+0.47915877,
+0.05379957,
-0.11833663,
-0.03372971,
],
[
-0.00029630,
-0.00485664,
-0.00640120,
+0.22068169,
+0.15419035,
+0.08281200,
+0.03373241,
+0.00103902,
],
[
+0.00044323,
+0.00850533,
+0.09683860,
+0.16959519,
+0.03629097,
+0.03740599,
+0.02212356,
+0.00318127,
],
]
self.assertTrue(np.allclose(result, covar[:, :, 4, 4, 4, 4], atol=1e-4))
def testSimulationEvalMean(self):
mean_est = Volume(np.load(os.path.join(DATA_DIR, "mean_8_8_8.npy")))
result = self.sim.eval_mean(mean_est)
self.assertTrue(np.allclose(result["err"], 2.664116055950763, atol=1e-4))
self.assertTrue(np.allclose(result["rel_err"], 0.1765943704851626, atol=1e-4))
self.assertTrue(np.allclose(result["corr"], 0.9849211540734224, atol=1e-4))
def testSimulationEvalCovar(self):
covar_est = np.load(os.path.join(DATA_DIR, "covar_8_8_8_8_8_8.npy"))
result = self.sim.eval_covar(covar_est)
self.assertTrue(np.allclose(result["err"], 13.322721549011165, atol=1e-4))
self.assertTrue(np.allclose(result["rel_err"], 0.5958936073938558, atol=1e-4))
self.assertTrue(np.allclose(result["corr"], 0.8405347287741631, atol=1e-4))
def testSimulationEvalCoords(self):
mean_est = Volume(np.load(os.path.join(DATA_DIR, "mean_8_8_8.npy")))
eigs_est = Volume(
np.load(os.path.join(DATA_DIR, "eigs_est_8_8_8_1.npy"))[..., 0]
)
clustered_coords_est = np.load(
os.path.join(DATA_DIR, "clustered_coords_est.npy")
)
result = self.sim.eval_coords(mean_est, eigs_est, clustered_coords_est)
self.assertTrue(
np.allclose(
result["err"][:10],
[
1.58382394,
1.58382394,
3.72076112,
1.58382394,
1.58382394,
3.72076112,
3.72076112,
1.58382394,
1.58382394,
1.58382394,
],
)
)
self.assertTrue(
np.allclose(
result["rel_err"][0, :10],
[
0.11048937,
0.11048937,
0.21684697,
0.11048937,
0.11048937,
0.21684697,
0.21684697,
0.11048937,
0.11048937,
0.11048937,
],
)
)
self.assertTrue(
np.allclose(
result["corr"][0, :10],
[
0.99390133,
0.99390133,
0.97658719,
0.99390133,
0.99390133,
0.97658719,
0.97658719,
0.99390133,
0.99390133,
0.99390133,
],
)
)
def testSimulationSaveFile(self):
# Create a tmpdir in a context. It will be cleaned up on exit.
with tempfile.TemporaryDirectory() as tmpdir:
# Save the simulation object into STAR and MRCS files
star_filepath = os.path.join(tmpdir, "save_test.star")
# Save images into one single MRCS file
self.sim.save(
star_filepath, batch_size=512, save_mode="single", overwrite=False
)
imgs_org = self.sim.images(start=0, num=1024)
# Input saved images into Relion object
relion_src = RelionSource(star_filepath, tmpdir, max_rows=1024)
imgs_sav = relion_src.images(start=0, num=1024)
# Compare original images with saved images
self.assertTrue(np.allclose(imgs_org.asnumpy(), imgs_sav.asnumpy()))
# Save images into multiple MRCS files based on batch size
self.sim.save(star_filepath, batch_size=512, overwrite=False)
# Input saved images into Relion object
relion_src = RelionSource(star_filepath, tmpdir, max_rows=1024)
imgs_sav = relion_src.images(start=0, num=1024)
# Compare original images with saved images
self.assertTrue(np.allclose(imgs_org.asnumpy(), imgs_sav.asnumpy()))

# File: src/lib/metrics.py | repo: lab-a1/correct-image-rotation | license: WTFPL
import torch
def accuracy(output, target):
_, predicted = torch.max(output.data, 1)
correct_predictions = (predicted == target).sum().item()
accuracy_result = correct_predictions / target.size(0)
return accuracy_result

# File: helloworld.py | repo: zqli-90s/LearnPythonCode | license: MIT
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# print("hello world!")
# print("100 + 200 + 300 =", 100 + 200 + 300)
#
# name = input("please input a name: ")
# print("hello, ", name)
print("hello, %s! age is %d" % ("zqli", 27))

# File: homeassistant/components/trace/const.py | repo: WizBangCrash/core | license: Apache-2.0
"""Shared constants for automation and script tracing and debugging."""
DATA_TRACE = "trace"
STORED_TRACES = 5 # Stored traces per automation

# File: Utopian Tree .py | repo: SriCharan220800/RomanReigns | license: MIT
t=int(input())
for i in range(t):
n=int(input())
h=0
for i in range(n+1):
if i%2==0:
h += 1
else:
h *= 2
    print(h)

# File: ModelServices/eventTriggerInputGroupingMappingServices.py | repo: tuanldchainos/HcPullData | license: Apache-2.0
from Repository.eventTriggerInputGroupingMappingRepo import eventTriggerInputGroupingMappingRepo
from sqlalchemy import Table
from sqlalchemy.engine.base import Connection
from sqlalchemy.sql.expression import BinaryExpression
class eventTriggerInputGroupingMappingServices():
__eventTriggerInputGroupingMappingRepo: eventTriggerInputGroupingMappingRepo
def __init__(self, eventTriggerInputGroupingMappingTable: Table, context: Connection):
self.__eventTriggerInputGroupingMappingRepo = eventTriggerInputGroupingMappingRepo(eventTriggerInputGroupingMappingTable, context=context)

# File: msr18model/__init__.py | repo: mandoway/dfp | license: Apache-2.0
from .Project import Project
from .Dockerfile import Dockerfile
from .Snapshot import Snapshot
from .Violation import Violation
from .SnapViolationDiff import SnapViolationDiff
from .SnapViolDiffItem import SnapViolDiffItem
from .Diff import Diff
from .DiffType import DiffType
from .MapDiff import MapDiff
from .FailedProjects import FailedProjects
from .FixChange import FixChange

# File: tests/test_util.py | repo: 20c/grainy | license: Apache-2.0
import unittest
from grainy import const, core
class TestUtils(unittest.TestCase):
def test_int_flags(self):
self.assertEqual(core.int_flags("c"), const.PERM_CREATE)
self.assertEqual(core.int_flags("cr"), const.PERM_CREATE | const.PERM_READ)
self.assertEqual(
core.int_flags("cru"),
const.PERM_CREATE | const.PERM_READ | const.PERM_UPDATE,
)
self.assertEqual(
core.int_flags("crud"),
const.PERM_CREATE | const.PERM_READ | const.PERM_UPDATE | const.PERM_DELETE,
)
| 31.444444 | 88 | 0.646643 | 70 | 566 | 5 | 0.342857 | 0.257143 | 0.217143 | 0.251429 | 0.634286 | 0.325714 | 0.245714 | 0.245714 | 0.245714 | 0 | 0 | 0 | 0.243816 | 566 | 17 | 89 | 33.294118 | 0.817757 | 0 | 0 | 0.142857 | 0 | 0 | 0.017668 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
54c57f463b293d019c4981fa4cea412d6dd57abe | 476 | py | Python | core/taming/modules/diffusion/__init__.py | ollietb/VQGAN-CLIP-Docker | f3c6bd44bd1948dd9a5a4fd75d8d43910c9267ce | [
"MIT"
] | 57 | 2021-08-06T19:23:06.000Z | 2022-03-30T04:20:11.000Z | core/taming/modules/diffusion/__init__.py | ollietb/VQGAN-CLIP-Docker | f3c6bd44bd1948dd9a5a4fd75d8d43910c9267ce | [
"MIT"
] | 3 | 2021-09-01T00:31:10.000Z | 2021-11-13T11:27:42.000Z | core/taming/modules/diffusion/__init__.py | ollietb/VQGAN-CLIP-Docker | f3c6bd44bd1948dd9a5a4fd75d8d43910c9267ce | [
"MIT"
] | 12 | 2021-08-08T05:18:38.000Z | 2022-03-24T20:12:56.000Z | from core.taming.modules.diffusion.attn_block import AttnBlock
from core.taming.modules.diffusion.resnet_block import ResnetBlock
from core.taming.modules.diffusion.downsample import Downsample
from core.taming.modules.diffusion.upsample import Upsample
from core.taming.modules.diffusion.encoder import Encoder
from core.taming.modules.diffusion.decoder import Decoder
__all__ = [
AttnBlock,
ResnetBlock,
Downsample,
Upsample,
Encoder,
Decoder,
]
| 25.052632 | 66 | 0.80042 | 57 | 476 | 6.578947 | 0.280702 | 0.128 | 0.224 | 0.336 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130252 | 476 | 18 | 67 | 26.444444 | 0.905797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
49b4f07464dd5e07f11dc493f7687e114779e805 | 1,610 | py | Python | drf_actions/migrations/0003_auto_20210729_0724.py | speechki-book/drf-actions | 7effcf8df2c47ce6a1028ad86f252b3c6378c371 | [
"MIT"
] | null | null | null | drf_actions/migrations/0003_auto_20210729_0724.py | speechki-book/drf-actions | 7effcf8df2c47ce6a1028ad86f252b3c6378c371 | [
"MIT"
] | null | null | null | drf_actions/migrations/0003_auto_20210729_0724.py | speechki-book/drf-actions | 7effcf8df2c47ce6a1028ad86f252b3c6378c371 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-29 07:24
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('drf_actions', '0002_auto_20200618_1044'),
]
operations = [
migrations.AlterField(
model_name='actioncontenttype',
name='content_type',
field=models.CharField(choices=[('user', 'user'), ('book', 'book'), ('publishbook', 'publishbook')], db_index=True, max_length=50),
),
migrations.AlterField(
model_name='actioncontenttype',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='eventjournal',
name='content_type',
field=models.CharField(choices=[('user', 'user'), ('book', 'book'), ('publishbook', 'publishbook')], db_index=True, max_length=50),
),
migrations.AlterField(
model_name='eventjournal',
name='data',
field=models.JSONField(blank=True, null=True),
),
migrations.AlterField(
model_name='eventjournal',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
migrations.AlterField(
model_name='eventjournal',
name='reason',
field=models.CharField(choices=[('INSERT', 'INSERT'), ('UPDATE', 'UPDATE'), ('DELETE', 'DELETE')], db_index=True, max_length=30),
),
]

# File: ch_2/compare_float.py | repo: ProhardONE/python_primer | license: MIT
# Exercise 2.24
# Author: Noah Waterfield Price
a = 1 / 947.0 * 947
b = 1
if a != b:
    print('Wrong result!')
a = 1 / 947.0 * 947
b = 1
if abs(a - b) > 1e-15:
    print('Wrong result!')
"""
Sample run:
python compare_float.py
Wrong result!
"""
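As a companion check that is not part of the original exercise, the standard library's `math.isclose` performs the same kind of tolerant comparison the second `if` above does by hand:

```python
# Standard-library alternative to the manual abs(a - b) > tol check above.
import math

a = 1 / 947.0 * 947
b = 1
# A relative tolerance of 1e-12 comfortably absorbs the rounding error
# introduced by the division and multiplication.
print(math.isclose(a, b, rel_tol=1e-12))
```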

# File: PyObjCTest/test_nsscroller.py | repo: linuxfood/pyobjc-framework-Cocoa-test | license: MIT
import AppKit
from PyObjCTools.TestSupport import TestCase, min_os_level
class TestNSScroller(TestCase):
def testConstants(self):
self.assertEqual(AppKit.NSScrollerArrowsMaxEnd, 0)
self.assertEqual(AppKit.NSScrollerArrowsMinEnd, 1)
self.assertEqual(AppKit.NSScrollerArrowsDefaultSetting, 0)
self.assertEqual(AppKit.NSScrollerArrowsNone, 2)
self.assertEqual(AppKit.NSNoScrollerParts, 0)
self.assertEqual(AppKit.NSOnlyScrollerArrows, 1)
self.assertEqual(AppKit.NSAllScrollerParts, 2)
self.assertEqual(AppKit.NSScrollerNoPart, 0)
self.assertEqual(AppKit.NSScrollerDecrementPage, 1)
self.assertEqual(AppKit.NSScrollerKnob, 2)
self.assertEqual(AppKit.NSScrollerIncrementPage, 3)
self.assertEqual(AppKit.NSScrollerDecrementLine, 4)
self.assertEqual(AppKit.NSScrollerIncrementLine, 5)
self.assertEqual(AppKit.NSScrollerKnobSlot, 6)
self.assertEqual(AppKit.NSScrollerIncrementArrow, 0)
self.assertEqual(AppKit.NSScrollerDecrementArrow, 1)
@min_os_level("10.7")
def testConstants10_7(self):
self.assertEqual(AppKit.NSScrollerStyleLegacy, 0)
self.assertEqual(AppKit.NSScrollerStyleOverlay, 1)
self.assertEqual(AppKit.NSScrollerKnobStyleDefault, 0)
self.assertEqual(AppKit.NSScrollerKnobStyleDark, 1)
self.assertEqual(AppKit.NSScrollerKnobStyleLight, 2)
self.assertIsInstance(AppKit.NSPreferredScrollerStyleDidChangeNotification, str)
def testMethods(self):
self.assertArgIsBOOL(AppKit.NSScroller.drawArrow_highlight_, 1)
self.assertArgIsBOOL(AppKit.NSScroller.drawKnobSlotInRect_highlight_, 1)
self.assertArgIsBOOL(AppKit.NSScroller.highlight_, 0)
@min_os_level("10.7")
def testMethods10_7(self):
self.assertResultIsBOOL(AppKit.NSScroller.isCompatibleWithOverlayScrollers)
| 41.413043 | 88 | 0.751181 | 172 | 1,905 | 8.244186 | 0.337209 | 0.222144 | 0.311001 | 0.108604 | 0.086037 | 0.086037 | 0 | 0 | 0 | 0 | 0 | 0.022599 | 0.16378 | 1,905 | 45 | 89 | 42.333333 | 0.867546 | 0 | 0 | 0.057143 | 0 | 0 | 0.004199 | 0 | 0 | 0 | 0 | 0 | 0.742857 | 1 | 0.114286 | false | 0 | 0.057143 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
49f0213c5e0c381e637716e610a8b058cde65e70 | 9,407 | py | Python | zkay/config.py | nibau/zkay | da04760088767e05214aac2d2beee4cbf8ed77f6 | [
"MIT"
] | null | null | null | zkay/config.py | nibau/zkay | da04760088767e05214aac2d2beee4cbf8ed77f6 | [
"MIT"
] | null | null | null | zkay/config.py | nibau/zkay | da04760088767e05214aac2d2beee4cbf8ed77f6 | [
"MIT"
] | null | null | null | import json
import math
import os
from contextlib import contextmanager
from typing import Dict, Any, ContextManager, List
from semantic_version import NpmSpec
from zkay.compiler.privacy.proving_scheme.meta import provingschemeparams
from zkay.config_user import UserConfig
from zkay.config_version import Versions
from zkay.transaction.crypto.meta import cryptoparams
def zk_print(*args, verbosity_level=1, **kwargs):
if (verbosity_level <= cfg.verbosity) and not cfg.is_unit_test:
print(*args, **kwargs)
def zk_print_banner(title: str):
l = len(title) + 4
zk_print(f'{"#"*l}\n# {title} #\n{"#"*l}\n')
class Config(UserConfig):
def __init__(self):
super().__init__()
# Internal values
self._options_with_effect_on_circuit_output = [
'proving_scheme', 'snark_backend', 'crypto_backend',
'opt_solc_optimizer_runs', 'opt_hash_threshold',
'opt_eval_constexpr_in_circuit', 'opt_cache_circuit_inputs', 'opt_cache_circuit_outputs',
]
self._is_unit_test = False
self._concrete_solc_version = None
def _load_cfg_file_if_exists(self, filename):
if os.path.exists(filename):
with open(filename) as conf:
try:
self.override_defaults(json.load(conf))
except ValueError as e:
                    raise ValueError(f'{e} (in file "{filename}")')
def load_configuration_from_disk(self, local_cfg_file: str):
# Load global configuration file
global_config_dir = self._appdirs.site_config_dir
global_cfg_file = os.path.join(global_config_dir, 'config.json')
self._load_cfg_file_if_exists(global_cfg_file)
# Load user configuration file
user_config_dir = self._appdirs.user_config_dir
user_cfg_file = os.path.join(user_config_dir, 'config.json')
self._load_cfg_file_if_exists(user_cfg_file)
# Load local configuration file
self._load_cfg_file_if_exists(local_cfg_file)
def override_defaults(self, overrides: Dict[str, Any]):
for arg, val in overrides.items():
if not hasattr(self, arg):
raise ValueError(f'Tried to override non-existing config value {arg}')
try:
setattr(self, arg, val)
except ValueError as e:
raise ValueError(f'{e} (for entry "{arg}")')
def export_compiler_settings(self) -> dict:
out = {}
for k in self._options_with_effect_on_circuit_output:
out[k] = getattr(self, k)
return out
def import_compiler_settings(self, vals: dict):
for k in vals:
if k not in self._options_with_effect_on_circuit_output:
raise KeyError(f'vals contains unknown option "{k}"')
setattr(self, k, vals[k])
@contextmanager
def library_compilation_environment(self) -> ContextManager:
"""Use this fixed configuration compiling libraries to get reproducible output."""
old_solc, old_opt_runs = self.solc_version, self.opt_solc_optimizer_runs
self.override_solc(self.library_solc_version)
self.opt_solc_optimizer_runs = 1000
yield
self.opt_solc_optimizer_runs = old_opt_runs
self.override_solc(old_solc)
@property
def library_solc_version(self) -> str:
# Note: Changing this version breaks compatibility with already deployed library contracts
return Versions.ZKAY_LIBRARY_SOLC_VERSION
@property
def zkay_version(self) -> str:
"""zkay version number"""
return Versions.ZKAY_VERSION
@property
def zkay_solc_version_compatibility(self) -> NpmSpec:
"""Target solidity language level for the current zkay version"""
return Versions.ZKAY_SOLC_VERSION_COMPATIBILITY
@property
def solc_version(self) -> str:
version = Versions.SOLC_VERSION
assert version is not None and version != 'latest'
return version
@staticmethod
def override_solc(new_version):
Versions.set_solc_version(new_version)
@property
def key_bits(self) -> int:
return cryptoparams[self.crypto_backend]['key_bits']
@property
def key_bytes(self) -> int:
return self.key_bits // 8
@property
def rnd_bytes(self) -> int:
return cryptoparams[self.crypto_backend]['rnd_bytes']
@property
def cipher_bytes_payload(self) -> int:
return cryptoparams[self.crypto_backend]['cipher_payload_bytes']
@property
def cipher_bytes_meta(self) -> int:
return cryptoparams[self.crypto_backend]['cipher_meta_bytes']
def is_symmetric_cipher(self) -> bool:
return cryptoparams[self.crypto_backend]['symmetric']
@property
def cipher_payload_len(self) -> int:
return int(math.ceil(self.cipher_bytes_payload / self.cipher_chunk_size))
@property
def cipher_len(self) -> int:
if self.is_symmetric_cipher():
return self.cipher_payload_len + 1 # Additional uint to store sender address
else:
return self.cipher_payload_len
@property
def key_len(self) -> int:
return 1 if self.is_symmetric_cipher() else int(math.ceil(self.key_bytes / self.cipher_chunk_size))
@property
def randomness_len(self) -> int:
return 0 if self.is_symmetric_cipher() else int(math.ceil(self.rnd_bytes / self.rnd_chunk_size))
@property
def proof_len(self) -> int:
return provingschemeparams[self.proving_scheme]['proof_len']
@property
def external_crypto_lib_names(self) -> List[str]:
"""Names of all solidity libraries in verify_libs.sol, which need to be linked against."""
return provingschemeparams[self.proving_scheme]['external_sol_libs']
def should_use_hash(self, circuit: 'CircuitHelper') -> bool:
"""
This function determines whether input hashing is used for a particular circuit.
:return: if true, all public circuit inputs are passed as private inputs into the circuit and only their combined hash-
value is passed as a public input. This makes verification constant-cost,
but increases offchain resource usage during key and proof generation.
"""
pub_arg_size = circuit.trans_in_size + circuit.trans_out_size
return pub_arg_size > self.opt_hash_threshold
@property
def reserved_name_prefix(self) -> str:
"""
Identifiers in user code must not start with this prefix.
This is to ensure that user code does not interfere with the additional code generated by the zkay compiler.
"""
return 'zk__'
@property
def reserved_conflict_resolution_suffix(self) -> str:
"""
Identifiers in user code must not end with this suffix.
This is used for resolving conflicts with python globals in the generated offchain simulation code.
"""
return '_zalt'
def get_internal_name(self, fct) -> str:
if fct.requires_verification_when_external:
return f'_{self.reserved_name_prefix}{fct.name}'
else:
return fct.name
def get_verification_contract_name(self, contract: str, fct: str):
return f'{cfg.reserved_name_prefix}Verify_{contract}_{fct}'
def get_circuit_output_dir_name(self, verifier_name: str) -> str:
"""Return the output directory for an individual circuit"""
return f'{verifier_name}_out'
@staticmethod
def get_contract_var_name(type_name: str) -> str:
"""
Return an identifier referring to the address variable of verification contract of type 'type_name'
:param type_name: name of the unqualified verification contract type
:return: new identifier
"""
return f'{type_name}_inst'
@property
def pki_contract_name(self) -> str:
return f'{self.reserved_name_prefix}PublicKeyInfrastructure'
@property
def zk_out_name(self) -> str:
return f'{self.reserved_name_prefix}out'
@property
def zk_in_name(self) -> str:
return f'{self.reserved_name_prefix}in'
@property
def proof_param_name(self) -> str:
return f'{self.reserved_name_prefix}proof'
@property
def return_var_name(self) -> str:
return f'{self.reserved_name_prefix}ret'
@property
def field_prime_var_name(self) -> str:
return f'{self.reserved_name_prefix}field_prime'
@property
def prover_key_hash_name(self) -> str:
return f'{self.reserved_name_prefix}prover_key_hash'
@property
def zk_struct_prefix(self) -> str:
return f'{self.reserved_name_prefix}data'
@property
def zk_data_var_name(self) -> str:
return f'{self.zk_struct_prefix}'
@property
def jsnark_circuit_classname(self) -> str:
return 'ZkayCircuit'
@property
def verification_function_name(self) -> str:
return 'check_verify'
@property
def cipher_chunk_size(self) -> int:
return cryptoparams[self.crypto_backend]['cipher_chunk_size']
@property
def rnd_chunk_size(self) -> int:
return cryptoparams[self.crypto_backend]['rnd_chunk_size']
@property
def is_unit_test(self) -> bool:
return self._is_unit_test
cfg = Config()
Versions.set_solc_version('latest')

# File: Aula20/exercicio3.py | repo: marcelabbc07/TrabalhosPython | license: MIT
# Lesson 20 - 05-12-2019
# Surgiu a necessidade de envio massivo de e-mails dos clientes cadastrados
# no arquivo cadastro1.txt
# >>>> Fazer tudo com metodos <<<<<
# 1 - Para isso o programa necessita que separe os clientes maiores de 20 anos
# em um arquivo separado chamado menores_de_idade.txt
# 2 - Separar os clientes femininos e salvar em um arquivo chamado feminini.txt
# 3 - Fazer um terminal de consulta onde se digita o código cliente e
# imprima na tela o (f-string) o codigo, nome, idade, sexo, email, telefone.
# Se digitar um número que não exista, deverá aparecer uma mensagem dizendo
# "código não encontrado!" Se digitar 'S' (sair) o programa deve finalizar.
| 56.416667 | 79 | 0.750369 | 110 | 677 | 4.6 | 0.709091 | 0.035573 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028777 | 0.17873 | 677 | 11 | 80 | 61.545455 | 0.881295 | 0.966027 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.090909 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b72c6bcb79db8c3ae7539f7378565c1640d4aaa3 | 152 | py | Python | polrev/volunteers/apps.py | polrev-github/polrev-django | 99108ace1a5307b14c3eccb424a9f9616e8c02ae | [
"MIT"
] | 1 | 2021-12-10T05:54:16.000Z | 2021-12-10T05:54:16.000Z | polrev/volunteers/apps.py | polrev-github/polrev-django | 99108ace1a5307b14c3eccb424a9f9616e8c02ae | [
"MIT"
] | null | null | null | polrev/volunteers/apps.py | polrev-github/polrev-django | 99108ace1a5307b14c3eccb424a9f9616e8c02ae | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class VolunteersConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'volunteers'
| 21.714286 | 56 | 0.769737 | 17 | 152 | 6.764706 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144737 | 152 | 6 | 57 | 25.333333 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0.256579 | 0.190789 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
b73740c5f44efa16fad64b46b64e5616b9a26657 | 24,860 | py | Python | tests.py | KyleKing/implements | 781f5ee51093bdfc0bdda915c985ef6d2d2c0f46 | [
"Apache-2.0"
] | 29 | 2017-05-28T02:53:26.000Z | 2021-12-22T17:43:30.000Z | tests.py | KyleKing/implements | 781f5ee51093bdfc0bdda915c985ef6d2d2c0f46 | [
"Apache-2.0"
] | 23 | 2019-04-12T11:45:04.000Z | 2022-01-18T12:53:14.000Z | tests.py | KyleKing/implements | 781f5ee51093bdfc0bdda915c985ef6d2d2c0f46 | [
"Apache-2.0"
] | 5 | 2017-07-23T02:19:26.000Z | 2021-06-30T01:54:43.000Z | # Copyright 2017-2020 Kamil Sindi
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import pytest
from implements import Interface, implements, get_mro
py36 = pytest.mark.skipif(sys.version_info < (3, 6), reason='requires py3.6')
def test_empty():
class FooInterface(Interface):
pass
@implements(FooInterface)
class FooImplementation:
pass
def test_with_args_kwargs():
class FooInterface(Interface):
def foo(self, a, *args, b=1, **kwargs):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, a, *args, b=7):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, a, *args, b=1, **kwargs):
pass
def test_with_kwarg_only():
class FooInterface(Interface):
def foo(self, a, *, b):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, a, b):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, a, *, b):
pass
def test_property():
class FooInterface(Interface):
@property
def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
@property
def foo(self):
pass
def test_property_inverse():
class FooInterface(Interface):
def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
@property
def foo(self):
pass
def test_setters():
class FooInterface(Interface):
@property
def foo(self):
pass
@foo.setter
def foo(self, val):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
@property
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
@property
def foo(self):
pass
@foo.setter
def foo(self, val):
pass
def test_deleters():
class FooInterface(Interface):
@property
def foo(self):
pass
@foo.deleter
def foo(self, val):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
@property
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
@property
def foo(self):
pass
@foo.deleter
def foo(self, val):
pass
def test_implementation_implements_more_descriptors():
class FooInterface(Interface):
@property
def foo(self):
pass
# An implementation must implement every data descriptor defined in the
# interface; however, it may also define more.
#
# The case below must not generate errors because FooImplementationPass
# defines a foo.setter which isn't defined by FooInterface
@implements(FooInterface)
class FooImplementationPass:
@property
def foo(self):
pass
@foo.setter
def foo(self, val):
pass
def test_missing_method():
class FooInterface(Interface):
def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self):
pass
def test_missing_argument():
class FooInterface(Interface):
def foo(self, arg):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, arg):
pass
def test_renamed_argument():
class FooInterface(Interface):
def foo(self, arg):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, arrrrg):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, arg):
pass
def test_extra_argument():
class FooInterface(Interface):
def foo(self, arg):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, arg, ument):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, arg):
pass
def test_different_defaults():
class FooInterface(Interface):
def foo(self, arg=7):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, arg=8):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, arg=7):
pass
def test_different_order():
class FooInterface(Interface):
def foo(self, a, b):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, b, a):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, a, b):
pass
def test_missing_kwargs():
class FooInterface(Interface):
def foo(self, **kwargs):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, **kwargs):
pass
def test_missing_property():
class FooInterface(Interface):
@property
def foo(self):
pass
with pytest.raises(NotImplementedError): # missing method
@implements(FooInterface)
class FooImplementationFail1: # skipcq: PYL-W0612
pass
with pytest.raises(NotImplementedError): # missing property decorator
@implements(FooInterface)
class FooImplementationFail2: # skipcq: PYL-W0612
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
@property
def foo(self):
pass
def test_bad_constructor():
class FooInterface(Interface):
def __init__(self, a):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def __init__(self):
pass
@implements(FooInterface)
class FooImplementationPass:
def __init__(self, a):
pass
def test_multiple_errors():
class FooInterface(Interface):
@property
def foo(self):
pass
def __init__(self, a):
pass
# Bad constructor, missing method getter, and missing class attribute (3)
match = r'^Found 3 errors in implementation:\n- .+\n- .+\n- .+\nwith .+'
with pytest.raises(NotImplementedError, match=match):
@implements(FooInterface)
class FooImplementationFail: # skipcq: PYL-W0612
def __init__(self):
pass
def test_static():
class FooInterface(Interface):
@staticmethod
def foo(a, b, c):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail1: # skipcq: PYL-W0612
pass # missing foo
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail2: # skipcq: PYL-W0612
# skipcq: PYL-E0213
def foo(a, b, c): # missing staticmethod decorator
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail3: # skipcq: PYL-W0612
@classmethod # classmethod instead of staticmethod
def foo(cls, a, b, c): # decorator-check fails before signature
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail4: # skipcq: PYL-W0612
@staticmethod
def foo(m, n, o): # staticmethod, but wrong signature
pass
@implements(FooInterface)
class FooImplementationPass:
@staticmethod
def foo(a, b, c):
pass
def test_classmethods():
class FooInterface(Interface):
@classmethod
def foo(cls, a, b, c):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail1: # skipcq: PYL-W0612
pass # missing foo
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail2: # skipcq: PYL-W0612
# skipcq: PYL-E0213
def foo(cls, a, b, c): # missing classmethod decorator
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail3: # skipcq: PYL-W0612
@staticmethod # staticmethod instead of classmethod
def foo(a, b, c): # decorator-check fails before signature
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail4: # skipcq: PYL-W0612
@classmethod
def foo(cls, m, n, o): # classmethod, but wrong signature
pass
@implements(FooInterface)
class FooImplementationPass:
@classmethod
def foo(cls, a, b, c):
pass
def test_classmethod_signature_match():
# For a classmethod, inspect.signature returns a signature with the first
# parameter (cls) stripped, so a classmethod with signature (cls, a, b, c)
# compares equal to a regular method with signature (a, b, c)
#
# Example:
from inspect import signature
class TestA:
@classmethod
def foo(cls, a, b, c):
pass
class TestB:
# skipcq: PYL-E0213
def foo(a, b, c):
pass
assert signature(TestA.foo) == signature(TestB.foo)
# The test below ensures that the above case is flagged
class FooInterface(Interface):
@classmethod
def foo(cls, a, b, c):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
# skipcq: PYL-E0213
def foo(a, b, c):
pass
def test_staticmethod_classmethod_with_decorator():
class FooBarInterface(Interface):
@staticmethod
def foo(a, b, c):
pass
@classmethod
def bar(cls, a, b, c):
pass
import functools
def decorator(func):
@functools.wraps(func)
def inner(*args, **kwargs):
return func(*args, **kwargs)
return inner
@implements(FooBarInterface)
class FooBarImplementationPass:
@staticmethod
@decorator
def foo(a, b, c):
pass
@classmethod
@decorator
def bar(cls, a, b, c):
pass
def test_kwargs_only():
class FooInterface(Interface):
def foo(self, *, a):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementation:
def foo(self, a):
pass
def test_multiple_interfaces():
class FooInterface(Interface):
def foo(self):
pass
class BarInterface(Interface):
def bar(self):
pass
with pytest.raises(NotImplementedError):
@implements(BarInterface)
@implements(FooInterface)
class FooImplementationNoBar:
def foo(self, a):
pass
with pytest.raises(NotImplementedError):
@implements(BarInterface)
@implements(FooInterface)
class FooImplementationNoFoo:
def bar(self, a):
pass
@implements(BarInterface)
@implements(FooInterface)
class FooImplementation:
def foo(self):
pass
def bar(self):
pass
def test_interface_name_collision():
class Foo1Interface(Interface):
def foo(self):
pass
class Foo2Interface(Interface):
def foo(self):
pass
@implements(Foo2Interface)
@implements(Foo1Interface)
class FooImplementation:
def foo(self):
pass
def test_interface_name_and_signature_collision():
class Foo1Interface(Interface):
def foo(self):
pass
class Foo2Interface(Interface):
def foo(self) -> str:
return 'foo'
# Two interfaces with different signatures for a given method will
# always result in failure for the implementing class, as the
# implemented method's signature can only satisfy one of the interfaces.
with pytest.raises(NotImplementedError):
@implements(Foo2Interface)
@implements(Foo1Interface)
class FooImplementationFail:
def foo(self):
pass
def test_interface_inheritance():
class BaseInterface(Interface):
def bar(self):
pass
class FooInterface(BaseInterface):
def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self):
pass
def bar(self):
pass
def test_class_inheritance():
class FooInterface(Interface):
def foo(self):
pass
@implements(FooInterface)
class ParentImplementation:
def foo(self):
pass
@implements(FooInterface)
class ChildImplementation(ParentImplementation):
pass
def test_class_multiple_inheritance():
# --------- INTERFACES -----------------------------------------------
#
class FooInterface(Interface):
def foo(self, final):
pass
class BarInterface(Interface):
def bar(self, final):
pass
class FooBarInterface(FooInterface, BarInterface):
pass
# --------- IMPLEMENTATION -------------------------------------------
#
class BaseFooImplementation: # must get overridden
def foo(self, override, my, args):
pass
@implements(FooInterface)
class FooImplementation(BaseFooImplementation):
def foo(self, final): # skipcq: PYL-W0221
pass
@implements(BarInterface)
class BarImplementation:
def bar(self, final):
pass
with pytest.raises(NotImplementedError):
@implements(FooBarInterface)
class SubFooImplementation(FooImplementation): # foo, no bar
pass
@implements(FooInterface)
@implements(BarInterface)
@implements(FooBarInterface)
class FooBarImplementation(FooImplementation, BarImplementation):
pass
def test_rtn_type_annotation():
class FooInterface(Interface):
def foo(self) -> str:
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self) -> int:
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self) -> str:
pass
def test_arg_type_annotation():
class FooInterface(Interface):
def foo(self, arg: str):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
def foo(self, arg: int):
pass
@implements(FooInterface)
class FooImplementationPass:
def foo(self, arg: str):
pass
def test_other_decorator_compat():
def decorator(cls):
class Wrapper:
def __init__(self, *args):
self.wrapped = cls(*args)
def __getattr__(self, name):
print('Getting the {} of {}'.format(name, self.wrapped))
return getattr(self.wrapped, name, None)
return Wrapper
class FooInterface(Interface):
def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
@decorator
class FooImplementationFail:
def __init__(self, x, y):
self.x = x
self.y = y
def foo(self):
pass
@decorator
@implements(FooInterface)
class FooImplementationPass:
def __init__(self, x, y):
self.x = x
self.y = y
def foo(self):
pass
def test_magic_methods():
class FooInterface(Interface):
def __add__(self, other):
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
pass
@implements(FooInterface)
class FooImplementationPass:
def __add__(self, other):
pass
def test_attributes():
class FooInterface(Interface):
a = None
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
pass
@implements(FooInterface)
class FooImplementationPass:
a = 1
b = 2
def test_async():
class AsyncInterface:
async def __aenter__(self):
return self
async def __aexit__(self, *args, **kwargs):
pass
with pytest.raises(NotImplementedError):
@implements(AsyncInterface)
class AsyncImplementation:
pass
def test_async_method():
class AsyncFooInterface:
async def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(AsyncFooInterface)
class FooImplementationFail: # skipcq: PYL-W0612
def foo(self):
pass
@implements(AsyncFooInterface)
class AsyncFooImplementation: # skipcq: PYL-W0612
async def foo(self):
pass
def test_generator():
class GenFooInterface:
def foo(self): # skipcq: PYL-R0201
yield 1
with pytest.raises(NotImplementedError):
@implements(GenFooInterface)
class FooImplementationFail: # skipcq: PYL-W0612
def foo(self):
pass
# must fail a generator which happens to be async
with pytest.raises(NotImplementedError):
@implements(GenFooInterface)
class AsyncGenFooImplementationFail: # skipcq: PYL-W0612
async def foo(self):
yield 1
@implements(GenFooInterface)
class GenFooImplementation: # skipcq: PYL-W0612
def foo(self): # skipcq: PYL-R0201
yield 1
def test_asyncgen_method():
class AsyncGenFooInterface:
async def foo(self):
yield 1
with pytest.raises(NotImplementedError):
@implements(AsyncGenFooInterface)
class AsyncFooImplementationFail: # skipcq: PYL-W0612
async def foo(self):
pass
with pytest.raises(NotImplementedError):
@implements(AsyncGenFooInterface)
class GenFooImplementationFail: # skipcq: PYL-W0612
def foo(self): # skipcq: PYL-R0201
yield 1
@implements(AsyncGenFooInterface)
class AsyncGenFooImplementation: # skipcq: PYL-W0612
async def foo(self):
yield 1
@py36
def test_new_style_descriptors():
class IntField:
def __get__(self, instance, owner):
return instance.__dict__[self.name]
def __set__(self, instance, value):
if not isinstance(value, int):
raise ValueError('expecting integer in {}'.format(self.name))
instance.__dict__[self.name] = value
def __set_name__(self, owner, name):
self.name = name # skipcq: PYL-W0201
class FooInterface(Interface):
int_field = IntField()
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
pass
@implements(FooInterface)
class FooImplementationPass:
int_field = IntField()
@py36
def test_new_style_metaclasses():
class Polygon:
def __init_subclass__(cls, sides, **kwargs):
cls.sides = sides
if cls.sides < 3:
raise ValueError('polygons need 3+ sides')
@classmethod
def interior_angles(cls):
return (cls.sides - 2) * 180
class PolygonInterface(Interface):
def rotate(self):
pass
@implements(PolygonInterface)
class Triangle(Polygon, sides=3):
def rotate(self):
pass
def test_descriptors_signature_getter():
class FooInterface(Interface):
@property
def someprop(self) -> str:
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
@property
def someprop(self) -> int:
pass
def test_descriptors_signature_setter():
class FooInterface(Interface):
@property
def someprop(self):
pass
@someprop.setter
def someprop(self, value: str) -> str:
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
@property
def someprop(self):
pass
@someprop.setter
def someprop(self, value: int) -> float:
pass
def test_descriptors_signature_deleter():
class FooInterface(Interface):
@property
def someprop(self):
pass
@someprop.deleter
def someprop(self) -> str:
pass
with pytest.raises(NotImplementedError):
@implements(FooInterface)
class FooImplementationFail:
@property
def someprop(self):
pass
@someprop.deleter
def someprop(self) -> int:
pass
def test_get_mro():
class RegularClass:
pass
mro = get_mro(RegularClass)
assert object not in mro
expected = RegularClass.mro()[:-1]
assert mro == expected
def test_class_hierarchy_overlap_of_common_class():
class CommonClass:
pass
class FooInterface(CommonClass):
def abc(self) -> str:
pass
with pytest.raises(ValueError):
@implements(FooInterface)
class FooImplemenation(CommonClass):
def abc(self) -> str:
pass
def test_implementation_inheriting_from_interface():
class FooInterface:
def abc(self) -> str:
pass
with pytest.raises(ValueError):
@implements(FooInterface)
class FooImplemenation(FooInterface):
def abc(self) -> str:
pass
| 25.264228 | 77 | 0.587932 | 2,276 | 24,860 | 6.336995 | 0.138401 | 0.042848 | 0.06032 | 0.114054 | 0.683561 | 0.647299 | 0.618873 | 0.538238 | 0.491299 | 0.446162 | 0 | 0.010585 | 0.331134 | 24,860 | 983 | 78 | 25.289929 | 0.856808 | 0.097546 | 0 | 0.797784 | 0 | 0 | 0.006396 | 0 | 0 | 0 | 0 | 0 | 0.004155 | 1 | 0.259003 | false | 0.235457 | 0.006925 | 0.00554 | 0.484765 | 0.001385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
3f8079857f369f246815ddc3dc776c04aeee5ff1 | 1,045 | py | Python | python/tests/test_builder_pattern.py | imsardine/learning | 925841ddd93d60c740a62e12d9f57ef15b6e0a20 | [
"MIT"
] | null | null | null | python/tests/test_builder_pattern.py | imsardine/learning | 925841ddd93d60c740a62e12d9f57ef15b6e0a20 | [
"MIT"
] | null | null | null | python/tests/test_builder_pattern.py | imsardine/learning | 925841ddd93d60c740a62e12d9f57ef15b6e0a20 | [
"MIT"
] | null | null | null | from datetime import datetime
class LogRecord:
def __init__(self, time, level, message):
self._time = time
self._level = level
self._message = message
@property
def time(self):
return self._time
@property
def level(self):
return self._level
@property
def message(self):
return self._message
class LogRecordBuilder:
def __init__(self):
self._time = datetime.now()
self._level = 'DEBUG'
self._message = 'bla bla bla ...'
def time(self, time):
self._time = time
return self
def level(self, level):
self._level = level
return self
def message(self, message):
self._message = message
return self
def build(self):
return LogRecord(self._time, self._level, self._message)
def test_builder_for_testing():
record = LogRecordBuilder().time(datetime(2014, 9, 1)).level('WARNING').build()
assert record.time.year == 2014
assert record.level == 'WARNING'
| 21.326531 | 83 | 0.61244 | 121 | 1,045 | 5.07438 | 0.239669 | 0.091205 | 0.068404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013369 | 0.284211 | 1,045 | 48 | 84 | 21.770833 | 0.807487 | 0 | 0 | 0.342857 | 0 | 0 | 0.032567 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 1 | 0.285714 | false | 0 | 0.028571 | 0.114286 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3f86cd1c0cd5f083fe9342fa3f362925c1904dcc | 482 | py | Python | tests/step25_tests.py | svaningelgem/advent_of_code_2021 | 80351508d6d6953392bc57af20e1fac05ab3ec2a | [
"MIT"
] | null | null | null | tests/step25_tests.py | svaningelgem/advent_of_code_2021 | 80351508d6d6953392bc57af20e1fac05ab3ec2a | [
"MIT"
] | null | null | null | tests/step25_tests.py | svaningelgem/advent_of_code_2021 | 80351508d6d6953392bc57af20e1fac05ab3ec2a | [
"MIT"
] | null | null | null | from pathlib import Path
TEST_INPUT = Path(__file__).parent / 'step25.txt'
REAL_INPUT = Path(__file__).parent.parent / 'src/step25.txt'
def test_step25():
# assert calculate_position() == 150
pass
def test_step25_real_data():
# assert calculate_position() == 2039256
pass
def test_step25_part2():
# assert calculate_position_with_aim() == 900
pass
def test_step25_part2_real_data():
# assert calculate_position_with_aim() == 1856459736
pass
| 19.28 | 60 | 0.715768 | 62 | 482 | 5.112903 | 0.403226 | 0.088328 | 0.164038 | 0.160883 | 0.451104 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093671 | 0.180498 | 482 | 24 | 61 | 20.083333 | 0.708861 | 0.348548 | 0 | 0.363636 | 0 | 0 | 0.07767 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0.363636 | 0.090909 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
3f90ddaf6c49e364e14a5efa42eb889c3d07ef27 | 229 | py | Python | neat/function_reporter.py | ykeuter/neat-python | 8a4385624f759eb9f66c7a9a071f0789edff4d95 | [
"BSD-3-Clause"
] | null | null | null | neat/function_reporter.py | ykeuter/neat-python | 8a4385624f759eb9f66c7a9a071f0789edff4d95 | [
"BSD-3-Clause"
] | null | null | null | neat/function_reporter.py | ykeuter/neat-python | 8a4385624f759eb9f66c7a9a071f0789edff4d95 | [
"BSD-3-Clause"
] | null | null | null | from neat.reporting import BaseReporter
class FunctionReporter(BaseReporter):
def __init__(self, handle):
self._handle = handle
def end_generation(self, config, population, species_set):
self._handle()
| 22.9 | 62 | 0.720524 | 25 | 229 | 6.28 | 0.68 | 0.191083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196507 | 229 | 9 | 63 | 25.444444 | 0.853261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3fa5af38d8c73c64a9f2683a0ec3565464d69f98 | 19 | py | Python | src/urh/util/__init__.py | awesome-archive/urh | c8c3aabc9d637ca660d8c72c3d8372055e0f3ec7 | [
"Apache-2.0"
] | 1 | 2017-06-21T02:37:16.000Z | 2017-06-21T02:37:16.000Z | src/urh/util/__init__.py | dspmandavid/urh | 30643c1a68634b1c97eb9989485a4e96a3b038ae | [
"Apache-2.0"
] | null | null | null | src/urh/util/__init__.py | dspmandavid/urh | 30643c1a68634b1c97eb9989485a4e96a3b038ae | [
"Apache-2.0"
] | null | null | null | __author__ = 'joe'
| 9.5 | 18 | 0.684211 | 2 | 19 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3faf02ac2273b8c8dc8b8bb1c94269b19f9ab331 | 129 | py | Python | backend/payments/urls.py | BloomTech-Labs/Labs8-OfflineReader | be56741c59a9675dd9c1364155a0ea1027d8b28e | [
"MIT"
] | 10 | 2018-11-05T22:29:06.000Z | 2020-02-19T16:30:19.000Z | backend/payments/urls.py | jtwray/Labs8-OfflineReader | be56741c59a9675dd9c1364155a0ea1027d8b28e | [
"MIT"
] | 22 | 2018-11-07T17:17:38.000Z | 2021-09-01T01:51:26.000Z | backend/payments/urls.py | jtwray/Labs8-OfflineReader | be56741c59a9675dd9c1364155a0ea1027d8b28e | [
"MIT"
] | 5 | 2019-03-11T04:01:30.000Z | 2020-01-30T22:56:47.000Z | from django.urls import path
from .views import checkout
urlpatterns = [
path('create-charge/', checkout, name="cout"),
]
| 14.333333 | 50 | 0.697674 | 16 | 129 | 5.625 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170543 | 129 | 8 | 51 | 16.125 | 0.841122 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3fbfa4d6dc63081d71f48631512befbdec626928 | 77 | py | Python | magda/utils/logger/__init__.py | p-mielniczuk/magda | 6359fa5721b4e27bd98f2c6af0e858b476645618 | [
"Apache-2.0"
] | 8 | 2021-02-25T14:00:25.000Z | 2022-03-10T00:32:43.000Z | magda/utils/logger/__init__.py | p-mielniczuk/magda | 6359fa5721b4e27bd98f2c6af0e858b476645618 | [
"Apache-2.0"
] | 22 | 2021-03-24T11:56:47.000Z | 2021-11-02T15:09:50.000Z | magda/utils/logger/__init__.py | p-mielniczuk/magda | 6359fa5721b4e27bd98f2c6af0e858b476645618 | [
"Apache-2.0"
] | 6 | 2021-04-06T07:26:47.000Z | 2021-12-07T18:55:52.000Z | from magda.utils.logger.logger import MagdaLogger
__all__ = ['MagdaLogger']
| 19.25 | 49 | 0.792208 | 9 | 77 | 6.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 77 | 3 | 50 | 25.666667 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3fc010849f3400db96af61ec2e6dfcafd4498abd | 439 | py | Python | rl_sandbox/priors/gaussian.py | chanb/rl_sandbox_public | e55f954a29880f83a5b0c3358badda4d900f1564 | [
"MIT"
] | 14 | 2020-11-09T22:05:37.000Z | 2022-02-11T12:41:33.000Z | rl_sandbox/priors/gaussian.py | chanb/rl_sandbox_public | e55f954a29880f83a5b0c3358badda4d900f1564 | [
"MIT"
] | null | null | null | rl_sandbox/priors/gaussian.py | chanb/rl_sandbox_public | e55f954a29880f83a5b0c3358badda4d900f1564 | [
"MIT"
] | null | null | null | import torch
from torch.distributions import Normal
from rl_sandbox.constants import CPU
class GaussianPrior:
def __init__(self, loc, scale, device=torch.device(CPU)):
self.device = device
self.dist = Normal(loc=loc, scale=scale)
def sample(self, num_samples):
return self.dist.rsample(sample_shape=num_samples).to(self.device)
def lprob(self, samples):
return self.dist.log_prob(samples)
| 24.388889 | 74 | 0.710706 | 60 | 439 | 5.05 | 0.466667 | 0.079208 | 0.112211 | 0.138614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191344 | 439 | 17 | 75 | 25.823529 | 0.853521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.272727 | 0.181818 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3fc2ecdc29177b616df5342cd72fee12b0128df5 | 157 | py | Python | python/deep_learning/FUNCTIONAL/1.py | SayanGhoshBDA/code-backup | 8b6135facc0e598e9686b2e8eb2d69dd68198b80 | [
"MIT"
] | 16 | 2018-11-26T08:39:42.000Z | 2019-05-08T10:09:52.000Z | python/deep_learning/FUNCTIONAL/1.py | SayanGhoshBDA/code-backup | 8b6135facc0e598e9686b2e8eb2d69dd68198b80 | [
"MIT"
] | 8 | 2020-05-04T06:29:26.000Z | 2022-02-12T05:33:16.000Z | python/deep_learning/FUNCTIONAL/1.py | SayanGhoshBDA/code-backup | 8b6135facc0e598e9686b2e8eb2d69dd68198b80 | [
"MIT"
] | 5 | 2020-02-11T16:02:21.000Z | 2021-02-05T07:48:30.000Z | def running_sum(numbers, start=0):
if len(numbers) == 0:
print()
return
total = numbers[0] + start
print(total,end="")
running_sum(numbers[1:],total)
| 19.625 | 34 | 0.675159 | 24 | 157 | 4.333333 | 0.541667 | 0.192308 | 0.326923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 0.146497 | 157 | 7 | 35 | 22.428571 | 0.746269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0.285714 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3fcf8821703be5b4811e722f33f8b5e9ce735c11 | 64 | py | Python | URI Online Judge/Python/SimpleSum.py | AugustoEstevaoMonte/Learning---C-programming | e496b301b6cc9dda68b1da6d72a4937b2c5f9aec | [
"MIT"
] | null | null | null | URI Online Judge/Python/SimpleSum.py | AugustoEstevaoMonte/Learning---C-programming | e496b301b6cc9dda68b1da6d72a4937b2c5f9aec | [
"MIT"
] | 1 | 2020-08-04T17:08:41.000Z | 2020-08-04T17:12:48.000Z | URI Online Judge/Python/SimpleSum.py | AugustoEstevaoMonte/Learning---C-programming | e496b301b6cc9dda68b1da6d72a4937b2c5f9aec | [
"MIT"
] | null | null | null | a=int(input())
b=int(input())
soma=a+b
print("SOMA = %i" % soma) | 16 | 25 | 0.59375 | 13 | 64 | 2.923077 | 0.538462 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 64 | 4 | 25 | 16 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.138462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3ff37a618611f015465d215c44ad3868c97f269a | 606 | py | Python | kolibri/plugins/setup_wizard/hooks.py | arceduardvincent/kolibri | 26073dda2569bb38bfe1e08ba486e96f650d10ce | [
"MIT"
] | null | null | null | kolibri/plugins/setup_wizard/hooks.py | arceduardvincent/kolibri | 26073dda2569bb38bfe1e08ba486e96f650d10ce | [
"MIT"
] | null | null | null | kolibri/plugins/setup_wizard/hooks.py | arceduardvincent/kolibri | 26073dda2569bb38bfe1e08ba486e96f650d10ce | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
from kolibri.core.webpack import hooks as webpack_hooks
class SetupWizardSyncHook(webpack_hooks.WebpackInclusionHook):
"""
Inherit a hook defining assets to be loaded sychronously in setup_wizard/setup_wizard.html
"""
class Meta:
abstract = True
class SetupWizardAsyncHook(webpack_hooks.WebpackInclusionHook):
"""
Inherit a hook defining assets to be loaded sychronously in setup_wizard/setup_wizard.html
"""
class Meta:
abstract = True
| 28.857143 | 94 | 0.768977 | 72 | 606 | 6.166667 | 0.458333 | 0.099099 | 0.108108 | 0.175676 | 0.581081 | 0.581081 | 0.581081 | 0.581081 | 0.581081 | 0.581081 | 0 | 0 | 0.181518 | 606 | 20 | 95 | 30.3 | 0.895161 | 0.29868 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.8 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
b20862597eb41497f99429580da31bdc98cb9da7 | 88 | py | Python | Regular/15/15-A.py | unasuke/AtCoder | 2e9f369ff8a6d8ae45c7786a360fe218bd0f160a | [
"MIT"
] | null | null | null | Regular/15/15-A.py | unasuke/AtCoder | 2e9f369ff8a6d8ae45c7786a360fe218bd0f160a | [
"MIT"
] | null | null | null | Regular/15/15-A.py | unasuke/AtCoder | 2e9f369ff8a6d8ae45c7786a360fe218bd0f160a | [
"MIT"
] | null | null | null | #AtCoder Regular 15 A
celsius = int(raw_input())
print ( (9.0 / 5.0) * celsius ) + 32.0
| 22 | 38 | 0.625 | 16 | 88 | 3.375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 0.193182 | 88 | 3 | 39 | 29.333333 | 0.633803 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
b74cb3fb4565deb53adbf744fc8a2e363a3ef603 | 131 | py | Python | nose2/tests/functional/support/scenario/pretty_asserts/unittest_assertion/test_prettyassert_unittestassertion.py | ltfish/nose2 | e47363dad10056cf906daf387613c21d74f37e56 | [
"BSD-2-Clause"
] | 637 | 2015-01-12T02:02:53.000Z | 2022-03-30T19:47:48.000Z | nose2/tests/functional/support/scenario/pretty_asserts/unittest_assertion/test_prettyassert_unittestassertion.py | ltfish/nose2 | e47363dad10056cf906daf387613c21d74f37e56 | [
"BSD-2-Clause"
] | 276 | 2015-01-02T19:14:06.000Z | 2022-03-18T04:03:08.000Z | nose2/tests/functional/support/scenario/pretty_asserts/unittest_assertion/test_prettyassert_unittestassertion.py | ltfish/nose2 | e47363dad10056cf906daf387613c21d74f37e56 | [
"BSD-2-Clause"
] | 127 | 2015-01-08T12:02:10.000Z | 2022-01-10T20:52:29.000Z | import unittest
class TestFoo(unittest.TestCase):
def test_old_assertion(self):
x = False
self.assertTrue(x)
| 16.375 | 33 | 0.671756 | 16 | 131 | 5.375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244275 | 131 | 7 | 34 | 18.714286 | 0.868687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
b7505b10cbeaeb04346903ab78322434f70e2701 | 116 | py | Python | ipcluster/kernel.py | wasit7/tutorials | 83499821266c8debac05cb5d6d5f6da0f0abd68f | [
"MIT"
] | 4 | 2016-02-23T15:39:45.000Z | 2018-03-25T20:15:07.000Z | ipcluster/kernel.py | wasit7/tutorials | 83499821266c8debac05cb5d6d5f6da0f0abd68f | [
"MIT"
] | null | null | null | ipcluster/kernel.py | wasit7/tutorials | 83499821266c8debac05cb5d6d5f6da0f0abd68f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Oct 14 17:40:55 2015
@author: Wasit
"""
import numpy as np
y=np.sum(x)
| 11.6 | 35 | 0.603448 | 22 | 116 | 3.181818 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139785 | 0.198276 | 116 | 9 | 36 | 12.888889 | 0.612903 | 0.637931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b783f6e20b4940d15fb2deccfb44b27e44332116 | 2,468 | py | Python | isi_sdk_8_0_1/test/test_network_api.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 24 | 2018-06-22T14:13:23.000Z | 2022-03-23T01:21:26.000Z | isi_sdk_8_0_1/test/test_network_api.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 46 | 2018-04-30T13:28:22.000Z | 2022-03-21T21:11:07.000Z | isi_sdk_8_0_1/test/test_network_api.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 29 | 2018-06-19T00:14:04.000Z | 2022-02-08T17:51:19.000Z | # coding: utf-8
"""
Isilon SDK
Isilon SDK - Language bindings for the OneFS API # noqa: E501
OpenAPI spec version: 4
Contact: sdk@isilon.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import isi_sdk_8_0_1
from isi_sdk_8_0_1.api.network_api import NetworkApi # noqa: E501
from isi_sdk_8_0_1.rest import ApiException
class TestNetworkApi(unittest.TestCase):
"""NetworkApi unit test stubs"""
def setUp(self):
self.api = isi_sdk_8_0_1.api.network_api.NetworkApi() # noqa: E501
def tearDown(self):
pass
def test_create_dnscache_flush_item(self):
"""Test case for create_dnscache_flush_item
"""
pass
def test_create_network_groupnet(self):
"""Test case for create_network_groupnet
"""
pass
def test_create_network_sc_rebalance_all_item(self):
"""Test case for create_network_sc_rebalance_all_item
"""
pass
def test_delete_network_groupnet(self):
"""Test case for delete_network_groupnet
"""
pass
def test_get_network_dnscache(self):
"""Test case for get_network_dnscache
"""
pass
def test_get_network_external(self):
"""Test case for get_network_external
"""
pass
def test_get_network_groupnet(self):
"""Test case for get_network_groupnet
"""
pass
def test_get_network_interfaces(self):
"""Test case for get_network_interfaces
"""
pass
def test_get_network_pools(self):
"""Test case for get_network_pools
"""
pass
def test_get_network_rules(self):
"""Test case for get_network_rules
"""
pass
def test_get_network_subnets(self):
"""Test case for get_network_subnets
"""
pass
def test_list_network_groupnets(self):
"""Test case for list_network_groupnets
"""
pass
def test_update_network_dnscache(self):
"""Test case for update_network_dnscache
"""
pass
def test_update_network_external(self):
"""Test case for update_network_external
"""
pass
def test_update_network_groupnet(self):
"""Test case for update_network_groupnet
"""
pass
if __name__ == '__main__':
unittest.main()
| 19.744 | 75 | 0.632901 | 304 | 2,468 | 4.763158 | 0.233553 | 0.072514 | 0.11395 | 0.155387 | 0.595304 | 0.444061 | 0.080111 | 0.030387 | 0 | 0 | 0 | 0.013061 | 0.286467 | 2,468 | 124 | 76 | 19.903226 | 0.809199 | 0.389384 | 0 | 0.380952 | 1 | 0 | 0.005727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.404762 | false | 0.380952 | 0.119048 | 0 | 0.547619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
b7860053edab7d51d1ccec9202ad3804553afe52 | 91 | py | Python | immutable_collection/__init__.py | djtaylor/python-immutable-collection | 2248c3169f03556782d292d2bd9c17f2310c9c2f | [
"MIT"
] | 1 | 2018-07-25T14:58:48.000Z | 2018-07-25T14:58:48.000Z | immutable_collection/__init__.py | djtaylor/python-immutable-collection | 2248c3169f03556782d292d2bd9c17f2310c9c2f | [
"MIT"
] | null | null | null | immutable_collection/__init__.py | djtaylor/python-immutable-collection | 2248c3169f03556782d292d2bd9c17f2310c9c2f | [
"MIT"
] | null | null | null | __version__ = '1.1.post2'
from immutable_collection.collection import ImmutableCollection
| 22.75 | 63 | 0.846154 | 10 | 91 | 7.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036145 | 0.087912 | 91 | 3 | 64 | 30.333333 | 0.831325 | 0 | 0 | 0 | 0 | 0 | 0.098901 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b79a8eef0482a049e7b11dc0738b4a6ac1888822 | 408 | py | Python | main (9).py | AntonMarinin/itstep | c3e313cd1f790cd7f910d91ef420c14dfad98d85 | [
"Apache-2.0"
] | null | null | null | main (9).py | AntonMarinin/itstep | c3e313cd1f790cd7f910d91ef420c14dfad98d85 | [
"Apache-2.0"
] | null | null | null | main (9).py | AntonMarinin/itstep | c3e313cd1f790cd7f910d91ef420c14dfad98d85 | [
"Apache-2.0"
] | null | null | null | our_set = set()
our_set2 = {0}
print(our_set, type(our_set))
print(our_set2, type(our_set2))
our_set.add('tomato')
our_set2.add("potato")
print(our_set)
print(our_set2)
x = "tomato"
print(x in our_set)
print(x in our_set2)
print(our_set.isdisjoint(our_set2))
our_set3 = our_set.union(our_set2)
print(our_set3)
our_set.update(our_set3)
our_set.update(our_set2)
print(our_set)
print(our_set.issubset(our_set3)) | 22.666667 | 35 | 0.77451 | 79 | 408 | 3.683544 | 0.202532 | 0.247423 | 0.189003 | 0.14433 | 0.42268 | 0.140893 | 0 | 0 | 0 | 0 | 0 | 0.037135 | 0.07598 | 408 | 18 | 36 | 22.666667 | 0.734748 | 0 | 0 | 0.111111 | 0 | 0 | 0.04401 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
b7a65da9fbae4d79e7f5f6bd57b653c3f239b80e | 4,290 | py | Python | srgan/vgg19/vgg19.py | pushkalkatara/Low-Resolution-Face-Recognition | bba9074acd0e695a0baa6d979cd7afe034235392 | [
"MIT"
] | 3 | 2020-02-14T09:54:31.000Z | 2021-07-29T07:53:56.000Z | srgan/vgg19/vgg19.py | sohne-ck/Low-Resolution-Face-Recognition | bba9074acd0e695a0baa6d979cd7afe034235392 | [
"MIT"
] | null | null | null | srgan/vgg19/vgg19.py | sohne-ck/Low-Resolution-Face-Recognition | bba9074acd0e695a0baa6d979cd7afe034235392 | [
"MIT"
] | 1 | 2020-11-16T00:33:16.000Z | 2020-11-16T00:33:16.000Z | import tensorflow as tf
import sys
sys.path.append('../utils')
from layer import *
class VGG19:
def __init__(self, x, t, is_training):
if x is None: return
self.out, self.phi = self.build_model(x, is_training)
self.loss = self.inference_loss(self.out, t)
def build_model(self, x, is_training, reuse=False):
with tf.variable_scope('vgg19', reuse=reuse):
phi = []
with tf.variable_scope('conv1a'):
x = conv_layer(x, [3, 3, 3, 64], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv1b'):
x = conv_layer(x, [3, 3, 64, 64], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
phi.append(x)
x = max_pooling_layer(x, 2, 2)
with tf.variable_scope('conv2a'):
x = conv_layer(x, [3, 3, 64, 128], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv2b'):
x = conv_layer(x, [3, 3, 128, 128], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
phi.append(x)
x = max_pooling_layer(x, 2, 2)
with tf.variable_scope('conv3a'):
x = conv_layer(x, [3, 3, 128, 256], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv3b'):
x = conv_layer(x, [3, 3, 256, 256], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv3c'):
x = conv_layer(x, [3, 3, 256, 256], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv3d'):
x = conv_layer(x, [3, 3, 256, 256], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
phi.append(x)
x = max_pooling_layer(x, 2, 2)
with tf.variable_scope('conv4a'):
x = conv_layer(x, [3, 3, 256, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv4b'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv4c'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv4d'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
phi.append(x)
x = max_pooling_layer(x, 2, 2)
with tf.variable_scope('conv5a'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv5b'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv5c'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
with tf.variable_scope('conv5d'):
x = conv_layer(x, [3, 3, 512, 512], 1)
x = batch_normalize(x, is_training)
x = lrelu(x)
phi.append(x)
x = max_pooling_layer(x, 2, 2)
x = flatten_layer(x)
with tf.variable_scope('fc1'):
x = full_connection_layer(x, 4096)
x = lrelu(x)
with tf.variable_scope('fc2'):
x = full_connection_layer(x, 4096)
x = lrelu(x)
with tf.variable_scope('softmax'):
x = full_connection_layer(x, 100)
return x, phi
def inference_loss(self, out, t):
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
labels=tf.one_hot(t, 100),
logits=out)
return tf.reduce_mean(cross_entropy)
| 37.631579 | 64 | 0.479254 | 555 | 4,290 | 3.517117 | 0.142342 | 0.076844 | 0.143443 | 0.194672 | 0.745902 | 0.703381 | 0.696721 | 0.656762 | 0.656762 | 0.656762 | 0 | 0.071456 | 0.399767 | 4,290 | 113 | 65 | 37.964602 | 0.686602 | 0 | 0 | 0.554455 | 0 | 0 | 0.028445 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029703 | false | 0 | 0.029703 | 0 | 0.089109 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b7b971d3a9ba8d5ab948b2ea62a67368cfba5373 | 838 | py | Python | api_app/api/dependencies/workspace_service_templates.py | stuartleeks/AzureTRE | 56069f83a1ae27f434c444526f2fa504a8868ce8 | [
"MIT"
] | 71 | 2021-03-04T15:10:18.000Z | 2022-03-29T16:37:37.000Z | api_app/api/dependencies/workspace_service_templates.py | stuartleeks/AzureTRE | 56069f83a1ae27f434c444526f2fa504a8868ce8 | [
"MIT"
] | 1,498 | 2021-03-05T07:28:00.000Z | 2022-03-31T16:28:06.000Z | api_app/api/dependencies/workspace_service_templates.py | stuartleeks/AzureTRE | 56069f83a1ae27f434c444526f2fa504a8868ce8 | [
"MIT"
] | 60 | 2021-04-30T10:09:26.000Z | 2022-03-30T12:39:27.000Z | from fastapi import Depends, HTTPException, Path
from starlette.status import HTTP_404_NOT_FOUND
from api.dependencies.database import get_repository
from db.errors import EntityDoesNotExist
from db.repositories.resource_templates import ResourceTemplateRepository
from models.domain.resource import ResourceType
from models.domain.resource_template import ResourceTemplate
from resources import strings
async def get_workspace_service_template_by_name_from_path(service_template_name: str = Path(...), template_repo=Depends(get_repository(ResourceTemplateRepository))) -> ResourceTemplate:
try:
return template_repo.get_current_template(service_template_name, ResourceType.WorkspaceService)
except EntityDoesNotExist:
raise HTTPException(status_code=HTTP_404_NOT_FOUND, detail=strings.TEMPLATE_DOES_NOT_EXIST)
| 49.294118 | 186 | 0.854415 | 100 | 838 | 6.87 | 0.49 | 0.065502 | 0.029112 | 0.043668 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007905 | 0.094272 | 838 | 16 | 187 | 52.375 | 0.897233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.615385 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
b7c2ca3fe6ec7cd86819dba69cb3cd0227c23072 | 187 | py | Python | telegramtui/src/aalib.py | Bergiu/TelegramTUI | ea37481c3cecdce82e9d1a35565d892e2b0cd4a3 | [
"MIT"
] | 470 | 2018-10-17T04:44:18.000Z | 2022-03-20T15:40:42.000Z | telegramtui/src/aalib.py | Bergiu/TelegramTUI | ea37481c3cecdce82e9d1a35565d892e2b0cd4a3 | [
"MIT"
] | 29 | 2018-11-20T08:58:39.000Z | 2022-01-23T17:47:41.000Z | telegramtui/src/aalib.py | Bergiu/TelegramTUI | ea37481c3cecdce82e9d1a35565d892e2b0cd4a3 | [
"MIT"
] | 62 | 2018-12-10T23:04:37.000Z | 2022-03-04T16:05:23.000Z | import platform
def is_aalib_support():
"""
Currently aalib working only on Linux
:return:
"""
if platform.system() in ('Linux'):
return True
return False | 18.7 | 41 | 0.614973 | 22 | 187 | 5.136364 | 0.772727 | 0.19469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283422 | 187 | 10 | 42 | 18.7 | 0.843284 | 0.245989 | 0 | 0 | 0 | 0 | 0.040984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
b7c6b675fa85e7649b17540a5ccd723fc94bac62 | 126 | py | Python | week3/week3 1.py | Kevinskwk/ILP | 7a3925a22232d486a5a8f5df8255f9297fd73fec | [
"MIT"
] | 1 | 2020-07-09T23:10:56.000Z | 2020-07-09T23:10:56.000Z | week3/week3 1.py | Kevinskwk/ILP | 7a3925a22232d486a5a8f5df8255f9297fd73fec | [
"MIT"
] | null | null | null | week3/week3 1.py | Kevinskwk/ILP | 7a3925a22232d486a5a8f5df8255f9297fd73fec | [
"MIT"
] | null | null | null | listone = [1,2,3,4,5,6,7,8,9]
listtwo = []
for i in listone:
if i%2==0:
listtwo.append(i)
print listtwo
| 14 | 30 | 0.531746 | 24 | 126 | 2.791667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123596 | 0.293651 | 126 | 8 | 31 | 15.75 | 0.629213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b7dcd16aa18f602b77d05c3be6953762efe388be | 248 | py | Python | mmseg/core/seg/__init__.py | evgeniya-egupova/mmsegmentation | 3857f19321ad6af41c8a6af364898ee050225f4c | [
"Apache-2.0"
] | null | null | null | mmseg/core/seg/__init__.py | evgeniya-egupova/mmsegmentation | 3857f19321ad6af41c8a6af364898ee050225f4c | [
"Apache-2.0"
] | null | null | null | mmseg/core/seg/__init__.py | evgeniya-egupova/mmsegmentation | 3857f19321ad6af41c8a6af364898ee050225f4c | [
"Apache-2.0"
] | null | null | null | from .builder import build_pixel_sampler
from .sampler import BasePixelSampler, OHEMPixelSampler, ClassWeightingPixelSampler
__all__ = [
'build_pixel_sampler',
'BasePixelSampler',
'OHEMPixelSampler',
'ClassWeightingPixelSampler'
]
| 24.8 | 83 | 0.78629 | 19 | 248 | 9.842105 | 0.526316 | 0.106952 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141129 | 248 | 9 | 84 | 27.555556 | 0.877934 | 0 | 0 | 0 | 0 | 0 | 0.310484 | 0.104839 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4d0b6683a453e00041fa5fde622b4fb5bc9c45d6 | 199 | py | Python | apps/mappings/apps.py | fylein/fyle-qbo-api | 627339ddbab37cafc8aa419a757d826759d86778 | [
"MIT"
] | 1 | 2021-03-08T08:13:36.000Z | 2021-03-08T08:13:36.000Z | apps/mappings/apps.py | fylein/fyle-netsuite-api | d7957642524698208db368582e7166a9f392e439 | [
"MIT"
] | 82 | 2020-11-24T10:17:29.000Z | 2022-03-30T07:27:26.000Z | apps/mappings/apps.py | fylein/fyle-xero-api | ba81af058dc413fc801d4cf7d1a8961bd42df469 | [
"MIT"
] | 3 | 2020-03-09T12:56:31.000Z | 2020-06-10T07:55:47.000Z | from django.apps import AppConfig
class MappingsConfig(AppConfig):
name = 'apps.mappings'
def ready(self):
super(MappingsConfig, self).ready()
import apps.mappings.signals
| 19.9 | 43 | 0.693467 | 22 | 199 | 6.272727 | 0.636364 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211055 | 199 | 9 | 44 | 22.111111 | 0.878981 | 0 | 0 | 0 | 0 | 0 | 0.065327 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
4d20d44b0116e5a30f6e28e7711e01d07a503470 | 8,251 | py | Python | integration/switch/test.py | xbabka01/retdec-regression-tests | 1ac40cca5165740364e6f7fb72b20820eac9bc7c | [
"MIT"
] | 8 | 2017-12-14T14:25:17.000Z | 2019-03-09T03:29:12.000Z | integration/switch/test.py | xbabka01/retdec-regression-tests | 1ac40cca5165740364e6f7fb72b20820eac9bc7c | [
"MIT"
] | 10 | 2019-06-14T09:12:55.000Z | 2021-10-01T12:15:43.000Z | integration/switch/test.py | xbabka01/retdec-regression-tests | 1ac40cca5165740364e6f7fb72b20820eac9bc7c | [
"MIT"
] | 8 | 2019-05-10T14:59:48.000Z | 2022-03-07T16:34:23.000Z | from regression_tests import *
class TestBase(Test):
def test_produce_expected_output(self):
self.assert_c_produces_output_when_run(
input='0',
expected_return_code=0,
expected_output=' 0 after 0 after 0 after 0 after 0 after 0 after 0 0 22 after before 0 after '
)
self.assert_c_produces_output_when_run(
input='1',
expected_return_code=0,
expected_output=' 1 after 1 after 1 1after 1 1after 1 after 1 after1 22 after before 1 after '
)
self.assert_c_produces_output_when_run(
input='4',
expected_return_code=0,
expected_output='4 0 after4after4after4after4after4after4 22 after before 4 after '
)
self.assert_c_produces_output_when_run(
input='22',
expected_return_code=0,
expected_output='22 0 after22after22after22after22after22after 22 after before 22 after '
)
self.assert_c_produces_output_when_run(
input='26',
expected_return_code=0,
expected_output='26 0 after26after26after26after26after26after26 22 after before 26 after '
)
self.assert_c_produces_output_when_run(
input='30',
expected_return_code=0,
expected_output='30 0 after30after30after30after30after30after30 22 after before 30 after '
)
class Test_2018(Test):
settings_2018 = TestSettings(
input=files_in_dir('2018-09-17'),
)
def test_check_global_variable(self):
assert self.out_c.has_any_global_vars()
assert self.out_c.has_global_vars('x')
assert self.out_c.global_vars['x'].type.is_int(32)
def check_switch_function_generic(self, func_name):
assert self.out_c.has_func(func_name)
assert self.out_c.funcs[func_name].return_type.is_void()
assert self.out_c.funcs[func_name].param_count == 0
#assert self.out_c.funcs[func_name].has_any_switch_stmts()
#assert len (self.out_c.funcs[func_name].switch_stmts) == 1
#assert self.out_c.funcs[func_name].switch_stmts[0].has_cases()
#assert self.out_c.funcs[func_name].switch_stmts[0].has_default_case()
# assert len(self.out_c.funcs[func_name].switch_stmts[0].cases) == 2
assert self.out_c.funcs[func_name].calls('printf')
def test_check_switch_functions(self):
func_names = ['func1a', 'func1b', 'func2a', 'func2b', 'func3a', 'func3b', 'func4']
for func in func_names:
self.check_switch_function_generic(func)
def test_check_function_jump_table(self):
assert self.out_c.has_func('jump_table')
assert self.out_c.funcs['jump_table'].return_type.is_void()
assert self.out_c.funcs['jump_table'].param_count == 0
#assert self.out_c.funcs['jump_table'].has_any_switch_stmts()
#assert len (self.out_c.funcs['jump_table'].switch_stmts) == 1
#assert self.out_c.funcs['jump_table'].switch_stmts[0].has_cases()
#assert self.out_c.funcs['jump_table'].switch_stmts[0].has_default_case()
#assert len(self.out_c.funcs['jump_table'].switch_stmts[0].cases) == 7
assert self.out_c.funcs['jump_table'].calls('printf')
# def test_check_function_jump_table2(self):
# assert self.out_c.has_func('jump_table2')
# assert self.out_c.funcs['jump_table2'].return_type.is_void()
# assert self.out_c.funcs['jump_table2'].param_count == 0
# assert self.out_c.funcs['jump_table2'].has_any_switch_stmts()
# assert self.out_c.funcs['jump_table2'].calls('rand')
# assert self.out_c.funcs['jump_table2'].has_switch_stmts('switch (rand())')
# assert self.out_c.funcs['jump_table2'].switch_stmts['switch (rand())'].has_cases()
# assert self.out_c.funcs['jump_table2'].switch_stmts['switch (rand())'].has_default_case()
# assert len(self.out_c.funcs['jump_table2'].switch_stmts['switch (rand())'].cases) == 6
# assert self.out_c.funcs['jump_table2'].calls('printf')
def test_check_function_main(self):
assert self.out_c.has_func('main')
assert self.out_c.funcs['main'].has_any_assignments()
assert self.out_c.funcs['main'].has_assignments('x = 0')
assert self.out_c.funcs['main'].calls('func1a')
assert self.out_c.funcs['main'].calls('func1b')
assert self.out_c.funcs['main'].calls('func2a')
assert self.out_c.funcs['main'].calls('func2b')
assert self.out_c.funcs['main'].calls('func3a')
assert self.out_c.funcs['main'].calls('func3b')
assert self.out_c.funcs['main'].calls('func4')
assert self.out_c.funcs['main'].calls('jump_table')
assert self.out_c.funcs['main'].has_any_return_stmts()
def test_check_presence_of_literals(self):
assert self.out_c.has_string_literal(' 0 ')
assert self.out_c.has_string_literal(' 1 ')
#assert self.out_c.has_string_literal(' 20 ')
#assert self.out_c.has_string_literal(' 21 ')
assert self.out_c.has_string_literal(' 22 ')
#assert self.out_c.has_string_literal(' 23 ')
#assert self.out_c.has_string_literal(' 24 ')
#assert self.out_c.has_string_literal(' 25 ')
#assert self.out_c.has_string_literal(' 28 ')
#assert self.out_c.has_string_literal(' 2, 3 \\n')
#assert self.out_c.has_string_literal(' 5 \\n')
#assert self.out_c.has_string_literal(' 8 \\n')
#assert self.out_c.has_string_literal(' 57 \\n')
#assert self.out_c.has_string_literal(' break \\n')
assert self.out_c.has_string_literal('%d')
assert self.out_c.has_string_literal(' %d ')
assert self.out_c.has_string_literal('after')
#assert self.out_c.has_string_literal(' before jump table 2 ')
class Test_2017(TestBase):
settings_2017 = TestSettings(
input=files_in_dir('2017-11-14'),
)
class Test_2015(TestBase):
settings_2015 = TestSettings(
input=files_in_dir('2015-03-30'),
)
class Test_MSVC(Test):
settings = TestSettings(
input='switch-test-msvc-O0.ex',
)
def test_for_switches_in_function_411a90(self):
assert self.out_c.contains('default: {')
assert self.out_c.contains('case 20: {')
assert self.out_c.contains('case 21: {')
assert self.out_c.contains('case 22: {')
assert self.out_c.contains('case 23: {')
assert self.out_c.contains('case 24: {')
assert self.out_c.contains('case 25: {')
assert self.out_c.contains('case 28: {')
def test_for_switches_in_function_411c90(self):
assert self.out_c.contains('default: {')
assert self.out_c.contains('case 0: {')
assert self.out_c.contains('case 2: {')
assert self.out_c.contains('case 3: {')
assert self.out_c.contains('case 5: {')
assert self.out_c.contains('case 8: {')
assert self.out_c.contains('case 57: {')
def test_for_strings(self):
assert self.out_c.has_string_literal(' before ')
assert self.out_c.has_string_literal(' after ')
assert self.out_c.has_string_literal(' 0 ')
assert self.out_c.has_string_literal(' 1 ')
assert self.out_c.has_string_literal('%d')
assert self.out_c.has_string_literal(' 20 ')
assert self.out_c.has_string_literal(' 21 ')
assert self.out_c.has_string_literal(' 22 ')
assert self.out_c.has_string_literal(' 23 ')
assert self.out_c.has_string_literal(' 24 ')
assert self.out_c.has_string_literal(' 25 ')
assert self.out_c.has_string_literal(' 28 ')
assert self.out_c.has_string_literal(' before jump table 2 ')
assert self.out_c.has_string_literal(' break \\n')
assert self.out_c.has_string_literal(' 0 \\n')
assert self.out_c.has_string_literal(' 2, 3 \\n')
assert self.out_c.has_string_literal(' 5 \\n')
assert self.out_c.has_string_literal(' 8 \\n')
assert self.out_c.has_string_literal(' 57 \\n')
assert self.out_c.has_string_literal(' after jumpt table 2 ')
def test_issue_636(self):
assert self.out_c.contains('char \* g[0-9]; // 0x419548')
| 46.353933 | 109 | 0.658223 | 1,192 | 8,251 | 4.247483 | 0.106544 | 0.13411 | 0.153269 | 0.254395 | 0.793403 | 0.758246 | 0.623543 | 0.532688 | 0.461979 | 0.399763 | 0 | 0.044068 | 0.213429 | 8,251 | 177 | 110 | 46.615819 | 0.736055 | 0.240335 | 0 | 0.19685 | 0 | 0 | 0.166319 | 0.02919 | 0 | 0 | 0.001283 | 0 | 0.559055 | 1 | 0.086614 | false | 0 | 0.007874 | 0 | 0.165354 | 0.015748 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4d28d4a376cb665cef81e56764b4f48b50edd3b8 | 184 | py | Python | Codeforces/A_k_Multiple_Free_Set.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | Codeforces/A_k_Multiple_Free_Set.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | Codeforces/A_k_Multiple_Free_Set.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | n, k = map(int, input().split())
a = list(sorted(map(int, input().split()), reverse=True))
s = set()
for i in range(len(a)):
if a[i] * k not in s:
s.add(a[i])
print(len(s)) | 26.285714 | 57 | 0.548913 | 37 | 184 | 2.72973 | 0.594595 | 0.118812 | 0.217822 | 0.316832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201087 | 184 | 7 | 58 | 26.285714 | 0.687075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4d40e63ebeb6df46e96de8604d6c6e4ee5b58f19 | 58 | py | Python | commands/willie.py | WouterPennings/willie | be80e7830c0905c49cc53f6d742142bfe411829e | [
"MIT"
] | null | null | null | commands/willie.py | WouterPennings/willie | be80e7830c0905c49cc53f6d742142bfe411829e | [
"MIT"
] | null | null | null | commands/willie.py | WouterPennings/willie | be80e7830c0905c49cc53f6d742142bfe411829e | [
"MIT"
] | null | null | null | async def Willie(context):
await context.send("Wonka") | 29 | 31 | 0.724138 | 8 | 58 | 5.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 58 | 2 | 31 | 29 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4d544c6acf7e83e4d3386c276ee1b1b16d6972ae | 180 | py | Python | lotube/videos/apps.py | zurfyx/lotube | d00a456d4aff5a6f4c63dab5d90ba6a3a72e3a3f | [
"MIT"
] | null | null | null | lotube/videos/apps.py | zurfyx/lotube | d00a456d4aff5a6f4c63dab5d90ba6a3a72e3a3f | [
"MIT"
] | null | null | null | lotube/videos/apps.py | zurfyx/lotube | d00a456d4aff5a6f4c63dab5d90ba6a3a72e3a3f | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.apps import AppConfig
class VideosConfig(AppConfig):
    name = 'videos'

    def ready(self):
        from . import signals
| 16.363636 | 39 | 0.722222 | 21 | 180 | 5.952381 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216667 | 180 | 10 | 40 | 18 | 0.886525 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
4d65302c873a133dca7d2a1f2fb1b74a67ab0d2c | 294 | py | Python | server/.vim/plugged/python-mode/submodules/pylint/tests/extensions/data/broad_try_clause.py | hkdb/sysconf | 99d334f7309657647059c4b37f25e33dffc81fc3 | [
"MIT"
] | 10 | 2020-07-21T21:59:54.000Z | 2021-07-19T11:01:47.000Z | server/.vim/plugged/python-mode/submodules/pylint/tests/extensions/data/broad_try_clause.py | hkdb/sysconf | 99d334f7309657647059c4b37f25e33dffc81fc3 | [
"MIT"
] | null | null | null | server/.vim/plugged/python-mode/submodules/pylint/tests/extensions/data/broad_try_clause.py | hkdb/sysconf | 99d334f7309657647059c4b37f25e33dffc81fc3 | [
"MIT"
] | 1 | 2021-01-30T18:17:01.000Z | 2021-01-30T18:17:01.000Z | # pylint: disable=missing-docstring, invalid-name
MY_DICTIONARY = {"key_one": 1, "key_two": 2, "key_three": 3}
try: # [max-try-statements]
    value = MY_DICTIONARY["key_one"]
    value += 1
except KeyError:
    pass

try:
    value = MY_DICTIONARY["key_one"]
except KeyError:
    value = 0
| 19.6 | 60 | 0.663265 | 41 | 294 | 4.560976 | 0.560976 | 0.192513 | 0.240642 | 0.28877 | 0.245989 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021097 | 0.193878 | 294 | 14 | 61 | 21 | 0.767932 | 0.231293 | 0 | 0.6 | 0 | 0 | 0.165919 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
4d72f655a83f73f920cc0d28db9de400f5c3bfc8 | 1,873 | py | Python | third_party/spider/baselines/nl2code/nn/objectives.py | chenyangh/tensor2struct-public | d3257cba6d76d3c658a58a78f687d986bdc755cf | [
"MIT"
] | 69 | 2021-04-14T06:35:07.000Z | 2022-03-31T18:35:05.000Z | third_party/spider/baselines/nl2code/nn/objectives.py | chenyangh/tensor2struct-public | d3257cba6d76d3c658a58a78f687d986bdc755cf | [
"MIT"
] | 19 | 2018-12-17T20:42:11.000Z | 2020-02-12T21:29:51.000Z | third_party/spider/baselines/nl2code/nn/objectives.py | chenyangh/tensor2struct-public | d3257cba6d76d3c658a58a78f687d986bdc755cf | [
"MIT"
] | 31 | 2016-01-26T16:08:28.000Z | 2021-11-29T00:24:10.000Z | from __future__ import absolute_import
import theano
import theano.tensor as T
import numpy as np
from six.moves import range
if theano.config.floatX == 'float64':
    epsilon = 1.0e-9
else:
    epsilon = 1.0e-7

def mean_squared_error(y_true, y_pred):
    return T.sqr(y_pred - y_true).mean(axis=-1)

def mean_absolute_error(y_true, y_pred):
    return T.abs_(y_pred - y_true).mean(axis=-1)

def mean_absolute_percentage_error(y_true, y_pred):
    return T.abs_((y_true - y_pred) / T.clip(T.abs_(y_true), epsilon, np.inf)).mean(axis=-1) * 100.

def mean_squared_logarithmic_error(y_true, y_pred):
    return T.sqr(T.log(T.clip(y_pred, epsilon, np.inf) + 1.) - T.log(T.clip(y_true, epsilon, np.inf) + 1.)).mean(axis=-1)

def squared_hinge(y_true, y_pred):
    return T.sqr(T.maximum(1. - y_true * y_pred, 0.)).mean(axis=-1)

def hinge(y_true, y_pred):
    return T.maximum(1. - y_true * y_pred, 0.).mean(axis=-1)

def categorical_crossentropy(y_true, y_pred):
    '''Expects a binary class matrix instead of a vector of scalar classes
    '''
    y_pred = T.clip(y_pred, epsilon, 1.0 - epsilon)
    # scale preds so that the class probas of each sample sum to 1
    y_pred /= y_pred.sum(axis=-1, keepdims=True)
    cce = T.nnet.categorical_crossentropy(y_pred, y_true)
    return cce

def binary_crossentropy(y_true, y_pred):
    y_pred = T.clip(y_pred, epsilon, 1.0 - epsilon)
    bce = T.nnet.binary_crossentropy(y_pred, y_true).mean(axis=-1)
    return bce

def poisson_loss(y_true, y_pred):
    return T.mean(y_pred - y_true * T.log(y_pred + epsilon), axis=-1)

# aliases
mse = MSE = mean_squared_error
mae = MAE = mean_absolute_error
mape = MAPE = mean_absolute_percentage_error
msle = MSLE = mean_squared_logarithmic_error

from .utils.generic_utils import get_from_module

def get(identifier):
    return get_from_module(identifier, globals(), 'objective')
| 28.815385 | 121 | 0.714362 | 329 | 1,873 | 3.81459 | 0.25228 | 0.099602 | 0.057371 | 0.095618 | 0.413546 | 0.301195 | 0.287649 | 0.251793 | 0.195219 | 0.154582 | 0 | 0.019695 | 0.159637 | 1,873 | 64 | 122 | 29.265625 | 0.777637 | 0.075814 | 0 | 0.051282 | 0 | 0 | 0.009281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25641 | false | 0 | 0.153846 | 0.205128 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
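The `categorical_crossentropy` above clips the predictions and renormalizes them to sum to 1 before taking the cross-entropy. Since Theano is no longer maintained, here is a NumPy sketch of the same math (our own translation, not the original API):

```python
import numpy as np

EPSILON = 1e-7

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy over the last axis; expects a binary class matrix as y_true."""
    y_pred = np.clip(y_pred, EPSILON, 1.0 - EPSILON)
    # scale preds so that the class probabilities of each sample sum to 1
    y_pred = y_pred / y_pred.sum(axis=-1, keepdims=True)
    return -(y_true * np.log(y_pred)).sum(axis=-1)

loss = categorical_crossentropy(np.array([[0., 1., 0.]]), np.array([[0.1, 0.8, 0.1]]))
# loss[0] is -log(0.8), roughly 0.223
```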
4d8974f8e4462d307e7cce226ed8f03009274694 | 3,172 | py | Python | reamber/algorithms/api/osu/OsuAPIV2.py | Bestfast/reamberPy | 91b76ca6adf11fbe8b7cee7c186481776a4d7aaa | [
"MIT"
] | null | null | null | reamber/algorithms/api/osu/OsuAPIV2.py | Bestfast/reamberPy | 91b76ca6adf11fbe8b7cee7c186481776a4d7aaa | [
"MIT"
] | null | null | null | reamber/algorithms/api/osu/OsuAPIV2.py | Bestfast/reamberPy | 91b76ca6adf11fbe8b7cee7c186481776a4d7aaa | [
"MIT"
] | null | null | null | from __future__ import annotations
import requests
from dataclasses import dataclass
from configparser import ConfigParser
BASE_URL = "https://osu.ppy.sh/api/v2"
@dataclass
class OsuAPIV2:
""" This class helps in connecting with the new osu! API.
This isn't fleshed out yet since a lot of useful functionality is not implemented yet.
"""
token: str
@staticmethod
def fromPost(secret=None, id_=None, cfg_path="") -> OsuAPIV2:
""" This grabs the token by requesting a new one with the secret and id """
if not (secret and id_):
if not cfg_path:
raise TypeError("Secret and ID or cfg_path must be defined.")
else: # cfg_path is given
cfg = ConfigParser()
cfg.read(cfg_path, encoding='utf-8')
id_ = cfg['API']['id']
secret = cfg['API']['secret']
response = requests.post("https://osu.ppy.sh/oauth/token",
data={"client_id": id_,
"client_secret": secret,
"grant_type": "client_credentials",
"scope": "public"})
payload = eval(response.text)
return OsuAPIV2(payload['access_token'])
@staticmethod
def fromCfg(cfg_path):
""" This grabs the token from the current cfg, good if you don't want to keep on requesting tokens
However note that the token will expire in 24 hours upon request."""
cfg = ConfigParser()
cfg.read(cfg_path, encoding='utf-8')
return OsuAPIV2(cfg['API']['token'])
def get(self, api_path, **kwargs):
return requests.get(f"{BASE_URL}{api_path}",
params={k: v for k, v in kwargs.items() if v},
headers={"Authorization": f"Bearer {self.token}"})
def beatmapScores(self, id_):
return self.get(f"/beatmaps/{id_}/scores")
def beatmap(self, id_):
return self.get(f"/beatmaps/{id_}")
def beatmapSet(self, id_):
return self.get(f"/beatmapsets/{id_}")
def userKudosu(self, id_, limit=None, offset=None):
return self.get(f"/users/{id_}/kudosu", limit=limit, offset=offset)
def _userScores(self, id_, type_, includeFails, mode, limit, offset):
return self.get(f"/users/{id_}/scores/{type_}",
includeFails=includeFails, mode=mode, limit=limit, offset=offset)
def userScoresBest(self, id_, includeFails=None, mode=None, limit=None, offset=None):
return self._userScores(id_, type_="best",
includeFails=includeFails, mode=mode, limit=limit, offset=offset)
def userScoresFirsts(self, id_, includeFails=None, mode=None, limit=None, offset=None):
return self._userScores(id_, type_="firsts",
includeFails=includeFails, mode=mode, limit=limit, offset=offset)
def userScoresRecent(self, id_, includeFails=None, mode=None, limit=None, offset=None):
return self._userScores(id_, type_="recent",
includeFails=includeFails, mode=mode, limit=limit, offset=offset)
| 40.151899 | 106 | 0.604981 | 384 | 3,172 | 4.869792 | 0.346354 | 0.025668 | 0.034759 | 0.037433 | 0.390374 | 0.359893 | 0.316578 | 0.316578 | 0.255615 | 0.120321 | 0 | 0.003928 | 0.277743 | 3,172 | 78 | 107 | 40.666667 | 0.812309 | 0.123266 | 0 | 0.185185 | 0 | 0 | 0.135155 | 0.017851 | 0 | 0 | 0 | 0 | 0 | 1 | 0.203704 | false | 0 | 0.074074 | 0.166667 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
4d91150ef476a3aa58a7774f95b2be4351981d16 | 787 | py | Python | coinExchange/trading/admin.py | biz2013/xwjy | 8f4b5e3e3fc964796134052ff34d58d31ed41904 | [
"Apache-2.0"
] | 1 | 2019-12-15T16:56:44.000Z | 2019-12-15T16:56:44.000Z | coinExchange/trading/admin.py | biz2013/xwjy | 8f4b5e3e3fc964796134052ff34d58d31ed41904 | [
"Apache-2.0"
] | 87 | 2018-01-06T10:18:31.000Z | 2022-03-11T23:32:30.000Z | coinExchange/trading/admin.py | biz2013/xwjy | 8f4b5e3e3fc964796134052ff34d58d31ed41904 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import Wallet, Cryptocurrency, SiteSettings, Transaction
from .models import PaymentProvider, UserExternalWalletAddress
from .models import UserPaymentMethod, UserPaymentMethodImage, Order, UserWallet
from .models import UserWalletTransaction
from .adminModels import UserPaymentMethodImageAdmin, UserPaymentMethodAdmin
admin.site.register(Wallet)
admin.site.register(Cryptocurrency)
admin.site.register(SiteSettings)
admin.site.register(PaymentProvider)
admin.site.register(UserExternalWalletAddress)
admin.site.register(UserPaymentMethod, UserPaymentMethodAdmin)
admin.site.register(UserPaymentMethodImage, UserPaymentMethodImageAdmin)
admin.site.register(Order)
admin.site.register(UserWallet)
admin.site.register(UserWalletTransaction)
| 43.722222 | 80 | 0.871665 | 75 | 787 | 9.146667 | 0.293333 | 0.131195 | 0.247813 | 0.113703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057179 | 787 | 17 | 81 | 46.294118 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
4daa8afcea08709f848ba5a0c558662e99bac5cc | 75,863 | py | Python | dataloader/file_io/filelist_creator.py | ifnspaml/IFN_Dataloader | 864a10f0dfea27cbbe46f18cf3c9958768f3c0be | [
"MIT"
] | 6 | 2020-09-18T06:25:20.000Z | 2021-11-16T22:33:37.000Z | dataloader/file_io/filelist_creator.py | ifnspaml/IFN_Dataloader | 864a10f0dfea27cbbe46f18cf3c9958768f3c0be | [
"MIT"
] | null | null | null | dataloader/file_io/filelist_creator.py | ifnspaml/IFN_Dataloader | 864a10f0dfea27cbbe46f18cf3c9958768f3c0be | [
"MIT"
] | 1 | 2021-03-05T08:36:49.000Z | 2021-03-05T08:36:49.000Z | import os
import sys
import json
import numpy as np
import pandas as pd
import dataloader.file_io.dir_lister as dl
import dataloader.file_io.get_path as gp
SUPPORTED_DATASETS = ('cityscapes', 'cityscapes_video', 'cityscapes_sequence', 'cityscapes_extra', 'cityscapes_part',
                      'kitti', 'kitti_2012', 'kitti_2015', 'virtual_kitti', 'mapillary', 'mapillary_by_ID', 'gta5',
                      'synthia', 'bdd100k', 'voc2012', 'a2d2', 'lostandfound', 'camvid', 'make3d')

# If a directory path contains one of the following strings, it will be ignored
FOLDERS_TO_IGNORE = ('segmentation_trainid',)
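One pitfall worth noting here: a single element in parentheses is not a tuple unless it carries a trailing comma, so a one-entry ignore collection needs `('segmentation_trainid',)` rather than `('segmentation_trainid')`:

```python
assert type(('segmentation_trainid')) is str      # parentheses alone: still a string
assert type(('segmentation_trainid',)) is tuple   # trailing comma: a one-element tuple
```

Without the comma, code that iterates over the "tuple" would iterate over the characters of the string instead.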
class FilelistCreator:
    """This class provides helper functions to create a list of all files inside a dataset."""

    def __init__(self, path):
        """Initializes the dataset path and gets a list of all folders in the dataset

        :param path: absolute path to the dataset
        """
        if os.path.isdir(os.path.abspath(path)):
            self.dataset_path = os.path.abspath(path)
        else:
            sys.exit("The specified file path does not exist")
        self.folders = dl.DirLister.get_directories(self.dataset_path)
        self.json_dict = {}
    def preprocess_directories_list(self, filter_names):
        """Removes all directories from the list that contain at least one of the given strings

        :param filter_names: list of names by which the folders should be filtered
        """
        self.folders = dl.DirLister.remove_dirs_by_name(self.folders, filter_names)

    def preprocess_file_list(self, filter_dict):
        """Removes all entries from the file lists that do not contain all of the specified filters.

        Sometimes (e.g. in segmentation masks) there are multiple representations of the data and one only wants to
        keep one. This function goes through self.json_dict name by name and, for each name, removes all
        entries from the 'files' entry that do not contain all of the strings in filter_dict[name].

        :param filter_dict: dictionary where the key corresponds to one of the names and the values correspond to the
            names which have to appear in the files
        """
        for key in filter_dict.keys():
            if key in self.json_dict['names']:
                index = self.json_dict['names'].index(key)
                self.json_dict['files'][index] = dl.DirLister.include_dirs_by_name(
                    self.json_dict['files'][index], filter_dict[key])
    def create_filelist(self, filters, ending, ignore=(), ambiguous_names_to_ignore=()):
        """Creates a filtered list of files inside self.folders

        :param filters: names which have to appear in the directory path
        :param ending: file ending of valid files
        :param ignore: list of strings. All files/folders containing these strings will be ignored
        :param ambiguous_names_to_ignore: Sometimes it is inevitable to have a filter name that also appears in every
            path name. In this case, these longer strings can be specified in this parameter and the folders and files
            will only be included if the filters appear outside of any of the ambiguous_names_to_ignore.
        :return: list of all filtered folders and a list of all filtered files inside these folders.
        """
        self.preprocess_directories_list(FOLDERS_TO_IGNORE)
        folders = dl.DirLister.include_dirs_by_name(self.folders, filters, ignore, ambiguous_names_to_ignore)
        folders = sorted(folders, key=str.lower)
        filelist = []
        for fold in folders:
            files = dl.DirLister.get_files_by_ending(fold, ending, ignore)
            filelist.extend(files)
        filelist = sorted(filelist, key=str.lower)
        return folders, filelist
    def dump_to_json(self, filename, remove_root=True):
        """Dumps the json list to a json file

        :param filename: name of the json file
        :param remove_root: if True, the root directory up to the dataset path is removed to store the images
            relative to the dataset path
        """
        dump_location = os.path.join(self.dataset_path, filename)
        if remove_root:
            root_stringlength = len(self.dataset_path) + 1
            for i in range(len(self.json_dict['folders'])):
                for j in range(len(self.json_dict['folders'][i])):
                    folder_file = self.json_dict['folders'][i][j]
                    self.json_dict['folders'][i][j] = folder_file[root_stringlength:]
            for i in range(len(self.json_dict['files'])):
                for j in range(len(self.json_dict['files'][i])):
                    image_file = self.json_dict['files'][i][j]
                    self.json_dict['files'][i][j] = image_file[root_stringlength:]
        with open(dump_location, 'w') as fp:
            json.dump(self.json_dict, fp)
    def create_json_from_list(self, json_list, stereo_replace):
        """Creates a dictionary in the format of the basic_files.json.

        Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
        with the entries from the dataset folder based on the information in the given dictionary.
        This method has to be implemented for each dataset-specific child class.

        :param json_list: dataset-specific dictionary of the form
            {'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
             'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
             'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
        :param stereo_replace: dictionary that defines the strings that have to be interchanged in order to get the
            right stereo image from the left stereo image: {left_image_string: right_image_string}
        """
        raise NotImplementedError
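The `positions` bookkeeping that every subclass implements can be illustrated on a toy frame list. The sketch below (our own standalone reduction of the subclasses' loop, assuming sequence breaks show up as non-consecutive frame numbers) produces the same 4-tuples for the main modality: global index, frames before in the sequence, frames after in the sequence, local index.

```python
def sequence_positions(frame_numbers):
    """Compute (global_idx, n_before, n_after, local_idx) per frame."""
    # detect sequence limits: a new sequence starts where numbering is not consecutive
    lower = [0]
    upper = []
    for i in range(1, len(frame_numbers)):
        if frame_numbers[i] != frame_numbers[i - 1] + 1:
            upper.append(i - 1)
            lower.append(i)
    upper.append(len(frame_numbers) - 1)
    positions = []
    idx = 0
    for j in range(len(frame_numbers)):
        if idx < len(lower) - 1 and j == lower[idx + 1]:
            idx += 1
        positions.append((j, j - lower[idx], upper[idx] - j, j))
    return positions

print(sequence_positions([0, 1, 2, 10, 11]))
# two sequences: frames 0-2 and frames 10-11
```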
class KITTIFilelistCreator(FilelistCreator):
    """Class to create the KITTI file list"""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def create_json_from_list(self, json_list, stereo_replace):
        """Creates a dictionary in the format of the basic_files.json.

        Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
        with the entries from the dataset folder based on the information in the given dictionary.

        :param json_list: dataset-specific dictionary of the form
            {'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
             'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
             'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
        :param stereo_replace: dictionary that defines the strings that have to be interchanged in order to get the
            right stereo image from the left stereo image: {left_image_string: right_image_string}
        """
        folders_list = []
        file_list = []
        position_list = []
        numerical_list = []
        main_files = []
        for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
                                         json_list['types'], json_list['filters']):
            folders, files = self.create_filelist(filter, type)
            folders_list.append(folders)
            file_list.append(files)
            # positions contains 4-tuples, where the single entries have the following meaning:
            # 1. global position inside the dataset (sorted by frame number and sequence)
            # 2. number of preceding frames in the sequence
            # 3. number of frames in the sequence after the current frame
            # 4. local position inside the list of the elements (e.g. if depth has 20000 elements but color has 40000,
            #    then the first entry will contain the mapping from depth to color and the fourth entry will contain
            #    numbers from 0 to 20000)
            positions = []
            lower_limit = [0]
            upper_limit = []
            old_frame_number = None
            new_frame_number = None
            # get the sequence limits (upper and lower)
            for file in files:
                old_frame_number = new_frame_number
                new_frame_number = int(os.path.splitext(os.path.split(file)[1])[0])
                if old_frame_number != new_frame_number - 1 and old_frame_number is not None:
                    upper_limit.append(files.index(file) - 1)
                    lower_limit.append(files.index(file))
            upper_limit.append(len(files) - 1)
            index = 0
            # get the position entries and the file names of all image files; numerical values are handled
            # differently
            for j, file in zip(range(len(files)), files):
                file = file.split(self.dataset_path + os.sep)[1]
                file = os.path.join(*file.split(os.sep)[1:])
                if index < len(lower_limit) - 1 and j == lower_limit[index + 1]:
                    index += 1
                if i == 0:
                    positions.append((len(positions), j - lower_limit[index], upper_limit[index] - j, j))
                    main_files.append(file)
                else:
                    if 'right' in name:
                        for key in stereo_replace.keys():
                            file = file.replace(stereo_replace[key], key)
                    positions.append((main_files.index(file), j - lower_limit[index], upper_limit[index] - j, j))
            position_list.append(positions)
            numerical_list.append(None)
            print('name: ', name, 'num_items: ', len(files))

        # camera intrinsic parameters
        camera_intrinsics = []
        camera_intrinsics_right = []
        json_list['names'].extend(['camera_intrinsics', 'camera_intrinsics_right'])
        json_list['types'].extend(['.txt', '.txt'])
        json_list['filters'].extend([['Raw_data'], ['Raw_data']])
        for file in main_files:
            base = file.split(os.sep)[0]
            # get the corresponding calibration files; every color frame has a calibration file
            if 'test' in file:
                param_file_name = os.path.split(file)[1].replace('png', 'txt')
                calib_file = os.path.join(self.dataset_path, 'Raw_data', base, 'intrinsics', param_file_name)
                left = open(calib_file).readlines()[0][6:].split()
                right = open(calib_file).readlines()[0][6:].split()
            else:
                calib_file = os.path.join(self.dataset_path, 'Raw_data', base, 'calib_cam_to_cam.txt')
                left = open(calib_file).readlines()[:20][-1][6:].split()
                right = open(calib_file).readlines()[:28][-1][6:].split()
            left_matrix = np.eye(4)
            right_matrix = np.eye(4)
            left_matrix[:3, :3] = np.array([float(l) for l in left]).reshape((3, 3))
            right_matrix[:3, :3] = np.array([float(r) for r in right]).reshape((3, 3))
            left_matrix = list(left_matrix)
            right_matrix = list(right_matrix)
            left_matrix = [list(l) for l in left_matrix]
            right_matrix = [list(r) for r in right_matrix]
            camera_intrinsics.append(left_matrix)
            camera_intrinsics_right.append(right_matrix)
        print('camera_intrinsics:', len(camera_intrinsics))
        print('camera_intrinsics_right:', len(camera_intrinsics_right))
        folders_list.extend([folders_list[0], folders_list[1]])
        position_list.extend([position_list[0], position_list[0]])
        file_list.extend([file_list[0].copy(), file_list[1].copy()])
        numerical_list.extend([camera_intrinsics, camera_intrinsics_right])

        # velocity and timestamps
        json_list['names'].extend(['timestamp', 'velocity'])
        json_list['types'].extend(['.txt', '.txt'])
        json_list['filters'].extend([['Raw_data', 'oxts'], ['Raw_data', 'oxts']])
        folders, files = self.create_filelist(['Raw_data', 'oxts'], '.txt')
        folders_vel = dl.DirLister.include_dirs_by_name(folders, 'data')
        folders_time = [os.path.split(f)[0] for f in folders_vel]
        files.extend([os.path.join(f, 'timestamps.txt') for f in folders_time])
        folders_list.extend([folders_time, folders_vel])
        times = []
        velocities = []
        for file in files:
            if 'timestamps' in file:
                temp_time = np.array(pd.read_csv(file, header=None, delimiter=' ')[1].values)
                time = [float(t.split(':')[0]) * 3600 + float(t.split(':')[1]) * 60 + float(t.split(':')[2])
                        for t in temp_time]
                times.extend(time)
            if '00000' in file:
                temp_data = np.array(pd.read_csv(file, header=None, delimiter=' ').values)[0]
                velocity = np.sqrt(temp_data[8] ** 2 + temp_data[9] ** 2 + temp_data[10] ** 2)
                velocities.append(velocity)
        file_list.extend([file_list[0][:len(times)], file_list[0][:len(velocities)]])
        position_list.extend([position_list[0][:len(times)], position_list[0][:len(velocities)]])
        numerical_list.extend([times, velocities])
        print('timestamps:', len(times))
        print('velocities:', len(velocities))

        # poses for odometry evaluation (odometry dataset needed if desired!)
        json_list['names'].extend(['poses'])
        json_list['types'].extend(['.txt'])
        json_list['filters'].extend([['Raw_data']])
        folders, files = self.create_filelist('poses', '.txt')
        if files:
            poses = []
            drives = []
            frame_numbers = []
            # defined officially in the odometry set; seq. 8 is not available in the raw data
            frame_min_max_sequence = [[0, 270],
                                      [0, 2760],
                                      [0, 1100],
                                      [0, 1100],
                                      [1100, 5170],
                                      [0, 1590],
                                      [0, 1200],
                                      [0, 4540],
                                      [0, 4660],
                                      [0, 1100]]
            # get the poses from file
            for min_max, file in zip(frame_min_max_sequence, files):
                for i in range(min_max[0], min_max[1] + 1, 1):
                    frame_numbers.append(i)
                drive = file.split(os.sep)[-3]
                for i in range(min_max[0], min_max[1] + 1, 1):
                    drives.append(drive)
                pose = np.loadtxt(file).reshape((-1, 3, 4))
                for p in pose:
                    p = list(p)
                    p = [list(i) for i in p]
                    poses.append(p)
            counter = 0
            positions = []
            files = []
            # get the corresponding filenames and positions from the poses
            lower_limit = [0]
            upper_limit = []
            old_frame_number = None
            new_frame_number = None
            for i, number in zip(range(len(frame_numbers)), frame_numbers):
                old_frame_number = new_frame_number
                new_frame_number = number
                if old_frame_number != new_frame_number - 1 and old_frame_number is not None:
                    upper_limit.append(i - 1)
                    lower_limit.append(i)
            upper_limit.append(len(frame_numbers) - 1)
            index = 0
            # append the values of positions and files to the data dict
            for i, file in zip(range(len(main_files)), main_files):
                main_drive = file.split(os.sep)[-4]
                main_frame_number = int(os.path.splitext(os.path.split(file)[1])[0])
                if main_drive == drives[counter] and main_frame_number == frame_numbers[counter]:
                    temp_position = (main_files.index(file), counter - lower_limit[index],
                                     upper_limit[index] - counter, counter)
                    positions.append(temp_position)
                    files.append(file_list[0][main_files.index(file)])
                    counter += 1
                    if index < len(lower_limit) - 1 and counter == lower_limit[index + 1]:
                        index += 1
                    if counter == len(frame_numbers):
                        break
            print('poses:', len(poses))
            position_list.append(positions)
            file_list.append(files)
            numerical_list.append(poses)
            folders_list.append(folders)

        # save the json dict
        json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
                          'numerical_values': numerical_list})
        self.json_dict = json_list
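The oxts velocity handling above reduces to the Euclidean norm of the three velocity components read from the file. A toy check (component values invented, chosen so the result is exact):

```python
import math

# forward, leftward, upward velocity components in m/s (hypothetical values)
vf, vl, vu = 3.0, 4.0, 12.0
speed = math.sqrt(vf ** 2 + vl ** 2 + vu ** 2)
# sqrt(9 + 16 + 144) = sqrt(169) = 13.0
print(speed)
```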
class KITTI2015FilelistCreator(FilelistCreator):
    """Class to create the KITTI 2015 file list"""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def create_json_from_list(self, json_list, stereo_replace):
        """Creates a dictionary in the format of the basic_files.json.

        Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
        with the entries from the dataset folder based on the information in the given dictionary.

        :param json_list: dataset-specific dictionary of the form
            {'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
             'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
             'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
        :param stereo_replace: dictionary that defines the strings that have to be interchanged in order to get the
            right stereo image from the left stereo image: {left_image_string: right_image_string}
        """
        folders_list = []
        file_list = []
        position_list = []
        numerical_list = []
        main_files = []
        for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
                                         json_list['types'], json_list['filters']):
            folders, files = self.create_filelist(filter, type)
            folders_list.append(folders)
            file_list.append(files)
            positions = []
            lower_limit = [0]
            upper_limit = []
            old_frame_number = None
            new_frame_number = None
            for file in files:
                old_frame_number = new_frame_number
                new_frame_number = int(os.path.splitext(os.path.split(file)[1])[0].split('_')[1])
                if old_frame_number != new_frame_number - 1 and old_frame_number is not None:
                    upper_limit.append(files.index(file) - 1)
                    lower_limit.append(files.index(file))
            upper_limit.append(len(files) - 1)
            index = 0
            for j, file in zip(range(len(files)), files):
                folder = os.path.split(os.path.split(os.path.split(file)[0])[0])[1]
                file = os.path.split(file)[1]
                file = os.path.join(folder, file)
                if index < len(lower_limit) - 1 and j == lower_limit[index + 1]:
                    index += 1
                if i == 0:
                    positions.append((len(positions), j - lower_limit[index], upper_limit[index] - j, j))
                    main_files.append(file)
                else:
                    if 'right' in name:
                        for key in stereo_replace.keys():
                            file = file.replace(stereo_replace[key], key)
                    positions.append((main_files.index(file), j - lower_limit[index], upper_limit[index] - j, j))
            position_list.append(positions)
            numerical_list.append(None)
            print('name: ', name, 'num_items: ', len(files))

        # camera intrinsic parameters
        camera_intrinsics = []
        camera_intrinsics_right = []
        json_list['names'].extend(['camera_intrinsics', 'camera_intrinsics_right'])
        json_list['types'].extend(['.txt', '.txt'])
        json_list['filters'].extend(['calib_cam_to_cam', 'calib_cam_to_cam'])
        for file in main_files:
            base = file.split(os.sep)[1].split('_')[0]
            folder = file.split(os.sep)[0]
            param_file_name = base + '.txt'
            calib_file = os.path.join(self.dataset_path, folder, 'calib_cam_to_cam', param_file_name)
            left = open(calib_file).readlines()[:20][-1][6:].split()
            right = open(calib_file).readlines()[:28][-1][6:].split()
            left_matrix = np.eye(4)
            right_matrix = np.eye(4)
            left_matrix[:3, :3] = np.array([float(l) for l in left]).reshape((3, 3))
            right_matrix[:3, :3] = np.array([float(r) for r in right]).reshape((3, 3))
            left_matrix = list(left_matrix)
            right_matrix = list(right_matrix)
            left_matrix = [list(l) for l in left_matrix]
            right_matrix = [list(r) for r in right_matrix]
            camera_intrinsics.append(left_matrix)
            camera_intrinsics_right.append(right_matrix)
        print('camera_intrinsics:', len(camera_intrinsics))
        print('camera_intrinsics_right:', len(camera_intrinsics_right))
        folders_list.extend([folders_list[0], folders_list[1]])
        position_list.extend([position_list[0], position_list[0]])
        file_list.extend([file_list[0].copy(), file_list[1].copy()])
        numerical_list.extend([camera_intrinsics, camera_intrinsics_right])
        json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
                          'numerical_values': numerical_list})
        self.json_dict = json_list
class VirtualKITTIFilelistCreator(FilelistCreator):
    """Class to create the Virtual KITTI file list"""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def create_json_from_list(self, json_list, stereo_replace=None):
        """Creates a dictionary in the format of the basic_files.json.

        Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
        with the entries from the dataset folder based on the information in the given dictionary.

        :param json_list: dataset-specific dictionary of the form
            {'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
             'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
             'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
        :param stereo_replace: not used for this dataset
        """
        folders_list = []
        file_list = []
        position_list = []
        numerical_list = []
        main_files = []
        for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
                                         json_list['types'], json_list['filters']):
            folders, files = self.create_filelist(filter, type)
            folders_list.append(folders)
            file_list.append(files)
            # positions contains 4-tuples, where the single entries have the following meaning:
            # 1. global position inside the dataset (sorted by frame number and sequence)
            # 2. number of preceding frames in the sequence
            # 3. number of frames in the sequence after the current frame
            # 4. local position inside the list of the elements (e.g. if depth has 20000 elements but color has 40000,
            #    then the first entry will contain the mapping from depth to color and the fourth entry will contain
            #    numbers from 0 to 20000)
            positions = []
            lower_limit = [0]  # indices of the starting frames of each video
            upper_limit = []   # indices of the end frames of each video
            old_frame_number = None
            new_frame_number = None
            # get the sequence limits (upper and lower)
            for file in files:
                old_frame_number = new_frame_number
                new_frame_number = int(os.path.splitext(os.path.split(file)[1])[0])
                # detect start of a new video
                if old_frame_number != new_frame_number - 1 and old_frame_number is not None:
                    upper_limit.append(files.index(file) - 1)
                    lower_limit.append(files.index(file))
            upper_limit.append(len(files) - 1)
            index = 0
            # get the position entries and the file names of all image files; numerical values are handled
            # differently
            for j, file in zip(range(len(files)), files):
                file = file.split(self.dataset_path + os.sep)[1]
                file = os.path.join(*file.split(os.sep)[1:])
                # detect start of a new video
                if index < len(lower_limit) - 1 and j == lower_limit[index + 1]:
                    index += 1
                if i == 0:
                    positions.append((len(positions), j - lower_limit[index], upper_limit[index] - j, j))
                    main_files.append(file)
                else:
                    positions.append((main_files.index(file), j - lower_limit[index], upper_limit[index] - j, j))
            position_list.append(positions)
            numerical_list.append(None)
            print('name: ', name, 'num_items: ', len(files))
        json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
                          'numerical_values': numerical_list})
        self.json_dict = json_list
class CityscapesFilelistCreator(FilelistCreator):
"""Class to create the Cityscapes file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def create_json_from_list(self, json_list, stereo_replace):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: dictionary that defines the strings that have to be interchanged in order to get the
right stereo image from the left stereo image: {left_image_string: right_image_string}
"""
# string sequences from corrupted files that should be ignored
ignore_extra = ['troisdorf_000000_000073']
ignore_video_left = ['frankfurt_000000_006434', 'frankfurt_000001_023592', 'frankfurt_000001_038767']
ignore_video_right = ['frankfurt_000000_022587', 'frankfurt_000001_026781', 'frankfurt_000001_059933',
'frankfurt_000001_059934', 'frankfurt_000001_060157', 'frankfurt_000001_070159',
'frankfurt_000001_083533']
ignore_list = ignore_extra + ignore_video_left + ignore_video_right
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
# cityscapes_extra dataset
if 'gtCoarse' in filter:
filter.append('train_extra')
folders, files = self.create_filelist(filter, type, ignore=ignore_list)
if 'segmentation' in name:
files = [f for f in files if 'color' not in f and 'instance' not in f]
folders_list.append(folders)
file_list.append(files)
# Find the splits between the video sequences
positions = []
lower_limit = [0]
upper_limit = []
old_frame_number = None
new_frame_number = None
for file in files:
old_frame_number = new_frame_number
img_filename = os.path.splitext(os.path.split(file)[1])[0]
new_frame_number = int(img_filename.split('_')[2])
previous_frame_name = '_'.join(img_filename.split('_')[:2]) + '_' + str(new_frame_number-1).zfill(6)
if previous_frame_name in ignore_list:
# Skip the ignored file while checking whether the images belong to the same sequence
if old_frame_number not in [new_frame_number - 2, new_frame_number - 3] \
and old_frame_number is not None:
upper_limit.append(files.index(file) - 1)
lower_limit.append(files.index(file))
elif old_frame_number != new_frame_number - 1 and old_frame_number is not None:
upper_limit.append(files.index(file) - 1)
lower_limit.append(files.index(file))
upper_limit.append(len(files) - 1)
# Create the positions entries
index = 0
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if index < len(lower_limit) - 1 and j == lower_limit[index + 1]:
index += 1
if i == 0:
positions.append((len(positions), j - lower_limit[index], upper_limit[index] - j, j))
main_files.append('_'.join(file.split('_')[:-1]))
else:
if 'right' in name:
for key in stereo_replace.keys():
file = file.replace(stereo_replace[key], key)
if 'segmentation' in name:
positions.append((main_files.index('_'.join(file.split('_')[:-2])),
j - lower_limit[index], upper_limit[index] - j, j))
else:
positions.append((main_files.index('_'.join(file.split('_')[:-1])),
j - lower_limit[index], upper_limit[index] - j, j))
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
# camera intrinsics
main_folder = file_list[0][0].split(self.dataset_path + os.sep)[1]
main_folder = main_folder.split(os.sep)[0]
if os.path.isdir(os.path.join(self.dataset_path, 'camera')):
camera_intrinsics = []
camera_intrinsics_right = []
json_list['names'].extend(['camera_intrinsics', 'camera_intrinsics_right'])
json_list['types'].extend(['.txt', '.txt'])
json_list['filters'].extend([['camera'], ['camera']])
counter = 0
for file in main_files:
if ('mainz' in file or 'bielefeld' in file or 'bochum' in file or 'frankfurt' in file\
or 'hamburg' in file or 'hanover' in file or 'krefeld' in file or 'strasbourg' in file\
or 'monchengladbach' in file) and 'sequence' in main_folder:
transformed_file = file.split('_')
transformed_file[-1] = int(transformed_file[-1]) + 19 - counter
counter += 1
if counter % 30 == 0:
counter = 0
transformed_file[-1] = str(transformed_file[-1]).zfill(6)
calib_file = os.path.join(self.dataset_path, 'camera', '_'.join(transformed_file) + '_camera.json')
elif 'sequence' in main_folder:
calib_file = os.path.join(self.dataset_path, 'camera', file[:-2] + '19' + '_camera.json')
else:
calib_file = os.path.join(self.dataset_path, 'camera', file + '_camera.json')
with open(calib_file) as f:
intrinsic_json = json.load(f)
left_matrix = np.eye(4)
left_matrix[0, 0] = intrinsic_json['intrinsic']['fx']
left_matrix[1, 1] = intrinsic_json['intrinsic']['fy']
left_matrix[0, 2] = intrinsic_json['intrinsic']['u0']
left_matrix[1, 2] = intrinsic_json['intrinsic']['v0']
left_matrix = list(left_matrix)
left_matrix = [list(l) for l in left_matrix]
right_matrix = [row.copy() for row in left_matrix]  # copy so the two entries do not alias the same row lists
camera_intrinsics.append(left_matrix)
camera_intrinsics_right.append(right_matrix)
print('camera_intrinsics:', len(camera_intrinsics))
print('camera_intrinsics_right:', len(camera_intrinsics_right))
folders_list.extend([folders_list[0].copy(), folders_list[1].copy()])
position_list.extend([position_list[0].copy(), position_list[0].copy()])
file_list.extend([file_list[0].copy(), file_list[0].copy()])
numerical_list.extend([camera_intrinsics, camera_intrinsics_right])
# velocity and timestamps
timestamps = False
# Cityscapes_sequence
if os.path.isdir(os.path.join(self.dataset_path, 'timestamp_sequence')):
folders_time, files_time = self.create_filelist(['timestamp_sequence'], '.txt', ignore=ignore_list)
folders_vel, files_vel = self.create_filelist(['vehicle_sequence'], '.json', ignore=ignore_list)
timestamps = True
# Cityscapes_video
elif os.path.isdir(os.path.join(self.dataset_path, 'timestamp_allFrames')):
folders_time, files_time = self.create_filelist(['timestamp_allFrames'], '.txt', ignore=ignore_list)
folders_vel, files_vel = self.create_filelist(['vehicle_allFrames'], '.json', ignore=ignore_list)
timestamps = True
# Cityscapes_standard & extra
elif os.path.isdir(os.path.join(self.dataset_path, 'vehicle')):
folders_vel, files_vel = self.create_filelist(['vehicle'], '.json', ignore=ignore_list)
if timestamps:
times = []
json_list['names'].extend(['timestamp'])
json_list['types'].extend(['.txt'])
json_list['filters'].extend([['timestamp']])
for file in files_time:
time = float(np.array(pd.read_csv(file, header=None).values)) / (10 ** 9)
times.append(time)
print('timestamps:', len(times))
folders_list.extend([folders_time])
file_list.extend([file_list[0].copy()])
position_list.extend([position_list[0].copy()])
numerical_list.extend([times])
json_list['names'].extend(['velocity'])
json_list['types'].extend(['.json'])
json_list['filters'].extend([['vehicle']])
velocities = []
for file in files_vel:
with open(file) as json_file:
velocity = float(json.load(json_file)['speed'])
velocities.append(velocity)
print('velocities:', len(velocities))
folders_list.extend([folders_vel])
file_list.extend([file_list[0].copy()])
position_list.extend([position_list[0].copy()])
numerical_list.extend([velocities])
# Save everything in the json_list
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
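The intrinsics handling above reads a Cityscapes-style camera JSON and fills a 4x4 matrix. A minimal standalone sketch of that step, with a hypothetical helper name and sample values, assuming the `{'intrinsic': {'fx', 'fy', 'u0', 'v0'}}` layout used above:

```python
import json

import numpy as np


def intrinsics_from_camera_json(text):
    """Build a JSON-serializable 4x4 intrinsics matrix from a
    Cityscapes-style camera JSON string."""
    intrinsic = json.loads(text)['intrinsic']
    K = np.eye(4)
    K[0, 0] = intrinsic['fx']
    K[1, 1] = intrinsic['fy']
    K[0, 2] = intrinsic['u0']
    K[1, 2] = intrinsic['v0']
    # nested lists serialize to JSON, plain ndarrays do not
    return [list(row) for row in K]

sample = '{"intrinsic": {"fx": 2262.52, "fy": 2265.3, "u0": 1096.98, "v0": 513.137}}'
K = intrinsics_from_camera_json(sample)
```

The nested-list conversion mirrors the `[list(l) for l in left_matrix]` step above, which keeps the matrix dumpable with `json.dump`.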
class MapillaryFilelistCreator(FilelistCreator):
"""Class to create the Mapillary file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type, ignore=(os.path.join('test', 'Segmentation')))
positions = []
for j in range(len(files)):
if filter == 'Segmentation':
positions.append([j+5000, 0, 0, j+5000])
else:
positions.append([j, 0, 0, j])
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class Gta5FilelistCreator(FilelistCreator):
"""Class to create the GTA5 file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
positions = []
for j in range(len(files)):
positions.append([j, 0, 0, j])
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class SynthiaFilelistCreator(FilelistCreator):
"""Class to create the Snythia file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
positions = []
for j in range(len(files)):
positions.append([j, 0, 0, j])
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class Bdd100kFilelistCreator(FilelistCreator):
"""Class to create the Bdd100k file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _generate_file_identifier(self, filename):
"""
Generates an identifier that separates different images from each other but is the same for corresponding
images of different types, e.g. for corresponding color, depth and segmentation images.
:param filename: Filename from which the identifier will be extracted
:return: file identifier
"""
filename = os.path.split(filename)[1]
filename = os.path.splitext(filename)[0]
filename = filename.split('_')
return filename[0]
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
self.preprocess_directories_list(['color_labels'])
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
positions = []
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if i == 0: # color image
positions.append((len(positions), 0, 0, j))
main_files.append(self._generate_file_identifier(file))
else: # segmentation
positions.append((main_files.index(self._generate_file_identifier(file)), 0, 0, j))
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
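The `main_files.index(...)` pattern above aligns each secondary modality (e.g. segmentation) with its color counterpart via a shared identifier. A standalone sketch of that alignment, with a hypothetical helper name:

```python
def align_positions(main_ids, other_ids):
    """For each secondary file identifier, emit a positions tuple whose
    first entry is the index of its counterpart in the main (color) list."""
    return [(main_ids.index(ident), 0, 0, j) for j, ident in enumerate(other_ids)]

print(align_positions(['a', 'b', 'c'], ['c', 'a']))  # [(2, 0, 0, 0), (0, 0, 0, 1)]
```

The zero middle entries match the single-image case above (no preceding or following frames in a sequence); `list.index` raises `ValueError` if a secondary file has no color counterpart, which surfaces dataset mismatches early.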
class Voc2012FilelistCreator(FilelistCreator):
"""Class to create the Pascal Voc 2012 file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
positions = []
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if i == 0: # color image
positions.append((len(positions), 0, 0, j))
main_files.append(file.split('.')[0])
else: # segmentation
positions.append((main_files.index(file.split('.')[0]), 0, 0, j))
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class A2d2FilelistCreator(FilelistCreator):
"""Class to create the a2d2 file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _generate_file_identifier(self, filename):
"""
Generates an identifier that separates different images from each other but is the same for corresponding
images of different types, e.g. for corresponding color, depth and segmentation images.
:param filename: Filename from which the identifier will be extracted
:return: file identifier
"""
filename = os.path.split(filename)[1]
filename = filename.split('_')
return filename[0] + '_' + filename[2] + '_' + filename[3].split('.')[0]
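The identifier rule above keeps the first, third, and fourth underscore-separated parts of the basename. A standalone sketch against a hypothetical a2d2-style filename (the sample name below is illustrative, not taken from the dataset):

```python
import os


def a2d2_identifier(filename):
    """Mirror of the rule above: drop the second part (sensor type) and
    the file extension, keep the remaining parts joined by underscores."""
    parts = os.path.split(filename)[1].split('_')
    return parts[0] + '_' + parts[2] + '_' + parts[3].split('.')[0]

print(a2d2_identifier('20180807145028_camera_frontcenter_000000091.png'))
```

Because the sensor-type part is dropped, a color image and its label image map to the same identifier even though their second name component differs.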
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type, ambiguous_names_to_ignore='camera_lidar_semantic')
positions = []
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if i == 0: # color image
positions.append((len(positions), 0, 0, j))
main_files.append(self._generate_file_identifier(file))
else: # segmentation
positions.append((main_files.index(self._generate_file_identifier(file)), 0, 0, j))
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class LostAndFoundFilelistCreator(FilelistCreator):
"""Class to create the LostAndFound file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
if 'segmentation' in name:
files = [f for f in files if 'color' not in f and 'instance' not in f and 'Train' not in f]
folders_list.append(folders)
file_list.append(files)
positions = []
lower_limit = [0]
upper_limit = []
old_frame_number = None
new_frame_number = None
old_seq_number = None
new_seq_number = None
frame_indicator_pos = {'color': -2, 'segmentation': -3}
for file in files:
old_frame_number = new_frame_number
old_seq_number = new_seq_number
img_filename = os.path.splitext(os.path.split(file)[1])[0]
if 'color' in name:
new_frame_number = int(img_filename.split('_')[frame_indicator_pos['color']])
new_seq_number = int(img_filename.split('_')[frame_indicator_pos['color']-1])
elif 'segmentation' in name:
new_frame_number = int(img_filename.split('_')[frame_indicator_pos['segmentation']])
new_seq_number = int(img_filename.split('_')[frame_indicator_pos['segmentation']-1])
if old_seq_number != new_seq_number and old_seq_number is not None:
upper_limit.append(files.index(file) - 1)
lower_limit.append(files.index(file))
upper_limit.append(len(files) - 1)
index = 0
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if index < len(lower_limit) - 1 and j == lower_limit[index + 1]:
index += 1
if i == 0:
positions.append((len(positions), j - lower_limit[index], upper_limit[index] - j, j))
main_files.append('_'.join(file.split('_')[:-1]))
else:
if 'segmentation' in name:
positions.append((main_files.index('_'.join(file.split('_')[:-2])),
j - lower_limit[index], upper_limit[index] - j, j))
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class CamVidFilelistCreator(FilelistCreator):
"""Class to create the CamVid file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _generate_file_identifier(self, filename):
"""
Generates an identifier that separates different images from each other but is the same for corresponding
images of different types, e.g. for corresponding color, depth and segmentation images.
:param filename: Filename from which the identifier will be extracted
:return: file identifier
"""
filename = os.path.split(filename)[1]
filename = os.path.splitext(filename)[0]
filename = filename.split('_')
return filename[0] + '_' + filename[1]
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
positions = []
lower_limit = [0]
upper_limit = []
frame_number = 0
new_seq_id = None
seq_number = 0
for file in files:
old_seq_id = new_seq_id
new_seq_id = self._generate_file_identifier(file)[0]
if new_seq_id != old_seq_id and old_seq_id is not None:
upper_limit.append(files.index(file) - 1)
lower_limit.append(files.index(file))
seq_number += 1
upper_limit.append(len(files) - 1)
index = 0
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if index < len(lower_limit) - 1 and j == lower_limit[index + 1]:
index += 1
if i == 0: # color image
positions.append((len(positions), j - lower_limit[index], upper_limit[index] - j, j))
main_files.append(self._generate_file_identifier(file))
else: # segmentation
positions.append((main_files.index(self._generate_file_identifier(file)), j - lower_limit[index],
upper_limit[index] - j, j))
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class Make3dFilelistCreator(FilelistCreator):
"""Class to create the make3d file list"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _generate_file_identifier(self, filename):
"""
Generates an identifier that separates different images from each other but is the same for corresponding
images of different types, e.g. for corresponding color, depth and segmentation images.
:param filename: Filename from which the identifier will be extracted
:return: file identifier
"""
filename = os.path.split(filename)[1]
filename = os.path.splitext(filename)[0]
filename = filename.split("img-")[-1]
filename = filename.split("depth_sph_corr-")[-1]
return filename
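The make3d identifier above strips whichever prefix is present (`img-` for color, `depth_sph_corr-` for depth) so both modalities map to the same key. A standalone sketch, with a hypothetical helper name and illustrative filenames:

```python
import os


def make3d_identifier(filename):
    """Strip either the color or the depth prefix from the basename so
    corresponding images share one identifier."""
    name = os.path.splitext(os.path.split(filename)[1])[0]
    name = name.split('img-')[-1]
    name = name.split('depth_sph_corr-')[-1]
    return name

print(make3d_identifier('img-stats1.jpg'))             # stats1
print(make3d_identifier('depth_sph_corr-stats1.mat'))  # stats1
```

`str.split(prefix)[-1]` is a no-op when the prefix is absent, so applying both splits in sequence handles either naming scheme without branching.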
def create_json_from_list(self, json_list, stereo_replace=None):
"""Creates a dictionary in the format of the basic_files.json.
Takes a dictionary with the dataset-specific names and file endings and fills the dictionary self.json_dict
with the entries from the dataset folder based on the information in the given dictionary.
:param json_list: dataset-specific dictionary of the form
{'names': [list of all data categories that this dataset provides, e.g. 'color', 'depth', ...],
'types': [list of the corresponding file types, e.g. '.png', '.txt', ...],
'filters': [list of the corresponding filters to identify the folders for each name, e.g. 'camera', ...]}
:param stereo_replace: not used for this dataset
"""
folders_list = []
file_list = []
position_list = []
numerical_list = []
main_files = []
for i, name, type, filter in zip(range(len(json_list['names'])), json_list['names'],
json_list['types'], json_list['filters']):
folders, files = self.create_filelist(filter, type)
positions = []
for j, file in zip(range(len(files)), files):
file = file.split(self.dataset_path + os.sep)[1]
file = os.path.join(*file.split(os.sep)[1:])
if i == 0: # color image
positions.append((len(positions), 0, 0, j))
main_files.append(self._generate_file_identifier(file))
else: # segmentation
positions.append((main_files.index(self._generate_file_identifier(file)), 0, 0, j))
folders_list.append(folders)
file_list.append(files)
position_list.append(positions)
numerical_list.append(None)
print('name: ', name, 'num_items: ', len(files))
json_list.update({'folders': folders_list, 'files': file_list, 'positions': position_list,
'numerical_values': numerical_list})
self.json_dict = json_list
class DatasetCreator:
"""Class to create the dataset file list for different datasets"""
def __init__(self, dataset, path=None, rewrite=False):
"""Initializes the dataset name and path
:param dataset: name of the dataset folder
:param path: optional user-defined path that overrides the one obtained automatically from get_path (not recommended)
:param rewrite: set to True to force the file list to be rewritten
"""
assert dataset in SUPPORTED_DATASETS, 'Dataset not supported'
self.data_dict = {'cityscapes': self.create_cityscapeslists,
'cityscapes_video': self.create_cityscapeslists,
'cityscapes_sequence': self.create_cityscapeslists,
'cityscapes_extra': self.create_cityscapeslists,
'cityscapes_part': self.create_cityscapeslists,
'kitti': self.create_kittilists,
'kitti_2012': self.create_kitti2012lists,
'kitti_2015': self.create_kitti2015lists,
'virtual_kitti': self.create_virtualkittilists,
'mapillary': self.create_mapillarylists,
'mapillary_by_ID': self.create_mapillarylists,
'gta5': self.create_gta5lists,
'synthia': self.create_synthialists,
'bdd100k': self.create_bdd100klists,
'voc2012': self.create_voc2012lists,
'a2d2': self.create_a2d2lists,
'lostandfound': self.create_lostandfoundlists,
'camvid': self.create_camvidlists,
'make3d': self.create_make3dlists,
}
self.dataset = dataset
if path:
self.dataset_folder_path = path
else:
path_getter = gp.GetPath()
self.dataset_folder_path = path_getter.get_data_path()
self.filename = 'basic_files.json'
self.rewrite = rewrite
def check_state(self):
"""Checks whether the basic_files.json already exists and contains valid filenames"""
check = False
if self.rewrite:
return check
else:
json_file = os.path.join(self.dataset_folder_path, self.dataset, self.filename)
is_file = os.path.isfile(json_file)
if is_file:
with open(json_file) as file:
json_data = json.load(file)
is_valid_image = os.path.isfile(os.path.join(self.dataset_folder_path, self.dataset,
json_data['files'][0][0]))
if is_valid_image:
check = True
return check
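check_state above combines two conditions: the cached JSON must exist, and the first file it references must still resolve on disk. The standalone sketch below, with a hypothetical `cached_list_is_valid` helper, shows the same validity test against a throwaway directory:

```python
import json
import os
import tempfile


def cached_list_is_valid(root, filename='basic_files.json'):
    """Return True when a cached file list exists under root and the
    first file it references still resolves to a real file."""
    json_file = os.path.join(root, filename)
    if not os.path.isfile(json_file):
        return False
    with open(json_file) as f:
        data = json.load(f)
    return os.path.isfile(os.path.join(root, data['files'][0][0]))

# demo against an empty throwaway directory: no cache present yet
print(cached_list_is_valid(tempfile.mkdtemp()))  # False
```

Checking one referenced file is a cheap heuristic against stale caches, e.g. after the dataset was moved; `rewrite=True` in the class above bypasses it entirely.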
def create_dataset(self):
"""Executes the dataset-specific function that reads in the data and creates the basic_files.json"""
self.data_dict[self.dataset]()
def create_cityscapeslists(self):
"""Creates the basic_files.json for any dataset in the Cityscapes family"""
local_path = os.path.join(self.dataset_folder_path, self.dataset)
if 'extra' in self.dataset:
json_list = {
'names': ['color', 'color_right', 'depth', 'segmentation'],
'types': ['.png', '.png', '.png', '.png'],
'filters': [['leftImg8bit'],
['rightImg8bit'],
['disparity'],
['gtCoarse']]
}
elif 'video' in self.dataset:
json_list = {
'names': ['color', 'color_right'],
'types': ['.png', '.png'],
'filters': [['leftImg8bit'],
['rightImg8bit']]
}
else:
json_list = {
'names': ['color', 'color_right', 'depth', 'segmentation'],
'types': ['.png', '.png', '.png', '.png'],
'filters': [['leftImg8bit'],
['rightImg8bit'],
['disparity'],
['gtFine']]
}
creator = CityscapesFilelistCreator(local_path)
creator.preprocess_directories_list(['foggy', '_rain'])
creator.create_json_from_list(json_list, stereo_replace={'left': 'right'})
creator.dump_to_json(self.filename)
def create_kittilists(self):
"""Creates the basic_files.json for any dataset in the KITTI family"""
local_path = os.path.join(self.dataset_folder_path, self.dataset)
json_list = {'names': ['color', 'color_right', 'depth', 'depth_right', 'depth_improved', 'depth_improved_right',
'depth_processed', 'depth_processed_right', 'depth_processed_improved',
'depth_processed_improved_right', 'depth_completed', 'depth_completed_right',
'depth_completed_improved', 'depth_completed_improved_right'],
'types': ['.png', '.png', '.png', '.png', '.png', '.png', '.png', '.png', '.png', '.png',
'.png', '.png', '.png', '.png'],
'filters': [['Raw_data', 'image_02', 'data'],
['Raw_data', 'image_03', 'data'],
['Depth' + os.sep, 'image_02', 'data'],
['Depth' + os.sep, 'image_03', 'data'],
['Depth_improved' + os.sep, 'image_02', 'data'],
['Depth_improved' + os.sep, 'image_03', 'data'],
['Depth_processed' + os.sep, 'image_02', 'data'],
['Depth_processed' + os.sep, 'image_03', 'data'],
['Depth_processed_improved' + os.sep, 'image_02', 'data'],
['Depth_processed_improved' + os.sep, 'image_03', 'data'],
['Depth_completed' + os.sep, 'image_02', 'data'],
['Depth_completed' + os.sep, 'image_03', 'data'],
['Depth_completed_improved' + os.sep, 'image_02', 'data'],
['Depth_completed_improved' + os.sep, 'image_03', 'data']],
}
creator = KITTIFilelistCreator(local_path)
creator.preprocess_directories_list(['image_00', 'image_01', 'velodyne_points'])
creator.create_json_from_list(json_list, stereo_replace={'image_02': 'image_03'})
creator.dump_to_json(self.filename)
    def create_kitti2012lists(self):
        """Creates the basic_files.json for the KITTI 2012 dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {'names': ['color', 'color_right'],
                     'types': ['.png', '.png'],
                     'filters': [['image_2'],
                                 ['image_3']]
                     }
        creator = KITTI2015FilelistCreator(local_path)
        creator.create_json_from_list(json_list, stereo_replace={})
        creator.dump_to_json(self.filename)

    def create_kitti2015lists(self):
        """Creates the basic_files.json for the KITTI 2015 dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {'names': ['color', 'color_right', 'depth', 'depth_right',
                               'segmentation', 'flow', 'flow_noc'],
                     'types': ['.png', '.png', '.png', '.png', '.png', '.png', '.png'],
                     'filters': [['image_2'],
                                 ['image_3'],
                                 ['depth_occ_0'],
                                 ['depth_occ_1'],
                                 ['semantic'],
                                 ['flow_occ'],
                                 ['flow_noc']]
                     }
        creator = KITTI2015FilelistCreator(local_path)
        creator.preprocess_directories_list(['viz_flow'])
        creator.create_json_from_list(json_list, stereo_replace={})
        creator.dump_to_json(self.filename)
    def create_virtualkittilists(self):
        """Creates the basic_files.json for the Virtual KITTI dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {'names': ['color', 'depth', 'segmentation'],
                     'types': ['.png', '.png', '.png'],
                     'filters': [['vkitti_1.3.1_rgb'],
                                 ['vkitti_1.3.1_depthgt'],
                                 ['vkitti_1.3.1_scenegt']]
                     }
        creator = VirtualKITTIFilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)

    def create_mapillarylists(self):
        """Creates the basic_files.json for the Mapillary dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.jpg', '.png'],
            'filters': ['ColorImage', 'Segmentation']
        }
        creator = MapillaryFilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)
    def create_gta5lists(self):
        """Creates the basic_files.json for the GTA5 dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.png', '.png'],
            'filters': ['images', 'labels']
        }
        creator = Gta5FilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)

    def create_synthialists(self):
        """Creates the basic_files.json for the Synthia dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'depth', 'segmentation'],
            'types': ['.png', '.png', '.png'],
            'filters': ['RGB', 'Depth_1_channel', ['GT_1_channel', 'LABELS']]
        }
        creator = SynthiaFilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)
    def create_bdd100klists(self):
        """Creates the basic_files.json for the BDD100K dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.jpg', '.png'],
            'filters': ['images', 'labels']
        }
        creator = Bdd100kFilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)

    def create_voc2012lists(self):
        """Creates the basic_files.json for the Pascal VOC 2012 dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.jpg', '.png'],
            'filters': ['JPEGImages', 'SegmentationClassAug']
        }
        creator = Voc2012FilelistCreator(local_path)
        creator.preprocess_directories_list(['__MACOSX'])
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)
    def create_a2d2lists(self):
        """Creates the basic_files.json for the a2d2 dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.png', '.png'],
            'filters': [['camera', 'front_center'], ['label', 'front_center']]
        }
        creator = A2d2FilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)

    def create_lostandfoundlists(self):
        """Creates the basic_files.json for the LostAndFound dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.png', '.png'],
            'filters': ['leftImg8bit', 'gtCoarse']
        }
        creator = LostAndFoundFilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)
    def create_camvidlists(self):
        """Creates the basic_files.json for the CamVid dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'segmentation'],
            'types': ['.png', '.png'],
            'filters': ['701_StillsRaw_full', 'LabeledApproved_full']
        }
        creator = CamVidFilelistCreator(local_path)
        creator.preprocess_directories_list(['trainid'])
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)
    def create_make3dlists(self):
        """Creates the basic_files.json for the Make3D dataset"""
        local_path = os.path.join(self.dataset_folder_path, self.dataset)
        json_list = {
            'names': ['color', 'depth'],
            'types': ['.jpg', '.png'],
            'filters': ['ColorImage', 'Depth_PNG']
        }
        creator = Make3dFilelistCreator(local_path)
        creator.create_json_from_list(json_list)
        creator.dump_to_json(self.filename)
if __name__ == '__main__':
    """This script has to be executed on each platform, given the local path.
    Supported datasets so far:
    - a2d2
    - bdd100k
    - camvid
    - cityscapes
    - cityscapes_extra
    - cityscapes_sequence
    - cityscapes_standard
    - cityscapes_video
    - gta5
    - kitti
    - kitti_2012
    - kitti_2015
    - lostandfound
    - mapillary
    - mapillary_by_ID
    - synthia
    - virtual_kitti
    - voc2012
    Supported modes:
    A path can optionally be passed; in that case you are responsible for
    providing it in the right format. Otherwise the path is taken from get_path,
    which selects the right paths automatically on your machine; you may modify
    that file for your needs.
    The script creates a JSON file with the file names of all images that
    correspond to a certain class and saves it in the corresponding dataset
    directory.
    """
    dataset = 'bdd100k'
    if len(sys.argv) > 1:
        dataset = sys.argv[1]
    data_creator = DatasetCreator(dataset, rewrite=True)
    check = data_creator.check_state()
    if not check:
        data_creator.create_dataset()
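
# Example invocation: the dataset name is read from sys.argv[1] and defaults to
# 'bdd100k' if omitted. The script file name below is illustrative, not fixed:
#
#     python3 create_dataset_json.py kitti_2015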