# --- src/splits_creation.py | repo: ValentinMastro/av1_split_encode @ 27fcec1c | license: MIT | Python, 652 bytes ---
#!/usr/bin/env python3
from src.splits import Trim_split, Vapoursynth_split


def select_split_class(parameters, index, begin, end):
    if parameters.splitting_method == "ffmpeg_trim":
        return Trim_split(parameters, index, begin, end)
    elif parameters.splitting_method == "Vapoursynth":
        return Vapoursynth_split(parameters, index, begin, end)
    elif parameters.splitting_method == "MagicYUV":
        pass  # not implemented; falls through and returns None


def create_splits_from_keyframes(parameters, keyframes):
    splits = []
    for index in range(len(keyframes) - 1):
        begin, end = keyframes[index], keyframes[index + 1]
        splits.append(select_split_class(parameters, index, begin, end))
    return splits
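The dispatch above maps a string setting to a split class and turns each consecutive keyframe pair into one split. A minimal self-contained sketch of the same pattern, with stand-in classes and a `SimpleNamespace` for `parameters` (not the real `src.splits` types):

```python
from types import SimpleNamespace


# Stand-ins for the real Trim_split / Vapoursynth_split classes.
class Trim_split:
    def __init__(self, parameters, index, begin, end):
        self.kind, self.begin, self.end = "trim", begin, end


class Vapoursynth_split:
    def __init__(self, parameters, index, begin, end):
        self.kind, self.begin, self.end = "vs", begin, end


def select_split_class(parameters, index, begin, end):
    if parameters.splitting_method == "ffmpeg_trim":
        return Trim_split(parameters, index, begin, end)
    elif parameters.splitting_method == "Vapoursynth":
        return Vapoursynth_split(parameters, index, begin, end)


def create_splits_from_keyframes(parameters, keyframes):
    # Each consecutive keyframe pair [k_i, k_{i+1}) becomes one split.
    return [select_split_class(parameters, i, keyframes[i], keyframes[i + 1])
            for i in range(len(keyframes) - 1)]


params = SimpleNamespace(splitting_method="ffmpeg_trim")
splits = create_splits_from_keyframes(params, [0, 120, 240, 360])
print([(s.begin, s.end) for s in splits])  # [(0, 120), (120, 240), (240, 360)]
```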
# --- python/binary_search.py | repo: Hamng/python-sources @ 0cc5a5d9 | license: Unlicense | Python, 974 bytes ---
# -*- coding: utf-8 -*-
"""
Created on Mon Dec 14 13:29:10 2020
@author: Ham
"""
def locate(ar, target):
lft = 0
rgt = len(ar) - 1
while lft <= rgt:
mid = (lft + rgt) // 2
if ar[mid] == target:
return mid
if target < ar[mid]:
rgt = mid - 1
else:
lft = mid + 1
return -1
#for i,e in enumerate(ar):
# if e == target:
# return i
return -1
ar = range(11, 34)
print('locate([],', 20, 'found at', locate(ar, 20))
print('locate([],', 10, 'found at', locate(ar, 10))
print('locate([],', 50, 'found at', locate(ar, 50))
print('locate([],', 11, 'found at', locate(ar, 11))
print('locate([],', 33, 'found at', locate(ar, 33))
print('locate([],', 21, 'found at', locate(ar, 21))
print('locate([],', 22, 'found at', locate(ar, 22))
print('locate([],', 23, 'found at', locate(ar, 23))
print('locate([],', 24, 'found at', locate(ar, 24))
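A brute-force cross-check of the binary search against a linear scan; `locate` is redefined here so the snippet stands alone:

```python
def locate(ar, target):
    # Standard binary search over a sorted sequence.
    lft, rgt = 0, len(ar) - 1
    while lft <= rgt:
        mid = (lft + rgt) // 2
        if ar[mid] == target:
            return mid
        if target < ar[mid]:
            rgt = mid - 1
        else:
            lft = mid + 1
    return -1


ar = list(range(11, 34))
# Compare with the naive linear search on every value in and around the range.
for target in range(5, 40):
    expected = ar.index(target) if target in ar else -1
    assert locate(ar, target) == expected, target
print("all lookups agree")
```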
# --- tests/dispatchers/handlers/test_dispatchers_email.py | repo: bitcaster-io/bitcaster @ 9f1bad96 | license: BSD-3-Clause | Python, 784 bytes ---
import os
import pytest

from bitcaster.dispatchers import Email
from bitcaster.utils.tests.dispatcher_testcase import DispatcherBaseTest

pytestmark = pytest.mark.django_db


@pytest.mark.plugin
@pytest.mark.skipif_missing('TEST_EMAIL_USER', 'TEST_EMAIL_PASSWORD', 'TEST_EMAIL_RECIPIENT')
class TestDispatcherEmail(DispatcherBaseTest):
    TARGET = Email
    CONFIG = {'username': os.environ.get('TEST_EMAIL_USER'),
              'password': os.environ.get('TEST_EMAIL_PASSWORD'),
              'server': os.environ.get('TEST_EMAIL_HOST', 'smtp.gmail.com'),
              'port': os.environ.get('TEST_EMAIL_PORT', '587'),
              'tls': os.environ.get('TEST_EMAIL_TLS', '1'),
              'sender': 'sender@sender.com'}
    RECIPIENT = os.environ.get('TEST_EMAIL_RECIPIENT')
# --- override.py | repo: Sp-X/PCAP @ f34cf941 | license: MIT | Python, 320 bytes ---
class Mouse:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return "My name is " + self.name


class AncientMouse(Mouse):
    def __str__(self):
        return "Meum nomen est " + self.name


mus = AncientMouse("Caesar")
print(mus)  # Prints "Meum nomen est Caesar"
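A quick standalone check that the override resolves through the subclass first (the classes are redefined here so the snippet runs on its own):

```python
class Mouse:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return "My name is " + self.name


class AncientMouse(Mouse):
    # Overrides Mouse.__str__; str() picks this version for AncientMouse instances.
    def __str__(self):
        return "Meum nomen est " + self.name


print(str(AncientMouse("Caesar")))  # Meum nomen est Caesar
print(str(Mouse("Jerry")))          # My name is Jerry
```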
# --- sandbox/globalsettings/models.py | repo: Bastilla123/shop2 @ b2a7ded5 | license: BSD-3-Clause | Python, 2812 bytes ---
from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.contrib.auth.models import User
# from photo.models import Photo


class Paymentmethod(models.Model):
    code = models.CharField(max_length=10, null=True, blank=True, default="")
    method = models.CharField(max_length=100, null=True, blank=True, default="")
    is_active = models.BooleanField(default=False)


class Globalsettings(models.Model):
    client_companyname = models.CharField(max_length=240, null=True, blank=True, default="")
    client_city = models.CharField(max_length=240, null=True, blank=True, default="")
    client_street = models.CharField(max_length=240, null=True, blank=True, default="")
    client_zip = models.IntegerField(null=True, blank=True, default=0)
    client_country = models.PositiveSmallIntegerField(null=True, blank=True, default=0)
    bank = models.CharField(max_length=240, null=True, blank=True, default="")
    steuernr = models.CharField(max_length=13, null=True, blank=True, default="")
    bic_swift = models.PositiveSmallIntegerField(null=True, blank=True, default=0)
    iban = models.PositiveIntegerField(null=True, blank=True, default=0)
    phone = models.CharField(max_length=240, null=True, blank=True, default="")
    email = models.CharField(max_length=240, null=True, blank=True, default="")
    impressum = models.TextField(blank=True, default="", null=True)
    kleingewerbe = models.BooleanField(default=False)
    # Note: null=True has no effect on ManyToManyField (Django ignores it);
    # blank=True is what allows an empty selection in forms.
    cashondelivery = models.ManyToManyField(Paymentmethod, blank=True, null=True, default=None,
                                            related_name="Globalsettings_cashondelivery")


class UserSettings(models.Model):
    user_link = models.OneToOneField(
        User,
        on_delete=models.CASCADE,
        related_name="usersettings_user_link", primary_key=True)
    street = models.CharField(max_length=120, default="", blank=True, null=True)
    zip = models.DecimalField(max_digits=7, decimal_places=0, blank=True, null=True)
    city = models.CharField(max_length=120, default="", blank=True, null=True)
    birthdate = models.DateField(default=None, blank=True, null=True)
    company_position = models.CharField(max_length=240, default="", blank=True, null=True)
    description = models.TextField(blank=True, default="", null=True)
    # image = models.OneToOneField(Photo, null=True, blank=True, on_delete=models.CASCADE)

    # Social media
    facebook_url = models.CharField(max_length=240, default="", blank=True, null=True)
    instagram_url = models.CharField(max_length=240, default="", blank=True, null=True)


@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
    if created:
        UserSettings.objects.create(user_link=instance)
# --- main.py | repo: Colk-tech/discoplug @ 76ae4a71 | license: MIT | Python, 108 bytes ---
from src.discord.bot import DiscordBOT
if __name__ == "__main__":
    bot = DiscordBOT()
    bot.launch()
# --- fluentogram/src/impl/runner.py | repo: Arustinal/fluentogram @ c3d7b307 | license: MIT | Python, 1260 bytes ---
# coding=utf-8
"""
A translator runner by itself
"""
from typing import TypeVar, List
from fluentogram.src.impl import AttribTracer, FluentTranslator
TTranslatorRunner = TypeVar("TTranslatorRunner", bound="TranslatorRunner")
class TranslatorRunner(AttribTracer):
"""This is one-shot per Telegram event translator with attrib tracer access way."""
def __init__(self, translators: List[FluentTranslator]) -> None:
super().__init__()
self.translators = translators
self.request_line = ""
def get(self, key: str, **kwargs) -> str:
"""Faster, direct way to use translator, without sugar-like typing supported attribute access way"""
return self._get_translation(key, **kwargs)
def _get_translation(self, key, **kwargs):
for translator in self.translators:
try:
return translator.get(key, **kwargs)
except KeyError:
continue
def __call__(self, **kwargs) -> str:
text = self._get_translation(self.request_line[:-1], **kwargs)
self.request_line = ""
return text
def __getattr__(self, item: str) -> TTranslatorRunner:
self.request_line += f"{item}{self.translators[0].separator}"
return self
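The attribute-access flow is easiest to see with a stand-in translator. The stub below is hypothetical (the real `FluentTranslator` and `AttribTracer` live in fluentogram), but it mirrors the `__getattr__`/`__call__` logic: each attribute access appends a segment plus separator to `request_line`, and the final call strips the trailing separator and looks the key up.

```python
class StubTranslator:
    """Hypothetical stand-in for FluentTranslator: a dict lookup with a separator."""
    separator = "-"

    def __init__(self, data):
        self.data = data

    def get(self, key, **kwargs):
        return self.data[key].format(**kwargs)


class MiniRunner:
    """Mirrors TranslatorRunner's __getattr__/__call__ logic without AttribTracer."""

    def __init__(self, translators):
        self.translators = translators
        self.request_line = ""

    def get(self, key, **kwargs):
        for translator in self.translators:
            try:
                return translator.get(key, **kwargs)
            except KeyError:
                continue

    def __getattr__(self, item):
        # Only fires for missing attributes, i.e. translation-key segments.
        self.request_line += f"{item}{self.translators[0].separator}"
        return self

    def __call__(self, **kwargs):
        text = self.get(self.request_line[:-1], **kwargs)
        self.request_line = ""
        return text


runner = MiniRunner([StubTranslator({"start-greeting": "Hello, {name}!"})])
print(runner.start.greeting(name="Ada"))  # Hello, Ada!
```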
# --- tests/formatters/chrome_preferences.py | repo: pyllyukko/plaso @ 7533db2d | license: Apache-2.0 | Python, 2379 bytes ---
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Tests for the Google Chrome Preferences file event formatter."""

import unittest

from plaso.formatters import chrome_preferences

from tests.formatters import test_lib


class ChromePreferencesPrimaryURLFormatterHelperTest(
    test_lib.EventFormatterTestCase):
  """Tests for the Google Chrome preferences primary URL formatter helper."""

  def testFormatEventValues(self):
    """Tests the FormatEventValues function."""
    formatter_helper = (
        chrome_preferences.ChromePreferencesPrimaryURLFormatterHelper())

    event_values = {'primary_url': 'https://example.com'}
    formatter_helper.FormatEventValues(event_values)
    self.assertEqual(event_values['primary_url'], 'https://example.com')

    event_values = {'primary_url': ''}
    formatter_helper.FormatEventValues(event_values)
    self.assertEqual(event_values['primary_url'], 'local file')

    event_values = {'primary_url': None}
    formatter_helper.FormatEventValues(event_values)
    self.assertIsNone(event_values['primary_url'])


class ChromePreferencesSecondaryURLFormatterHelperTest(
    test_lib.EventFormatterTestCase):
  """Tests for the Google Chrome preferences secondary URL formatter helper."""

  def testFormatEventValues(self):
    """Tests the FormatEventValues function."""
    formatter_helper = (
        chrome_preferences.ChromePreferencesSecondaryURLFormatterHelper())

    event_values = {
        'primary_url': 'https://example.com',
        'secondary_url': 'https://anotherexample.com'}
    formatter_helper.FormatEventValues(event_values)
    self.assertEqual(
        event_values['secondary_url'], 'https://anotherexample.com')

    event_values = {
        'primary_url': 'https://example.com',
        'secondary_url': 'https://example.com'}
    formatter_helper.FormatEventValues(event_values)
    self.assertIsNone(event_values['secondary_url'])

    event_values = {
        'primary_url': 'https://example.com',
        'secondary_url': ''}
    formatter_helper.FormatEventValues(event_values)
    self.assertEqual(event_values['secondary_url'], 'local file')

    event_values = {
        'primary_url': 'https://example.com',
        'secondary_url': None}
    formatter_helper.FormatEventValues(event_values)
    self.assertIsNone(event_values['secondary_url'])


if __name__ == '__main__':
  unittest.main()
# --- devo/common/dates/dateoperations.py | repo: felixpelaez/python-sdk @ 13409db8 | license: MIT | Python, 2023 bytes ---
# -*- coding: utf-8 -*-
"""Utils for date operations."""
from datetime import datetime as dt, timedelta
from .dateutils import DateUtils
class DateOperations(object):
"""
This class is a collection of allowed operations on date parsing
"""
@staticmethod
def month():
"""
Return millis for a month
:return: 30 * 24 * 60 * 60 * 1000
"""
return 2592000000
@staticmethod
def week():
"""
Return millis for a week
:return: 7 * 24 * 60 * 60 * 1000
"""
return 604800000
@staticmethod
def day():
"""
Return millis for a day
:return: 24 * 60 * 60 * 1000
"""
return 86400000
@staticmethod
def hour():
"""
Return millis for an hour
:return: Return 60 * 60 * 1000
"""
return 3600000
@staticmethod
def minute():
"""
Return millis for a minute
:return: 60 * 1000
"""
return 60000
@staticmethod
def second():
"""
Return millis for a second
:return: 1000
"""
return 1000
@staticmethod
def now():
"""
Return current millis in UTC
:return: Millis
"""
return DateUtils.to_millis(dt.utcnow())
@staticmethod
def now_without_ms():
"""
Return current millis in UTC
:return: Millis
"""
return DateUtils.to_millis(DateUtils.trunc_time_minute(dt.utcnow()))
@staticmethod
def today():
"""
Return current millis with the time truncated to 00:00:00
:return: Millis
"""
return DateUtils.to_millis(DateUtils.trunc_time(dt.utcnow()))
@staticmethod
def yesterday():
"""
Return millis from yesterday with time truncated to 00:00:00
:return: Millis
"""
date = DateUtils.trunc_time(dt.utcnow()) - timedelta(days=1)
return DateUtils.to_millis(date)
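The millisecond constants above can be cross-checked arithmetically. A standalone sketch that re-derives the values rather than importing devo:

```python
# Each entry re-derives the literal returned by the matching DateOperations method.
MILLIS = {
    "second": 1000,
    "minute": 60 * 1000,
    "hour": 60 * 60 * 1000,
    "day": 24 * 60 * 60 * 1000,
    "week": 7 * 24 * 60 * 60 * 1000,
    "month": 30 * 24 * 60 * 60 * 1000,
}

assert MILLIS["minute"] == 60000
assert MILLIS["hour"] == 3600000
assert MILLIS["day"] == 86400000
assert MILLIS["week"] == 604800000
assert MILLIS["month"] == 2592000000
print("constants check out")
```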
# --- pylings/ex_for_beginner/reverse_integer.py | repo: magiskboy/pylings @ 66391079 | license: MIT | Python, 245 bytes ---
"""
DESCRIPTION:
    Write a code to extract each digit from an integer, in the reverse order

EXAMPLE
    Input:
        n = 1234
    Output:
        "4 3 2 1"
"""


def main(n: int) -> str:
    # Reverse the decimal digits and join them with spaces.
    ret_val: str = " ".join(str(n)[::-1])
    return ret_val
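An arithmetic alternative for the same exercise, using divmod instead of string reversal (a standalone sketch; the function name is illustrative):

```python
def reverse_digits(n: int) -> str:
    # Peel digits off the low end with divmod; no string slicing needed.
    digits = []
    if n == 0:
        digits.append("0")
    while n > 0:
        n, d = divmod(n, 10)
        digits.append(str(d))
    return " ".join(digits)


print(reverse_digits(1234))  # 4 3 2 1
```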
# --- baekjoon/rust/effective_hacking_input_gen.py | repo: yskang/AlgorithmPracticeWithPython @ f7129bd1 | license: MIT | Python, 248 bytes ---
from random import randint
n = randint(2, 10)  # lower bound raised to 2: with n == 1, the a != b loop below could never exit
m = randint(1, 10)
print(f'{n} {m}')
for _ in range(m):
    while True:
        a = randint(1, n)
        b = randint(1, n)
        if a != b:
            print(f'{a} {b}')
            break
# --- Cats vs Dogs Classification/cats_vs_dogs.py | repo: DhruvAwasthi/ModelsCollection @ 80ab3ada | license: Apache-2.0 | Python, 6252 bytes ---
# -*- coding: utf-8 -*-
"""Untitled0.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/13osyIHB9WZp5yZL8td8zLGJ7vGZNqywq
"""
import os
import zipfile
import random
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from shutil import copyfile
!wget --no-check-certificate \
"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip" \
-O "/tmp/cats-and-dogs.zip"
local_zip = '/tmp/cats-and-dogs.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
print(len(os.listdir('/tmp/PetImages/Cat/')))
print(len(os.listdir('/tmp/PetImages/Dog/')))
try:
    os.mkdir('/tmp/cats-v-dogs')
    os.mkdir('/tmp/cats-v-dogs/training')
    os.mkdir('/tmp/cats-v-dogs/testing')
    os.mkdir('/tmp/cats-v-dogs/training/cats')
    os.mkdir('/tmp/cats-v-dogs/training/dogs')
    os.mkdir('/tmp/cats-v-dogs/testing/cats')
    os.mkdir('/tmp/cats-v-dogs/testing/dogs')
except OSError:
    pass
def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):
    files = []
    for filename in os.listdir(SOURCE):
        file = SOURCE + filename
        if os.path.getsize(file) > 0:
            files.append(filename)
        else:
            print(filename + " is zero length, so ignoring.")

    training_length = int(len(files) * SPLIT_SIZE)
    testing_length = int(len(files) - training_length)
    shuffled_set = random.sample(files, len(files))
    training_set = shuffled_set[0:training_length]
    # Take the files after the training slice; the original [:testing_length]
    # slice would overlap with the training set.
    testing_set = shuffled_set[training_length:]

    for filename in training_set:
        this_file = SOURCE + filename
        destination = TRAINING + filename
        copyfile(this_file, destination)

    for filename in testing_set:
        this_file = SOURCE + filename
        destination = TESTING + filename
        copyfile(this_file, destination)
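The shuffle-and-slice split can be checked in isolation, with dummy file names and no filesystem I/O (a sketch; `split_names` is an illustrative helper, not part of the original file):

```python
import random


def split_names(files, split_size):
    # Mirror of split_data's core logic: shuffle once, then slice into train/test.
    training_length = int(len(files) * split_size)
    shuffled = random.sample(files, len(files))
    return shuffled[:training_length], shuffled[training_length:]


files = [f"img{i}.jpg" for i in range(100)]
train, test = split_names(files, 0.9)
print(len(train), len(test))  # 90 10
assert set(train) | set(test) == set(files)  # every file lands somewhere
assert not set(train) & set(test)            # and in exactly one split
```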
CAT_SOURCE_DIR = "/tmp/PetImages/Cat/"
TRAINING_CATS_DIR = "/tmp/cats-v-dogs/training/cats/"
TESTING_CATS_DIR = "/tmp/cats-v-dogs/testing/cats/"
DOG_SOURCE_DIR = "/tmp/PetImages/Dog/"
TRAINING_DOGS_DIR = "/tmp/cats-v-dogs/training/dogs/"
TESTING_DOGS_DIR = "/tmp/cats-v-dogs/testing/dogs/"
split_size = .9
split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)
split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)
print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))
print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))
print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))
print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
TRAINING_DIR = "/tmp/cats-v-dogs/training/"
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
VALIDATION_DIR = "/tmp/cats-v-dogs/testing/"
validation_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
history = model.fit(train_generator,
epochs=15,
verbose=1,
validation_data=validation_generator)
# Commented out IPython magic to ensure Python compatibility.
# %matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['accuracy']
val_acc=history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', "Training Accuracy")
plt.plot(epochs, val_acc, 'b', "Validation Accuracy")
plt.title('Training and validation accuracy')
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r', "Training Loss")
plt.plot(epochs, val_loss, 'b', "Validation Loss")
plt.figure()
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
    # predicting images
    path = '/content/' + fn
    img = image.load_img(path, target_size=(150, 150))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = model.predict(images, batch_size=10)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a dog")
    else:
        print(fn + " is a cat")

# ===== file: iotest.py | repo: erichmatt/pyboard | license: MIT =====

import time
f = open('testlist.csv', 'w')
for x in range(100):
    f.write(str(time.clock()))  # time.clock() is the pyboard/MicroPython-era timer API
    f.write("\n")
f.close()

# ===== file: projects/balancer.py | repo: donuk/pyboard-robotics | license: Apache-2.0 =====

# main.py -- put your code here!
from lib.servo import servo1, stop_servos_before_finishing
from lib.accelerometer import accelerometer
turn_left = True

# This will stop the servos if anything goes wrong
with stop_servos_before_finishing():
    # Loop forever
    while True:
        x = accelerometer.x()
        if x < -5:
            servo1.forward()
        elif x > 5:
            servo1.backward()
        else:
            servo1.stop()

# ===== file: testing/tests/001-main/003-self/200-json/003-repositories.py | repo: fekblom/critic | license: Apache-2.0 =====

# @dependency 001-main/002-createrepository.py
frontend.json(
    "repositories",
    expect={ "repositories": [critic_json] })

frontend.json(
    "repositories/1",
    expect=critic_json)

frontend.json(
    "repositories",
    params={ "name": "critic" },
    expect=critic_json)

frontend.json(
    "repositories/4711",
    expect={ "error": { "title": "No such resource",
                        "message": "Resource not found: Invalid repository id: 4711" }},
    expected_http_status=404)

frontend.json(
    "repositories/critic",
    expect={ "error": { "title": "Invalid API request",
                        "message": "Invalid numeric id: 'critic'" }},
    expected_http_status=400)

frontend.json(
    "repositories",
    params={ "name": "nosuchrepository" },
    expect={ "error": { "title": "No such resource",
                        "message": "Resource not found: Invalid repository name: 'nosuchrepository'" }},
    expected_http_status=404)

frontend.json(
    "repositories",
    params={ "filter": "interesting" },
    expect={ "error": { "title": "Invalid API request",
                        "message": "Invalid repository filter parameter: 'interesting'" }},
    expected_http_status=400)

# ===== file: src/beginner/1151.py | repo: henrikhorbovyi/URI | license: Apache-2.0 =====

N = int(input())
a, b = 1, 1
print(0, end=' ')
for i in range(1, N-1):
    if i != N:
        print(a, end=' ')
        a, b = b, a+b
print(a, end='\n')
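For reference, the sequence this exercise prints — the first N Fibonacci terms starting at 0 (note that `i != N` is always true inside `range(1, N-1)`, so that branch always runs) — can be expressed as a small standalone function. This is an editorial sketch, not part of the original submission:

```python
def fibonacci(n):
    """Return the first n Fibonacci terms, starting from 0: 0 1 1 2 3 5 ..."""
    terms = []
    a, b = 0, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms
```

For example, `fibonacci(5)` yields `[0, 1, 1, 2, 3]`, matching the exercise's space-separated output.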

# ===== file: custom_indicators/ewo.py | repo: ysdede/jesse_strategies | license: CC0-1.0 =====

import numpy as np
import talib
from typing import Union

from jesse.helpers import get_candle_source, slice_candles


def ewo(candles: np.ndarray, short_period: int = 5, long_period: int = 35, source_type="close", sequential=False) -> \
        Union[float, np.ndarray]:
    """
    Elliott Wave Oscillator

    :param candles: np.ndarray
    :param short_period: int - default: 5
    :param long_period: int - default: 35
    :param source_type: str - default: close
    :param sequential: bool - default: False

    :return: Union[float, np.ndarray]
    """
    candles = slice_candles(candles, sequential)
    src = get_candle_source(candles, source_type)
    ewo = np.subtract(talib.EMA(src, timeperiod=short_period), talib.EMA(src, timeperiod=long_period))
    if sequential:
        return ewo
    else:
        return ewo[-1]
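The indicator above depends on TA-Lib and jesse. As a rough, dependency-free sketch of the same idea (the difference between a fast and a slow EMA), here is a simplified pure-Python version. Note the hedge: this EMA is seeded on the first value, whereas TA-Lib's EMA is seeded on an SMA, so the numbers will differ slightly; it is illustrative only.

```python
def ema(values, period):
    """Exponential moving average, seeded with the first value."""
    k = 2.0 / (period + 1)
    out = [float(values[0])]
    for v in values[1:]:
        out.append(out[-1] + k * (v - out[-1]))
    return out


def ewo_sketch(values, short_period=5, long_period=35, sequential=False):
    """Elliott Wave Oscillator sketch: fast EMA minus slow EMA."""
    short = ema(values, short_period)
    slow = ema(values, long_period)
    diff = [s - l for s, l in zip(short, slow)]
    return diff if sequential else diff[-1]
```

On a flat series the oscillator is zero; on a steadily rising series it is positive, because the faster EMA tracks price more closely.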

# ===== file: class and objects/abstract calss inherit from another abstract class.py | repo: ZephyrAveryl777/Python-Programs | license: MIT =====

'''
A class that is derived from an abstract class cannot be
instantiated unless all of its abstract methods are overridden.
'''
from abc import ABC, abstractmethod

print(__doc__, end="")
print('\n'+'-'*25+'Abstract class inherit from another abstract class'+'-'*35)


class A(ABC):
    def __init__(self, username):
        self.username = username
        super().__init__()

    @abstractmethod
    def name(self):
        pass


class B(A):
    @abstractmethod
    def age(self):
        pass


class C(B):
    def name(self):
        print(self.username)

    def age(self):
        return


c = C('Testing-123456')
c.name()
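A minimal illustration of the rule stated in the docstring above: a subclass of an abstract class remains abstract (and cannot be instantiated) until every inherited abstract method is overridden. The class names `Base`/`Child`/`Concrete` here are hypothetical, not from the file above:

```python
from abc import ABC, abstractmethod


class Base(ABC):
    @abstractmethod
    def name(self):
        pass


class Child(Base):
    # Adds its own abstract method and still lacks name(): still abstract.
    @abstractmethod
    def age(self):
        pass


class Concrete(Child):
    # Overrides every abstract method, so it can be instantiated.
    def name(self):
        return "concrete"

    def age(self):
        return 0
```

Attempting `Child()` raises `TypeError`, while `Concrete()` instantiates normally.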

# ===== file: scripts/8-pacotes-modulos-python/ex107/teste.py | repo: dev-alissonalves/python-codes | license: MIT =====

import moeda
p = float(input("Enter any value: "))
print("\n")
print("-=-=-=-=- START =-=-=-=")
print(f"Half of {p:.2f} is: {moeda.metade(p)}")
print(f"Double of {p:.2f} is: {moeda.dobro(p)}")
print(f"Increasing by 10%, we get: {moeda.aumentar(p, 10)}")
print(f"Decreasing by 10%, we get: {moeda.diminuir(p, 10)}")
print("-=-=-=-=- END =-=-=-=")
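The `moeda` module imported above is not included in this chunk. A hypothetical sketch of its four functions, with behavior inferred only from the call sites (the real module may format results as currency strings):

```python
def metade(valor):
    """Return half of the value."""
    return valor / 2


def dobro(valor):
    """Return double the value."""
    return valor * 2


def aumentar(valor, taxa):
    """Return the value increased by `taxa` percent."""
    return valor + valor * taxa / 100


def diminuir(valor, taxa):
    """Return the value decreased by `taxa` percent."""
    return valor - valor * taxa / 100
```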

# ===== file: mvp/djfulcrum/filters/run_filters.py | repo: venicegeo/fulcrum_demo | license: Apache-2.0 =====

from __future__ import absolute_import
import os
from importlib import import_module

def filter_features(features, filter_name=None, run_once=False):
    """
    Args:
        features: A geojson Feature Collection
        filter_name: The name of a filter to use if None all active filters are used (default:None)
        run_once: Run the filter one time without being active.

    Returns:
        Geojson Feature Collection that passed any filters in the filter package
        If no features passed None is returned
    """
    from ..models import Filter
    from ..djfulcrum import delete_feature

    workspace = os.path.dirname(os.path.abspath(__file__))
    files = os.listdir(workspace)
    if features.get('features'):
        filtered_feature_count = len(features.get('features'))
        filtered_results = None
        if filter_name:
            filter_models = Filter.objects.filter(filter_name__iexact=filter_name)
        else:
            filter_models = Filter.objects.all()
        if filter_models:
            un_needed = []
            for filter_model in filter_models:
                if filter_model.filter_name in files:
                    if filter_model.filter_active or run_once:
                        if not features:
                            break
                        try:
                            module_name = 'djfulcrum.filters.' + str(filter_model.filter_name.rstrip('.py'))
                            mod = import_module(module_name)
                            print "Running: {}".format(filter_model.filter_name)
                            filtered_results = mod.filter_features(features)
                        except ImportError:
                            print "Could not filter features - ImportError"
                        except TypeError as te:
                            print te
                            print "Could not filter features - TypeError"
                        except Exception as e:
                            print "Unknown error occurred, could not filter features"
                            print repr(e)
                        if filtered_results:
                            if filtered_results.get('failed').get('features'):
                                for feature in filtered_results.get('failed').get('features'):
                                    if run_once:
                                        delete_feature(feature.get('properties').get('fulcrum_id'))
                                print "{} features failed the filter".format(
                                    len(filtered_results.get('failed').get('features')))
                            if filtered_results.get('passed').get('features'):
                                print "{} features passed the filter".format(
                                    len(filtered_results.get('passed').get('features')))
                                features = filtered_results.get('passed')
                                filtered_feature_count = len(filtered_results.get('passed').get('features'))
                            else:
                                features = None
                                filtered_feature_count = 0
                        else:
                            print "Failure to get filtered results"
                else:
                    un_needed.append(filter_model)
            if un_needed:
                for filter_model in un_needed:
                    print("The filter {} was found in the database but the module is "
                          "missing.".format(filter_model.filter_name))
                    print("It will be disabled. If the module is installed later, reenable the filter "
                          "in the admin console.")
                    filter_model.filter_active = False
    else:
        features = None
        filtered_feature_count = 0
    return features, filtered_feature_count


def check_filters():
    """
    Returns: True if checking the filters was successful.

    Finds '.py' files used for filtering and adds to db model for use in admin console.
    Sets cache value so function will not run fully every time it is called by tasks.py
    """
    from ..models import Filter
    from ..tasks import get_lock_id
    from django.db import IntegrityError
    from importlib import import_module
    from django.core.cache import cache

    workspace = os.path.dirname(os.path.abspath(__file__))
    files = os.listdir(workspace)
    if files:
        lock_id = get_lock_id('list-filters-success')
        if cache.get(lock_id):
            return True
        for filter_file in files:
            if filter_file.endswith('.py'):
                if filter_file == 'run_filters.py' or filter_file == '__init__.py':
                    continue
                try:
                    filter_names = Filter.objects.filter(filter_name__iexact=filter_file)
                    if not filter_names.exists():
                        filter_model = Filter.objects.create(filter_name=filter_file)
                        print ("Created filter {}".format(filter_model.filter_name))
                except IntegrityError:
                    return False
                try:
                    mod = import_module('djfulcrum.filters.' + str(filter_file.rstrip('.py')))
                    if 'setup_filter_model' in dir(mod):
                        mod.setup_filter_model()
                except ImportError:
                    return False
        cache.set(lock_id, True, 20)
    return True
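Judging from the calls above, each filter module is expected to expose a `filter_features(features)` function returning a dict with `passed` and `failed` feature collections. A hedged, minimal sketch of such a plug-in filter — the `valid` property used to split features here is purely illustrative, not part of the real project:

```python
def filter_features(features):
    """Split a GeoJSON FeatureCollection into passed/failed collections.

    Illustrative criterion: a feature passes when its (hypothetical)
    `valid` property is truthy.
    """
    passed, failed = [], []
    for feature in features.get('features', []):
        if feature.get('properties', {}).get('valid'):
            passed.append(feature)
        else:
            failed.append(feature)
    return {
        'passed': {'type': 'FeatureCollection', 'features': passed},
        'failed': {'type': 'FeatureCollection', 'features': failed},
    }
```

`run_filters.filter_features` would then delete the `failed` features (when `run_once` is set) and carry the `passed` collection forward to the next active filter.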

# ===== file: src/intensio_obfuscator/core/obfuscation/intensio_delete.py | repo: bbhunter/Intensio-Obfuscator | license: MIT =====

# -*- coding: utf-8 -*-
# https://github.com/Hnfull/Intensio-Obfuscator

#---------------------------------------------------------- [Lib] -----------------------------------------------------------#

import re
import fileinput
import os
import sys

from progress.bar import Bar

try:
    from intensio_obfuscator.core.utils.intensio_utils import Utils, Reg
except ModuleNotFoundError:
    from core.utils.intensio_utils import Utils, Reg

#------------------------------------------------- [Function(s)/Class(es)] --------------------------------------------------#

class Delete:

    def __init__(self):
        self.utils = Utils()

    def LinesSpaces(self, outputArg, verboseArg):
        checkLinesSpace = {}
        checkEmptyLineOutput = 0
        checkEmptyLineInput = 0
        countRecursFiles = 0
        numberLine = 0

        recursFiles = self.utils.CheckFileDir(
            output=outputArg,
            detectFiles="py",
            blockDir="__pycache__",
            blockFile=False,
            dirOnly=False
        )
        for file in recursFiles:
            countRecursFiles += 1

        # -- Delete all empty lines -- #
        with Bar("Obfuscation ", fill="=", max=countRecursFiles, suffix="%(percent)d%%") as bar:
            for file in recursFiles:
                with fileinput.FileInput(file, inplace=True) as inputFile:
                    for eachLine in inputFile:
                        if re.match(Reg.detectLineEmpty, eachLine):
                            checkEmptyLineInput += 1
                            pass
                        else:
                            sys.stdout.write(eachLine)
                bar.next(1)
            bar.finish()

        with Bar("Check ", fill="=", max=countRecursFiles, suffix="%(percent)d%%") as bar:
            for file in recursFiles:
                numberLine = 0
                with open(file, "r") as readFile:
                    readF = readFile.readlines()
                    for eachLine in readF:
                        numberLine += 1
                        if re.match(Reg.detectLineEmpty, eachLine):
                            checkLinesSpace[numberLine] = file
                            checkEmptyLineOutput += 1
                bar.next(1)
            bar.finish()

        if checkEmptyLineOutput == 0:
            return 1
        else:
            if verboseArg:
                print("\n[!] Empty line that not been deleted... :\n")
                for key, value in checkLinesSpace.items():
                    print("\n-> File : {}".format(value))
                    print("-> Line : {}".format(key))
            else:
                print("\n[*] Empty line that deleted : {}\n".format(checkEmptyLineInput))
            return 0
    def Comments(self, outputArg, verboseArg):
        getIndexList = []
        filesConcerned = []
        eachLineListCheckIndex = []
        countLineCommentOutput = 0
        countLineCommentInput = 0
        multipleLinesComments = 0
        countRecursFiles = 0
        noCommentsQuotes = 0
        getIndex = 0
        detectIntoSimpleQuotes = None
        eachLineCheckIndex = ""

        recursFiles = self.utils.CheckFileDir(
            output=outputArg,
            detectFiles="py",
            blockDir="__pycache__",
            blockFile=False,
            dirOnly=False
        )
        for i in recursFiles:
            countRecursFiles += 1

        # -- Delete comments and count comments will be deleted -- #
        print("\n[+] Running delete comments in {} file(s)...\n".format(countRecursFiles))
        with Bar("Obfuscation ", fill="=", max=countRecursFiles, suffix="%(percent)d%%") as bar:
            for file in recursFiles:
                with fileinput.input(file, inplace=True) as inputFile:
                    for eachLine in inputFile:
                        if re.match(Reg.pythonFileHeader, eachLine):
                            sys.stdout.write(eachLine)
                        else:
                            if multipleLinesComments == 1:
                                if re.match(Reg.quotesCommentsEndMultipleLines, eachLine):
                                    if self.utils.VerifyMultipleLinesComments(eachLine) == True:
                                        if multipleLinesComments == 1:
                                            countLineCommentInput += 1
                                            multipleLinesComments = 0
                                        else:
                                            countLineCommentInput += 1
                                    else:
                                        countLineCommentInput += 1
                            elif noCommentsQuotes == 1:
                                if re.match(Reg.checkIfEndVarStdoutMultipleQuotes, eachLine):
                                    sys.stdout.write(eachLine)
                                    noCommentsQuotes = 0
                                else:
                                    sys.stdout.write(eachLine)
                            else:
                                if re.match(Reg.quotesCommentsOneLine, eachLine):
                                    countLineCommentInput += 1
                                else:
                                    if re.match(Reg.quotesCommentsMultipleLines, eachLine):
                                        if self.utils.VerifyMultipleLinesComments(eachLine) == True:
                                            countLineCommentInput += 1
                                            multipleLinesComments = 1
                                        else:
                                            sys.stdout.write(eachLine)
                                    else:
                                        if re.match(Reg.checkIfStdoutMultipleQuotes, eachLine) \
                                                or re.match(Reg.checkIfVarMultipleQuotes, eachLine):
                                            sys.stdout.write(eachLine)
                                            noCommentsQuotes = 1
                                        elif re.match(Reg.checkIfRegexMultipleQuotes, eachLine):
                                            sys.stdout.write(eachLine)
                                        else:
                                            sys.stdout.write(eachLine)
                with fileinput.input(file, inplace=True) as inputFile:
                    for eachLine in inputFile:
                        if re.match(Reg.pythonFileHeader, eachLine):
                            sys.stdout.write(eachLine)
                        else:
                            if re.match(Reg.hashCommentsBeginLine, eachLine):
                                countLineCommentInput += 1
                            elif re.match(Reg.hashCommentsAfterLine, eachLine):
                                eachLineList = list(eachLine)
                                getIndexList = []
                                for i, v in enumerate(eachLineList):
                                    if v == "#":
                                        getIndexList.append(i)
                                for i in getIndexList:
                                    if self.utils.DetectIntoSimpleQuotes(eachLine, maxIndexLine=i) == False:
                                        countLineCommentInput += 1
                                        detectIntoSimpleQuotes = False
                                        break
                                    else:
                                        continue
                                if detectIntoSimpleQuotes == False:
                                    for i in getIndexList:
                                        eachLineListCheckIndex = eachLineList[:i]
                                        eachLineListCheckIndex.append("\n")
                                        eachLineCheckIndex = "".join(eachLineListCheckIndex)
                                        if self.utils.DetectIntoSimpleQuotes(eachLineCheckIndex, maxIndexLine=i) == False:
                                            getIndex = i
                                            break
                                        else:
                                            continue
                                    eachLineList = eachLineList[:getIndex]
                                    eachLineList.append("\n")
                                    eachLine = "".join(eachLineList)
                                    sys.stdout.write(eachLine)
                                    detectIntoSimpleQuotes = None
                                    countLineCommentInput += 1
                                else:
                                    sys.stdout.write(eachLine)
                            else:
                                sys.stdout.write(eachLine)
                bar.next(1)
            bar.finish()

        # -- Check if all comments are deleted -- #
        with Bar("Check ", fill="=", max=countRecursFiles, suffix="%(percent)d%%") as bar:
            for file in recursFiles:
                with open(file, "r") as readFile:
                    readF = readFile.readlines()
                    for eachLine in readF:
                        if re.match(Reg.pythonFileHeader, eachLine):
                            continue
                        else:
                            if multipleLinesComments == 1:
                                if re.match(Reg.quotesCommentsEndMultipleLines, eachLine):
                                    if self.utils.VerifyMultipleLinesComments(eachLine) == True:
                                        if multipleLinesComments == 1:
                                            countLineCommentOutput += 1
                                            multipleLinesComments = 0
                                            filesConcerned.append(file)
                                        else:
                                            countLineCommentOutput += 1
                                            filesConcerned.append(file)
                                    else:
                                        countLineCommentOutput += 1
                                        filesConcerned.append(file)
                            elif noCommentsQuotes == 1:
                                if re.match(Reg.checkIfEndVarStdoutMultipleQuotes, eachLine):
                                    noCommentsQuotes = 0
                                else:
                                    continue
                            else:
                                if re.match(Reg.quotesCommentsOneLine, eachLine):
                                    countLineCommentOutput += 1
                                    filesConcerned.append(file)
                                else:
                                    if re.match(Reg.quotesCommentsMultipleLines, eachLine):
                                        if self.utils.VerifyMultipleLinesComments(eachLine) == True:
                                            countLineCommentOutput += 1
                                            multipleLinesComments = 1
                                            filesConcerned.append(file)
                                        else:
                                            continue
                                    else:
                                        if re.match(Reg.checkIfStdoutMultipleQuotes, eachLine) \
                                                or re.match(Reg.checkIfVarMultipleQuotes, eachLine):
                                            noCommentsQuotes = 1
                                        elif re.match(Reg.checkIfRegexMultipleQuotes, eachLine):
                                            continue
                                        else:
                                            continue
                with open(file, "r") as readFile:
                    readF = readFile.readlines()
                    for eachLine in readF:
                        if re.match(Reg.pythonFileHeader, eachLine):
                            continue
                        else:
                            if re.match(Reg.hashCommentsBeginLine, eachLine):
                                countLineCommentOutput += 1
                                filesConcerned.append(file)
                            elif re.match(Reg.hashCommentsAfterLine, eachLine):
                                eachLineList = list(eachLine)
                                getIndexList = []
                                for i, v in enumerate(eachLineList):
                                    if v == "#":
                                        getIndexList.append(i)
                                for i in getIndexList:
                                    if self.utils.DetectIntoSimpleQuotes(eachLine, maxIndexLine=i) == False:
                                        countLineCommentOutput += 1
                                        detectIntoSimpleQuotes = False
                                        filesConcerned.append(file)
                                        break
                                    else:
                                        continue
                                if detectIntoSimpleQuotes == False:
                                    for i in getIndexList:
                                        eachLineListCheckIndex = eachLineList[:i]
                                        eachLineListCheckIndex.append("\n")
                                        eachLineCheckIndex = "".join(eachLineListCheckIndex)
                                        if self.utils.DetectIntoSimpleQuotes(eachLineCheckIndex, maxIndexLine=i) == False:
                                            getIndex = i
                                            break
                                        else:
                                            continue
                                    eachLineList = eachLineList[:getIndex]
                                    eachLineList.append("\n")
                                    eachLine = "".join(eachLineList)
                                    countLineCommentOutput += 1
                                    detectIntoSimpleQuotes = None
                                else:
                                    continue
                            else:
                                continue
                bar.next(1)
            bar.finish()

        if countLineCommentOutput == 0:
            print("\n-> {} lines of comments deleted\n".format(countLineCommentInput))
            return 1
        else:
            if verboseArg:
                filesConcerned = self.utils.RemoveDuplicatesValuesInList(filesConcerned)
                print("\nFiles concerned of comments no deleted :\n")
                for f in filesConcerned:
                    print("-> {}".format(f))
            print("\n-> {} lines of comments no deleted\n".format(countLineCommentOutput))
            return 0
    def TrashFiles(self, outputArg, verboseArg):
        countRecursFiles = 0
        deleteFiles = 0
        checkPycFile = []
        currentPosition = os.getcwd()

        recursFiles = self.utils.CheckFileDir(
            output=outputArg,
            detectFiles="pyc",
            blockDir="__pycache__",
            blockFile=False,
            dirOnly=False
        )
        for number in recursFiles:
            countRecursFiles += 1

        if countRecursFiles == 0:
            print("[!] No .pyc file(s) found in {}".format(outputArg))
            return 1

        print("\n[+] Running delete {} .pyc file(s)...\n".format(countRecursFiles))

        # -- Check if .pyc file(s) exists and delete it -- #
        with Bar("Setting up ", fill="=", max=countRecursFiles, suffix="%(percent)d%%") as bar:
            for file in recursFiles:
                if re.match(Reg.detectPycFiles, file):
                    deleteFiles += 1
                    checkPycFile.append(file)
                bar.next(1)
            bar.finish()

        # -- Delete pyc file(s) -- #
        with Bar("Correction ", fill="=", max=countRecursFiles, suffix="%(percent)d%%") as bar:
            for file in recursFiles:
                if re.match(Reg.detectPycFiles, file):
                    extractPycFiles = re.search(r".*\.pyc$", file)
                    moveFolder = re.sub(r".*\.pyc$", "", file)
                    os.chdir(moveFolder)
                    os.remove(extractPycFiles.group(0))
                    os.chdir(currentPosition)
                bar.next(1)
            bar.finish()

        checkRecursFiles = self.utils.CheckFileDir(
            output=outputArg,
            detectFiles="pyc",
            blockDir="__pycache__",
            blockFile=False,
            dirOnly=False
        )
        if checkRecursFiles != []:
            if verboseArg:
                for pycFile in checkRecursFiles:
                    print("-> .pyc file no deleted : {}".format(pycFile))
            return 0
        else:
            if verboseArg:
                for pycFile in checkPycFile:
                    print("-> .pyc file deleted : {}".format(pycFile))
            print("\n-> {} .pyc file(s) deleted".format(deleteFiles))
            return 1

# ===== file: test_pascals_triangle.py | repo: jaebradley/leetcode.py | license: MIT =====

from unittest import TestCase
from pascals_triangle import Solution

class TestSolution(TestCase):
    def setUp(self) -> None:
        self.solution = Solution()

    def test_generate(self):
        self.assertListEqual(
            [[1]],
            self.solution.generate(1)
        )
        self.assertListEqual(
            [[1], [1, 1]],
            self.solution.generate(2)
        )
        self.assertListEqual(
            [[1], [1, 1], [1, 2, 1]],
            self.solution.generate(3)
        )
        self.assertListEqual(
            [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1]],
            self.solution.generate(4)
        )
        self.assertListEqual(
            [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]],
            self.solution.generate(5)
        )
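The `pascals_triangle.Solution` class under test is not included in this chunk. One possible implementation that satisfies the assertions above, offered only as a sketch:

```python
class Solution:
    def generate(self, num_rows):
        """Build the first num_rows rows of Pascal's triangle.

        Each interior entry is the sum of the two entries above it.
        """
        rows = []
        for i in range(num_rows):
            row = [1] * (i + 1)
            for j in range(1, i):
                row[j] = rows[i - 1][j - 1] + rows[i - 1][j]
            rows.append(row)
        return rows
```

For example, `Solution().generate(5)` returns `[[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]]`.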

# ===== file: app/__init__.py | repo: tirgei/stack-overflow-lite-api | license: MIT =====

import os
from flask import Flask
from instance.config import app_config
from app.api.v1.views.user_views import v1 as users_v1

def create_app(config_name):
    app = Flask(__name__)
    app.config.from_object(app_config[config_name])
    app.register_blueprint(users_v1)
    return app

# ===== file: ashd/pipelinev2/global_vals.py | repo: johnnygreco/asas-sn-hd | license: MIT =====

# note that these are default variables
MAX_TRIES = 10
MAX_FINDINGS = 3

# ===== file: setup.py | repo: williamfzc/devcube | license: MIT =====

from setuptools import setup, find_packages
from devcube import (
    __AUTHOR__,
    __AUTHOR_EMAIL__,
    __VERSION__,
    __PROJECT_NAME__,
)

with open("requirements.txt", encoding="utf-8") as f:
    requirements = [
        each.strip() for each in f.readlines() if not each.startswith("git+")
    ]

setup(
    name=__PROJECT_NAME__,
    version=__VERSION__,
    author=__AUTHOR__,
    author_email=__AUTHOR_EMAIL__,
    packages=find_packages(),
    include_package_data=True,
    classifiers=[
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
    ],
    python_requires=">=3.6",
    install_requires=requirements,
    entry_points={"console_scripts": ["devcube = devcube.cli:main"]},
)
5ae221e0b2b897840f866c40878a6fd9835def7f | 3,150 | py | Python | asyncapi_schema_pydantic/v2_3_0/message_bindings.py | albertnadal/asyncapi-schema-pydantic | 83966bdc11f2d465a10b52cec5ff79d18fa6f5fe | [
"MIT"
] | null | null | null | asyncapi_schema_pydantic/v2_3_0/message_bindings.py | albertnadal/asyncapi-schema-pydantic | 83966bdc11f2d465a10b52cec5ff79d18fa6f5fe | [
"MIT"
] | null | null | null | asyncapi_schema_pydantic/v2_3_0/message_bindings.py | albertnadal/asyncapi-schema-pydantic | 83966bdc11f2d465a10b52cec5ff79d18fa6f5fe | [
"MIT"
] | null | null | null | from typing import Optional
from pydantic import BaseModel, Extra
from .http_bindings import HttpMessageBinding
from .web_sockets_bindings import WebSocketsMessageBinding
from .kafka_bindings import KafkaMessageBinding
from .anypoint_mq_bindings import AnypointMqMessageBinding
from .amqp_bindings import AmqpMessageBinding
from .amqp1_bindings import Amqp1MessageBinding
from .mqtt_bindings import MqttMessageBinding
from .mqtt5_bindings import Mqtt5MessageBinding
from .nats_bindings import NatsMessageBinding
from .jms_bindings import JmsMessageBinding
from .sns_bindings import SnsMessageBinding
from .solace_bindings import SolaceMessageBinding
from .sqs_bindings import SqsMessageBinding
from .stomp_bindings import StompMessageBinding
from .redis_bindings import RedisMessageBinding
from .mercure_bindings import MercureMessageBinding
from .ibm_mq_bindings import IbmMqMessageBinding
class MessageBindings(BaseModel):
    """
    Map describing protocol-specific definitions for a message.
    """

    http: Optional[HttpMessageBinding] = None
    """
    Protocol-specific information for an HTTP message, i.e., a request or a response.
    """

    ws: Optional[WebSocketsMessageBinding] = None
    """
    Protocol-specific information for a WebSockets message.
    """

    kafka: Optional[KafkaMessageBinding] = None
    """
    Protocol-specific information for a Kafka message.
    """

    anypointmq: Optional[AnypointMqMessageBinding] = None
    """
    Protocol-specific information for an Anypoint MQ message.
    """

    amqp: Optional[AmqpMessageBinding] = None
    """
    Protocol-specific information for an AMQP 0-9-1 message.
    """

    amqp1: Optional[Amqp1MessageBinding] = None
    """
    Protocol-specific information for an AMQP 1.0 message.
    """

    mqtt: Optional[MqttMessageBinding] = None
    """
    Protocol-specific information for an MQTT message.
    """

    mqtt5: Optional[Mqtt5MessageBinding] = None
    """
    Protocol-specific information for an MQTT 5 message.
    """

    nats: Optional[NatsMessageBinding] = None
    """
    Protocol-specific information for a NATS message.
    """

    jms: Optional[JmsMessageBinding] = None
    """
    Protocol-specific information for a JMS message.
    """

    sns: Optional[SnsMessageBinding] = None
    """
    Protocol-specific information for an SNS message.
    """

    solace: Optional[SolaceMessageBinding] = None
    """
    Protocol-specific information for a Solace message.
    """

    sqs: Optional[SqsMessageBinding] = None
    """
    Protocol-specific information for an SQS message.
    """

    stomp: Optional[StompMessageBinding] = None
    """
    Protocol-specific information for a STOMP message.
    """

    redis: Optional[RedisMessageBinding] = None
    """
    Protocol-specific information for a Redis message.
    """

    mercure: Optional[MercureMessageBinding] = None
    """
    Protocol-specific information for a Mercure message.
    """

    ibmmq: Optional[IbmMqMessageBinding] = None
    """
    Protocol-specific information for an IBM MQ message.
    """

    class Config:
        extra = Extra.forbid
| 27.155172 | 85 | 0.724127 | 315 | 3,150 | 7.177778 | 0.228571 | 0.127377 | 0.150376 | 0.233083 | 0.274215 | 0.274215 | 0.070765 | 0 | 0 | 0 | 0 | 0.005547 | 0.19873 | 3,150 | 115 | 86 | 27.391304 | 0.890254 | 0.01873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.487179 | 0 | 0.974359 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
5aefdc6a6208c608443af8d9fd5961f34de89ce8 | 796 | py | Python | dltb/tool/tests/test_detector.py | CogSciUOS/DeepLearningToolbox | bf07578b9486d8c48e25df357bc4b9963b513b46 | [
"MIT"
] | 2 | 2019-09-01T01:38:59.000Z | 2020-02-13T19:25:51.000Z | dltb/tool/tests/test_detector.py | CogSciUOS/DeepLearningToolbox | bf07578b9486d8c48e25df357bc4b9963b513b46 | [
"MIT"
] | null | null | null | dltb/tool/tests/test_detector.py | CogSciUOS/DeepLearningToolbox | bf07578b9486d8c48e25df357bc4b9963b513b46 | [
"MIT"
] | null | null | null | from unittest import TestCase
from dltb.tool import Tool
from dltb.tool.detector import Detections
from dltb.base.image import Image
class TestDetector(TestCase):

    def setUp(self):
        self.detector = Tool['haar']
        self.detector.prepare()
        # self.image = imread('examples/reservoir-dogs.jpg')
        self.image = Image.as_data('examples/reservoir-dogs.jpg')

    def test_detect1(self):
        detections = self.detector.detect(self.image)
        self.assertTrue(isinstance(detections, Detections))
        # self.datasource.unprepare()
        # self.assertFalse(self.datasource.prepared)
        self.assertEqual(len(detections), 6)

    def test_detect2(self):
        pass
        # self.datasource.prepare()
        # self.assertEqual(len(self.datasource), 2330)
| 29.481481 | 65 | 0.679648 | 92 | 796 | 5.847826 | 0.423913 | 0.104089 | 0.04461 | 0.089219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011111 | 0.208543 | 796 | 26 | 66 | 30.615385 | 0.842857 | 0.241206 | 0 | 0 | 0 | 0 | 0.051839 | 0.045151 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.2 | false | 0.066667 | 0.266667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
5af814eaaf5e5ef51807162ca3f9fb884b962ee7 | 813 | py | Python | cpp/src/examples/pbe_keyset.py | nawien-sharma/keyczar | c55563bbd70f4b6fefc7444e296aab9894475f9a | [
"Apache-2.0"
] | null | null | null | cpp/src/examples/pbe_keyset.py | nawien-sharma/keyczar | c55563bbd70f4b6fefc7444e296aab9894475f9a | [
"Apache-2.0"
] | null | null | null | cpp/src/examples/pbe_keyset.py | nawien-sharma/keyczar | c55563bbd70f4b6fefc7444e296aab9894475f9a | [
"Apache-2.0"
] | 1 | 2021-04-13T05:05:30.000Z | 2021-04-13T05:05:30.000Z | # Encrypts and decrypts a short message from a PBE encrypted JSON key set.
#
# Example: python pbe_keyset.py ~/my-pbe-json-aes-encrypted password
#
import os
import sys
import keyczar
def Encrypt(crypted_path, password):
    if not os.path.exists(crypted_path):
        return
    plaintext = 'Secret message'  # renamed from `input` to avoid shadowing the builtin
    reader = keyczar.KeysetPBEJSONFileReader(crypted_path, password)
    crypter = keyczar.Crypter.Read(reader)
    ciphertext = crypter.Encrypt(plaintext)
    print('plaintext:', plaintext)
    print('ciphertext:', ciphertext)
    decrypted = crypter.Decrypt(ciphertext)
    assert decrypted == plaintext


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print("Provide a valid JSON key set path and a password as arguments.", file=sys.stderr)
        sys.exit(1)
    Encrypt(sys.argv[1], sys.argv[2])
| 26.225806 | 93 | 0.694957 | 107 | 813 | 5.168224 | 0.523364 | 0.059675 | 0.036166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006154 | 0.200492 | 813 | 30 | 94 | 27.1 | 0.844615 | 0.170972 | 0 | 0 | 0 | 0 | 0.156951 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | null | null | 0.157895 | 0.157895 | null | null | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
5afa767a6c14464cc2c5b0783f37229c88180e39 | 298 | py | Python | events/views.py | austinprog/hsvdotbeer | 2979a2b0b105b85d0865c6771bfcb7debb98b3e8 | [
"Apache-2.0"
] | null | null | null | events/views.py | austinprog/hsvdotbeer | 2979a2b0b105b85d0865c6771bfcb7debb98b3e8 | [
"Apache-2.0"
] | 6 | 2020-08-03T09:50:01.000Z | 2021-06-10T18:17:28.000Z | events/views.py | austinprog/hsvdotbeer | 2979a2b0b105b85d0865c6771bfcb7debb98b3e8 | [
"Apache-2.0"
] | null | null | null | from rest_framework.viewsets import ModelViewSet
from . import models
from . import serializers
class EventViewSet(ModelViewSet):
    serializer_class = serializers.EventSerializer
    queryset = models.Event.objects.select_related('venue').order_by(
        'start_time', 'venue__name',
    )
| 22.923077 | 69 | 0.758389 | 32 | 298 | 6.84375 | 0.71875 | 0.091324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157718 | 298 | 12 | 70 | 24.833333 | 0.87251 | 0 | 0 | 0 | 0 | 0 | 0.087248 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
850c52ae07fd7b9cb26557ccb87770e03a7ec740 | 1,079 | py | Python | src/modules/dropout.py | HuicheolMoon/P4_ModelOptimization | 63a1615068de16e75514be0af4a317ab44ab9eb3 | [
"MIT"
] | null | null | null | src/modules/dropout.py | HuicheolMoon/P4_ModelOptimization | 63a1615068de16e75514be0af4a317ab44ab9eb3 | [
"MIT"
] | null | null | null | src/modules/dropout.py | HuicheolMoon/P4_ModelOptimization | 63a1615068de16e75514be0af4a317ab44ab9eb3 | [
"MIT"
] | null | null | null | # Dropout Module
import torch
from torch import nn as nn
from src.modules.base_generator import GeneratorAbstract
class Dropout(nn.Module):
    """Dropout module."""

    def __init__(self, prob: float = 0.5):
        """
        Args:
            prob: dropout probability
        """
        super().__init__()
        self.dropout = nn.Dropout(prob)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Forward."""
        return self.dropout(x)


class DropoutGenerator(GeneratorAbstract):
    """Dropout module generator for parsing."""

    def __init__(self, *args, **kwargs):
        """Initialize."""
        super().__init__(*args, **kwargs)

    @property
    def out_channel(self) -> int:
        """Get out channel size."""
        return self.in_channel

    def __call__(self, repeat: int = 1):
        p = self.args[0]
        if repeat > 1:
            module = [Dropout(prob=p) for _ in range(repeat)]
        else:
            module = Dropout(prob=p)
        return self._get_module(module)
| 23.977778 | 56 | 0.567192 | 120 | 1,079 | 4.891667 | 0.391667 | 0.06644 | 0.037479 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006667 | 0.304912 | 1,079 | 44 | 57 | 24.522727 | 0.776 | 0.137164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0.125 | 0 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8511df590273fd306cf7ac94104ea0f8adb88c19 | 410 | py | Python | tutorial/list.py | Dannyps/fpai-datamining | 98d18aa697f7a13ba790423eba6e6c6b71a32a56 | [
"Unlicense"
] | null | null | null | tutorial/list.py | Dannyps/fpai-datamining | 98d18aa697f7a13ba790423eba6e6c6b71a32a56 | [
"Unlicense"
] | null | null | null | tutorial/list.py | Dannyps/fpai-datamining | 98d18aa697f7a13ba790423eba6e6c6b71a32a56 | [
"Unlicense"
] | null | null | null | test = [11.0, "Alice has a cat", 12, 4, "5"]
print("len(test) = " + str(len(test)))
print("test[1] = " + str(test[1]))
print("test[3:6] = " + str(test[3:6]))
print("test[1:6:2] = " + str(test[1:6:2]))
print("test[:6] = " + str(test[:6]))
print("test[-2] = " + str(test[-2]))
test.append(121)
test2 = test + [1, 2, 3]
print("len(test2) = " + str(len(test2)))
test2[0] = "Lodz"
test2[6] = 77
print(test2)
| 19.52381 | 44 | 0.531707 | 74 | 410 | 2.945946 | 0.297297 | 0.206422 | 0.091743 | 0.06422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113703 | 0.163415 | 410 | 20 | 45 | 20.5 | 0.521866 | 0 | 0 | 0 | 0 | 0 | 0.25122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
851427682ec24633568e4a0bcaca14c5a15c7093 | 7,023 | py | Python | tests/test.py | YegorDB/CTHPoker | 3e49e85ca00c47589f7ccc8b47d68caa4c1ce649 | [
"Apache-2.0"
] | null | null | null | tests/test.py | YegorDB/CTHPoker | 3e49e85ca00c47589f7ccc8b47d68caa4c1ce649 | [
"Apache-2.0"
] | null | null | null | tests/test.py | YegorDB/CTHPoker | 3e49e85ca00c47589f7ccc8b47d68caa4c1ce649 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Yegor Bitensky
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from cthpoker import findCombo, findRatioCombo
def get_parameters(func):
    def wrap(self, values):
        return func(self, **values)
    return wrap
class TestFindCombo:

    variants = [
        {'cards': [144, 134, 124, 114, 104], 'value': [9, 14]},
        {'cards': [134, 122, 124, 114, 104, 94], 'value': [9, 13]},
        {'cards': [124, 114, 101, 104, 94, 84, 21], 'value': [9, 12]},
        {'cards': [123, 114, 104, 94, 84, 73, 74], 'value': [9, 11]},
        {'cards': [141, 104, 94, 84, 74, 64, 52], 'value': [9, 10]},
        {'cards': [94, 84, 74, 73, 64, 52, 54], 'value': [9, 9]},
        {'cards': [121, 84, 74, 64, 54, 44, 23], 'value': [9, 8]},
        {'cards': [143, 74, 64, 54, 44, 34, 23], 'value': [9, 7]},
        {'cards': [114, 104, 64, 54, 44, 34, 24], 'value': [9, 6]},
        {'cards': [141, 144, 62, 54, 44, 34, 24], 'value': [9, 5]},
        {'cards': [144, 142, 143, 141], 'value': [8, 14]},
        {'cards': [134, 144, 132, 133, 131], 'value': [8, 13, 14]},
        {'cards': [124, 134, 144, 122, 123, 121], 'value': [8, 12, 14]},
        {'cards': [114, 134, 112, 133, 113, 111], 'value': [8, 11, 13]},
        {'cards': [104, 144, 53, 102, 103, 101], 'value': [8, 10, 14]},
        {'cards': [144, 142, 143, 131, 133], 'value': [7, 14, 13]},
        {'cards': [144, 131, 133, 124, 123, 122], 'value': [7, 12, 13]},
        {'cards': [131, 114, 113, 133, 112, 134, 53], 'value': [7, 13, 11]},
        {'cards': [101, 114, 113, 103, 83, 104, 82], 'value': [7, 10, 11]},
        {'cards': [114, 51, 112, 92, 113, 54, 52], 'value': [7, 11, 5]},
        {'cards': [144, 134, 124, 114, 94], 'value': [6, 14, 13, 12, 11, 9]},
        {'cards': [143, 134, 104, 94, 54, 24], 'value': [6, 13, 10, 9, 5, 2]},
        {'cards': [132, 104, 94, 73, 74, 54, 24], 'value': [6, 10, 9, 7, 5, 2]},
        {'cards': [94, 84, 74, 54, 44, 34, 24], 'value': [6, 9, 8, 7, 5, 4]},
        {'cards': [123, 74, 54, 44, 34, 24, 21], 'value': [6, 7, 5, 4, 3, 2]},
        {'cards': [144, 133, 121, 112, 103], 'value': [5, 14]},
        {'cards': [121, 113, 104, 103, 94, 81], 'value': [5, 12]},
        {'cards': [104, 94, 81, 72, 64, 54, 23], 'value': [5, 10]},
        {'cards': [81, 72, 63, 54, 41, 32, 23], 'value': [5, 8]},
        {'cards': [114, 113, 64, 51, 42, 31, 23], 'value': [5, 6]},
        {'cards': [141, 51, 42, 31, 23], 'value': [5, 5]},
        {'cards': [144, 143, 141], 'value': [4, 14]},
        {'cards': [134, 133, 131, 42], 'value': [4, 13, 4]},
        {'cards': [142, 124, 123, 121, 72], 'value': [4, 12, 14, 7]},
        {'cards': [132, 122, 114, 113, 111, 92], 'value': [4, 11, 13, 12]},
        {'cards': [74, 52, 42, 34, 33, 31, 22], 'value': [4, 3, 7, 5]},
        {'cards': [144, 143, 131, 133], 'value': [3, 14, 13]},
        {'cards': [142, 131, 133, 123, 122], 'value': [3, 13, 12, 14]},
        {'cards': [132, 111, 113, 101, 93, 92], 'value': [3, 11, 9, 13]},
        {'cards': [101, 104, 93, 92, 74, 51, 53], 'value': [3, 10, 9, 7]},
        {'cards': [51, 53, 43, 42, 31, 34, 24], 'value': [3, 5, 4, 3]},
        {'cards': [144, 143], 'value': [2, 14]},
        {'cards': [144, 132, 133], 'value': [2, 13, 14]},
        {'cards': [134, 122, 123, 112], 'value': [2, 12, 13, 11]},
        {'cards': [134, 122, 113, 112, 104], 'value': [2, 11, 13, 12, 10]},
        {'cards': [143, 113, 104, 83, 81, 63], 'value': [2, 8, 14, 11, 10]},
        {'cards': [131, 122, 113, 104, 63, 51, 53], 'value': [2, 5, 13, 12, 11]},
        {'cards': [44], 'value': [1, 4]},
        {'cards': [141, 72], 'value': [1, 14, 7]},
        {'cards': [122, 53, 32], 'value': [1, 12, 5, 3]},
        {'cards': [112, 103, 91, 84], 'value': [1, 11, 10, 9, 8]},
        {'cards': [134, 102, 83, 63, 21], 'value': [1, 13, 10, 8, 6, 2]},
        {'cards': [104, 93, 72, 54, 43, 31], 'value': [1, 10, 9, 7, 5, 4]},
        {'cards': [134, 122, 94, 73, 52, 41, 23], 'value': [1, 13, 12, 9, 7, 5]},
    ]

    @pytest.mark.parametrize("values", variants)
    @get_parameters
    def test_all_values(self, cards, value):
        assert findCombo(cards) == value
class TestFindRatioCombo:

    variants = [
        {'cards': [114, 104, 94, 84, 21, 1124, 1101], 'value': [9, 12], 'kind': 2},
        {'cards': [104, 94, 84, 74, 64, 1141, 1051], 'value': [9, 10], 'kind': 0},
        {'cards': [143, 21, 31, 41, 51, 1141, 1082], 'value': [9, 5], 'kind': 2},
        {'cards': [141, 21, 31, 41, 51, 1142, 1134], 'value': [9, 5], 'kind': 0},
        {'cards': [134, 132, 133, 131, 74, 1144, 1022], 'value': [8, 13, 14], 'kind': 0},
        {'cards': [134, 144, 123, 121, 22, 1124, 1122], 'value': [8, 12, 14], 'kind': 2},
        {'cards': [112, 132, 113, 134, 52, 1131, 1114], 'value': [7, 13, 11], 'kind': 2},
        {'cards': [114, 112, 102, 104, 82, 1101, 1083], 'value': [7, 10, 11], 'kind': 1},
        {'cards': [114, 51, 112, 54, 52, 1092, 1134], 'value': [7, 5, 11], 'kind': 0},
        {'cards': [74, 54, 44, 34, 24, 1094, 1084], 'value': [6, 9, 8, 7, 5, 4], 'kind': 2},
        {'cards': [74, 54, 44, 34, 24, 1123, 1021], 'value': [6, 7, 5, 4, 3, 2], 'kind': 0},
        {'cards': [63, 54, 41, 32, 23, 1081, 1072], 'value': [5, 8], 'kind': 2},
        {'cards': [64, 51, 42, 31, 23, 1114, 1113], 'value': [5, 6], 'kind': 0},
        {'cards': [142, 24, 31, 41, 53, 1144, 1101], 'value': [5, 5], 'kind': 2},
        {'cards': [142, 24, 31, 41, 53, 1104, 1101], 'value': [5, 5], 'kind': 0},
        {'cards': [122, 113, 111, 92, 52, 1132, 1114], 'value': [4, 11, 13, 12], 'kind': 2},
        {'cards': [42, 34, 33, 31, 22, 1074, 1052], 'value': [4, 3, 7, 5], 'kind': 0},
        {'cards': [113, 132, 101, 92, 34, 1111, 1093], 'value': [3, 11, 9, 13], 'kind': 2},
        {'cards': [104, 93, 92, 74, 53, 1101, 1051], 'value': [3, 10, 9, 7], 'kind': 1},
        {'cards': [51, 53, 43, 42, 31, 1034, 1024], 'value': [3, 5, 4, 3], 'kind': 0},
        {'cards': [143, 112, 104, 63, 43, 1083, 1081], 'value': [2, 8, 14, 11, 10], 'kind': 2},
        {'cards': [112, 104, 63, 51, 53, 1131, 1123], 'value': [2, 5, 13, 12, 11], 'kind': 0},
        {'cards': [134, 122, 94, 73, 23, 1052, 1041], 'value': [1, 13, 12, 9, 7, 5], 'kind': 2},
        {'cards': [141, 104, 93, 72, 54, 1043, 1031], 'value': [1, 14, 10, 9, 7, 5], 'kind': 0},
    ]

    @pytest.mark.parametrize("values", variants)
    @get_parameters
    def test_all_values(self, cards, value, kind):
        assert findRatioCombo(cards) == [value, kind]
| 57.097561 | 96 | 0.477431 | 1,099 | 7,023 | 3.044586 | 0.166515 | 0.026898 | 0.032875 | 0.014345 | 0.222056 | 0.127316 | 0.067543 | 0.046623 | 0.046623 | 0.046623 | 0 | 0.314176 | 0.256728 | 7,023 | 122 | 97 | 57.565574 | 0.32682 | 0.078457 | 0 | 0.061224 | 0 | 0 | 0.13744 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 1 | 0.040816 | false | 0 | 0.020408 | 0.010204 | 0.122449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
851639da53c56a4337a1ff09a45417f2f39ee42c | 61 | py | Python | cbbuild-manifest/version.py | ceejatec/python-couchbase-commons | ad74fc3de98eeaadf0ea38da86639d07f92b0945 | [
"Apache-2.0"
] | null | null | null | cbbuild-manifest/version.py | ceejatec/python-couchbase-commons | ad74fc3de98eeaadf0ea38da86639d07f92b0945 | [
"Apache-2.0"
] | null | null | null | cbbuild-manifest/version.py | ceejatec/python-couchbase-commons | ad74fc3de98eeaadf0ea38da86639d07f92b0945 | [
"Apache-2.0"
] | 1 | 2020-02-18T08:10:51.000Z | 2020-02-18T08:10:51.000Z | # Update this file for version changes
__version__ = '0.5.3'
| 20.333333 | 38 | 0.737705 | 10 | 61 | 4.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.163934 | 61 | 2 | 39 | 30.5 | 0.745098 | 0.590164 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
851e3aedc187910bb5e2a11f8eaab604baa8b396 | 634 | py | Python | dimensigon/network/auth.py | dimensigon/dimensigon | 079d7c91a66e10f13510d89844fbadb27e005b40 | [
"Apache-2.0"
] | 2 | 2020-11-20T10:27:14.000Z | 2021-02-21T13:57:56.000Z | dimensigon/network/auth.py | dimensigon/dimensigon | 079d7c91a66e10f13510d89844fbadb27e005b40 | [
"Apache-2.0"
] | null | null | null | dimensigon/network/auth.py | dimensigon/dimensigon | 079d7c91a66e10f13510d89844fbadb27e005b40 | [
"Apache-2.0"
] | null | null | null | from requests.auth import AuthBase
class HTTPBearerAuth(AuthBase):

    def __init__(self, token):
        self.token = token

    def __eq__(self, other):
        return self.token == getattr(other, 'token', None)

    def __ne__(self, other):
        return not self == other

    def __call__(self, r):
        if hasattr(r, 'headers'):
            r.headers.update(self.header)
        else:
            r['headers'].update(self.header)
            r.pop('auth', None)
        return r

    @property
    def header(self):
        return {'Authorization': str(self)}

    def __str__(self):
        return 'Bearer ' + self.token
| 22.642857 | 58 | 0.578864 | 74 | 634 | 4.689189 | 0.418919 | 0.103746 | 0.086455 | 0.103746 | 0.138329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.299685 | 634 | 27 | 59 | 23.481481 | 0.781532 | 0 | 0 | 0 | 0 | 0 | 0.067823 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.05 | 0.2 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
85246a13fd70ce31f59ef368d91ed8a4a6677db5 | 236 | py | Python | pygame/song with pygame.py | vitorhugo1207/SomethingsInPython | 505a7e906cd391a9c6f820900c8ac8b7f456a5dc | [
"MIT"
] | null | null | null | pygame/song with pygame.py | vitorhugo1207/SomethingsInPython | 505a7e906cd391a9c6f820900c8ac8b7f456a5dc | [
"MIT"
] | null | null | null | pygame/song with pygame.py | vitorhugo1207/SomethingsInPython | 505a7e906cd391a9c6f820900c8ac8b7f456a5dc | [
"MIT"
] | null | null | null | import pygame
text = input('Song name: ')
pygame.init()
pygame.mixer.music.load(text)
pygame.mixer.music.play()
print('S to stop.')
text2 = input('-> ')
if text2 == 'S':
    pygame.mixer.music.stop()
else:
    pygame.event.wait()
| 14.75 | 32 | 0.652542 | 34 | 236 | 4.529412 | 0.647059 | 0.214286 | 0.207792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01005 | 0.15678 | 236 | 15 | 33 | 15.733333 | 0.763819 | 0 | 0 | 0 | 0 | 0 | 0.139831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.090909 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8529f954b36e705a8589749f2d20932ba107fc8d | 1,351 | py | Python | pfrf_example/pfrf_example/controller/person_controller.py | problemfighter/pfms-example | 97e1ea951a3ae34b4df88da791e00857b5c433e9 | [
"Apache-2.0"
] | 2 | 2021-07-16T21:34:26.000Z | 2021-07-16T21:35:11.000Z | pfrf_example/pfrf_example/controller/person_controller.py | problemfighter/pfms-example | 97e1ea951a3ae34b4df88da791e00857b5c433e9 | [
"Apache-2.0"
] | null | null | null | pfrf_example/pfrf_example/controller/person_controller.py | problemfighter/pfms-example | 97e1ea951a3ae34b4df88da791e00857b5c433e9 | [
"Apache-2.0"
] | null | null | null | from flask import Blueprint
from pfms.swagger.pfms_swagger_decorator import pfms_create, pfms_details, pfms_pagination_sort_search_list, pfms_restore, pfms_delete
from pfrf_example.dto.person_dto import PersonCreateDto, PersonDetailsDto, PersonUpdateDto
from pfrf_example.service.person_service import PersonService
person_controller = Blueprint("person_controller", __name__, url_prefix="/api/v1/person")
person_service = PersonService()
@person_controller.route("/create", methods=['POST'])
@pfms_create(request_body=PersonCreateDto)
def create():
    return person_service.create()


@person_controller.route("/details/<int:id>", methods=['GET'])
@pfms_details(response_obj=PersonDetailsDto)
def details(id: int):
    return person_service.details(id)


@person_controller.route("/update", methods=['POST'])
@pfms_create(request_body=PersonUpdateDto)
def update():
    return person_service.update()


@person_controller.route("/delete/<int:id>", methods=['DELETE'])
@pfms_delete()
def delete(id: int):
    return person_service.delete(id)


@person_controller.route("/restore/<int:id>", methods=['GET'])
@pfms_restore()
def restore(id: int):
    return person_service.restore(id)


@person_controller.route("/list", methods=['GET'])
@pfms_pagination_sort_search_list(response_obj=PersonDetailsDto)
def list():
    return person_service.list()
| 30.022222 | 134 | 0.783864 | 170 | 1,351 | 5.952941 | 0.252941 | 0.102767 | 0.124506 | 0.050395 | 0.227273 | 0.063241 | 0 | 0 | 0 | 0 | 0 | 0.000808 | 0.083642 | 1,351 | 44 | 135 | 30.704545 | 0.81664 | 0 | 0 | 0 | 0 | 0 | 0.091044 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.133333 | 0.2 | 0.533333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
51b0a533b3ef7b0274f3b039ee56940b42536735 | 506 | py | Python | tests/test_oscillator.py | borismarin/org.geppetto.model.apigen | ed099ac64301de11570779e5e294b7a210a0e0b2 | [
"MIT"
] | null | null | null | tests/test_oscillator.py | borismarin/org.geppetto.model.apigen | ed099ac64301de11570779e5e294b7a210a0e0b2 | [
"MIT"
] | null | null | null | tests/test_oscillator.py | borismarin/org.geppetto.model.apigen | ed099ac64301de11570779e5e294b7a210a0e0b2 | [
"MIT"
] | null | null | null | from gpt_oscillator import oscillator_lib as lib
def test_dynamic_types():
    assert type(lib.oscillator) is type
    osc = lib.oscillator("o_id", "o_name")
    assert osc.id == 'o_id'


def test_default_ids():
    o2 = lib.oscillator()
    o3 = lib.oscillator()
    assert o2.id == 'oscillator_0'
    assert o3.id == 'oscillator_1'


def test_type_inheritance():
    so = lib.aSpecialOcillator()
    assert isinstance(so, lib.aSpecialOcillator)  # d'uh!
    assert isinstance(so, lib.oscillator)
| 24.095238 | 58 | 0.681818 | 69 | 506 | 4.811594 | 0.42029 | 0.195783 | 0.13253 | 0.126506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014423 | 0.177866 | 506 | 20 | 59 | 25.3 | 0.783654 | 0.009881 | 0 | 0 | 0 | 0 | 0.076152 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
51b652b8422d3f9d57c3800b5be4e37b3c91f691 | 2,992 | py | Python | src/web_encoder/exceptions.py | cesarmerjan/web_encoder | 7a1bbde6a1c1aceede433e105175f20d17629313 | [
"MIT"
] | null | null | null | src/web_encoder/exceptions.py | cesarmerjan/web_encoder | 7a1bbde6a1c1aceede433e105175f20d17629313 | [
"MIT"
] | null | null | null | src/web_encoder/exceptions.py | cesarmerjan/web_encoder | 7a1bbde6a1c1aceede433e105175f20d17629313 | [
"MIT"
] | null | null | null | class WebEncoderException(Exception):
pass
class InvalidEncodingErrors(WebEncoderException):
"""Exception raised for errors in the input value of encoding_errors attribute of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = (
message
or "Invalid encoding_errors value. It should be 'strict', 'ignore', 'replace' or 'xmlcharrefreplace'."
)
super().__init__(self.message)
class InvalidStringType(WebEncoderException):
"""Exception raised for errors in the input value of _string into _string_to_bytes method of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "The _string need to be str type."
super().__init__(self.message)
class InvalidBytesType(WebEncoderException):
"""Exception raised for errors in the input value of _bytes into methods of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "The _bytes need to be bytes type."
super().__init__(self.message)
class InvalidDataType(WebEncoderException):
"""Exception raised for errors in the input value of data into encode method of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "The data need to be str type."
super().__init__(self.message)
class InvalidEncodedDataType(WebEncoderException):
"""Exception raised for errors in the input value of encoded_data into decode method of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "The encoded_data need to be str type."
super().__init__(self.message)
class DataDecodeError(WebEncoderException):
"""Exception raised for data decoding error in _bytes_to_string of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "Could not decode the message."
super().__init__(self.message)
class CannotBeCompressed(WebEncoderException):
"""Exception raised for errors in the _compress_data method of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "The data cannot be compressed."
super().__init__(self.message)
class CannotBeDecompressed(WebEncoderException):
"""Exception raised for errors in the _decompress_data method of WebEncoder class.
Args:
message: explanation of the error
"""
def __init__(self, message=None):
self.message = message or "The data cannot be decompressed."
super().__init__(self.message)
| 29.333333 | 114 | 0.688168 | 354 | 2,992 | 5.584746 | 0.175141 | 0.133536 | 0.121396 | 0.149722 | 0.726353 | 0.688417 | 0.673748 | 0.62519 | 0.62519 | 0.62519 | 0 | 0 | 0.231618 | 2,992 | 101 | 115 | 29.623762 | 0.859939 | 0.365307 | 0 | 0.432432 | 0 | 0 | 0.18187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216216 | false | 0.027027 | 0 | 0 | 0.459459 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
51be679d682fbf8eb496c5c3820e7d6843d898a5 | 380 | py | Python | apps/reputation/serializers.py | macdaliot/exist | 65244f79c602c5a00c3ea6a7eef512ce9c21e60a | [
"MIT"
] | 159 | 2019-03-15T10:46:19.000Z | 2022-03-12T09:19:31.000Z | apps/reputation/serializers.py | macdaliot/exist | 65244f79c602c5a00c3ea6a7eef512ce9c21e60a | [
"MIT"
] | 6 | 2019-03-16T12:51:24.000Z | 2020-07-09T02:25:42.000Z | apps/reputation/serializers.py | macdaliot/exist | 65244f79c602c5a00c3ea6a7eef512ce9c21e60a | [
"MIT"
] | 36 | 2019-03-16T10:37:14.000Z | 2021-11-14T21:04:18.000Z | from rest_framework import serializers
from .models import blacklist
class blSerializer(serializers.ModelSerializer):
    source = serializers.CharField(source='get_source_display')

    class Meta:
        model = blacklist
        fields = '__all__'


class sourceSerializer(serializers.ModelSerializer):
    class Meta:
        model = blacklist
        fields = ('SOURCES',)
| 27.142857 | 63 | 0.715789 | 36 | 380 | 7.361111 | 0.555556 | 0.196226 | 0.10566 | 0.173585 | 0.218868 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202632 | 380 | 13 | 64 | 29.230769 | 0.874587 | 0 | 0 | 0.363636 | 0 | 0 | 0.084211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
51bf679a93edad177bbce42cdd412836a80f29c2 | 2,042 | py | Python | run.py | ShawnHXH/GNN-CCA | 598d5bb13e445b864e728cc2c9e1af38783a2319 | [
"MIT"
] | null | null | null | run.py | ShawnHXH/GNN-CCA | 598d5bb13e445b864e728cc2c9e1af38783a2319 | [
"MIT"
] | null | null | null | run.py | ShawnHXH/GNN-CCA | 598d5bb13e445b864e728cc2c9e1af38783a2319 | [
"MIT"
] | null | null | null | import argparse
from tools.trainer import Trainer
def make_parser():
parser = argparse.ArgumentParser()
# training config
parser.add_argument("--train", default=False, action="store_true")
parser.add_argument("--epochs", type=int, default=100, help="training epochs")
parser.add_argument("--batch-size", type=int, default=16, help="batch size for training")
parser.add_argument("--eval-batch-size", type=int, default=64, help="batch size for validating")
parser.add_argument("-s", "--max-passing-steps", type=int, default=4,
help="maximum message passing steps in GNN for MPN")
# testing config
parser.add_argument("--test", default=False, action="store_true")
parser.add_argument("--ckpt", type=str, help="path to test model")
parser.add_argument("--visualize", default=False, action="store_true",
help="visualize the testing results")
# common config
parser.add_argument("--device", type=str, default="cuda", help="device for training")
parser.add_argument("--output", type=str, default="./ckpt", help="output dir for training")
# reid feature extractor
parser.add_argument("--reid-name", type=str, default="osnet_ain_x1_0",
help="the name of feature extractor model")
parser.add_argument("--reid-path", type=str, default="ckpt/osnet_ain_ms_d_c.pth.tar",
help="the path to feature extractor model")
# dataset
parser.add_argument("--epfl", default=False, action="store_true",
help="using EPFL dataset for training")
parser.add_argument("--seq-name", type=str, nargs="+",
help="using specific sequences in dataset, 'all' means using all of them")
return parser
if __name__ == '__main__':
args = make_parser().parse_args()
main = Trainer(args)
if args.train:
main.train()
elif args.test:
main.test()
else:
raise ValueError("Please assign a state in (train, test)")
| 44.391304 | 100 | 0.648384 | 261 | 2,042 | 4.934866 | 0.356322 | 0.097826 | 0.184783 | 0.071429 | 0.217391 | 0.11646 | 0.068323 | 0.068323 | 0 | 0 | 0 | 0.006219 | 0.212537 | 2,042 | 45 | 101 | 45.377778 | 0.794776 | 0.036729 | 0 | 0 | 0 | 0 | 0.328914 | 0.014788 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0.058824 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
51c8ccc28d7cc3aedfe256bc301ddc9a177e5fc2 | 376 | py | Python | src/901_1000/0922_sort-array-by-parity-ii/sort-array-by-parity-ii.py | himichael/LeetCode | d54f48e785af3d47a2a67a95fd3343d2b23f8ae5 | [
"Apache-2.0"
] | 1 | 2019-12-18T06:08:47.000Z | 2019-12-18T06:08:47.000Z | src/901_1000/0922_sort-array-by-parity-ii/sort-array-by-parity-ii.py | himichael/LeetCode | d54f48e785af3d47a2a67a95fd3343d2b23f8ae5 | [
"Apache-2.0"
] | 1 | 2019-05-18T09:35:22.000Z | 2019-05-18T09:35:22.000Z | src/901_1000/0922_sort-array-by-parity-ii/sort-array-by-parity-ii.py | himichael/LeetCode | d54f48e785af3d47a2a67a95fd3343d2b23f8ae5 | [
"Apache-2.0"
] | null | null | null | class Solution(object):
def sortArrayByParityII(self, A):
if not A:
return []
n = len(A)
res = [0] * n
i = 0
for x in A:
if x%2==0:
res[i] = x
i += 2
i = 1
for x in A:
if x%2==1:
res[i] = x
i += 2
return res | 22.117647 | 37 | 0.316489 | 50 | 376 | 2.4 | 0.42 | 0.075 | 0.1 | 0.116667 | 0.3 | 0.183333 | 0.183333 | 0 | 0 | 0 | 0 | 0.056604 | 0.577128 | 376 | 17 | 38 | 22.117647 | 0.691824 | 0 | 0 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
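The two-pass solution above can also be done in a single pass by keeping separate even and odd write cursors; a sketch (the function name below is mine, not the LeetCode method signature):

```python
def sort_array_by_parity_ii(a):
    """Place even values at even indices and odd values at odd indices."""
    res = [0] * len(a)
    even_i, odd_i = 0, 1
    for x in a:
        if x % 2 == 0:
            res[even_i] = x
            even_i += 2
        else:
            res[odd_i] = x
            odd_i += 2
    return res

print(sort_array_by_parity_ii([4, 2, 5, 7]))  # [4, 5, 2, 7]
```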
51cb3b7c3ae579225f507efca28cc4220ea99cfb | 681 | py | Python | corshinesubd.py | Corshine-Official/Eztools | 9278ef4916172505895071c4fc82ee68f4b92e18 | [
"MIT"
] | 1 | 2020-05-02T17:14:46.000Z | 2020-05-02T17:14:46.000Z | corshinesubd.py | Corshine-Official/Eztools | 9278ef4916172505895071c4fc82ee68f4b92e18 | [
"MIT"
] | null | null | null | corshinesubd.py | Corshine-Official/Eztools | 9278ef4916172505895071c4fc82ee68f4b92e18 | [
"MIT"
] | null | null | null | ##################################################
## / ___/ _ \| _ \/ ___|| | | |_ _| \ | | ____|##
##| | | | | | |_) \___ \| |_| || || \| | _| ##
##| |__| |_| | _ < ___) | _ || || |\ | |___ ##
## \____\___/|_| \_\____/|_| |_|___|_| \_|_____|##
##################################################
#Usage: python3 corshinesubd.py (domain)
import requests
import sys
sub_list = open("wordlistsubd.txt").read()
subs = sub_list.splitlines()
for sub in subs:
url_to_check = f"http://{sub}.{sys.argv[1]}"
try:
requests.get(url_to_check)
except requests.ConnectionError:
pass
else:
print("Valid domain: ",url_to_check)
| 25.222222 | 50 | 0.433186 | 49 | 681 | 4.714286 | 0.673469 | 0.064935 | 0.12987 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003788 | 0.22467 | 681 | 26 | 51 | 26.192308 | 0.433712 | 0.325991 | 0 | 0 | 0 | 0 | 0.164223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.166667 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
51d84e96d0312983385d2c010401ad3d5ca51c3c | 390 | py | Python | osmaxx/job_progress/views.py | tyrasd/osmaxx | da4454083d17b2ef8b0623cad62e39992b6bd52a | [
"MIT"
] | 27 | 2015-03-30T14:17:26.000Z | 2022-02-19T17:30:44.000Z | osmaxx/job_progress/views.py | tyrasd/osmaxx | da4454083d17b2ef8b0623cad62e39992b6bd52a | [
"MIT"
] | 483 | 2015-03-09T16:58:03.000Z | 2022-03-14T09:29:06.000Z | osmaxx/job_progress/views.py | tyrasd/osmaxx | da4454083d17b2ef8b0623cad62e39992b6bd52a | [
"MIT"
] | 6 | 2015-04-07T07:38:30.000Z | 2020-04-01T12:45:53.000Z | from django.http import HttpResponse
from django.shortcuts import get_object_or_404
from osmaxx.excerptexport.models import Export
def tracker(request, export_id):
export = get_object_or_404(Export, pk=export_id)
export.set_and_handle_new_status(request.GET['status'], incoming_request=request)
response = HttpResponse('')
response.status_code = 200
return response
| 27.857143 | 85 | 0.787179 | 53 | 390 | 5.528302 | 0.54717 | 0.068259 | 0.075085 | 0.095563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026706 | 0.135897 | 390 | 13 | 86 | 30 | 0.84273 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
51e43ed69446545aaa1166e7327dd9ff90d24b54 | 191 | py | Python | okta/models/app/AppUserProfile.py | xmercury-qb/okta-sdk-python | a668a963b13fc61177b36c5438c6ec5fa6f17c4e | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-06-19T18:54:00.000Z | 2020-06-19T18:54:00.000Z | okta/models/app/AppUserProfile.py | xmercury-qb/okta-sdk-python | a668a963b13fc61177b36c5438c6ec5fa6f17c4e | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2018-10-09T22:14:33.000Z | 2018-10-09T23:10:40.000Z | okta/models/app/AppUserProfile.py | xmercury-qb/okta-sdk-python | a668a963b13fc61177b36c5438c6ec5fa6f17c4e | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2018-11-08T19:32:46.000Z | 2021-03-30T06:35:48.000Z | class AppUserProfile:
types = {
'username': str,
'password': str
}
def __init__(self):
self.username = None # str
self.password = None # str
| 14.692308 | 35 | 0.518325 | 18 | 191 | 5.277778 | 0.555556 | 0.147368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.376963 | 191 | 12 | 36 | 15.916667 | 0.798319 | 0.036649 | 0 | 0 | 0 | 0 | 0.088398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.25 | 0 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
51e6e07b849141bd9def2d517b1954741683241d | 122 | py | Python | hydra/__init__.py | Hydraverse/hypy | 7f276975747ba86296044526acc090e540ee77cc | [
"Apache-2.0"
] | 3 | 2021-11-18T19:04:10.000Z | 2022-01-26T04:53:51.000Z | hydra/__init__.py | Hydraverse/hypy | 7f276975747ba86296044526acc090e540ee77cc | [
"Apache-2.0"
] | null | null | null | hydra/__init__.py | Hydraverse/hypy | 7f276975747ba86296044526acc090e540ee77cc | [
"Apache-2.0"
] | null | null | null | """Hydra Library Tools & Applications.
"""
__all__ = (
"app", "hy", "log", "rpc", "test", "util"
)
VERSION = "2.6.5"
| 15.25 | 45 | 0.532787 | 15 | 122 | 4.066667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030612 | 0.196721 | 122 | 7 | 46 | 17.428571 | 0.591837 | 0.286885 | 0 | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
51ea02eff58b5c67bd9880c1554ad4af45dd3227 | 569 | py | Python | test/SpeedTest.py | prajwal309/TierraCrossSection | 4b8896540616ec3f46322ea4c40df9f2b4774a44 | [
"MIT"
] | null | null | null | test/SpeedTest.py | prajwal309/TierraCrossSection | 4b8896540616ec3f46322ea4c40df9f2b4774a44 | [
"MIT"
] | null | null | null | test/SpeedTest.py | prajwal309/TierraCrossSection | 4b8896540616ec3f46322ea4c40df9f2b4774a44 | [
"MIT"
] | null | null | null | #author:Prajwal Niraula
#institution: MIT
import matplotlib.pyplot as plt
import time
import numpy as np
try:
from HAPILite import CalcCrossSection
except:
from ..HAPILite import CalcCrossSection
WaveNumber = np.arange(0,10000,0.001)
StartTime = time.time()
CrossSection = CalcCrossSection("CO2",Temp=1000.0,WN_Grid=WaveNumber, Profile="Doppler", NCORES=-1)
StopTime = time.time()
print("The time taken to calculate the cross-section is %4.3f" % (StopTime - StartTime))
plt.figure()
plt.plot(WaveNumber, CrossSection, "k-")
plt.title("L-HAPI")
plt.show()
| 21.074074 | 100 | 0.745167 | 79 | 569 | 5.35443 | 0.658228 | 0.056738 | 0.085106 | 0.160757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038306 | 0.128295 | 569 | 26 | 101 | 21.884615 | 0.814516 | 0.065026 | 0 | 0 | 0 | 0 | 0.133962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
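The StartTime/StopTime pattern in the benchmark above generalizes into a tiny helper; `time.perf_counter()` is the recommended monotonic clock for measuring intervals. A sketch — the `timed` name is my own, not part of the repo:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

total, elapsed = timed(sum, range(1_000))
print(total)  # 499500
```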
51f6dd3ff8f3aa50fc8c26af38852974d4425e94 | 666 | py | Python | app/user/schemas.py | joeseggie/resourceidea | aae6120e3ec84f3fc7e1ab1bc833ce37bd06685f | [
"MIT"
] | null | null | null | app/user/schemas.py | joeseggie/resourceidea | aae6120e3ec84f3fc7e1ab1bc833ce37bd06685f | [
"MIT"
] | 21 | 2019-01-26T20:39:34.000Z | 2019-06-20T10:09:57.000Z | app/user/schemas.py | joeseggie/resourceidea | aae6120e3ec84f3fc7e1ab1bc833ce37bd06685f | [
"MIT"
] | null | null | null | from marshmallow import fields
from marshmallow import Schema
from marshmallow.validate import OneOf
class UsersListFilterSchema(Schema):
    sort_key = fields.String(
        validate=OneOf(choices=['username', 'email', 'phone_number']),
        missing='username')
sort_order = fields.String(missing='asc')
class UsersListSchema(Schema):
username = fields.String()
email = fields.String()
phone_number = fields.String()
class UserInputSchema(Schema):
username = fields.String()
password = fields.String()
confirm_password = fields.String()
email = fields.String()
confirm_email = fields.String()
phone_number = fields.String()
| 25.615385 | 61 | 0.707207 | 71 | 666 | 6.535211 | 0.338028 | 0.284483 | 0.109914 | 0.112069 | 0.260776 | 0.172414 | 0.172414 | 0 | 0 | 0 | 0 | 0 | 0.181682 | 666 | 25 | 62 | 26.64 | 0.851376 | 0 | 0 | 0.315789 | 0 | 0 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.105263 | 0.157895 | 0 | 0.894737 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
51fc23caa27319c9a7c6759a461e51d312c4b89e | 3,335 | py | Python | testcases/DPO.py | vaibhav92/op-test-framework | 792fa18d3f09fd8c28073074815ff96d373ab96d | [
"Apache-2.0"
] | null | null | null | testcases/DPO.py | vaibhav92/op-test-framework | 792fa18d3f09fd8c28073074815ff96d373ab96d | [
"Apache-2.0"
] | null | null | null | testcases/DPO.py | vaibhav92/op-test-framework | 792fa18d3f09fd8c28073074815ff96d373ab96d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python2
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# $Source: op-test-framework/testcases/OpTestDPO.py $
#
# OpenPOWER Automated Test Project
#
# Contributors Listed Below - COPYRIGHT 2017
# [+] International Business Machines Corp.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
#
# IBM_PROLOG_END_TAG
# @package DPO.py
# The Delayed Power Off (DPO) testcase verifies that the OS is notified of a
# graceful shutdown request via OPAL and processes it.
# We use the "ipmitool power soft" command to issue the DPO.
from common.OpTestConstants import OpTestConstants as BMC_CONST
from common.OpTestError import OpTestError
import unittest
import pexpect
import OpTestConfiguration
from common.OpTestSystem import OpSystemState
class Base(unittest.TestCase):
def setUp(self):
conf = OpTestConfiguration.conf
self.cv_IPMI = conf.ipmi()
self.cv_SYSTEM = conf.system()
self.cv_HOST = conf.host()
self.bmc_type = conf.args.bmc_type
self.util = self.cv_SYSTEM.util
class DPOSkiroot(Base):
def setup_test(self):
self.cv_SYSTEM.goto_state(OpSystemState.PETITBOOT_SHELL)
self.c = self.cv_SYSTEM.sys_get_ipmi_console()
self.cv_SYSTEM.host_console_unique_prompt()
self.host = "Skiroot"
##
# @brief This will test DPO feature in skiroot and Host
#
# @return BMC_CONST.FW_SUCCESS or raise OpTestError
#
def runTest(self):
self.setup_test()
self.c.run_command("uname -a")
if self.host == "Host":
self.cv_SYSTEM.load_ipmi_drivers(True)
self.c.sol.sendline("ipmitool power soft")
try:
rc = self.c.sol.expect_exact(["reboot: Power down",
"Chassis Power Control: Soft",
"Power down",
"Invalid command",
"Unspecified error",
"Could not open device at"
], timeout=120)
self.assertIn(rc, [0, 1, 2], "Failed to power down")
except pexpect.TIMEOUT:
raise OpTestError("Soft power off not happening")
rc = self.cv_SYSTEM.sys_wait_for_standby_state()
print rc
self.cv_SYSTEM.set_state(OpSystemState.OFF)
self.cv_SYSTEM.goto_state(OpSystemState.OS)
class DPOHost(DPOSkiroot):
def setup_test(self):
self.host = "Host"
self.cv_SYSTEM.goto_state(OpSystemState.OS)
self.util.PingFunc(self.cv_HOST.ip, BMC_CONST.PING_RETRY_POWERCYCLE)
self.c = self.cv_SYSTEM.sys_get_ipmi_console()
self.cv_SYSTEM.host_console_login()
self.cv_SYSTEM.host_console_unique_prompt()
| 35.860215 | 76 | 0.651274 | 426 | 3,335 | 4.957746 | 0.453052 | 0.045455 | 0.073864 | 0.022727 | 0.158617 | 0.143466 | 0.110322 | 0.053977 | 0.053977 | 0.053977 | 0 | 0.00613 | 0.266267 | 3,335 | 92 | 77 | 36.25 | 0.856968 | 0.335232 | 0 | 0.163265 | 1 | 0 | 0.092117 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0 | null | null | 0 | 0.122449 | null | null | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a40837dd0431a46c20a1dc405b0aa97db458697f | 777 | py | Python | nsc.py | Lord-Vlad/numeral-system-converter | 38bdf11fc8a0816fc57d5f5e6c893fef5707de2c | [
"MIT"
] | null | null | null | nsc.py | Lord-Vlad/numeral-system-converter | 38bdf11fc8a0816fc57d5f5e6c893fef5707de2c | [
"MIT"
] | null | null | null | nsc.py | Lord-Vlad/numeral-system-converter | 38bdf11fc8a0816fc57d5f5e6c893fef5707de2c | [
"MIT"
] | null | null | null | # Numeral System Converter
"""
TODO
1. Convert from any system to decimal
"""
def binary_to_decimal(bin_string: str) -> int:
bin_string = str(bin_string).strip()
if not bin_string:
raise ValueError("Empty string was passed to the function")
is_negative = bin_string[0] == "-"
if is_negative:
bin_string = bin_string[1:]
if not all(char in "01" for char in bin_string):
raise ValueError("Non-binary value was passed to the function")
decimal_number = 0
for char in bin_string:
decimal_number = 2 * decimal_number + int(char)
return -decimal_number if is_negative else decimal_number
def to_dec(num: int) -> int:
pass
def main():
    print(binary_to_decimal("101"))
if __name__ == "__main__":
main() | 25.9 | 71 | 0.674389 | 115 | 777 | 4.295652 | 0.4 | 0.163968 | 0.060729 | 0.097166 | 0.161943 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013333 | 0.227799 | 777 | 30 | 72 | 25.9 | 0.81 | 0.087516 | 0 | 0.1 | 0 | 0 | 0.132479 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 1 | 0.15 | false | 0.2 | 0 | 0 | 0.2 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
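For the TODO in the converter above ("convert from any system to decimal"), Python's built-in `int()` already parses digit strings in bases 2-36, including a leading sign; a hedged sketch (`to_decimal` is a hypothetical name, not from the repo):

```python
def to_decimal(digits: str, base: int) -> int:
    """Convert a digit string in the given base (2-36) to a decimal int."""
    # int() handles sign, case-insensitive digits, and validates the base.
    return int(digits.strip(), base)

print(to_decimal("1010", 2))   # 10
print(to_decimal("-ff", 16))   # -255
```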
cfb6d7eb0802680ca32ee8d7f0c2b6622c37bbe6 | 135 | py | Python | python/problem7.py | jreese/euler | 0e2a809620cb02367120c0fbfbf9b419edd42c6e | [
"MIT"
] | 1 | 2015-12-19T09:59:39.000Z | 2015-12-19T09:59:39.000Z | python/problem7.py | jreese/euler | 0e2a809620cb02367120c0fbfbf9b419edd42c6e | [
"MIT"
] | null | null | null | python/problem7.py | jreese/euler | 0e2a809620cb02367120c0fbfbf9b419edd42c6e | [
"MIT"
] | null | null | null |
from primes import sieve
target = 10001
primes = sieve()
n = 1
while n <= target:
prime = primes.next()
n += 1
print prime
| 10.384615 | 25 | 0.622222 | 20 | 135 | 4.2 | 0.6 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.274074 | 135 | 12 | 26 | 11.25 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.125 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
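The Euler script above is Python 2 (`primes.next()`, `print` statement); a rough Python 3 equivalent of an incremental prime generator is sketched below — plain trial division, which may differ from the repo's actual `sieve` implementation:

```python
from itertools import islice

def prime_gen():
    """Yield primes indefinitely, trial-dividing by earlier primes up to sqrt(n)."""
    primes = []
    n = 2
    while True:
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
            yield n
        n += 1

# the 6th prime:
print(next(islice(prime_gen(), 5, None)))  # 13
```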
cfbad6b4d8d4be1f6071c7e4b841238f41f1f4b7 | 976 | py | Python | env/lib/python3.7/site-packages/indicoio/text/organizations.py | Novandev/gn_api | 08b071ae3916bb7a183d61843a2cd09e9fe15c7b | [
"MIT"
] | 4 | 2015-08-20T22:42:19.000Z | 2016-03-14T01:28:45.000Z | indicoio/text/organizations.py | mikesperry/IndicoIo-python | caa155b8b31b76df3f86f559ce5324f061a03e40 | [
"MIT"
] | null | null | null | indicoio/text/organizations.py | mikesperry/IndicoIo-python | caa155b8b31b76df3f86f559ce5324f061a03e40 | [
"MIT"
] | null | null | null | from ..utils.api import api_handler
from ..utils.decorators import detect_batch_decorator
@detect_batch_decorator
def organizations(text, cloud=None, batch=None, api_key=None, version=2, **kwargs):
"""
Given input text, returns references to specific organizations found in the text
Example usage:
.. code-block:: python
>>> text = "London Underground's boss Mike Brown warned that the strike ..."
>>> entities = indicoio.organizations(text)
[
{
u'text': "London Underground",
u'confidence': 0.8643872141838074,
u'position': [0, 18]
}
]
:param text: The text to be analyzed.
:type text: str or unicode
    :rtype: List of dictionaries, each with text, confidence, and position keys
"""
url_params = {"batch": batch, "api_key": api_key, "version": version}
return api_handler(text, cloud=cloud, api="organizations", url_params=url_params, **kwargs)
| 33.655172 | 96 | 0.630123 | 114 | 976 | 5.289474 | 0.570175 | 0.029851 | 0.066335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029126 | 0.26127 | 976 | 28 | 97 | 34.857143 | 0.807212 | 0.514344 | 0 | 0 | 0 | 0 | 0.085562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
cfbd2f1a8cd10091dcf8bd4bc792ebe73cbb6d06 | 1,832 | py | Python | indico/migrations/versions/20190118_1213_7ec3949a21c7_use_enum_for_resv_occurrence_state.py | salevajo/indico | 6f9cbabc20d1641caea907099388ae2b04965cf8 | [
"MIT"
] | 1 | 2018-11-12T21:29:26.000Z | 2018-11-12T21:29:26.000Z | indico/migrations/versions/20190118_1213_7ec3949a21c7_use_enum_for_resv_occurrence_state.py | salevajo/indico | 6f9cbabc20d1641caea907099388ae2b04965cf8 | [
"MIT"
] | 9 | 2020-09-08T09:25:57.000Z | 2022-01-13T02:59:05.000Z | indico/migrations/versions/20190118_1213_7ec3949a21c7_use_enum_for_resv_occurrence_state.py | salevajo/indico | 6f9cbabc20d1641caea907099388ae2b04965cf8 | [
"MIT"
] | 3 | 2020-07-20T09:09:44.000Z | 2020-10-19T00:29:49.000Z | """Use enum for resv occurrence state
Revision ID: 7ec3949a21c7
Revises: 579a36843848
Create Date: 2019-01-18 12:13:42.042274
"""
import sqlalchemy as sa
from alembic import op
from indico.core.db.sqlalchemy import PyIntEnum
from indico.modules.rb.models.reservation_occurrences import ReservationOccurrenceState
# revision identifiers, used by Alembic.
revision = '7ec3949a21c7'
down_revision = '579a36843848'
branch_labels = None
depends_on = None
def upgrade():
op.add_column('reservation_occurrences', sa.Column('state', PyIntEnum(ReservationOccurrenceState), nullable=True),
schema='roombooking')
op.execute('''
UPDATE roombooking.reservation_occurrences
SET state = CASE
WHEN is_rejected THEN 4
WHEN is_cancelled THEN 3
ELSE 2
END
''')
op.alter_column('reservation_occurrences', 'state', nullable=False, schema='roombooking')
op.drop_column('reservation_occurrences', 'is_cancelled', schema='roombooking')
op.drop_column('reservation_occurrences', 'is_rejected', schema='roombooking')
def downgrade():
op.add_column('reservation_occurrences', sa.Column('is_rejected', sa.Boolean(), nullable=True),
schema='roombooking')
op.add_column('reservation_occurrences', sa.Column('is_cancelled', sa.Boolean(), nullable=True),
schema='roombooking')
op.execute('''
UPDATE roombooking.reservation_occurrences
SET is_cancelled = (state = 3),
is_rejected = (state = 4)
''')
op.alter_column('reservation_occurrences', 'is_rejected', nullable=False, schema='roombooking')
op.alter_column('reservation_occurrences', 'is_cancelled', nullable=False, schema='roombooking')
op.drop_column('reservation_occurrences', 'state', schema='roombooking')
| 35.921569 | 118 | 0.709607 | 203 | 1,832 | 6.236453 | 0.359606 | 0.208531 | 0.199052 | 0.094787 | 0.545024 | 0.466825 | 0.408373 | 0.338863 | 0.227488 | 0.126382 | 0 | 0.041777 | 0.176856 | 1,832 | 50 | 119 | 36.64 | 0.797745 | 0.088974 | 0 | 0.257143 | 0 | 0 | 0.43923 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.114286 | 0 | 0.171429 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
cfc57d3e1135b16bab03c2bf1964e838746c06d7 | 11,582 | py | Python | doubtlab/reason.py | avvorstenbosch/doubtlab | 5691ceb29a615c998f23f989ccae9dd37872031a | [
"MIT"
] | null | null | null | doubtlab/reason.py | avvorstenbosch/doubtlab | 5691ceb29a615c998f23f989ccae9dd37872031a | [
"MIT"
] | null | null | null | doubtlab/reason.py | avvorstenbosch/doubtlab | 5691ceb29a615c998f23f989ccae9dd37872031a | [
"MIT"
] | null | null | null | import numpy as np
from cleanlab.pruning import get_noise_indices
class ProbaReason:
"""
Assign doubt based on low proba-confidence values from a scikit-learn model.
Arguments:
model: scikit-learn classifier
max_proba: maximum probability threshold for doubt assignment
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import ProbaReason
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1_000)
model.fit(X, y)
doubt = DoubtEnsemble(reason = ProbaReason(model, max_proba=0.55))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model, max_proba=0.55):
self.model = model
self.max_proba = max_proba
def __call__(self, X, y=None):
result = self.model.predict_proba(X).max(axis=1) <= self.max_proba
return result.astype(np.float16)
class RandomReason:
"""
Assign doubt based on a random value.
Arguments:
probability: probability of assigning a doubt
random_seed: seed for random number generator
Usage:
```python
from sklearn.datasets import load_iris
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import RandomReason
X, y = load_iris(return_X_y=True)
doubt = DoubtEnsemble(reason = RandomReason(probability=0.05, random_seed=42))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, probability=0.01, random_seed=42):
self.probability = probability
self.random_seed = random_seed
def __call__(self, X, y=None):
np.random.seed(self.random_seed)
rvals = np.random.random(size=len(X))
return np.where(rvals < self.probability, rvals, 0)
class WrongPredictionReason:
"""
Assign doubt when the model prediction doesn't match the label.
Arguments:
model: scikit-learn classifier
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import WrongPredictionReason
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1_000)
model.fit(X, y)
doubt = DoubtEnsemble(reason = WrongPredictionReason(model=model))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model):
self.model = model
def __call__(self, X, y):
return (self.model.predict(X) != y).astype(np.float16)
class LongConfidenceReason:
"""
Assign doubt when a wrong class gains too much confidence.
Arguments:
model: scikit-learn classifier
threshold: confidence threshold for doubt assignment
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import LongConfidenceReason
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1_000)
model.fit(X, y)
doubt = DoubtEnsemble(reason = LongConfidenceReason(model=model))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model, threshold=0.2):
self.model = model
self.threshold = threshold
def _max_bad_class_confidence(self, X, y):
probas = self.model.predict_proba(X)
values = []
for i, proba in enumerate(probas):
proba_dict = {
self.model.classes_[j]: v for j, v in enumerate(proba) if j != y[i]
}
values.append(max(proba_dict.values()))
return np.array(values)
def __call__(self, X, y):
confidences = self._max_bad_class_confidence(X, y)
return np.where(confidences > self.threshold, confidences, 0)
class MarginConfidenceReason:
"""
    Assign doubt when the difference between the top two most confident classes is too large.
Throws an error when there are only two classes.
Arguments:
model: scikit-learn classifier
threshold: confidence threshold for doubt assignment
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import MarginConfidenceReason
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1_000)
model.fit(X, y)
doubt = DoubtEnsemble(reason = MarginConfidenceReason(model=model))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model, threshold=0.2):
self.model = model
self.threshold = threshold
def _calc_margin(self, probas):
sorted = np.sort(probas, axis=1)
return sorted[:, -1] - sorted[:, -2]
def __call__(self, X, y):
probas = self.model.predict_proba(X)
margin = self._calc_margin(probas)
return np.where(margin > self.threshold, margin, 0)
class ShortConfidenceReason:
"""
Assign doubt when the correct class gains too little confidence.
Arguments:
model: scikit-learn classifier
threshold: confidence threshold for doubt assignment
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import ShortConfidenceReason
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1_000)
model.fit(X, y)
doubt = DoubtEnsemble(reason = ShortConfidenceReason(model=model))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model, threshold=0.2):
self.model = model
self.threshold = threshold
def _correct_class_confidence(self, X, y):
"""
Gives the predicted confidence (or proba) associated
with the correct label `y` from a given model.
"""
probas = self.model.predict_proba(X)
values = []
for i, proba in enumerate(probas):
proba_dict = {self.model.classes_[j]: v for j, v in enumerate(proba)}
values.append(proba_dict[y[i]])
return np.array(values)
def __call__(self, X, y):
confidences = self._correct_class_confidence(X, y)
return np.where(confidences < self.threshold, 1 - confidences, 0)
class DisagreeReason:
"""
Assign doubt when two scikit-learn models disagree on a prediction.
Arguments:
model1: scikit-learn classifier
model2: a different scikit-learn classifier
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import DisagreeReason
X, y = load_iris(return_X_y=True)
model1 = LogisticRegression(max_iter=1_000)
model2 = KNeighborsClassifier()
model1.fit(X, y)
model2.fit(X, y)
doubt = DoubtEnsemble(reason = DisagreeReason(model1, model2))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model1, model2):
self.model1 = model1
self.model2 = model2
def __call__(self, X, y):
result = self.model1.predict(X) != self.model2.predict(X)
return result.astype(np.float16)
class OutlierReason:
"""
Assign doubt when a scikit-learn outlier model detects an outlier.
Arguments:
model: scikit-learn outlier model
Usage:
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import IsolationForest
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import OutlierReason
X, y = load_iris(return_X_y=True)
model = IsolationForest()
model.fit(X)
doubt = DoubtEnsemble(reason = OutlierReason(model))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model):
self.model = model
def __call__(self, X, y):
return (self.model.predict(X) == -1).astype(np.float16)
class AbsoluteDifferenceReason:
"""
Assign doubt when the absolute difference between label and regression is too large.
Arguments:
model: scikit-learn outlier model
threshold: cutoff for doubt assignment
Usage:
```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from doubtlab.ensemble import DoubtEnsemble
from doubtlab.reason import AbsoluteDifferenceReason
X, y = load_diabetes(return_X_y=True)
model = LinearRegression()
model.fit(X, y)
doubt = DoubtEnsemble(reason = AbsoluteDifferenceReason(model, threshold=100))
indices = doubt.get_indices(X, y)
```
"""
def __init__(self, model, threshold):
self.model = model
self.threshold = threshold
def __call__(self, X, y):
difference = np.abs(self.model.predict(X) - y)
return (difference >= self.threshold).astype(np.float16)


class RelativeDifferenceReason:
    """
    Assign doubt when the relative difference between the label and the
    regression prediction is too large.

    Arguments:
        model: scikit-learn regression model
        threshold: cutoff for doubt assignment

    Usage:

    ```python
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import LinearRegression

    from doubtlab.ensemble import DoubtEnsemble
    from doubtlab.reason import RelativeDifferenceReason

    X, y = load_diabetes(return_X_y=True)
    model = LinearRegression()
    model.fit(X, y)

    doubt = DoubtEnsemble(reason=RelativeDifferenceReason(model, threshold=0.5))
    indices = doubt.get_indices(X, y)
    ```
    """

    def __init__(self, model, threshold):
        self.model = model
        self.threshold = threshold

    def __call__(self, X, y):
        # Note: the relative difference is undefined for labels equal to zero.
        difference = np.abs(self.model.predict(X) - y) / y
        return (difference >= self.threshold).astype(np.float16)


class CleanlabReason:
    """
    Assign doubt using the cleanlab heuristic.

    Arguments:
        model: scikit-learn classification model that implements `predict_proba`
        sorted_index_method: method used by cleanlab for sorting indices
        min_doubt: the minimum doubt output value used for sorting by the ensemble

    Usage:

    ```python
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    from doubtlab.ensemble import DoubtEnsemble
    from doubtlab.reason import CleanlabReason

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression()
    model.fit(X, y)

    doubt = DoubtEnsemble(reason=CleanlabReason(model))
    indices = doubt.get_indices(X, y)
    ```
    """

    def __init__(self, model, sorted_index_method="normalized_margin", min_doubt=0.5):
        self.model = model
        self.sorted_index_method = sorted_index_method
        self.min_doubt = min_doubt

    def __call__(self, X, y):
        probas = self.model.predict_proba(X)
        ordered_label_errors = get_noise_indices(y, probas, self.sorted_index_method)
        # Use a float dtype explicitly: `np.zeros_like(y)` would inherit an
        # integer dtype from integer labels and truncate the confidences.
        result = np.zeros(len(y), dtype=np.float16)
        conf_arr = np.linspace(1, self.min_doubt, result.shape[0])
        for idx, conf in zip(ordered_label_errors, conf_arr):
            result[idx] = conf
        return result
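Each reason class above implements the same callable contract: given features `X` and labels `y`, return one doubt score per example, with higher scores meaning more doubt. A minimal plain-Python sketch of the absolute-difference check (plain lists stand in for NumPy arrays here, purely for illustration; the real `AbsoluteDifferenceReason` is defined above):

```python
def absolute_difference_doubt(predictions, labels, threshold):
    """Flag examples where |prediction - label| >= threshold.

    Returns 1.0 (doubt) or 0.0 (no doubt) per example, mirroring the
    float mask that AbsoluteDifferenceReason builds with NumPy.
    """
    return [
        1.0 if abs(pred - label) >= threshold else 0.0
        for pred, label in zip(predictions, labels)
    ]

# Predictions 10.0 and 50.0 against labels 12.0 and 10.0, threshold 5:
doubts = absolute_difference_doubt([10.0, 50.0], [12.0, 10.0], threshold=5)
# doubts == [0.0, 1.0]
```

The ensemble then sorts examples by these scores to surface the labels most worth re-checking.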


# File: api/api/models.py (repo: shmulik323/project-manager-course, MIT license)
from datetime import datetime
from flask_sqlalchemy import SQLAlchemy
from flask_login import UserMixin
from werkzeug.security import generate_password_hash, check_password_hash

db = SQLAlchemy()


class User(db.Model, UserMixin):
    __tablename__ = 'user'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50))
    last = db.Column(db.String(50))
    username = db.Column(db.String(50), unique=True, nullable=False)
    email = db.Column(db.String(150), unique=True, nullable=False)
    image_file = db.Column(
        db.String(150), nullable=False, default="default.jpg")
    password = db.Column(db.String(256))
    admin = db.Column(db.Boolean, default=False)
    premium = db.Column(db.Boolean, default=False)

    def __init__(self, email, password, name, last, username, image_file="default.jpg"):
        self.email = email
        self.password = generate_password_hash(password, method='sha256')
        self.username = username
        self.name = name
        self.last = last
        self.image_file = image_file if image_file else "default.jpg"

    def change(self):
        """Toggle premium status."""
        self.premium = not self.premium

    def promote(self):
        self.admin = True

    @classmethod
    def authenticate(cls, **kwargs):
        email = kwargs.get('email')
        password = kwargs.get('password')
        username = kwargs.get('username')
        # A password plus either an email or a username is required;
        # bailing out early avoids hashing a None password below.
        if not password or (not email and not username):
            return None
        user = cls.query.filter_by(email=email).first()
        if not user:
            user = cls.query.filter_by(username=username).first()
        if not user or not check_password_hash(user.password, password):
            return None
        return user

    def to_dict(self):
        return dict(id=self.id, email=self.email, username=self.username,
                    name=self.name, last=self.last, admin=self.admin,
                    premium=self.premium, image_file=self.image_file)


class Pdf(db.Model):
    __tablename__ = "pdf"

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), unique=True)
    data = db.Column(db.String())
    user_id = db.Column(db.Integer)

    def __init__(self, name, data, user_id):
        self.name = name
        self.data = data
        self.user_id = user_id

    def __repr__(self):
        return super().__repr__()

    def to_dict(self):
        return dict(id=self.id, name=self.name, data=self.data, user_id=self.user_id)


class PremiumUser(User):
    __tablename__ = 'premium'

    credit_card = db.Column(db.String(50))
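The `User` model above delegates password storage to werkzeug's `generate_password_hash` / `check_password_hash`. As a rough stdlib analogue of what such helpers do (a salted, iterated hash plus a constant-time comparison; this sketch does not reproduce werkzeug's actual storage format):

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Salted PBKDF2-SHA256 hash, stored as 'salt$digest' in hex."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + "$" + digest.hex()


def check_password(stored, candidate):
    """Recompute the digest with the stored salt; compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", candidate.encode(), bytes.fromhex(salt_hex), 100_000)
    return hmac.compare_digest(digest.hex(), digest_hex)


stored = hash_password("s3cret")
# A fresh random salt means two hashes of the same password differ,
# but verification still succeeds because the salt travels with the digest.
```

Because only the hash is stored, a leaked `user` table does not directly expose plaintext passwords.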


# File: src/wai/common/meta/_import_code.py (repo: waikato-datamining/wai-common, MIT license)
from typing import Type, Union


def get_import_code(cls: Type, indent: Union[int, str] = "", alias_inner_class: bool = True) -> str:
    """
    Gets the code to use to import a class.

    :param cls: The class.
    :param indent: The indentation of the code.
    :param alias_inner_class: Whether to add code to alias inner classes as outer classes.
    :return: The code.
    """
    # Can't get import code for closure classes
    if "<locals>" in cls.__qualname__:
        raise ValueError(f"Can't get import code for closure class '{cls.__qualname__}'")

    # Get the outer-most class to import
    outer_class = cls.__qualname__[:cls.__qualname__.index(".")] if "." in cls.__qualname__ else cls.__qualname__

    # Format the indentation string
    if isinstance(indent, int):
        indent = " " * indent

    # Get the code for importing the outer-most class
    code = f"{indent}from {cls.__module__} import {outer_class}"

    # If it's an inner class, alias it
    if cls.__name__ != outer_class and alias_inner_class:
        code += f"\n{indent}{cls.__name__} = {cls.__qualname__}"

    return code
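To see the string `get_import_code` builds, its core mechanics can be exercised directly on a standard-library class (`collections.OrderedDict` is used here purely as an illustrative target):

```python
import collections

cls = collections.OrderedDict

# Outer-most importable name: everything before the first "." in __qualname__.
outer = cls.__qualname__.split(".")[0]
code = f"from {cls.__module__} import {outer}"
# code == "from collections import OrderedDict"

# The generated snippet really is importable: executing it binds the class.
namespace = {}
exec(code, namespace)
assert namespace["OrderedDict"] is collections.OrderedDict
```

For an inner class such as `Outer.Inner`, the function additionally emits an alias line `Inner = Outer.Inner`, so the inner name resolves after importing only the outer class.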


# File: python_caltrain/__init__.py (repo: wwong/python-caltrain, MIT license)
from .caltrain import (
    Caltrain,
    Train,
    Trip,
    TransitType,
    Station,
    Stop,
    Direction,
    UnknownStationError,
    UnexpectedGTFSLayoutError,
)

from .__version__ import __version__


# File: tests/unit/pypyr/steps/fileread_test.py (repo: FranklinHarry/pypyr, Apache-2.0 license)
"""fileread.py unit tests."""
from unittest.mock import mock_open, patch

import pytest

from pypyr.context import Context
from pypyr.errors import KeyInContextHasNoValueError, KeyNotInContextError
import pypyr.steps.fileread as fileread


def test_fileread_no_input_raises():
    """No input."""
    context = Context({
        'k1': 'v1'})

    with pytest.raises(KeyNotInContextError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead'] "
                                   "doesn't exist. It must exist for "
                                   "pypyr.steps.fileread.")


def test_fileread_none_raises():
    """Input exists but is None."""
    context = Context({
        'k1': 'v1',
        'fileRead': None})

    with pytest.raises(KeyInContextHasNoValueError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead'] must have a "
                                   "value for pypyr.steps.fileread.")


def test_fileread_none_path_raises():
    """None path raises."""
    context = Context({
        'fileRead': {
            'path': None}})

    with pytest.raises(KeyInContextHasNoValueError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead']['path'] must have a "
                                   "value for pypyr.steps.fileread.")


def test_fileread_empty_path_raises():
    """Empty path raises."""
    context = Context({
        'fileRead': {
            'path': ''}})

    with pytest.raises(KeyInContextHasNoValueError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead']['path'] must have a "
                                   "value for pypyr.steps.fileread.")


def test_fileread_no_key_raises():
    """No key raises."""
    context = Context({
        'fileRead': {
            'path': '/arb'}})

    with pytest.raises(KeyNotInContextError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead']['key'] "
                                   "doesn't exist. It must exist for "
                                   "pypyr.steps.fileread.")


def test_fileread_none_key_raises():
    """None key raises."""
    context = Context({
        'fileRead': {
            'path': '/arb',
            'key': None}})

    with pytest.raises(KeyInContextHasNoValueError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead']['key'] must have a "
                                   "value for pypyr.steps.fileread.")


def test_fileread_empty_key_raises():
    """Empty key raises."""
    context = Context({
        'fileRead': {
            'path': '/arb',
            'key': ''}})

    with pytest.raises(KeyInContextHasNoValueError) as err_info:
        fileread.run_step(context)

    assert str(err_info.value) == ("context['fileRead']['key'] must have a "
                                   "value for pypyr.steps.fileread.")

# endregion validation

# region text mode
def test_fileread_defaults():
    """Read file with minimal defaults."""
    context = Context({
        'fileRead': {
            'path': '/arb',
            'key': 'out'}})

    with patch('pypyr.steps.fileread.open',
               mock_open(read_data='one\ntwo\nthree')) as mocked_open:
        fileread.run_step(context)

    assert context['out'] == 'one\ntwo\nthree'
    mocked_open.assert_called_once_with('/arb', 'r')

# endregion text mode


# region binary mode
def test_fileread_binary_true():
    """Read file with binary true."""
    context = Context({
        'fileRead': {
            'path': '/arb',
            'key': 'out',
            'binary': True}})

    with patch('pypyr.steps.fileread.open',
               mock_open(read_data=b'12345')) as mocked_open:
        fileread.run_step(context)

    assert context['out'] == b'12345'
    mocked_open.assert_called_once_with('/arb', 'rb')


def test_fileread_binary_explicit_false():
    """Read file with binary explicit false."""
    context = Context({
        'fileRead': {
            'path': '/arb',
            'key': 'out',
            'binary': False}})

    with patch('pypyr.steps.fileread.open',
               mock_open(read_data='one\ntwo\nthree')) as mocked_open:
        fileread.run_step(context)

    assert context['out'] == 'one\ntwo\nthree'
    mocked_open.assert_called_once_with('/arb', 'r')

# endregion binary mode

# region substitutions
def test_fileread_substitutions():
    """Read file with substitutions."""
    context = Context({
        'p': '/arb',
        'k': 'out',
        'b': False,
        'fileRead': {
            'path': '{p}',
            'key': '{k}',
            'binary': '{b}'}})

    with patch('pypyr.steps.fileread.open',
               mock_open(read_data='one\ntwo\nthree')) as mocked_open:
        fileread.run_step(context)

    assert context['out'] == 'one\ntwo\nthree'
    assert context == {
        'p': '/arb',
        'k': 'out',
        'b': False,
        'out': 'one\ntwo\nthree',
        'fileRead': {
            'path': '{p}',
            'key': '{k}',
            'binary': '{b}'}}
    mocked_open.assert_called_once_with('/arb', 'r')

# endregion substitutions
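All of the tests above lean on `unittest.mock.mock_open` to fake file I/O. A stripped-down, self-contained version of the same pattern, patching the built-in `open` instead of pypyr's module-level reference:

```python
from unittest.mock import mock_open, patch

# Patch the global open() so no real file is touched; mock_open serves
# the canned read_data instead and records how it was called.
with patch("builtins.open", mock_open(read_data="one\ntwo\nthree")) as mocked:
    with open("/arb", "r") as handle:
        content = handle.read()

mocked.assert_called_once_with("/arb", "r")
# content == "one\ntwo\nthree"
```

The tests patch `'pypyr.steps.fileread.open'` rather than `builtins.open` because `patch` must target the name where it is looked up at call time, i.e. the attribute on the module under test.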
cfff70b07b8f0fa123d4802f2c9c57405025318f | 22,378 | py | Python | doc/userGuide/tutorial/introductory.py | lbl-srg/MPCPy | 3e7bc83b1aa80e16474e77d2574c7bbaba080976 | [
"BSD-3-Clause-LBNL"
] | 96 | 2017-03-31T09:59:44.000Z | 2022-03-23T18:39:37.000Z | doc/userGuide/tutorial/introductory.py | kuzha/MPCPy | 9f78aa68236f87d39a50de54978c5064f9cc13c6 | [
"BSD-3-Clause-LBNL"
] | 150 | 2017-03-03T17:28:34.000Z | 2021-02-24T20:03:24.000Z | doc/userGuide/tutorial/introductory.py | kuzha/MPCPy | 9f78aa68236f87d39a50de54978c5064f9cc13c6 | [
"BSD-3-Clause-LBNL"
] | 32 | 2017-04-24T18:22:40.000Z | 2022-03-29T17:51:20.000Z | # -*- coding: utf-8 -*-
"""
This tutorial will introduce the basic concepts and workflow of mpcpy.
By the end, we will train a simple model based on emulated data, and use
the model to optimize the control signal of the system. All required data
files for this tutorial are located in doc/userGuide/tutorial.
The model is a simple RC model of zone thermal response to ambient temperature
and a singal heat input. It is written in Modelica:
.. code-block:: modelica
model RC "A simple RC network for example purposes"
Modelica.Blocks.Interfaces.RealInput weaTDryBul(unit="K") "Ambient temperature";
Modelica.Blocks.Interfaces.RealInput Qflow(unit="W") "Heat input";
Modelica.Blocks.Interfaces.RealOutput Tzone(unit="K") "Zone temperature";
Modelica.Thermal.HeatTransfer.Components.HeatCapacitor heatCapacitor(C=1e5)
"Thermal capacitance of zone";
Modelica.Thermal.HeatTransfer.Components.ThermalResistor thermalResistor(R=0.01)
"Thermal resistance of zone";
Modelica.Thermal.HeatTransfer.Sources.PrescribedTemperature preTemp;
Modelica.Thermal.HeatTransfer.Sensors.TemperatureSensor senTemp;
Modelica.Thermal.HeatTransfer.Sources.PrescribedHeatFlow preHeat;
equation
connect(senTemp.T, Tzone)
connect(preHeat.Q_flow, Qflow)
connect(heatCapacitor.port, senTemp.port)
connect(heatCapacitor.port, preHeat.port)
connect(preTemp.port, thermalResistor.port_a)
connect(thermalResistor.port_b, heatCapacitor.port)
connect(preTemp.T, weaTDryBul)
end RC;
Variables and Units
-------------------
First, lets get familiar with variables and units, the basic building blocks of MPCPy.
>>> from mpcpy import variables
>>> from mpcpy import units
Static variables contain data that is not a timeseries:
>>> setpoint = variables.Static('setpoint', 20, units.degC)
>>> print(setpoint) # doctest: +NORMALIZE_WHITESPACE
Name: setpoint
Variability: Static
Quantity: Temperature
Display Unit: degC
The unit assigned to the variable is the display unit.
However, each display unit quantity has a base unit that is used to store
the data in memory. This makes it easy to convert between units
when necessary. For example, the degC display unit has a quantity temperature,
which has base unit in Kelvin.
>>> # Get the data in display units
>>> setpoint.display_data()
20.0
>>> # Get the data in base units
>>> setpoint.get_base_data()
293.15
>>> # Convert the display unit to degF
>>> setpoint.set_display_unit(units.degF)
>>> setpoint.display_data() # doctest: +NORMALIZE_WHITESPACE
68.0
Timeseries variables contain data in the form of a ``pandas`` Series with a
datetime index:
>>> # Create pandas Series object
>>> import pandas as pd
>>> data = [0, 5, 10, 15, 20]
>>> index = pd.date_range(start='1/1/2017', periods=len(data), freq='H')
>>> ts = pd.Series(data=data, index=index, name='power_data')
Now we can do the same thing with the timeseries variable as we did with the
static variable:
>>> # Create mpcpy variable
>>> power_data = variables.Timeseries('power_data', ts, units.Btuh)
>>> print(power_data) # doctest: +NORMALIZE_WHITESPACE
Name: power_data
Variability: Timeseries
Quantity: Power
Display Unit: Btuh
>>> # Get the data in display units
>>> power_data.display_data()
2017-01-01 00:00:00+00:00 0.0
2017-01-01 01:00:00+00:00 5.0
2017-01-01 02:00:00+00:00 10.0
2017-01-01 03:00:00+00:00 15.0
2017-01-01 04:00:00+00:00 20.0
Freq: H, Name: power_data, dtype: float64
>>> # Get the data in base units
>>> power_data.get_base_data()
2017-01-01 00:00:00+00:00 0.000000
2017-01-01 01:00:00+00:00 1.465355
2017-01-01 02:00:00+00:00 2.930711
2017-01-01 03:00:00+00:00 4.396066
2017-01-01 04:00:00+00:00 5.861421
Freq: H, Name: power_data, dtype: float64
>>> # Convert the display unit to kW
>>> power_data.set_display_unit(units.kW)
>>> power_data.display_data()
2017-01-01 00:00:00+00:00 0.000000
2017-01-01 01:00:00+00:00 0.001465
2017-01-01 02:00:00+00:00 0.002931
2017-01-01 03:00:00+00:00 0.004396
2017-01-01 04:00:00+00:00 0.005861
Freq: H, Name: power_data, dtype: float64
There is additional functionality with the units that may be useful, such as
setting new data and getting the units. Consult the documentation on these
classes for more information.
Collect model weather and control signal data
---------------------------------------------
Now, we would like to collect the weather data and control signal inputs
for our model. We do this using exodata objects:
>>> from mpcpy import exodata
Let's take our weather data from an EPW file. We instantiate the weather
exodata object by supplying the path to the EPW file:
>>> weather = exodata.WeatherFromEPW('USA_IL_Chicago-OHare.Intl.AP.725300_TMY3.epw')
Note that using the weather exodata object assumes that weather inputs to our
model are named a certain way. Consult the documentation on the weather
exodata class for more information. In this case, the ambient dry bulb
temperature input in our model is named weaTDryBul.
Let's take our control input signal from a CSV file. The CSV file looks like:
::
Time,Qflow_csv
01/01/17 12:00 AM,3000
01/01/17 01:00 AM,3000
01/01/17 02:00 AM,3000
...
01/02/17 10:00 PM,3000
01/02/17 11:00 PM,3000
01/03/17 12:00 AM,3000
We instantiate the control exodata object by supplying the path to the CSV file
as well as a map of the names of the columns to the input of our model.
We also assume that the data in the CSV file is given in the local time of the
weather file, and so we supply this optional parameter, tz_name, upon
instantiation as well. If no time zone is supplied, it is assumed to be UTC.
>>> variable_map = {'Qflow_csv' : ('Qflow', units.W)}
>>> control = exodata.ControlFromCSV('ControlSignal.csv',
... variable_map,
... tz_name = weather.tz_name)
Now we are ready to collect the exogenous data from our data sources for a
given time period.
>>> start_time = '1/1/2017'
>>> final_time = '1/3/2017'
>>> weather.collect_data(start_time, final_time) # doctest: +ELLIPSIS
-etc-
>>> control.collect_data(start_time, final_time)
Use the ``display_data()`` and ``get_base_data()`` functions for the weather
and control objects to get the data in the form of a pandas dataframe. Note
that the data is given in UTC time.
>>> control.display_data() # doctest: +ELLIPSIS
Qflow
Time
2017-01-01 06:00:00+00:00 3000.0
2017-01-01 07:00:00+00:00 3000.0
2017-01-01 08:00:00+00:00 3000.0
-etc-
Simulate as Emulated System
---------------------------
The model has parameters for the resistance and capacitance set in the
modelica code. For the purposes of this tutorial, we will assume that the
model with these parameter values represents the actual system. We now wish to
collect measurements from this 'actual system.' For this, we use the systems
module of mpcpy.
>>> from mpcpy import systems
First, we instantiate our system model by supplying a measurement dictionary,
information about where the model resides, and information about model exodata.
The measurement dictionary holds information about and data from the variables
being measured. We start with defining the variables we are interested in
measuring and their sample rate. In this case, we have two, the output of
the model, called 'Tzone' and the control input called 'Qflow'.
Note that 'heatCapacitor.T' would also be valid instead of 'Tzone'.
>>> measurements = {'Tzone' : {}, 'Qflow' : {}}
>>> measurements['Tzone']['Sample'] = variables.Static('sample_rate_Tzone',
... 3600,
... units.s)
>>> measurements['Qflow']['Sample'] = variables.Static('sample_rate_Qflow',
... 3600,
... units.s)
The model information is given by a tuple containing the path to the
Modelica (.mo) file, the path of the model within the .mo file, and a list of
paths of any required libraries other than the Modelica Standard.
For this example, there are no additional libraries.
>>> moinfo = ('Tutorial.mo', 'Tutorial.RC', {})
Ultimately, the modelica model is compiled into an FMU. If the emulation model
is already an FMU, than an fmupath can be specified instead of the modelica
information tuple. For more information, see the documentation on the systmems
class.
We can now instantiate the system emulation object with our measurement
dictionary, model information, collected exogenous data, and time zone:
>>> emulation = systems.EmulationFromFMU(measurements,
... moinfo = moinfo,
... weather_data = weather.data,
... control_data = control.data,
... tz_name = weather.tz_name)
Finally, we can collect the measurements from our emulation over a specified
time period and display the results as a pandas dataframe. The
``collect_measurements()`` function updates the measurement dictionary with
timeseries data in the ``'Measured'`` field for each variable.
>>> # Collect the data
>>> emulation.collect_measurements('1/1/2017', '1/2/2017') # doctest: +ELLIPSIS
-etc-
>>> # Display the results
>>> emulation.display_measurements('Measured').applymap('{:.2f}'.format) # doctest: +ELLIPSIS
Qflow Tzone
Time
2017-01-01 06:00:00+00:00 3000.00 293.15
2017-01-01 07:00:00+00:00 3000.00 291.01
2017-01-01 08:00:00+00:00 3000.00 291.32
-etc-
Estimate Parameters
-------------------
Now assume that we do not know the parameters of the model. Or, that we have
measurements from a real or emulated system, and would like to estimate
parameters of our model to fit the measurements. For this, we use the models
module from mpcpy.
>>> from mpcpy import models
In this case, we have a Modelica model with two parameters that we would like
to train based on the measured data from our system; the resistance
and capacitance.
We first need to collect some information about our parameters and do so using
a parameters exodata object. The parameter information is stored in a CSV file
that looks like:
::
Name,Free,Value,Minimum,Maximum,Covariance,Unit
heatCapacitor.C,True,40000,1.00E+04,1.00E+06,1000,J/K
thermalResistor.R,True,0.002,0.001,0.1,0.0001,K/W
The name is the name of the parameter in the model. The Free field indicates
if the parameter is free to be changed during the estimation method or not.
The Value is the current value of the parameter. If the parameter is to be
estimated, this would be an initial guess. If the parameter's Free field is
set to False, then the value is set to the parameter upon simulation. The
Minimum and Maximum fields set the minimum and maximum value allowed by the
parameter during estimation. The Covariance field sets the covariance of
the parameter, and is only used for unscented kalman filtering. Finally, the
Unit field specifies the unit of the parameter using the name string of
MPCPy unit classes.
>>> parameters = exodata.ParameterFromCSV('Parameters.csv')
>>> parameters.collect_data()
>>> parameters.display_data() # doctest: +NORMALIZE_WHITESPACE
Covariance Free Maximum Minimum Unit Value
Name
heatCapacitor.C 1000 True 1e+06 10000 J/K 40000
thermalResistor.R 0.0001 True 0.1 0.001 K/W 0.002
Now, we can instantiate the model object by defining the estimation method,
validation method, measurement dictionary, model information, parameter data,
and exogenous data. In this case, we use JModelica optimization to perform
the parameter estimation and will validate the parameter estimation by
calculating the root mean square error (RMSE) between measurements from the
model and emulation.
>>> model = models.Modelica(models.JModelicaParameter,
... models.RMSE,
... emulation.measurements,
... moinfo = moinfo,
... parameter_data = parameters.data,
... weather_data = weather.data,
... control_data = control.data,
... tz_name = weather.tz_name)
Let's simulate the model to see how far off we are with our initial parameter
guesses. The ``simulate()`` function updates the measurement dictionary with
timeseries data in the ``'Simulated'`` field for each variable.
>>> # Simulate the model
>>> model.simulate('1/1/2017', '1/2/2017') # doctest: +ELLIPSIS
-etc-
>>> # Display the results
>>> model.display_measurements('Simulated').applymap('{:.2f}'.format) # doctest: +ELLIPSIS
Qflow Tzone
Time
2017-01-01 06:00:00+00:00 3000.00 293.15
2017-01-01 07:00:00+00:00 3000.00 266.95
2017-01-01 08:00:00+00:00 3000.00 267.44
-etc-
Now, we are ready to estimate the parameters to better fit the emulated
measurements. In addtion to a training period, we must supply a list of
measurement variables for which to minimize the error between the simulated
and measured data. In this case, we only have one, ``'Tzone'``. The
``estimate()`` function updates the Value field for the parameter data in
the model.
>>> model.parameter_estimate('1/1/2017', '1/2/2017', ['Tzone']) # doctest: +ELLIPSIS
-etc-
Let's validate the estimation on the training period. The ``validate()``
method will simulate the model over the specified time period, calculate the
RMSE between the simulated and measured data, and generate a plot in the
working directory that shows the simulated and measured data for each
measurement variable.
>>> # Perform validation
>>> model.validate('1/1/2017', '1/2/2017', 'validate_tra', plot=1) # doctest: +ELLIPSIS
-etc-
>>> # Get RMSE
>>> print("%.3f" % model.RMSE['Tzone'].display_data()) # doctest: +NORMALIZE_WHITESPACE
0.041
Now let's validate on a different period of exogenous data:
>>> # Define validation period
>>> start_time_val = '1/2/2017'
>>> final_time_val = '1/3/2017'
>>> # Collect new measurements
>>> emulation.collect_measurements(start_time_val, final_time_val) # doctest: +ELLIPSIS
-etc-
>>> # Assign new measurements to model
>>> model.measurements = emulation.measurements
>>> # Perform validation
>>> model.validate(start_time_val, final_time_val, 'validate_val', plot=1) # doctest: +ELLIPSIS
-etc-
>>> # Get RMSE
>>> print("%.3f" % model.RMSE['Tzone'].display_data()) # doctest: +NORMALIZE_WHITESPACE
0.047
Finally, let's view the estimated parameter values:
>>> for key in model.parameter_data.keys():
... print(key, "%.2f" % model.parameter_data[key]['Value'].display_data())
('heatCapacitor.C', '119828.30')
('thermalResistor.R', '0.01')
Optimize Control
----------------
We are now ready to optimize control of our system heater using our calibrated
MPC model. Specificlaly, we would like to maintain a comfortable temperature
in our zone with the minimum amount of heater energy. We can do this by using
the optimization module of MPCPy.
>>> from mpcpy import optimization
First, we need to collect some constraint data to add to our optimization
problem. In this case, we will constrain the heating input to between
0 and 4000 W, and the temperature to a comfortable range, between
20 and 25 degC. We collect contraint data from a CSV using a constraint
exodata data object. The constraint CSV looks like:
::
Time,Qflow_min,Qflow_max,T_min,T_max
01/01/17 12:00 AM,0,4000,20,25
01/01/17 01:00 AM,0,4000,20,25
01/01/17 02:00 AM,0,4000,20,25
...
01/02/17 10:00 PM,0,4000,20,25
01/02/17 11:00 PM,0,4000,20,25
01/03/17 12:00 AM,0,4000,20,25
The constraint exodata object is used to determine which column of data matches
with which model variable and whether it is a less-than-or-equal-to (LTE) or
greater-than-or-equal-to (GTE) constraint:
>>> # Define variable map
>>> variable_map = {'Qflow_min' : ('Qflow', 'GTE', units.W),
... 'Qflow_max' : ('Qflow', 'LTE', units.W),
... 'T_min' : ('Tzone', 'GTE', units.degC),
... 'T_max' : ('Tzone', 'LTE', units.degC)}
>>> # Instantiate constraint exodata object
>>> constraints = exodata.ConstraintFromCSV('Constraints.csv',
... variable_map,
... tz_name = weather.tz_name)
>>> # Collect data
>>> constraints.collect_data('1/1/2017', '1/3/2017')
>>> # Get data
>>> constraints.display_data() # doctest: +ELLIPSIS
Qflow_GTE Qflow_LTE Tzone_GTE Tzone_LTE
Time
2017-01-01 06:00:00+00:00 0.0 4000.0 20.0 25.0
2017-01-01 07:00:00+00:00 0.0 4000.0 20.0 25.0
2017-01-01 08:00:00+00:00 0.0 4000.0 20.0 25.0
-etc-
We can now instantiate an optimization object using our calibrated MPC model,
selecting an optimization problem type and solver package, and specifying
which of the variables in the model to treat as the objective variable.
In this case, we choose an energy minimization problem (integral of variable
over time horizon) to be solved using JModelica, and Qflow to be the variable
we wish to minimize the integral of over the time horizon.
>>> opt_problem = optimization.Optimization(model,
... optimization.EnergyMin,
... optimization.JModelica,
... 'Qflow',
... constraint_data = constraints.data)
The information provided is used to automatically generate a .mop (optimization
model file for JModelica) and transfer the optimization problem using JModelica.
Using the ``optimize()`` function optimizes the variables defined in the control
data of the model object and updates their timeseries data with the optimal
solution for the time period specified. Note that other than the constraints,
the exogenous data within the model object is used, and the control interval
is assumed to be the same as the measurement sampling rate of the model. Use
the ``get_optimization_options()`` and ``set_optimization_options()`` to see
and change the options for the optimization solver; for instance number
of control points, maximum iteration number, tolerance, or maximum CPU time.
See the documentation for these functions for more information.
>>> opt_problem.optimize('1/2/2017', '1/3/2017') # doctest: +ELLIPSIS
-etc-
We can get the optimization solver statistics in the form of
(return message, # of iterations, objective value, solution time in seconds):
>>> opt_problem.get_optimization_statistics() # doctest: +ELLIPSIS
('Solve_Succeeded', 12, -etc-)
We can retrieve the optimal control solution and verify that the
constraints were satisfied. The intermediate points are a result of the
direct collocation method used by JModelica.
>>> opt_problem.display_measurements('Simulated').applymap('{:.2f}'.format) # doctest: +ELLIPSIS
Qflow Tzone
Time
2017-01-02 06:00:00+00:00 669.93 298.15
2017-01-02 06:09:18.183693+00:00 1512.95 293.15
2017-01-02 06:38:41.816307+00:00 2599.01 293.15
2017-01-02 07:00:00+00:00 1888.28 293.15
-etc-
Finally, we can simulate the model using the optimized control trajectory.
Note that the ``model.control_data`` dictionary is updated by the
``opt_problem.optimize()`` function.
>>> model.control_data['Qflow'].display_data().loc[pd.to_datetime('1/2/2017 06:00:00'):pd.to_datetime('1/3/2017 06:00:00')].map('{:.2f}'.format) # doctest: +ELLIPSIS
2017-01-02 06:00:00+00:00 669.93
2017-01-02 06:09:18.183693+00:00 1512.95
2017-01-02 06:38:41.816307+00:00 2599.01
2017-01-02 07:00:00+00:00 1888.28
-etc-
>>> model.simulate('1/2/2017', '1/3/2017') # doctest: +ELLIPSIS
-etc-
>>> model.display_measurements('Simulated').applymap('{:.2f}'.format) # doctest: +ELLIPSIS
Qflow Tzone
Time
2017-01-02 06:00:00+00:00 669.93 293.15
2017-01-02 07:00:00+00:00 1888.28 291.41
2017-01-02 08:00:00+00:00 2277.67 293.03
-etc-
Note that there is some mismatch between the simulated model output temperature
and the model output temperature of the raw optimal control solution.
This is due to the interpolation of control input results during simulation
not aligning with the collocation polynomials and timestep determined by the
optimization solver. We can solve the optimization problem again, this
time updating the ``model.control_data`` with a greater time resolution of 1
second. Some mismatch will still occur due to the optimization solution
using collocation being an approximation of the true dynamic model.
>>> opt_problem.optimize('1/2/2017', '1/3/2017', res_control_step=1.0) # doctest: +ELLIPSIS
-etc-
>>> model.control_data['Qflow'].display_data().loc[pd.to_datetime('1/2/2017 06:00:00'):pd.to_datetime('1/3/2017 06:00:00')].map('{:.2f}'.format) # doctest: +ELLIPSIS
2017-01-02 06:00:00+00:00 669.93
2017-01-02 06:00:01+00:00 671.66
2017-01-02 06:00:02+00:00 673.38
-etc-
>>> model.simulate('1/2/2017', '1/3/2017') # doctest: +ELLIPSIS
-etc-
>>> model.display_measurements('Simulated').applymap('{:.2f}'.format) # doctest: +ELLIPSIS
Qflow Tzone
Time
2017-01-02 06:00:00+00:00 669.93 293.15
2017-01-02 07:00:00+00:00 1888.28 292.67
2017-01-02 08:00:00+00:00 2277.67 293.13
-etc-
"""
if __name__ == "__main__":
import doctest
doctest.ELLIPSIS_MARKER = '-etc-'
(n_fails, n_tests) = doctest.testmod()
if n_fails:
        print('\nTutorial finished with {0} fails.'.format(n_fails))
else:
print('\nTutorial finished OK.')
==== codigo_das_aulas/aula_05/aula_05_01.py | VeirichR/curso-python-selenium | CC0-1.0 ====
from selenium.webdriver import Firefox
url = 'http://selenium.dunossauro.live/aula_05_a.html'
firefox = Firefox()
firefox.get(url)
div_py = firefox.find_element_by_id('python')   # located here but not used below
div_hk = firefox.find_element_by_id('haskell')
print(div_hk.text)
firefox.quit()
==== case_05.py | AlexRogalskiy/Python | MIT ====
import pandas as pd
print(pd.__version__)
#Load data
#------------------------------------------
df = pd.read_csv("Data.csv")
df.to_csv("Data_copy.csv")
df = pd.read_excel("Data.xlsx", "Sheet1")
df.to_excel("Data_copy.xlsx", sheet_name="Sheet1")
#Dataframe preview
#------------------------------------------
df.head(5)
df.tail(5)
df.columns  # attribute, not callable
#Rename columns
#------------------------------------------
df2 = df.rename(columns={'ID':'EMPID'})
df2 = df.rename(columns={'ID':'EMPID'}, inplace=True)
#Select columns/rows
#------------------------------------------
df[['ID', "Gender"]]
df[(df['Sales'] > 10) & (df['BMI'] > 1000)]  # parentheses required: & binds tighter than >
#Handle missing values
#------------------------------------------
df.dropna()
df.fillna(value=5)
mean = df['Sales'].mean()
df['Sales'].fillna(mean)
#Create column
#------------------------------------------
df['Sales_Rate'] = df['Sales'] * 40
#Aggregate data
#------------------------------------------
df.groupby('Gender').sum()
pd.pivot_table(df, values='Sales', index=['Income', 'Product'], columns=['Gender'])
pd.pivot_table(df, values='Sales', index=['Income', 'Product'], columns=['Gender'], aggfunc=len)
pd.crosstab(df['Gender'], df['Income'])
#Merge dataframes (df1, df2: two dataframes sharing an 'ID' column)
#------------------------------------------
pd.concat([df1, df2])
pd.merge(df1, df2, on='ID', how='inner')
#Apply function
#------------------------------------------
df['Income'].map(lambda x: 10 + x)
df[['Sales','Income']].apply(sum)
func = lambda x: str(x) + ''   # str() cast so applymap also works on numeric cells
df.applymap(func)
#Identify unique values
#------------------------------------------
df['ID'].unique()
#Basic statistics
#------------------------------------------
df.describe()
df.cov()
df.corr()
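Row selection with multiple conditions (as in the select-rows line above) is a common trip-up: `&` binds tighter than `>`, so each comparison needs its own parentheses. A minimal self-contained check; the column values here are invented for the example:

```python
import pandas as pd

# Tiny frame standing in for Data.csv; 'Sales' and 'BMI' mirror the cheatsheet.
df = pd.DataFrame({'Sales': [5, 20, 30], 'BMI': [900, 1500, 800]})

# Parentheses are mandatory: without them Python parses 10 & df['BMI'] first.
mask = (df['Sales'] > 10) & (df['BMI'] > 1000)
print(df[mask])  # only the row with Sales=20, BMI=1500
```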
==== mayaside/ChangeRenderSetting.py | Kususumu/pythonServerWorkplace | MIT ====
#ChangeRenderSetting.py
##This only applies to the Maya software renderer, not to Arnold
#Three main Node of Maya Render:
# ->defaultRenderGlobals, defaultRenderQuality and defaultResolution
# ->those are separate nodes in maya
import maya.cmds as cmds

#Function : getRenderGlobals()
#Usage : get the Value of Render Globals and print it
def getRenderGlobals():
    render_glob = "defaultRenderGlobals"
    list_Attr = cmds.listAttr(render_glob, r=True, s=True)
    #loop the list
    print('defaultRenderSetting As follows :')
    for attr in list_Attr:
        get_attr_name = "%s.%s" % (render_glob, attr)
        print("setAttr %s %s" % (get_attr_name, cmds.getAttr(get_attr_name)))

#Function : getRenderResolution()
#Usage : get the Value of Render Resolution and print it
def getRenderResolution():
    resolu_list = "defaultResolution"
    list_Attr = cmds.listAttr(resolu_list, r=True, s=True)
    print('defaultResolution As follows :')
    for attr in list_Attr:
        get_attr_name = "%s.%s" % (resolu_list, attr)
        print("setAttr %s %s" % (get_attr_name, cmds.getAttr(get_attr_name)))

#defaultRenderGlobals.startFrame = 2.0
#Function : startEndFrame
#Usage : to change the Global values (startFrame, endFrame) of the Render
#use to set the render startFrame, endFrame, byFrame
#Example : startEndFrame(3.0, 7.0)
def startEndFrame(startTime, endTime):
    cmds.setAttr("defaultRenderGlobals.startFrame", startTime)
    cmds.setAttr("defaultRenderGlobals.endFrame", endTime)

#Function : setWidthAndHeight
#Usage : to change the Resolution values (width, height) of the Render
#Example : setWidthAndHeight(960, 540)
def setWidthAndHeight(width, height):
    cmds.setAttr("defaultResolution.width", width)
    cmds.setAttr("defaultResolution.height", height)
==== hxlm/ontologia/python/systema.py | EticaAI/HXL-Data-Science-file-formats | Unlicense ====
"""hxlm.ontologia.python.systema
This module, like hxlm.ontologia.python.commune, contains generic data classes
when implementing HXLm in python. But one main difference is that on this
module... the initial author have no idea good naming in Latin! But note that
most of what is here was created on last 100 years, so we may need to keep
as it is.
Trivia:
- "systēma"
- https://en.wiktionary.org/wiki/systema#Latin
- Latin, Etymology: From Ancient Greek σύστημα (sústēma,
“organised whole, body”), from σύν (sún, “with, together”) + ἵστημι
(hístēmi, “I stand”).
- Noun, systēma n (genitive systēmatis); third declension,
- 1. system
- 2. harmony
Author: 2021, Emerson Rocha (Etica.AI) <rocha@ieee.org>
License: Public Domain / BSD Zero Clause License
SPDX-License-Identifier: Unlicense OR 0BSD
"""
from dataclasses import InitVar, dataclass
# from typing import NamedTuple, TypedDict
from enum import Enum
from typing import Any, Union
__all__ = ['EntryPointType', 'FileType', 'RemoteType', 'ResourceWrapper',
'URNType']
class EntryPointType(Enum):
"""Typehints for entry points to resources (the 'pointer', not content)"""
# TODO: webdav?
# https://tools.ietf.org/html/rfc1738 (File protocol)
FTP = 'ftp'
"""FTP/FTPS protocol https://tools.ietf.org/html/rfc959"""
GIT = 'git://'
"""Git protocol (abstracton over SSH)
See:
- https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols
"""
HTTP = "http"
"""Generic HTTP/HTTPS entrypoint, https://tools.ietf.org/html/rfc2616"""
# https://english.stackexchange.com/questions/113606
# /difference-between-folder-and-directory
LOCAL_DIR = "file://localhost/dir/"
"""Local directory"""
LOCAL_FILE = "file://localhost/file"
"""Local file"""
NETWORK_DIR = "file://remotehost/dir/"
"""Directory acessible via access to an non-localhost hostname"""
NETWORK_FILE = "file://remotehost/file"
"""File acessible via access to an non-localhost hostname"""
PYDICT = "PyDict"
"""Entrypoint already is Python Dict object"""
PYLIST = "PyList"
"""Entrypoint already is Python List object"""
# Note: hxlm.core, at least for entrypoint type, mostly use python list
# and in some cases dict. So at the moment we will not implement
# other internal types
SSH = 'ssh://'
"""Secure Shell (SSH), https://tools.ietf.org/html/rfc4253"""
# https://docs.python.org/3/library/asyncio-stream.html
STREAM = "STREAM"
"""Data stream"""
STRING = "STRING"
    """Generic raw string ready to be immediately parsed (see STREAM)"""
UNKNOW = "?"
"""Unknow entrypoint"""
URN = "URN"
"""Uniform Resource Name, see https://tools.ietf.org/html/rfc8141"""
@dataclass(repr=False)
class Factum:
"""Encapsulate contextual messages for smarter L10N output (S-expressions!)
    The ideal usage of Factum is that some external function converts terms
    like 'vkg.attr.descriptionem' into something meaningful to the user.

    Since Factum can also be used when debugging the program (or when the
    Vocabulary Knowledge Graph may not be ready), it is possible to do a raw
    print() of its contents. Not ideal, but a fallback.
>>> Factum("Testing")
(vkg.attr.factum (vkg.attr.descriptionem "Testing"))
>>> Factum("Testing", linguam="ENG")
(vkg.attr.factum (vkg.attr.descriptionem (ENG "Testing")))
>>> Factum("Testing", datum=[1, 2])
(vkg.attr.factum (vkg.attr.descriptionem "Testing")(vkg.attr.datum "[1, 2]"))
"""
descriptionem: str
    """Textual information about the fact. Strongly recommended not to omit"""
fontem: Any = None
"""Source of the message to be presented to end user. Can be ommited.
Try keep it short. Must be something that can be converted to string.
"""
datum: Any = None
"""Extra context about the message. Can be ommited."""
linguam: str = None
"""When descriptionem is an natural language, this can explicit this"""
# TODO: implement type of Factum (like if is error, or informative message)
def __repr__(self):
        """Export a string representation without user translation of terms
Please consider using helpers instead of export objects directly.
"""
if self.linguam is None:
desc = '"' + self.descriptionem + '"'
else:
desc = '(' + self.linguam + ' "' + self.descriptionem + '")'
resultatum = "(vkg.attr.factum "
resultatum += '(vkg.attr.descriptionem ' + desc + ')'
if self.fontem is not None:
resultatum += '(vkg.attr.fontem "' + str(self.fontem) + '")'
if self.datum is not None:
resultatum += '(vkg.attr.datum "' + str(self.datum) + '")'
resultatum += ")"
return resultatum
class FileType(Enum):
"""File formats
See:
- https://en.wikipedia.org/wiki/File_format
- https://en.wikipedia.org/wiki/Container_format_(computing)
- https://en.wikipedia.org/wiki/Delimiter#Delimiter_collision
"""
CSV = '.csv?'
"""Generic CSV (allows non-strict), https://tools.ietf.org/html/rfc4180
Can be used when the content is likely to be CSV, but the file was not
processed yet
"""
# TODO: zip
# TODO: gz
CSV_RFC4180 = '.csv'
"""An CSV strict compliant with https://tools.ietf.org/html/rfc4180"""
JSON = '.json'
"""JSON JavaScript Object Notation, https://tools.ietf.org/html/rfc8259"""
JSON5 = '.json5'
""" JSON5 Data Interchange Format (JSON5), https://json5.org/"""
TSV = '.tsv' # .tsv, .tab
"""Tab-separated values, a more strict CSV type (No RFC for this one)
See:
- https://en.wikipedia.org/wiki/Tab-separated_values
"""
YAML = '.yml' # .yml, .yaml
"""An YAML 'YAML Ain't Markup Language' container, https://yaml.org/"""
# Class definition: Almost the same
@dataclass
class L10NContext:
"""Localization Context for current user, see hxlm.core.localization
    This dataclass is mostly a wrapper around the context of the current user
    (so it will try to localize/translate based on the user's known languages),
    but it can also be used when exporting or explicitly requesting a different
    localization.
"""
available: list
"""Available languages (lid) on loaded VKG"""
user: list
    """Ordered known languages of the user"""
def about(self, key: str = None):
"""Export values"""
about = {
'available': self.available,
'user': self.user,
}
if key:
if key in about:
return about[key]
return None
return about
class RemoteType(Enum):
"""Typehints for specialization of entrypoints"""
# TODO: S3
# TODO: Google Spreadsheet
# TODO: CKAN
# TODO: HXL-Proxy
# TODO: FTP strict / FTPS
HTTP = "http://"
"""Generic HTTP without HTTPS"""
HTTPS = "https://"
"""Generic HTTPS"""
UNKNOW = "?"
"""Unknow entrypoint"""
# Class definition: Almost the same
@dataclass(init=True, eq=True)
class ResourceWrapper:
"""An Resource Wrapper"""
content: Union[dict, list, str] = None
    """If the entrypoint is not already a raw string/object, the content"""
entrypoint: InitVar[Any] = None
entrypoint_t: InitVar[EntryPointType] = None
log: InitVar[list] = []
"""Log of messages. Can be used when failed = True or for verbose output"""
failed: bool = False
    """True when an attempt to load this resource failed"""
remote_t: InitVar[RemoteType] = None
# def about(self, key: str = None):
# """Export values"""
# about = {
# 'failed': self.failed,
# 'log': self.log,
# 'entrypoint': self.entrypoint,
# 'entrypoint_t': self.entrypoint_t,
# 'remote_t': self.remote_t,
# 'content': self.content,
# }
# if key:
# if key in about:
# return about[key]
# return None
# return about
class URNType(Enum):
"""Uniform Resource Name, see https://tools.ietf.org/html/rfc8141
See also hxlm/core/htype/urn.py (not fully implemented yet with this)
"""
DATA = "urn:data"
"""URN Data
See:
- https://github.com/EticaAI/HXL-Data-Science-file-formats/tree/main
/urn-data-specification
"""
HDP = "urn:hdp"
"""HDP Declarative Programming URN"""
IETF = "urn:ietf"
"""IETF URN, see https://tools.ietf.org/html/rfc2648"""
    ISO = "urn:iso"
"""..., see https://tools.ietf.org/html/rfc5141"""
UNKNOW = "?"
"""Unknow URN type"""
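The small accessor pattern used by ``L10NContext.about()`` (a dict of fields, a single field by key, ``None`` for unknown keys) can be sketched standalone; the ``L10N`` class below is a hypothetical miniature, not part of this module:

```python
from dataclasses import dataclass

@dataclass
class L10N:
    """Hypothetical miniature of L10NContext's about() accessor."""
    available: list
    user: list

    def about(self, key=None):
        info = {'available': self.available, 'user': self.user}
        if key:
            return info.get(key)  # None for unknown keys, like the original
        return info

ctx = L10N(available=['ENG', 'POR'], user=['POR'])
print(ctx.about('user'))     # ['POR']
print(ctx.about('missing'))  # None
```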
==== acton_test_app.py | akshaybkiitb/HW_acton_spec | BSD-3-Clause ====
from __future__ import division, print_function
from ScopeFoundry import BaseMicroscopeApp
#from ScopeFoundry.helper_funcs import sibling_path, load_qt_ui_file
import logging
logging.basicConfig(level=logging.DEBUG)
class SpecTestApp(BaseMicroscopeApp):
name = 'spec_test_app'
def setup(self):
# from ScopeFoundryHW.andor_camera import AndorCCDHW
# self.add_hardware(AndorCCDHW(self))
from ScopeFoundryHW.acton_spec import ActonSpectrometerHW
self.add_hardware(ActonSpectrometerHW)
# from ScopeFoundryHW.andor_camera import AndorCCDReadoutMeasure
# self.add_measurement(AndorCCDReadoutMeasure)
#
if __name__ == '__main__':
import sys
app = SpecTestApp(sys.argv)
    sys.exit(app.exec_())

==== Problems/0005. Longest Palindromic Substring/SolutionPython3.py | SpacelandingCat/leetcode | MIT ====
class Solution:
def longestPalindrome(self, s: str) -> str:
result = ''
pal_s = set(s)
if len(pal_s) == 1:
return s
        for ind_c in range(len(s)):
            # even-length palindromes: expand around the gap between ind_c and ind_c + 1
            pal = ''
            ind_l = ind_c
            ind_r = ind_c + 1
while ind_l > -1 and ind_r < len(s):
if s[ind_l] == s[ind_r]:
pal = s[ind_l] + pal + s[ind_r]
ind_l -= 1
ind_r += 1
else:
break
if len(result) < len(pal):
result = pal
            # odd-length palindromes: expand around the single center ind_c
            pal = s[ind_c]
            ind_l = ind_c - 1
            ind_r = ind_c + 1
while ind_l > -1 and ind_r < len(s):
if s[ind_l] == s[ind_r]:
pal = s[ind_l] + pal + s[ind_r]
ind_l -= 1
ind_r += 1
else:
break
if len(result) < len(pal):
result = pal
        return result
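The expand-around-center approach above can be sanity-checked with a compact standalone sketch (same idea, condensed into one helper):

```python
def longest_palindrome(s: str) -> str:
    """Expand around every center (single char and char gap) and keep the best."""
    best = s[:1]
    for center in range(len(s)):
        # (center, center) covers odd lengths; (center, center + 1) even lengths
        for left, right in ((center, center), (center, center + 1)):
            while left >= 0 and right < len(s) and s[left] == s[right]:
                left -= 1
                right += 1
            candidate = s[left + 1:right]
            if len(candidate) > len(best):
                best = candidate
    return best

print(longest_palindrome("cbbd"))  # bb
```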
==== python/fib.py | Cinia/fibonacci | MIT ====
import sys
def fib(i):
if i < 2:
return 1
return fib(i-1) + fib(i-2)
if __name__ == "__main__":
print(fib(int(sys.argv[1])))
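The naive recursion above recomputes the same values exponentially often; a memoized sketch with the same indexing convention (fib(0) == fib(1) == 1) runs in linear time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(i: int) -> int:
    # Same base case as fib() above: the first two terms are both 1.
    if i < 2:
        return 1
    return fib_memo(i - 1) + fib_memo(i - 2)

print(fib_memo(10))  # 89
```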
==== tests/test_soda.py | Adam-Lechnos/SocrataPython | MIT ====
from sodapy import Socrata
from sodapy.constants import DEFAULT_API_PREFIX, OLD_API_PREFIX
import requests
import requests_mock
import os.path
import inspect
import json
PREFIX = "https://"
DOMAIN = "fakedomain.com"
DATASET_IDENTIFIER = "songs"
APPTOKEN = "FakeAppToken"
USERNAME = "fakeuser"
PASSWORD = "fakepassword"
TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.abspath(inspect.getfile(
inspect.currentframe()))), "test_data")
def test_client():
client = Socrata(DOMAIN, APPTOKEN)
assert isinstance(client, Socrata)
client.close()
def test_get():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, session_adapter=mock_adapter)
response_data = "get_songs.txt"
setup_mock(adapter, "GET", response_data, 200)
response = client.get(DATASET_IDENTIFIER)
assert isinstance(response, list)
assert len(response) == 10
client.close()
def test_get_unicode():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, session_adapter=mock_adapter)
response_data = "get_songs_unicode.txt"
setup_mock(adapter, "GET", response_data, 200)
response = client.get(DATASET_IDENTIFIER)
assert isinstance(response, list)
assert len(response) == 10
client.close()
def test_get_metadata_and_attachments():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, session_adapter=mock_adapter)
response_data = "get_song_metadata.txt"
setup_old_api_mock(adapter, "GET", response_data, 200)
response = client.get_metadata(DATASET_IDENTIFIER)
assert isinstance(response, dict)
assert "newBackend" in response
assert "attachments" in response["metadata"]
response = client.download_attachments(DATASET_IDENTIFIER)
assert isinstance(response, list)
assert len(response) == 0
client.close()
def test_update_metadata():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, session_adapter=mock_adapter)
response_data = "update_song_metadata.txt"
setup_old_api_mock(adapter, "PUT", response_data, 200)
data = {"category": "Education", "attributionLink": "https://testing.updates"}
response = client.update_metadata(DATASET_IDENTIFIER, data)
assert isinstance(response, dict)
assert response.get("category") == data["category"]
assert response.get("attributionLink") == data["attributionLink"]
client.close()
def test_upsert_exception():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, session_adapter=mock_adapter)
response_data = "403_response_json.txt"
setup_mock(adapter, "POST", response_data, 403, reason="Forbidden")
data = [{"theme": "Surfing", "artist": "Wavves",
"title": "King of the Beach", "year": "2010"}]
try:
client.upsert(DATASET_IDENTIFIER, data)
except Exception as e:
assert isinstance(e, requests.exceptions.HTTPError)
else:
raise AssertionError("No exception raised for bad request.")
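The try/except/else idiom in test_upsert_exception (fail the test when no exception is raised at all) is worth isolating; a standalone sketch of the same pattern, with a hypothetical helper name:

```python
def expect_error(func, exc_type):
    """Run func, assert it raises exc_type, and flag the no-exception case."""
    try:
        func()
    except Exception as e:
        assert isinstance(e, exc_type), "wrong exception: %r" % e
    else:
        # The else branch only runs when func() returned normally.
        raise AssertionError("No exception raised for bad request.")

expect_error(lambda: int("not a number"), ValueError)
print("ok")
```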
def test_upsert():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "upsert_songs.txt"
data = [{"theme": "Surfing", "artist": "Wavves",
"title": "King of the Beach", "year": "2010"}]
setup_mock(adapter, "POST", response_data, 200)
response = client.upsert(DATASET_IDENTIFIER, data)
assert isinstance(response, dict)
assert response.get("Rows Created") == 1
client.close()
def test_replace():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "replace_songs.txt"
data = [
{"theme": "Surfing", "artist": "Wavves", "title": "King of the Beach",
"year": "2010"},
{"theme": "History", "artist": "Best Friends Forever",
"title": "Abe Lincoln", "year": "2008"},
]
setup_mock(adapter, "PUT", response_data, 200)
response = client.replace(DATASET_IDENTIFIER, data)
assert isinstance(response, dict)
assert response.get("Rows Created") == 2
client.close()
def test_delete():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
uri = "{0}{1}{2}/{3}.json".format(PREFIX, DOMAIN, OLD_API_PREFIX, DATASET_IDENTIFIER)
adapter.register_uri("DELETE", uri, status_code=200)
response = client.delete(DATASET_IDENTIFIER)
assert response.status_code == 200
try:
client.delete("foobar")
except Exception as e:
assert isinstance(e, requests_mock.exceptions.NoMockAddress)
finally:
client.close()
def test_create():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "create_foobar.txt"
setup_mock(adapter, "POST", response_data, 200, dataset_identifier=None)
columns = [
{"fieldName": "foo", "name": "Foo", "dataTypeName": "text"},
{"fieldName": "bar", "name": "Bar", "dataTypeName": "number"}
]
tags = ["foo", "bar"]
response = client.create("Foo Bar", description="test dataset",
columns=columns, tags=tags, row_identifier="bar")
request = adapter.request_history[0]
request_payload = json.loads(request.text) # can't figure out how to use .json
# Test request payload
for dataset_key in ["name", "description", "columns", "tags"]:
assert dataset_key in request_payload
for column_key in ["fieldName", "name", "dataTypeName"]:
assert column_key in request_payload["columns"][0]
# Test response
assert isinstance(response, dict)
assert len(response.get("id")) == 9
client.close()
def test_set_permission():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "empty.txt"
setup_old_api_mock(adapter, "PUT", response_data, 200)
# Test response
response = client.set_permission(DATASET_IDENTIFIER, "public")
assert response.status_code == 200
# Test request
request = adapter.request_history[0]
query_string = request.url.split("?")[-1]
params = query_string.split("&")
assert len(params) == 2
assert "method=setPermission" in params
assert "value=public.read" in params
client.close()
def test_publish():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "create_foobar.txt"
setup_publish_mock(adapter, "POST", response_data, 200)
response = client.publish(DATASET_IDENTIFIER)
assert isinstance(response, dict)
assert len(response.get("id")) == 9
client.close()
def test_import_non_data_file():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "successblobres.txt"
nondatasetfile_path = 'tests/test_data/nondatasetfile.zip'
setup_import_non_data_file(adapter, "POST", response_data, 200)
with open(nondatasetfile_path, 'rb') as f:
files = (
{'file': ("nondatasetfile.zip", f)}
)
response = client.create_non_data_file({}, files)
assert isinstance(response, dict)
assert response.get("blobFileSize") == 496
client.close()
def test_replace_non_data_file():
mock_adapter = {}
mock_adapter["prefix"] = PREFIX
adapter = requests_mock.Adapter()
mock_adapter["adapter"] = adapter
client = Socrata(DOMAIN, APPTOKEN, username=USERNAME, password=PASSWORD,
session_adapter=mock_adapter)
response_data = "successblobres.txt"
nondatasetfile_path = 'tests/test_data/nondatasetfile.zip'
setup_replace_non_data_file(adapter, "POST", response_data, 200)
with open(nondatasetfile_path, 'rb') as fin:
file = (
{'file': ("nondatasetfile.zip", fin)}
)
response = client.replace_non_data_file(DATASET_IDENTIFIER, {}, file)
assert isinstance(response, dict)
assert response.get("blobFileSize") == 496
client.close()
def setup_publish_mock(adapter, method, response, response_code, reason="OK",
                       dataset_identifier=DATASET_IDENTIFIER, content_type="json"):
    path = os.path.join(TEST_DATA_PATH, response)
    with open(path, "r") as response_body:
        body = json.load(response_body)
    uri = "{0}{1}{2}/{3}/publication.{4}".format(PREFIX, DOMAIN, OLD_API_PREFIX,
                                                 dataset_identifier, content_type)
    headers = {
        "content-type": "application/json; charset=utf-8"
    }
    adapter.register_uri(method, uri, status_code=response_code, json=body, reason=reason,
                         headers=headers)
def setup_import_non_data_file(adapter, method, response, response_code, reason="OK",
                               dataset_identifier=DATASET_IDENTIFIER, content_type="json"):
    path = os.path.join(TEST_DATA_PATH, response)
    with open(path, "r") as response_body:
        body = json.load(response_body)
    uri = "{0}{1}/api/imports2/?method=blob".format(PREFIX, DOMAIN)
    headers = {
        "content-type": "application/json; charset=utf-8"
    }
    adapter.register_uri(method, uri, status_code=response_code, json=body, reason=reason,
                         headers=headers)
def setup_replace_non_data_file(adapter, method, response, response_code, reason="OK",
                                dataset_identifier=DATASET_IDENTIFIER, content_type="json"):
    path = os.path.join(TEST_DATA_PATH, response)
    with open(path, "r") as response_body:
        body = json.load(response_body)
    uri = "{0}{1}{2}/{3}.{4}?method=replaceBlob&id={3}".format(PREFIX, DOMAIN, OLD_API_PREFIX,
                                                              dataset_identifier, "txt")
    headers = {
        "content-type": "text/plain; charset=utf-8"
    }
    adapter.register_uri(method, uri, status_code=response_code, json=body, reason=reason,
                         headers=headers)
def setup_old_api_mock(adapter, method, response, response_code, reason="OK",
                       dataset_identifier=DATASET_IDENTIFIER, content_type="json"):
    path = os.path.join(TEST_DATA_PATH, response)
    with open(path, "r") as response_body:
        try:
            body = json.load(response_body)
        except ValueError:
            body = None
    uri = "{0}{1}{2}/{3}.{4}".format(PREFIX, DOMAIN, OLD_API_PREFIX, dataset_identifier,
                                     content_type)
    headers = {
        "content-type": "application/json; charset=utf-8"
    }
    adapter.register_uri(method, uri, status_code=response_code, json=body, reason=reason,
                         headers=headers)
def setup_mock(adapter, method, response, response_code, reason="OK",
               dataset_identifier=DATASET_IDENTIFIER, content_type="json"):
    path = os.path.join(TEST_DATA_PATH, response)
    with open(path, "r") as response_body:
        body = json.load(response_body)
    if dataset_identifier is None:  # for create endpoint
        uri = "{0}{1}{2}.{3}".format(PREFIX, DOMAIN, OLD_API_PREFIX, "json")
    else:  # most cases
        uri = "{0}{1}{2}{3}.{4}".format(PREFIX, DOMAIN, DEFAULT_API_PREFIX, dataset_identifier,
                                        content_type)
    headers = {
        "content-type": "application/json; charset=utf-8"
    }
    adapter.register_uri(method, uri, status_code=response_code, json=body, reason=reason,
                         headers=headers)
| 32.538084 | 95 | 0.659896 | 1,511 | 13,243 | 5.577101 | 0.130377 | 0.101816 | 0.083304 | 0.067877 | 0.737866 | 0.70642 | 0.699775 | 0.679601 | 0.643289 | 0.626557 | 0 | 0.011701 | 0.219135 | 13,243 | 406 | 96 | 32.618227 | 0.803211 | 0.009514 | 0 | 0.575758 | 0 | 0 | 0.129072 | 0.019757 | 0 | 0 | 0 | 0 | 0.117845 | 1 | 0.063973 | false | 0.030303 | 0.037037 | 0 | 0.10101 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5c9a4d3b8666bef59e42b987ae0767e462ac1a09 | 151 | py | Python | COJ/1049Sum.py | Loptt/python-programs | 798985a540e1b4de0d416867b5a4c6f6d75771a0 | [
"MIT"
] | null | null | null | COJ/1049Sum.py | Loptt/python-programs | 798985a540e1b4de0d416867b5a4c6f6d75771a0 | [
"MIT"
] | null | null | null | COJ/1049Sum.py | Loptt/python-programs | 798985a540e1b4de0d416867b5a4c6f6d75771a0 | [
"MIT"
] | null | null | null | num = raw_input()
num = int(num)
suma = 0
if num >= 0:
    for x in range(0, num + 1):
        suma += x
else:
    for x in range(num, 2):
        suma += x
print suma
5c9af6716995ec97062daa8eaf37fe3a39fc3c3f | 909 | py | Python | portcast/schemas.py | FyonnOh/PortCastAssignment | 23cdc1ec22990a919ebd7131c2ddc07038c32d14 | [
"MIT"
] | null | null | null | portcast/schemas.py | FyonnOh/PortCastAssignment | 23cdc1ec22990a919ebd7131c2ddc07038c32d14 | [
"MIT"
] | null | null | null | portcast/schemas.py | FyonnOh/PortCastAssignment | 23cdc1ec22990a919ebd7131c2ddc07038c32d14 | [
"MIT"
] | 1 | 2022-01-11T07:56:21.000Z | 2022-01-11T07:56:21.000Z | from typing import List, Optional
from pydantic import BaseModel
class TransshipmentBase(BaseModel):
    is_loaded: bool
    is_discharged: bool
    discharge_date: str
    loaded_date: str
    discharge_location: str
    loaded_location: str
    vessel: str


class TransshipmentCreate(TransshipmentBase):
    pass


class Transshipment(TransshipmentBase):
    id: int
    container_id: str
    is_loaded: bool
    is_discharged: bool
    discharge_date: str
    loaded_date: str
    discharge_location: str
    loaded_location: str
    vessel: str

    class Config:
        orm_mode = True


class ContainerBase(BaseModel):
    id: str
    final_pod: str
    final_pod_eta: str


class ContainerCreate(ContainerBase):
    pass


class Container(ContainerBase):
    id: str
    final_pod: str
    final_pod_eta: str
    transshipments: List[Transshipment] = []

    class Config:
        orm_mode = True
| 17.150943 | 45 | 0.70187 | 105 | 909 | 5.87619 | 0.32381 | 0.045381 | 0.071313 | 0.045381 | 0.510535 | 0.447326 | 0.447326 | 0.447326 | 0.447326 | 0.350081 | 0 | 0 | 0.242024 | 909 | 52 | 46 | 17.480769 | 0.895501 | 0 | 0 | 0.702703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.054054 | 0.054054 | 0 | 0.891892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
5cb6b53628daa912f11e9c3ebd832c8dd8740466 | 271 | py | Python | temas/tema2/codigo/t2e07.correos.py | GabJL/FP2021 | 9c2c80c3bd0b7e112f66475c48ecdcf20b611338 | [
"MIT"
] | 1 | 2021-11-29T12:12:48.000Z | 2021-11-29T12:12:48.000Z | temas/tema2/codigo/t2e07.correos.py | GabJL/FP2021 | 9c2c80c3bd0b7e112f66475c48ecdcf20b611338 | [
"MIT"
] | null | null | null | temas/tema2/codigo/t2e07.correos.py | GabJL/FP2021 | 9c2c80c3bd0b7e112f66475c48ecdcf20b611338 | [
"MIT"
] | null | null | null | nombre = input("Nombre: ")
apellido = input("Apellido: ")
edad_nac = input("Edad de nacimiento: ")
correo1 = nombre[0] + "." + apellido + edad_nac[-2:] + "@uma.es"
correo2 = nombre[:3] + apellido[:3] + edad_nac[-2:] + "@uma.es"
print("Correos:", correo1, "y", correo2)
| 30.111111 | 64 | 0.616236 | 36 | 271 | 4.555556 | 0.472222 | 0.128049 | 0.182927 | 0.134146 | 0.158537 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0.147601 | 271 | 8 | 65 | 33.875 | 0.670996 | 0 | 0 | 0 | 0 | 0 | 0.228782 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5cbf10894183be7ac7e35be9617195b1566c8060 | 734 | py | Python | settings/wrappers.py | xupei0610/PFPN | 7ad049f84d1cc03200bb6cea383d5c00721f9ac4 | [
"MIT"
] | 9 | 2020-12-07T02:32:18.000Z | 2022-02-07T02:38:54.000Z | settings/wrappers.py | xupei0610/PFPN | 7ad049f84d1cc03200bb6cea383d5c00721f9ac4 | [
"MIT"
] | null | null | null | settings/wrappers.py | xupei0610/PFPN | 7ad049f84d1cc03200bb6cea383d5c00721f9ac4 | [
"MIT"
] | 1 | 2022-01-01T01:45:01.000Z | 2022-01-01T01:45:01.000Z | class DiscreteActionWrapper:
    def __init__(self, env, n):
        self.env = env
        self.action_cont = [
            # [l + (_+0.5)*(h-l)/n for _ in range(n)]
            # [l + _*(h-l)/(n+1) for _ in range(n)]
            [l + _*(h-l)/(n-1) for _ in range(n)]
            for h, l in zip(self.env.action_space.high, self.env.action_space.low)
        ]
        self.env.action_space.shape = [n] * len(self.env.action_space.high)
        delattr(self.env.action_space, "low")
        delattr(self.env.action_space, "high")

    def step(self, a):
        action_cont = [_[i] for _, i in zip(self.action_cont, a)]
        return self.env.step(action_cont)

    def __getattr__(self, name):
        return getattr(self.env, name)
| 40.777778 | 82 | 0.565395 | 110 | 734 | 3.536364 | 0.272727 | 0.179949 | 0.200514 | 0.277635 | 0.424165 | 0.11054 | 0.11054 | 0.11054 | 0.11054 | 0.11054 | 0 | 0.007533 | 0.276567 | 734 | 17 | 83 | 43.176471 | 0.725047 | 0.104905 | 0 | 0 | 0 | 0 | 0.010703 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.066667 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7a233abefcdd79a70599ef23710df83a02c92b84 | 1,408 | py | Python | src/addresses/models.py | caesarorz/complete-ecommerce | 35493812167c208c166df3048190a9988adf6bb0 | [
"MIT"
] | null | null | null | src/addresses/models.py | caesarorz/complete-ecommerce | 35493812167c208c166df3048190a9988adf6bb0 | [
"MIT"
] | null | null | null | src/addresses/models.py | caesarorz/complete-ecommerce | 35493812167c208c166df3048190a9988adf6bb0 | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
from billing.models import BillingProfile
ADDRESS_TYPES = (
    # ('billing', 'Facturacion'), # ('billing','Billing'),
    ('shipping', 'Envio'),  # ('shipping','Shipping'),
)


class Address(models.Model):
    billing_profile = models.ForeignKey(BillingProfile, on_delete=models.CASCADE)
    address_type = models.CharField(max_length=120, choices=ADDRESS_TYPES)
    direccion_linea_1 = models.CharField(max_length=120)
    direccion_linea_2 = models.CharField(max_length=120, null=True, blank=True)
    # pais = models.CharField(max_length=120, default='Costa Rica')
    provincia = models.CharField(max_length=120)
    canton = models.CharField(max_length=120)
    distrito = models.CharField(max_length=120, null=True, blank=True)

    def __str__(self):
        return str(self.billing_profile)

    def get_address(self):
        return "{linea1} {linea2}, {provincia}, {canton}, {distrito} ".format(  # {postal}
            linea1=self.direccion_linea_1,
            linea2=self.direccion_linea_2 or "",
            # pais = self.pais,
            provincia=self.provincia,
            canton=self.canton,
            distrito=self.distrito,
        )
| 39.111111 | 91 | 0.584517 | 141 | 1,408 | 5.652482 | 0.368794 | 0.131744 | 0.158093 | 0.21079 | 0.279799 | 0.110414 | 0.110414 | 0.110414 | 0.110414 | 0 | 0 | 0.029683 | 0.306108 | 1,408 | 35 | 92 | 40.228571 | 0.78608 | 0.153409 | 0 | 0 | 0 | 0 | 0.055743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.086957 | 0.086957 | 0.608696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7a2d41f97cfb2016f65180d1790a374db4dad3de | 1,286 | py | Python | homework/if-elif-else.py | mcorley-gba/IntroCS21-22 | a823e17f2cb618be0e67468cb15f48873ae85152 | [
"MIT"
] | null | null | null | homework/if-elif-else.py | mcorley-gba/IntroCS21-22 | a823e17f2cb618be0e67468cb15f48873ae85152 | [
"MIT"
] | null | null | null | homework/if-elif-else.py | mcorley-gba/IntroCS21-22 | a823e17f2cb618be0e67468cb15f48873ae85152 | [
"MIT"
] | null | null | null | #Write an if-elif-else chain that determines a person's stage of life.
#Set a value for the variable age, then:
# If the person is less than 2, print a message that the person
# is a baby.
#
# If the person is at least 2 years old, but less than 4, print
# a message that the person is a toddler.
#
# If the person is at least 4 years old, but less than 13, print
# a message that the person is a kid.
#
# If the person is at least 13 years old, but less than 20, print
# a message that the person is a teenager.
#
# If the person is at least 20 years old, but less than 65, print
# a message that the person is an adult.
#
# If the person is age 65 or older, print a message that the
# the person is an elder.
#Write your code below:
#age = 1 #If time, come back and change to user input.
age = eval(input("How old are you?\n")) #56
#if conditional_test:
if age < 2:
    print("You are a baby!")
elif age < 4: #elif = else if
    print("You are a toddler!")
elif age < 13:
    print("You are a kid!")
elif age < 20:
    print("You are a teenager!")
elif age < 65:
    print("You are an adult!")
else: #Else is a catch-all if every previous if test fails.
    print("You are an elder!")

#Ask for sports first, then compare with == (a single = is assignment, not a test).
sports = input("Do you play sports? (y/n)\n")
if sports == "y":
    print("You play sports!")
7a487e2a7618ec44c666e6aeb17a864f5b09aa19 | 241 | py | Python | ABC/abc101-abc150/abc112/a.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | 2 | 2020-06-12T09:54:23.000Z | 2021-05-04T01:34:07.000Z | ABC/abc101-abc150/abc112/a.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | 961 | 2020-06-23T07:26:22.000Z | 2022-03-31T21:34:52.000Z | ABC/abc101-abc150/abc112/a.py | KATO-Hiro/AtCoder | cbbdb18e95110b604728a54aed83a6ed6b993fde | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
def main():
    n = int(input())

    if n == 1:
        print("Hello World")
    else:
        a = int(input())
        b = int(input())
        print(a + b)


if __name__ == '__main__':
    main()
| 14.176471 | 29 | 0.39834 | 28 | 241 | 3.142857 | 0.607143 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014184 | 0.414938 | 241 | 16 | 30 | 15.0625 | 0.609929 | 0.087137 | 0 | 0 | 0 | 0 | 0.094059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.1 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7a48add92c5fa309680f94449d41fc1a514c1384 | 752 | py | Python | project/access/tests/fixtures/grants.py | jssmk/asylum | 004b05939784b86ba559968a7cdcedf248edb01f | [
"MIT"
] | 1 | 2017-04-08T21:31:37.000Z | 2017-04-08T21:31:37.000Z | project/access/tests/fixtures/grants.py | jssmk/asylum | 004b05939784b86ba559968a7cdcedf248edb01f | [
"MIT"
] | 9 | 2016-01-23T22:40:26.000Z | 2021-09-13T17:44:11.000Z | project/access/tests/fixtures/grants.py | jssmk/asylum | 004b05939784b86ba559968a7cdcedf248edb01f | [
"MIT"
] | 1 | 2017-04-08T22:13:42.000Z | 2017-04-08T22:13:42.000Z | # -*- coding: utf-8 -*-
import random
import factory.django
import factory.fuzzy
from access.models import AccessType, TokenType
from members.models import Member
from members.tests.fixtures.memberlikes import MemberFactory
from asylum.tests.utils import FuzzyLoremipsum
class GrantFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = 'access.Grant'
        django_get_or_create = ('atype', 'owner')

    atype = factory.fuzzy.FuzzyChoice(AccessType.objects.all())
    owner = factory.fuzzy.FuzzyChoice(Member.objects.filter(access_tokens__ttype__in=TokenType.objects.all()))  # It only makes sense to generate grants for members that have tokens
    #owner = factory.SubFactory(MemberFactory)
    #notes = FuzzyLoremipsum()
| 34.181818 | 181 | 0.765957 | 90 | 752 | 6.311111 | 0.577778 | 0.06338 | 0.080986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001548 | 0.140957 | 752 | 21 | 182 | 35.809524 | 0.877709 | 0.206117 | 0 | 0 | 0 | 0 | 0.037162 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.538462 | 0 | 0.846154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7a4b25e795260a82a448484fd77fd363f09417a7 | 8,117 | py | Python | app/models.py | gacbas/Zumicrospectro | 22f49158df193be0a3314806d1f42662b6a3f029 | [
"MIT"
] | null | null | null | app/models.py | gacbas/Zumicrospectro | 22f49158df193be0a3314806d1f42662b6a3f029 | [
"MIT"
] | null | null | null | app/models.py | gacbas/Zumicrospectro | 22f49158df193be0a3314806d1f42662b6a3f029 | [
"MIT"
] | null | null | null | from datetime import datetime
from hashlib import md5
from time import time
from flask import current_app
from flask_login import UserMixin
from werkzeug.security import generate_password_hash, check_password_hash
import jwt
from app import db, login
from app.search import add_to_index, remove_from_index, query_index
import array
arraytypecode = chr(ord('f'))
class SearchableMixin(object):
    @classmethod
    def search(cls, expression, page, per_page):
        ids, total = query_index(cls.__tablename__, expression, page, per_page)
        if total == 0:
            return cls.query.filter_by(id=0), 0
        when = []
        for i in range(len(ids)):
            when.append((ids[i], i))
        return cls.query.filter(cls.id.in_(ids)).order_by(
            db.case(when, value=cls.id)), total

    @classmethod
    def before_commit(cls, session):
        session._changes = {
            'add': [obj for obj in session.new if isinstance(obj, cls)],
            'update': [obj for obj in session.dirty if isinstance(obj, cls)],
            'delete': [obj for obj in session.deleted if isinstance(obj, cls)]
        }

    @classmethod
    def after_commit(cls, session):
        for obj in session._changes['add']:
            add_to_index(cls.__tablename__, obj)
        for obj in session._changes['update']:
            add_to_index(cls.__tablename__, obj)
        for obj in session._changes['delete']:
            remove_from_index(cls.__tablename__, obj)
        session._changes = None

    @classmethod
    def reindex(cls):
        for obj in cls.query:
            add_to_index(cls.__tablename__, obj)
followers = db.Table(
    'followers',
    db.Column('follower_id', db.Integer, db.ForeignKey('user.id')),
    db.Column('followed_id', db.Integer, db.ForeignKey('user.id'))
)
class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    password_hash = db.Column(db.String(128))
    posts = db.relationship('Post', backref='author', lazy='dynamic')
    about_me = db.Column(db.String(140))
    last_seen = db.Column(db.DateTime, default=datetime.utcnow)
    followed = db.relationship(
        'User', secondary=followers,
        primaryjoin=(followers.c.follower_id == id),
        secondaryjoin=(followers.c.followed_id == id),
        backref=db.backref('followers', lazy='dynamic'), lazy='dynamic')

    def __repr__(self):
        return '<User {}>'.format(self.username)

    def set_password(self, password):
        self.password_hash = generate_password_hash(password)

    def check_password(self, password):
        return check_password_hash(self.password_hash, password)

    def avatar(self, size):
        digest = md5(self.email.lower().encode('utf-8')).hexdigest()
        return 'https://www.gravatar.com/avatar/{}?d=identicon&s={}'.format(
            digest, size)

    def follow(self, user):
        if not self.is_following(user):
            self.followed.append(user)

    def unfollow(self, user):
        if self.is_following(user):
            self.followed.remove(user)

    def is_following(self, user):
        return self.followed.filter(
            followers.c.followed_id == user.id).count() > 0

    def followed_posts(self):
        followed = Post.query.join(
            followers, (followers.c.followed_id == Post.user_id)).filter(
                followers.c.follower_id == self.id)
        own = Post.query.filter_by(user_id=self.id)
        return followed.union(own).order_by(Post.timestamp.desc())

    def get_reset_password_token(self, expires_in=600):
        return jwt.encode(
            {'reset_password': self.id, 'exp': time() + expires_in},
            current_app.config['SECRET_KEY'],
            algorithm='HS256').decode('utf-8')

    @staticmethod
    def verify_reset_password_token(token):
        try:
            id = jwt.decode(token, current_app.config['SECRET_KEY'],
                            algorithms=['HS256'])['reset_password']
        except:
            return
        return User.query.get(id)
@login.user_loader
def load_user(id):
    return User.query.get(int(id))
class Post(SearchableMixin, db.Model):
    __searchable__ = ['body']
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.String(140))
    timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    language = db.Column(db.String(5))
    image = db.Column(db.LargeBinary)
    extension = db.Column(db.String(50))

    def __repr__(self):
        return '<Post {}>'.format(self.body)


db.event.listen(db.session, 'before_commit', Post.before_commit)
db.event.listen(db.session, 'after_commit', Post.after_commit)
class Samples(db.Model):
    sample_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    spectral_library = db.Column(db.String(256))
    class_mineral = db.Column(db.String)
    name = db.Column(db.String)
    image = db.Column(db.LargeBinary)
    image_link = db.Column(db.String)
    wavelength_range = db.Column(db.String(128))
    description = db.Column(db.TEXT)

    @staticmethod
    def get_classes():
        try:
            clsses = Samples.query.with_entities(Samples.class_mineral).distinct().all()
        except:
            return []
        return [cls[0] for cls in clsses]

    @staticmethod
    def get_names_by_class(cls):
        try:
            nms = Samples.query.filter(Samples.class_mineral == cls).with_entities(Samples.name).distinct().all()
        except:
            return []
        return [nm[0] for nm in nms]

    @staticmethod
    def get_sample_id_by_name(name):
        try:
            lst0 = Samples.query.filter(Samples.name.contains(name)).with_entities(Samples.sample_id).distinct().all()
            lst1 = Samples.query.filter(Samples.name.contains(name.lower())).with_entities(
                Samples.sample_id).distinct().all()
            lst2 = Samples.query.filter(Samples.name.contains(name.title())).with_entities(
                Samples.sample_id).distinct().all()
            lst = list(set().union(lst0, lst1, lst2))
        except:
            return []
        return [l[0] for l in lst]

    @staticmethod
    def get_spectra_metatadata_by_sample_id(iid):
        try:
            qr = Samples.query.filter(Samples.sample_id == iid).first()
            txt = "Name: " + qr.name + "\n" + \
                  "Class: " + qr.class_mineral + "\n" + \
                  "Wavelength range: " + qr.wavelength_range + "\n" + \
                  "Description: " + qr.description
        except:
            txt = []
        return txt

    @staticmethod
    def get_sample_image_from_id(iid):
        try:
            im = Samples.query.filter(Samples.sample_id == iid).with_entities(Samples.image).first()
        except:
            return None
        return im.image
class Spectra(db.Model):
    spectrum_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    sample_id = db.Column(db.Integer, db.ForeignKey('samples.sample_id', ondelete='CASCADE'))
    instrument = db.Column(db.String(128))
    measurement = db.Column(db.String(128))
    x_unit = db.Column(db.String(128))
    y_unit = db.Column(db.String(128))
    min_wavelength = db.Column(db.Float)
    max_wavelength = db.Column(db.Float)
    num_values = db.Column(db.Integer)
    additional_information = db.Column(db.TEXT)
    x_data = db.Column(db.LargeBinary)
    y_data = db.Column(db.LargeBinary)
    sample = db.relationship("Samples", backref=db.backref("spectrum", passive_deletes=True, uselist=False))

    @staticmethod
    def get_spectrum_by_id(iid):
        try:
            dt = Spectra.query.filter(Spectra.sample_id == iid).with_entities(Spectra.x_data, Spectra.y_data).first()
            x = array.array(arraytypecode)
            x.frombytes(dt.x_data)
            y = array.array(arraytypecode)
            y.frombytes(dt.y_data)
        except:
            return []
        return [list(x), list(y)]
7a4fdc53b715ad463c963713fbdd91b3d75425ed | 292 | py | Python | _pysh/styles.py | etianen/py.sh | 8172cc39508bd8ead68af20cdcfe96a6d2583fb5 | [
"MIT"
] | 3 | 2018-12-18T23:21:30.000Z | 2021-12-24T05:37:18.000Z | _pysh/styles.py | etianen/py.sh | 8172cc39508bd8ead68af20cdcfe96a6d2583fb5 | [
"MIT"
] | 35 | 2016-03-27T20:42:12.000Z | 2017-10-17T10:57:27.000Z | _pysh/styles.py | etianen/py.sh | 8172cc39508bd8ead68af20cdcfe96a6d2583fb5 | [
"MIT"
] | 6 | 2016-06-07T11:43:02.000Z | 2021-12-24T05:37:24.000Z | class StyleMapping:
    def __init__(self, opts):
        self._opts = opts

    def __getitem__(self, key):
        return getattr(self._opts, "style_{}".format(key), "").encode().decode("unicode_escape")


def apply_styles(opts, command):
    return command.format_map(StyleMapping(opts))
| 24.333333 | 96 | 0.678082 | 35 | 292 | 5.257143 | 0.571429 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178082 | 292 | 11 | 97 | 26.545455 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0.075342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7a667e0205507dfaafc0fa2f690894e26f9139e1 | 1,368 | py | Python | libs/sqlobject/tests/test_sqlbuilder_importproxy.py | scambra/HTPC-Manager | 1a1440db84ae1b6e7a2610c7f3bd5b6adf0aab1d | [
"MIT"
] | 422 | 2015-01-08T14:08:08.000Z | 2022-02-07T11:47:37.000Z | libs/sqlobject/tests/test_sqlbuilder_importproxy.py | scambra/HTPC-Manager | 1a1440db84ae1b6e7a2610c7f3bd5b6adf0aab1d | [
"MIT"
] | 581 | 2015-01-01T08:07:16.000Z | 2022-02-23T11:44:37.000Z | libs/sqlobject/tests/test_sqlbuilder_importproxy.py | scambra/HTPC-Manager | 1a1440db84ae1b6e7a2610c7f3bd5b6adf0aab1d | [
"MIT"
] | 162 | 2015-01-01T00:21:16.000Z | 2022-02-23T02:36:04.000Z | from sqlobject import *
from sqlobject.tests.dbtest import *
from sqlobject.views import *
from sqlobject.sqlbuilder import ImportProxy, Alias
def testSimple():
nyi = ImportProxy('NotYetImported')
x = nyi.q.name
class NotYetImported(SQLObject):
name = StringCol(dbName='a_name')
y = nyi.q.name
assert str(x) == 'not_yet_imported.a_name'
assert str(y) == 'not_yet_imported.a_name'
def testAddition():
nyi = ImportProxy('NotYetImported2')
x = nyi.q.name+nyi.q.name
class NotYetImported2(SQLObject):
name = StringCol(dbName='a_name')
assert str(x) == '((not_yet_imported2.a_name) + (not_yet_imported2.a_name))'
def testOnView():
nyi = ImportProxy('NotYetImportedV')
x = nyi.q.name
class NotYetImported3(SQLObject):
name = StringCol(dbName='a_name')
class NotYetImportedV(ViewSQLObject):
class sqlmeta:
idName = NotYetImported3.q.id
name = StringCol(dbName=NotYetImported3.q.name)
assert str(x) == 'not_yet_imported_v.name'
def testAlias():
nyi = ImportProxy('NotYetImported4')
y = Alias(nyi, 'y')
x = y.q.name
class NotYetImported4(SQLObject):
name = StringCol(dbName='a_name')
assert str(y) == 'not_yet_imported4 y'
assert tablesUsedSet(x, None) == set(['not_yet_imported4 y'])
assert str(x) == 'y.a_name'
| 26.307692 | 80 | 0.666667 | 171 | 1,368 | 5.192982 | 0.251462 | 0.050676 | 0.045045 | 0.126126 | 0.400901 | 0.273649 | 0.191441 | 0.15991 | 0 | 0 | 0 | 0.010138 | 0.206871 | 1,368 | 51 | 81 | 26.823529 | 0.808295 | 0 | 0 | 0.162162 | 0 | 0 | 0.188596 | 0.089912 | 0 | 0 | 0 | 0 | 0.189189 | 1 | 0.108108 | false | 0 | 0.567568 | 0 | 0.972973 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7a756102e6c5960c9b3b3d7df97056599e46634f | 846 | py | Python | nlu/components/classifiers/generic_classifier/generic_classifier.py | milyiyo/nlu | d209ed11c6a84639c268f08435552248391c5573 | [
"Apache-2.0"
] | 480 | 2020-08-24T02:36:40.000Z | 2022-03-30T08:09:43.000Z | nlu/components/classifiers/generic_classifier/generic_classifier.py | milyiyo/nlu | d209ed11c6a84639c268f08435552248391c5573 | [
"Apache-2.0"
] | 28 | 2020-09-26T18:55:43.000Z | 2022-03-26T01:05:45.000Z | nlu/components/classifiers/generic_classifier/generic_classifier.py | milyiyo/nlu | d209ed11c6a84639c268f08435552248391c5573 | [
"Apache-2.0"
] | 76 | 2020-09-25T22:55:12.000Z | 2022-03-17T20:25:52.000Z | from sparknlp_jsl.annotator import GenericClassifierModel, GenericClassifierApproach
from sparknlp_jsl.base import *
class GenericClassifier:
    @staticmethod
    def get_default_model():
        return GenericClassifierModel.pretrained() \
            .setInputCols("feature_vector") \
            .setOutputCol("generic_classification")

    @staticmethod
    def get_pretrained_model(name, language):
        return GenericClassifierModel.pretrained(name, language) \
            .setInputCols("feature_vector") \
            .setOutputCol("generic_classification")

    @staticmethod
    def get_default_trainable_model():
        return GenericClassifierApproach() \
            .setInputCols("feature_vector") \
            .setOutputCol("generic_classification") \
            .setLabelColumn("y") \
            .setEpochsNumber(2)
| 33.84 | 85 | 0.682033 | 64 | 846 | 8.78125 | 0.46875 | 0.080071 | 0.096085 | 0.197509 | 0.373665 | 0.373665 | 0.270463 | 0.270463 | 0.270463 | 0 | 0 | 0.001534 | 0.229314 | 846 | 24 | 86 | 35.25 | 0.860429 | 0 | 0 | 0.45 | 0 | 0 | 0.128842 | 0.078014 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.1 | 0.15 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7a849b05642c39d1086f0df1b170473231c30737 | 98 | py | Python | exp/exception_handling_dynamic_creation.py | nicolasessisbreton/fython | 988f5a94cee8b16b0000501a22239195c73424a1 | [
"Apache-2.0"
] | 41 | 2016-01-21T05:14:45.000Z | 2021-11-24T20:37:21.000Z | exp/exception_handling_dynamic_creation.py | nicolasessisbreton/fython | 988f5a94cee8b16b0000501a22239195c73424a1 | [
"Apache-2.0"
] | 5 | 2016-01-21T05:36:37.000Z | 2016-08-22T19:26:51.000Z | exp/exception_handling_dynamic_creation.py | nicolasessisbreton/fython | 988f5a94cee8b16b0000501a22239195c73424a1 | [
"Apache-2.0"
] | 3 | 2016-01-23T04:03:44.000Z | 2016-08-21T15:58:38.000Z | a = type('a_fyerr', (Exception,), {})
try:
    raise a('aa')
except Exception as e:
    print(type(e))
7a8ac69d9312c897a80eef7fbe6316c24ba27db8 | 2,809 | py | Python | wsu/tools/simx/simx/python/simx/base/extension.py | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | 1 | 2020-02-28T20:35:09.000Z | 2020-02-28T20:35:09.000Z | wsu/tools/simx/simx/python/simx/base/extension.py | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | wsu/tools/simx/simx/python/simx/base/extension.py | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | import re
class Extension(object):
"""
Base class and helper functions for TossimBase extensions.
Extensions allow adding extra information and methods to
L{TossimBase} and associated L{Node} objects. Some examples are
advanced topology configuration (see L{simx.fluid}) and sensor
readings (see L{simx.sensor}).
"""
def __init__(self, extension_name):
"""
Initializer.
@type extension_name: string
@param extension_name: name used when extension is registered;
it should be unique per extension type
"""
self.extension_name = extension_name
@staticmethod
def mixin(target_class, source_class):
"""
Mixes the source_class into the target_class.
More specifically, mixes in instance methods while ignoring
class methods, static methods and other class attributes. If
an instance method already exists in the target_class an
exception is raised.
@type target_class: class
@param target_class: class to mixin methods
@type source_class: class
@param source_class: class with mixin methods
@raise RuntimeError: if a name conflict occurs during mixin
"""
def instance_method(attr):
return hasattr(attr, "im_func") and not attr.im_self
for name in (name for name in dir(source_class)
if not re.match("^__.*__$", name)):
method = getattr(source_class, name)
if instance_method(method):
if hasattr(target_class, name):
raise RuntimeError("duplicate method: %s" % name)
else:
setattr(target_class, name, method.im_func)
def decorate_node_class(self, node_class):
"""
        This is called once; it can be used to decorate the node class.
The order it is invoked relative to other extensions is the
order the extension was registered.
@type node_class: class, of L{Node}
@param node_class: class to decorate
"""
pass
def decorate_node(self, node):
"""
This is called once per node.
The order it is invoked relative to other extensions is the
order the extension was registered.
@type node: L{Node}
@param node: node to decorate
"""
pass
def decorate_tossim_class(self, tossim_class):
"""
        This is called once; it can be used to decorate the tossim class.
The order it is invoked relative to other extensions is the
order the extension was registered.
@type tossim_class: class, of TossimBase
@param tossim_class: class to decorate
"""
pass
| 30.532609 | 70 | 0.620862 | 343 | 2,809 | 4.962099 | 0.306122 | 0.047004 | 0.021152 | 0.021152 | 0.256757 | 0.207403 | 0.207403 | 0.207403 | 0.207403 | 0.207403 | 0 | 0 | 0.317195 | 2,809 | 91 | 71 | 30.868132 | 0.887383 | 0.551442 | 0 | 0.136364 | 0 | 0 | 0.038002 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.136364 | 0.045455 | 0.045455 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
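The `Extension.mixin` helper in the file above relies on Python 2 bound-method attributes (`im_func`, `im_self`), which no longer exist in Python 3. A minimal Python 3 sketch of the same idea — copying instance methods from a source class onto a target class and raising on name conflicts — is shown below; the class names are illustrative, and unlike the original this version does not distinguish static methods (in Python 3 they are plain functions too):

```python
import inspect
import re

def mixin(target_class, source_class):
    # Mirrors Extension.mixin: skip dunder names, copy plain functions
    # (instance methods in Python 3), and refuse to overwrite existing names.
    for name in dir(source_class):
        if re.match(r"^__.*__$", name):
            continue
        attr = getattr(source_class, name)
        if inspect.isfunction(attr):
            if hasattr(target_class, name):
                raise RuntimeError("duplicate method: %s" % name)
            setattr(target_class, name, attr)

class Node:
    pass

class BatteryMixin:
    def battery_level(self):
        return 100

mixin(Node, BatteryMixin)
print(Node().battery_level())  # -> 100
```

A second `mixin(Node, BatteryMixin)` call would raise `RuntimeError`, matching the duplicate-method check in the original.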
8f85163033775c570b74b9dc7062353045d4c52d | 1,796 | py | Python | src/genie/libs/parser/iosxe/tests/ShowIpNhrpNhsDetail/cli/equal/golden_output_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxe/tests/ShowIpNhrpNhsDetail/cli/equal/golden_output_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxe/tests/ShowIpNhrpNhsDetail/cli/equal/golden_output_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | expected_output = {
"Tunnel0": {
"nhs_ip": {
"111.0.0.100": {
"nhs_state": "E",
"nbma_address": "111.1.1.1",
"priority": 0,
"cluster": 0,
"req_sent": 0,
"req_failed": 0,
"reply_recv": 0,
"current_request_id": 94,
"protection_socket_requested": "FALSE",
}
}
},
"Tunnel100": {
"nhs_ip": {
"100.0.0.100": {
"nhs_state": "RE",
"nbma_address": "101.1.1.1",
"priority": 0,
"cluster": 0,
"req_sent": 105434,
"req_failed": 0,
"reply_recv": 105434,
"receive_time": "00:00:49",
"current_request_id": 35915,
"ack": 35914,
"protection_socket_requested": "FALSE",
}
}
},
"Tunnel111": {
"nhs_ip": {
"111.0.0.100": {
"nhs_state": "E",
"nbma_address": "111.1.1.1",
"priority": 0,
"cluster": 0,
"req_sent": 184399,
"req_failed": 0,
"reply_recv": 0,
"current_request_id": 35916,
}
}
},
"pending_registration_requests": {
"req_id": {
"16248": {
"ret": 64,
"nhs_ip": "111.0.0.100",
"nhs_state": "expired",
"tunnel": "Tu111",
},
"57": {
"ret": 64,
"nhs_ip": "172.16.0.1",
"nhs_state": "expired",
"tunnel": "Tu100",
},
}
},
}
| 27.630769 | 55 | 0.342428 | 149 | 1,796 | 3.879195 | 0.369128 | 0.020761 | 0.034602 | 0.055363 | 0.480969 | 0.425606 | 0.425606 | 0.425606 | 0.389273 | 0.217993 | 0 | 0.150731 | 0.505011 | 1,796 | 64 | 56 | 28.0625 | 0.499438 | 0 | 0 | 0.40625 | 0 | 0 | 0.320156 | 0.046214 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8f8538f9d3c50e10e3bee464d57a3a8811ef1814 | 151 | py | Python | emotion_detection/read_emotions.py | yaiestura/coursera_hse_research | 510d2cc4e255346b9fefa0d386b566af1123be81 | [
"MIT"
] | null | null | null | emotion_detection/read_emotions.py | yaiestura/coursera_hse_research | 510d2cc4e255346b9fefa0d386b566af1123be81 | [
"MIT"
] | null | null | null | emotion_detection/read_emotions.py | yaiestura/coursera_hse_research | 510d2cc4e255346b9fefa0d386b566af1123be81 | [
"MIT"
] | null | null | null | def main():
with open("emotions.txt", "r") as f:
count = set(f.readlines())
print(count)
print(len(count))
if __name__ == '__main__':
main()
| 15.1 | 37 | 0.615894 | 22 | 151 | 3.863636 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178808 | 151 | 9 | 38 | 16.777778 | 0.685484 | 0 | 0 | 0 | 0 | 0 | 0.139073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.142857 | 0.285714 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8fa5dd1f18048a804e383db31d6325e0de639243 | 287 | py | Python | pyexcelerate/__init__.py | AlexHill/PyExcelerate | 8944c1c30117d4a7e19d9eb15d05c00929d16989 | [
"BSD-2-Clause"
] | null | null | null | pyexcelerate/__init__.py | AlexHill/PyExcelerate | 8944c1c30117d4a7e19d9eb15d05c00929d16989 | [
"BSD-2-Clause"
] | null | null | null | pyexcelerate/__init__.py | AlexHill/PyExcelerate | 8944c1c30117d4a7e19d9eb15d05c00929d16989 | [
"BSD-2-Clause"
] | null | null | null | from .Workbook import Workbook
from .Style import Style
from .Fill import Fill
from .Font import Font
from .Format import Format
from .Alignment import Alignment
try:
import pkg_resources
__version__ = pkg_resources.require('PyExcelerate')[0].version
except:
__version__ = 'unknown'
| 22.076923 | 63 | 0.797909 | 38 | 287 | 5.763158 | 0.447368 | 0.109589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004016 | 0.132404 | 287 | 12 | 64 | 23.916667 | 0.875502 | 0 | 0 | 0 | 0 | 0 | 0.066202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.636364 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
8fb0cc91f84bbc539459ee97c5ce225a96109b9f | 315 | py | Python | booth/gprInit.py | Fife/MSU-Robotics-Club | ba74df39020c6310324fa960c651a6f3d647ce1e | [
"MIT"
] | 9 | 2021-11-30T20:05:49.000Z | 2022-01-27T00:50:55.000Z | booth/gprInit.py | Fife/MSU-Robotics-Club | ba74df39020c6310324fa960c651a6f3d647ce1e | [
"MIT"
] | 1 | 2022-01-30T00:34:42.000Z | 2022-01-30T00:34:42.000Z | booth/gprInit.py | Fife/MSU-Robotics-Club | ba74df39020c6310324fa960c651a6f3d647ce1e | [
"MIT"
] | 1 | 2022-01-27T00:48:48.000Z | 2022-01-27T00:48:48.000Z | import getopt, sys
from Extractor1 import *
#Only argument is the path of the argos experiment file
try:
experiment_path = sys.argv[1]
except IndexError:
print("Erorr, Nothing was passed in as an argument.")
else:
experiment_path = sys.argv[1]
root = getRoot(experiment_path)
writeInit(root)
| 17.5 | 55 | 0.72381 | 45 | 315 | 5 | 0.688889 | 0.186667 | 0.151111 | 0.186667 | 0.195556 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011858 | 0.196825 | 315 | 17 | 56 | 18.529412 | 0.87747 | 0.171429 | 0 | 0.2 | 0 | 0 | 0.169884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.2 | 0 | 0.2 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
8fbf5ecc6ed3de785cd388008c750e0d8edb9542 | 1,777 | py | Python | src/NodeGenerators/PeanoHilbertDistributeNodes.py | as1m0n/spheral | 4d72822f56aca76d70724c543d389d15ff6ca48e | [
"BSD-Source-Code",
"BSD-3-Clause-LBNL",
"FSFAP"
] | 19 | 2020-10-21T01:49:17.000Z | 2022-03-15T12:29:17.000Z | src/NodeGenerators/PeanoHilbertDistributeNodes.py | markguozhiming/spheral | bbb982102e61edb8a1d00cf780bfa571835e1b61 | [
"BSD-Source-Code",
"BSD-3-Clause-LBNL",
"FSFAP"
] | 41 | 2020-09-28T23:14:27.000Z | 2022-03-28T17:01:33.000Z | src/NodeGenerators/PeanoHilbertDistributeNodes.py | markguozhiming/spheral | bbb982102e61edb8a1d00cf780bfa571835e1b61 | [
"BSD-Source-Code",
"BSD-3-Clause-LBNL",
"FSFAP"
] | 5 | 2020-11-03T16:14:26.000Z | 2022-01-03T19:07:24.000Z | import Spheral
import distributeNodesGeneric
#-------------------------------------------------------------------------------
# Domain decompose using PeanoHilbert ordering (1d method).
#-------------------------------------------------------------------------------
def distributeNodes1d(*listOfNodeTuples):
distributeNodesGeneric.distributeNodesGeneric(listOfNodeTuples,
Spheral.DataBase1d,
Spheral.globalNodeIDsAll1d,
Spheral.PeanoHilbertOrderRedistributeNodes1d)
#-------------------------------------------------------------------------------
# Domain decompose using PeanoHilbert ordering (2d method).
#-------------------------------------------------------------------------------
def distributeNodes2d(*listOfNodeTuples):
distributeNodesGeneric.distributeNodesGeneric(listOfNodeTuples,
Spheral.DataBase2d,
Spheral.globalNodeIDsAll2d,
Spheral.PeanoHilbertOrderRedistributeNodes2d)
#-------------------------------------------------------------------------------
# Domain decompose using PeanoHilbert ordering (3d method).
#-------------------------------------------------------------------------------
def distributeNodes3d(*listOfNodeTuples):
distributeNodesGeneric.distributeNodesGeneric(listOfNodeTuples,
Spheral.DataBase3d,
Spheral.globalNodeIDsAll3d,
Spheral.PeanoHilbertOrderRedistributeNodes3d)
| 57.322581 | 95 | 0.409679 | 61 | 1,777 | 11.934426 | 0.42623 | 0.061813 | 0.082418 | 0.131868 | 0.506868 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011914 | 0.291503 | 1,777 | 30 | 96 | 59.233333 | 0.566322 | 0.36466 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | true | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8fc68c6f7d1c08754edd736345f131547afecca5 | 677 | py | Python | dzTraficoBackend/dzTrafico/BusinessEntities/LCRecommendation.py | DZAymen/dz-Trafico | 74ff9caf9e3845d8af977c46b04a2d3421a0661b | [
"MIT"
] | null | null | null | dzTraficoBackend/dzTrafico/BusinessEntities/LCRecommendation.py | DZAymen/dz-Trafico | 74ff9caf9e3845d8af977c46b04a2d3421a0661b | [
"MIT"
] | null | null | null | dzTraficoBackend/dzTrafico/BusinessEntities/LCRecommendation.py | DZAymen/dz-Trafico | 74ff9caf9e3845d8af977c46b04a2d3421a0661b | [
"MIT"
] | null | null | null |
class LCRecommendation:
TURN_LEFT = -1
TURN_RIGHT = 1
STRAIGHT_AHEAD = 0
CHANGE_TO_EITHER_WAY = 2
change_lane = True
change_to_either_way = False
recommendation = 0
def __init__(self, lane, recommendation):
self.lane = lane
self.recommendation = recommendation
if recommendation == self.TURN_RIGHT:
self.target_lane = lane - 1
elif recommendation == self.TURN_LEFT:
self.target_lane = lane + 1
elif recommendation == self.CHANGE_TO_EITHER_WAY:
self.change_to_either_way = True
elif recommendation == self.STRAIGHT_AHEAD:
self.change_lane = False | 29.434783 | 57 | 0.645495 | 79 | 677 | 5.202532 | 0.303797 | 0.218978 | 0.136253 | 0.16545 | 0.291971 | 0.199513 | 0.199513 | 0.199513 | 0 | 0 | 0 | 0.014553 | 0.289513 | 677 | 23 | 58 | 29.434783 | 0.839917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
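The `LCRecommendation` constructor above resolves a lane-change recommendation into a target lane: `TURN_RIGHT` maps to `lane - 1` and `TURN_LEFT` to `lane + 1`, which suggests lanes are numbered from the right. A small self-contained sketch of that mapping (a hypothetical standalone function, since the recorded class is not importable here):

```python
# Recommendation codes, copied from the LCRecommendation class above.
TURN_LEFT, TURN_RIGHT, STRAIGHT_AHEAD, CHANGE_TO_EITHER_WAY = -1, 1, 0, 2

def target_lane(lane, recommendation):
    # Mirrors LCRecommendation.__init__: a right turn decrements the lane
    # index, a left turn increments it; otherwise the vehicle stays put.
    if recommendation == TURN_RIGHT:
        return lane - 1
    if recommendation == TURN_LEFT:
        return lane + 1
    return lane

print(target_lane(2, TURN_RIGHT))  # -> 1
print(target_lane(2, TURN_LEFT))   # -> 3
```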
8fdf0d6e3dcdf33d021bba7c08fb3cbf31721907 | 302 | py | Python | python/CV_ex049.py | FaunoGuazina/X_very_old_algorithm_exercises | c9b9ec78e8b82f2e23ef85ba9a5e7fd6e0deaea6 | [
"MIT"
] | null | null | null | python/CV_ex049.py | FaunoGuazina/X_very_old_algorithm_exercises | c9b9ec78e8b82f2e23ef85ba9a5e7fd6e0deaea6 | [
"MIT"
] | null | null | null | python/CV_ex049.py | FaunoGuazina/X_very_old_algorithm_exercises | c9b9ec78e8b82f2e23ef85ba9a5e7fd6e0deaea6 | [
"MIT"
] | null | null | null | print('=ˆ= ' * 8)
print('      MULTIPLICATION TABLE')
print('=ˆ= ' * 8)
num = int(input('enter a number to find\nits multiplication table: '))
print(' ' * 7, '-' * 13)
x = 1
for c in range(1, 11):
print(' ' * 7, '{} x {:2} = {:3}'.format(num, x, num*x))
x += 1
print(' ' * 7, '-' * 13) | 30.2 | 79 | 0.516556 | 48 | 302 | 3.25 | 0.5625 | 0.115385 | 0.089744 | 0.320513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06867 | 0.228477 | 302 | 10 | 80 | 30.2 | 0.600858 | 0 | 0 | 0.4 | 0 | 0 | 0.376238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
8fe31103547372def2141cfd476d4dc83dfc8cfd | 129 | py | Python | src/keri/vdr/__init__.py | SmithSamuelM/keripy-wot | ec329f4e011026d655bf46d269792ac97f23276d | [
"Apache-2.0"
] | 10 | 2021-06-09T16:15:32.000Z | 2022-03-28T22:14:11.000Z | src/keri/vdr/__init__.py | SmithSamuelM/keripy-wot | ec329f4e011026d655bf46d269792ac97f23276d | [
"Apache-2.0"
] | 47 | 2021-06-17T20:00:02.000Z | 2022-03-31T20:20:44.000Z | src/keri/vdr/__init__.py | SmithSamuelM/keripy-wot | ec329f4e011026d655bf46d269792ac97f23276d | [
"Apache-2.0"
] | 6 | 2021-06-10T11:24:25.000Z | 2022-01-28T08:07:43.000Z | # -*- encoding: utf-8 -*-
"""
KERI
keri.vdr Package
"""
__all__ = ["issuing", "eventing", "registering", "viring", "verifying"]
| 16.125 | 71 | 0.604651 | 13 | 129 | 5.692308 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.139535 | 129 | 7 | 72 | 18.428571 | 0.657658 | 0.356589 | 0 | 0 | 0 | 0 | 0.546667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8ff27d008364f8f98d08457e48951f175ea9e3e0 | 356 | py | Python | app/app/tests.py | obetron/recipe-app-api | 5dbdeb4eea13ca9388091b7c2fcc3461640a397c | [
"MIT"
] | null | null | null | app/app/tests.py | obetron/recipe-app-api | 5dbdeb4eea13ca9388091b7c2fcc3461640a397c | [
"MIT"
] | null | null | null | app/app/tests.py | obetron/recipe-app-api | 5dbdeb4eea13ca9388091b7c2fcc3461640a397c | [
"MIT"
] | null | null | null | from django.test import TestCase
from app.calc import add, substract
class MyTestCase(TestCase):
def test_add(self):
"""Test that two numbers are added together"""
self.assertEqual(add(11, 3), 14)
def test_substract(self):
"""Test that values are substracted and returned"""
self.assertEqual(substract(5, 11), 6)
| 27.384615 | 59 | 0.674157 | 48 | 356 | 4.958333 | 0.604167 | 0.058824 | 0.10084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032374 | 0.219101 | 356 | 12 | 60 | 29.666667 | 0.823741 | 0.241573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8ff6fb9297dead34c9e84a21e45d008aa46a7bf0 | 1,146 | py | Python | entangle/http.py | radiantone/entangle | 82ee5adaf5a3e40bca5b049e272736d3a8322568 | [
"MIT"
] | 102 | 2021-04-02T22:15:24.000Z | 2022-02-02T10:13:48.000Z | entangle/http.py | radiantone/entangle | 82ee5adaf5a3e40bca5b049e272736d3a8322568 | [
"MIT"
] | null | null | null | entangle/http.py | radiantone/entangle | 82ee5adaf5a3e40bca5b049e272736d3a8322568 | [
"MIT"
] | 7 | 2021-06-01T01:52:57.000Z | 2021-07-12T22:58:27.000Z | """
http.py - Module that provides http oriented decorators
"""
from functools import partial
import requests
def request(function=None,
timeout=None,
url=None,
method='GET',
sleep=None):
"""
:param function:
:param timeout:
:param url:
:param method:
:param sleep:
:return:
"""
def decorator(func):
def wrapper(f_func):
# Build http request function here, get result
# call func with result
def invoke_request(_func, **kwargs):
def make_request(url, method, data):
if method == 'GET':
response = requests.get(url=url, params=data)
return response.content
return None
response = make_request(url, method, kwargs)
return _func(response)
pfunc = partial(invoke_request, f_func)
pfunc.__name__ = func.__name__
return pfunc
return wrapper(func)
if function is not None:
return decorator(function)
return decorator
| 22.92 | 69 | 0.541012 | 115 | 1,146 | 5.252174 | 0.382609 | 0.049669 | 0.046358 | 0.066225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.378709 | 1,146 | 49 | 70 | 23.387755 | 0.848315 | 0.179756 | 0 | 0 | 0 | 0 | 0.006704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
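The `request` decorator above wraps a function so that, when called, an HTTP request is made first and the response body is passed to the wrapped function. The sketch below keeps the same `partial`-based shape but injects the transport via a `fetch` parameter (an assumption, not part of the original API) so it runs without the network or the `requests` package; the unused `timeout`/`sleep` parameters are dropped:

```python
from functools import partial

def request(function=None, url=None, method='GET', fetch=None):
    # Same structure as the original: build a wrapper that performs the
    # request, then hands the response body to the decorated function.
    def decorator(func):
        def invoke_request(_func, **kwargs):
            response = fetch(url, method, kwargs) if method == 'GET' else None
            return _func(response)
        pfunc = partial(invoke_request, func)
        pfunc.__name__ = func.__name__  # partial objects allow attribute assignment
        return pfunc
    return decorator(function) if function is not None else decorator

def fake_fetch(url, method, data):
    # Canned body standing in for requests.get(url, params=data).content.
    return b'{"status": "ok"}'

@request(url='https://example.invalid/api', fetch=fake_fetch)
def handle(response):
    return response

print(handle())  # -> b'{"status": "ok"}'
```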
891463285334b75309673863ed31806da507dbd1 | 1,080 | py | Python | SimGeneral/DataMixingModule/python/supplementary/ReconstructionLocalCosmics_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | SimGeneral/DataMixingModule/python/supplementary/ReconstructionLocalCosmics_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | SimGeneral/DataMixingModule/python/supplementary/ReconstructionLocalCosmics_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
#
# tracker
#
from RecoLocalTracker.Configuration.RecoLocalTracker_Cosmics_cff import *
from RecoTracker.Configuration.RecoTrackerP5_cff import *
from RecoVertex.BeamSpotProducer.BeamSpot_cff import *
from RecoTracker.Configuration.RecoTrackerBHM_cff import *
from RecoTracker.DeDx.dedxEstimators_Cosmics_cff import *
#
# calorimeters
#
from RecoLocalCalo.Configuration.RecoLocalCalo_Cosmics_cff import *
from RecoEcal.Configuration.RecoEcalCosmics_cff import *
#
# muons
#
from RecoLocalMuon.Configuration.RecoLocalMuonCosmics_cff import *
from RecoMuon.Configuration.RecoMuonCosmics_cff import *
# primary vertex
#from RecoVertex.Configuration.RecoVertexCosmicTracks_cff import *
# local reco
trackerCosmics = cms.Sequence(offlineBeamSpot*trackerlocalreco)
caloCosmics = cms.Sequence(calolocalreco)
muonsLocalRecoCosmics = cms.Sequence(muonlocalreco+muonlocalrecoNoDrift)
localReconstructionCosmics = cms.Sequence(trackerCosmics*caloCosmics*muonsLocalRecoCosmics)
reconstructionCosmics = cms.Sequence(localReconstructionCosmics)
| 30 | 91 | 0.858333 | 100 | 1,080 | 9.14 | 0.45 | 0.098468 | 0.085339 | 0.078775 | 0.080963 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001005 | 0.078704 | 1,080 | 35 | 92 | 30.857143 | 0.917588 | 0.108333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
64df1cf1557766bd4bb7ac11c42f4c8045b38a1c | 318 | py | Python | roglick/mobs.py | Kromey/roglick | b76202af71df0c30be0bd5f06a3428c990476e0e | [
"MIT"
] | 6 | 2015-05-05T21:28:35.000Z | 2019-04-14T13:42:38.000Z | roglick/mobs.py | Kromey/roglick | b76202af71df0c30be0bd5f06a3428c990476e0e | [
"MIT"
] | null | null | null | roglick/mobs.py | Kromey/roglick | b76202af71df0c30be0bd5f06a3428c990476e0e | [
"MIT"
] | null | null | null | from roglick.engine import colors,file_obj
class Mob(file_obj.MultiFileObj):
def __init__(self):
super().__init__('data/mobs/*.json')
def _process_item(self, item):
item['sprite']['color'] = getattr(colors, item['sprite']['color'])
return super()._process_item(item)
npcs = Mob()
| 21.2 | 74 | 0.650943 | 40 | 318 | 4.825 | 0.6 | 0.072539 | 0.15544 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185535 | 318 | 14 | 75 | 22.714286 | 0.745174 | 0 | 0 | 0 | 0 | 0 | 0.119874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
64ea6bc027cd362adcba9bdbbd869cf670b03e8e | 779 | py | Python | books/repositories/elastic.py | OurBooks-Team-Yuml/Books | 58abbbcd761f4ec359c6ab62a36ee77da6348e43 | [
"MIT"
] | null | null | null | books/repositories/elastic.py | OurBooks-Team-Yuml/Books | 58abbbcd761f4ec359c6ab62a36ee77da6348e43 | [
"MIT"
] | 45 | 2019-12-16T11:10:27.000Z | 2020-05-18T07:15:15.000Z | books/repositories/elastic.py | OurBooks-Team-Yuml/Books | 58abbbcd761f4ec359c6ab62a36ee77da6348e43 | [
"MIT"
] | null | null | null | from dataclasses import asdict
import os
from elasticsearch import Elasticsearch
from books.entities import Author, Book
from books.use_cases.repositories import BaseElasticRepository
class ElasticRepository(BaseElasticRepository):
def add_book(self, book: Book) -> None:
es = self._get_elastic()
es.index(index=os.environ['ES_BOOKS_INDEX'], id=book.id, body=asdict(book))
def add_author(self, author: Author) -> None:
full_name = f"{author.first_name} {author.last_name}"
body = {**asdict(author), "full_name": full_name}
es = self._get_elastic()
es.index(index=os.environ['ES_AUTHORS_INDEX'], id=author.id, body=body)
def _get_elastic(self) -> Elasticsearch:
return Elasticsearch(os.environ['ES_URL'])
| 32.458333 | 83 | 0.709884 | 102 | 779 | 5.235294 | 0.343137 | 0.05618 | 0.061798 | 0.059925 | 0.146067 | 0.146067 | 0.146067 | 0.146067 | 0.146067 | 0.146067 | 0 | 0 | 0.173299 | 779 | 23 | 84 | 33.869565 | 0.829193 | 0 | 0 | 0.125 | 0 | 0 | 0.106547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.3125 | 0.0625 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
64ecc1a5cceb67533c436f6bebe184eefbb4042f | 366 | py | Python | adminlte/tests.py | ricardochaves/django-adminlte | 7d47948119a04056bec4e7572c69f288ab307894 | [
"BSD-3-Clause"
] | 1 | 2017-07-25T01:31:31.000Z | 2017-07-25T01:31:31.000Z | adminlte/tests.py | ricardochaves/django-adminlte | 7d47948119a04056bec4e7572c69f288ab307894 | [
"BSD-3-Clause"
] | null | null | null | adminlte/tests.py | ricardochaves/django-adminlte | 7d47948119a04056bec4e7572c69f288ab307894 | [
"BSD-3-Clause"
] | 5 | 2017-10-17T06:05:21.000Z | 2020-12-10T03:04:28.000Z | from django.test import TestCase
from django.forms.boundfield import BoundField
from django.forms import Widget
from .templatetags.extra_functions import add_class
# Create your tests here.
class ExtraFunciontionsTests(TestCase):
def teste_add_class(self):
field = Widget()
add_class(field, "teste")
self.assertEqual('', 'Ricardo')
| 21.529412 | 51 | 0.737705 | 44 | 366 | 6.022727 | 0.545455 | 0.113208 | 0.113208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 366 | 16 | 52 | 22.875 | 0.883333 | 0.062842 | 0 | 0 | 0 | 0 | 0.035191 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
64f169dde7e95020d7483a3a6cfc83ba0f2c825e | 2,306 | py | Python | litterbox/fabric/dataset_record.py | rwightman/tensorflow-litterbox | ddeeb3a6c7de64e5391050ffbb5948feca65ad3c | [
"Apache-2.0"
] | 49 | 2016-09-09T15:31:36.000Z | 2022-03-09T09:43:52.000Z | litterbox/fabric/dataset_record.py | TangxinKevin/tensorflow-litterbox | ddeeb3a6c7de64e5391050ffbb5948feca65ad3c | [
"Apache-2.0"
] | 1 | 2017-06-09T07:24:16.000Z | 2017-06-09T15:28:11.000Z | litterbox/fabric/dataset_record.py | TangxinKevin/tensorflow-litterbox | ddeeb3a6c7de64e5391050ffbb5948feca65ad3c | [
"Apache-2.0"
] | 29 | 2016-09-20T07:29:54.000Z | 2021-09-28T08:03:49.000Z | # Copyright (C) 2016 Ross Wightman. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# ==============================================================================
# Based on original Work Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow as tf
from abc import ABCMeta
from abc import abstractmethod
from .dataset import Dataset
from .dataset import FLAGS
class DatasetRecord(Dataset):
"""A simple class for handling data sets."""
__metaclass__ = ABCMeta
def __init__(self, name, subset):
super(DatasetRecord, self).__init__(name, subset, is_record=True)
def data_files(self):
"""Returns a python list of all (sharded) data subset files.
Returns:
python list of all (sharded) data set files.
Raises:
          ValueError: if there are no data_files matching the subset.
"""
tf_record_pattern = os.path.join(FLAGS.data_dir, '%s-*' % self.subset)
data_files = tf.gfile.Glob(tf_record_pattern)
if not data_files:
print('No files found for dataset %s/%s at %s' %
(self.name, self.subset, FLAGS.data_dir))
exit(-1)
return data_files
def reader(self):
"""Return a reader for a single entry from the data set.
See io_ops.py for details of Reader class.
Returns:
Reader object that reads the data set.
"""
return tf.TFRecordReader()
| 32.942857 | 80 | 0.671726 | 320 | 2,306 | 4.71875 | 0.428125 | 0.059603 | 0.034437 | 0.042384 | 0.274172 | 0.274172 | 0.239735 | 0.239735 | 0.239735 | 0.239735 | 0 | 0.009033 | 0.183868 | 2,306 | 69 | 81 | 33.42029 | 0.793305 | 0.605377 | 0 | 0 | 0 | 0 | 0.050725 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.391304 | 0 | 0.695652 | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
64f6ae25e737cbba29b652bf6659a581666ab26e | 1,574 | py | Python | src/azure-cli/azure/cli/command_modules/serviceconnector/__init__.py | YuanyuanNi/azure-cli | 63844964374858bfacd209bfe1b69eb456bd64ca | [
"MIT"
] | 3,287 | 2016-07-26T17:34:33.000Z | 2022-03-31T09:52:13.000Z | src/azure-cli/azure/cli/command_modules/serviceconnector/__init__.py | YuanyuanNi/azure-cli | 63844964374858bfacd209bfe1b69eb456bd64ca | [
"MIT"
] | 19,206 | 2016-07-26T07:04:42.000Z | 2022-03-31T23:57:09.000Z | src/azure-cli/azure/cli/command_modules/serviceconnector/__init__.py | YuanyuanNi/azure-cli | 63844964374858bfacd209bfe1b69eb456bd64ca | [
"MIT"
] | 2,575 | 2016-07-26T06:44:40.000Z | 2022-03-31T22:56:06.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from azure.cli.core import AzCommandsLoader
from azure.cli.command_modules.serviceconnector._help import helps # pylint: disable=unused-import
class MicrosoftServiceConnectorCommandsLoader(AzCommandsLoader):
def __init__(self, cli_ctx=None):
from azure.cli.core.commands import CliCommandType
from azure.cli.command_modules.serviceconnector._client_factory import cf_connection_cl
connection_custom = CliCommandType(
operations_tmpl='azure.cli.command_modules.serviceconnector.custom#{}',
client_factory=cf_connection_cl)
parent = super(MicrosoftServiceConnectorCommandsLoader, self)
parent.__init__(cli_ctx=cli_ctx, custom_command_type=connection_custom)
def load_command_table(self, args):
from azure.cli.command_modules.serviceconnector.commands import load_command_table as load_command_table_manual
load_command_table_manual(self, args)
return self.command_table
def load_arguments(self, command):
from azure.cli.command_modules.serviceconnector._params import load_arguments as load_arguments_manual
load_arguments_manual(self, command)
COMMAND_LOADER_CLS = MicrosoftServiceConnectorCommandsLoader
| 49.1875 | 119 | 0.693139 | 160 | 1,574 | 6.5125 | 0.39375 | 0.053743 | 0.069098 | 0.105566 | 0.197697 | 0.161228 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135959 | 1,574 | 31 | 120 | 50.774194 | 0.766176 | 0.232529 | 0 | 0 | 0 | 0 | 0.043261 | 0.043261 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.315789 | 0 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
64f8b4b24e32e694e7569255090ea2b01133abbc | 2,991 | py | Python | gui/interface.py | sgtdarkskull/passwordManager | e0570c4dd185bd66319338f682fc653108693aeb | [
"MIT"
] | null | null | null | gui/interface.py | sgtdarkskull/passwordManager | e0570c4dd185bd66319338f682fc653108693aeb | [
"MIT"
] | null | null | null | gui/interface.py | sgtdarkskull/passwordManager | e0570c4dd185bd66319338f682fc653108693aeb | [
"MIT"
] | null | null | null | # from kivy.app import App
# from kivy.core.text import LabelBase
# from kivy.uix.gridlayout import GridLayout
# from kivy.uix.relativelayout import RelativeLayout
# from kivy.uix.screenmanager import Screen, ScreenManager
# from kivy.config import Config
# from gui.components import *
#
# LabelBase.register(name='Macondo', fn_regular='assets/Macondo.ttf')
# LabelBase.register(name='Kalam', fn_regular='assets/Kalam-Regular.ttf')
#
# Config.set('graphics', 'width', '480')
# Config.set('graphics', 'height', '720')
# Config.write()
#
#
# class PScreenManager(ScreenManager):
#     pass
#
#
# class MainScreen(Screen):
#     pass
#
#
# class MainLayout(GridLayout):
#     pass
#
#
# class MainInterface(RelativeLayout):
#     pass
#
#
# class LoginScreen(Screen):
#     pass
#
#
# class LoginLayout(GridLayout):
#
#     def validate_login_request(self):
#         username = self.ids["username"].text
#         passwd = self.ids["passwd"].text
#
#         if username == "user1" and passwd == "user1":
#             self.ids["login_label"].text = "Access Granted"
#
#
# class Interface(App):
#     def build(self):
#         self.title = "Password Manager"
#         return MainInterface()
#
#
#
# <MainInterface>:
#     PScreenManager:
#
# <PScreenManager>:
#     id: s_manager
#     MainScreen:
#     #LoginScreen:
#
#
# <MainScreen>:
#     name: "main_screen"
#     MainLayout:
#
#
# <MainLayout>:
#     cols: 1
#     rows: 2
#     padding: (dp(20), 0, dp(20), dp(20))
#     Label:
#         text: "Password Manager"
#         font_name: "Kalam"
#         font_size: dp(30)
#     GridLayout:
#         rows: 3
#         padding: dp(40)
#         spacing: dp(20)
#         KvRoundButtonUi:
#             button_color: .6, .2, .2
#             button_label_text: "Login"
#             on_release:
#                 app.root.children[0].current = "login_screen"
#                 root.parent.manager.transition.direction = "left"
#         KvRoundButtonUi:
#             button_color: .6, .2, .2
#             button_label_text: "Signup"
#         KvRoundButtonUi:
#             button_color: .6, .2, .2
#             button_label_text: "Check/Generate Password"
#
# <LoginScreen>:
#     name: "login_screen"
#     BoxLayout:
#         orientation: "vertical"
#         KvPreviousButtonUi:
#             on_release:
#                 app.root.children[0].current = "main_screen"
#                 root.manager.transition.direction = "right"
#         LoginLayout:
#
#
# <LoginLayout>:
#     cols: 1
#     rows: 2
#     padding: (dp(20), 0, dp(20), dp(20))
#     Label:
#         id: login_label
#         text: "Login Screen"
#         font_name: "Macondo"
#         font_size: dp(30)
#     GridLayout:
#         cols: 1
#         rows: 3
#         padding: dp(40)
#         spacing: dp(30)
#         GridLayout:
#             cols: 2
#             padding: dp(10)
#             Label:
#                 text: "USERNAME: "
#             KvTextFieldUi:
#                 id: username
#         GridLayout:
#             cols: 2
#             padding: dp(10)
#             Label:
#                 text: "PASSWORD"
#             KvTextFieldUi:
#                 id: passwd
#                 password: True
#                 #password_mask: "*"
#         GridLayout:
#             cols: 1
#             KvRoundButtonUi:
#                 id: login_button
#                 button_color: .6, .2, .2
#                 button_label_text: "Login"
#                 on_press: root.validate_login_request()
# digitalocean/LoadBalancers.py (MIT license)
from .reqmethods import Req, Arrange

class LoadBalancers(Req, Arrange):
    slug = 'load_balancers'

    def __init__(self, token):
        self.setToken(token)

    def getAllLoadBalancers(self):
        data = self.get(f'{self.slug}')
        # DigitalOcean error responses carry an 'id' field; pass them through
        if 'id' in data:
            return data
        return self.arrangeData(data)

    def getLoadBalancerById(self, lbId):
        data = self.get(f'{self.slug}/{lbId}')
        if 'id' in data:
            return data
        return self.arrangeData(data)

    def storeLoadBalancer(self, body):
        data = self.post(f'{self.slug}', body)
        return data

    def updateLoadBalancer(self, lbId, body):
        data = self.put(f'{self.slug}/{lbId}', body)
        return data

    def storeForwardingRulesLoadBalancer(self, lbId, body):
        data = self.post(f'{self.slug}/{lbId}/forwarding_rules', body)
        return data

    def deleteForwardingRulesLoadBalancer(self, lbId, body):
        data = self.delete(f'{self.slug}/{lbId}/forwarding_rules', body)
        return data

    def deleteLoadBalancer(self, lbId):
        data = self.delete(f'{self.slug}/{lbId}')
        return data

    def storeDropletsLoadBalancer(self, lbId, dropletsId):
        body = {'droplet_ids': f'{dropletsId}'}
        data = self.post(f'{self.slug}/{lbId}/droplets', body)
        return data

    def deleteDropletsLoadBalancer(self, lbId, dropletsId):
        body = {'droplet_ids': f'{dropletsId}'}
        data = self.delete(f'{self.slug}/{lbId}/droplets', body)
        return data
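The class above only maps URL paths onto HTTP verbs; the actual request and response handling lives in the `Req` and `Arrange` mixins from `.reqmethods`, which are not part of this chunk. The sketch below shows how that mixin composition behaves, using hypothetical in-memory stubs in place of the real HTTP-backed mixins — the stub method bodies and canned data are assumptions, not the library's real implementation.

```python
# Hypothetical stubs standing in for the HTTP-backed Req/Arrange mixins
# from .reqmethods; they return canned data instead of calling the API.
class Req:
    def setToken(self, token):
        self.token = token

    def get(self, path):
        # A real GET would hit the DigitalOcean API with this path.
        return [{'name': 'lb-1'}, {'name': 'lb-2'}]


class Arrange:
    def arrangeData(self, data):
        return {'count': len(data), 'items': data}


class LoadBalancers(Req, Arrange):
    slug = 'load_balancers'

    def __init__(self, token):
        self.setToken(token)

    def getAllLoadBalancers(self):
        data = self.get(f'{self.slug}')
        if 'id' in data:  # error responses are passed through untouched
            return data
        return self.arrangeData(data)


lbs = LoadBalancers('fake-token')
result = lbs.getAllLoadBalancers()  # → {'count': 2, 'items': [...]}
```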
#!/usr/bin/env python
# analyzer/darwin/modules/packages/app.py (MIT license)
# Copyright (C) 2018 phdphuc
# This software may be modified and distributed under the terms
# of the MIT license. See the LICENSE file for details.
from os import system, path
from lib.core.packages import Package
from plistlib import readPlist


class App(Package):
    """ OS X application analysis package. """

    def prepare(self):
        system("/bin/chmod -R +x \"%s\"" % self.target)
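In this analyzer, a package subclasses `Package` and overrides hooks such as `prepare()`, which here marks the whole `.app` bundle executable before launch. The sketch below illustrates that override pattern with a hypothetical minimal `Package` stub (the real base class in `lib.core.packages` does much more), and returns the chmod command string instead of executing it so the behaviour can be checked without side effects.

```python
# Hypothetical stand-in for lib.core.packages.Package; the real base
# class also drives execution and monitoring of the analysis target.
class Package:
    def __init__(self, target):
        self.target = target


class App(Package):
    """OS X application analysis package (sketch)."""

    def prepare(self):
        # The real package shells out via os.system; returning the
        # command string here keeps the sketch side-effect free.
        return '/bin/chmod -R +x "%s"' % self.target


cmd = App('/tmp/Sample.app').prepare()  # → '/bin/chmod -R +x "/tmp/Sample.app"'
```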
# Projects/project07/insert-test.py (MIT license)
from HashTable import HashTable

def assertNode(node, key, value):
    if key is None:
        assert node is None
    else:
        assert node.key == key and node.value == value


ht = HashTable()
ht.insert("abc", 1)
ht.insert("acb", 2)
ht.insert("bac", 3)
assertNode(ht.table[0], "bac", 3)
assertNode(ht.table[1], None, None)
assertNode(ht.table[2], "abc", 1)
assertNode(ht.table[3], "acb", 2)
ht.insert("abc", 10) # Reassignment
assertNode(ht.table[2], "abc", 10)
assert ht.size == 3
assert ht.capacity == 4
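The assertions above imply a `HashTable` whose `table` is a fixed-capacity array of key/value nodes, whose `size` does not grow when an existing key is reassigned, and which resolves collisions by open addressing. Below is a minimal sketch consistent with that contract — the djb2-style hash and linear probing are assumptions, so the exact slot positions asserted in the test above are not reproduced, only the size/capacity/reassignment behaviour.

```python
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value


class HashTable:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.size = 0
        self.table = [None] * capacity

    def _index(self, key):
        # djb2-style string hash; the real HashTable may hash differently,
        # which is why slot positions differ from the course's test above.
        h = 5381
        for ch in key:
            h = (h * 33 + ord(ch)) & 0xFFFFFFFF
        return h % self.capacity

    def insert(self, key, value):
        i = self._index(key)
        for _ in range(self.capacity):  # linear probing on collision
            node = self.table[i]
            if node is None:
                self.table[i] = Node(key, value)
                self.size += 1
                return
            if node.key == key:
                node.value = value  # reassignment: overwrite, size unchanged
                return
            i = (i + 1) % self.capacity


ht = HashTable()
ht.insert("abc", 1)
ht.insert("acb", 2)
ht.insert("bac", 3)
ht.insert("abc", 10)  # reassignment leaves size at 3
```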