hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8e51813cf90869e9e478904329040a48b9d0a1c3 | 269 | py | Python | Virus Project/Key Logger/KeyLog/Keylogger/Keylogger.py | nickh671/CS216-Cyber-Security | 6c372986831b7d1d1de799c2c6c29408466693d5 | [
"MIT"
] | 1 | 2020-01-30T03:25:31.000Z | 2020-01-30T03:25:31.000Z | Virus Project/Key Logger/KeyLog/Keylogger/Keylogger.py | nickh671/CS216-Cyber-Security | 6c372986831b7d1d1de799c2c6c29408466693d5 | [
"MIT"
] | null | null | null | Virus Project/Key Logger/KeyLog/Keylogger/Keylogger.py | nickh671/CS216-Cyber-Security | 6c372986831b7d1d1de799c2c6c29408466693d5 | [
"MIT"
] | null | null | null | import pynput
from pynput.keyboard import Key, Listener
def on_press(key):
print(key + " was pressed")
def on_release(key):
if key == key.esc:
return False
with Listener(on_press=on_press, on_release=on_release) as listener:
listener.join() | 19.214286 | 68 | 0.695167 | 40 | 269 | 4.525 | 0.5 | 0.116022 | 0.099448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208178 | 269 | 14 | 69 | 19.214286 | 0.849765 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.555556 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f3cc995e55bc61f143ee1d25d6145d478ce825c4 | 975 | py | Python | py2/ex20.py | iamovrhere/lpthw | 6ad12dd14bbece75331b98fb20bcb37335727d52 | [
"MIT"
] | null | null | null | py2/ex20.py | iamovrhere/lpthw | 6ad12dd14bbece75331b98fb20bcb37335727d52 | [
"MIT"
] | null | null | null | py2/ex20.py | iamovrhere/lpthw | 6ad12dd14bbece75331b98fb20bcb37335727d52 | [
"MIT"
] | null | null | null | from sys import argv
script, input_file = argv
def print_all(f):
print f.read()
def rewind(f):
f.seek(0)
def print_a_line(line_count, f):
print line_count, f.readline()
def seek_a_line(seek, f):
f.seek(seek)
print seek, f.readline()
current_file = open(input_file)
print "First let's print the whole file: \n"
print_all(current_file)
print "What if we retry print all?"
# Nothing. read() moves the seek.
print_all(current_file)
print "Now let's rewind?"
rewind(current_file)
print "Let's print three lines"
current_line = 1
print_a_line(current_line, current_file)
# ++ and -- don't exist in python.
# There's a strong preference to using iterators instead.
current_line += 1
print_a_line(current_line, current_file)
current_line += 1
print_a_line(current_line, current_file)
print "Seek lines from the middle?"
# Seek is character based not line based.
seek_a_line(2, current_file)
seek_a_line(3, current_file)
seek_a_line(1, current_file)
| 20.3125 | 57 | 0.74359 | 170 | 975 | 4.035294 | 0.335294 | 0.16035 | 0.058309 | 0.074344 | 0.3207 | 0.19242 | 0.19242 | 0.19242 | 0.19242 | 0.19242 | 0 | 0.008485 | 0.153846 | 975 | 47 | 58 | 20.744681 | 0.82303 | 0.164103 | 0 | 0.241379 | 0 | 0 | 0.160494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.034483 | null | null | 0.517241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
f3cf06856dd843c9bc2c06977c531049ba077738 | 210 | py | Python | code_source/script_frame.py | zeddo123/voidgame | 4d247af542bf646c073c9445f36456e5a18c72e9 | [
"MIT"
] | 7 | 2019-02-12T15:37:44.000Z | 2020-04-19T01:19:06.000Z | code_source/script_frame.py | zeddo123/voidgame | 4d247af542bf646c073c9445f36456e5a18c72e9 | [
"MIT"
] | null | null | null | code_source/script_frame.py | zeddo123/voidgame | 4d247af542bf646c073c9445f36456e5a18c72e9 | [
"MIT"
] | 3 | 2019-03-29T18:12:09.000Z | 2019-05-05T22:21:57.000Z | import sys
import os
import re
f = open(sys.argv[1]+"/frames.txt","w");
regex = re.compile(r"png$")
for i in sorted(os.listdir(sys.argv[1])):
if regex.search(i):
f.write(sys.argv[1]+"/"+i+'\n')
f.close() | 16.153846 | 41 | 0.62381 | 41 | 210 | 3.195122 | 0.609756 | 0.160305 | 0.183206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016304 | 0.12381 | 210 | 13 | 42 | 16.153846 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0.090047 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f3d22d96146bb149a9d37d0000e6b97b6775fed2 | 742 | py | Python | pythpy/state/core.py | johnstonematt/pythpy | e1459ceae3e7edd4e79bfd86ad5df43caa7b0317 | [
"MIT"
] | 1 | 2021-11-29T22:45:49.000Z | 2021-11-29T22:45:49.000Z | pythpy/state/core.py | johnstonematt/pythpy | e1459ceae3e7edd4e79bfd86ad5df43caa7b0317 | [
"MIT"
] | null | null | null | pythpy/state/core.py | johnstonematt/pythpy | e1459ceae3e7edd4e79bfd86ad5df43caa7b0317 | [
"MIT"
] | 1 | 2021-12-11T17:58:20.000Z | 2021-12-11T17:58:20.000Z | import json
from abc import ABC, abstractmethod
from construct import Struct, Container
class StateCore(ABC):
layout: Struct = None
@classmethod
@abstractmethod
def from_container(cls, container: Container):
pass
@classmethod
def parse(cls, bytes_data: bytes, factor: int):
container = cls.layout.parse(bytes_data)
obj = cls.from_container(container=container)
obj = obj.parse_precision(factor=factor)
return obj
@abstractmethod
def parse_precision(self, factor: int):
pass
@abstractmethod
def to_dict(self) -> dict:
pass
def __str__(self):
my_dict = self.to_dict()
return json.dumps(my_dict, sort_keys=False, indent=4) | 23.935484 | 61 | 0.66442 | 89 | 742 | 5.370787 | 0.393258 | 0.106695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001799 | 0.250674 | 742 | 31 | 61 | 23.935484 | 0.857914 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0.125 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f3d9c66fba4bc89341f37fb0054f5f6e77ebc4f1 | 2,990 | py | Python | shadowfiend/conductor/handlers/account_api.py | RogerYuQian/shadowfiend | beb99b523bd9941e9920e4f8305c9a1d9fc610e7 | [
"Apache-2.0"
] | 1 | 2019-03-07T04:34:43.000Z | 2019-03-07T04:34:43.000Z | shadowfiend/conductor/handlers/account_api.py | RogerYuQian/shadowfiend | beb99b523bd9941e9920e4f8305c9a1d9fc610e7 | [
"Apache-2.0"
] | null | null | null | shadowfiend/conductor/handlers/account_api.py | RogerYuQian/shadowfiend | beb99b523bd9941e9920e4f8305c9a1d9fc610e7 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright 2014 Objectif Libre
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from shadowfiend.db import api as dbapi
from shadowfiend.db import models as db_models
LOG = log.getLogger(__name__)
class Handler(object):
dbapi = dbapi.get_instance()
def __init__(self):
super(Handler, self).__init__()
def create_account(cls, context, **kwargs):
LOG.debug('create account: Received message from RPC.')
account = db_models.Account(**kwargs)
return cls.dbapi.create_account(context, account)
def get_account(cls, context, **kwargs):
LOG.debug('get account: Received message from RPC.')
account = cls.dbapi.get_account(context, **kwargs)
return account
def get_accounts(cls, context, **kwargs):
LOG.debug('get accounts: Received message from RPC.')
accounts = cls.dbapi.get_accounts(context, **kwargs)
return accounts
def get_accounts_count(cls, context, **kwargs):
LOG.debug('get accounts count: Received message from RPC.')
accounts = cls.dbapi.get_accounts_count(context, **kwargs)
return accounts
def delete_account(cls, context, **kwargs):
LOG.debug('delete account: Received message from RPC.')
cls.dbapi.delete_account(context, **kwargs)
def change_account_level(cls, context, **kwargs):
LOG.debug('change account level: Received message from RPC.')
return cls.dbapi.change_account_level(context, **kwargs)
def charge_account(cls, context, **kwargs):
LOG.debug('update account: Received message from RPC.')
return cls.dbapi.charge_account(context,
kwargs.pop('user_id'),
**kwargs)
def update_account(cls, context, **kwargs):
LOG.debug('update account: Received message from RPC.')
return cls.dbapi.update_account(context,
kwargs.pop('user_id'),
**kwargs)
def get_charges(cls, context, **kwargs):
LOG.debug('get charges: Received message from RPC.')
return cls.dbapi.get_charges(context, **kwargs)
def get_charges_price_and_count(cls, context, **kwargs):
LOG.debug('get_charges_price_and_count: Received message from RPC.')
return cls.dbapi.get_charges_price_and_count(context, **kwargs)
| 38.333333 | 78 | 0.656856 | 374 | 2,990 | 5.112299 | 0.28877 | 0.129184 | 0.083682 | 0.099372 | 0.491632 | 0.406904 | 0.319038 | 0.222803 | 0.183054 | 0.083682 | 0 | 0.003981 | 0.243813 | 2,990 | 77 | 79 | 38.831169 | 0.841663 | 0.201003 | 0 | 0.173913 | 0 | 0 | 0.189132 | 0.011794 | 0 | 0 | 0 | 0 | 0 | 1 | 0.23913 | false | 0 | 0.065217 | 0 | 0.543478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f3fc907269919ea58efdcc699ee2085da711639e | 19,605 | py | Python | scuole/stats/schemas/tapr/schema_v2.py | texastribune/scuole | 8ab316ee50ef0d8e71b94b50dc889d10c6e83412 | [
"MIT"
] | 1 | 2019-03-12T04:30:02.000Z | 2019-03-12T04:30:02.000Z | scuole/stats/schemas/tapr/schema_v2.py | texastribune/scuole | 8ab316ee50ef0d8e71b94b50dc889d10c6e83412 | [
"MIT"
] | 616 | 2017-08-18T21:15:39.000Z | 2022-03-25T11:17:10.000Z | scuole/stats/schemas/tapr/schema_v2.py | texastribune/scuole | 8ab316ee50ef0d8e71b94b50dc889d10c6e83412 | [
"MIT"
] | null | null | null | SCHEMA = {
# staff-and-student-information
"all_students_count": "{short_code}PETALLC",
"african_american_count": "{short_code}PETBLAC",
"african_american_percent": "{short_code}PETBLAP",
"american_indian_count": "{short_code}PETINDC",
"american_indian_percent": "{short_code}PETINDP",
"asian_count": "{short_code}PETASIC",
"asian_percent": "{short_code}PETASIP",
"hispanic_count": "{short_code}PETHISC",
"hispanic_percent": "{short_code}PETHISP",
"pacific_islander_count": "{short_code}PETPCIC",
"pacific_islander_percent": "{short_code}PETPCIP",
"two_or_more_races_count": "{short_code}PETTWOC",
"two_or_more_races_percent": "{short_code}PETTWOP",
"white_count": "{short_code}PETWHIC",
"white_percent": "{short_code}PETWHIP",
"early_childhood_education_count": "{short_code}PETGEEC",
"early_childhood_education_percent": "{short_code}PETGEEP",
"prek_count": "{short_code}PETGPKC",
"prek_percent": "{short_code}PETGPKP",
"kindergarten_count": "{short_code}PETGKNC",
"kindergarten_percent": "{short_code}PETGKNP",
"first_count": "{short_code}PETG01C",
"first_percent": "{short_code}PETG01P",
"second_count": "{short_code}PETG02C",
"second_percent": "{short_code}PETG02P",
"third_count": "{short_code}PETG03C",
"third_percent": "{short_code}PETG03P",
"fourth_count": "{short_code}PETG04C",
"fourth_percent": "{short_code}PETG04P",
"fifth_count": "{short_code}PETG05C",
"fifth_percent": "{short_code}PETG05P",
"sixth_count": "{short_code}PETG06C",
"sixth_percent": "{short_code}PETG06P",
"seventh_count": "{short_code}PETG07C",
"seventh_percent": "{short_code}PETG07P",
"eighth_count": "{short_code}PETG08C",
"eighth_percent": "{short_code}PETG08P",
"ninth_count": "{short_code}PETG09C",
"ninth_percent": "{short_code}PETG09P",
"tenth_count": "{short_code}PETG10C",
"tenth_percent": "{short_code}PETG10P",
"eleventh_count": "{short_code}PETG11C",
"eleventh_percent": "{short_code}PETG11P",
"twelfth_count": "{short_code}PETG12C",
"twelfth_percent": "{short_code}PETG12P",
"at_risk_count": "{short_code}PETRSKC",
"at_risk_percent": "{short_code}PETRSKP",
"economically_disadvantaged_count": "{short_code}PETECOC",
"economically_disadvantaged_percent": "{short_code}PETECOP",
"limited_english_proficient_count": "{short_code}PETLEPC",
"limited_english_proficient_percent": "{short_code}PETLEPP",
"bilingual_esl_count": "{short_code}PETBILC",
"bilingual_esl_percent": "{short_code}PETBILP",
"career_technical_education_count": "{short_code}PETVOCC",
"career_technical_education_percent": "{short_code}PETVOCP",
"gifted_and_talented_count": "{short_code}PETGIFC",
"gifted_and_talented_percent": "{short_code}PETGIFP",
"special_education_count": "{short_code}PETSPEC",
"special_education_percent": "{short_code}PETSPEP",
"class_size_avg_kindergarten": "{short_code}PCTGKGA",
"class_size_avg_first": "{short_code}PCTG01A",
"class_size_avg_second": "{short_code}PCTG02A",
"class_size_avg_third": "{short_code}PCTG03A",
"class_size_avg_fourth": "{short_code}PCTG04A",
"class_size_avg_fifth": "{short_code}PCTG05A",
"class_size_avg_sixth": "{short_code}PCTG06A",
"class_size_avg_mixed_elementary": "{short_code}PCTGMEA",
"class_size_avg_secondary_english": "{short_code}PCTENGA",
"class_size_avg_secondary_foreign_language": "{short_code}PCTFLAA",
"class_size_avg_secondary_math": "{short_code}PCTMATA",
"class_size_avg_secondary_science": "{short_code}PCTSCIA",
"class_size_avg_secondary_social_studies": "{short_code}PCTSOCA",
"students_per_teacher": "{short_code}PSTKIDR",
# teacher_avg_tenure is Average Years Experience of Teachers with District
"teacher_avg_tenure": "{short_code}PSTTENA",
# teacher_avg_experience is Average Years Experience of Teachers
"teacher_avg_experience": "{short_code}PSTEXPA",
"teacher_avg_base_salary": "{short_code}PSTTOSA",
"teacher_avg_beginning_salary": "{short_code}PST00SA",
"teacher_avg_1_to_5_year_salary": "{short_code}PST01SA",
"teacher_avg_6_to_10_year_salary": "{short_code}PST06SA",
"teacher_avg_11_to_20_year_salary": "{short_code}PST11SA",
"teacher_avg_20_plus_year_salary": "{short_code}PST20SA",
"teacher_total_fte_count": "{short_code}PSTTOFC",
"teacher_african_american_fte_count": "{short_code}PSTBLFC",
"teacher_american_indian_fte_count": "{short_code}PSTINFC",
"teacher_asian_fte_count": "{short_code}PSTASFC",
"teacher_hispanic_fte_count": "{short_code}PSTHIFC",
"teacher_pacific_islander_fte_count": "{short_code}PSTPIFC",
"teacher_two_or_more_races_fte_count": "{short_code}PSTTWFC",
"teacher_white_fte_count": "{short_code}PSTWHFC",
"teacher_total_fte_percent": "{short_code}PSTTOFC",
"teacher_african_american_fte_percent": "{short_code}PSTBLFP",
"teacher_american_indian_fte_percent": "{short_code}PSTINFP",
"teacher_asian_fte_percent": "{short_code}PSTASFP",
"teacher_hispanic_fte_percent": "{short_code}PSTHIFP",
"teacher_pacific_islander_fte_percent": "{short_code}PSTPIFP",
"teacher_two_or_more_races_fte_percent": "{short_code}PSTTWFP",
"teacher_white_fte_percent": "{short_code}PSTWHFP",
"teacher_no_degree_count": "{short_code}PSTNOFC",
"teacher_bachelors_count": "{short_code}PSTBAFC",
"teacher_masters_count": "{short_code}PSTMSFC",
"teacher_doctorate_count": "{short_code}PSTPHFC",
"teacher_no_degree_percent": "{short_code}PSTNOFP",
"teacher_bachelors_percent": "{short_code}PSTBAFP",
"teacher_masters_percent": "{short_code}PSTMSFP",
"teacher_doctorate_percent": "{short_code}PSTPHFP",
# postsecondary-readiness-and-non-staar-performance-indicators
# 'college_ready_graduates_english_all_students_count': 'ACRR',
"college_ready_graduates_english_all_students_percent": "{short_code}ACRR{year}R",
# 'college_ready_graduates_english_african_american_count': 'BCRR',
"college_ready_graduates_english_african_american_percent": "{short_code}BCRR{year}R",
# 'college_ready_graduates_english_american_indian_count': 'ICRR',
"college_ready_graduates_english_american_indian_percent": "{short_code}ICRR{year}R",
# 'college_ready_graduates_english_asian_count': '3CRR',
"college_ready_graduates_english_asian_percent": "{short_code}3CRR{year}R",
# 'college_ready_graduates_english_hispanic_count': 'HCRR',
"college_ready_graduates_english_hispanic_percent": "{short_code}HCRR{year}R",
# 'college_ready_graduates_english_pacific_islander_count': '4CRR',
"college_ready_graduates_english_pacific_islander_percent": "{short_code}4CRR{year}R",
# 'college_ready_graduates_english_two_or_more_races_count': '2CRR',
"college_ready_graduates_english_two_or_more_races_percent": "{short_code}2CRR{year}R",
# 'college_ready_graduates_english_white_count': 'WCRR',
"college_ready_graduates_english_white_percent": "{short_code}WCRR{year}R",
# 'college_ready_graduates_english_economically_disadvantaged_count': 'ECRR',
"college_ready_graduates_english_economically_disadvantaged_percent": "{short_code}ECRR{year}R",
# 'college_ready_graduates_english_limited_english_proficient_count': 'LCRR',
"college_ready_graduates_english_limited_english_proficient_percent": "{short_code}LCRR{year}R",
# 'college_ready_graduates_english_at_risk_count': 'RCRR',
"college_ready_graduates_english_at_risk_percent": "{short_code}RCRR{year}R",
# 'college_ready_graduates_math_all_students_count': 'ACRM',
"college_ready_graduates_math_all_students_percent": "{short_code}ACRM{year}R",
# 'college_ready_graduates_math_african_american_count': 'BCRM',
"college_ready_graduates_math_african_american_percent": "{short_code}BCRM{year}R",
# 'college_ready_graduates_math_american_indian_count': 'ICRM',
"college_ready_graduates_math_american_indian_percent": "{short_code}ICRM{year}R",
# 'college_ready_graduates_math_asian_count': '3CRM',
"college_ready_graduates_math_asian_percent": "{short_code}3CRM{year}R",
# 'college_ready_graduates_math_hispanic_count': 'HCRM',
"college_ready_graduates_math_hispanic_percent": "{short_code}HCRM{year}R",
# 'college_ready_graduates_math_pacific_islander_count': '4CRM',
"college_ready_graduates_math_pacific_islander_percent": "{short_code}4CRM{year}R",
# 'college_ready_graduates_math_two_or_more_races_count': '2CRM',
"college_ready_graduates_math_two_or_more_races_percent": "{short_code}2CRM{year}R",
# 'college_ready_graduates_math_white_count': 'WCRM',
"college_ready_graduates_math_white_percent": "{short_code}WCRM{year}R",
# 'college_ready_graduates_math_economically_disadvantaged_count': 'ECRM',
"college_ready_graduates_math_economically_disadvantaged_percent": "{short_code}ECRM{year}R",
# 'college_ready_graduates_math_limited_english_proficient_count': 'LCRM',
"college_ready_graduates_math_limited_english_proficient_percent": "{short_code}LCRM{year}R",
# 'college_ready_graduates_math_at_risk_count': 'RCRM',
"college_ready_graduates_math_at_risk_percent": "{short_code}RCRM{year}R",
# 'college_ready_graduates_both_all_students_count': 'ACRB',
"college_ready_graduates_both_all_students_percent": "{short_code}ACRB{year}R",
# 'college_ready_graduates_both_african_american_count': 'BCRB',
"college_ready_graduates_both_african_american_percent": "{short_code}BCRB{year}R",
# 'college_ready_graduates_both_asian_count': '3CRB',
"college_ready_graduates_both_asian_percent": "{short_code}3CRB{year}R",
# 'college_ready_graduates_both_hispanic_count': 'HCRB',
"college_ready_graduates_both_hispanic_percent": "{short_code}HCRB{year}R",
# 'college_ready_graduates_both_american_indian_count': 'ICRB',
"college_ready_graduates_both_american_indian_percent": "{short_code}ICRB{year}R",
# 'college_ready_graduates_both_pacific_islander_count': '4CRB',
"college_ready_graduates_both_pacific_islander_percent": "{short_code}4CRB{year}R",
# 'college_ready_graduates_both_two_or_more_races_count': '2CRB',
"college_ready_graduates_both_two_or_more_races_percent": "{short_code}2CRB{year}R",
# 'college_ready_graduates_both_white_count': 'WCRB',
"college_ready_graduates_both_white_percent": "{short_code}WCRB{year}R",
# 'college_ready_graduates_both_economically_disadvantaged_count': 'ECRB',
"college_ready_graduates_both_economically_disadvantaged_percent": "{short_code}ECRB{year}R",
# 'college_ready_graduates_both_limited_english_proficient_count': 'LCRB',
"college_ready_graduates_both_limited_english_proficient_percent": "{short_code}LCRB{year}R",
# 'college_ready_graduates_both_at_risk_count': 'RCRB',
"college_ready_graduates_both_at_risk_percent": "{short_code}RCRB{year}R",
"avg_sat_score_all_students": "{short_code}A0CSA{year}R",
"avg_sat_score_african_american": "{short_code}B0CSA{year}R",
"avg_sat_score_american_indian": "{short_code}I0CSA{year}R",
"avg_sat_score_asian": "{short_code}30CSA{year}R",
"avg_sat_score_hispanic": "{short_code}H0CSA{year}R",
"avg_sat_score_pacific_islander": "{short_code}40CSA{year}R",
"avg_sat_score_two_or_more_races": "{short_code}20CSA{year}R",
"avg_sat_score_white": "{short_code}W0CSA{year}R",
"avg_sat_score_economically_disadvantaged": "{short_code}E0CSA{year}R",
"avg_act_score_all_students": "{short_code}A0CAA{year}R",
"avg_act_score_african_american": "{short_code}B0CAA{year}R",
"avg_act_score_american_indian": "{short_code}I0CAA{year}R",
"avg_act_score_asian": "{short_code}30CAA{year}R",
"avg_act_score_hispanic": "{short_code}H0CAA{year}R",
"avg_act_score_pacific_islander": "{short_code}40CAA{year}R",
"avg_act_score_two_or_more_races": "{short_code}20CAA{year}R",
"avg_act_score_white": "{short_code}W0CAA{year}R",
"avg_act_score_economically_disadvantaged": "{short_code}E0CAA{year}R",
# 'ap_ib_all_students_count_above_criterion': 'A0BKA',
"ap_ib_all_students_percent_above_criterion": "{short_code}A0BKA{year}R",
# 'ap_ib_african_american_count_above_criterion': 'B0BKA',
"ap_ib_african_american_percent_above_criterion": "{short_code}B0BKA{year}R",
# 'ap_ib_asian_count_above_criterion': '30BKA',
"ap_ib_asian_percent_above_criterion": "{short_code}30BKA{year}R",
# 'ap_ib_hispanic_count_above_criterion': 'H0BKA',
"ap_ib_hispanic_percent_above_criterion": "{short_code}H0BKA{year}R",
# 'ap_ib_american_indian_count_above_criterion': 'I0BKA',
"ap_ib_american_indian_percent_above_criterion": "{short_code}I0BKA{year}R",
# 'ap_ib_pacific_islander_count_above_criterion': '40BKA',
"ap_ib_pacific_islander_percent_above_criterion": "{short_code}40BKA{year}R",
# 'ap_ib_two_or_more_races_count_above_criterion': '20BKA',
"ap_ib_two_or_more_races_percent_above_criterion": "{short_code}20BKA{year}R",
# 'ap_ib_white_count_above_criterion': 'W0BKA',
"ap_ib_white_percent_above_criterion": "{short_code}W0BKA{year}R",
# 'ap_ib_economically_disadvantaged_count_above_criterion': 'E0BKA',
"ap_ib_economically_disadvantaged_percent_above_criterion": "{short_code}E0BKA{year}R",
"ap_ib_all_students_percent_taking": "{short_code}A0BTA{year}R",
"ap_ib_african_american_percent_taking": "{short_code}B0BTA{year}R",
"ap_ib_asian_percent_taking": "{short_code}30BTA{year}R",
"ap_ib_hispanic_percent_taking": "{short_code}H0BTA{year}R",
"ap_ib_american_indian_percent_taking": "{short_code}I0BTA{year}R",
"ap_ib_pacific_islander_percent_taking": "{short_code}40BTA{year}R",
"ap_ib_two_or_more_races_percent_taking": "{short_code}20BTA{year}R",
"ap_ib_white_percent_taking": "{short_code}W0BTA{year}R",
"ap_ib_economically_disadvantaged_percent_taking": "{short_code}E0BTA{year}R",
# # 'dropout_all_students_count': 'A0912DR',
# 'dropout_all_students_percent': 'A0912DR',
# # 'dropout_african_american_count': 'B0912DR',
# 'dropout_african_american_percent': 'B0912DR',
# # 'dropout_asian_count': '30912DR',
# 'dropout_asian_percent': '30912DR',
# # 'dropout_hispanic_count': 'H0912DR',
# 'dropout_hispanic_percent': 'H0912DR',
# # 'dropout_american_indian_count': 'I0912DR',
# 'dropout_american_indian_percent': 'I0912DR',
# # 'dropout_pacific_islander_count': '40912DR',
# 'dropout_pacific_islander_percent': '40912DR',
# # 'dropout_two_or_more_races_count': '20912DR',
# 'dropout_two_or_more_races_percent': '20912DR',
# # 'dropout_white_count': 'W0912DR',
# 'dropout_white_percent': 'W0912DR',
# # 'dropout_at_risk_count': 'R0912DR',
# 'dropout_at_risk_percent': 'R0912DR',
# # 'dropout_economically_disadvantaged_count': 'E0912DR',
# 'dropout_economically_disadvantaged_percent': 'E0912DR',
# # 'dropout_limited_english_proficient_count': 'E0912DR',
# 'dropout_limited_english_proficient_percent': 'E0912DR',
# # 'four_year_graduate_all_students_count': 'AGC4X',
# 'four_year_graduate_all_students_percent': 'AGC4X',
# # 'four_year_graduate_african_american_count': 'BGC4X',
# 'four_year_graduate_african_american_percent': 'BGC4X',
# # 'four_year_graduate_american_indian_count': 'IGC4X',
# 'four_year_graduate_american_indian_percent': 'IGC4X',
# # 'four_year_graduate_asian_count': '3GC4X',
# 'four_year_graduate_asian_percent': '3GC4X',
# # 'four_year_graduate_hispanic_count': 'HGC4X',
# 'four_year_graduate_hispanic_percent': 'HGC4X',
# # 'four_year_graduate_pacific_islander_count': '4GC4X',
# 'four_year_graduate_pacific_islander_percent': '4GC4X',
# # 'four_year_graduate_two_or_more_races_count': '2GC4X',
# 'four_year_graduate_two_or_more_races_percent': '2GC4X',
# # 'four_year_graduate_white_count': 'WGC4X',
# 'four_year_graduate_white_percent': 'WGC4X',
# # 'four_year_graduate_at_risk_count': 'RGC4X',
# 'four_year_graduate_at_risk_percent': 'RGC4X',
# # 'four_year_graduate_economically_disadvantaged_count': 'EGC4X',
# 'four_year_graduate_economically_disadvantaged_percent': 'EGC4X',
# # 'four_year_graduate_limited_english_proficient_count': 'L3C4X',
# 'four_year_graduate_limited_english_proficient_percent': 'L3C4X',
# attendence
"attendance_rate": "{short_code}A0AT{year}R",
# longitudinal-rate
# 'dropout_all_students_count': 'A0912DR',
"dropout_all_students_percent": "{short_code}A0912DR{year}R",
# 'dropout_african_american_count': 'B0912DR',
"dropout_african_american_percent": "{short_code}B0912DR{year}R",
# 'dropout_asian_count': '30912DR',
"dropout_asian_percent": "{short_code}30912DR{year}R",
# 'dropout_hispanic_count': 'H0912DR',
"dropout_hispanic_percent": "{short_code}H0912DR{year}R",
# 'dropout_american_indian_count': 'I0912DR',
"dropout_american_indian_percent": "{short_code}I0912DR{year}R",
# 'dropout_pacific_islander_count': '40912DR',
"dropout_pacific_islander_percent": "{short_code}40912DR{year}R",
# 'dropout_two_or_more_races_count': '20912DR',
"dropout_two_or_more_races_percent": "{short_code}20912DR{year}R",
# 'dropout_white_count': 'W0912DR',
"dropout_white_percent": "{short_code}W0912DR{year}R",
# 'dropout_at_risk_count': 'R0912DR',
"dropout_at_risk_percent": "{short_code}R0912DR{year}R",
# 'dropout_economically_disadvantaged_count': 'E0912DR',
"dropout_economically_disadvantaged_percent": "{short_code}E0912DR{year}R",
# 'dropout_limited_english_proficient_count': 'E0912DR',
"dropout_limited_english_proficient_percent": "{short_code}E0912DR{year}R",
# 'four_year_graduate_all_students_count': 'AGC4X',
"four_year_graduate_all_students_percent": "{short_code}AGC4{suffix}{year}R",
# 'four_year_graduate_african_american_count': 'BGC4X',
"four_year_graduate_african_american_percent": "{short_code}BGC4{suffix}{year}R",
# 'four_year_graduate_american_indian_count': 'IGC4X',
"four_year_graduate_american_indian_percent": "{short_code}IGC4{suffix}{year}R",
# 'four_year_graduate_asian_count': '3GC4X',
"four_year_graduate_asian_percent": "{short_code}3GC4{suffix}{year}R",
# 'four_year_graduate_hispanic_count': 'HGC4X',
"four_year_graduate_hispanic_percent": "{short_code}HGC4{suffix}{year}R",
# 'four_year_graduate_pacific_islander_count': '4GC4X',
"four_year_graduate_pacific_islander_percent": "{short_code}4GC4{suffix}{year}R",
# 'four_year_graduate_two_or_more_races_count': '2GC4X',
"four_year_graduate_two_or_more_races_percent": "{short_code}2GC4{suffix}{year}R",
# 'four_year_graduate_white_count': 'WGC4X',
"four_year_graduate_white_percent": "{short_code}WGC4{suffix}{year}R",
# 'four_year_graduate_at_risk_count': 'RGC4X',
"four_year_graduate_at_risk_percent": "{short_code}RGC4{suffix}{year}R",
# 'four_year_graduate_economically_disadvantaged_count': 'EGC4X',
"four_year_graduate_economically_disadvantaged_percent": "{short_code}EGC4{suffix}{year}R",
# 'four_year_graduate_limited_english_proficient_count': 'L3C4X',
"four_year_graduate_limited_english_proficient_percent": "{short_code}L3C4{suffix}{year}R",
# reference
"accountability_rating": "{short_code}_RATING",
# accountability
"student_achievement_rating": "{short_code}D1G",
"school_progress_rating": "{short_code}D2G",
"closing_the_gaps_rating": "{short_code}D3G",
}
| 61.265625 | 100 | 0.762153 | 2,485 | 19,605 | 5.433803 | 0.137223 | 0.13397 | 0.113753 | 0.040287 | 0.607198 | 0.463082 | 0.280826 | 0.218914 | 0.197734 | 0.142931 | 0 | 0.023412 | 0.104565 | 19,605 | 319 | 101 | 61.45768 | 0.74577 | 0.306401 | 0 | 0 | 0 | 0 | 0.776729 | 0.57025 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d09293ff75f6a47c63abbde7ef20019d664ac9e | 1,447 | py | Python | users/migrations/0013_auto_20190516_1528.py | JianaXu/productivity | f7478ce2f154538185f071d58a0e4a2cc0b17ee8 | [
"bzip2-1.0.6"
] | null | null | null | users/migrations/0013_auto_20190516_1528.py | JianaXu/productivity | f7478ce2f154538185f071d58a0e4a2cc0b17ee8 | [
"bzip2-1.0.6"
] | 10 | 2019-05-15T06:25:36.000Z | 2022-02-10T08:46:38.000Z | users/migrations/0013_auto_20190516_1528.py | JianaXu/productivity | f7478ce2f154538185f071d58a0e4a2cc0b17ee8 | [
"bzip2-1.0.6"
] | 1 | 2019-05-21T03:01:48.000Z | 2019-05-21T03:01:48.000Z | # Generated by Django 2.1.1 on 2019-05-16 07:28
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('users', '0012_auto_20190516_1223'),
]
operations = [
migrations.RemoveField(
model_name='group',
name='ctime',
),
migrations.RemoveField(
model_name='group',
name='current_task',
),
migrations.RemoveField(
model_name='group',
name='mime',
),
migrations.RemoveField(
model_name='group',
name='name',
),
migrations.RemoveField(
model_name='group',
name='status',
),
migrations.AddField(
model_name='group',
name='curr_grp',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='current_grp', to=settings.AUTH_USER_MODEL),
),
migrations.RemoveField(
model_name='group',
name='user',
),
migrations.AddField(
model_name='group',
name='user',
field=models.ManyToManyField(to=settings.AUTH_USER_MODEL),
),
]
| 28.372549 | 153 | 0.545957 | 134 | 1,447 | 5.716418 | 0.402985 | 0.093995 | 0.146214 | 0.18799 | 0.469974 | 0.399478 | 0 | 0 | 0 | 0 | 0 | 0.032735 | 0.345543 | 1,447 | 50 | 154 | 28.94 | 0.776135 | 0.031099 | 0 | 0.590909 | 1 | 0 | 0.093333 | 0.017037 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.068182 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d0abc1979057205543e071fcc6933f213509abc | 890 | py | Python | modules/dbnd-airflow-monitor/src/airflow_monitor/shared/base_server_monitor_config.py | dmytrostriletskyi/dbnd | d4a5f5167523e80439c9d64182cdc87b40cbc48f | [
"Apache-2.0"
] | null | null | null | modules/dbnd-airflow-monitor/src/airflow_monitor/shared/base_server_monitor_config.py | dmytrostriletskyi/dbnd | d4a5f5167523e80439c9d64182cdc87b40cbc48f | [
"Apache-2.0"
] | null | null | null | modules/dbnd-airflow-monitor/src/airflow_monitor/shared/base_server_monitor_config.py | dmytrostriletskyi/dbnd | d4a5f5167523e80439c9d64182cdc87b40cbc48f | [
"Apache-2.0"
] | null | null | null | from typing import Optional
from uuid import UUID
import attr
from airflow_monitor.shared.base_monitor_config import BaseMonitorConfig
@attr.s
class BaseServerConfig(object):
source_name: str = attr.ib()
source_type: str = attr.ib()
tracking_source_uid: UUID = attr.ib()
sync_interval: int = attr.ib(default=10) # Sync interval in seconds
is_sync_enabled: bool = attr.ib(default=True)
fetcher_type = attr.ib(default=None) # type: str
log_level = attr.ib(default=None) # type: str
@classmethod
def create(
cls, server_config: dict, monitor_config: Optional[BaseMonitorConfig] = None
):
raise NotImplementedError()
@attr.s
class TrackingServiceConfig:
url = attr.ib()
access_token = attr.ib(default=None)
user = attr.ib(default=None)
password = attr.ib(default=None)
service_type = attr.ib(default=None)
| 25.428571 | 84 | 0.705618 | 118 | 890 | 5.186441 | 0.449153 | 0.117647 | 0.169935 | 0.166667 | 0.119281 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0.002789 | 0.194382 | 890 | 34 | 85 | 26.176471 | 0.850767 | 0.049438 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.04 | 0.16 | 0 | 0.76 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6d1cc7830a743e72f8e4afa75122aae3870fb6d8 | 142 | py | Python | Code/htmlfileread.py | coolvikas/pythonHTTPWebServer | 2b89be729380473a24d5089717c1890128861828 | [
"Apache-2.0"
] | null | null | null | Code/htmlfileread.py | coolvikas/pythonHTTPWebServer | 2b89be729380473a24d5089717c1890128861828 | [
"Apache-2.0"
] | null | null | null | Code/htmlfileread.py | coolvikas/pythonHTTPWebServer | 2b89be729380473a24d5089717c1890128861828 | [
"Apache-2.0"
] | null | null | null | def file_html(fname):
fptr=open(fname,"r")
to_send = ""
for lines in fptr:
to_send += lines;
return to_send
#file_html("myfile.html");
| 15.777778 | 26 | 0.676056 | 24 | 142 | 3.791667 | 0.583333 | 0.197802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161972 | 142 | 8 | 27 | 17.75 | 0.764706 | 0.176056 | 0 | 0 | 0 | 0 | 0.008621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d231af20a05018c0a6b1e05a8b4da554a3d0312 | 6,672 | py | Python | cs15211/CombinationSum.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | 1 | 2021-07-05T01:53:30.000Z | 2021-07-05T01:53:30.000Z | cs15211/CombinationSum.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | null | null | null | cs15211/CombinationSum.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | 1 | 2018-01-08T07:14:08.000Z | 2018-01-08T07:14:08.000Z | __source__ = 'https://leetcode.com/problems/combination-sum/description/'
# https://github.com/kamyu104/LeetCode/blob/master/Python/combination-sum.py
# Time: O(n^m)
# Space: O(m)
#
# Description: Leetcode # 39. Combination Sum
#
# Given a set of candidate numbers (C) and a target number (T),
# find all unique combinations in C where the candidate numbers sums to T.
#
# The same repeated number may be chosen from C unlimited number of times.
#
# Note:
# All numbers (including target) will be positive integers.
# Elements in a combination (a1, a2, ... , ak) must be in non-descending order. (ie, a1 <= a2 <= ... <= ak).
# The solution set must not contain duplicate combinations.
# For example, given candidate set 2,3,6,7 and target 7,
# A solution set is:
# [7]
# [2, 2, 3]
#
# Companies
# Snapchat Uber
# Related Topics
# Array Backtracking
# Similar Questions
# Letter Combinations of a Phone Number Combination Sum II Combinations Combination Sum III
# Factor Combinations Combination Sum IV
# 40. Combination Sum II has duplicate
#
import unittest
class Solution:
# @param candidates, a list of integers
# @param target, integer
# @return a list of lists of integers
def combinationSum(self, candidates, target):
result = []
self.combinationSumRecu(sorted(candidates), result,0, [], target)
return result
def combinationSumRecu(self, candidates, result, start, intermediate, target):
if target == 0:
result.append(intermediate)
while start < len(candidates) and candidates[start] <= target:
self.combinationSumRecu(candidates, result, start, intermediate + [candidates[start]], target - candidates[start])
start += 1
class SolutionOther:
# @param candidates, a list of integers
# @param target, integer
# @return a list of lists of integers
def combinationSum(self, candidates, target):
candidates.sort()
self.ans, tmp = [], []
self.dfs(candidates, target, 0, 0,tmp)
return self.ans
def dfs(self,candidates, target, p, now, tmp):
if now == target:
#self.ans.append(tmp[:])
self.ans.append(tmp)
return
for i in range(p, len(candidates)):
if now + candidates[i] <= target :
#tmp.append(candidates[i])
self.dfs(candidates, target, i, now+candidates[i] , tmp+[candidates[i]])
#tmp.pop()
#test
class TestMethods(unittest.TestCase):
def test_Local(self):
self.assertEqual(1, 1)
test = SolutionOther()
print test.combinationSum([6,2,3], 7)
#print test.combinationSum([2,1], 2)
candidates, target = [2, 3, 6, 7], 7
result = Solution().combinationSum(candidates, target)
print result
if __name__ == '__main__':
unittest.main()
Java = '''
# Thought:
No duplicate in arr, no need to sort arr
40. Combination Sum II has duplicate
# 8ms 99.46%
public class Solution {
public List<List<Integer>> combinationSum(int[] candidates, int target) {
List<List<Integer>> list = new ArrayList<>();
//Arrays.sort(nums);
backtrack(list, new ArrayList<>(), candidates, target, 0);
return list;
}
private void backtrack(List<List<Integer>> list, List<Integer> tempList, int [] nums, int remain, int start){
if (remain < 0) return;
if (remain == 0) {
list.add(new ArrayList<>(tempList));
return;
}
for (int i = start; i < nums.length; i++) {
tempList.add(nums[i]);
backtrack(list, tempList, nums, remain - nums[i], i); // not i + 1 because we can reuse same elements
tempList.remove(tempList.size() - 1);
}
}
}
# 9ms 91.46% if not sort array
# 10ms 83.99% //sort array
class Solution {
public List<List<Integer>> combinationSum(int[] candidates, int target) {
List<List<Integer>> res = new ArrayList<>();
//Arrays.sort(candidates); //run faster if not sort array
dfs(candidates, target, 0, new ArrayList<>(), res);
return res;
}
private void dfs(int[] candidates, int target, int start, List<Integer> tmp, List<List<Integer>> res){
if (target == 0){
res.add(new ArrayList<>(tmp));
return; //no impact on runtime
}
for(int i = start; i < candidates.length; i++){
if( target - candidates[i] >= 0){
tmp.add(candidates[i]);
dfs(candidates, target - candidates[i], i, tmp, res);
tmp.remove(new Integer(candidates[i]));
}
}
}
}
# 8ms 99.46%
class Solution {
public List<List<Integer>> combinationSum(int[] candidates, int target) {
List<List<Integer>> result = new ArrayList<>();
if (candidates.length == 0) {
return result;
}
Arrays.sort(candidates); // need to sort array due to (i)
combinationSum(candidates, target, 0, result, new ArrayList<>());
return result;
}
private void combinationSum(int[] candidates, int target, int index, List<List<Integer>> result, List<Integer> cur) {
if (target == 0) {
result.add(new ArrayList<>(cur));
return;
}
int size = cur.size();
for (int i = index; i < candidates.length && candidates[i] <= target; i++) { //need to sort arr (i) due to candidates[i] <= target
cur.add(candidates[i]);
combinationSum(candidates, target - candidates[i], i, result, cur);
cur.remove(size);
}
}
}
# 14ms 45.41%
class Solution {
public List<List<Integer>> combinationSum(int[] candidates, int target) {
List<List<List<Integer>>> dp = new ArrayList<>(target);
Arrays.sort(candidates);
for (int i = 1; i <= target; i++) {
List<List<Integer>> cur = new ArrayList<>();
for (int j = 0; j < candidates.length && candidates[j] <= i; j++) {
if (candidates[j] == i) {
cur.add(Arrays.asList(candidates[j]));
} else {
for (List<Integer> prev : dp.get(i - candidates[j] - 1)) {
if (candidates[j] <= prev.get(0)) {
List<Integer> newList = new ArrayList<>();
newList.add(candidates[j]);
newList.addAll(prev);
cur.add(newList);
}
}
}
}
dp.add(cur);
}
return dp.get(target - 1);
}
}
'''
| 35.301587 | 138 | 0.578537 | 789 | 6,672 | 4.875792 | 0.230672 | 0.048609 | 0.050689 | 0.034312 | 0.198856 | 0.160905 | 0.145308 | 0.145308 | 0.145308 | 0.145308 | 0 | 0.017793 | 0.292416 | 6,672 | 188 | 139 | 35.489362 | 0.797077 | 0.179856 | 0 | 0.125926 | 0 | 0.051852 | 0.711418 | 0.075875 | 0 | 0 | 0 | 0 | 0.007407 | 0 | null | null | 0 | 0.007407 | null | null | 0.014815 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d2a99fecde31a6a4af9bf695cbcd53620a95d2a | 1,066 | py | Python | ax/benchmark/benchmark_method.py | sparks-baird/Ax | 57ba8714902ac218eb87dc2f90090678aa307a43 | [
"MIT"
] | null | null | null | ax/benchmark/benchmark_method.py | sparks-baird/Ax | 57ba8714902ac218eb87dc2f90090678aa307a43 | [
"MIT"
] | null | null | null | ax/benchmark/benchmark_method.py | sparks-baird/Ax | 57ba8714902ac218eb87dc2f90090678aa307a43 | [
"MIT"
] | null | null | null | # Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from dataclasses import dataclass
from ax.exceptions.core import UserInputError
from ax.modelbridge.generation_strategy import GenerationStrategy
from ax.service.utils.scheduler_options import SchedulerOptions
from ax.utils.common.base import Base
@dataclass(frozen=True)
class BenchmarkMethod(Base):
"""Benchmark method, represented in terms of Ax generation strategy (which tells us
which models to use when) and scheduler options (which tell us extra execution
information like maximum parallelism, early stopping configuration, etc.)
"""
name: str
generation_strategy: GenerationStrategy
scheduler_options: SchedulerOptions
def __post_init__(self) -> None:
if self.scheduler_options.total_trials is None:
raise UserInputError(
"SchedulerOptions.total_trials may not be None in BenchmarkMethod."
)
| 35.533333 | 87 | 0.757974 | 132 | 1,066 | 6.030303 | 0.621212 | 0.030151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187617 | 1,066 | 29 | 88 | 36.758621 | 0.919169 | 0.379925 | 0 | 0 | 0 | 0 | 0.101721 | 0.045383 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
6d374b34660addae6755625188158494a9832526 | 182 | py | Python | scripts/python/do_nothing.py | catforward/batch-launcher | 35fe0c67ba30f34bba781a5e9efeece3d34bf505 | [
"MIT"
] | null | null | null | scripts/python/do_nothing.py | catforward/batch-launcher | 35fe0c67ba30f34bba781a5e9efeece3d34bf505 | [
"MIT"
] | null | null | null | scripts/python/do_nothing.py | catforward/batch-launcher | 35fe0c67ba30f34bba781a5e9efeece3d34bf505 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os, sys
print("my id is %s. wrote by %s." % \
(sys.argv[1], os.getenv("PROC_TYPE", default = "unknown")))
sys.exit(0) | 20.222222 | 60 | 0.571429 | 30 | 182 | 3.433333 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027211 | 0.192308 | 182 | 9 | 61 | 20.222222 | 0.673469 | 0.236264 | 0 | 0 | 0 | 0 | 0.315385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d395494c821bd0b349fbd6c23a637634a0b0583 | 279 | py | Python | examples/usage/expressionsB.py | rmorshea/viewdom | 24c528642e9ef0179999936b2e6f3b8a9d770df8 | [
"MIT"
] | null | null | null | examples/usage/expressionsB.py | rmorshea/viewdom | 24c528642e9ef0179999936b2e6f3b8a9d770df8 | [
"MIT"
] | null | null | null | examples/usage/expressionsB.py | rmorshea/viewdom | 24c528642e9ef0179999936b2e6f3b8a9d770df8 | [
"MIT"
] | null | null | null | from viewdom import html, render
def make_bigly(name: str) -> str:
return f'BIGLY: {name.upper()}'
name = 'viewdom'
result = render(html('<div>Hello {make_bigly(name)}</div>'))
# '<div>Hello BIGLY: VIEWDOM</div>'
# end-before
expected = '<div>Hello BIGLY: VIEWDOM</div>'
| 21.461538 | 60 | 0.666667 | 40 | 279 | 4.6 | 0.475 | 0.146739 | 0.141304 | 0.217391 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139785 | 279 | 12 | 61 | 23.25 | 0.766667 | 0.157706 | 0 | 0 | 0 | 0 | 0.405172 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
6d3b68e07d86851fe5e0a884e7544b6d16b3b476 | 206 | py | Python | faketests/slowtests/test_3.py | Djailla/pytest-sugar | 6ac4a1fa18a51e67c7a759bc11a9264cb78619e4 | [
"BSD-3-Clause"
] | 418 | 2019-08-30T00:41:51.000Z | 2022-03-21T09:31:54.000Z | faketests/slowtests/test_3.py | Djailla/pytest-sugar | 6ac4a1fa18a51e67c7a759bc11a9264cb78619e4 | [
"BSD-3-Clause"
] | 62 | 2019-08-27T16:50:36.000Z | 2022-03-11T07:03:32.000Z | faketests/slowtests/test_3.py | Djailla/pytest-sugar | 6ac4a1fa18a51e67c7a759bc11a9264cb78619e4 | [
"BSD-3-Clause"
] | 25 | 2019-11-27T15:58:16.000Z | 2022-03-21T09:31:56.000Z | import time
import pytest
@pytest.mark.parametrize("index", range(7))
def test_cat(index):
"""Perform several tests with varying execution times."""
time.sleep(0.2 + (index * 0.1))
assert True
| 22.888889 | 61 | 0.68932 | 30 | 206 | 4.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02924 | 0.169903 | 206 | 8 | 62 | 25.75 | 0.795322 | 0.247573 | 0 | 0 | 0 | 0 | 0.033557 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
6d4e1c612f0259f7f3638caf74ccb8db92fe0856 | 7,093 | py | Python | src/lava/lib/dl/slayer/neuron/norm.py | timcheck/lava-dl | e680722071129fde952ea0d744984aa2a038797a | [
"BSD-3-Clause"
] | 37 | 2021-09-30T16:47:15.000Z | 2022-03-07T22:29:21.000Z | src/lava/lib/dl/slayer/neuron/norm.py | timcheck/lava-dl | e680722071129fde952ea0d744984aa2a038797a | [
"BSD-3-Clause"
] | 36 | 2021-11-04T16:54:55.000Z | 2022-03-31T02:26:29.000Z | src/lava/lib/dl/slayer/neuron/norm.py | timcheck/lava-dl | e680722071129fde952ea0d744984aa2a038797a | [
"BSD-3-Clause"
] | 20 | 2021-10-29T22:55:58.000Z | 2022-03-22T17:27:16.000Z | # Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
"""Neuron normalization methods."""
import torch
class MeanOnlyBatchNorm(torch.nn.Module):
"""Implements mean only batch norm with optional user defined quantization
using pre-hook-function. The mean of batchnorm translates to negative bias
of the neuron.
Parameters
----------
num_features : int
number of features. It is automatically initialized on first run if the
value is None. Default is None.
momentum : float
momentum of mean calculation. Defaults to 0.1.
pre_hook_fx : function pointer or lambda
pre-hook-function that is applied to the normalization output.
User can provide a quantization method as needed.
Defaults to None.
Attributes
----------
num_features
momentum
pre_hook_fx
running_mean : torch tensor
running mean estimate.
update : bool
enable mean estimte update.
"""
def __init__(self, num_features=None, momentum=0.1, pre_hook_fx=None):
""" """
super(MeanOnlyBatchNorm, self).__init__()
self.num_features = num_features
self.momentum = momentum
if pre_hook_fx is None:
self.pre_hook_fx = lambda x: x
else:
self.pre_hook_fx = pre_hook_fx
self.register_buffer(
'running_mean',
torch.zeros(1 if num_features is None else num_features)
)
self.reset_parameters()
self.update = True
def reset_parameters(self):
"""Reset states."""
self.running_mean.zero_()
@property
def bias(self):
"""Equivalent bias shift."""
return -self.pre_hook_fx(self.running_mean, descale=True)
def forward(self, inp):
"""
"""
size = inp.shape
if self.num_features is None:
self.num_features = inp.shape[1]
if self.training and self.update is True:
mean = torch.mean(
inp.view(size[0], self.num_features, -1),
dim=[0, 2]
)
# n = inp.numel() / inp.shape[1]
with torch.no_grad():
self.running_mean = (1 - self.momentum) * self.running_mean \
+ self.momentum * mean
else:
mean = self.running_mean
if len(size) == 2:
out = (inp - self.pre_hook_fx(mean.view(1, -1)))
elif len(size) == 3:
out = (inp - self.pre_hook_fx(mean.view(1, -1, 1)))
elif len(size) == 4:
out = (inp - self.pre_hook_fx(mean.view(1, -1, 1, 1)))
elif len(size) == 5:
out = (inp - self.pre_hook_fx(mean.view(1, -1, 1, 1, 1)))
else:
print(f'Found unexpected number of dims {len(size)} in input.')
return out
class WgtScaleBatchNorm(torch.nn.Module):
"""Implements batch norm with variance scale in powers of 2. This allows
eventual normalizaton to be implemented with bit-shift in a hardware
friendly manner. Optional user defined quantization can be enabled using a
pre-hook-function. The mean of batchnorm translates to negative bias of the
neuron.
Parameters
----------
num_features : int
number of features. It is automatically initialized on first run if the
value is None. Default is None.
momentum : float
momentum of mean calculation. Defaults to 0.1.
weight_exp_bits : int
number of allowable bits for weight exponentation. Defaults to 3.
eps : float
infitesimal value. Defaults to 1e-5.
pre_hook_fx : function pointer or lambda
pre-hook-function that is applied to the normalization output.
User can provide a quantization method as needed.
Defaults to None.
Attributes
----------
num_features
momentum
weight_exp_bits
eps
pre_hook_fx
running_mean : torch tensor
running mean estimate.
running_var : torch tensor
running variance estimate.
update : bool
enable mean estimte update.
"""
def __init__(
self,
num_features=None, momentum=0.1,
weight_exp_bits=3, eps=1e-5,
pre_hook_fx=None
):
""" """
super(WgtScaleBatchNorm, self).__init__()
self.num_features = num_features
self.momentum = momentum
self.weight_exp_bits = weight_exp_bits
self.eps = eps
if pre_hook_fx is None:
self.pre_hook_fx = lambda x: x
else:
self.pre_hook_fx = pre_hook_fx
self.register_buffer(
'running_mean',
torch.zeros(1 if num_features is None else num_features)
)
self.register_buffer(
'running_var',
torch.zeros(1)
)
self.reset_parameters()
self.update = True
def reset_parameters(self):
"""Reset states."""
self.running_mean.zero_()
self.running_var.zero_()
def std(self, var):
"""
"""
std = torch.sqrt(var + self.eps)
return torch.ones(1) << torch.ceil(torch.log2(std)).clamp(
-self.weight_exp_bits, self.weight_exp_bits
)
@property
def bias(self):
"""Equivalent bias shift."""
return -self.pre_hook_fx(self.running_mean, descale=True)
@property
def weight_exp(self):
"""Equivalent weight exponent value."""
return torch.ceil(torch.log2(torch.sqrt(self.running_var + self.eps)))
def forward(self, inp):
"""
"""
size = inp.shape
if self.num_features is None:
self.num_features = inp.shape[1]
if self.training and self.update is True:
mean = torch.mean(
inp.view(size[0], self.num_features, -1),
dim=[0, 2]
)
var = torch.var(inp, unbiased=False)
n = inp.numel() / inp.shape[1]
with torch.no_grad():
self.running_mean = (1 - self.momentum) * self.running_mean \
+ self.momentum * mean
self.running_var = (1 - self.momentum) * self.running_var \
+ self.momentum * var * n / (n + 1)
else:
mean = self.running_mean
var = self.running_var
std = self.std(var)
if len(size) == 2:
out = (
inp - self.pre_hook_fx(mean.view(1, -1))
) / std.view(1, -1)
elif len(size) == 3:
out = (
inp - self.pre_hook_fx(mean.view(1, -1, 1))
) / std.view(1, -1, 1)
elif len(size) == 4:
out = (
inp - self.pre_hook_fx(mean.view(1, -1, 1, 1))
) / std.view(1, -1, 1, 1)
elif len(size) == 5:
out = (
inp - self.pre_hook_fx(mean.view(1, -1, 1, 1, 1))
) / std.view(1, -1, 1, 1, 1)
else:
print(f'Found unexpected number of dims {len(size)}.')
return out
| 30.705628 | 79 | 0.565064 | 897 | 7,093 | 4.32107 | 0.179487 | 0.01548 | 0.055728 | 0.046956 | 0.736326 | 0.700722 | 0.699948 | 0.699948 | 0.699174 | 0.699174 | 0 | 0.019405 | 0.331595 | 7,093 | 230 | 80 | 30.83913 | 0.798144 | 0.29367 | 0 | 0.608 | 0 | 0 | 0.028031 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.008 | 0 | 0.152 | 0.016 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d4ed1f29076811117c1ee8af17b2c9fb71cec92 | 2,596 | py | Python | domonic/constants/keyboard.py | Jordan-Cottle/domonic | 05d70af0d1564d3ba80c860bb5f5bfe089293b18 | [
"MIT"
] | 1 | 2022-03-09T03:39:04.000Z | 2022-03-09T03:39:04.000Z | domonic/constants/keyboard.py | Jordan-Cottle/domonic | 05d70af0d1564d3ba80c860bb5f5bfe089293b18 | [
"MIT"
] | null | null | null | domonic/constants/keyboard.py | Jordan-Cottle/domonic | 05d70af0d1564d3ba80c860bb5f5bfe089293b18 | [
"MIT"
] | null | null | null | """
domonic.constants.keyboard
====================================
"""
class KeyCode():
A = '65' #:
ALTERNATE = '18' #:
B = '66' #:
BACKQUOTE = '192' #:
BACKSLASH = '220' #:
BACKSPACE = '8' #:
C = '67' #:
CAPS_LOCK = '20' #:
COMMA = '188' #:
COMMAND = '15' #:
CONTROL = '17' #:
D = '68' #:
DELETE = '46' #:
DOWN = '40' #:
E = '69' #:
END = '35' #:
ENTER = '13' #:
RETURN = '13' #:
EQUAL = '187' #:
ESCAPE = '27' #:
F = '70' #:
F1 = '112' #:
F10 = '121' #:
F11 = '122' #:
F12 = '123' #:
F13 = '124' #:
F14 = '125' #:
F15 = '126' #:
F2 = '113' #:
F3 = '114' #:
F4 = '115' #:
F5 = '116' #:
F6 = '117' #:
F7 = '118' #:
F8 = '119' #:
F9 = '120' #:
G = '71' #:
H = '72' #:
HOME = '36' #:
I = '73' #:
INSERT = '45' #:
J = '74' #:
K = '75' #:
L = '76' #:
LEFT = '37' #:
LEFTBRACKET = '219' #:
M = '77' #:
MINUS = '189' #:
N = '78' #:
NUMBER_0 = '48' #:
NUMBER_1 = '49' #:
NUMBER_2 = '50' #:
NUMBER_3 = '51' #:
NUMBER_4 = '52' #:
NUMBER_5 = '53' #:
NUMBER_6 = '54' #:
NUMBER_7 = '55' #:
NUMBER_8 = '56' #:
NUMBER_9 = '57' #:
NUMPAD = '21' #:
NUMPAD_0 = '96' #:
NUMPAD_1 = '97' #:
NUMPAD_2 = '98' #:
NUMPAD_3 = '99' #:
NUMPAD_4 = '100' #:
NUMPAD_5 = '101' #:
NUMPAD_6 = '102' #:
NUMPAD_7 = '103' #:
NUMPAD_8 = '104' #:
NUMPAD_9 = '105' #:
NUMPAD_ADD = '107' #:
NUMPAD_DECIMAL = '110' #:
NUMPAD_DIVIDE = '111' #:
NUMPAD_ENTER = '108' #:
NUMPAD_MULTIPLY = '106' #:
NUMPAD_SUBTRACT = '109' #:
O = '79' #:
P = '80' #:
PAGE_DOWN = '34' #:
PAGE_UP = '33' #:
PERIOD = '190' #:
Q = '81' #:
QUOTE = '222' #:
R = '82' #:
RIGHT = '39' #:
RIGHTBRACKET = '221' #:
S = '83' #:
SEMICOLON = '186' #:
SHIFT = '16' #: ?? left or right or both?
SLASH = '191' #:
SPACE = '32' #:
T = '84' #:
TAB = '9' #:
U = '85' #:
UP = '38' #:
V = '86' #:
W = '87' #:
X = '88' #:
Y = '89' #:
Z = '9' #:
# TODO - do the modifiers
# find attribute by value
# def get_letter(self, attr):
# for key, value in self.__dict__.iteritems():
# if value == attr:
# return key
# return None
def __init__(self):
""" constructor for the keyboard class """
pass
| 20.768 | 54 | 0.373267 | 275 | 2,596 | 3.385455 | 0.785455 | 0.027927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178179 | 0.403313 | 2,596 | 124 | 55 | 20.935484 | 0.422853 | 0.160247 | 0 | 0 | 0 | 0 | 0.114578 | 0 | 0 | 0 | 0 | 0.008065 | 0 | 1 | 0.009709 | false | 0.009709 | 0 | 0 | 0.990291 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6d5062c8e3235dd5b1d309a37505fc11bcdff8b7 | 370 | py | Python | setup.py | jeffFranklin/linkbot | 8696061bb03d80ff70a5436a967be02df0e7a938 | [
"Apache-2.0"
] | 1 | 2018-10-11T16:48:19.000Z | 2018-10-11T16:48:19.000Z | setup.py | jeffFranklin/linkbot | 8696061bb03d80ff70a5436a967be02df0e7a938 | [
"Apache-2.0"
] | null | null | null | setup.py | jeffFranklin/linkbot | 8696061bb03d80ff70a5436a967be02df0e7a938 | [
"Apache-2.0"
] | 1 | 2018-03-30T16:32:30.000Z | 2018-03-30T16:32:30.000Z | from setuptools import setup
install_requires = ['beautifulsoup4',
'simplejson',
'slacker',
'jira',
'requests',
'websocket-client']
setup(name='linkbot',
install_requires=install_requires,
description='slackbot listening for mentions of jira issues, etc')
| 28.461538 | 72 | 0.543243 | 29 | 370 | 6.827586 | 0.793103 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004274 | 0.367568 | 370 | 12 | 73 | 30.833333 | 0.84188 | 0 | 0 | 0 | 0 | 0 | 0.316216 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6d5f77bccd5162155079b37338363a35a130b2e2 | 901 | py | Python | service/src/service/edit_distance.py | xuqiongkai/ALTER | a9ec98c0ed576a68f14711eeff6f1c7d2d34c6f7 | [
"MIT"
] | 8 | 2019-09-18T01:14:07.000Z | 2022-02-05T03:43:24.000Z | service/src/service/edit_distance.py | xuqiongkai/ALTER | a9ec98c0ed576a68f14711eeff6f1c7d2d34c6f7 | [
"MIT"
] | 4 | 2019-11-15T03:09:52.000Z | 2022-03-24T15:01:55.000Z | service/src/service/edit_distance.py | xuqiongkai/ALTER | a9ec98c0ed576a68f14711eeff6f1c7d2d34c6f7 | [
"MIT"
] | 4 | 2019-09-17T22:17:30.000Z | 2022-02-05T03:43:28.000Z | import editdistance
class EditDistanceService:
INSTACE = None
@classmethod
def create(cls):
if cls.INSTACE is None:
cls.INSTACE = EditDistanceService()
@classmethod
def instance(cls):
if cls.INSTACE is None:
cls.create()
return cls.INSTACE
def compute(self, words1, words2):
return editdistance.eval(words1, words2)
# if len(s1) > len(s2):
# s1, s2 = s2, s1
# distances = range(len(s1) + 1)
# for i2, c2 in enumerate(s2):
# distances_ = [i2+1]
# for i1, c1 in enumerate(s1):
# if c1 == c2:
# distances_.append(distances[i1])
# else:
# distances_.append(1 + min((distances[i1], distances[i1 + 1], distances_[-1])))
# distances = distances_
# return distances[-1]
| 26.5 | 100 | 0.521643 | 95 | 901 | 4.894737 | 0.368421 | 0.086022 | 0.034409 | 0.064516 | 0.103226 | 0.103226 | 0.103226 | 0 | 0 | 0 | 0 | 0.050788 | 0.36626 | 901 | 33 | 101 | 27.30303 | 0.763573 | 0.418424 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ed8e6ab269e092f762ee8df541cc82f929c9e8fd | 283 | py | Python | molecule/default/tests/test_default.py | tottoto/ansible-role-kubectl | a6df49ffa401b4bde930feda75fb28abc484cb16 | [
"MIT"
] | null | null | null | molecule/default/tests/test_default.py | tottoto/ansible-role-kubectl | a6df49ffa401b4bde930feda75fb28abc484cb16 | [
"MIT"
] | 4 | 2020-02-26T20:19:31.000Z | 2021-09-23T23:33:14.000Z | molecule/default/tests/test_default.py | tottoto/ansible-role-kubectl | a6df49ffa401b4bde930feda75fb28abc484cb16 | [
"MIT"
] | null | null | null | import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_kubectl_is_installed(host):
kubectl = host.package('kubectl')
assert kubectl.is_installed
| 23.583333 | 63 | 0.791519 | 36 | 283 | 5.944444 | 0.611111 | 0.130841 | 0.196262 | 0.252336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109541 | 283 | 11 | 64 | 25.727273 | 0.849206 | 0 | 0 | 0 | 0 | 0 | 0.116608 | 0.081272 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ed9138a01663758b17298e639488cbf64b28b95b | 1,247 | py | Python | core/screen/screenshot_image.py | echim/pySteps | c33ac3446593b545aece475062d140527dcb443c | [
"MIT"
] | 8 | 2018-05-15T21:20:40.000Z | 2021-08-19T00:25:18.000Z | core/screen/screenshot_image.py | echim/pySteps | c33ac3446593b545aece475062d140527dcb443c | [
"MIT"
] | null | null | null | core/screen/screenshot_image.py | echim/pySteps | c33ac3446593b545aece475062d140527dcb443c | [
"MIT"
] | 2 | 2018-09-12T01:33:54.000Z | 2021-01-25T02:21:58.000Z | import cv2
import numpy as np
from pyautogui import screenshot
from pyautogui import size as get_screen_size
from core.screen.screen_rectangle import ScreenRectangle
class ScreenshotImage:
def __init__(self, in_region: ScreenRectangle = None):
screen_width, screen_height = get_screen_size()
region_coordinates = (0, 0, screen_width, screen_height)
if in_region is not None:
region_coordinates = (in_region.start_point.x, in_region.start_point.y, in_region.width, in_region.height)
screen_pil_image = screenshot(region=region_coordinates)
self._gray_array = cv2.cvtColor(np.array(screen_pil_image), cv2.COLOR_BGR2GRAY)
height, width = self._gray_array.shape
self._width = width
self._height = height
@property
def image_gray_array(self):
return self._gray_array
@property
def width(self) -> int:
return self._width
@property
def height(self) -> int:
return self._height
def binarize(self):
# img2 = cv2.adaptiveThreshold(self._gray_array, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
return cv2.threshold(self._gray_array, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
| 31.974359 | 119 | 0.709703 | 166 | 1,247 | 5.024096 | 0.349398 | 0.057554 | 0.077938 | 0.055156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024341 | 0.209302 | 1,247 | 38 | 120 | 32.815789 | 0.821501 | 0.08741 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.185185 | 0.148148 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
eda0102bee225153a9a9c699f2932d3638a5b5ac | 192 | py | Python | Test.py | pgDora56/MakeVirtualBlogArticle | 17de05ace2dfc85566563796553cde212b647749 | [
"MIT"
] | null | null | null | Test.py | pgDora56/MakeVirtualBlogArticle | 17de05ace2dfc85566563796553cde212b647749 | [
"MIT"
] | null | null | null | Test.py | pgDora56/MakeVirtualBlogArticle | 17de05ace2dfc85566563796553cde212b647749 | [
"MIT"
] | null | null | null | import MeCab
with open(r"output\417\001.txt", encoding="utf-8") as f:
s = f.read()
m = MeCab.Tagger().parse(s)
print(m)
m = MeCab.Tagger().parse("[NEWLINE]")
print(m)
| 17.454545 | 57 | 0.572917 | 31 | 192 | 3.548387 | 0.677419 | 0.109091 | 0.218182 | 0.309091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04698 | 0.223958 | 192 | 10 | 58 | 19.2 | 0.691275 | 0 | 0 | 0.285714 | 0 | 0 | 0.176796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.285714 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
eda65be31fd3ddf73831bba093f8142205944ce0 | 6,629 | py | Python | scripts/extract_coll_format.py | therosko/Thesis-NER-in-English-novels | 7988c3aa4f904e91b1e674090dbdc6487b4ad042 | [
"Apache-2.0"
] | null | null | null | scripts/extract_coll_format.py | therosko/Thesis-NER-in-English-novels | 7988c3aa4f904e91b1e674090dbdc6487b4ad042 | [
"Apache-2.0"
] | null | null | null | scripts/extract_coll_format.py | therosko/Thesis-NER-in-English-novels | 7988c3aa4f904e91b1e674090dbdc6487b4ad042 | [
"Apache-2.0"
] | null | null | null |
####################################################################################################################################################
# Flair Dekker
import pandas as pd
import os
import csv
# import own script
from hyphens import *
from patch_flair_parsing import *
from calculate_metrics import *
def check_for_inconsistencies_dekker(current_file,gs_d):
try:
for index, word, ner in current_file.itertuples(index=True):
if word != gs_d["original_word"].loc[index]:
if (word == '(' and gs_d["original_word"].loc[index] == '-LRB-') or (word == ')' and gs_d["original_word"].loc[index] == '-RRB-') or (word == '[' and gs_d["original_word"].loc[index] == '-LSB-') or (word == ']' and gs_d["original_word"].loc[index] == '-RSB-'):
pass
elif (word in ["‘","-","' \" '",'"',"“",'-',"”","'",",","’"]) and (gs_d["original_word"].loc[index] in ['`',"``","--","''","'",'--']):
pass
elif (word == "—") and (gs_d["original_word"].loc[index] == '--'):
#print("Warning ", index, " '", word, "' in current is not the same as '", gs_d["original_word"].loc[index], "'in gs")
pass
elif (word == "'t" and gs_d["original_word"].loc[index] == "`") or (word == "is" and gs_d["original_word"].loc[index] == "tis") or (word == "honorable" and gs_d["original_word"].loc[index] == "honourable") or (word == "honor" and gs_d["original_word"].loc[index] == "honour"):
pass
elif (re.match(r"[a-zA-Z]*’[a-zA-Z]+", word)) and (re.match(r"[a-zA-Z]*'[a-zA-Z]+", gs_d["original_word"].loc[index])):
pass
elif (re.match(r"[a-zA-Z]*'[a-zA-Z]+", word)) and (re.match(r"[a-zA-Z]*’[a-zA-Z]+", gs_d["original_word"].loc[index])):
pass
else:
print("Position ", index, " '", word, "' in current is not the same as '", gs_d["original_word"].loc[index], "'in gs")
print(current_file.iloc[index-1:index+4])
print(gs_d.iloc[index-1:index+4])
break
#Note: some original texts are longer than the annotated files, we stop the comparisson at that length
except KeyError:
print("Reached end of annotated file. Cropped currect_file.")
print("Last word ", word, " in line ", index)
current_file = current_file.truncate(after=index-1)
pass
filename = "Dracula.tsv"
dekker_filepath = "/mnt/data/gold_standard/overlap/dekker_et_al/" + str(filename.replace('.tsv','')) + ".gs"
flair_filepath = '/mnt/flair/' + filename
print(filename)
# read Flair
current_file= pd.read_csv(flair_filepath, sep='\t', quoting=csv.QUOTE_NONE, usecols=[0,1])
current_file = correct_hyphened(current_file)
# patch inconsistencies between parsing of flair and gold standards (using LitBank)
current_file = patch_flair(current_file, filename)
current_file.loc[~current_file["ner"].isin(['S-PER','I-PER','B-PER','E-PER']), "ner"] = "O"
# read Dekker et al. gs
gs_d = pd.read_csv(dekker_filepath, sep=' ', quoting=csv.QUOTE_NONE, usecols=[0,1], names=["original_word", "gs"])
gs_d = correct_hyphened(gs_d)
gs_d.loc[~gs_d["gs"].isin(['I-PERSON']), "gs"] = "O"
check_for_inconsistencies_dekker(current_file,gs_d)
# merge the two dataframes
merged_flair_dekkeretal = pd.merge(gs_d, current_file, left_index=True, right_index=True)
del merged_flair_dekkeretal['original_word_y']
merged_flair_dekkeretal.to_csv(passed_variable+'_conll_2.tsv', sep=' ', index=False, encoding='utf-8', quoting=csv.QUOTE_NONE, header=False)
####################################################################################################################################################
# Flair LitBank
import pandas as pd
import os
import csv
# import own script
from modules.hyphens import *
from modules.patch_flair_parsing import *
from modules.calculate_metrics import *
books_mapping = {'AliceInWonderland': '11_alices_adventures_in_wonderland',
'DavidCopperfield': '766_david_copperfield',
'Dracula': '345_dracula',
'Emma': '158_emma',
'Frankenstein': '84_frankenstein_or_the_modern_prometheus',
'HuckleberryFinn': '76_adventures_of_huckleberry_finn',
'MobyDick': '2489_moby_dick',
'OliverTwist': '730_oliver_twist',
'PrideAndPrejudice': '1342_pride_and_prejudice',
'TheCallOfTheWild': '215_the_call_of_the_wild',
'Ulysses': '4300_ulysses',
'VanityFair': '599_vanity_fair'}
def check_for_inconsistencies_litbank(current_file,gs_lb):
try:
for index, word, ner in current_file.itertuples(index=True):
if word != gs_lb["original_word"].loc[index]:
print("Position ", index, " '", word, "' in current is not the same as '", gs_lb["original_word"].loc[index], "'in gs")
print(current_file.iloc[index-1:index+4])
print(gs_lb.iloc[index-1:index+4])
break
#Note: some original texts are longer than the annotated files, we stop the comparisson at that length
except KeyError:
print("Reached end of annotated file. Cropped currect_file.")
print("Last word ", word, " in line ", index)
current_file = current_file.truncate(after=index-1)
pass
filename = "TheCallOfTheWild.tsv"
litbank_filepath = "/mnt/data/gold_standard/overlap/litbank/" + books_mapping.get(str(filename.replace('.tsv',''))) + ".tsv"
flair_filepath = '/mnt/flair/' + filename
print(filename)
# read flair
current_file= pd.read_csv(flair_filepath, sep='\t', quoting=csv.QUOTE_NONE, usecols=[0,1])
current_file = correct_hyphened(current_file)
# patch inconsistencies between parsing of flair and gold standards (using LitBank)
current_file = patch_flair(current_file, filename)
current_file.loc[~current_file["ner"].isin(['S-PER','I-PER','B-PER','E-PER']), "ner"] = "O"
# read litbank gs
gs_lb = pd.read_csv(litbank_filepath, sep='\t', quoting=csv.QUOTE_NONE, usecols=[0,1], names=["original_word", "gs"])
gs_lb.loc[~gs_lb["gs"].isin(['I-PER','B-PER']), "gs"] = "O"
check_for_inconsistencies_litbank(current_file,gs_lb)
# merge the two dataframes
merged_flair_litbank = pd.merge(gs_lb, current_file, left_index=True, right_index=True)
del merged_flair_litbank['original_word_y']
merged_flair_litbank.to_csv(str(filename.replace('.tsv',''))+'_conll_3.tsv', sep=' ', index=False, encoding='utf-8', quoting=csv.QUOTE_NONE, header=False)
| 53.459677 | 292 | 0.610348 | 871 | 6,629 | 4.43054 | 0.222732 | 0.079813 | 0.066079 | 0.088106 | 0.73957 | 0.710288 | 0.664421 | 0.623478 | 0.570096 | 0.545219 | 0 | 0.010763 | 0.187057 | 6,629 | 123 | 293 | 53.894309 | 0.70514 | 0.098808 | 0 | 0.456522 | 0 | 0.043478 | 0.246996 | 0.051943 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0.097826 | 0.130435 | 0 | 0.152174 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
edaa2e6e59e705770a65843fb484ab08934862e0 | 148 | py | Python | ml_model_evaluation/__init__.py | nkaenzig/ml_model_evaluation | 0064a223b3a6362b7e281d9241cb9ffe97247bb0 | [
"MIT"
] | null | null | null | ml_model_evaluation/__init__.py | nkaenzig/ml_model_evaluation | 0064a223b3a6362b7e281d9241cb9ffe97247bb0 | [
"MIT"
] | null | null | null | ml_model_evaluation/__init__.py | nkaenzig/ml_model_evaluation | 0064a223b3a6362b7e281d9241cb9ffe97247bb0 | [
"MIT"
] | null | null | null | """Top-level package for ML Model Evaluation Toolkit."""
__author__ = """Nicolas Kaenzig"""
__email__ = "nkaenzig@gmail.com"
__version__ = "0.1.0"
| 24.666667 | 56 | 0.709459 | 19 | 148 | 4.894737 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023077 | 0.121622 | 148 | 5 | 57 | 29.6 | 0.692308 | 0.337838 | 0 | 0 | 0 | 0 | 0.413043 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
edc129a8cacd43a8eb75eac16b6fa359f8e35f39 | 3,241 | py | Python | python/Generate_Employee_Codes_Data.py | jdglaser/census-dw | 8d6eabf7e5c9fc5c16eb93a1ce6077fa49c8482b | [
"MIT"
] | null | null | null | python/Generate_Employee_Codes_Data.py | jdglaser/census-dw | 8d6eabf7e5c9fc5c16eb93a1ce6077fa49c8482b | [
"MIT"
] | null | null | null | python/Generate_Employee_Codes_Data.py | jdglaser/census-dw | 8d6eabf7e5c9fc5c16eb93a1ce6077fa49c8482b | [
"MIT"
] | null | null | null | '''
The functions in this script are used to fill data in the census data warehouse.
These functions are used to fill dbo.DIM_CBP_Employee_Sizes
'''
__author__ = "Jarred Glaser"
__last_updated__ = "2019-11-08"
import pandas as pd
from sqlalchemy import create_engine
values = {
"001": "All establishments",
"204": "Establishments with no paid employees",
"205": "Establishments with paid employees",
"207": "Establishments with less than 10 employees",
"209": "Establishments with less than 20 employees",
"210": "Establishments with less than 5 employees",
"211": "Establishments with less than 4 employees",
"212": "Establishments with 1 to 4 employees",
"213": "Establishments with 1 employee",
"214": "Establishments with 2 employees",
"215": "Establishments with 3 or 4 employees",
"219": "Establishments with 0 to 4 employees",
"220": "Establishments with 5 to 9 employees",
"221": "Establishments with 5 or 6 employees",
"222": "Establishments with 7 to 9 employees",
"223": "Establishments with 10 to 14 employees",
"230": "Establishments with 10 to 19 employees",
"231": "Establishments with 10 to 14 employees",
"232": "Establishments with 15 to 19 employees",
"235": "Establishments with 20 or more employees",
"240": "Establishments with 20 to 99 employees",
"241": "Establishments with 20 to 49 employees",
"242": "Establishments with 50 to 99 employees",
"243": "Establishments with 50 employees or more",
"249": "Establishments with 100 to 499 employees",
"250": "Establishments with 100 or more employees",
"251": "Establishments with 100 to 249 employees",
"252": "Establishments with 250 to 499 employees",
"253": "Establishments with 500 employees or more",
"254": "Establishments with 500 to 999 employees",
"260": "Establishments with 1,000 employees or more",
"261": "Establishments with 1,000 to 2,499 employees",
"262": "Establishments with 1,000 to 1,499 employees",
"263": "Establishments with 1,500 to 2,499 employees",
"270": "Establishments with 2,500 employees or more",
"271": "Establishments with 2,500 to 4,999 employees",
"272": "Establishments with 5,000 to 9,999 employees",
"273": "Establishments with 5,000 employees or more",
"280": "Establishments with 10,000 employees or more",
"281": "Establishments with 10,000 to 24,999 employees",
"282": "Establishments with 25,000 to 49,999 employees",
"283": "Establishments with 50,000 to 99,999 employees",
"290": "Establishments with 100,000 employees or more",
"298": "Covered by administrative records"
}
def load_employee_codes(engine):
df = pd.DataFrame({"CBP_Employee_Size_Code":list(values.keys()),"Long_Description":list(values.values())})
df["Short_Description"] = df["Long_Description"].str.replace("Establishments with ","").str.replace(" to ","-").str.title()
df.to_sql("DIM_CBP_Employee_Sizes",engine,if_exists="append",index=False)
if __name__ == "__main__":
engine = create_engine('INSERT CONNECTION STRING',fast_executemany=True)
print("Done")
| 47.661765 | 129 | 0.676952 | 422 | 3,241 | 5.113744 | 0.350711 | 0.358665 | 0.048656 | 0.048193 | 0.052827 | 0.030584 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 0.203024 | 3,241 | 67 | 130 | 48.373134 | 0.713511 | 0.043505 | 0 | 0 | 0 | 0 | 0.661061 | 0.01423 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.035088 | 0 | 0.052632 | 0.017544 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
edc2c3d1d08a03106229f701c423aadb58f2f336 | 171 | py | Python | template/python/base-class.py | hiroebe/sonictemplate-vim | 55288245b005076c424da33f15b3f2301af24f3a | [
"MIT"
] | 130 | 2020-01-29T11:49:38.000Z | 2022-03-18T11:31:47.000Z | template/python/base-class.py | hiroebe/sonictemplate-vim | 55288245b005076c424da33f15b3f2301af24f3a | [
"MIT"
] | 17 | 2020-03-16T07:16:43.000Z | 2022-02-05T02:13:39.000Z | template/python/base-class.py | hiroebe/sonictemplate-vim | 55288245b005076c424da33f15b3f2301af24f3a | [
"MIT"
] | 14 | 2018-11-24T01:47:14.000Z | 2020-09-11T04:29:35.000Z | """
{{_name_}}
"""
class {{_expr_:substitute('{{_input_:name}}', '\w\+', '\u\0', '')}}(object):
def __init__(self{{_cursor_}}):
def __repr__(self):
return
| 15.545455 | 76 | 0.532164 | 18 | 171 | 4.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007042 | 0.169591 | 171 | 10 | 77 | 17.1 | 0.521127 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
edcbd1b459fd418223a16b044ad6e4114c07aee2 | 3,296 | py | Python | antioch/test/test_permissions.py | philchristensen/antioch | 7fe27c961ae81b7655c6428038c85eefad27e980 | [
"MIT"
] | 15 | 2015-10-07T06:30:22.000Z | 2022-03-07T19:44:55.000Z | antioch/test/test_permissions.py | philchristensen/antioch | 7fe27c961ae81b7655c6428038c85eefad27e980 | [
"MIT"
] | 2 | 2016-10-17T05:04:58.000Z | 2018-09-10T02:36:15.000Z | antioch/test/test_permissions.py | philchristensen/antioch | 7fe27c961ae81b7655c6428038c85eefad27e980 | [
"MIT"
] | 2 | 2018-07-30T12:58:33.000Z | 2018-11-26T03:17:34.000Z | # antioch
# Copyright (c) 1999-2019 Phil Christensen
#
# See LICENSE for details
import sys, os, os.path, time
from django.test import TestCase
from django.db import connection
from antioch import test
from antioch.core import errors, exchange
# sys.setrecursionlimit(100)
class PermissionsTestCase(TestCase):
fixtures = ['core-minimal.json']
def setUp(self):
self.exchange = exchange.ObjectExchange(connection)
self.wizard = self.exchange.get_object('wizard')
self.user = self.exchange.get_object('user')
def test_defaults(self):
thing = self.exchange.instantiate('object', name='thing')
thing.set_owner(self.wizard)
self.assertTrue(self.user.is_allowed('read', thing))
self.assertFalse(self.user.is_allowed('write', thing))
self.assertFalse(self.user.is_allowed('entrust', thing))
self.assertFalse(self.user.is_allowed('move', thing))
self.assertFalse(self.user.is_allowed('transmute', thing))
self.assertFalse(self.user.is_allowed('derive', thing))
self.assertFalse(self.user.is_allowed('develop', thing))
def test_owner_defaults(self):
thing = self.exchange.instantiate('object', name='thing')
thing.set_owner(self.user)
self.assertTrue(self.user.is_allowed('read', thing))
self.assertTrue(self.user.is_allowed('write', thing))
self.assertTrue(self.user.is_allowed('entrust', thing))
self.assertTrue(self.user.is_allowed('move', thing))
self.assertTrue(self.user.is_allowed('transmute', thing))
self.assertTrue(self.user.is_allowed('derive', thing))
self.assertTrue(self.user.is_allowed('develop', thing))
def test_everyone_defaults(self):
thing = self.exchange.instantiate('object', name='thing')
jim = self.exchange.instantiate('object', name='Jim', unique_name=True)
jim.set_player(True, passwd='jim')
self.assertTrue(jim.is_allowed('read', thing))
self.assertFalse(jim.is_allowed('write', thing))
self.assertFalse(jim.is_allowed('entrust', thing))
self.assertFalse(jim.is_allowed('move', thing))
self.assertFalse(jim.is_allowed('transmute', thing))
self.assertFalse(jim.is_allowed('derive', thing))
self.assertFalse(jim.is_allowed('develop', thing))
def test_wizard_defaults(self):
thing = self.exchange.instantiate('object', name='thing')
self.assertTrue(self.wizard.is_allowed('read', thing))
self.assertTrue(self.wizard.is_allowed('write', thing))
self.assertTrue(self.wizard.is_allowed('entrust', thing))
self.assertTrue(self.wizard.is_allowed('move', thing))
self.assertTrue(self.wizard.is_allowed('transmute', thing))
self.assertTrue(self.wizard.is_allowed('derive', thing))
self.assertTrue(self.wizard.is_allowed('develop', thing))
def test_simple_deny(self):
thing = self.exchange.instantiate('object', name='thing')
thing.set_owner(self.wizard)
thing.allow('everyone', 'anything')
thing.deny(self.user, 'write')
self.assertTrue(self.user.is_allowed('read', thing))
self.assertFalse(self.user.is_allowed('write', thing))
| 41.721519 | 79 | 0.67233 | 401 | 3,296 | 5.411471 | 0.167082 | 0.128571 | 0.132719 | 0.125346 | 0.747926 | 0.732719 | 0.606452 | 0.281106 | 0.247926 | 0.176959 | 0 | 0.004106 | 0.187197 | 3,296 | 78 | 80 | 42.25641 | 0.805898 | 0.030036 | 0 | 0.206897 | 0 | 0 | 0.091507 | 0 | 0 | 0 | 0 | 0 | 0.517241 | 1 | 0.103448 | false | 0.017241 | 0.086207 | 0 | 0.224138 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
eddbe83e48db8fef41ac38d6e5ca40fd5e8b0743 | 1,024 | py | Python | sxml/utils.py | vmstarchenko/sxml | 3b6fc3a89f404acfe298491555d15df269125e8f | [
"MIT"
] | null | null | null | sxml/utils.py | vmstarchenko/sxml | 3b6fc3a89f404acfe298491555d15df269125e8f | [
"MIT"
] | null | null | null | sxml/utils.py | vmstarchenko/sxml | 3b6fc3a89f404acfe298491555d15df269125e8f | [
"MIT"
] | null | null | null | import re
import importlib.util
import sys
from .options import Option
def clean_spaces(text: str) -> str:
return re.sub(r'\s+', ' ', text).strip()
def patch_options(options, kwargs):
return {
k: options[v.key] if isinstance(v, Option) else v
for k, v in kwargs.items()
}
def wrap_global(func):
class keep_args_instance:
def __init__(self, namespace, **kwargs):
self.kwargs = kwargs
def __call__(self, *args, **kwargs):
kwargs = {**self.kwargs, **kwargs}
kwargs = patch_options(kwargs.pop('options'), kwargs)
return func(*args, **kwargs)
return keep_args_instance
# https://docs.python.org/3/library/importlib.html#implementing-lazy-imports
def lazy_import(name):
spec = importlib.util.find_spec(name)
loader = importlib.util.LazyLoader(spec.loader)
spec.loader = loader
module = importlib.util.module_from_spec(spec)
sys.modules[name] = module
loader.exec_module(module)
return module
| 25.6 | 76 | 0.657227 | 133 | 1,024 | 4.902256 | 0.443609 | 0.079755 | 0.058282 | 0.067485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001253 | 0.220703 | 1,024 | 39 | 77 | 26.25641 | 0.815789 | 0.071289 | 0 | 0 | 0 | 0 | 0.011603 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.285714 | 0.071429 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ede60152664399f915e6a4e1c0d2aab4aa9ad6f4 | 3,889 | py | Python | quantsbin/derivativepricing/namesnmapper.py | quantsbin/Quantsbin | 362522653b4b8ebcf14461e3f44fe22dea465adc | [
"MIT"
] | 132 | 2018-06-20T08:40:48.000Z | 2022-03-24T11:34:22.000Z | quantsbin/derivativepricing/namesnmapper.py | williamjiamin/Quantsbin | 14a135174d9f08a70e36ed55279fbd7458e1ad48 | [
"MIT"
] | 5 | 2018-07-08T06:23:53.000Z | 2021-08-08T06:30:43.000Z | quantsbin/derivativepricing/namesnmapper.py | williamjiamin/Quantsbin | 14a135174d9f08a70e36ed55279fbd7458e1ad48 | [
"MIT"
] | 35 | 2018-07-12T10:07:30.000Z | 2022-03-01T04:00:17.000Z | """
developed by Quantsbin - Jun'18
"""
from enum import Enum
class AssetClass(Enum):
EQOPTION = 'EqOption'
FXOPTION = 'FXOption'
FUTOPTION = 'FutOption'
COMOPTION = 'ComOption'
class DerivativeType(Enum):
VANILLA_OPTION = 'Vanilla Option'
class PricingModel(Enum):
BLACKSCHOLESMERTON = 'BSM'
BLACK76 = 'B76'
GK = 'GK'
MC_GBM = "MC_GBM"
MC_GBM_LSM = "MC_GBM_LSM"
BINOMIAL = "Binomial"
class UnderlyingParameters(Enum):
SPOT = "spot0"
VOLATILITY = "volatility"
PRICEDATE = "_pricing_date"
RF_RATE = "rf_rate"
CNV_YIELD = "cnv_yield"
COST_YIELD = "cost_yield"
UNEXPLAINED = "unexplained"
class RiskParameter(Enum):
DELTA = 'delta'
GAMMA = 'gamma'
THETA = 'theta'
VEGA = 'vega'
RHO = 'rho'
PHI = 'phi'
RHO_FOREIGN = 'rho_foreign'
RHO_CONV = 'rho_conv_yield'
class VanillaOptionType(Enum):
CALL = 'Call'
PUT = 'Put'
class ExpiryType(Enum):
AMERICAN = 'American'
EUROPEAN = 'European'
class UdlType(Enum):
INDEX = 'Index'
STOCK = 'Stock'
FX = 'Currency'
COMMODITY = 'Commodity'
FUTURES = 'Futures'
class DivType(Enum):
DISCRETE = 'Discrete'
YIELD = 'Yield'
OBJECT_MODEL = {
UdlType.STOCK.value: {ExpiryType.EUROPEAN.value: [PricingModel.BLACKSCHOLESMERTON.value, PricingModel.MC_GBM.value
, PricingModel.BINOMIAL.value],
ExpiryType.AMERICAN.value: [PricingModel.MC_GBM.value, PricingModel.BINOMIAL.value]}
, UdlType.FUTURES.value: {ExpiryType.EUROPEAN.value: [PricingModel.BLACK76.value, PricingModel.MC_GBM.value
, PricingModel.BINOMIAL.value],
ExpiryType.AMERICAN.value: [PricingModel.MC_GBM.value, PricingModel.BINOMIAL.value]}
, UdlType.FX.value: {ExpiryType.EUROPEAN.value: [PricingModel.GK.value, PricingModel.MC_GBM.value
, PricingModel.BINOMIAL.value],
ExpiryType.AMERICAN.value: [PricingModel.MC_GBM.value, PricingModel.BINOMIAL.value]}
, UdlType.COMMODITY.value: {ExpiryType.EUROPEAN.value: [PricingModel.GK.value, PricingModel.MC_GBM.value
, PricingModel.BINOMIAL.value],
ExpiryType.AMERICAN.value: [PricingModel.MC_GBM.value, PricingModel.BINOMIAL.value]}
}
DEFAULT_MODEL = {
UdlType.STOCK.value:
{DerivativeType.VANILLA_OPTION.value: {ExpiryType.EUROPEAN.value: PricingModel.BLACKSCHOLESMERTON.value,
ExpiryType.AMERICAN.value: PricingModel.BINOMIAL.value},
},
UdlType.FUTURES.value:
{DerivativeType.VANILLA_OPTION.value: {ExpiryType.EUROPEAN.value: PricingModel.BLACK76.value,
ExpiryType.AMERICAN.value: PricingModel.BINOMIAL.value},
},
UdlType.FX.value:
{DerivativeType.VANILLA_OPTION.value: {ExpiryType.EUROPEAN.value: PricingModel.GK.value,
ExpiryType.AMERICAN.value: PricingModel.BINOMIAL.value},
},
UdlType.COMMODITY.value:
{DerivativeType.VANILLA_OPTION.value: {ExpiryType.EUROPEAN.value: PricingModel.GK.value,
ExpiryType.AMERICAN.value: PricingModel.BINOMIAL.value},
}
}
IV_MODELS = [PricingModel.BLACKSCHOLESMERTON.value, PricingModel.BLACK76.value, PricingModel.GK.value]
ANALYTICAL_GREEKS = [PricingModel.BLACKSCHOLESMERTON.value, PricingModel.BLACK76.value, PricingModel.GK.value]
from . import pricingmodels as pm
MODEL_MAPPER = {
PricingModel.BLACKSCHOLESMERTON.value: pm.BSM,
PricingModel.BLACK76.value: pm.B76,
PricingModel.GK.value: pm.GK,
PricingModel.MC_GBM.value: pm.MonteCarloGBM,
PricingModel.BINOMIAL.value: pm.BinomialModel
}
| 32.408333 | 118 | 0.643096 | 369 | 3,889 | 6.674797 | 0.211382 | 0.220869 | 0.131953 | 0.146163 | 0.584653 | 0.584653 | 0.584653 | 0.517255 | 0.465692 | 0.34389 | 0 | 0.006522 | 0.250964 | 3,889 | 119 | 119 | 32.680672 | 0.838998 | 0.007971 | 0 | 0.157303 | 0 | 0 | 0.068921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022472 | 0 | 0.539326 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
edebda6cc858b9f96203506817b64f8c7df83a73 | 123 | py | Python | user_messages/__init__.py | matthiask/django-user-messages | dd76bb0c423a011f3a6473c128830b227e97f771 | [
"MIT"
] | 21 | 2018-04-18T17:58:12.000Z | 2022-01-19T12:41:01.000Z | user_messages/__init__.py | matthiask/django-user-messages | dd76bb0c423a011f3a6473c128830b227e97f771 | [
"MIT"
] | 4 | 2018-04-24T11:04:15.000Z | 2022-02-03T18:35:21.000Z | user_messages/__init__.py | matthiask/django-user-messages | dd76bb0c423a011f3a6473c128830b227e97f771 | [
"MIT"
] | 7 | 2018-03-04T16:03:44.000Z | 2022-02-03T15:50:39.000Z | VERSION = (0, 7, 0)
__version__ = ".".join(map(str, VERSION))
default_app_config = "user_messages.apps.UserMessagesConfig"
| 30.75 | 60 | 0.739837 | 16 | 123 | 5.25 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.097561 | 123 | 3 | 61 | 41 | 0.72973 | 0 | 0 | 0 | 0 | 0 | 0.308943 | 0.300813 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
edf01de9776603359ad97ab421c5a8408e19e47b | 164 | py | Python | setup.py | cdominik/optool | da0ba3a8d4913c60f35020293ab656f3f5c52000 | [
"MIT"
] | 11 | 2020-10-13T13:24:44.000Z | 2021-12-17T09:46:00.000Z | setup.py | cdominik/optool | da0ba3a8d4913c60f35020293ab656f3f5c52000 | [
"MIT"
] | 2 | 2021-08-28T15:11:09.000Z | 2022-01-26T10:14:58.000Z | setup.py | cdominik/optool | da0ba3a8d4913c60f35020293ab656f3f5c52000 | [
"MIT"
] | 1 | 2022-03-17T20:33:36.000Z | 2022-03-17T20:33:36.000Z | from setuptools import setup
setup(
name='optool',
version='1.9.4',
py_modules=['optool'],
install_requires=[
'numpy','matplotlib'
]
)
| 14.909091 | 28 | 0.591463 | 18 | 164 | 5.277778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.25 | 164 | 10 | 29 | 16.4 | 0.747967 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
edfb80cf8ed104501c31146fad7007f0df5f648f | 13,879 | py | Python | chained/functions/lambded.py | andrewsonin/chained | 77788acae642d9553d9d454d4fc6086856b52425 | [
"MIT"
] | null | null | null | chained/functions/lambded.py | andrewsonin/chained | 77788acae642d9553d9d454d4fc6086856b52425 | [
"MIT"
] | null | null | null | chained/functions/lambded.py | andrewsonin/chained | 77788acae642d9553d9d454d4fc6086856b52425 | [
"MIT"
] | null | null | null | from functools import partial
from keyword import iskeyword
from typing import Tuple, Final, Callable, Any, List, Generator, NoReturn, Dict
from chained.type_utils.meta import ChainedMeta
def _call_monkey_patcher(self, *args, **kwargs):
"""LambdaExpr.__call__ monkey patcher"""
return self.eval()(*args, **kwargs)
def _token_expander(value: Any) -> Generator[Any, None, None]:
"""Expands tokens from an instance of ``LambdaExpr``. Otherwise - yields single 'value'.
>>> x = LambdaExpr('x', '+', 'y')
>>> tuple(_token_expander(x))
('x', '+', 'y')
>>> tuple(_token_expander('value'))
('value',)
Args:
value: token or `LambdaExpr` to expand
Returns:
resulting generator
"""
if isinstance(value, LambdaExpr):
yield from value._tokens
else:
yield value
class LambdaExpr(metaclass=ChainedMeta):
"""Implements functionality for shortened creation of lambda functions."""
__slots__ = (
'_tokens',
'_lambda',
'_string_repr'
)
def __init__(self, *tokens: Any) -> None:
self._tokens: Final[Tuple[Any, ...]] = tokens
self._lambda: Callable = partial(_call_monkey_patcher, self)
def __call__(self, *args, **kwargs):
# When the object of type 'LambdaExpr' is called for the first time,
# the class attribute '_lambda' is replaced with the one that evaluated by the 'eval' method.
return self._lambda(*args, **kwargs)
def __getattr__(self, name: str) -> 'LambdaExpr':
"""
Emulates something like ``lambda x: x.attr``
using ``x.attr``, where ``x`` was defined as ``x = LambdaVar('x')``.
>>> x = LambdaVar('x')
>>> tuple(map(x.real, (3, 4, 5 + 2j)))
(3, 4, 5.0)
Args:
name: name of an attribute
Returns:
Corresponding lambda expression
"""
return LambdaExpr('(', *self._tokens, f').{name}')
def __repr__(self) -> str:
"""
>>> x = LambdaVar('x')
>>> y = LambdaVar('y')
>>> (x - y).__repr__()[:35]
'<LambdaExpr(lambda x,y:(x)-(y)) at '
Returns:
__repr__ of the `LambdaExpr`
"""
try:
string_repr = self.__getattribute__('_string_repr')
except AttributeError:
self.eval()
string_repr = self._string_repr
return f'<{self.__class__.__name__}({string_repr}) at {hex(id(self))}>'
def __str__(self) -> str:
"""
>>> x = LambdaVar('x')
>>> y = LambdaVar('y')
>>> str(x - y)
'(x)-(y)'
Returns:
string representation of the `LambdaExpr`
"""
def tok_filter():
for tok in map(str, self._tokens):
if tok[0] != '*' or len(tok) < 3: # Normal variable, or "*", or "**"
yield tok
elif tok[1] != '*':
yield tok[1:] # *args
else:
yield tok[2:] # **kwargs
return ''.join(tok_filter())
def _(self, *args, **kwargs) -> 'LambdaExpr':
"""
Emulates ``__call__`` inside ``LambdaExpr``.
>>> x = LambdaExpr('x')
>>> str(x._('4', 'a', k='23', www='32'))
'(x)((4),(a),k=(23),www=(32),)'
>>> x = LambdaExpr('x')
>>> str(x._('4', "'a'", k='23', www='32'))
"(x)((4),('a'),k=(23),www=(32),)"
>>> str(x._(k='23', www='32'))
'(x)(k=(23),www=(32),)'
>>> str(x._('4', 'a'))
'(x)((4),(a),)'
>>> str(x._('4'))
'(x)((4),)'
>>> str(x._(kwarg='kw'))
'(x)(kwarg=(kw),)'
>>> str(x._())
'(x)()'
Args:
*args: positional arguments to pass
**kwargs: keyword arguments to pass
Returns:
lambda expression
"""
def args_tokenizer() -> Generator[Any, None, None]:
for arg in args:
yield '('
yield from _token_expander(arg)
yield '),'
def kwargs_tokenizer() -> Generator[Any, None, None]:
for k, v in kwargs.items():
yield f'{k}=('
yield from _token_expander(v)
yield '),'
return LambdaExpr(
'(', *self._tokens, ')(',
*args_tokenizer(),
*kwargs_tokenizer(),
')'
)
def _collapse(self, inter_token: str, right: 'LambdaExpr') -> 'LambdaExpr':
"""Collapses 'self' with 'right' so that they are both evaluated before the effect of 'inter_token'
>>> x = LambdaExpr('x')
>>> y = LambdaExpr('y')
>>> z = LambdaExpr('z')
>>> str(x._collapse('*', y + z))
'(x)*((y)+(z))'
Args:
inter_token: middle token
right: instance of `LambdaExpr` to the right
Returns:
resulting `LambdaExpr`
"""
if isinstance(right, LambdaExpr):
return LambdaExpr(
'(', *self._tokens, ')',
inter_token,
'(', *right._tokens, ')'
)
return LambdaExpr(
'(', *self._tokens, ')',
inter_token,
'(', right, ')'
)
def _get_args(self) -> List:
"""Returns an argument list of a future lambda function built on the ``LambdaExpr``.
Returns:
argument list
"""
arg_set = set(self._tokens) & _registered_vars.keys()
starred_args = []
if (args := '*args') in arg_set:
arg_set.remove(args)
starred_args.append(args)
if (kwargs := '**kwargs') in arg_set:
arg_set.remove(kwargs)
starred_args.append(kwargs)
arg_list = sorted(arg_set)
arg_list += starred_args
return arg_list
def eval(self) -> Callable:
"""Evaluates tokens into a lambda function.
>>> x = LambdaVar('x')
>>> y = LambdaVar('y')
>>> func = (x * y - 3 + 1).eval()
>>> func(3, 4)
10
>>> func(2, 2)
2
"""
string_repr = f'lambda {",".join(self._get_args())}:{self}'
self._string_repr: str = string_repr
evaluated_lambda = eval(string_repr)
self._lambda = evaluated_lambda
return evaluated_lambda
# >>> Unary operators
def __pos__(self) -> 'LambdaExpr':
return LambdaExpr('+(', *self._tokens, ')')
def __neg__(self) -> 'LambdaExpr':
return LambdaExpr('-(', *self._tokens, ')')
def __invert__(self) -> 'LambdaExpr':
return LambdaExpr('~(', *self._tokens, ')')
def __abs__(self) -> 'LambdaExpr':
return LambdaExpr('abs(', *self._tokens, ')')
def __round__(self, n=None) -> 'LambdaExpr':
"""
>>> x = LambdaVar('x')
>>> tuple(map(round(x), (3.4, 44.334)))
(3, 44)
>>> tuple(map(round(x, 1), (3.4, 44.334)))
(3.4, 44.3)
Args:
n: precision
Returns:
rounded number
"""
n = n._tokens if isinstance(n, LambdaExpr) else (n,)
return LambdaExpr('round(', *self._tokens, ',', *n, ')')
# >>> Comparison methods
def __eq__(self, other) -> 'LambdaExpr': # type: ignore
"""
>>> str(LambdaExpr('x') == LambdaExpr('y'))
'(x)==(y)'
"""
return self._collapse('==', other)
def __ne__(self, other) -> 'LambdaExpr': # type: ignore
return self._collapse('!=', other)
def __lt__(self, other) -> 'LambdaExpr':
return self._collapse('<', other)
def __gt__(self, other) -> 'LambdaExpr':
return self._collapse('>', other)
def __le__(self, other) -> 'LambdaExpr':
return self._collapse('<=', other)
def __ge__(self, other) -> 'LambdaExpr':
return self._collapse('>=', other)
# >>> Normal arithmetic operators
def __add__(self, other) -> 'LambdaExpr':
return self._collapse('+', other)
def __sub__(self, other) -> 'LambdaExpr':
return self._collapse('-', other)
def __mul__(self, other) -> 'LambdaExpr':
return self._collapse('*', other)
def __floordiv__(self, other) -> 'LambdaExpr':
return self._collapse('//', other)
def __truediv__(self, other) -> 'LambdaExpr':
return self._collapse('/', other)
def __mod__(self, other) -> 'LambdaExpr':
return self._collapse('%', other)
def __divmod__(self, other) -> 'LambdaExpr':
return LambdaExpr('divmod(', *self._tokens, ')')
def __pow__(self, other) -> 'LambdaExpr':
return self._collapse('**', other)
def __matmul__(self, other) -> 'LambdaExpr':
return self._collapse('@', other)
def __lshift__(self, other) -> 'LambdaExpr':
return self._collapse('<<', other)
def __rshift__(self, other) -> 'LambdaExpr':
return self._collapse('>>', other)
def __and__(self, other) -> 'LambdaExpr':
return self._collapse('&', other)
def __or__(self, other) -> 'LambdaExpr':
return self._collapse('|', other)
def __xor__(self, other) -> 'LambdaExpr':
return self._collapse('^', other)
# >>> Type conversion magic methods
def __int__(self) -> 'LambdaExpr':
return LambdaExpr('int(', *self._tokens, ')')
def __float__(self) -> 'LambdaExpr':
return LambdaExpr('float(', *self._tokens, ')')
def __complex__(self) -> 'LambdaExpr':
return LambdaExpr('complex(', *self._tokens, ')')
def __oct__(self) -> 'LambdaExpr':
return LambdaExpr('oct(', *self._tokens, ')')
def __hex__(self) -> 'LambdaExpr':
return LambdaExpr('hex(', *self._tokens, ')')
# >>> Miscellaneous
def __hash__(self) -> 'LambdaExpr': # type: ignore
return LambdaExpr('hash(', *self._tokens, ')')
def __nonzero__(self) -> 'LambdaExpr':
return LambdaExpr('bool(', *self._tokens, ')')
# >>> Container methods
def __len__(self) -> 'LambdaExpr':
return LambdaExpr('len(', *self._tokens, ')')
def __getitem__(self, key) -> 'LambdaExpr':
return LambdaExpr('(', *self._tokens, ')[', key, ']')
def __setitem__(self, key, value) -> 'LambdaExpr':
return LambdaExpr('(', *self._tokens, ')[', key, ']=(', value, ')')
def __delitem__(self, key) -> 'LambdaExpr':
return LambdaExpr('del (', *self._tokens, ')[', key, ']')
def __iter__(self) -> 'LambdaExpr':
return LambdaExpr('iter(', *self._tokens, ')')
def __reversed__(self) -> 'LambdaExpr':
return LambdaExpr('reversed(', *self._tokens, ')')
def __contains__(self, item) -> 'LambdaExpr':
return LambdaExpr('(', item, ') in (', *self._tokens, ')')
# >>> Keyword substitutes
def _if(self, cond, /) -> 'LambdaExpr':
cond = cond._tokens if isinstance(cond, LambdaExpr) else (cond,)
return LambdaExpr('(', *self._tokens, ') if (', *cond, ')')
def _else(self, alt, /) -> 'LambdaExpr':
alt = alt._tokens if isinstance(alt, LambdaExpr) else (alt,)
return LambdaExpr(*self._tokens, ' else (', *alt, ')')
def _for(self, item, /):
item = item._tokens if isinstance(item, LambdaExpr) else (item,)
return LambdaExpr('(', *self._tokens, ') for (', *item, ')')
def _in(self, item, /):
item = item._tokens if isinstance(item, LambdaExpr) else (item,)
return LambdaExpr(*self._tokens, ' in (', *item, ')')
class _LambdaVarMeta(ChainedMeta):
__slots__ = ()
def __call__(cls, name: str): # type: ignore
instance = _registered_vars.get(name, None)
if instance is not None:
return instance
if not name.isidentifier() or iskeyword(name):
raise NameError(f'LambdaVar with name `{name}` is not a valid identifier')
return super().__call__(name)
class LambdaVar(LambdaExpr, metaclass=_LambdaVarMeta):
"""
>>> a = LambdaVar('a')
>>> b = LambdaVar('b')
>>> tuple(map(a - b, (10, 20, 30), (10, 20, 20)))
(0, 0, 10)
"""
__slots__ = ()
def __new__(cls, name: str) -> 'LambdaVar':
return super().__new__(cls)
def __init__(self, name: str) -> None:
super().__init__(name)
self._string_repr = name
_registered_vars[name] = self
class _StarredLambdaVarMeta(_LambdaVarMeta):
__slots__ = ()
def __call__(cls):
return cls.__new__(cls)
class _StarredLambdaVar(LambdaVar, metaclass=_StarredLambdaVarMeta):
"""Special abstract ``LambdaVar`` handler for ``*args`` and ``**kwargs``."""
__slots__ = ()
def __new__(cls, name: str):
instance = _registered_vars.get(name, None)
if instance is not None:
return instance
instance = LambdaExpr.__new__(cls)
instance._string_repr = name
_registered_vars[name] = instance
return instance
def __call__(self, *args, **kwargs) -> NoReturn:
raise TypeError(
f'Cannot build a lambda function based only on the starred `LambdaVar` instance {repr(self)}'
)
def __iter__(self) -> Generator[str, None, None]: # type: ignore
pass
class LambdaArgs(_StarredLambdaVar):
__slots__ = ()
def __new__(cls) -> 'LambdaArgs':
return super().__new__(LambdaArgs, '*args')
def __iter__(self) -> Generator[str, None, None]: # type: ignore
yield 'args'
class LambdaKwargs(_StarredLambdaVar):
__slots__ = ()
def __new__(cls) -> 'LambdaKwargs':
return super().__new__(LambdaKwargs, '**kwargs')
def __iter__(self) -> Generator[str, None, None]: # type: ignore
yield 'kwargs'
_registered_vars: Final[Dict[str, LambdaVar]] = {}
x = LambdaVar('x')
y = LambdaVar('y')
z = LambdaVar('z')
| 29.529787 | 107 | 0.541393 | 1,444 | 13,879 | 4.880194 | 0.175208 | 0.081737 | 0.053924 | 0.062012 | 0.318433 | 0.266638 | 0.217256 | 0.175394 | 0.071378 | 0.06556 | 0 | 0.009651 | 0.290727 | 13,879 | 469 | 108 | 29.592751 | 0.706217 | 0.217955 | 0 | 0.125 | 0 | 0 | 0.101386 | 0.007577 | 0 | 0 | 0 | 0 | 0 | 1 | 0.293103 | false | 0.00431 | 0.017241 | 0.181034 | 0.633621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
edfe065eb3de09fc44af52b386a14934e51092ab | 494 | py | Python | speckle/schemas/Line.py | pjkottke/PySpeckle | e332c5e4c76960ab5b698f120c07d0eb15ac2262 | [
"MIT"
] | 24 | 2018-11-16T03:17:58.000Z | 2021-11-19T11:32:01.000Z | speckle/schemas/Line.py | pjkottke/PySpeckle | e332c5e4c76960ab5b698f120c07d0eb15ac2262 | [
"MIT"
] | 43 | 2019-01-15T17:48:24.000Z | 2020-12-02T19:09:55.000Z | speckle/schemas/Line.py | pjkottke/PySpeckle | e332c5e4c76960ab5b698f120c07d0eb15ac2262 | [
"MIT"
] | 12 | 2019-02-06T20:54:17.000Z | 2020-12-01T21:49:06.000Z | import json
import json
import hashlib
from pydantic import BaseModel, validator
from typing import List, Optional
from speckle.base.resource import ResourceBaseSchema
from speckle.resources.objects import SpeckleObject
from speckle.schemas import Interval
NAME = 'line'
class Schema(SpeckleObject):
type: Optional[str] = "Line"
name: Optional[str] = "SpeckleLine"
Value: List[float] = []
domain: Optional[Interval] = Interval()
class Config:
case_sensitive = False
| 24.7 | 52 | 0.755061 | 58 | 494 | 6.413793 | 0.568966 | 0.08871 | 0.086022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165992 | 494 | 19 | 53 | 26 | 0.902913 | 0 | 0 | 0.125 | 0 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
61199627024d137c74ab6e479ac4de2d438e5357 | 1,014 | py | Python | Autocoders/Python/utils/NoseTests/src/Config.py | chrisdonlan/fprime | 0cab90e238cff1b50c20f1e148a44cf8827a5bf8 | [
"Apache-2.0"
] | 5 | 2019-10-22T03:41:02.000Z | 2022-01-16T12:48:31.000Z | Autocoders/Python/utils/NoseTests/src/Config.py | chrisdonlan/fprime | 0cab90e238cff1b50c20f1e148a44cf8827a5bf8 | [
"Apache-2.0"
] | 27 | 2019-02-07T17:58:58.000Z | 2019-08-13T00:46:24.000Z | Autocoders/Python/utils/NoseTests/src/Config.py | chrisdonlan/fprime | 0cab90e238cff1b50c20f1e148a44cf8827a5bf8 | [
"Apache-2.0"
] | 3 | 2019-01-01T18:44:37.000Z | 2019-08-01T01:19:39.000Z | import sys
from . import BuildConstants
class Config(object):
__instance = None
system = None
cCompiler = None
includes = None
libs = None
def __init__(self):
self.system = sys.platform
self.cCompiler = None
self.includes = None
self.libs = None
self.BuildConstants = BuildConstants.getInstance()
def getIncludes(self):
return self.includes
def getLibs(self):
return self.libs
def getCompiler(self):
return self.cCompiler
def setIncludes(self):
self.includes = BuildConstants.get(self.system, "includes")
def setLibs(self):
self.libs = BuildConstants.get(self.system, "libs")
def setCompiler(self):
if self.system == "darwin" or self.system == "linux":
self.cCompiler = BuildConstants.get(self.system, "compiler")
def getInstance(self):
if Config.__instance is None:
Config.__instance = Config()
return Config.__instance
| 22.533333 | 72 | 0.628205 | 108 | 1,014 | 5.787037 | 0.296296 | 0.096 | 0.0672 | 0.1296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.278107 | 1,014 | 44 | 73 | 23.045455 | 0.853825 | 0 | 0 | 0 | 0 | 0 | 0.030572 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.258065 | false | 0 | 0.064516 | 0.096774 | 0.645161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
611cf37b69aa1de32e5e2eeaf63ef45b0b9c1091 | 1,912 | py | Python | src/pyqtschema/widgets/base.py | amsico/pyqtschema | 27fdc3c56bcef6975f6cd2624859e43f446320a1 | [
"MIT"
] | 1 | 2022-02-17T09:04:13.000Z | 2022-02-17T09:04:13.000Z | src/pyqtschema/widgets/base.py | amsico/pyqtschema | 27fdc3c56bcef6975f6cd2624859e43f446320a1 | [
"MIT"
] | 12 | 2022-01-28T22:31:46.000Z | 2022-02-09T23:06:07.000Z | src/pyqtschema/widgets/base.py | amsico/pyqtschema | 27fdc3c56bcef6975f6cd2624859e43f446320a1 | [
"MIT"
] | null | null | null | from functools import wraps
from typing import Tuple
from PyQt5.QtGui import QColor
from .signal import Signal
class StateProperty(property):
def setter(self, fset):
@wraps(fset)
def _setter(*args):
*head, value = args
if value is not None:
fset(*head, value)
return super().setter(_setter)
state_property = StateProperty
class SchemaWidgetMixin:
on_changed = Signal()
VALID_COLOUR = '#ffffff'
INVALID_COLOUR = '#f6989d'
def __init__(self, schema: dict, ui_schema: dict, widget_builder: 'IBuilder', **kwargs):
super().__init__(**kwargs)
self.schema = schema
self.ui_schema = ui_schema
self.widget_builder = widget_builder
self.on_changed.connect(lambda _: self.clear_error())
self._show_title: bool = kwargs.get('show_title', True)
self.configure()
def configure(self):
pass
def show_title(self) -> bool:
""" show/hide the title in the form-widget """
return self._show_title
@state_property
def state(self):
raise NotImplementedError(f"{self.__class__.__name__}.state")
@state.setter
def state(self, state):
raise NotImplementedError(f"{self.__class__.__name__}.state")
def handle_error(self, path: Tuple[str], err: Exception):
if path:
raise ValueError("Cannot handle nested error by default")
self._set_valid_state(err)
def clear_error(self):
self._set_valid_state(None)
def _set_valid_state(self, error: Exception = None):
palette = self.palette()
colour = QColor()
colour.setNamedColor(self.VALID_COLOUR if error is None else self.INVALID_COLOUR)
palette.setColor(self.backgroundRole(), colour)
self.setPalette(palette)
self.setToolTip("" if error is None else error.message) # TODO
| 26.191781 | 92 | 0.648013 | 228 | 1,912 | 5.175439 | 0.359649 | 0.030508 | 0.033051 | 0.049153 | 0.101695 | 0.072881 | 0.072881 | 0 | 0 | 0 | 0 | 0.003494 | 0.251569 | 1,912 | 72 | 93 | 26.555556 | 0.821104 | 0.023536 | 0 | 0.041667 | 0 | 0 | 0.07043 | 0.033333 | 0 | 0 | 0 | 0.013889 | 0 | 1 | 0.208333 | false | 0.020833 | 0.083333 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
612084590d1479f82fcaff1c57d19d0b25438539 | 375 | py | Python | kili/queries/issue/queries.py | ASonay/kili-playground | 9624073703b5e6151cf496f44f17f531576875b7 | [
"Apache-2.0"
] | 214 | 2019-08-05T14:55:01.000Z | 2022-03-28T21:02:22.000Z | kili/queries/issue/queries.py | x213212/kili-playground | dfb94c2d54bedfd7fec452b91f811587a2156c13 | [
"Apache-2.0"
] | 10 | 2020-05-14T10:44:16.000Z | 2022-03-08T09:39:24.000Z | kili/queries/issue/queries.py | x213212/kili-playground | dfb94c2d54bedfd7fec452b91f811587a2156c13 | [
"Apache-2.0"
] | 19 | 2019-11-26T22:41:09.000Z | 2022-01-16T19:17:38.000Z | """
Queries of issue queries
"""
def gql_issues(fragment):
"""
Return the GraphQL issues query
"""
return f'''
query ($where: IssueWhere!, $first: PageSize!, $skip: Int!) {{
data: issues(where: $where, first: $first, skip: $skip) {{
{fragment}
}}
}}
'''
GQL_ISSUES_COUNT = '''
query($where: IssueWhere!) {
data: countIssues(where: $where)
}
'''
| 15.625 | 62 | 0.594667 | 41 | 375 | 5.365854 | 0.512195 | 0.081818 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208 | 375 | 23 | 63 | 16.304348 | 0.740741 | 0.149333 | 0 | 0.307692 | 0 | 0 | 0.726351 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6124cc91aa35fe91816d7742ae48f2a71bd859b7 | 2,083 | py | Python | mautrix/client/state_store/sqlalchemy/mx_room_state.py | tulir/mautrix-appservice-python | d180603445bb0bc465a7b2ff918c4ac28a5dbfc2 | [
"MIT"
] | 1 | 2018-08-24T13:33:30.000Z | 2018-08-24T13:33:30.000Z | mautrix/client/state_store/sqlalchemy/mx_room_state.py | tulir/mautrix-appservice-python | d180603445bb0bc465a7b2ff918c4ac28a5dbfc2 | [
"MIT"
] | 4 | 2018-07-10T11:43:46.000Z | 2018-09-03T22:08:02.000Z | mautrix/client/state_store/sqlalchemy/mx_room_state.py | tulir/mautrix-appservice-python | d180603445bb0bc465a7b2ff918c4ac28a5dbfc2 | [
"MIT"
] | 2 | 2018-07-03T04:07:08.000Z | 2018-09-10T03:13:59.000Z | # Copyright (c) 2022 Tulir Asokan
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
from __future__ import annotations
from typing import Type
import json
from sqlalchemy import Boolean, Column, Text, types
from mautrix.types import (
PowerLevelStateEventContent as PowerLevels,
RoomEncryptionStateEventContent as EncryptionInfo,
RoomID,
Serializable,
)
from mautrix.util.db import Base
class SerializableType(types.TypeDecorator):
impl = types.Text
def __init__(self, python_type: Type[Serializable], *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self._python_type = python_type
@property
def python_type(self) -> Type[Serializable]:
return self._python_type
def process_bind_param(self, value: Serializable, dialect) -> str | None:
if value is not None:
return json.dumps(value.serialize())
return None
def process_result_value(self, value: str, dialect) -> Serializable | None:
if value is not None:
return self.python_type.deserialize(json.loads(value))
return None
def process_literal_param(self, value, dialect):
return value
class RoomState(Base):
__tablename__ = "mx_room_state"
room_id: RoomID = Column(Text, primary_key=True)
is_encrypted: bool = Column(Boolean, nullable=True)
has_full_member_list: bool = Column(Boolean, nullable=True)
encryption: EncryptionInfo = Column(SerializableType(EncryptionInfo), nullable=True)
power_levels: PowerLevels = Column(SerializableType(PowerLevels), nullable=True)
@property
def has_power_levels(self) -> bool:
return bool(self.power_levels)
@property
def has_encryption_info(self) -> bool:
return self.is_encrypted is not None
@classmethod
def get(cls, room_id: RoomID) -> RoomState | None:
return cls._select_one_or_none(cls.c.room_id == room_id)
| 31.089552 | 88 | 0.708113 | 266 | 2,083 | 5.349624 | 0.417293 | 0.042164 | 0.039353 | 0.02811 | 0.077301 | 0.036543 | 0.036543 | 0 | 0 | 0 | 0 | 0.004825 | 0.204033 | 2,083 | 66 | 89 | 31.560606 | 0.853438 | 0.107537 | 0 | 0.155556 | 0 | 0 | 0.007016 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.177778 | false | 0 | 0.133333 | 0.111111 | 0.711111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
b61c4cafdf5a3278da20f6b8bf7ced09fb26ca7b | 1,300 | py | Python | warp/tests/test_import.py | NVIDIA/warp | fc7d3255435fc2fe6b54300e689f74e6d67418ca | [
"CNRI-Python-GPL-Compatible",
"Unlicense",
"0BSD",
"Apache-2.0",
"MIT"
] | 306 | 2022-03-21T23:24:13.000Z | 2022-03-31T21:11:28.000Z | warp/tests/test_import.py | NVIDIA/warp | fc7d3255435fc2fe6b54300e689f74e6d67418ca | [
"CNRI-Python-GPL-Compatible",
"Unlicense",
"0BSD",
"Apache-2.0",
"MIT"
] | 11 | 2022-03-23T06:23:25.000Z | 2022-03-31T22:17:18.000Z | warp/tests/test_import.py | NVIDIA/warp | fc7d3255435fc2fe6b54300e689f74e6d67418ca | [
"CNRI-Python-GPL-Compatible",
"Unlicense",
"0BSD",
"Apache-2.0",
"MIT"
] | 18 | 2022-03-22T16:27:21.000Z | 2022-03-30T20:07:47.000Z | # Copyright (c) 2022 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
# include parent path
import numpy as np
import math
import warp as wp
from warp.tests.test_base import *
import unittest
wp.init()
#from test_func import sqr
import warp.tests.test_func as test_func
@wp.kernel
def test_import_func():
# test a cross-module function reference is resolved correctly
x = test_func.sqr(2.0)
y = test_func.cube(2.0)
wp.expect_eq(x, 4.0)
wp.expect_eq(y, 8.0)
def register(parent):
devices = wp.get_devices()
class TestImport(parent):
pass
add_kernel_test(TestImport, kernel=test_import_func, name="test_import_func", dim=1, devices=devices)
return TestImport
if __name__ == '__main__':
c = register(unittest.TestCase)
#unittest.main(verbosity=2)
wp.force_load()
loader = unittest.defaultTestLoader
testSuite = loader.loadTestsFromTestCase(c)
testSuite.debug() | 24.528302 | 105 | 0.743846 | 181 | 1,300 | 5.198895 | 0.530387 | 0.042508 | 0.044633 | 0.023379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013121 | 0.179231 | 1,300 | 53 | 106 | 24.528302 | 0.868791 | 0.416923 | 0 | 0 | 0 | 0 | 0.032086 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0.04 | 0.4 | 0 | 0.56 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b6209c445497bea1ab0c7570c19a91057e31a667 | 1,111 | py | Python | Webpage/arbeitsstunden/migrations/0015_auto_20210718_1006.py | ASV-Aachen/Website | bbfc02d71dde67fdf89a4b819b795a73435da7cf | [
"Apache-2.0"
] | null | null | null | Webpage/arbeitsstunden/migrations/0015_auto_20210718_1006.py | ASV-Aachen/Website | bbfc02d71dde67fdf89a4b819b795a73435da7cf | [
"Apache-2.0"
] | 46 | 2022-01-08T12:03:24.000Z | 2022-03-30T08:51:05.000Z | Webpage/arbeitsstunden/migrations/0015_auto_20210718_1006.py | ASV-Aachen/Website | bbfc02d71dde67fdf89a4b819b795a73435da7cf | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-18 10:06
import datetime
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('arbeitsstunden', '0014_auto_20210705_1948'),
]
operations = [
migrations.AlterField(
model_name='customhours',
name='season',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='arbeitsstunden.season', unique=True),
),
migrations.AlterField(
model_name='customhours',
name='used_account',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='arbeitsstunden.account', unique=True),
),
migrations.AlterField(
model_name='project',
name='tags',
field=models.ManyToManyField(blank=True, to='arbeitsstunden.tag'),
),
migrations.AlterField(
model_name='work',
name='setupDate',
field=models.DateField(default=datetime.date(2021, 7, 18)),
),
]
| 30.861111 | 123 | 0.616562 | 112 | 1,111 | 6.026786 | 0.473214 | 0.047407 | 0.148148 | 0.171852 | 0.422222 | 0.422222 | 0.219259 | 0.219259 | 0.219259 | 0.219259 | 0 | 0.046455 | 0.263726 | 1,111 | 35 | 124 | 31.742857 | 0.778729 | 0.040504 | 0 | 0.344828 | 1 | 0 | 0.152256 | 0.06203 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.103448 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b62746ea549f4b55c3d583736d4f4533db5b42da | 572 | py | Python | ReLERNN/imports.py | peterdfields/ReLERNN | 9187dca7f9068caa0b6490d389581084da3c0a1d | [
"MIT"
] | 14 | 2020-06-18T03:22:56.000Z | 2022-03-14T08:17:55.000Z | ReLERNN/imports.py | peterdfields/ReLERNN | 9187dca7f9068caa0b6490d389581084da3c0a1d | [
"MIT"
] | 10 | 2020-07-06T07:53:55.000Z | 2021-08-17T13:45:23.000Z | ReLERNN/imports.py | peterdfields/ReLERNN | 9187dca7f9068caa0b6490d389581084da3c0a1d | [
"MIT"
] | 5 | 2019-06-11T14:37:52.000Z | 2020-03-25T07:54:24.000Z | import glob
import pickle
import sys
import msprime as msp
import numpy as np
import os
import multiprocessing as mp
import shutil
import random
import copy
import argparse
import h5py
import allel
import time
from sklearn.neighbors import NearestNeighbors
from sklearn.utils import resample
import matplotlib as mpl
mpl.use('pdf')
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model, model_from_json
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, TerminateOnNaN
| 21.185185 | 85 | 0.844406 | 82 | 572 | 5.865854 | 0.512195 | 0.087318 | 0.118503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002004 | 0.127622 | 572 | 26 | 86 | 22 | 0.961924 | 0 | 0 | 0 | 0 | 0 | 0.005245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.956522 | 0 | 0.956522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
b630ab4753b0be00f46c81db30f53c9e533bb3f2 | 28,223 | py | Python | members_ownership/Rishav/data_prep_1.py | TeamEpicProjects/Customer-Prioritization-for-Marketing | 88dfbca68759394200b2c4ec042d3293591a567a | [
"MIT"
] | 2 | 2021-09-09T11:49:06.000Z | 2021-09-12T17:02:59.000Z | members_ownership/Rishav/data_prep_1.py | TeamEpicProjects/Customer-Prioritization-for-Marketing | 88dfbca68759394200b2c4ec042d3293591a567a | [
"MIT"
] | null | null | null | members_ownership/Rishav/data_prep_1.py | TeamEpicProjects/Customer-Prioritization-for-Marketing | 88dfbca68759394200b2c4ec042d3293591a567a | [
"MIT"
] | 2 | 2021-09-10T04:20:10.000Z | 2021-09-11T11:26:03.000Z | # This script merges and consolidates the beacons and sessions datasets into 1 dataset
from pandarallel import pandarallel
import pandas as pd
import datetime
import sys
import os
pandarallel.initialize(progress_bar=False, nb_workers=4)
base_path = os.path.dirname(os.path.realpath(__file__))
def df_info(df):
"""
Function to get information about a dataframe
"""
col_name_list = list(df.columns)
col_type_list = [type(col) for col in df.iloc[0, :]]
col_null_count_list = [df[col].isnull().sum() for col in col_name_list]
col_unique_count_list = [df[col].nunique() for col in col_name_list]
col_memory_usage_list = [df[col].memory_usage(deep=True) for col in col_name_list]
df_total_memory_usage = sum(col_memory_usage_list) / 1048576
return pd.DataFrame({'col_name': col_name_list, 'col_type': col_type_list, 'null_count': col_null_count_list, 'nunique': col_unique_count_list}), df_total_memory_usage
######################
# Due to memory constraint, we split this script using an argument
# We are simply going to check if an argument was supplied or not
# If not supplied, we shall execute processing stages 1, 2 & 3
# If supplied, we shall execute only stage 4
if len(sys.argv) < 2:
# Reading the dataset b_3m and printing its basic summary
print('\n{}\tReading raw data: b_3m.csv ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
in_filename = '../data/sanitized/subset/b_3m.csv'
df_b = pd.read_csv(os.path.join(base_path, in_filename))
df_b_info = df_info(df_b)
print('\n{}\t"b_3m" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_b.shape[0], df_b.shape[1], df_b_info[1]))
print(df_b_info[0].to_string())
print('\n"b_3m" dataset head:')
print(df_b.head().to_string())
# Reading the dataset s and printing its basic summary
print('\n{}\tReading raw data: s.csv ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
in_filename = '../data/sanitized/s.csv'
df_s = pd.read_csv(os.path.join(base_path, in_filename))
df_s_info = df_info(df_s)
print('\n{}\t"s" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_s.shape[0], df_s.shape[1], df_s_info[1]))
print(df_s_info[0].to_string())
print('\n"s" dataset head:')
print(df_s.head().to_string())
######################
# Dropping null values and converting column types for dataset b
print('\n{}\tProcessing stage 1: "b_3m" dataset: dropping rows with na, converting column types ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_b.dropna(inplace=True)
df_b.uuid = df_b.uuid.parallel_apply(lambda x: str(int(x)))
df_b.beacon_value = df_b.beacon_value.parallel_apply(lambda x: int(x))
df_b_info = df_info(df_b)
print('\n{}\t"b_3m" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_b.shape[0], df_b.shape[1], df_b_info[1]))
print(df_b_info[0].to_string())
print('\n"b_3m" dataset head:')
print(df_b.head().to_string())
# Dropping null values and converting column types for dataset s
print('\n{}\tProcessing stage 1: "s" dataset: dropping rows with na, dropping columns, converting column types ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_s.dropna(inplace=True)
df_s.drop(columns=['status'], inplace=True)
df_s.uuid = df_s.uuid.parallel_apply(lambda x: str(int(x)))
df_s.phone = df_s.phone.parallel_apply(lambda x: str(int(x)))
df_s.email = df_s.email.parallel_apply(lambda x: str(int(x)))
df_s.log_date = df_s.log_date.parallel_apply(lambda x: datetime.datetime.strptime(x[:10], '%Y-%m-%d').date())
df_s_info = df_info(df_s)
print('\n{}\t"s" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_s.shape[0], df_s.shape[1], df_s_info[1]))
print(df_s_info[0].to_string())
print('\n"s" dataset head:')
print(df_s.head().to_string())
######################
print('\n{}\tProcessing stage 2: consolidating "b_3m" dataset by date and uuid ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
# Preparing to consolidate the dataset b_3m
df_b_gb_date_uuid = df_b.groupby(['log_date', 'uuid'])
df_b_cb_date_uuid = pd.DataFrame(df_b_gb_date_uuid.groups.keys(), columns=['date', 'uuid'])
# Feature consolidation
print('{}\t\tConsolidating sum_beacon_value ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_b_cb_date_uuid['sum_beacon_value'] = df_b_cb_date_uuid.parallel_apply(lambda x: df_b_gb_date_uuid.get_group((x[0], x[1]))['beacon_value'].values.sum(), axis=1)
print('{}\t\tConsolidating nunique_beacon_type ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_b_cb_date_uuid['nunique_beacon_type'] = df_b_cb_date_uuid.parallel_apply(lambda x: df_b_gb_date_uuid.get_group((x[0], x[1]))['beacon_type'].nunique(), axis=1)
print('{}\t\tConsolidating count_user_stay ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_b_cb_date_uuid['count_user_stay'] = df_b_cb_date_uuid.parallel_apply(lambda x: (df_b_gb_date_uuid.get_group((x[0], x[1]))['beacon_type'].values == 'user_stay').sum(), axis=1)
print('{}\t\tConsolidating count_pay_attempt ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_b_cb_date_uuid['count_pay_attempt'] = df_b_cb_date_uuid.parallel_apply(lambda x: df_b_gb_date_uuid.get_group((x[0], x[1]))['beacon_type'].str.contains('pay').sum(), axis=1)
print('{}\t\tConsolidating count_buy_click ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_b_cb_date_uuid['count_buy_click'] = df_b_cb_date_uuid.parallel_apply(lambda x: df_b_gb_date_uuid.get_group((x[0], x[1]))['beacon_type'].str.contains('buy|bottom').sum(), axis=1)
# Printing the summary of the consolidated beacons table
df_b_cb_date_uuid_info = df_info(df_b_cb_date_uuid)
print('\n{}\t"b_3m_cb_date_uuid" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_b_cb_date_uuid.shape[0], df_b_cb_date_uuid.shape[1], df_b_cb_date_uuid_info[1]))
print(df_b_cb_date_uuid_info[0].to_string())
print('\n"b_3m_cb_date_uuid" dataset head:')
print(df_b_cb_date_uuid.head().to_string())
######################
# Merging the consolidated dataset b with s
print('\n{}\tProcessing stage 3: merging "s" with "b_3m_cb_date_uuid" ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged = df_b_cb_date_uuid.merge(df_s.drop(columns=['log_date']), on='uuid', how='inner')
out_filename = '../data/sanitized/processed_base/bs_merged_3m.csv'
df_bs_merged.to_csv(os.path.join(base_path, out_filename), index=False)
df_bs_merged_info = df_info(df_bs_merged)
print('\n{}\t"bs_merged" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_bs_merged.shape[0], df_bs_merged.shape[1], df_bs_merged_info[1]))
print(df_bs_merged_info[0].to_string())
print('\n"bs_merged" dataset head:')
print(df_bs_merged.head().to_string())
else:
# Reading the consolidated dataset and printing its summary
print('\n{}\tReading dataset: bs_merged_3m.csv ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
in_filename = '../data/sanitized/processed_base/bs_merged_3m.csv'
df_bs_merged = pd.read_csv(os.path.join(base_path, in_filename))
df_bs_merged_info = df_info(df_bs_merged)
print('\n{}\t"bs_merged" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_bs_merged.shape[0], df_bs_merged.shape[1], df_bs_merged_info[1]))
print(df_bs_merged_info[0].to_string())
print('\n"bs_merged" dataset head:')
print(df_bs_merged.head().to_string())
######################
# Consolidating the merged dataset by date and email
print('\n{}\tProcessing stage 4: consolidating "bs_merged" dataset by date and email ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_gb_date_email = df_bs_merged.groupby(['date', 'email'])
df_bs_merged_cb_date_email = pd.DataFrame(df_bs_merged_gb_date_email.groups.keys(), columns=['date', 'email'])
print('{}\t\tConsolidating count_sessions ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['count_sessions'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: len(df_bs_merged_gb_date_email.get_group((x[0], x[1]))), axis=1)
print('{}\t\tConsolidating sum_beacon_value ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['sum_beacon_value'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['sum_beacon_value'].values.sum(), axis=1)
print('{}\t\tConsolidating nunique_beacon_type ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['nunique_beacon_type'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['nunique_beacon_type'].values.sum(), axis=1)
print('{}\t\tConsolidating count_user_stay ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['count_user_stay'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['count_user_stay'].values.sum(), axis=1)
print('{}\t\tConsolidating count_pay_attempt ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['count_pay_attempt'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['count_pay_attempt'].values.sum(), axis=1)
print('{}\t\tConsolidating count_buy_click ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['count_buy_click'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['count_buy_click'].values.sum(), axis=1)
print('{}\t\tConsolidating nunique_gender ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['nunique_gender'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['gender'].nunique(), axis=1)
print('{}\t\tConsolidating nunique_dob ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['nunique_dob'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['dob'].nunique(), axis=1)
print('{}\t\tConsolidating nunique_language ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['nunique_language'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['language'].nunique(), axis=1)
print('{}\t\tConsolidating nunique_report_type ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['nunique_report_type'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['report_type'].nunique(), axis=1)
print('{}\t\tConsolidating nunique_device ...'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
df_bs_merged_cb_date_email['nunique_device'] = df_bs_merged_cb_date_email.parallel_apply(lambda x: df_bs_merged_gb_date_email.get_group((x[0], x[1]))['device'].nunique(), axis=1)
out_filename = '../data/sanitized/processed_base/bs_merged_consolidated_3m.csv'
df_bs_merged_cb_date_email.to_csv(os.path.join(base_path, out_filename), index=False)
df_bs_merged_cb_date_email_info = df_info(df_bs_merged_cb_date_email)
print('\n{}\t"bs_merged_cb_date_email" dataset summary:'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
print('\t{} rows x {} columns | {:.2f} MB approx memory usage'.format(df_bs_merged_cb_date_email.shape[0], df_bs_merged_cb_date_email.shape[1], df_bs_merged_cb_date_email_info[1]))
print(df_bs_merged_cb_date_email_info[0].to_string())
print('\n"bs_merged_cb_date_email" dataset head:')
print(df_bs_merged_cb_date_email.head().to_string())
######################
# /home/ngkpg/anaconda3/envs/pyconda37/bin/python3.7 /home/ngkpg/Documents/Packt_GP/GP1/code/data_prep_1.py
# INFO: Pandarallel will run on 4 workers.
# INFO: Pandarallel will use Memory file system to transfer data between the main process and workers.
#
# 2021-09-18 22:05:25 Reading raw data: b_3m.csv ...
#
# 2021-09-18 22:05:29 "b_3m" dataset summary:
# 6970265 rows x 4 columns | 993.62 MB approx memory usage
# col_name col_type null_count nunique
# 0 uuid <class 'numpy.float64'> 2 1794005
# 1 beacon_type <class 'str'> 2 42
# 2 beacon_value <class 'numpy.float64'> 2 277
# 3 log_date <class 'str'> 0 92
#
# "b_3m" dataset head:
# uuid beacon_type beacon_value log_date
# 0 8264419.0 user_stay 2.0 2021-05-01
# 1 8264429.0 masked_content 1.0 2021-05-01
# 2 8264423.0 user_stay 2.0 2021-05-01
# 3 8264430.0 bottom_banner 1.0 2021-05-01
# 4 8264421.0 user_stay 3.0 2021-05-01
#
# 2021-09-18 22:05:29 Reading raw data: s.csv ...
#
# 2021-09-18 22:05:50 "s" dataset summary:
# 9095602 rows x 10 columns | 3655.10 MB approx memory usage
# col_name col_type null_count nunique
# 0 uuid <class 'numpy.int64'> 0 9095602
# 1 phone <class 'numpy.float64'> 977 3399997
# 2 status <class 'numpy.int64'> 0 1
# 3 gender <class 'str'> 4765 6
# 4 dob <class 'str'> 20 36934
# 5 language <class 'str'> 398 17
# 6 email <class 'numpy.float64'> 733 3259793
# 7 report_type <class 'str'> 70 81
# 8 device <class 'str'> 187 5
# 9 log_date <class 'str'> 0 8285461
#
# "s" dataset head:
# uuid phone status gender dob language email report_type device log_date
# 0 10058150 145.0 1 Male 00000000 TAM 0.0 LS-MT mobile 2019-02-26 16:07:25
# 1 0 145.0 1 Male 00000000 TAM 0.0 LS-MT mobile 2019-02-26 16:12:08
# 2 1 145.0 1 Male 00000000 TAM 0.0 LS-MT mobile 2019-02-26 16:33:00
# 3 10058153 607734.0 1 Female 00000000 TEL 1.0 LS-MP mobile 2019-02-26 16:44:19
# 4 26 607735.0 1 Female 00000000 TAM 2.0 LS-MT mobile 2019-02-26 16:44:32
#
# 2021-09-18 22:05:50 Processing stage 1: "b_3m" dataset: dropping rows with na, converting column types ...
#
# 2021-09-18 22:05:58 "b_3m" dataset summary:
# 6970263 rows x 4 columns | 1578.80 MB approx memory usage
# col_name col_type null_count nunique
# 0 uuid <class 'str'> 0 1794005
# 1 beacon_type <class 'str'> 0 42
# 2 beacon_value <class 'numpy.int64'> 0 277
# 3 log_date <class 'str'> 0 92
#
# "b_3m" dataset head:
# uuid beacon_type beacon_value log_date
# 0 8264419 user_stay 2 2021-05-01
# 1 8264429 masked_content 1 2021-05-01
# 2 8264423 user_stay 2 2021-05-01
# 3 8264430 bottom_banner 1 2021-05-01
# 4 8264421 user_stay 3 2021-05-01
#
# 2021-09-18 22:05:58 Processing stage 1: "s" dataset: dropping rows with na, dropping columns, converting column types ...
#
# 2021-09-18 22:06:50 "s" dataset summary:
# 9088534 rows x 9 columns | 5340.69 MB approx memory usage
# col_name col_type null_count nunique
# 0 uuid <class 'str'> 0 9088534
# 1 phone <class 'str'> 0 3398850
# 2 gender <class 'str'> 0 6
# 3 dob <class 'str'> 0 36751
# 4 language <class 'str'> 0 17
# 5 email <class 'str'> 0 3258713
# 6 report_type <class 'str'> 0 78
# 7 device <class 'str'> 0 5
# 8 log_date <class 'datetime.date'> 0 827
#
# "s" dataset head:
# uuid phone gender dob language email report_type device log_date
# 0 10058150 145 Male 00000000 TAM 0 LS-MT mobile 2019-02-26
# 1 0 145 Male 00000000 TAM 0 LS-MT mobile 2019-02-26
# 2 1 145 Male 00000000 TAM 0 LS-MT mobile 2019-02-26
# 3 10058153 607734 Female 00000000 TEL 1 LS-MP mobile 2019-02-26
# 4 26 607735 Female 00000000 TAM 2 LS-MT mobile 2019-02-26
#
# 2021-09-18 22:06:50 Processing stage 2: consolidating "b_3m" dataset by date and uuid ...
# 2021-09-18 22:07:11 Consolidating sum_beacon_value ...
# 2021-09-18 22:08:54 Consolidating nunique_beacon_type ...
# 2021-09-18 22:11:07 Consolidating count_user_stay ...
# 2021-09-18 22:12:53 Consolidating count_pay_attempt ...
# 2021-09-18 22:16:14 Consolidating count_buy_click ...
#
# 2021-09-18 22:19:37 "b_3m_cb_date_uuid" dataset summary:
# 1811593 rows x 7 columns | 295.49 MB approx memory usage
# col_name col_type null_count nunique
# 0 date <class 'str'> 0 92
# 1 uuid <class 'str'> 0 1794005
# 2 sum_beacon_value <class 'numpy.int64'> 0 1563
# 3 nunique_beacon_type <class 'numpy.int64'> 0 13
# 4 count_user_stay <class 'numpy.int64'> 0 163
# 5 count_pay_attempt <class 'numpy.int64'> 0 42
# 6 count_buy_click <class 'numpy.int64'> 0 78
#
# "b_3m_cb_date_uuid" dataset head:
# date uuid sum_beacon_value nunique_beacon_type count_user_stay count_pay_attempt count_buy_click
# 0 2021-05-01 1446394 355 13 28 141 152
# 1 2021-05-01 4167303 3 1 0 3 0
# 2 2021-05-01 6005417 1 1 0 1 0
# 3 2021-05-01 7017557 1 1 1 0 0
# 4 2021-05-01 7192621 1 1 0 1 0
#
# 2021-09-18 22:19:37 Processing stage 3: merging "s" with "b_3m_cb_date_uuid" ...
#
# 2021-09-18 22:20:24 "bs_merged" dataset summary:
# 1604430 rows x 14 columns | 1107.75 MB approx memory usage
# col_name col_type null_count nunique
# 0 date <class 'str'> 0 92
# 1 uuid <class 'str'> 0 1597911
# 2 sum_beacon_value <class 'numpy.int64'> 0 1556
# 3 nunique_beacon_type <class 'numpy.int64'> 0 13
# 4 count_user_stay <class 'numpy.int64'> 0 163
# 5 count_pay_attempt <class 'numpy.int64'> 0 34
# 6 count_buy_click <class 'numpy.int64'> 0 34
# 7 phone <class 'str'> 0 835655
# 8 gender <class 'str'> 0 6
# 9 dob <class 'str'> 0 30252
# 10 language <class 'str'> 0 14
# 11 email <class 'str'> 0 824412
# 12 report_type <class 'str'> 0 68
# 13 device <class 'str'> 0 5
#
# "bs_merged" dataset head:
# date uuid sum_beacon_value nunique_beacon_type count_user_stay count_pay_attempt count_buy_click phone gender dob language email report_type device
# 0 2021-05-01 1446394 355 13 28 141 152 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 1 2021-05-02 1446394 96 5 0 33 63 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 2 2021-05-03 1446394 157 11 17 33 97 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 3 2021-05-04 1446394 229 11 28 32 139 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 4 2021-05-05 1446394 290 12 40 41 140 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
######################
# /home/ngkpg/anaconda3/envs/pyconda37/bin/python3.7 /home/ngkpg/Documents/Packt_GP/GP1/code/data_prep_1.py skip
# INFO: Pandarallel will run on 4 workers.
# INFO: Pandarallel will use Memory file system to transfer data between the main process and workers.
#
# 2021-09-18 23:15:26 Reading dataset: bs_merged_3m.csv ...
#
# 2021-09-18 23:15:28 "bs_merged" dataset summary:
# 1604430 rows x 14 columns | 680.17 MB approx memory usage
# col_name col_type null_count nunique
# 0 date <class 'str'> 0 92
# 1 uuid <class 'numpy.int64'> 0 1597911
# 2 sum_beacon_value <class 'numpy.int64'> 0 1556
# 3 nunique_beacon_type <class 'numpy.int64'> 0 13
# 4 count_user_stay <class 'numpy.int64'> 0 163
# 5 count_pay_attempt <class 'numpy.int64'> 0 34
# 6 count_buy_click <class 'numpy.int64'> 0 34
# 7 phone <class 'numpy.int64'> 0 835655
# 8 gender <class 'str'> 0 6
# 9 dob <class 'str'> 0 30252
# 10 language <class 'str'> 0 14
# 11 email <class 'numpy.int64'> 0 824412
# 12 report_type <class 'str'> 0 68
# 13 device <class 'str'> 0 5
#
# "bs_merged" dataset head:
# date uuid sum_beacon_value nunique_beacon_type count_user_stay count_pay_attempt count_buy_click phone gender dob language email report_type device
# 0 2021-05-01 1446394 355 13 28 141 152 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 1 2021-05-02 1446394 96 5 0 33 63 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 2 2021-05-03 1446394 157 11 17 33 97 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 3 2021-05-04 1446394 229 11 28 32 139 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
# 4 2021-05-05 1446394 290 12 40 41 140 395287 Male 1996-06-14 HIN 524282 LS-MT mobile
#
# 2021-09-18 23:15:28 Processing stage 4: consolidating "bs_merged" dataset by date and email ...
# 2021-09-18 23:15:37 Consolidating count_sessions ...
# 2021-09-18 23:16:15 Consolidating sum_beacon_value ...
# 2021-09-18 23:17:10 Consolidating nunique_beacon_type ...
# 2021-09-18 23:18:05 Consolidating count_user_stay ...
# 2021-09-18 23:19:01 Consolidating count_pay_attempt ...
# 2021-09-18 23:19:56 Consolidating count_buy_click ...
# 2021-09-18 23:20:53 Consolidating nunique_gender ...
# 2021-09-18 23:22:09 Consolidating nunique_dob ...
# 2021-09-18 23:23:22 Consolidating nunique_language ...
# 2021-09-18 23:24:35 Consolidating nunique_report_type ...
# 2021-09-18 23:25:48 Consolidating nunique_device ...
#
# 2021-09-18 23:27:04 "bs_merged_cb_date_email" dataset summary:
# 1064216 rows x 13 columns | 165.43 MB approx memory usage
# col_name col_type null_count nunique
# 0 date <class 'str'> 0 92
# 1 email <class 'numpy.int64'> 0 824412
# 2 count_sessions <class 'numpy.int64'> 0 56
# 3 sum_beacon_value <class 'numpy.int64'> 0 2549
# 4 nunique_beacon_type <class 'numpy.int64'> 0 62
# 5 count_user_stay <class 'numpy.int64'> 0 237
# 6 count_pay_attempt <class 'numpy.int64'> 0 45
# 7 count_buy_click <class 'numpy.int64'> 0 42
# 8 nunique_gender <class 'numpy.int64'> 0 3
# 9 nunique_dob <class 'numpy.int64'> 0 42
# 10 nunique_language <class 'numpy.int64'> 0 8
# 11 nunique_report_type <class 'numpy.int64'> 0 13
# 12 nunique_device <class 'numpy.int64'> 0 5
#
# "bs_merged_cb_date_email" dataset head:
# date email count_sessions sum_beacon_value nunique_beacon_type count_user_stay count_pay_attempt count_buy_click nunique_gender nunique_dob nunique_language nunique_report_type nunique_device
# 0 2021-05-01 125 3 30 3 12 0 0 2 2 1 1 1
# 1 2021-05-01 141 5 39 5 16 0 0 2 5 4 2 2
# 2 2021-05-01 195 1 10 1 4 0 0 1 1 1 1 1
# 3 2021-05-01 645 1 10 1 4 0 0 1 1 1 1 1
# 4 2021-05-01 798 1 3 1 2 0 0 1 1 1 1 1
| 70.734336 | 215 | 0.570563 | 4,092 | 28,223 | 3.713832 | 0.090176 | 0.04264 | 0.040798 | 0.032243 | 0.777719 | 0.735343 | 0.693755 | 0.624268 | 0.602948 | 0.588932 | 0 | 0.122404 | 0.30383 | 28,223 | 398 | 216 | 70.91206 | 0.651059 | 0.542395 | 0 | 0.356589 | 0 | 0.015504 | 0.279528 | 0.026394 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007752 | false | 0 | 0.03876 | 0 | 0.054264 | 0.496124 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
b64517ec4f6d43ec7b9cf51a4bfdace68382f169 | 1,684 | py | Python | tests/graphql/objects/and_or_not_permissions/objects.py | karlosss/simple_api | 03f87035c648f161d5e7a59b24f4e04bd34399f1 | [
"MIT"
] | 2 | 2020-11-13T14:00:06.000Z | 2020-12-19T11:50:22.000Z | tests/graphql/objects/and_or_not_permissions/objects.py | karlosss/simple_api | 03f87035c648f161d5e7a59b24f4e04bd34399f1 | [
"MIT"
] | 5 | 2021-02-04T14:27:43.000Z | 2021-06-04T23:22:24.000Z | tests/graphql/objects/and_or_not_permissions/objects.py | karlosss/simple_api | 03f87035c648f161d5e7a59b24f4e04bd34399f1 | [
"MIT"
] | 1 | 2021-01-06T13:54:38.000Z | 2021-01-06T13:54:38.000Z | from simple_api.adapters.graphql.graphql import GraphQLAdapter
from simple_api.adapters.utils import generate
from simple_api.object.actions import Action
from simple_api.object.datatypes import BooleanType
from simple_api.object.permissions import AllowNone, Not, AllowAll, Or, And
from tests.graphql.graphql_test_utils import build_patterns
actions = {
"allow1": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=Not(AllowNone)),
"allow2": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=Or(AllowAll, AllowNone)),
"allow3": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=And(AllowAll, AllowAll)),
"allow4": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=(AllowAll, AllowAll)),
"allow5": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True,
permissions=Or(Not(AllowAll), Not(And(AllowAll, AllowNone)))),
"deny1": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=Not(AllowAll)),
"deny2": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=And(AllowAll, AllowNone)),
"deny3": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=Or(AllowNone, AllowNone)),
"deny4": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True, permissions=(AllowNone, AllowNone)),
"deny5": Action(return_value=BooleanType(), exec_fn=lambda **kwargs: True,
permissions=And(Not(And(AllowAll, AllowNone)), Not(AllowAll))),
}
schema = generate(GraphQLAdapter, actions)
patterns = build_patterns(schema)
| 62.37037 | 117 | 0.748812 | 202 | 1,684 | 6.09901 | 0.207921 | 0.097403 | 0.137987 | 0.227273 | 0.525162 | 0.525162 | 0.525162 | 0.525162 | 0.525162 | 0.525162 | 0 | 0.006711 | 0.115202 | 1,684 | 26 | 118 | 64.769231 | 0.820134 | 0 | 0 | 0 | 1 | 0 | 0.03266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b6493add2b6c73278cf7ee4ca1cf5048f5519767 | 4,433 | py | Python | climetlab/arguments/transformers.py | pmaciel/climetlab | faa0077d615aed05f9e0b4cc2e73f1cf7a9ae61c | [
"Apache-2.0"
] | null | null | null | climetlab/arguments/transformers.py | pmaciel/climetlab | faa0077d615aed05f9e0b4cc2e73f1cf7a9ae61c | [
"Apache-2.0"
] | null | null | null | climetlab/arguments/transformers.py | pmaciel/climetlab | faa0077d615aed05f9e0b4cc2e73f1cf7a9ae61c | [
"Apache-2.0"
] | null | null | null | # (C) Copyright 2021 ECMWF.
#
# This software is licensed under the terms of the Apache Licence Version 2.0
# which can be obtained at http://www.apache.org/licenses/LICENSE-2.0.
# In applying this licence, ECMWF does not waive the privileges and immunities
# granted to it by virtue of its status as an intergovernmental organisation
# nor does it submit to any jurisdiction.
#
import logging
from climetlab.arguments.climetlab_types import Type
from climetlab.vocabularies.aliases import unalias
LOG = logging.getLogger(__name__)
class _all:
def __repr__(self):
return "climetlab.ALL"
ALL = _all()
class Action:
def execute(self, kwargs):
raise NotImplementedError()
def __repr__(self) -> str:
return f"{self.__class__}"
class ArgumentTransformer(Action):
def __init__(self, owner):
self.owner = owner
def execute(self, kwargs):
if self.name in kwargs: # TODO: discuss that
kwargs[self.name] = self.transform(kwargs[self.name])
return kwargs
def transform(self, value):
raise NotImplementedError(self.__class__.__name__)
@property
def name(self):
if self.owner is None:
return "-"
if isinstance(self.owner, str):
return self.owner
return self.owner.name
class _TypedTransformer(ArgumentTransformer):
def __init__(self, owner, type) -> None:
super().__init__(owner)
self.type = type if isinstance(type, Type) else type()
class AliasTransformer(_TypedTransformer):
def __init__(self, owner, type, aliases) -> None:
super().__init__(owner, type)
self.aliases = aliases
if isinstance(self.aliases, str):
self.unalias = self.from_string
return
if isinstance(self.aliases, dict):
self.unalias = self.from_dict
return
if callable(self.aliases):
self.unalias = self.aliases
return
self.unalias = self.unsupported
def unsupported(self, value):
raise NotImplementedError(self.aliases)
def from_string(self, value):
return unalias(self.aliases, value)
def from_dict(self, value):
try:
return self.aliases[value]
except KeyError: # No alias for this value
pass
except TypeError: # if value is not hashable
pass
return value
def _transform_one(self, value):
old = object()
while old != value:
old = value
value = self.unalias(old)
LOG.debug(" Unalias %s --> %s", old, value)
return value
def transform(self, value):
LOG.debug(" Unaliasing %s", value)
if isinstance(value, list):
return [self._transform_one(v) for v in value]
if isinstance(value, tuple):
return tuple([self._transform_one(v) for v in value])
return self._transform_one(value)
def __repr__(self) -> str:
return f"AliasTransformer({self.owner},{self.aliases},{self.type})"
class FormatTransformer(_TypedTransformer):
def __init__(self, owner, format, type) -> None:
super().__init__(owner, type)
self.format = format
def transform(self, value):
if value is None:
return value
return self.type.format(value, self.format)
def __repr__(self) -> str:
return f"FormatTransformer({self.owner},{self.format},{self.type})"
class TypeTransformer(_TypedTransformer):
def __init__(self, owner, type):
super().__init__(owner, type)
def transform(self, value):
if value is None:
return value
return self.type.cast(value)
def __repr__(self) -> str:
return f"TypeTransformer({self.owner},{self.type}"
class AvailabilityChecker(Action):
def __init__(self, availability) -> None:
self.availability = availability
def execute(self, kwargs):
LOG.debug("Checking availability for %s", kwargs)
assert isinstance(kwargs, dict), kwargs
without_none = {k: v for k, v in kwargs.items() if v is not None}
self.availability.check(without_none)
return kwargs
def __repr__(self) -> str:
txt = "Availability:"
for line in self.availability.tree().split("\n"):
if line:
txt += "\n " + line
return txt
| 27.880503 | 78 | 0.626438 | 523 | 4,433 | 5.116635 | 0.25239 | 0.043722 | 0.024664 | 0.026158 | 0.201794 | 0.146487 | 0.084454 | 0.065022 | 0.044096 | 0.044096 | 0 | 0.002488 | 0.274532 | 4,433 | 158 | 79 | 28.056962 | 0.829602 | 0.097 | 0 | 0.256881 | 0 | 0 | 0.067903 | 0.038587 | 0 | 0 | 0 | 0.006329 | 0.009174 | 1 | 0.220183 | false | 0.018349 | 0.027523 | 0.055046 | 0.550459 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b64af0874db7513a282cd648f069e473670a402f | 996 | py | Python | account/views/user.py | NTUSA/fudez-app | 91c8d85238fb642488ef5616be441e4417f21d0a | [
"MIT"
] | 2 | 2017-04-13T08:52:19.000Z | 2018-05-07T12:14:34.000Z | account/views/user.py | NTUSA/fudez-app | 91c8d85238fb642488ef5616be441e4417f21d0a | [
"MIT"
] | null | null | null | account/views/user.py | NTUSA/fudez-app | 91c8d85238fb642488ef5616be441e4417f21d0a | [
"MIT"
] | 1 | 2018-07-28T16:09:59.000Z | 2018-07-28T16:09:59.000Z | from django.shortcuts import get_object_or_404
from rest_framework import viewsets, permissions
from rest_framework.decorators import detail_route, list_route
from rest_framework.response import Response
from account.models import User
from account.serializers import UserSerializer, SimpleUserSerializer, FullUserSerializer, UserOverwriteSerializer
from .permissions import IsAdminUserOrReadOnly
class UserViewSet(viewsets.ModelViewSet):
lookup_field = 'username'
queryset = User.objects.all()
permission_classes = (permissions.IsAuthenticated, IsAdminUserOrReadOnly)
def get_serializer_class(self):
if self.request.method in permissions.SAFE_METHODS:
if self.action == 'list':
return SimpleUserSerializer
else:
return FullUserSerializer
else:
if self.request.method.lower() == 'post':
return UserSerializer
else:
return UserOverwriteSerializer
| 34.344828 | 113 | 0.7249 | 96 | 996 | 7.385417 | 0.541667 | 0.03385 | 0.071932 | 0.053597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003866 | 0.220884 | 996 | 28 | 114 | 35.571429 | 0.909794 | 0 | 0 | 0.136364 | 0 | 0 | 0.016064 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.318182 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b64ea61e9feea2c4b2afa6eb14912a58e6654b6f | 130 | py | Python | vendor/tango_tree/config.py | marehr/veb-data-structures | 4f21f1d78bb270ff89bafb27177388f5999ea3ad | [
"MIT"
] | null | null | null | vendor/tango_tree/config.py | marehr/veb-data-structures | 4f21f1d78bb270ff89bafb27177388f5999ea3ad | [
"MIT"
] | null | null | null | vendor/tango_tree/config.py | marehr/veb-data-structures | 4f21f1d78bb270ff89bafb27177388f5999ea3ad | [
"MIT"
] | null | null | null | import os
import sys
# load binary_tree
path = os.path.abspath('..')
if path not in sys.path:
sys.path.append(path)
del path
| 11.818182 | 28 | 0.7 | 23 | 130 | 3.913043 | 0.565217 | 0.155556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176923 | 130 | 10 | 29 | 13 | 0.841122 | 0.123077 | 0 | 0 | 0 | 0 | 0.017857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
b64fa0d4c1acd16b3d5af8656c560a61ce7f686a | 620 | py | Python | Firmware/adctest.py | sanielfishawy/ODrive | 6b77a17bda88f2d709ae4d3d26a81c7aed50d384 | [
"MIT"
] | 1,068 | 2016-05-31T22:39:08.000Z | 2020-12-20T22:13:01.000Z | Firmware/adctest.py | sanielfishawy/ODrive | 6b77a17bda88f2d709ae4d3d26a81c7aed50d384 | [
"MIT"
] | 389 | 2017-10-16T09:44:20.000Z | 2020-12-21T14:19:38.000Z | Firmware/adctest.py | sanielfishawy/ODrive | 6b77a17bda88f2d709ae4d3d26a81c7aed50d384 | [
"MIT"
] | 681 | 2016-06-12T01:41:00.000Z | 2020-12-21T12:49:32.000Z |
import numpy as np
import matplotlib.pyplot as plt
adchist = [(0, 137477),
(1, 98524),
(2, 71744),
(3, 60967),
(4, 44372),
(5, 46348),
(6, 19944),
(7, 10092),
(8, 13713),
(9, 11182),
(10, 6903),
(11, 4072),
(12, 2642),
(13, 968),
(14, 296),
(15, 166),
(16, 17),
(17, 2),
(-1, 39662),
(-2, 43502),
(-3, 57596),
(-4, 33915),
(-5, 25611),
(-6, 10880),
(-7, 8237),
(-8, 3518),
(-9, 4789),
(-10, 4689),
(-11, 6345),
(-12, 3901),
(-13, 5781),
(-14, 4803),
(-15, 6428),
(-16, 3563),
(-17, 4478),
(-18, 976),
(-19, 491)]
adchist.sort()
adchist = np.array(adchist)
plt.figure()
plt.bar(adchist[:,0], adchist[:,1])
plt.show() | 12.653061 | 35 | 0.530645 | 100 | 620 | 3.29 | 0.68 | 0.048632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40458 | 0.154839 | 620 | 49 | 36 | 12.653061 | 0.223282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b6676df32798e39afffcc6ddaceea2dd644ae976 | 18,646 | py | Python | dlkit/json_/relationship/objects.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 2 | 2018-02-23T12:16:11.000Z | 2020-10-08T17:54:24.000Z | dlkit/json_/relationship/objects.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 87 | 2017-04-21T18:57:15.000Z | 2021-12-13T19:43:57.000Z | dlkit/json_/relationship/objects.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 1 | 2018-03-01T16:44:25.000Z | 2018-03-01T16:44:25.000Z | """JSON implementations of relationship objects."""
# pylint: disable=no-init
# Numerous classes don't require __init__.
# pylint: disable=too-many-public-methods,too-few-public-methods
# Number of methods are defined in specification
# pylint: disable=protected-access
# Access to protected methods allowed in package json package scope
# pylint: disable=too-many-ancestors
# Inheritance defined in specification
import importlib
from . import default_mdata
from .. import utilities
from ..osid import objects as osid_objects
from ..osid.metadata import Metadata
from ..primitives import Id
from ..utilities import get_provider_manager
from ..utilities import get_registry
from ..utilities import update_display_text_defaults
from dlkit.abstract_osid.osid import errors
from dlkit.abstract_osid.relationship import objects as abc_relationship_objects
from dlkit.primordium.id.primitives import Id
class Relationship(abc_relationship_objects.Relationship, osid_objects.OsidRelationship):
"""A ``Relationship`` is an object between two peers.
The genus type indicates the relationship between the peer and the
related peer.
"""
_namespace = 'relationship.Relationship'
def __init__(self, **kwargs):
osid_objects.OsidObject.__init__(self, object_name='RELATIONSHIP', **kwargs)
self._catalog_name = 'Family'
def get_source_id(self):
"""Gets the from peer ``Id`` in this relationship.
return: (osid.id.Id) - the peer
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.relationship.Relationship.get_source_id
return Id(self._my_map['sourceId'])
source_id = property(fget=get_source_id)
def get_destination_id(self):
"""Gets the to peer ``Id`` in this relationship.
return: (osid.id.Id) - the related peer
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.relationship.Relationship.get_source_id
return Id(self._my_map['destinationId'])
destination_id = property(fget=get_destination_id)
@utilities.arguments_not_none
def get_relationship_record(self, relationship_record_type):
"""Gets the relationshop record corresponding to the given ``Relationship`` record ``Type``.
This method is used to retrieve an object implementing the
requested record. The ``relationship_record_type`` may be the
``Type`` returned in ``get_record_types()`` or any of its
parents in a ``Type`` hierarchy where
``has_record_type(relationship_record_type)`` is ``true`` .
arg: relationship_record_type (osid.type.Type): the type of
relationship record to retrieve
return: (osid.relationship.records.RelationshipRecord) - the
relationship record
raise: NullArgument - ``relationship_record_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure occurred
raise: Unsupported -
``has_record_type(relationship_record_type)`` is
``false``
*compliance: mandatory -- This method must be implemented.*
"""
return self._get_record(relationship_record_type)
class RelationshipForm(abc_relationship_objects.RelationshipForm, osid_objects.OsidRelationshipForm):
"""This is the form for creating and updating ``Relationships``.
Like all ``OsidForm`` objects, various data elements may be set here
for use in the create and update methods in the
``RelationshipAdminSession``. For each data element that may be set,
metadata may be examined to provide display hints or data
constraints.
"""
_namespace = 'relationship.Relationship'
def __init__(self, **kwargs):
osid_objects.OsidRelationshipForm.__init__(self, object_name='RELATIONSHIP', **kwargs)
self._mdata = default_mdata.get_relationship_mdata()
self._init_metadata(**kwargs)
if not self.is_for_update():
self._init_map(**kwargs)
def _init_metadata(self, **kwargs):
"""Initialize form metadata"""
osid_objects.OsidRelationshipForm._init_metadata(self, **kwargs)
def _init_map(self, record_types=None, **kwargs):
"""Initialize form map"""
osid_objects.OsidRelationshipForm._init_map(self, record_types=record_types)
self._my_map['sourceId'] = str(kwargs['source_id'])
self._my_map['destinationId'] = str(kwargs['destination_id'])
self._my_map['assignedFamilyIds'] = [str(kwargs['family_id'])]
@utilities.arguments_not_none
def get_relationship_form_record(self, relationship_record_type):
"""Gets the ``RelationshipFormRecord`` corresponding to the given relationship record ``Type``.
arg: relationship_record_type (osid.type.Type): a
relationship record type
return: (osid.relationship.records.RelationshipFormRecord) - the
relationship form record
raise: NullArgument - ``relationship_record_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure occurred
raise: Unsupported -
``has_record_type(relationship_record_type)`` is
``false``
*compliance: mandatory -- This method must be implemented.*
"""
return self._get_record(relationship_record_type)
class RelationshipList(abc_relationship_objects.RelationshipList, osid_objects.OsidList):
"""Like all ``OsidLists,`` ``Relationship`` provides a means for accessing ``Relationship`` elements sequentially either one at a time or many at a time.
Examples: while (rl.hasNext()) { Relationship relationship =
rl.getNextRelationship(); }
or
while (rl.hasNext()) {
Relationship[] relationships = rl.getNextRelationships(rl.available());
}
"""
def get_next_relationship(self):
"""Gets the next ``Relationship`` in this list.
return: (osid.relationship.Relationship) - the next
``Relationship`` in this list. The ``has_next()`` method
should be used to test that a next ``Relationship`` is
available before calling this method.
raise: IllegalState - no more elements available in this list
raise: OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.resource.ResourceList.get_next_resource
return next(self)
def next(self):
return self._get_next_object(Relationship)
__next__ = next
next_relationship = property(fget=get_next_relationship)
@utilities.arguments_not_none
def get_next_relationships(self, n):
"""Gets the next set of ``Relationships`` elements in this list.
The specified amount must be less than or equal to the return
from ``available()``.
arg: n (cardinal): the number of ``Relationship`` elements
requested which must be less than or equal to
``available()``
return: (osid.relationship.Relationship) - an array of
``Relationship`` elements.The length of the array is
less than or equal to the number specified.
raise: IllegalState - no more elements available in this list
raise: OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.resource.ResourceList.get_next_resources
return self._get_next_n(RelationshipList, number=n)
class Family(abc_relationship_objects.Family, osid_objects.OsidCatalog):
"""A ``Family`` represents a collection of relationships.
Like all OSID objects, a ``Family`` is identified by its ``Id`` and
any persisted references should use the ``Id``.
"""
_namespace = 'relationship.Family'
def __init__(self, **kwargs):
osid_objects.OsidCatalog.__init__(self, object_name='FAMILY', **kwargs)
@utilities.arguments_not_none
def get_family_record(self, family_record_type):
"""Gets the famly record corresponding to the given ``Family`` record ``Type``.
This method is used to retrieve an object implementing the
requested record. The ``family_record_type`` may be the ``Type``
returned in ``get_record_types()`` or any of its parents in a
``Type`` hierarchy where ``has_record_type(family_record_type)``
is ``true`` .
arg: family_record_type (osid.type.Type): the type of family
record to retrieve
return: (osid.relationship.records.FamilyRecord) - the family
record
raise: NullArgument - ``family_record_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure occurred
raise: Unsupported - ``has_record_type(family_record_type)`` is
``false``
*compliance: mandatory -- This method must be implemented.*
"""
raise errors.Unimplemented()
class FamilyForm(abc_relationship_objects.FamilyForm, osid_objects.OsidCatalogForm):
"""This is the form for creating and updating ``Family`` objects.
Like all ``OsidForm`` objects, various data elements may be set here
for use in the create and update methods in the
``FamilyAdminSession``. For each data element that may be set,
metadata may be examined to provide display hints or data
constraints.
"""
_namespace = 'relationship.Family'
def __init__(self, **kwargs):
osid_objects.OsidCatalogForm.__init__(self, object_name='FAMILY', **kwargs)
self._mdata = default_mdata.get_family_mdata()
self._init_metadata(**kwargs)
if not self.is_for_update():
self._init_map(**kwargs)
def _init_metadata(self, **kwargs):
"""Initialize form metadata"""
osid_objects.OsidCatalogForm._init_metadata(self, **kwargs)
def _init_map(self, record_types=None, **kwargs):
"""Initialize form map"""
osid_objects.OsidCatalogForm._init_map(self, record_types, **kwargs)
@utilities.arguments_not_none
def get_family_form_record(self, family_record_type):
"""Gets the ``FamilyFormRecord`` corresponding to the given family record ``Type``.
arg: family_record_type (osid.type.Type): the family record
type
return: (osid.relationship.records.FamilyFormRecord) - the
family form record
raise: NullArgument - ``family_record_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure occurred
raise: Unsupported - ``has_record_type(family_record_type)`` is
``false``
*compliance: mandatory -- This method must be implemented.*
"""
raise errors.Unimplemented()
class FamilyList(abc_relationship_objects.FamilyList, osid_objects.OsidList):
"""Like all ``OsidLists,`` ``FamilyList`` provides a means for accessing ``Family`` elements sequentially either one at a time or many at a time.
Examples: while (fl.hasNext()) { Family family = fl.getNextFamily();
}
or
while (fl.hasNext()) {
Family[] families = fl.getNextFamilies(fl.available());
}
"""
def get_next_family(self):
"""Gets the next ``Family`` in this list.
return: (osid.relationship.Family) - the next ``Family`` in this
list. The ``has_next()`` method should be used to test
that a next ``Family`` is available before calling this
method.
raise: IllegalState - no more elements available in this list
raise: OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.resource.ResourceList.get_next_resource
return next(self)
def next(self):
return self._get_next_object(Family)
__next__ = next
next_family = property(fget=get_next_family)
@utilities.arguments_not_none
def get_next_families(self, n):
"""Gets the next set of ``Family elements`` in this list.
The specified amount must be less than or equal to the return
from ``available()``.
arg: n (cardinal): the number of ``Family`` elements
requested which must be less than or equal to
``available()``
return: (osid.relationship.Family) - an array of ``Family``
elements.The length of the array is less than or equal
to the number specified.
raise: IllegalState - no more elements available in this list
raise: OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.resource.ResourceList.get_next_resources
return self._get_next_n(FamilyList, number=n)
class FamilyNode(abc_relationship_objects.FamilyNode, osid_objects.OsidNode):
"""This interface is a container for a partial hierarchy retrieval.
The number of hierarchy levels traversable through this interface
depend on the number of levels requested in the
``FamilyHierarchySession``.
"""
def __init__(self, node_map, runtime=None, proxy=None, lookup_session=None):
osid_objects.OsidNode.__init__(self, node_map)
self._lookup_session = lookup_session
self._runtime = runtime
self._proxy = proxy
def get_object_node_map(self):
node_map = dict(self.get_family().get_object_map())
node_map['type'] = 'FamilyNode'
node_map['parentNodes'] = []
node_map['childNodes'] = []
for family_node in self.get_parent_family_nodes():
node_map['parentNodes'].append(family_node.get_object_node_map())
for family_node in self.get_child_family_nodes():
node_map['childNodes'].append(family_node.get_object_node_map())
return node_map
def get_family(self):
"""Gets the ``Family`` at this node.
return: (osid.relationship.Family) - the family represented by
this node
*compliance: mandatory -- This method must be implemented.*
"""
if self._lookup_session is None:
mgr = get_provider_manager('RELATIONSHIP', runtime=self._runtime, proxy=self._proxy)
self._lookup_session = mgr.get_family_lookup_session(proxy=getattr(self, "_proxy", None))
return self._lookup_session.get_family(Id(self._my_map['id']))
family = property(fget=get_family)
def get_parent_family_nodes(self):
"""Gets the parents of this family.
return: (osid.relationship.FamilyNodeList) - the parents of the
``id``
*compliance: mandatory -- This method must be implemented.*
"""
parent_family_nodes = []
for node in self._my_map['parentNodes']:
parent_family_nodes.append(FamilyNode(
node._my_map,
runtime=self._runtime,
proxy=self._proxy,
lookup_session=self._lookup_session))
return FamilyNodeList(parent_family_nodes)
parent_family_nodes = property(fget=get_parent_family_nodes)
def get_child_family_nodes(self):
"""Gets the children of this family.
return: (osid.relationship.FamilyNodeList) - the children of
this family
*compliance: mandatory -- This method must be implemented.*
"""
parent_family_nodes = []
for node in self._my_map['childNodes']:
parent_family_nodes.append(FamilyNode(
node._my_map,
runtime=self._runtime,
proxy=self._proxy,
lookup_session=self._lookup_session))
return FamilyNodeList(parent_family_nodes)
child_family_nodes = property(fget=get_child_family_nodes)
class FamilyNodeList(abc_relationship_objects.FamilyNodeList, osid_objects.OsidList):
"""Like all ``OsidLists,`` ``FamilyNodeList`` provides a means for accessing ``FamilyNode`` elements sequentially either one at a time or many at a time.
Examples: while (fnl.hasNext()) { FamilyNode node =
fnl.getNextFamilyNode(); }
or
while (fnl.hasNext()) {
FamilyNode[] nodes = fnl.getNextFamilyNodes(fnl.available());
}
"""
def get_next_family_node(self):
"""Gets the next ``FamilyNode`` in this list.
return: (osid.relationship.FamilyNode) - the next ``FamilyNode``
in this list. The ``has_next()`` method should be used
to test that a next ``FamilyNode`` is available before
calling this method.
raise: IllegalState - no more elements available in this list
raise: OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.resource.ResourceList.get_next_resource
return next(self)
def next(self):
return self._get_next_object(FamilyNode)
__next__ = next
next_family_node = property(fget=get_next_family_node)
@utilities.arguments_not_none
def get_next_family_nodes(self, n):
"""Gets the next set of ``FamilyNode elements`` in this list.
The specified amount must be less than or equal to the return
from ``available()``.
arg: n (cardinal): the number of ``FamilyNode`` elements
requested which must be less than or equal to
``available()``
return: (osid.relationship.FamilyNode) - an array of
``FamilyNode`` elements.The length of the array is less
than or equal to the number specified.
raise: IllegalState - no more elements available in this list
raise: OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for osid.resource.ResourceList.get_next_resources
return self._get_next_n(FamilyNodeList, number=n)
| 39.420719 | 158 | 0.669527 | 2,160 | 18,646 | 5.581944 | 0.118519 | 0.028199 | 0.028614 | 0.036079 | 0.675126 | 0.638633 | 0.592934 | 0.523762 | 0.486191 | 0.470432 | 0 | 0 | 0.241124 | 18,646 | 472 | 159 | 39.504237 | 0.852085 | 0.542744 | 0 | 0.376812 | 0 | 0 | 0.044915 | 0.007062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.202899 | false | 0 | 0.086957 | 0.021739 | 0.57971 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b673221ee8b609b02be959fb23bbc76db4c129ac | 3,732 | py | Python | backend/api/models/user_model.py | ferdn4ndo/infotrem | 4728c5fe8385dcc0a1c75068429fa20e2afbf6f2 | [
"MIT"
] | null | null | null | backend/api/models/user_model.py | ferdn4ndo/infotrem | 4728c5fe8385dcc0a1c75068429fa20e2afbf6f2 | [
"MIT"
] | 1 | 2020-06-21T18:38:14.000Z | 2020-06-21T21:57:09.000Z | backend/api/models/user_model.py | ferdn4ndo/infotrem | 4728c5fe8385dcc0a1c75068429fa20e2afbf6f2 | [
"MIT"
] | null | null | null | from uuid import uuid4
from django.contrib.auth.base_user import BaseUserManager, AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from .location_state_model import LocationState
from .location_city_model import LocationCity
class UserManager(BaseUserManager):
"""
Custom user model manager where uuid is the unique identifiers
for authentication instead of usernames and tokens are the
passwords.
"""
def create_user(self, password, **extra_fields):
"""
Create and save a User with the given email and password.
"""
if not password:
raise ValueError(_("The password must be set"))
user = self.model(**extra_fields)
user.set_password(password)
user.save()
return user
def create_superuser(self, password, **extra_fields):
"""
Create and save a SuperUser with the given email and password.
"""
extra_fields.setdefault('is_admin', True)
extra_fields.setdefault('is_staff', True)
extra_fields.setdefault('is_superuser', True)
if extra_fields.get('is_admin') is not True:
raise ValueError(_("Superuser must have is_admin=True."))
if extra_fields.get('is_staff') is not True:
raise ValueError(_("Superuser must have is_staff=True."))
return self.create_user(password, **extra_fields)
class User(AbstractBaseUser, PermissionsMixin):
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
email = models.EmailField(unique=True)
email_validated = models.BooleanField(default=False)
email_validation_sent = models.BooleanField(default=False)
email_validation_hash = models.CharField(max_length=128, blank=True, null=True)
name = models.CharField(max_length=255, blank=True, null=True)
cpf = models.BigIntegerField(null=True)
birth_date = models.DateField(null=True)
address = models.TextField(max_length=255, blank=True, null=True)
number = models.PositiveIntegerField(null=True)
complement = models.CharField(max_length=255, blank=True, null=True)
city = models.ForeignKey(to=LocationCity, null=True, on_delete=models.SET_NULL)
state = models.ForeignKey(to=LocationState, null=True, on_delete=models.SET_NULL)
zipcode = models.PositiveIntegerField(null=True)
phone = models.BigIntegerField(null=True)
is_admin = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
registered_at = models.DateTimeField(auto_now_add=True, verbose_name=_("Record creation timestamp"))
last_activity_at = models.DateTimeField(
verbose_name=_("Last activity registered by the user in the system"),
default=timezone.now
)
updated_at = models.DateTimeField(auto_now=True, verbose_name=_("Record last update timestamp"), null=True)
USERNAME_FIELD = 'id'
REQUIRED_FIELDS = []
objects = UserManager()
def __str__(self):
"""
String representation of the model, defined by the UUID
"""
return str(self.id)
def save(self, *args, **kwargs):
super().save(*args, **kwargs)
@property
def is_complete(self) -> bool:
return (
self.name is not None
and self.email is not None
and self.cpf is not None
and self.birth_date is not None
and self.address is not None
and self.city is not None
and self.state is not None
and self.zipcode is not None
)
| 38.081633 | 111 | 0.692122 | 467 | 3,732 | 5.383298 | 0.299786 | 0.038186 | 0.02864 | 0.033413 | 0.322593 | 0.208831 | 0.133254 | 0.098648 | 0.069212 | 0 | 0 | 0.004795 | 0.217578 | 3,732 | 97 | 112 | 38.474227 | 0.856164 | 0.082797 | 0 | 0 | 0 | 0 | 0.072372 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072464 | false | 0.086957 | 0.115942 | 0.014493 | 0.623188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
b684ea846bdcddff0f795f27f74230ffc1d9b10e | 438 | py | Python | Testing/03_use_type_hint/type_hint_classes.py | t2y/python-study | 52a132ea600d4696164e540d8a8f8f5fc58e097a | [
"Apache-2.0"
] | 18 | 2016-08-15T00:24:44.000Z | 2020-11-30T15:11:52.000Z | Testing/03_use_type_hint/type_hint_classes.py | t2y/python-study | 52a132ea600d4696164e540d8a8f8f5fc58e097a | [
"Apache-2.0"
] | null | null | null | Testing/03_use_type_hint/type_hint_classes.py | t2y/python-study | 52a132ea600d4696164e540d8a8f8f5fc58e097a | [
"Apache-2.0"
] | 6 | 2016-09-28T10:47:03.000Z | 2020-10-14T10:20:06.000Z | # -*- coding: utf-8 -*-
class MyClass:
# The __init__ method doesn't return anything, so it gets return
# type None just like any other method that doesn't return anything.
def __init__(self) -> None:
...
# For instance methods, omit `self`.
def my_class_method(self, num: int, str1: str) -> str:
return num * str1
# User-defined classes are written with just their own names.
x = MyClass() # type: MyClass
| 31.285714 | 71 | 0.664384 | 64 | 438 | 4.390625 | 0.671875 | 0.042705 | 0.085409 | 0.142349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008876 | 0.228311 | 438 | 13 | 72 | 33.692308 | 0.822485 | 0.593607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b6892bdce8585559ff0c5db21e9464b1c3caadc2 | 5,391 | py | Python | src/genie/libs/parser/iosxr/tests/ShowL2vpnBridgeDomainBrief/cli/equal/golden1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxr/tests/ShowL2vpnBridgeDomainBrief/cli/equal/golden1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxr/tests/ShowL2vpnBridgeDomainBrief/cli/equal/golden1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"bridge_group": {
"D": {
"bridge_domain": {
"D-w": {
"ac": {
"num_ac": 1,
"num_ac_up": 1
},
"id": 0,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 1,
"num_pw_up": 1
},
"state": "up",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
},
"GO-LEAFS": {
"bridge_domain": {
"GO": {
"ac": {
"num_ac": 3,
"num_ac_up": 2
},
"id": 9,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 7,
"num_pw_up": 5
},
"state": "up",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
},
"MGMT": {
"bridge_domain": {
"DSL-MGMT": {
"ac": {
"num_ac": 1,
"num_ac_up": 0
},
"id": 2,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 0,
"num_pw_up": 0
},
"state": "up",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
},
"WOrd": {
"bridge_domain": {
"CODE-123-4567": {
"ac": {
"num_ac": 1,
"num_ac_up": 0
},
"id": 3,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 1,
"num_pw_up": 0
},
"state": "up",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
},
"admin1": {
"bridge_domain": {
"domain1": {
"ac": {
"num_ac": 1,
"num_ac_up": 0
},
"id": 6,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 1,
"num_pw_up": 0
},
"state": "admin down",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
},
"d": {
"bridge_domain": {
"s-VLAN_20": {
"ac": {
"num_ac": 2,
"num_ac_up": 2
},
"id": 1,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 1,
"num_pw_up": 1
},
"state": "up",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
},
"g1": {
"bridge_domain": {
"bd1": {
"ac": {
"num_ac": 1,
"num_ac_up": 1
},
"id": 0,
"pw": {
"num_pw": 1,
"num_pw_up": 1
},
"state": "up"
}
}
},
"woRD": {
"bridge_domain": {
"THING_PLACE_DIA-765_4321": {
"ac": {
"num_ac": 1,
"num_ac_up": 1
},
"id": 4,
"pbb": {
"num_pbb": 0,
"num_pbb_up": 0
},
"pw": {
"num_pw": 1,
"num_pw_up": 1
},
"state": "up",
"vni": {
"num_vni": 0,
"num_vni_up": 0
}
}
}
}
}
} | 28.675532 | 45 | 0.171026 | 320 | 5,391 | 2.55625 | 0.14375 | 0.07335 | 0.06846 | 0.07824 | 0.694377 | 0.669927 | 0.669927 | 0.669927 | 0.646699 | 0.55868 | 0 | 0.057667 | 0.716936 | 5,391 | 188 | 46 | 28.675532 | 0.478375 | 0 | 0 | 0.537234 | 0 | 0 | 0.157085 | 0.004451 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b6ac6373bcde0d0084efd2d2ae066e71a618c49d | 975 | py | Python | back.py | ikenichaa/thai_soundex | 093d0b98d628bb82e1b62a40a309783e6ef672fe | [
"Apache-2.0"
] | null | null | null | back.py | ikenichaa/thai_soundex | 093d0b98d628bb82e1b62a40a309783e6ef672fe | [
"Apache-2.0"
] | null | null | null | back.py | ikenichaa/thai_soundex | 093d0b98d628bb82e1b62a40a309783e6ef672fe | [
"Apache-2.0"
] | null | null | null | def main(input):
myDict = {}
myDict['ก']='k'
for key in ['ข','ค', 'ฆ']:
myDict[key] = 'k'
myDict['ง']='ng'
myDict['จ']='t'
for key in ['ฉ','ฌ', 'ช']:
myDict[key] = 't'
for key in ['ซ','ศ', 'ษ','ส']:
myDict[key] = 't'
for key in ['ย']:
myDict[key] = 'j'
for key in ['ฎ', 'ด']:
myDict[key] = 't'
for key in ['ฏ', 'ต']:
myDict[key] = 't'
for key in ['ฐ','ฑ', 'ฒ','ถ','ท','ธ']:
myDict[key] = 't'
for key in ['ณ', 'น','ญ']:
myDict[key] = 'n'
myDict['บ']='p'
myDict['ป']='p'
for key in ['ผ', 'พ','ภ']:
myDict[key] = 'p'
for key in ['ฝ', 'ฟ']:
myDict[key] = 'p'
myDict['ม']='m'
myDict['ร']='n'
for key in ['ล', 'ฬ']:
myDict[key] = 'n'
myDict['ว']='w'
for key in ['ฤ', 'ฦ']:
myDict[key] = 'r'
for key in ['ห', 'ฮ']:
myDict[key] = ''
myDict['อ']=''
x=myDict.get(input)
return x;
| 23.780488 | 42 | 0.389744 | 148 | 975 | 2.567568 | 0.425676 | 0.205263 | 0.273684 | 0.142105 | 0.236842 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 975 | 40 | 43 | 24.375 | 0.584615 | 0 | 0 | 0.230769 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0 | 0 | 0.051282 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b6b6c2ee5a358ae1677029fe17230741b104b351 | 432 | py | Python | secondtest/2013/1gaussjacobi.py | JoaoCostaIFG/MNUM | 6e042d8a6f64feb9eae9c79afec2fbab51f46fbd | [
"MIT"
] | 1 | 2019-12-07T10:34:30.000Z | 2019-12-07T10:34:30.000Z | secondtest/2013/1gaussjacobi.py | JoaoCostaIFG/MNUM | 6e042d8a6f64feb9eae9c79afec2fbab51f46fbd | [
"MIT"
] | null | null | null | secondtest/2013/1gaussjacobi.py | JoaoCostaIFG/MNUM | 6e042d8a6f64feb9eae9c79afec2fbab51f46fbd | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# CHECKED
A = [[4.5, -1, -1, 1], [-1, 4.5, 1, -1], [-1, 2, 4.5, -1], [2, -1, -1, 4.5]]
b = [1, -1, -1, 0]
x = [0.25, 0.25, 0.25, 0.25]
x_new = [0, 0, 0, 0]
for k in range(2):
for i in range(4):
x_new[i] = b[i]
for j in range(4):
if j != i:
x_new[i] -= A[i][j] * x[j]
x_new[i] *= (1 / A[i][i])
print("Ans", k, "\n", x_new)
x = x_new.copy()
| 19.636364 | 76 | 0.381944 | 94 | 432 | 1.691489 | 0.276596 | 0.100629 | 0.075472 | 0.113208 | 0.138365 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0.157343 | 0.337963 | 432 | 21 | 77 | 20.571429 | 0.398601 | 0.06713 | 0 | 0 | 0 | 0 | 0.0125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fcae020ff699888941770fffb83bc3aac4b2dcd5 | 294 | py | Python | Modulo_2/semana4/tarea/presentacion/presentacion.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | null | null | null | Modulo_2/semana4/tarea/presentacion/presentacion.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | null | null | null | Modulo_2/semana4/tarea/presentacion/presentacion.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | 1 | 2022-03-04T00:57:18.000Z | 2022-03-04T00:57:18.000Z |
def presentacion_inicial():
print("*"*20)
print("Que accion desea realizar : ")
print("1) Insertar Persona : ")
print("2) Insertar Empleado : ")
print("3) Consultar las personas : ")
print("4) Consultar por empleados : ")
return int(input("Opcion necesitas : "))
| 24.5 | 44 | 0.619048 | 33 | 294 | 5.484848 | 0.787879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026432 | 0.227891 | 294 | 11 | 45 | 26.727273 | 0.770925 | 0 | 0 | 0 | 0 | 0 | 0.515464 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | true | 0 | 0 | 0 | 0.25 | 0.75 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
fcb05bfd43a1234321c1726697ae47be1b3693d7 | 1,300 | py | Python | release/stubs.min/System/Runtime/InteropServices/__init___parts/SEHException.py | tranconbv/ironpython-stubs | a601759e6c6819beff8e6b639d18a24b7e351851 | [
"MIT"
] | null | null | null | release/stubs.min/System/Runtime/InteropServices/__init___parts/SEHException.py | tranconbv/ironpython-stubs | a601759e6c6819beff8e6b639d18a24b7e351851 | [
"MIT"
] | null | null | null | release/stubs.min/System/Runtime/InteropServices/__init___parts/SEHException.py | tranconbv/ironpython-stubs | a601759e6c6819beff8e6b639d18a24b7e351851 | [
"MIT"
] | null | null | null | class SEHException(ExternalException):
"""
Represents structured exception handling (SEH) errors.
SEHException()
SEHException(message: str)
SEHException(message: str,inner: Exception)
"""
def ZZZ(self):
"""hardcoded/mock instance of the class"""
return SEHException()
instance=ZZZ()
"""hardcoded/returns an instance of the class"""
def CanResume(self):
"""
CanResume(self: SEHException) -> bool
Indicates whether the exception can be recovered from,and whether the code can continue from the point at which the exception was thrown.
Returns: Always false,because resumable exceptions are not implemented.
"""
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,message=None,inner=None):
"""
__new__(cls: type)
__new__(cls: type,message: str)
__new__(cls: type,message: str,inner: Exception)
__new__(cls: type,info: SerializationInfo,context: StreamingContext)
"""
pass
def __reduce_ex__(self,*args):
pass
def __str__(self,*args):
pass
SerializeObjectState=None
| 32.5 | 215 | 0.707692 | 155 | 1,300 | 5.490323 | 0.445161 | 0.047004 | 0.047004 | 0.06698 | 0.179788 | 0.132785 | 0.132785 | 0.132785 | 0.132785 | 0.132785 | 0 | 0 | 0.178462 | 1,300 | 39 | 216 | 33.333333 | 0.796816 | 0.626154 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.3125 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
fcb8cf21b29a5f272a7d618c6ac9079ebfc14c39 | 271 | py | Python | pythran/tests/cython/setup_tax.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 1,647 | 2015-01-13T01:45:38.000Z | 2022-03-28T01:23:41.000Z | pythran/tests/cython/setup_tax.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 1,116 | 2015-01-01T09:52:05.000Z | 2022-03-18T21:06:40.000Z | pythran/tests/cython/setup_tax.py | davidbrochart/pythran | 24b6c8650fe99791a4091cbdc2c24686e86aa67c | [
"BSD-3-Clause"
] | 180 | 2015-02-12T02:47:28.000Z | 2022-03-14T10:28:18.000Z | from distutils.core import setup
from Cython.Build import cythonize
setup(
name = "tax",
ext_modules = cythonize('tax.pyx'),
script_name = 'setup.py',
script_args = ['build_ext', '--inplace']
)
import tax
import numpy as np
print(tax.tax(np.ones(10)))
| 18.066667 | 44 | 0.678967 | 39 | 271 | 4.615385 | 0.589744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.180812 | 271 | 14 | 45 | 19.357143 | 0.801802 | 0 | 0 | 0 | 0 | 0 | 0.133829 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.363636 | 0 | 0.363636 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
fcbfc073469b07dd98a14e4c9164c0fd8b5cd461 | 75 | py | Python | stackstrap/__init__.py | movermeyer/stackstrap | daa942ba83c2f3183628a4dc84e8ea221ff49eb6 | [
"Apache-2.0"
] | 3 | 2015-04-19T09:36:49.000Z | 2017-02-08T23:41:22.000Z | stackstrap/__init__.py | movermeyer/stackstrap | daa942ba83c2f3183628a4dc84e8ea221ff49eb6 | [
"Apache-2.0"
] | 1 | 2021-03-25T21:27:19.000Z | 2021-03-25T21:27:19.000Z | stackstrap/__init__.py | movermeyer/stackstrap | daa942ba83c2f3183628a4dc84e8ea221ff49eb6 | [
"Apache-2.0"
] | 2 | 2015-05-22T08:25:54.000Z | 2018-03-05T00:33:25.000Z | __version__ = '0.2.2'
__url__ = 'https://github.com/stackstrap/stackstrap'
| 25 | 52 | 0.733333 | 10 | 75 | 4.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.08 | 75 | 2 | 53 | 37.5 | 0.637681 | 0 | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fcc4a1cf356c4e727ef86d0eb7b9efb497dc75cc | 155 | py | Python | urls.py | bellachp/DashTut | 3ff19d52f34aea14a5e0c332fd18878cbf04279a | [
"MIT"
] | null | null | null | urls.py | bellachp/DashTut | 3ff19d52f34aea14a5e0c332fd18878cbf04279a | [
"MIT"
] | null | null | null | urls.py | bellachp/DashTut | 3ff19d52f34aea14a5e0c332fd18878cbf04279a | [
"MIT"
] | null | null | null | # urls.py
# urls for dash app
url_paths = {
"index": '/',
"home": '/home',
"scatter": '/apps/scatter-test',
"combo": '/apps/combo-test'
}
| 15.5 | 36 | 0.529032 | 19 | 155 | 4.263158 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232258 | 155 | 9 | 37 | 17.222222 | 0.680672 | 0.16129 | 0 | 0 | 0 | 0 | 0.480315 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fcc9a18b1053b307181376309f390c8f485eb322 | 603 | py | Python | DEFCON_Debugging.py | tlear/DEFCON | 6f24c1585a877e8668157a93a8153502b3ddc246 | [
"MIT"
] | null | null | null | DEFCON_Debugging.py | tlear/DEFCON | 6f24c1585a877e8668157a93a8153502b3ddc246 | [
"MIT"
] | null | null | null | DEFCON_Debugging.py | tlear/DEFCON | 6f24c1585a877e8668157a93a8153502b3ddc246 | [
"MIT"
] | null | null | null | ################################################################
# DEFCON_Debugging
# (C) Tom Moore 2017
################################################################
################################################################
# IMPORTS
################################################################
import DEFCON_Config
################################################################
def Debug(text):
if (DEFCON_Config.debug):
print "\n" + text
################################################################
def Error(text):
if (type(text) != str):
return
Debug(text) | 28.714286 | 64 | 0.227197 | 30 | 603 | 4.466667 | 0.633333 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007286 | 0.089552 | 603 | 21 | 65 | 28.714286 | 0.236794 | 0.07131 | 0 | 0 | 0 | 0 | 0.011561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fcd03d8b6056bce23b5cd7fc66bae1e46820e9b4 | 645 | py | Python | gather/upload_poster_rooms.py | stephenfreund/PLDI-2021-Mini-Conf | 3d529d32a8563682ea69eb1e3b1742b447b71f42 | [
"MIT"
] | null | null | null | gather/upload_poster_rooms.py | stephenfreund/PLDI-2021-Mini-Conf | 3d529d32a8563682ea69eb1e3b1742b447b71f42 | [
"MIT"
] | null | null | null | gather/upload_poster_rooms.py | stephenfreund/PLDI-2021-Mini-Conf | 3d529d32a8563682ea69eb1e3b1742b447b71f42 | [
"MIT"
] | null | null | null | import csv
import io
import json
import time
import zlib
import sys
import os
import requests
import yaml
from libgather import Gather
if __name__ == "__main__":
config = yaml.load(open("../admin/config.yml").read(), Loader=yaml.SafeLoader) | yaml.load(open("config.yml").read(), Loader=yaml.SafeLoader)
rooms = yaml.load(open("poster-rooms.yml").read(), Loader=yaml.SafeLoader)
gather = Gather(config["gather_api_key"], config["gather_space_id"])
for room in rooms:
roomName = f'room-{room["UID"]}'
jsonContents = json.load(open(f'auto/room-{room["UID"]}.json'))
gather.setMap(roomName, jsonContents)
| 29.318182 | 145 | 0.697674 | 89 | 645 | 4.921348 | 0.438202 | 0.073059 | 0.082192 | 0.116438 | 0.212329 | 0.150685 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148837 | 645 | 21 | 146 | 30.714286 | 0.797814 | 0 | 0 | 0 | 0 | 0 | 0.19845 | 0.043411 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.555556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
fcd127df2257bc5264c56c5910c76287e8e09b8a | 588 | py | Python | Aulas Python/m03/aula18a.py | joaquimjfernandes/Curso-de-Python | 3356211248d355eaa67098cebe5cdb8cf6badc75 | [
"MIT"
] | null | null | null | Aulas Python/m03/aula18a.py | joaquimjfernandes/Curso-de-Python | 3356211248d355eaa67098cebe5cdb8cf6badc75 | [
"MIT"
] | null | null | null | Aulas Python/m03/aula18a.py | joaquimjfernandes/Curso-de-Python | 3356211248d355eaa67098cebe5cdb8cf6badc75 | [
"MIT"
] | null | null | null | print('=' * 15, '\033[1;35mAULA 18 - Listas[Part #2]\033[m', '=' * 15)
# --------------------------------------------------------------------
dados = []
pessoas = []
galera = [['Miguel', 25], ['Berta', 59], ['Joaquim', 20]]
# --------------------------------------------------------------------
dados.append('Joaquim')
dados.append(20)
dados.append('Fernando')
dados.append(23)
dados.append('Maria')
dados.append(19)
pessoas.append(dados[:])
# galera.append(pessoas[:])
print(galera[0][0]) # Miguel
print(galera[1][1]) # 59
print(galera[2][0]) # Joaquim
print(galera[1]) # Berta + 59
| 30.947368 | 70 | 0.477891 | 65 | 588 | 4.323077 | 0.384615 | 0.234875 | 0.092527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073308 | 0.095238 | 588 | 18 | 71 | 32.666667 | 0.454887 | 0.326531 | 0 | 0 | 0 | 0 | 0.208763 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fce34b6c81c35a8f9834ee5de2cc8e7f75af13c9 | 1,711 | py | Python | jobs/models.py | shifat151/portfolio | de025dd345232385e31f1e9741ddcbfe1ca33f93 | [
"MIT"
] | null | null | null | jobs/models.py | shifat151/portfolio | de025dd345232385e31f1e9741ddcbfe1ca33f93 | [
"MIT"
] | 6 | 2021-06-09T17:58:42.000Z | 2021-09-25T14:39:22.000Z | jobs/models.py | shifat151/portfolio | de025dd345232385e31f1e9741ddcbfe1ca33f93 | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
class Profile(models.Model):
pic=models.ImageField(upload_to='images/')
pub_date=models.DateTimeField(auto_now=True)
obj=models.TextField(blank=True)
# ctime() method is for converting datetime string into a string
def __str__(self):
return self.pub_date.ctime()
# class Objective(models.Model):
# obj=models.TextField(blank=True)
# list_obj=[]
# def __str__(self):
# self.list_obj=self.obj.split()
# return " ".join(self.list_obj[:5])+'...'
class Language(models.Model):
image = models.ImageField(upload_to='images/', blank=True)
lang_name=models.CharField(max_length=20)
lang_summary=models.CharField(max_length=200)
# For admin views instead of object
def __str__(self):
return self.lang_name
class Framework(models.Model):
framework_name=models.CharField(max_length=20)
language=models.ForeignKey(Language,on_delete=models.CASCADE)
def __str__(self):
return self.framework_name
class Project(models.Model):
proj_title=models.CharField(max_length=100)
proj_desc=models.TextField()
proj_link=models.URLField()
frameworks=models.ManyToManyField(Framework)
def __str__(self):
return self.proj_title
class Academic(models.Model):
exam_name=models.CharField(max_length=50)
exam_school=models.CharField(max_length=50)
exam_gpa=models.FloatField()
exam_year=models.SmallIntegerField()
new_list=[]
adm_list=[]
def __str__(self):
self.new_list=self.exam_name.split()
self.adm_list=list(map(lambda x:x[:1], self.new_list))
return ".".join(self.adm_list)
| 29.5 | 68 | 0.700175 | 227 | 1,711 | 5.017621 | 0.378855 | 0.057946 | 0.052678 | 0.126427 | 0.279192 | 0.105356 | 0 | 0 | 0 | 0 | 0 | 0.011388 | 0.178843 | 1,711 | 58 | 69 | 29.5 | 0.799288 | 0.184687 | 0 | 0.138889 | 0 | 0 | 0.010823 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.027778 | 0.111111 | 0.944444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
fcfada5ca785383cf0f56e4e1dc9cd5953621980 | 7,449 | py | Python | temboo/core/Library/Dropbox/FilesAndMetadata/ListFolderContents.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/Dropbox/FilesAndMetadata/ListFolderContents.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/Dropbox/FilesAndMetadata/ListFolderContents.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | # -*- coding: utf-8 -*-
###############################################################################
#
# ListFolderContents
# Retrieves metadata (including folder contents) for a folder or file in Dropbox.
#
# Python versions 2.6, 2.7, 3.x
#
# Copyright 2014, Temboo Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
#
#
###############################################################################
from temboo.core.choreography import Choreography
from temboo.core.choreography import InputSet
from temboo.core.choreography import ResultSet
from temboo.core.choreography import ChoreographyExecution
import json
class ListFolderContents(Choreography):
def __init__(self, temboo_session):
"""
Create a new instance of the ListFolderContents Choreo. A TembooSession object, containing a valid
set of Temboo credentials, must be supplied.
"""
super(ListFolderContents, self).__init__(temboo_session, '/Library/Dropbox/FilesAndMetadata/ListFolderContents')
def new_input_set(self):
return ListFolderContentsInputSet()
def _make_result_set(self, result, path):
return ListFolderContentsResultSet(result, path)
def _make_execution(self, session, exec_id, path):
return ListFolderContentsChoreographyExecution(session, exec_id, path)
class ListFolderContentsInputSet(InputSet):
"""
An InputSet with methods appropriate for specifying the inputs to the ListFolderContents
Choreo. The InputSet object is used to specify input parameters when executing this Choreo.
"""
def set_AccessTokenSecret(self, value):
"""
Set the value of the AccessTokenSecret input for this Choreo. ((required, string) The Access Token Secret retrieved during the OAuth process.)
"""
super(ListFolderContentsInputSet, self)._set_input('AccessTokenSecret', value)
def set_AccessToken(self, value):
"""
Set the value of the AccessToken input for this Choreo. ((required, string) The Access Token retrieved during the OAuth process.)
"""
super(ListFolderContentsInputSet, self)._set_input('AccessToken', value)
def set_AppKey(self, value):
"""
Set the value of the AppKey input for this Choreo. ((required, string) The App Key provided by Dropbox (AKA the OAuth Consumer Key).)
"""
super(ListFolderContentsInputSet, self)._set_input('AppKey', value)
def set_AppSecret(self, value):
"""
Set the value of the AppSecret input for this Choreo. ((required, string) The App Secret provided by Dropbox (AKA the OAuth Consumer Secret).)
"""
super(ListFolderContentsInputSet, self)._set_input('AppSecret', value)
def set_FileLimit(self, value):
"""
Set the value of the FileLimit input for this Choreo. ((optional, integer) Dropbox will not return a list that exceeds this specified limit. Defaults to 10,000.)
"""
super(ListFolderContentsInputSet, self)._set_input('FileLimit', value)
def set_Folder(self, value):
"""
Set the value of the Folder input for this Choreo. ((optional, string) The path to a folder for which to retrieve metadata (i.e. /RootFolder/SubFolder). Note that a path to file can also be passed.)
"""
super(ListFolderContentsInputSet, self)._set_input('Folder', value)
def set_Hash(self, value):
"""
Set the value of the Hash input for this Choreo. ((optional, string) The value of a hash field from a previous request to get metadata on a folder. When provided, a 304 (not Modified) status code is returned instead of a folder listing if no changes have been made.)
"""
super(ListFolderContentsInputSet, self)._set_input('Hash', value)
def set_IncludeDeleted(self, value):
"""
Set the value of the IncludeDeleted input for this Choreo. ((optional, boolean) Only applicable when List is set. If this parameter is set to true, contents will include the metadata of deleted children.)
"""
super(ListFolderContentsInputSet, self)._set_input('IncludeDeleted', value)
def set_List(self, value):
"""
Set the value of the List input for this Choreo. ((optional, boolean) If true (the default), the folder's metadata will include a contents field with a list of metadata entries for the contents of the folder.)
"""
super(ListFolderContentsInputSet, self)._set_input('List', value)
def set_Locale(self, value):
"""
Set the value of the Locale input for this Choreo. ((optional, string) If your app supports any language other than English, insert the appropriate IETF language tag, and the metadata returned will have its size field translated based on the given locale (e.g., pt-BR).)
"""
super(ListFolderContentsInputSet, self)._set_input('Locale', value)
def set_ResponseFormat(self, value):
"""
Set the value of the ResponseFormat input for this Choreo. ((optional, string) The format that the response should be in. Can be set to xml or json. Defaults to json.)
"""
super(ListFolderContentsInputSet, self)._set_input('ResponseFormat', value)
def set_Revision(self, value):
"""
Set the value of the Revision input for this Choreo. ((optional, string) When including a particular revision number, only the metadata for that revision will be returned.)
"""
super(ListFolderContentsInputSet, self)._set_input('Revision', value)
def set_Root(self, value):
"""
Set the value of the Root input for this Choreo. ((optional, string) Defaults to "auto" which automatically determines the root folder using your app's permission level. Other options are "sandbox" (App Folder) and "dropbox" (Full Dropbox).)
"""
super(ListFolderContentsInputSet, self)._set_input('Root', value)
class ListFolderContentsResultSet(ResultSet):
"""
A ResultSet with methods tailored to the values returned by the ListFolderContents Choreo.
The ResultSet object is used to retrieve the results of a Choreo execution.
"""
def getJSONFromString(self, str):
return json.loads(str)
def get_Response(self):
"""
Retrieve the value for the "Response" output from this Choreo execution. (The response from Dropbox. Corresponds to the ResponseFormat input. Defaults to json.)
"""
return self._output.get('Response', None)
def get_ResponseStatusCode(self):
"""
Retrieve the value for the "ResponseStatusCode" output from this Choreo execution. ((integer) The response status code returned from Dropbox.)
"""
return self._output.get('ResponseStatusCode', None)
class ListFolderContentsChoreographyExecution(ChoreographyExecution):
def _make_result_set(self, response, path):
return ListFolderContentsResultSet(response, path)
| 50.331081 | 278 | 0.694053 | 918 | 7,449 | 5.561002 | 0.264706 | 0.015671 | 0.027424 | 0.038198 | 0.343781 | 0.201763 | 0.159843 | 0.061508 | 0.046621 | 0.028599 | 0 | 0.003713 | 0.204591 | 7,449 | 147 | 279 | 50.673469 | 0.85789 | 0.52893 | 0 | 0 | 0 | 0 | 0.06518 | 0.017839 | 0 | 0 | 0 | 0 | 0 | 1 | 0.411765 | false | 0 | 0.098039 | 0.098039 | 0.72549 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1e16e212df58abc55b333c32adf7bad6f18c87d3 | 240 | py | Python | generate_videos.py | weiqiao/pydrake_kuka | 2f1c9a5bb3946cd8a51be93e4225ec099a6427de | [
"MIT"
] | 5 | 2018-11-21T22:36:25.000Z | 2021-05-30T19:05:07.000Z | generate_videos.py | weiqiao/pydrake_kuka | 2f1c9a5bb3946cd8a51be93e4225ec099a6427de | [
"MIT"
] | 7 | 2018-10-19T20:33:51.000Z | 2018-11-15T23:22:01.000Z | generate_videos.py | weiqiao/pydrake_kuka | 2f1c9a5bb3946cd8a51be93e4225ec099a6427de | [
"MIT"
] | 5 | 2018-08-17T22:42:45.000Z | 2021-04-01T14:21:45.000Z | import os
import random
# Fullscreen meshlab on right monitor for this to work
for k in range(100, 200):
n_objects = random.randint(5, 10)
os.system("python kuka_pydrake_sim.py -T 60 --seed %d --hacky_save_video -N %d" % (k, n_objects))
| 30 | 98 | 0.725 | 44 | 240 | 3.818182 | 0.795455 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054455 | 0.158333 | 240 | 7 | 99 | 34.285714 | 0.777228 | 0.216667 | 0 | 0 | 0 | 0 | 0.360215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
1e2483e1d54d7612f075a68d1c2fadeaa1f2c2d9 | 3,194 | py | Python | gui/main_password.py | FrCln/password-manager | 7e5aca609846e38db634de754be4e7c35a5f21a2 | [
"MIT"
] | null | null | null | gui/main_password.py | FrCln/password-manager | 7e5aca609846e38db634de754be4e7c35a5f21a2 | [
"MIT"
] | null | null | null | gui/main_password.py | FrCln/password-manager | 7e5aca609846e38db634de754be4e7c35a5f21a2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'mainpassword.ui'
#
# Created by: PyQt5 UI code generator 5.15.0
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
import os
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(QtWidgets.QMainWindow):
send_password_signal = QtCore.pyqtSignal()
create_new_signal = QtCore.pyqtSignal()
def __init__(self, parent, *args):
super(Ui_MainWindow, self).__init__(*args)
self.parent = parent
self.setObjectName("MainWindow")
self.setFixedSize(391, 167)
self.centralwidget = QtWidgets.QWidget(self)
self.centralwidget.setObjectName("centralwidget")
self.verticalLayoutWidget = QtWidgets.QWidget(self.centralwidget)
self.verticalLayoutWidget.setGeometry(QtCore.QRect(9, 9, 371, 151))
self.verticalLayoutWidget.setObjectName("verticalLayoutWidget")
self.verticalLayout = QtWidgets.QVBoxLayout(self.verticalLayoutWidget)
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.verticalLayout.setObjectName("verticalLayout")
self.passwordInput = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.passwordInput.setEchoMode(QtWidgets.QLineEdit.Password)
self.passwordInput.setObjectName("lineEdit")
self.verticalLayout.addWidget(self.passwordInput)
self.OKButton = QtWidgets.QPushButton(self.verticalLayoutWidget)
self.OKButton.setObjectName("pushButton")
self.verticalLayout.addWidget(self.OKButton)
self.createNewButton = QtWidgets.QPushButton(self.verticalLayoutWidget)
self.createNewButton.setObjectName("pushButton_2")
self.verticalLayout.addWidget(self.createNewButton)
self.setCentralWidget(self.centralwidget)
self.retranslateUi()
QtCore.QMetaObject.connectSlotsByName(self)
self.build_handlers()
def retranslateUi(self):
_translate = QtCore.QCoreApplication.translate
self.setWindowTitle(_translate("MainWindow", "Введите пароль"))
self.OKButton.setText(_translate("MainWindow", "ОК"))
self.createNewButton.setText(_translate("MainWindow", "Создать новую базу паролей"))
def build_handlers(self):
self.OKButton.clicked.connect(self.enter_master_password)
self.createNewButton.clicked.connect(self.cancel_master_password)
def enter_master_password(self):
self.parent.main_password = self.passwordInput.text()
self.send_password_signal.emit()
self.close()
def cancel_master_password(self):
if 'pwd.bin' in os.listdir():
mb = QtWidgets.QMessageBox()
mb.setWindowTitle("Создание новой базы")
mb.setText("Имеющаяся база будет удалена! Вы уверены?")
button_ok = mb.addButton("Да", QtWidgets.QMessageBox.AcceptRole)
button_cancel = mb.addButton("Нет", QtWidgets.QMessageBox.RejectRole)
mb.exec()
if mb.clickedButton() == button_ok:
self.create_new_signal.emit()
self.close()
| 43.162162 | 92 | 0.705698 | 327 | 3,194 | 6.779817 | 0.425076 | 0.075778 | 0.050519 | 0.041949 | 0.043302 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01051 | 0.195679 | 3,194 | 73 | 93 | 43.753425 | 0.852472 | 0.086725 | 0 | 0.036364 | 1 | 0 | 0.079409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.2 | 0.036364 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
1e504a5b5c5d4acfa54f1f3b5a915886ad803336 | 510 | py | Python | app.py | andrequeiroz2/api-tags | ccc84c6b1ca5a6f907ea4c36db49a763c7d50445 | [
"MIT"
] | null | null | null | app.py | andrequeiroz2/api-tags | ccc84c6b1ca5a6f907ea4c36db49a763c7d50445 | [
"MIT"
] | null | null | null | app.py | andrequeiroz2/api-tags | ccc84c6b1ca5a6f907ea4c36db49a763c7d50445 | [
"MIT"
] | null | null | null | import os
from flask import Flask
from tags import database
from tags.api.tags.tags_route import init_tags_api
from flask_restful import Api
def create_app():
app = Flask(__name__)
app.config['SECRET_KEY'] = 'todo-api/api-tags:1.0'
app.config['MONGODB_SETTINGS'] = {
'db': 'tags',
'host': 'mongodb://admin:passwordD21@mongodbtags:27017/tags?authSource=admin',
}
database.init_app(app)
api = Api(app)
init_tags_api(api)
return app
| 19.615385 | 87 | 0.647059 | 69 | 510 | 4.57971 | 0.434783 | 0.066456 | 0.06962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023136 | 0.237255 | 510 | 25 | 88 | 20.4 | 0.789203 | 0 | 0 | 0 | 0 | 0 | 0.243137 | 0.172549 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0.3125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
1e54324c83d7632c21e244fadff2630d67738ec7 | 652 | py | Python | tests/_utils/uniqueue.py | ssfdust/smorest-sfs | 139f6817989ab041c81761d183169de20a26597e | [
"Apache-2.0"
] | 8 | 2020-05-11T07:11:03.000Z | 2022-03-25T01:58:18.000Z | tests/_utils/uniqueue.py | ssfdust/smorest-sfs | 139f6817989ab041c81761d183169de20a26597e | [
"Apache-2.0"
] | null | null | null | tests/_utils/uniqueue.py | ssfdust/smorest-sfs | 139f6817989ab041c81761d183169de20a26597e | [
"Apache-2.0"
] | 2 | 2020-05-11T03:53:38.000Z | 2021-03-25T01:11:15.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import annotations
import queue
from typing import TYPE_CHECKING, TypeVar
T = TypeVar("T")
if TYPE_CHECKING:
SimpleQueue = queue.SimpleQueue
else:
class FakeGenericMeta(type):
def __getitem__(self, item):
return self
class SimpleQueue(queue.SimpleQueue, metaclass=FakeGenericMeta):
pass
class UniqueQueue(SimpleQueue[T]):
_queue: UniqueQueue[T]
def __new__(cls) -> UniqueQueue[T]:
if not hasattr(cls, "_queue"):
orig = super(UniqueQueue, cls)
cls._queue = orig.__new__(cls)
return cls._queue
| 20.375 | 68 | 0.654908 | 74 | 652 | 5.472973 | 0.486486 | 0.059259 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00202 | 0.240798 | 652 | 31 | 69 | 21.032258 | 0.816162 | 0.064417 | 0 | 0 | 0 | 0 | 0.011513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.052632 | 0.157895 | 0.052632 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
1e645c13a0829341024ad7a9c2cc7fd0efc82650 | 556 | py | Python | src/transitions/sudden.py | Addono/TekniBridge | 5411f6ae7903ac9383e11e9e3e42895862dbe955 | [
"BSD-2-Clause"
] | null | null | null | src/transitions/sudden.py | Addono/TekniBridge | 5411f6ae7903ac9383e11e9e3e42895862dbe955 | [
"BSD-2-Clause"
] | null | null | null | src/transitions/sudden.py | Addono/TekniBridge | 5411f6ae7903ac9383e11e9e3e42895862dbe955 | [
"BSD-2-Clause"
] | null | null | null | from typing import List
from led import Led
from transitions import AbstractTransition
class Sudden(AbstractTransition):
def __init__(self, red: float, green: float, blue: float) -> None:
super().__init__()
self.target = Led(red, green, blue)
@AbstractTransition.brightness.setter
def brightness(self, brightness):
self.target.brightness = brightness
AbstractTransition.brightness.fset(self, brightness)
def step(self, previous: List[Led]) -> List[Led]:
return [self.target for _ in previous]
| 26.47619 | 70 | 0.697842 | 63 | 556 | 6.015873 | 0.428571 | 0.079156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205036 | 556 | 20 | 71 | 27.8 | 0.857466 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.230769 | 0.076923 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1e64ce0f434fa8c4560c935663521ee4a611d5c4 | 51 | py | Python | ktrain/graph/stellargraph/version.py | happy-machine/ktrain | 221e7ce91f8cfdc280fc733083e901fcedb9f7e5 | [
"MIT"
] | null | null | null | ktrain/graph/stellargraph/version.py | happy-machine/ktrain | 221e7ce91f8cfdc280fc733083e901fcedb9f7e5 | [
"MIT"
] | null | null | null | ktrain/graph/stellargraph/version.py | happy-machine/ktrain | 221e7ce91f8cfdc280fc733083e901fcedb9f7e5 | [
"MIT"
] | null | null | null | # Global version information
__version__ = "0.7.2"
| 17 | 28 | 0.745098 | 7 | 51 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0.137255 | 51 | 2 | 29 | 25.5 | 0.704545 | 0.509804 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1e698dcbf8b39e78eb99dae57a10881a374d2261 | 10,371 | py | Python | yamtbx/dataproc/xds/xds_inp.py | harumome/kamo | 060e6fbd9c5a30f86065cec7807ad3449027607e | [
"BSD-3-Clause"
] | null | null | null | yamtbx/dataproc/xds/xds_inp.py | harumome/kamo | 060e6fbd9c5a30f86065cec7807ad3449027607e | [
"BSD-3-Clause"
] | null | null | null | yamtbx/dataproc/xds/xds_inp.py | harumome/kamo | 060e6fbd9c5a30f86065cec7807ad3449027607e | [
"BSD-3-Clause"
] | null | null | null | """
(c) RIKEN 2015. All rights reserved.
Author: Keitaro Yamashita
This software is released under the new BSD License; see LICENSE.
"""
import os
from yamtbx.dataproc import XIO
from yamtbx.dataproc import cbf
from yamtbx.dataproc.dataset import group_img_files_template
def sensor_thickness_from_minicbf(img):
header = cbf.get_pilatus_header(img)
sensor = filter(lambda x: "sensor, thick" in x, header.splitlines())[0]
thick, unit = sensor[sensor.index("thickness ")+len("thickness "):].split()
assert unit == "m"
return float(thick) * 1.e3
# sensor_thickness_from_minicbf()
# Reference: http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Generate_XDS.INP
def generate_xds_inp(img_files, inp_dir, reverse_phi, anomalous, spot_range=None, minimum=False,
crystal_symmetry=None, integrate_nimages=None,
osc_range=None, orgx=None, orgy=None, rotation_axis=None, distance=None,
wavelength=None,
minpk=None, exclude_resolution_range=[]):
groups = group_img_files_template(img_files)
assert len(groups) == 1
template, fstart, fend = groups[0]
#print inp_dir, img_files[0], template
tmp = [os.path.dirname(os.path.relpath(img_files[0], inp_dir)),
os.path.dirname(os.path.abspath(img_files[0]))]
imdir = min(tmp, key=lambda x:len(x))
template = os.path.join(imdir, os.path.basename(template))
#print imdir
im = None
for imgfile in img_files:
if os.path.isfile(imgfile):
im = XIO.Image(imgfile)
break
if im is None:
raise Exception("No actual images found.")
if crystal_symmetry is None:
sgnum = 0
cell_str = "50 60 70 90 90 90"
else:
sgnum = crystal_symmetry.space_group_info().type().number()
cell_str = " ".join(map(lambda x: "%.2f"%x, crystal_symmetry.unit_cell().parameters()))
if osc_range is None: osc_range = im.header["PhiWidth"]
data_range = "%d %d" % (fstart, fend)
if spot_range is None:
spot_range = "%d %d" % (fstart, (fend+fstart)/2)
elif spot_range == "first":
spot_range = "%d %d" % (fstart, fstart)
elif spot_range == "all":
spot_range = "%d %d" % (fstart, fend)
elif len(spot_range) == 2:
spot_range = "%d %d" % spot_range
else:
print "Error!"
return
if rotation_axis is None:
if im.header["ImageType"] == "raxis": rotation_axis = (0,1,0)
else: rotation_axis = (1,0,0)
if reverse_phi: rotation_axis = map(lambda x:-1*x, rotation_axis)
rotation_axis = " ".join(map(lambda x: "%.2f"%x, rotation_axis))
if integrate_nimages is None:
delphi = 5
else:
delphi = osc_range * integrate_nimages
nx, ny = im.header["Width"], im.header["Height"],
qx, qy = im.header["PixelX"], im.header["PixelY"]
if orgx is None: orgx = im.header["BeamX"]/qx
if orgy is None: orgy = im.header["BeamY"]/qy
if wavelength is None: wavelength = im.header["Wavelength"]
if distance is None: distance = im.header["Distance"]
friedel = "FALSE" if anomalous else "TRUE"
sensor_thickness = 0 # FIXME
if im.header["ImageType"] == "marccd":
detector = "CCDCHESS MINIMUM_VALID_PIXEL_VALUE= 1 OVERLOAD= 65500"
elif im.header["ImageType"] == "raxis":
detector = "RAXIS MINIMUM_VALID_PIXEL_VALUE= 0 OVERLOAD= 2000000"
distance *= -1
elif im.header["ImageType"] == "minicbf":
detector = "PILATUS MINIMUM_VALID_PIXEL_VALUE=0 OVERLOAD= 1048576"
sensor_thickness = sensor_thickness_from_minicbf(img_files[0])
elif im.header["ImageType"] == "adsc":
detector = "ADSC MINIMUM_VALID_PIXEL_VALUE= 1 OVERLOAD= 65000"
elif im.header["ImageType"] == "mscccd":
detector = "SATURN MINIMUM_VALID_PIXEL_VALUE= 1 OVERLOAD= 262112" # XXX Should read header!!
distance *= -1
extra_kwds = ""
if minpk is not None: extra_kwds += " MINPK= %.2f\n" % minpk
for r1, r2 in exclude_resolution_range:
if r1 < r2: r1, r2 = r2, r1
extra_kwds += " EXCLUDE_RESOLUTION_RANGE= %.3f %.3f\n" % (r1, r2)
inp_str = """\
JOB= XYCORR INIT COLSPOT IDXREF DEFPIX INTEGRATE CORRECT
ORGX= %(orgx).2f ORGY= %(orgy).2f
DETECTOR_DISTANCE= %(distance).2f
OSCILLATION_RANGE= %(osc_range).3f
X-RAY_WAVELENGTH= %(wavelength).5f
NAME_TEMPLATE_OF_DATA_FRAMES= %(template)s
! REFERENCE_DATA_SET=xxx/XDS_ASCII.HKL ! e.g. to ensure consistent indexing
DATA_RANGE= %(data_range)s
SPOT_RANGE= %(spot_range)s
! BACKGROUND_RANGE=1 10 ! rather use defaults (first 5 degree of rotation)
FRIEDEL'S_LAW= %(friedel)s
DELPHI= %(delphi).2f
%(extra_kwds)s
! parameters specifically for this detector and beamline:
DETECTOR= %(detector)s
SENSOR_THICKNESS= %(sensor_thickness).2f
! attention CCD detectors: for very high resolution (better than 1A) make sure to specify SILICON
! as about 32* what CORRECT.LP suggests (absorption of phosphor is much higher than that of silicon)
NX= %(nx)s NY= %(ny)s QX= %(qx)s QY= %(qy)s ! to make CORRECT happy if frames are unavailable
ROTATION_AXIS= %(rotation_axis)s
""" % locals()
# XXX Really, really BAD idea!!
# Synchrotron can have R-AXIS, and In-house detecotr can have horizontal goniometer..!!
if im.header["ImageType"] == "raxis":
inp_str += """\
DIRECTION_OF_DETECTOR_X-AXIS= 1 0 0
DIRECTION_OF_DETECTOR_Y-AXIS= 0 -1 0
INCIDENT_BEAM_DIRECTION= 0 0 1
!FRACTION_OF_POLARIZATION= 0.98 ! uncomment if synchrotron
POLARIZATION_PLANE_NORMAL= 1 0 0
"""
else:
if im.header["ImageType"] == "mscccd":
inp_str += """\
DIRECTION_OF_DETECTOR_X-AXIS= -1 0 0
DIRECTION_OF_DETECTOR_Y-AXIS= 0 1 0
"""
else:
inp_str += """\
DIRECTION_OF_DETECTOR_X-AXIS= 1 0 0
DIRECTION_OF_DETECTOR_Y-AXIS= 0 1 0
"""
inp_str += """\
INCIDENT_BEAM_DIRECTION= 0 0 1
FRACTION_OF_POLARIZATION= 0.98 ! better value is provided by beamline staff!
POLARIZATION_PLANE_NORMAL= 0 1 0
"""
if not minimum:
inp_str += """\
SPACE_GROUP_NUMBER= %(sgnum)d ! 0 if unknown
UNIT_CELL_CONSTANTS= %(cell)s ! put correct values if known
INCLUDE_RESOLUTION_RANGE=50 0 ! after CORRECT, insert high resol limit; re-run CORRECT
TRUSTED_REGION=0.00 1.4 ! partially use corners of detectors; 1.41421=full use
VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS=6000. 30000. ! often 7000 or 8000 is ok
STRONG_PIXEL=4 ! COLSPOT: only use strong reflections (default is 3)
MINIMUM_NUMBER_OF_PIXELS_IN_A_SPOT=3 ! default of 6 is sometimes too high
! close spots: reduce SEPMIN and CLUSTER_RADIUS from their defaults of 6 and 3, e.g. to 4 and 2
! for bad or low resolution data remove the "!" in the following line:
REFINE(IDXREF)=CELL BEAM ORIENTATION AXIS ! DISTANCE POSITION
REFINE(INTEGRATE)= DISTANCE POSITION BEAM ORIENTATION ! AXIS CELL
! REFINE(CORRECT)=CELL BEAM ORIENTATION AXIS DISTANCE POSITION ! Default is: refine everything
!used by DEFPIX and CORRECT to exclude ice-reflections / ice rings - uncomment if necessary
!EXCLUDE_RESOLUTION_RANGE= 3.93 3.87 !ice-ring at 3.897 Angstrom
!EXCLUDE_RESOLUTION_RANGE= 3.70 3.64 !ice-ring at 3.669 Angstrom
!EXCLUDE_RESOLUTION_RANGE= 3.47 3.41 !ice-ring at 3.441 Angstrom
!EXCLUDE_RESOLUTION_RANGE= 2.70 2.64 !ice-ring at 2.671 Angstrom
!EXCLUDE_RESOLUTION_RANGE= 2.28 2.22 !ice-ring at 2.249 Angstrom
!EXCLUDE_RESOLUTION_RANGE= 2.102 2.042 !ice-ring at 2.072 Angstrom - strong
!EXCLUDE_RESOLUTION_RANGE= 1.978 1.918 !ice-ring at 1.948 Angstrom - weak
!EXCLUDE_RESOLUTION_RANGE= 1.948 1.888 !ice-ring at 1.918 Angstrom - strong
!EXCLUDE_RESOLUTION_RANGE= 1.913 1.853 !ice-ring at 1.883 Angstrom - weak
!EXCLUDE_RESOLUTION_RANGE= 1.751 1.691 !ice-ring at 1.721 Angstrom - weak
""" % dict(sgnum=sgnum, cell=cell_str)
if im.header["ImageType"] == "minicbf":
inp_str += """\
NUMBER_OF_PROFILE_GRID_POINTS_ALONG_ALPHA/BETA= 13 ! Default is 9 - Increasing may improve data
NUMBER_OF_PROFILE_GRID_POINTS_ALONG_GAMMA= 13 ! accuracy, particularly if finely-sliced on phi,
! and does not seem to have any downsides.
"""
if nx == 1475:
if 1:#! grep -q Flat_field tmp2: #XXX FIXME
inp_str += """\
! the following specifications are for a detector _without_ proper
! flat_field correction; they cut away one additional pixel adjacent
! to each UNTRUSTED_RECTANGLE
!EXCLUSION OF VERTICAL DEAD AREAS OF THE PILATUS 2M DETECTOR
UNTRUSTED_RECTANGLE= 486 496 0 1680
UNTRUSTED_RECTANGLE= 980 990 0 1680
!EXCLUSION OF HORIZONTAL DEAD AREAS OF THE PILATUS 2M DETECTOR
UNTRUSTED_RECTANGLE= 0 1476 194 214
UNTRUSTED_RECTANGLE= 0 1476 406 426
UNTRUSTED_RECTANGLE= 0 1476 618 638
UNTRUSTED_RECTANGLE= 0 1476 830 850
UNTRUSTED_RECTANGLE= 0 1476 1042 1062
UNTRUSTED_RECTANGLE= 0 1476 1254 1274
UNTRUSTED_RECTANGLE= 0 1476 1466 1486
"""
else:
inp_str += """\
!EXCLUSION OF VERTICAL DEAD AREAS OF THE PILATUS 2M DETECTOR
UNTRUSTED_RECTANGLE= 487 495 0 1680
UNTRUSTED_RECTANGLE= 981 989 0 1680
!EXCLUSION OF HORIZONTAL DEAD AREAS OF THE PILATUS 2M DETECTOR
UNTRUSTED_RECTANGLE= 0 1476 195 213
UNTRUSTED_RECTANGLE= 0 1476 407 425
UNTRUSTED_RECTANGLE= 0 1476 619 637
UNTRUSTED_RECTANGLE= 0 1476 831 849
UNTRUSTED_RECTANGLE= 0 1476 1043 1061
UNTRUSTED_RECTANGLE= 0 1476 1255 1273
UNTRUSTED_RECTANGLE= 0 1476 1467 1485
"""
elif nx == 2463:
# Pilatus 6M
# FIXME: here we could test if a Flat_field correction was applied like we do for 2M
inp_str += """\
UNTRUSTED_RECTANGLE= 487 495 0 2528
UNTRUSTED_RECTANGLE= 981 989 0 2528
UNTRUSTED_RECTANGLE=1475 1483 0 2528
UNTRUSTED_RECTANGLE=1969 1977 0 2528
UNTRUSTED_RECTANGLE= 0 2464 195 213
UNTRUSTED_RECTANGLE= 0 2464 407 425
UNTRUSTED_RECTANGLE= 0 2464 619 637
UNTRUSTED_RECTANGLE= 0 2464 831 849
UNTRUSTED_RECTANGLE= 0 2464 1043 1061
UNTRUSTED_RECTANGLE= 0 2464 1255 1273
UNTRUSTED_RECTANGLE= 0 2464 1467 1485
UNTRUSTED_RECTANGLE= 0 2464 1679 1697
UNTRUSTED_RECTANGLE= 0 2464 1891 1909
UNTRUSTED_RECTANGLE= 0 2464 2103 2121
UNTRUSTED_RECTANGLE= 0 2464 2315 2333
"""
return inp_str
| 41.650602 | 101 | 0.686819 | 1,514 | 10,371 | 4.540951 | 0.301849 | 0.089018 | 0.069091 | 0.046836 | 0.265309 | 0.158109 | 0.083491 | 0.083491 | 0.083491 | 0.083491 | 0 | 0.086385 | 0.215312 | 10,371 | 248 | 102 | 41.818548 | 0.758417 | 0.042715 | 0 | 0.165094 | 0 | 0.018868 | 0.606628 | 0.109747 | 0 | 0 | 0 | 0.004032 | 0.009434 | 0 | null | null | 0 | 0.018868 | null | null | 0.004717 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1e6b0d9235d8b1a1193601cbf06d5bfb7871ca2e | 3,009 | py | Python | fbssdc/ast.py | Eijebong/binjs-ref | 52ba3ec9e9e2c439febc6a25f5f7952bd35dc539 | [
"MIT"
] | 391 | 2017-12-18T12:33:29.000Z | 2022-03-09T10:11:57.000Z | fbssdc/ast.py | Eijebong/binjs-ref | 52ba3ec9e9e2c439febc6a25f5f7952bd35dc539 | [
"MIT"
] | 297 | 2018-02-04T13:31:14.000Z | 2021-01-03T21:06:50.000Z | fbssdc/ast.py | Eijebong/binjs-ref | 52ba3ec9e9e2c439febc6a25f5f7952bd35dc539 | [
"MIT"
] | 31 | 2018-02-12T16:57:50.000Z | 2020-12-26T16:28:53.000Z | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import doctest
import json
import os
import subprocess
import idl
BINJS_SIGNATURE = b'BINJS\x02'
def load_ast(filename):
with open(filename) as inp:
return json.loads(inp.read())
def load_test_ast(filename):
return load_ast(os.path.join(os.path.dirname(__file__), 'test-data', filename))
# Map ASTs to something which includes all of the data.
# Values:
# - tagged object (with "type")
# - int
# - float
# - string
# - bool
# - None
# - list
# TODO: Add a "specials" for objects added by AST mutilations, like LazyIOUs.
class AstVisitor(object):
def __init__(self, types):
self.types = types
def visit(self, ty, value):
'''Visits value with declared type ty.
Declared types may be much broader than the actual type of value.
The encoder must narrow this uncertainty for the decoder to make
coordinated decisions.
'''
if type(value) in [bool, float, int, str, type(None)]:
self.visit_primitive(ty, value)
elif type(value) is list:
# FIXME: When es6 IDL uses ... or FrozenArray<...>, unpack and
# narrow the type to this value.
assert isinstance(ty, idl.TyFrozenArray), str(ty)
self.visit_list(ty, value)
elif type(value) is dict:
actual_ty = self.types.interfaces[value['type']]
self.visit_struct(ty, actual_ty, value)
else:
assert False, f'unreachable: {type(value)}'
def visit_list(self, ty, xs):
for i, x in enumerate(xs):
self.visit_list_item(ty.element_ty, i, x)
def visit_list_item(self, ty, i, x):
self.visit(ty, x)
def visit_struct(self, declared_ty, actual_ty, obj):
for i, attr in enumerate(actual_ty.attributes()):
self.visit_field(actual_ty, obj, i, attr)
def visit_field(self, struct_ty, obj, i, attr):
self.visit(attr.resolved_ty, obj[attr.name])
def visit_primitive(self, ty, value):
pass
# This type is not used but it is useful as a unit test for AstVisitor.
class AstStringIndexer(AstVisitor):
'''
>>> types = idl.parse_es6_idl()
>>> tree = load_test_ast('y5R7cnYctJv.js.dump')
>>> visitor = AstStringIndexer(types)
>>> visitor.visit(types.interfaces['Script'], tree)
>>> len(visitor.strings)
1330
>>> visitor.strings[10:14]
['IdentifierExpression', 'CavalryLogger', 'start_js', 'ArrayExpression']
'''
def __init__(self, types):
super().__init__(types)
self.strings = list()
def visit_primitive(self, ty, value):
super().visit_primitive(ty, value)
if type(value) is str:
self.strings.append(value)
# When a lazy node is deported, this breadcrumb is dropped in its
# place. References to lazy functions are deserialized in order so
# there is no need to add indexes to them.
# TODO: Make this a singleton.
class LazyIOU(object):
pass
if __name__ == '__main__':
doctest.testmod()
| 27.605505 | 81 | 0.689598 | 441 | 3,009 | 4.575964 | 0.417234 | 0.02775 | 0.016353 | 0.015857 | 0.049554 | 0.049554 | 0 | 0 | 0 | 0 | 0 | 0.006173 | 0.192423 | 3,009 | 108 | 82 | 27.861111 | 0.82428 | 0.41675 | 0 | 0.122449 | 0 | 0 | 0.033215 | 0 | 0 | 0 | 0 | 0.027778 | 0.040816 | 1 | 0.22449 | false | 0.040816 | 0.102041 | 0.020408 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1e7511d443f59d368f6c1b20379fd9fed48d9b34 | 7,491 | py | Python | vapor/fn.py | xiaket/vapor | 1c2c5dc85ac0c84571f8903733d3eadaf0a05d3c | [
"MIT"
] | null | null | null | vapor/fn.py | xiaket/vapor | 1c2c5dc85ac0c84571f8903733d3eadaf0a05d3c | [
"MIT"
] | null | null | null | vapor/fn.py | xiaket/vapor | 1c2c5dc85ac0c84571f8903733d3eadaf0a05d3c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
Models that maps to Cloudformation functions.
"""
def replace_fn(node):
"""Iteratively replace all Fn/Ref in the node"""
if isinstance(node, list):
return [replace_fn(item) for item in node]
if isinstance(node, dict):
return {name: replace_fn(value) for name, value in node.items()}
if isinstance(node, (str, int, float)):
return node
if isinstance(node, Ref):
return node.render()
if hasattr(Fn, node.__class__.__name__):
return node.render()
raise ValueError(f"Invalid value specified in the code: {node}")
class Ref:
"""Represents a ref function in Cloudformation."""
# This is our DSL, it's a very thin wrapper around dictionary.
# pylint: disable=R0903
def __init__(self, target):
"""Creates a Ref node with a target."""
self.target = target
def render(self):
"""Render the node as a dictionary."""
return {"Ref": self.target}
class Base64:
"""Fn::Base64 function."""
# pylint: disable=R0903
def __init__(self, value):
self.value = value
def render(self):
"""Render the node with Fn::Base64."""
return {"Fn::Base64": replace_fn(self.value)}
class Cidr:
"""Fn::Cidr function."""
# pylint: disable=R0903
def __init__(self, ipblock, count, cidr_bits):
self.ipblock = ipblock
self.count = count
self.cidr_bits = cidr_bits
def render(self):
"""Render the node with Fn::Cidr."""
return {
"Fn::Cidr": [
replace_fn(self.ipblock),
replace_fn(self.count),
replace_fn(self.cidr_bits),
]
}
class And:
"""Fn::And function."""
# pylint: disable=R0903
def __init__(self, *args):
self.conditions = list(args)
def render(self):
"""Render the node with Fn::And."""
return {"Fn::And": replace_fn(self.conditions)}
class Equals:
"""Fn::Equals function."""
# pylint: disable=R0903
def __init__(self, lhs, rhs):
self.lhs = lhs
self.rhs = rhs
def render(self):
"""Render the node with Fn::Equals."""
return {"Fn::Equals": [replace_fn(self.lhs), replace_fn(self.rhs)]}
class If:
"""Fn::If function."""
# pylint: disable=R0903
def __init__(self, condition, true_value, false_value):
self.condition = condition
self.true_value = true_value
self.false_value = false_value
def render(self):
"""Render the node with Fn::If."""
return {
"Fn::If": [
replace_fn(self.condition),
replace_fn(self.true_value),
replace_fn(self.false_value),
]
}
class Not:
"""Fn::Not function."""
# pylint: disable=R0903
def __init__(self, condition):
self.condition = condition
def render(self):
"""Render the node with Fn::Not."""
return {"Fn::Not": [replace_fn(self.condition)]}
class Or:
"""Fn::Or function."""
# pylint: disable=R0903
def __init__(self, *args):
self.conditions = list(args)
def render(self):
"""Render the node with Fn::Or."""
return {"Fn::Or": replace_fn(self.conditions)}
class FindInMap:
"""Fn::FindInMap function."""
# pylint: disable=R0903
def __init__(self, map_name, l1key, l2key):
self.map_name = map_name
self.l1key = l1key
self.l2key = l2key
def render(self):
"""Render the node with Fn::FindInMap."""
return {
"Fn::FindInMap": [
replace_fn(self.map_name),
replace_fn(self.l1key),
replace_fn(self.l2key),
]
}
class GetAtt:
"""Fn::GetAtt function."""
# pylint: disable=R0903
def __init__(self, logical_name, attr):
self.logical_name = logical_name
self.attr = attr
def render(self):
"""Render the node with Fn::GetAtt."""
return {"Fn::GetAtt": [replace_fn(self.logical_name), replace_fn(self.attr)]}
class GetAZs:
"""Fn::GetAZs function."""
# pylint: disable=R0903
def __init__(self, region):
self.region = region
def render(self):
"""Render the node with Fn::GetAZs."""
return {"Fn::GetAZs": replace_fn(self.region)}
class ImportValue:
"""Fn::ImportValue function."""
# pylint: disable=R0903
def __init__(self, export):
self.export = export
def render(self):
"""Render the node with Fn::ImportValue."""
return {"Fn::ImportValue": replace_fn(self.export)}
class Join:
"""Fn::Join function."""
# pylint: disable=R0903
def __init__(self, delimiter, elements):
self.delimiter = delimiter
self.elements = elements
def render(self):
"""Render the node with Fn::Join."""
return {"Fn::Join": [replace_fn(self.delimiter), replace_fn(self.elements)]}
class Select:
"""Fn::Select function."""
# pylint: disable=R0903
def __init__(self, index, elements):
self.index = index
self.elements = elements
def render(self):
"""Render the node with Fn::Select."""
return {"Fn::Select": [replace_fn(self.index), replace_fn(self.elements)]}
class Split:
"""Fn::Split function."""
# pylint: disable=R0903
def __init__(self, delimiter, target):
self.delimiter = delimiter
self.target = target
def render(self):
"""Render the node with Fn::Split."""
return {"Fn::Split": [replace_fn(self.delimiter), replace_fn(self.target)]}
class Sub:
"""Fn::Sub function."""
# pylint: disable=R0903
def __init__(self, target, mapping=None):
if not isinstance(target, str):
raise ValueError(
f"The first argument of Fn::Sub must be string: `{target}`"
)
if mapping is None:
self.mapping = {}
self.target = target
self.mapping = mapping
def render(self):
"""Render the node with Fn::Sub."""
if self.mapping:
return {"Fn::Sub": [replace_fn(self.target), replace_fn(self.mapping)]}
return {"Fn::Sub": replace_fn(self.target)}
class Transform:
"""Fn::Transform function."""
# pylint: disable=R0903
def __init__(self, construct):
is_dict = isinstance(construct, dict)
match_keys = set(construct.keys()) == {"Name", "Parameters"}
if not is_dict or not match_keys:
raise ValueError("Invalid Transform construct")
self.construct = construct
def render(self):
"""Render the node with Fn::Transform."""
return {
"Fn::Transform": {
"Name": replace_fn(self.construct["Name"]),
"Parameters": replace_fn(self.construct["Parameters"]),
}
}
class Fn:
"""
This is a container for all functions.
Rationale is instead of having to import all the functions,
we just import Fn and use any function as Fn.FuncName
"""
# pylint: disable=R0903
Base64 = Base64
Cidr = Cidr
And = And
Equals = Equals
If = If
Not = Not
Or = Or
FindInMap = FindInMap
GetAtt = GetAtt
GetAZs = GetAZs
ImportValue = ImportValue
Join = Join
Select = Select
Split = Split
Sub = Sub
Transform = Transform
| 23.409375 | 85 | 0.579762 | 880 | 7,491 | 4.784091 | 0.138636 | 0.070546 | 0.092637 | 0.084798 | 0.380998 | 0.354632 | 0.354632 | 0.24038 | 0.126841 | 0.069834 | 0 | 0.017426 | 0.287545 | 7,491 | 319 | 86 | 23.482759 | 0.771407 | 0.223602 | 0 | 0.223602 | 0 | 0 | 0.058539 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.018634 | 0 | 0.590062 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1e8b7a87cd18bdcc0ff56d000a8a800cceb9a789 | 476 | py | Python | setup.py | uofuseismo/shakemap-aqms | 228dc31fa987c71e48c70896946c54677497ce4a | [
"CC0-1.0"
] | null | null | null | setup.py | uofuseismo/shakemap-aqms | 228dc31fa987c71e48c70896946c54677497ce4a | [
"CC0-1.0"
] | null | null | null | setup.py | uofuseismo/shakemap-aqms | 228dc31fa987c71e48c70896946c54677497ce4a | [
"CC0-1.0"
] | null | null | null | from distutils.core import setup
import os.path
setup(name='shakemap_aqms',
version='1.0',
description='AQMS Modules for ShakeMap',
author='Bruce Worden',
author_email='cbworden@usgs.gov',
url='http://github.com/cbworden/shakemap-aqms',
packages=['shakemap_aqms',
'shakemap_aqms.coremods',
],
package_data={
'shakemap_aqms': [os.path.join('config', '*')]
},
scripts=[],
)
| 25.052632 | 56 | 0.57563 | 50 | 476 | 5.36 | 0.68 | 0.223881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00578 | 0.273109 | 476 | 18 | 57 | 26.444444 | 0.768786 | 0 | 0 | 0 | 0 | 0 | 0.346639 | 0.046218 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1e8e9df20ccdcd4d2401718bd824b567efdf4520 | 1,583 | py | Python | external/lemonade/dist/examples/calc/calc.py | almartin82/bayeslite | a27f243b5f16cc6a01e84336a829e5b65d665b7b | [
"Apache-2.0"
] | 964 | 2015-09-24T15:02:05.000Z | 2022-03-29T21:41:21.000Z | external/lemonade/dist/examples/calc/calc.py | almartin82/bayeslite | a27f243b5f16cc6a01e84336a829e5b65d665b7b | [
"Apache-2.0"
] | 435 | 2015-09-23T16:46:58.000Z | 2020-04-19T12:32:03.000Z | external/lemonade/dist/examples/calc/calc.py | almartin82/bayeslite | a27f243b5f16cc6a01e84336a829e5b65d665b7b | [
"Apache-2.0"
] | 86 | 2015-10-24T20:08:30.000Z | 2021-08-09T13:53:00.000Z |
import sys
def generateGrammar():
from lemonade.main import generate
from os.path import join, dirname
from StringIO import StringIO
inputFile = join(dirname(__file__), "gram.y")
outputStream = StringIO()
generate(inputFile, outputStream)
return outputStream.getvalue()
# generate and import our grammar
exec generateGrammar() in globals()
#
# the lexer
#
tokenType = {
'+': PLUS,
'-': MINUS,
'/': DIVIDE,
'*': TIMES,
}
def tokenize(input):
import re
tokenText = re.split("([+-/*])|\s*", input)
for text in tokenText:
if text is None:
continue
type = tokenType.get(text)
if type is None:
type = NUM
value = float(text)
else:
value = None
yield (type, value)
return
#
# the delegate
#
class Delegate(object):
def accept(self):
return
def parse_failed(self):
assert False, "Giving up. Parser is hopelessly lost..."
def syntax_error(self, token):
print >>sys.stderr, "Syntax error!"
return
#
# reduce actions
#
def sub(self, a, b): return a - b
def add(self, a, b): return a + b
def mul(self, a, b): return a * b
def div(self, a, b): return a / b
def num(self, value): return value
def print_result(self, result):
print result
return
p = Parser(Delegate())
#p.trace(sys.stdout, "# ")
if len(sys.argv) == 2:
p.parse(tokenize(sys.argv[1]))
else:
print >>sys.stderr, "usage: %s EXPRESSION" % sys.argv[0]
| 18.195402 | 64 | 0.576121 | 196 | 1,583 | 4.617347 | 0.459184 | 0.01768 | 0.026519 | 0.053039 | 0.075138 | 0.075138 | 0.075138 | 0 | 0 | 0 | 0 | 0.002703 | 0.2988 | 1,583 | 86 | 65 | 18.406977 | 0.812613 | 0.058749 | 0 | 0.117647 | 1 | 0 | 0.064407 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0 | null | null | 0 | 0.098039 | null | null | 0.078431 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1e9379e5b3b2bc4ac79a3648f31470e66c317df6 | 376 | py | Python | weeby/overlays.py | asheeeshh/weeby.py | 3da48e55bd1f824902bb7b2f7d31675cdeb080b6 | [
"MIT"
] | 5 | 2021-10-19T11:46:47.000Z | 2022-02-28T03:29:16.000Z | weeby/overlays.py | asheeeshh/weeby.py | 3da48e55bd1f824902bb7b2f7d31675cdeb080b6 | [
"MIT"
] | null | null | null | weeby/overlays.py | asheeeshh/weeby.py | 3da48e55bd1f824902bb7b2f7d31675cdeb080b6 | [
"MIT"
] | 1 | 2021-12-09T08:53:43.000Z | 2021-12-09T08:53:43.000Z | from . import config
from .util import make_request, image_request
class Overlay:
def __init__(self, token: str) -> None:
self.token = token
def overlay(self, type: str, image_url: str):
url = config.api_url + f"overlays/{type}?image={image_url}"
headers= {"Authorization" : f"Bearer {self.token}"}
return(image_request(url, headers)) | 34.181818 | 67 | 0.662234 | 50 | 376 | 4.78 | 0.48 | 0.112971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210106 | 376 | 11 | 68 | 34.181818 | 0.804714 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 0.087533 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1eaab01b62d068e6b6e85924d9d72b4eac38dad8 | 235 | py | Python | src/preprocess/setup.py | CyberAgentAILab/canvas-vae | a39a48ad9ae6217123e971cb4c5cf54c672eca74 | [
"Apache-2.0"
] | 12 | 2021-08-30T06:40:06.000Z | 2022-03-26T10:38:08.000Z | src/preprocess/setup.py | CyberAgentAILab/canvas-vae | a39a48ad9ae6217123e971cb4c5cf54c672eca74 | [
"Apache-2.0"
] | 1 | 2022-03-22T04:57:34.000Z | 2022-03-22T04:57:34.000Z | src/preprocess/setup.py | CyberAgentAILab/canvas-vae | a39a48ad9ae6217123e971cb4c5cf54c672eca74 | [
"Apache-2.0"
] | 5 | 2021-10-30T09:19:09.000Z | 2022-02-02T02:06:39.000Z | """Packager for cloud environment."""
from setuptools import setup, find_packages
setup(
name='preprocess',
version='1.0.0',
packages=find_packages(),
install_requires=[
'tensorflow',
'numpy',
],
)
| 18.076923 | 43 | 0.621277 | 24 | 235 | 5.958333 | 0.791667 | 0.167832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.234043 | 235 | 12 | 44 | 19.583333 | 0.777778 | 0.131915 | 0 | 0 | 0 | 0 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1ead5d3e79d4e71898dd29a86d16aaccdbe4ddbc | 1,089 | py | Python | 14_funcoes/a124_todos_parametros.py | smartao/estudos_python | 252a2e592ff929dfc6c06fc09b42cb7063ad0b5a | [
"MIT"
] | null | null | null | 14_funcoes/a124_todos_parametros.py | smartao/estudos_python | 252a2e592ff929dfc6c06fc09b42cb7063ad0b5a | [
"MIT"
] | 9 | 2019-11-15T14:21:43.000Z | 2020-03-15T14:37:13.000Z | 14_funcoes/a124_todos_parametros.py | smartao/estudos_python | 252a2e592ff929dfc6c06fc09b42cb7063ad0b5a | [
"MIT"
] | null | null | null | #!/usr/bin/python3
'''
Pegando todos os parametros
Um argumento pocisional sempre deve estar antes de parametros nomeados
def todos_params(*args, **kwargs):
Significa que quer pegar os argumentos de uma forma genérica
tanto pocisionais quanto os nomeados
'''
def todos_params(*args, **kwargs):
print(f'args: {args}')
print(f'kwargs: {kwargs}')
if __name__ == '__main__':
# 3 parametros posicionais
todos_params('a', 'b', 'c')
# 3 parametros pocisinais + 2 nomeados
todos_params(1, 2, 3, legal=True, valor=12.99)
# 3 parametros posicionais + 2 nomeados
todos_params('Ana', False, [1, 2, 3], tamanho='M', fragil=False)
# 2 parametros nomeados
todos_params(primeiro='João', segundo='Maria')
# 1 parametro pociaional + 1 nomeados
todos_params('Maria', primeiro='João')
# Forma incorreto, passando parametro pocional depois do nomeado
# todos_params(primeiro='João', 'Maria')
# Fontes:
# Curso Python 3 - Curso Completo do Básico ao Avançado Udemy Aula 124
# https://github.com/cod3rcursos/curso-python/tree/master/funcoes
| 31.114286 | 70 | 0.701561 | 146 | 1,089 | 5.123288 | 0.568493 | 0.117647 | 0.101604 | 0.058824 | 0.085562 | 0.085562 | 0 | 0 | 0 | 0 | 0 | 0.026846 | 0.179063 | 1,089 | 34 | 71 | 32.029412 | 0.809843 | 0.606061 | 0 | 0 | 0 | 0 | 0.148418 | 0 | 0 | 0 | 0 | 0.088235 | 0 | 1 | 0.111111 | true | 0 | 0 | 0 | 0.111111 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1ebe52eedc566dc315d1fdcd8dd171eaf6d35325 | 439 | py | Python | webapp/dataaccess/plotlydash/__init__.py | Dataplate/dataplate | dc05f30570d20008add7325f14c33e730f253ced | [
"BSD-3-Clause"
] | 13 | 2020-12-29T06:11:39.000Z | 2022-03-24T22:23:02.000Z | webapp/dataaccess/plotlydash/__init__.py | rafael-gumiero/dataplate | f748bf658e2a5d1921d8c989dcd843caf8a062ca | [
"BSD-3-Clause"
] | null | null | null | webapp/dataaccess/plotlydash/__init__.py | rafael-gumiero/dataplate | f748bf658e2a5d1921d8c989dcd843caf8a062ca | [
"BSD-3-Clause"
] | 1 | 2022-03-24T22:22:08.000Z | 2022-03-24T22:22:08.000Z | from flask import Blueprint
from flask_login import login_required
bp_dash = Blueprint('dashboard', __name__, template_folder='../templates', url_prefix='/admin/dashboard/')
def _protect_dashviews(dashapp):
for view_func in dashapp.server.view_functions:
if view_func.startswith(dashapp.config.url_base_pathname):
dashapp.server.view_functions[view_func] = login_required(dashapp.server.view_functions[view_func]) | 48.777778 | 111 | 0.788155 | 57 | 439 | 5.701754 | 0.54386 | 0.098462 | 0.156923 | 0.24 | 0.209231 | 0.209231 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113895 | 439 | 9 | 111 | 48.777778 | 0.835476 | 0 | 0 | 0 | 0 | 0 | 0.086364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1ec004a846fd164c978f762c473a34b9f2cb0e55 | 235 | py | Python | test/test_util.py | jeffw-github/autoprotocol-python | df87a07846b28fe433bb49f2c0c8054e2f197e89 | [
"BSD-3-Clause"
] | 113 | 2015-01-29T05:12:21.000Z | 2022-03-18T01:39:14.000Z | test/test_util.py | jeffw-github/autoprotocol-python | df87a07846b28fe433bb49f2c0c8054e2f197e89 | [
"BSD-3-Clause"
] | 131 | 2015-02-24T23:12:14.000Z | 2022-01-11T17:16:41.000Z | test/test_util.py | jeffw-github/autoprotocol-python | df87a07846b28fe433bb49f2c0c8054e2f197e89 | [
"BSD-3-Clause"
] | 45 | 2015-01-25T03:47:16.000Z | 2021-05-29T03:55:28.000Z | import json
class TestUtils:
@staticmethod
def read_json_file(file_path: str):
file = open("./test/data/{0}".format(file_path))
data = json.load(file)
return json.dumps(data, indent=2, sort_keys=True)
| 23.5 | 57 | 0.646809 | 33 | 235 | 4.454545 | 0.69697 | 0.108844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010929 | 0.221277 | 235 | 9 | 58 | 26.111111 | 0.79235 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1ec13ef63116630aba73dd39abdff816ffab65db | 692 | py | Python | jacked/matchers/_type.py | ramonhagenaars/jacked | 5cd31eb0fde46f31732529addda93f55d4dee55c | [
"MIT"
] | 4 | 2019-06-11T11:53:42.000Z | 2019-10-29T08:57:41.000Z | jacked/matchers/_type.py | ramonhagenaars/jacked | 5cd31eb0fde46f31732529addda93f55d4dee55c | [
"MIT"
] | 3 | 2019-06-19T20:54:28.000Z | 2019-06-21T12:57:59.000Z | jacked/matchers/_type.py | ramonhagenaars/jacked | 5cd31eb0fde46f31732529addda93f55d4dee55c | [
"MIT"
] | null | null | null | """
PRIVATE MODULE: do not import (from) it directly.
This module contains the ``TypeMatcher``class.
"""
import inspect
from jacked._injectable import Injectable
from jacked._container import Container
from jacked.matchers._base_matcher import BaseMatcher
class TypeMatcher(BaseMatcher):
def match(
self,
hint: object,
injectable: Injectable,
container: Container):
cls = hint.__args__[0]
if (inspect.isclass(injectable.subject)
and issubclass(injectable.subject, cls)):
return injectable.subject
def _matching_type(self):
return type
def priority(self):
return 200
| 23.862069 | 57 | 0.657514 | 73 | 692 | 6.09589 | 0.534247 | 0.067416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.265896 | 692 | 28 | 58 | 24.714286 | 0.86811 | 0.140173 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.222222 | 0.111111 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
1ecf459877e93b77d15aa389ffd7f71974f99eea | 420 | py | Python | code/get_ordinals.py | lebronlambert/Information_Extraction_DeepRL | 0463cde18edd77e00e6efd7b57fe937024ac56ef | [
"MIT"
] | 251 | 2016-06-15T15:01:59.000Z | 2021-11-29T09:23:30.000Z | code/get_ordinals.py | lebronlambert/Information_Extraction_DeepRL | 0463cde18edd77e00e6efd7b57fe937024ac56ef | [
"MIT"
] | 5 | 2017-01-04T21:50:38.000Z | 2018-06-06T01:42:07.000Z | code/get_ordinals.py | lebronlambert/Information_Extraction_DeepRL | 0463cde18edd77e00e6efd7b57fe937024ac56ef | [
"MIT"
] | 87 | 2016-06-14T17:27:05.000Z | 2021-11-09T08:43:09.000Z | import pickle
import inflect
p = inflect.engine()
words = set(['first','second','third','fourth','fifth','sixth','seventh','eighth','ninth','tenth','eleventh','twelfth','thirteenth','fourteenth','fifteenth',
'sixteenth','seventeenth','eighteenth','nineteenth','twentieth','twenty-first','twenty-second','twenty-third','twenty-fourth','twenty-fifth'])
pickle.dump(words, open("../data/constants/word_ordinals.p", "wb")) | 60 | 157 | 0.714286 | 49 | 420 | 6.102041 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 420 | 7 | 158 | 60 | 0.738272 | 0 | 0 | 0 | 0 | 0 | 0.581948 | 0.078385 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
1ed66bc0d99f35be3eb38f391314fbb25a48d280 | 219 | py | Python | softmax.py | maomran/softmax | 330a16712d44a0195011710e24b65e7732e784b0 | [
"Apache-2.0"
] | 14 | 2018-06-19T01:58:18.000Z | 2022-03-23T18:27:45.000Z | softmax.py | MohammadHashemi-ai/softmax | 330a16712d44a0195011710e24b65e7732e784b0 | [
"Apache-2.0"
] | 3 | 2019-04-11T15:45:18.000Z | 2021-10-02T02:22:38.000Z | softmax.py | MohammadHashemi-ai/softmax | 330a16712d44a0195011710e24b65e7732e784b0 | [
"Apache-2.0"
] | 10 | 2018-10-31T06:46:51.000Z | 2022-03-23T18:27:35.000Z |
import math
z = [1.0,1 ,1, 1.0]
z_exp = [math.exp(i) for i in z]
print([round(i, 2) for i in z_exp])
sum_z_exp = sum(z_exp)
print(round(sum_z_exp, 2))
softmax = [round(i / sum_z_exp, 3) for i in z_exp]
print(softmax)
| 19.909091 | 50 | 0.652968 | 53 | 219 | 2.509434 | 0.283019 | 0.210526 | 0.210526 | 0.157895 | 0.255639 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049724 | 0.173516 | 219 | 10 | 51 | 21.9 | 0.685083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.375 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1ee0cd00d8489da58d377eeb5afb082ee1712314 | 243 | py | Python | src/Dialog/HelpDialog.py | jtrfid/tkzgeom | b3b1baf33b89e7b670bc736d28818456ac4547ad | [
"MIT"
] | null | null | null | src/Dialog/HelpDialog.py | jtrfid/tkzgeom | b3b1baf33b89e7b670bc736d28818456ac4547ad | [
"MIT"
] | null | null | null | src/Dialog/HelpDialog.py | jtrfid/tkzgeom | b3b1baf33b89e7b670bc736d28818456ac4547ad | [
"MIT"
] | null | null | null | from PyQt5 import QtCore, QtWidgets, QtGui, uic
class HelpDialog(QtWidgets.QDialog):
def __init__(self):
super(HelpDialog, self).__init__()
self.ui = uic.loadUi('layouts/help.ui', self)
self.setWindowTitle("Help")
| 30.375 | 53 | 0.679012 | 29 | 243 | 5.413793 | 0.655172 | 0.101911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005102 | 0.193416 | 243 | 7 | 54 | 34.714286 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0.078189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
94b4086538bb9962754c3bd9f36ec1aa28af101a | 1,665 | py | Python | gallery/models.py | vsidera/PhotoAlbum | 1a6dbf7129e22e18be5de5648e488c4e9ccecbee | [
"MIT"
] | null | null | null | gallery/models.py | vsidera/PhotoAlbum | 1a6dbf7129e22e18be5de5648e488c4e9ccecbee | [
"MIT"
] | null | null | null | gallery/models.py | vsidera/PhotoAlbum | 1a6dbf7129e22e18be5de5648e488c4e9ccecbee | [
"MIT"
] | null | null | null | from django.db import models
from cloudinary.models import CloudinaryField
# Create your models here.
class Images(models.Model):
image_link = CloudinaryField('image')
title = models.CharField(max_length=80)
description = models.TextField()
category = models.ForeignKey('Categories', on_delete=models.CASCADE, default=1)
location = models.ForeignKey('Locations', on_delete=models.CASCADE, default=1)
def __str__(self):
return self.title
def save_image(self):
self.save()
def delete_image(self):
self.delete()
@classmethod
def get_all(cls):
pics = Images.objects.all()
return pics
@classmethod
def search_image(cls, cat):
retrieved = cls.objects.filter(category__name__contains=cat) #images assoc w/ this cat
return retrieved #list of instances
@classmethod
def filter_by_location(cls ,location):
retrieved = Images.objects.filter(location__town__contains=location)
return retrieved
class Categories(models.Model):
name = models.CharField(max_length=30)
def __str__(self):
return self.name
def save_category(self):
self.save()
def delete_category(self):
self.delete()
class Locations(models.Model):
town = models.CharField(max_length=30)
country = models.CharField(max_length=30)
def __str__(self):
return self.town
def save_location(self):
self.save()
def delete_location(self):
self.delete()
@classmethod
def get_all(cls):
cities = Locations.objects.all()
return cities | 25.615385 | 94 | 0.657057 | 194 | 1,665 | 5.443299 | 0.309278 | 0.045455 | 0.068182 | 0.090909 | 0.315341 | 0.212121 | 0.157197 | 0.157197 | 0.087121 | 0.087121 | 0 | 0.007987 | 0.248048 | 1,665 | 65 | 95 | 25.615385 | 0.835463 | 0.039039 | 0 | 0.361702 | 0 | 0 | 0.015028 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.276596 | false | 0 | 0.042553 | 0.06383 | 0.702128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
94c45a2bfafb15399fad02cc3b4cd9d86f79d7e8 | 1,297 | py | Python | 06_advanced_python_topics/decorator.py | iwanbolzern/python-course | a4e9088adab09b4e92710eb350392578c0977a71 | [
"MIT"
] | null | null | null | 06_advanced_python_topics/decorator.py | iwanbolzern/python-course | a4e9088adab09b4e92710eb350392578c0977a71 | [
"MIT"
] | null | null | null | 06_advanced_python_topics/decorator.py | iwanbolzern/python-course | a4e9088adab09b4e92710eb350392578c0977a71 | [
"MIT"
] | null | null | null | from datetime import datetime
from typing import Callable
def add(a: int, b: int) -> int:
return a + b
def subtract(a: int, b: int) -> int:
return a - b
def calculate(operation: Callable[[int, int], int],
a: int, b: int) -> int:
"""Demonstration of first class citizen"""
return operation(a, b)
def calc_decorator(func):
def wrapper(*args, **kwargs):
print('Something is happening before the calculation is performed.')
res = func(*args, **kwargs)
print('Something is happening after the calculation is performed.')
return res
return wrapper
def not_during_the_night(func):
def wrapper(*args, **kwargs):
if 7 <= datetime.now().hour < 22:
return func(*args, **kwargs)
else:
raise RuntimeError('Not allowed to work between 22pm and 07am')
return wrapper
@not_during_the_night
def do_work(a: int, b: int) -> int:
return a + b
if __name__ == '__main__':
# first class citizen
print(f'1 + 1 = {calculate(add, 1, 1)}')
print(f'1 - 1 = {calculate(subtract, 1, 1)}')
# decorated function (simple example)
decorated_add = calc_decorator(add)
print(f'1 + 1 = {decorated_add(1, 1)}')
# syntactic sugar
print(f'1 + 1 = {do_work(1, 1)}')
| 23.581818 | 76 | 0.613724 | 180 | 1,297 | 4.311111 | 0.355556 | 0.020619 | 0.025773 | 0.041237 | 0.278351 | 0.171392 | 0.081186 | 0.081186 | 0.056701 | 0 | 0 | 0.023859 | 0.256746 | 1,297 | 54 | 77 | 24.018519 | 0.78112 | 0.08404 | 0 | 0.1875 | 0 | 0 | 0.239831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.0625 | 0.09375 | 0.5625 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
94c9b5188d932c4b2ed200b6b817f43f0d634bb4 | 907 | py | Python | iqoptionapi/http/changebalance.py | mustx1/MYIQ | 3afb597aa8a8abc278b7d70dad46af81789eae3e | [
"MIT"
] | 3 | 2021-02-24T00:29:55.000Z | 2021-08-13T12:21:13.000Z | iqoptionapi/http/changebalance.py | mustx1/MYIQ | 3afb597aa8a8abc278b7d70dad46af81789eae3e | [
"MIT"
] | 5 | 2022-01-20T00:32:49.000Z | 2022-02-16T23:12:10.000Z | iqoptionapi/http/changebalance.py | mustx1/MYIQ | 3afb597aa8a8abc278b7d70dad46af81789eae3e | [
"MIT"
] | 2 | 2020-11-10T19:03:38.000Z | 2020-12-07T10:42:36.000Z | """Module for IQ option changebalance resource."""
from iqoptionapi.http.resource import Resource
from iqoptionapi.http.profile import Profile
class Changebalance(Resource):
"""Class for IQ option changebalance resource."""
# pylint: disable=too-few-public-methods
url = "/".join((Profile.url, "changebalance"))
def _post(self, data=None, headers=None):
"""Send get request for IQ Option API changebalance http resource.
:returns: The instance of :class:`requests.Response`.
"""
return self.send_http_request("POST", data=data, headers=headers)
def __call__(self,balance_id):
"""Method to get IQ Option API changebalance http request.
:param str balance_id: The balance identifier.
:returns: The instance of :class:`requests.Response`.
"""
data = {"balance_id": balance_id}
return self._post(data)
| 31.275862 | 74 | 0.676957 | 109 | 907 | 5.522936 | 0.412844 | 0.053156 | 0.054817 | 0.079734 | 0.335548 | 0.136213 | 0.136213 | 0 | 0 | 0 | 0 | 0 | 0.210584 | 907 | 28 | 75 | 32.392857 | 0.840782 | 0.44763 | 0 | 0 | 0 | 0 | 0.063927 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a207ed19f70c2abb92a14c67d1bb7ebb192f8c4b | 3,796 | py | Python | lhzexploit_send.py | LiHongzhuo/wxBot | 98c7acee5a076332c8de2bac091e04d7f661edb6 | [
"Apache-2.0"
] | null | null | null | lhzexploit_send.py | LiHongzhuo/wxBot | 98c7acee5a076332c8de2bac091e04d7f661edb6 | [
"Apache-2.0"
] | null | null | null | lhzexploit_send.py | LiHongzhuo/wxBot | 98c7acee5a076332c8de2bac091e04d7f661edb6 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from wxbot import *
import os
import ConfigParser
import lhz_mysql
import MySQLdb
import sys
import lhzoss
import lhzhttp
import lhzasr
import thread
import lhzconf
import lhzutil
import requests
import lhzimg
class MyWXBot(WXBot):
def makelove(self):
print '中华人民共和国'.decode('utf-8').encode('gbk')
def main():
bot = MyWXBot()
# bot.DEBUG = True
# bot.conf['qr'] = 'png'
bot.continuelogin = True # 李宏卓改的功能,使微信可持续登录
bot.loop_proc = False #李宏卓改的功能,使不轮询数据
bot.run()
if bot.status == 'makelove':
# from_user_id = '@@6b2a2f64ce7f0eb592e368b36ac33500847782b48cf055c2452bd13a42e9c78f'
# sql = "select MsgType,contentdata from wechatdata where from_user_id='%s' group by MsgType" % (from_user_id)
from_user_name = MySQLdb.escape_string(lhzconf.send_from_groupname)
sql = "select wid,MsgType,contentdata,content2 from wechatdata where from_user_name='%s' and MsgType in (1,3,34,43) and wid>315 limit 999" % (from_user_name)
# print sql.decode('utf-8').encode('gbk')
results = lhz_mysql.query(sql)
print ('共'+str(len(results))+'条').decode('utf-8').encode('gbk')
uids = []
for un in lhzconf.send_to_groupname:
uid = bot.get_user_id(un)
uids.append(uid)
# print from_user_name.decode('utf-8').encode('gbk')
# print to_user_name.decode('utf-8').encode('gbk')
# print len(results)
for row in results:
wid = row[0]
MsgType = row[1]
contentdata = row[2]
content2 = row[3]
strtoprint = str(wid) + ' ' + str(MsgType) +' '+ contentdata+ ' ' + (' ' if content2 is None else content2)
try:
strtoprint = strtoprint.decode('utf-8')
strtoprint = strtoprint.encode('gbk')
except Exception as e:
pass
print strtoprint
if MsgType in (1,43):#文字
pass
for uid in uids:
bot.send_msg_by_uid(contentdata,uid)
else:
# r = bot.session.get(contentdata)
r = requests.get(contentdata if content2 is None or MsgType!=3 else content2)
data = r.content
lastindexofsep = lhzutil.find_last(contentdata,'/')
fn = contentdata[lastindexofsep+1:]
fn = os.path.join(bot.temp_pwd, fn)
with open(fn, 'wb') as f:
f.write(data)
if MsgType == 3:#图片
pass
#压缩图片
if os.path.getsize(fn)>1000000:
lhzimg.JfzBlogImgThumb(fn, fn)
lastindex = lhzutil.find_last(fn, os.path.sep)
key = fn[lastindex+1:]
if '_thumb.' not in fn:
key = lhzutil.thumbFilePath(key)
content2 = lhzoss.upload(fn, key)
#更新数据库
sql = "update wechatdata set content2='%s' where wid=%d" % (content2, wid)
lhz_mysql.execute(sql)
for uid in uids:
bot.send_img_msg_by_uid(fn,uid)
elif MsgType == 34:#语音
pass
for uid in uids:
bot.send_file_msg_by_uid(fn,uid)
# elif MsgType == 43:#视频
# pass
# bot.send_file_msg_by_uid(fn,uid)
os.remove(fn)
time.sleep(9)
else:
print 'login failed'
if __name__ == '__main__':
main()
| 39.541667 | 166 | 0.513172 | 419 | 3,796 | 4.520286 | 0.355609 | 0.014784 | 0.031679 | 0.042239 | 0.195354 | 0.108237 | 0.098205 | 0.059134 | 0 | 0 | 0 | 0.039863 | 0.385406 | 3,796 | 95 | 167 | 39.957895 | 0.771967 | 0.145416 | 0 | 0.115385 | 0 | 0.012821 | 0.080908 | 0.010233 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.051282 | 0.179487 | null | null | 0.089744 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
a2298083ce23c56c820c746c853d5f1e55a1c6d8 | 461 | py | Python | Python/pyworkout/files/ex18.py | honchardev/Fun | ca7c0076e9bb3017c5d7e89aa7d5bd54a83c8ecc | [
"MIT"
] | null | null | null | Python/pyworkout/files/ex18.py | honchardev/Fun | ca7c0076e9bb3017c5d7e89aa7d5bd54a83c8ecc | [
"MIT"
] | 3 | 2020-03-24T16:26:35.000Z | 2020-04-15T19:40:41.000Z | Python/pyworkout/files/ex18.py | honchardev/Fun | ca7c0076e9bb3017c5d7e89aa7d5bd54a83c8ecc | [
"MIT"
] | null | null | null | def get_final_line(
filepath: str
) -> str:
with open(filepath) as fs_r:
for line in fs_r:
pass
return line
def get_final_line__readlines(
filepath: str
) -> str:
with open(filepath) as fs_r:
return fs_r.readlines()[-1]
def main():
filepath = 'file.txt'
final_line_content = get_final_line(filepath)
print(f'Final line content: [{final_line_content}]')
if __name__ == '__main__':
main()
| 18.44 | 56 | 0.624729 | 64 | 461 | 4.125 | 0.390625 | 0.204545 | 0.136364 | 0.113636 | 0.265152 | 0.265152 | 0.265152 | 0.265152 | 0.265152 | 0 | 0 | 0.00295 | 0.264642 | 461 | 24 | 57 | 19.208333 | 0.775811 | 0 | 0 | 0.333333 | 0 | 0 | 0.125813 | 0.047722 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.055556 | 0 | 0 | 0.277778 | 0.055556 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
bf3ff739f5d2991ad3d76f833bf7356a90d392e0 | 861 | py | Python | gunnery/account/backend.py | dholdaway/gunnery | 87664986cd8bf1a17a9cbfb98fb1012cef9adaec | [
"Apache-2.0"
] | null | null | null | gunnery/account/backend.py | dholdaway/gunnery | 87664986cd8bf1a17a9cbfb98fb1012cef9adaec | [
"Apache-2.0"
] | 1 | 2021-06-10T23:55:50.000Z | 2021-06-10T23:55:50.000Z | gunnery/account/backend.py | dholdaway/gunnery | 87664986cd8bf1a17a9cbfb98fb1012cef9adaec | [
"Apache-2.0"
] | null | null | null | from django.contrib.auth.models import check_password
from django.contrib.auth import get_user_model
_user = get_user_model()
class EmailAuthBackend(object):
"""
Email Authentication Backend
Allows a user to sign in using an email/password pair rather than
a username/password pair.
"""
def authenticate(self, username=None, password=None):
""" Authenticate a user based on email address as the user name. """
try:
user = _user.objects.get(email=username)
if user.check_password(password):
return user
except _user.DoesNotExist:
return None
def get_user(self, user_id):
""" Get a _user object from the user_id. """
try:
return _user.objects.get(pk=user_id)
except _user.DoesNotExist:
return None | 30.75 | 76 | 0.637631 | 107 | 861 | 4.981308 | 0.429907 | 0.0394 | 0.06379 | 0.078799 | 0.120075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 861 | 28 | 77 | 30.75 | 0.866667 | 0.255517 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.1875 | 0.125 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
bf4fb6bd0129319ecf0305a17ab66e72508012dc | 659 | py | Python | BS01-flask-bootstrap-table-demo/app/models.py | AngelLiang/Flask-Demos | cf0a74885b873cb2583b3870ccdf3508d3af602e | [
"MIT"
] | 3 | 2020-06-17T05:44:48.000Z | 2021-09-11T02:49:38.000Z | BS01-flask-bootstrap-table-demo/app/models.py | AngelLiang/Flask-Demos | cf0a74885b873cb2583b3870ccdf3508d3af602e | [
"MIT"
] | 3 | 2021-06-08T20:57:03.000Z | 2022-02-23T14:54:59.000Z | BS01-flask-bootstrap-table-demo/app/models.py | AngelLiang/Flask-Demos | cf0a74885b873cb2583b3870ccdf3508d3af602e | [
"MIT"
] | 6 | 2020-06-17T05:44:56.000Z | 2022-03-29T12:53:05.000Z | from sqlalchemy_mptt.mixins import BaseNestedSets
from .extensions import db
class Tree(db.Model, BaseNestedSets):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(64))
# parent_id = db.Column(db.Integer, db.ForeignKey('tree.id'))
# parent = db.relationship('Tree', remote_side=[id], backref='children')
def __str__(self):
return "{}".format(self.name)
def to_dict(self):
return {
'id': self.id,
'name': self.name,
'left': self.left,
'right': self.right,
'parent_id': self.parent_id,
'tree_id': self.tree_id,
}
| 27.458333 | 76 | 0.590288 | 82 | 659 | 4.585366 | 0.439024 | 0.06383 | 0.079787 | 0.06383 | 0.101064 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004132 | 0.265554 | 659 | 23 | 77 | 28.652174 | 0.772727 | 0.197269 | 0 | 0 | 0 | 0 | 0.062738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
bf53ef3d918d4715c6704f2df580b7658e51cdbf | 159 | py | Python | pythonProject/ex012.py | aknowxd/python_cursoemvideo_1 | 826a7df2c813c209a68193d4f03638e60ca2eb4a | [
"MIT"
] | null | null | null | pythonProject/ex012.py | aknowxd/python_cursoemvideo_1 | 826a7df2c813c209a68193d4f03638e60ca2eb4a | [
"MIT"
] | null | null | null | pythonProject/ex012.py | aknowxd/python_cursoemvideo_1 | 826a7df2c813c209a68193d4f03638e60ca2eb4a | [
"MIT"
] | null | null | null | price = float(input("Digite o preco do produto: "))
discount = (5/100) * price
total = price - discount
print("O valor com desconto eh: {:.2f}".format(total)) | 31.8 | 54 | 0.679245 | 24 | 159 | 4.5 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.150943 | 159 | 5 | 54 | 31.8 | 0.762963 | 0 | 0 | 0 | 0 | 0 | 0.3625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bf55c375383410b68471e4aaaea5cbb22ffb9121 | 1,661 | py | Python | accounts/migrations/0004_auto_20210202_0653.py | MattatPath/Path-backend | c8fde0f39162ff020c475badc76b191f9fec600c | [
"Apache-2.0"
] | null | null | null | accounts/migrations/0004_auto_20210202_0653.py | MattatPath/Path-backend | c8fde0f39162ff020c475badc76b191f9fec600c | [
"Apache-2.0"
] | null | null | null | accounts/migrations/0004_auto_20210202_0653.py | MattatPath/Path-backend | c8fde0f39162ff020c475badc76b191f9fec600c | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2.12 on 2021-02-02 06:53
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('accounts', '0003_auto_20210202_0503'),
]
operations = [
migrations.AddField(
model_name='account',
name='email',
field=models.CharField(default='NEED TO FORMAT EMAIL', max_length=42),
),
migrations.AddField(
model_name='account',
name='name',
field=models.CharField(default='None', max_length=42),
),
migrations.AddField(
model_name='account',
name='profits',
field=models.CharField(default='0.00', max_length=42),
),
migrations.AlterField(
model_name='account',
name='country_code',
field=models.CharField(default='NO', max_length=2),
),
migrations.AlterField(
model_name='account',
name='current_balance',
field=models.CharField(default='0.00', max_length=10),
),
migrations.AlterField(
model_name='account',
name='interest_rate',
field=models.CharField(default='0.01', max_length=10),
),
migrations.AlterField(
model_name='transaction',
name='deposit_date',
field=models.CharField(default='NEED TO FORMAT DATE', max_length=10),
),
migrations.AlterField(
model_name='transaction',
name='wallet',
field=models.CharField(default='MISSING WALLET', max_length=42),
),
]
| 30.759259 | 82 | 0.560506 | 163 | 1,661 | 5.570552 | 0.349693 | 0.079295 | 0.176211 | 0.237885 | 0.60793 | 0.577093 | 0.435022 | 0.314978 | 0.229075 | 0 | 0 | 0.049296 | 0.316075 | 1,661 | 53 | 83 | 31.339623 | 0.75 | 0.027694 | 0 | 0.510638 | 1 | 0 | 0.148791 | 0.014259 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021277 | 0 | 0.085106 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bf77938f692240f0c7e1c8436d0758c664f12b9f | 979 | py | Python | swap_user/__init__.py | artinnok/django-swap-user | f2c02b9fc5829651a6dab9c6d053dfe2425e2266 | [
"MIT"
] | null | null | null | swap_user/__init__.py | artinnok/django-swap-user | f2c02b9fc5829651a6dab9c6d053dfe2425e2266 | [
"MIT"
] | null | null | null | swap_user/__init__.py | artinnok/django-swap-user | f2c02b9fc5829651a6dab9c6d053dfe2425e2266 | [
"MIT"
] | null | null | null | """
________ ___ __ ________ ________ ___ ___ ________ _______ ________
|\ ____\|\ \ |\ \|\ __ \|\ __ \ |\ \|\ \|\ ____\|\ ___ \ |\ __ \
\ \ \___|\ \ \ \ \ \ \ \|\ \ \ \|\ \ \ \ \\\ \ \ \___|\ \ __/|\ \ \|\ \
\ \_____ \ \ \ __\ \ \ \ __ \ \ ____\ \ \ \\\ \ \_____ \ \ \_|/_\ \ _ _\
\|____|\ \ \ \|\__\_\ \ \ \ \ \ \ \___| \ \ \\\ \|____|\ \ \ \_|\ \ \ \\ \|
____\_\ \ \____________\ \__\ \__\ \__\ \ \_______\____\_\ \ \_______\ \__\\ _\
|\_________\|____________|\|__|\|__|\|__| \|_______|\_________\|_______|\|__|\|__|
\|_________| \|_________|
"""
__title__ = "Django Swap User"
__version__ = "0.9.8"
__author__ = "Artem Innokentiev"
__license__ = "MIT"
__copyright__ = "Copyright 2022 © Artem Innokentiev"
VERSION = __version__
default_app_config = "swap_user.apps.DjangoSwapUser"
| 44.5 | 95 | 0.427988 | 28 | 979 | 5.571429 | 0.75 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010938 | 0.346272 | 979 | 21 | 96 | 46.619048 | 0.23125 | 0.733401 | 0 | 0 | 0 | 0 | 0.436975 | 0.121849 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bfa0ea90fc49fb1ca4300d442097a204d2777368 | 662 | py | Python | emencia_paste_djangocms_3/django_buildout/project/mods_available/filebrowser/__init__.py | emencia/emencia_paste_djangocms_3 | 29eabbcb17e21996a6e1d99592fc719dc8833b59 | [
"MIT"
] | 1 | 2015-01-17T21:38:50.000Z | 2015-01-17T21:38:50.000Z | emencia_paste_djangocms_3/django_buildout/project/mods_available/filebrowser/__init__.py | emencia/emencia_paste_djangocms_3 | 29eabbcb17e21996a6e1d99592fc719dc8833b59 | [
"MIT"
] | 5 | 2015-01-28T15:51:02.000Z | 2015-03-16T15:51:15.000Z | emencia_paste_djangocms_3/django_buildout/project/mods_available/filebrowser/__init__.py | emencia/emencia_paste_djangocms_3 | 29eabbcb17e21996a6e1d99592fc719dc8833b59 | [
"MIT"
] | null | null | null | """
Add `Django Filebrowser`_ to your project so you can use a centralized interface to manage the uploaded files to be used with other components (`cms`_, `zinnia`_, etc.).
The version used is a special version called *no grappelli* that can be used outside of the *django-grapelli* environment.
Filebrowser manage files with a nice interface to centralize them and also manage image resizing versions (original, small, medium, etc..), you can edit these versions or add new ones in the settings.
.. note::
Don't try to use other resizing app like sorl-thumbnails or easy-thumbnails, they will not work with Image fields managed with Filebrowser.
""" | 66.2 | 200 | 0.765861 | 104 | 662 | 4.846154 | 0.663462 | 0.02381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170695 | 662 | 10 | 201 | 66.2 | 0.918033 | 0.987915 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bfadd469cad5706d4a73cff7e85fb26ba6227888 | 734 | py | Python | inf.py | ultrasound1372/RFXPy | 8f8ba1576739f8ef35a33b10cd2324e24f87fa37 | [
"MIT"
] | null | null | null | inf.py | ultrasound1372/RFXPy | 8f8ba1576739f8ef35a33b10cd2324e24f87fa37 | [
"MIT"
] | null | null | null | inf.py | ultrasound1372/RFXPy | 8f8ba1576739f8ef35a33b10cd2324e24f87fa37 | [
"MIT"
] | null | null | null | #! /usr/bin/env python3
# -*- Coding: UTF-8 -*-
# infinity test case
# The port seems to differ from the original code here
# the original code will madly clip this string
# while this code seems to ignore the infinity and just produces a normal wave
# unsure of which behavior is desireable at this time
# or how to get this to behave like the original
# that version of PureBasic appears to ignore division by 0 and give inf
# but sign is undetermined, I don't have that version to test with
import RFX
g=RFX.SFXRWave()
g.load('AQAAAM3MTL0AAAAAAAAAABSuxz4K1yO+zcwMP/Yo3D4fhWs+KVyPPQAAgD6PwnU+AAAAAI/C9TwAAEC/mplZP8P1KL/NzEw9j8J1Pj0K1757FC4+zczMPs3MTL3Xo3A/DwA=')
f=open('inf.wav','wb')
f.write(g.Create())
f.close()
| 43.176471 | 143 | 0.756131 | 115 | 734 | 4.826087 | 0.730435 | 0.059459 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040584 | 0.160763 | 734 | 16 | 144 | 45.875 | 0.86039 | 0.645777 | 0 | 0 | 0 | 0 | 0.602564 | 0.564103 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bfb93d0cad239247bcffb88a0a11566cfbf6e76a | 407 | py | Python | poop/hfdp/factory/challenge/pacific_calendar.py | cassiobotaro/poop | fc218fbf638c50da8ea98dab7de26ad2a52e83f5 | [
"MIT"
] | 37 | 2020-12-27T00:13:07.000Z | 2022-01-31T19:30:18.000Z | poop/hfdp/factory/challenge/pacific_calendar.py | cassiobotaro/poop | fc218fbf638c50da8ea98dab7de26ad2a52e83f5 | [
"MIT"
] | null | null | null | poop/hfdp/factory/challenge/pacific_calendar.py | cassiobotaro/poop | fc218fbf638c50da8ea98dab7de26ad2a52e83f5 | [
"MIT"
] | 7 | 2020-12-26T22:33:47.000Z | 2021-11-07T01:29:59.000Z | from poop.hfdp.factory.challenge.calendar import Calendar
from poop.hfdp.factory.challenge.zone_factory import ZoneFactory
class PacificCalendar(Calendar):
def __init__(self, zone_factory: ZoneFactory) -> None:
self._zone = zone_factory.create_zone("US/Pacific")
def create_calendar(self, appointments: list[str]) -> None:
print(f"Making the calendar with appts: {appointments}")
| 37 | 64 | 0.7543 | 51 | 407 | 5.823529 | 0.529412 | 0.111111 | 0.080808 | 0.127946 | 0.188552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142506 | 407 | 10 | 65 | 40.7 | 0.851003 | 0 | 0 | 0 | 0 | 0 | 0.137592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
bfc6d536bd00d6d52359bd4df423a1f22afad57a | 183 | py | Python | 01_Language/01_Functions/python/mail.py | cliff363825/TwentyFour | 09df59bd5d275e66463e343647f46027397d1233 | [
"MIT"
] | 3 | 2020-06-28T07:42:51.000Z | 2021-01-15T10:32:11.000Z | 01_Language/01_Functions/python/mail.py | cliff363825/TwentyFour | 09df59bd5d275e66463e343647f46027397d1233 | [
"MIT"
] | 9 | 2021-03-10T22:45:40.000Z | 2022-02-27T06:53:20.000Z | 01_Language/01_Functions/python/mail.py | cliff363825/TwentyFour | 09df59bd5d275e66463e343647f46027397d1233 | [
"MIT"
] | 1 | 2021-01-15T10:51:24.000Z | 2021-01-15T10:51:24.000Z | # coding: utf-8
import smtplib
msg = "Line 1\nLine 2\nLine 3"
server = smtplib.SMTP('localhost')
server.sendmail('sender@example.com', 'caffeinated@example.com', msg)
server.quit()
| 20.333333 | 69 | 0.726776 | 27 | 183 | 4.925926 | 0.740741 | 0.150376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02454 | 0.10929 | 183 | 8 | 70 | 22.875 | 0.791411 | 0.071038 | 0 | 0 | 0 | 0 | 0.428571 | 0.136905 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bfc7e34ff6a0e29781c1e28e5ff06849303db0af | 818 | py | Python | setup.py | swharden/webinspect | 432674b61666d66e5be330b61f9fad0b46dac84e | [
"MIT"
] | 3 | 2019-02-28T18:07:02.000Z | 2020-03-26T06:58:16.000Z | setup.py | swharden/webinspect | 432674b61666d66e5be330b61f9fad0b46dac84e | [
"MIT"
] | null | null | null | setup.py | swharden/webinspect | 432674b61666d66e5be330b61f9fad0b46dac84e | [
"MIT"
] | 2 | 2017-07-02T00:55:17.000Z | 2020-03-26T06:58:17.000Z | from distutils.core import setup
import webinspect
setup(
name='webinspect',
version=webinspect.__version__,
author='Scott W Harden',
author_email='SWHarden@gmail.com',
packages=['webinspect'],
url='https://github.com/swharden/webinspect',
license='MIT License',
description='Inspect python objects in a web browser.',
long_description="""webinspect allows python developers to learn about objects' methods by displaying their properties in a web browser. This is extremely useful when trying to figure out how to use confusing and/or poorly documented classes. Just stick webinspect.launch(someObject) anywhere in your code and a web browser will automatically launch displaying all of the information about the object. See examples on the documentation page.""",
) | 54.533333 | 450 | 0.755501 | 108 | 818 | 5.666667 | 0.703704 | 0.019608 | 0.053922 | 0.042484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168704 | 818 | 15 | 451 | 54.533333 | 0.9 | 0 | 0 | 0 | 0 | 0.076923 | 0.698137 | 0.036025 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.153846 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
44a3b7ef00e06d7c272e92b08ddb7119a66bf685 | 3,205 | py | Python | edb/common/binwrapper.py | aaronbrighton/edgedb | 4aacd1d4e248ae0d483c075ba93fc462da291ef4 | [
"Apache-2.0"
] | 7,302 | 2018-05-10T18:36:31.000Z | 2022-03-31T17:49:36.000Z | edb/common/binwrapper.py | aaronbrighton/edgedb | 4aacd1d4e248ae0d483c075ba93fc462da291ef4 | [
"Apache-2.0"
] | 1,602 | 2018-05-10T17:45:38.000Z | 2022-03-31T23:46:19.000Z | edb/common/binwrapper.py | aaronbrighton/edgedb | 4aacd1d4e248ae0d483c075ba93fc462da291ef4 | [
"Apache-2.0"
] | 236 | 2018-05-13T14:15:29.000Z | 2022-03-29T19:39:19.000Z | #
# This source file is part of the EdgeDB open source project.
#
# Copyright 2019-present MagicStack Inc. and the EdgeDB authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import annotations
import io
import struct
class BinWrapper:
"""A utility binary-reader wrapper over any io.BytesIO object."""
i64 = struct.Struct('!q')
i32 = struct.Struct('!l')
i16 = struct.Struct('!h')
i8 = struct.Struct('!b')
ui64 = struct.Struct('!Q')
ui32 = struct.Struct('!L')
ui16 = struct.Struct('!H')
ui8 = struct.Struct('!B')
def __init__(self, buf: io.BytesIO) -> None:
self.buf = buf
def write_ui64(self, val: int) -> None:
self.buf.write(self.ui64.pack(val))
def write_ui32(self, val: int) -> None:
self.buf.write(self.ui32.pack(val))
def write_ui16(self, val: int) -> None:
self.buf.write(self.ui16.pack(val))
def write_ui8(self, val: int) -> None:
self.buf.write(self.ui8.pack(val))
def write_i64(self, val: int) -> None:
self.buf.write(self.i64.pack(val))
def write_i32(self, val: int) -> None:
self.buf.write(self.i32.pack(val))
def write_i16(self, val: int) -> None:
self.buf.write(self.i16.pack(val))
def write_i8(self, val: int) -> None:
self.buf.write(self.i8.pack(val))
def write_len32_prefixed_bytes(self, val: bytes) -> None:
self.write_ui32(len(val))
self.buf.write(val)
def write_bytes(self, val: bytes) -> None:
self.buf.write(val)
def read_ui64(self) -> int:
data = self.buf.read(8)
return self.ui64.unpack(data)[0]
def read_ui32(self) -> int:
data = self.buf.read(4)
return self.ui32.unpack(data)[0]
def read_ui16(self) -> int:
data = self.buf.read(2)
return self.ui16.unpack(data)[0]
def read_ui8(self) -> int:
data = self.buf.read(1)
return self.ui8.unpack(data)[0]
def read_i64(self) -> int:
data = self.buf.read(8)
return self.i64.unpack(data)[0]
def read_i32(self) -> int:
data = self.buf.read(4)
return self.i32.unpack(data)[0]
def read_i16(self) -> int:
data = self.buf.read(2)
return self.i16.unpack(data)[0]
def read_i8(self) -> int:
data = self.buf.read(1)
return self.i8.unpack(data)[0]
def read_bytes(self, size: int) -> bytes:
data = self.buf.read(size)
if len(data) != size:
raise BufferError(f'cannot read bytes with len={size}')
return data
def read_len32_prefixed_bytes(self) -> bytes:
size = self.read_ui32()
return self.read_bytes(size)
| 28.114035 | 74 | 0.626209 | 480 | 3,205 | 4.108333 | 0.25625 | 0.074544 | 0.055781 | 0.073022 | 0.370183 | 0.280933 | 0.255578 | 0.255578 | 0.133874 | 0 | 0 | 0.04182 | 0.239002 | 3,205 | 113 | 75 | 28.362832 | 0.766708 | 0.219969 | 0 | 0.149254 | 0 | 0 | 0.019774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.313433 | false | 0 | 0.044776 | 0 | 0.641791 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
44a6007d691a4403a60198e26a7de946aa966052 | 376 | py | Python | tests/test_blog.py | Mzazi25/blogposts | b4673737171caa660d7f1296c741870ab07dc693 | [
"Unlicense"
] | null | null | null | tests/test_blog.py | Mzazi25/blogposts | b4673737171caa660d7f1296c741870ab07dc693 | [
"Unlicense"
] | null | null | null | tests/test_blog.py | Mzazi25/blogposts | b4673737171caa660d7f1296c741870ab07dc693 | [
"Unlicense"
] | null | null | null | import unittest
from app .models import Blog
class BlogModelTest(unittest.TestCase):
'''
Test Class to test the behavior of the Blog Model
'''
def setUp(self):
'''
setup method that runs before every test
'''
self.new_blog = Blog(Blog= 'checki man')
def test_blog(self):
self.assertTrue(self.test_blog is not None) | 26.857143 | 53 | 0.632979 | 50 | 376 | 4.7 | 0.6 | 0.068085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276596 | 376 | 14 | 54 | 26.857143 | 0.863971 | 0.239362 | 0 | 0 | 0 | 0 | 0.040323 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
44ae3590b20cfb4480211cdbd4a0c557865bce64 | 176 | py | Python | backend/users/views.py | hearot/pycon | 9a722c4803d8583ae8168066c7b710ce96d659a5 | [
"MIT"
] | null | null | null | backend/users/views.py | hearot/pycon | 9a722c4803d8583ae8168066c7b710ce96d659a5 | [
"MIT"
] | null | null | null | backend/users/views.py | hearot/pycon | 9a722c4803d8583ae8168066c7b710ce96d659a5 | [
"MIT"
] | null | null | null | from django.shortcuts import render
def post_login_view(request):
user = request.user
return render(request, 'users/post-login.html', {
'user': user,
})
| 17.6 | 53 | 0.659091 | 22 | 176 | 5.181818 | 0.636364 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.221591 | 176 | 9 | 54 | 19.555556 | 0.832117 | 0 | 0 | 0 | 0 | 0 | 0.142045 | 0.119318 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
44b3097b60b69d9b29158f0327f88b4045eeb0d5 | 353 | py | Python | spacebar/astro/bodies.py | brandon-sexton/spacebar | f53d6f12341baa7ec4902b65899b848d9a46c1b1 | [
"MIT"
] | null | null | null | spacebar/astro/bodies.py | brandon-sexton/spacebar | f53d6f12341baa7ec4902b65899b848d9a46c1b1 | [
"MIT"
] | null | null | null | spacebar/astro/bodies.py | brandon-sexton/spacebar | f53d6f12341baa7ec4902b65899b848d9a46c1b1 | [
"MIT"
] | null | null | null | class Earth:
"""
Class used to represent Earth. Values are defined using EGM96 geopotential
model. Constant naming convention is intentionally not used since the
values defined here may be updated in the future by introducing other
geopotential models
"""
mu = 3.986004415e5 #km^3/s^2
equatorial_radius = 6378.1363 #km | 39.222222 | 79 | 0.719547 | 49 | 353 | 5.163265 | 0.816327 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084559 | 0.229462 | 353 | 9 | 80 | 39.222222 | 0.845588 | 0.70255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.