# File: signuplogin/views.py (repo: xflows/textflows, license: MIT)
from django.contrib.auth.models import User
from django.contrib.auth import authenticate, login, logout
from django.shortcuts import render, get_object_or_404, redirect
from django.utils import timezone
def signuplogin(request):
    if request.method == 'POST':
        if request.POST.get('login'):
            uname = request.POST['username']
            passw = request.POST['password']
            user = authenticate(username=uname, password=passw)
            if user is not None:
                login(request, user)
                # Expire the session when the browser closes unless
                # "remember me" was checked.
                if not request.POST.get('remember', None):
                    request.session.set_expiry(0)
                return ajaxresponse(request, 'OK')
            else:
                return ajaxresponse(request, 'ERR')
        elif request.POST.get('register'):
            uname = request.POST['rusername']
            passw = request.POST['rpassword']
            rpass = request.POST['repeat_password']
            mail = request.POST['email']
            # The repeated password was read but never compared in the
            # original code; reject mismatches explicitly.
            if passw != rpass:
                return ajaxresponse(request, 'ERR')
            try:
                tuser = User.objects.get(username__exact=uname)
            except User.DoesNotExist:
                tuser = None
            if User.objects.filter(email=mail).count() > 0:
                return ajaxresponse(request, 'MAILTAKEN')
            if tuser is not None:
                return ajaxresponse(request, 'TAKEN')
            else:
                new_user = User.objects.create_user(uname, mail, passw, last_login=timezone.now())
                user = authenticate(username=uname, password=passw)
                login(request, user)
                return ajaxresponse(request, 'OK')
    else:
        return render(request, 'signuplogin/signuplogin.html')
def ajaxresponse(request, txt):
    return render(request, 'signuplogin/ajaxresponse.html', {'response': txt})
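For context, a hypothetical URL wiring for this view might look like the following. The project's actual urls.py is not shown in the source, so the module path, route, and route name here are assumptions for illustration only.

```python
# Hypothetical urls.py entry for the signuplogin view (assumed layout,
# not part of the original repository).
from django.urls import path
from signuplogin import views

urlpatterns = [
    path('signuplogin/', views.signuplogin, name='signuplogin'),
]
```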
# File: calculate_gc.py (repo: kaclark/DHS_intergenic_analysis, license: MIT)
# Calculates length and GC content of DHS sites from fasta files
# DHS_#_intergenic.fa generated from bedtools getfasta using original DHS files filtered for only intergenic regions
# Exports DHS_#_gc.csv
# Exports DHS_#_lengths.csv
from Bio import SeqIO
import csv

# DHS files
files = ['1', '2', '4', '8']
for entry in files:
    length = []
    gc_content = []
    ids = []
    # Load fasta file
    for record in SeqIO.parse("data/mm10_data/DHSs_" + entry + "_intergenic.fa", "fasta"):
        seq = str(record.seq)
        seq_len = len(seq)
        # Count occurrences of g and c in lowercase and capital forms
        gs = seq.count('g')
        cgs = seq.count('G')
        cs = seq.count('c')
        ccs = seq.count('C')
        # Total G/C base count
        total = gs + cs + cgs + ccs
        gc_con = float(total) / seq_len
        gc_content.append(gc_con)
        length.append(seq_len)
        ids.append(record.id)
    with open('data/mm10_data/DHS_' + entry + '_lengths.csv', mode='w') as data:
        data_writer = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        # Add row with DHS id and sequence length
        for x in range(len(gc_content)):
            data_writer.writerow([ids[x], length[x]])
    with open('data/mm10_data/DHS_' + entry + '_gc.csv', mode='w') as data:
        data_writer = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        # Add row with DHS id and GC content
        for x in range(len(gc_content)):
            data_writer.writerow([ids[x], gc_content[x]])
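The GC computation above can be exercised on a small in-memory sequence without Biopython or the fasta files. This standalone sketch mirrors the counting logic; the sample sequence is made up.

```python
# Standalone sketch of the GC calculation above (sample sequence is invented;
# no Biopython or data files required).
def gc_fraction(seq):
    """Fraction of G/C bases in a sequence, counting both cases."""
    total = seq.count('g') + seq.count('G') + seq.count('c') + seq.count('C')
    return float(total) / len(seq)

print(gc_fraction("ATGCgc"))  # 4 of 6 bases are G or C
```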
# File: imdb/migrations/0004_auto_20200318_1539.py (repo: raviteja1766/modern-resume-theme, license: MIT)
# Generated by Django 3.0.4 on 2020-03-18 15:39
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('imdb', '0003_remove_cast_is_debut_movie'),
    ]

    operations = [
        migrations.AlterField(
            model_name='movie',
            name='director',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='imdb.Director'),
        ),
    ]
# File: fundamentals/15-advance-objects-and-data-structures/5-advance-lists.py (repo: davidokun/Python, license: MIT)
# Advanced Lists
my_list = [1, 2, 3]
# Add element
print('\n# Add element\n')
my_list.append(4)
my_list.append(4)
print(my_list)
# Count element's occurrences
print('\n# Count element\'s occurrences\n')
print(f'2 = {my_list.count(2)}')
print(f'4 = {my_list.count(4)}')
print(f'5 = {my_list.count(5)}')
# Extend
print('\n# Extend\n')
x = [1, 2, 3]
x.append([4, 5])
print(f'Use append = {x}')
x = [1, 2, 3]
x.extend([4, 5])
print(f'Use Extend = {x}')
# Index
print('\n# Index\n')
print(f'List = {my_list}')
print(f'Index of 2 = {my_list.index(2)}')
print(f'Index of 4 = {my_list.index(4)}')
# Insert
print('\n# Insert\n')
print(f'List = {my_list}')
my_list.insert(2, 'Inserted')
print(f'After insert in pos 2 = {my_list}')
# File: examples/parse_by_sol.py (repo: sluzhynskyi/nasa, license: MIT)
import json
import urllib.request
import pprint
import webbrowser
URL = "https://mars.jpl.nasa.gov/msl-raw-images/image/images_sol2320.json"
jsonFILE = json.loads(urllib.request.urlopen(URL).read())
#pprint.pprint(jsonFILE)
camera = jsonFILE['images'][0]['cameraModelType']
sol = jsonFILE['images'][0]['sol']
link = jsonFILE['images'][0]['urlList']
print('Camera type: {0}, Sol #{1}'.format(camera, sol))
webbrowser.open(link)
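The field extraction above can be checked against a hand-written payload with the same shape as the NASA response. The camera, sol, and URL values below are invented for illustration, not real mission data.

```python
import json

# Minimal payload shaped like the NASA endpoint's response
# (values here are invented for illustration).
sample = json.loads(
    '{"images": [{"cameraModelType": "NAV_LEFT", "sol": 2320,'
    ' "urlList": "http://example.com/img.jpg"}]}'
)
camera = sample['images'][0]['cameraModelType']
sol = sample['images'][0]['sol']
print('Camera type: {0}, Sol #{1}'.format(camera, sol))  # Camera type: NAV_LEFT, Sol #2320
```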
# File: tcpreq/tcp/options.py (repo: TheJokr/tcpreq, license: MIT)
from typing import TypeVar, Type, Dict, Union, Generator
# Options are immutable
class BaseOption(object):
    """Common base class for all options."""
    __slots__ = ("_raw",)

    def __init__(self, data: bytes) -> None:
        self._raw = data

    def __len__(self) -> int:
        return len(self._raw)

    def __bytes__(self) -> bytes:
        return self._raw

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, BaseOption):
            return NotImplemented
        return self._raw == other._raw

    def __hash__(self) -> int:
        return self._raw.__hash__()

    def hex(self) -> str:
        return self._raw.hex()


class _LegacyOption(BaseOption):
    """Class for the two option kinds without a length octet."""
    __slots__ = ()

    def __init__(self, kind: int) -> None:
        super(_LegacyOption, self).__init__(kind.to_bytes(1, "big"))

    # Contrary to SizedOption, _LegacyOption instances are singletons.
    # Therefore its from_bytes method is to be called directly on the instances.
    def from_bytes(self, data: bytearray) -> "_LegacyOption":
        if len(data) < 1:
            raise ValueError("Data too short")
        del data[0]
        return self


end_of_options = _LegacyOption(0)
noop = _LegacyOption(1)

_T = TypeVar("_T", bound="SizedOption")


class SizedOption(BaseOption):
    """Base class for all option kinds with a length octet."""
    __slots__ = ()

    def __init__(self, kind: int, payload: bytes) -> None:
        opt_head = bytes((kind, 2 + len(payload)))
        super(SizedOption, self).__init__(opt_head + payload)

    @classmethod
    def from_bytes(cls: Type[_T], data: bytearray) -> _T:
        if len(data) < 2:
            raise ValueError("Data too short")
        elif data[1] < 2 or data[1] > len(data):
            raise ValueError("Illegal option length")

        length = data[1]
        res = cls.__new__(cls)  # type: _T
        res._raw = bytes(data[:length])
        del data[:length]
        return res

    @property
    def size(self) -> int:
        return self._raw[1]

    @property
    def payload(self) -> bytes:
        return self._raw[2:]


class MSSOption(SizedOption):
    """SizedOption to negotiate the connection's maximum segment size (MSS)."""
    __slots__ = ()

    def __init__(self, mss: int) -> None:
        super(MSSOption, self).__init__(2, mss.to_bytes(2, "big"))

    @classmethod
    def from_bytes(cls: Type[_T], data: bytearray) -> _T:
        res: _T = super(MSSOption, cls).from_bytes(data)  # type: ignore
        if res.size != 4:
            raise ValueError("Illegal option length")
        return res

    @property
    def mss(self) -> int:
        return int.from_bytes(self._raw[2:4], "big")


# mypy currently doesn't support function attributes
# See https://github.com/python/mypy/issues/2087
_PARSE_KIND_TBL: Dict[int, Union[_LegacyOption, Type[SizedOption]]] = {
    0: end_of_options,
    1: noop,
    2: MSSOption
}


def parse_options(data: bytearray) -> Generator[BaseOption, None, None]:
    """Parse header options based on their kind. Default to SizedOption."""
    while data:
        kind = data[0]
        opt = _PARSE_KIND_TBL.get(kind, SizedOption).from_bytes(data)
        yield opt
        if opt is end_of_options:
            return
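For reference, the wire layout that an MSS option occupies can be written out by hand with plain bytes: one kind octet (2), one length octet (4), and a two-byte big-endian MSS value. This sketch does not import the module above; the MSS value is chosen arbitrarily.

```python
# Hand-rolled TCP MSS option: kind=2, total length=4, MSS=1460 (arbitrary value).
mss = 1460
option = bytes((2, 4)) + mss.to_bytes(2, "big")
print(option.hex())  # 020405b4
```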
# File: inter_graph_2.py (repo: fernandorazon/blmsat, license: MIT)
#Adicionalmente, para poder tratar a nuestra ventana como un objeto con sus atributos como los deseamos
#Se puede declarar una clase heredera de la clase tk con los atributos deseados
from tkinter import *
from tkinter import ttk
from cansat import testLogLineString
from getBalamsatData import aceleracion as ac
#Hago los datos de ac presentables correctamente
acdata = []
for i, line in enumerate(ac,0):
acdata.append(str(ac[0])+' '+str(ac[1])+' 'str(ac[2])+'\n')
#import cansat
class Ventana():
def __init__(self):
root = Tk() #Esta es la instancia de una ventana
#root.geometry('500x500') #Este atributo define el tamaño de la ventana
root.title('Balamsat') #Este atributo define el titulo de la ventana
#Agrego un label con los datos obtenidos de la aceleracion
T = Label(root, text = acdata)
T.pack(side = LEFT)
#ttk.Button(root,text = 'Imprimir', command = root.destroy).pack()
#Agrego una tabla de aceleracion
root.mainloop()
#Se puede definir a la funcion main como la forma de instanciar una ventana personalizada cuando se inicia la aplicacion
def main():
mi_app = Ventana()
return 0
#El atributo __name__ nos da acceso al nombre de un modulo, y permite a python conocer si un modulo es importado o no
if __name__ == '__main__':
main()
#!/usr/bin/env python
# File: example/cbs_example.py (repo: bulletRush/QCloud_yunapi_wrapper, license: Apache-2.0)
import unittest

from qcloudsdk import ZoneId, CbsStorageType, Region, get_region_list, RegionConfig
from config import engine


class CbsTestCase(unittest.TestCase):
    def setUp(self):
        print("\n{0} BEGIN TEST: {1} {2}".format('*' * 20, self._testMethodName, '*' * 20))
        self.engine = engine.with_region(Region.BJ)

    def tearDown(self):
        print("{0} END TEST: {1} {2}\n".format('#' * 20, self._testMethodName, '#' * 20))

    def test_create(self):
        self.engine.debug = False
        for region in get_region_list():  # type: RegionConfig
            self.engine.with_region(region.region)
            for z in region.zone_id_list:
                # Python 2 print statements converted to Python 3 calls
                print("check:", region.region, z)
                print(self.engine.cbs.create_cbs_storages(
                    zoneId=z, goodsNum=1, period=1,
                    storageType=CbsStorageType.SSD, storageSize=50,
                ))

    def xtest_resize(self):
        print(self.engine.cbs.resize_cbs_storage(storageId="disk-o3y0ccya", storageSize=270))

    def xtest_renew(self):
        print(self.engine.cbs.renew_cbs_storage(storageId="disk-4kmc04su", period=1))


if __name__ == '__main__':
    unittest.main()
# File: desafio032.py (repo: marcelocmedeiros/RevisaoPython, license: MIT)
# Marcelo Campos de Medeiros
# ADS UNIFIP
# PYTHON REVIEW
# LESSON 10, CONDITIONALS, GUSTAVO GUANABARA
'''
Write a program that reads any year and shows whether it is a LEAP YEAR.
'''
print('=' * 30)
print('{:#^30}'.format(' ANO BISSEXTO '))
print('=' * 30)
print()

ano = int(input('Informe o ano: '))
print()

# Leap-year condition: ano % 4 == 0 and ano % 100 != 0 or ano % 400 == 0
if ano % 4 == 0 and ano % 100 != 0 or ano % 400 == 0:
    print('O ano %d é BISSEXTO!' % ano)
else:
    print('O ano {} NÃO É BISSEXTO!'.format(ano))
print()
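The same leap-year rule can be packaged as a reusable function, which makes it easy to check several years at once. This is an illustrative addition, not part of the original exercise.

```python
# Reusable version of the leap-year condition used in the exercise above.
def bissexto(ano):
    return ano % 4 == 0 and ano % 100 != 0 or ano % 400 == 0

for ano in (1900, 2000, 2024):
    print(ano, bissexto(ano))  # 1900 False, 2000 True, 2024 True
```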
# File: exams/migrations/0008_populate_exam_run.py (repo: micromasters, license: BSD-3-Clause)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.5 on 2017-04-24 19:10
from __future__ import unicode_literals

from datetime import datetime

from django.db import migrations
import pytz

PILOT_SERIES_CODE = 'PILOT'
PILOT_SCHEDULE_START_DATE = datetime(2017, 3, 6, tzinfo=pytz.UTC)
PILOT_SCHEDULE_END_DATE = datetime(2017, 3, 30, tzinfo=pytz.UTC)
PILOT_ELIGIBILITY_START_DATE = datetime(2017, 3, 18, tzinfo=pytz.UTC)
PILOT_ELIGIBILITY_END_DATE = datetime(2017, 3, 31, tzinfo=pytz.UTC)


def populate_pilot_runs(apps, schema_editor):
    """Generate pilot ExamRuns"""
    ExamRun = apps.get_model('exams', 'ExamRun')
    ExamAuthorization = apps.get_model('exams', 'ExamAuthorization')

    runs_by_course_id = {}
    for exam_auth in ExamAuthorization.objects.filter(exam_run__isnull=True).iterator():
        if exam_auth.course_id not in runs_by_course_id:
            runs_by_course_id[exam_auth.course_id] = ExamRun.objects.create(
                course=exam_auth.course,
                exam_series_code=PILOT_SERIES_CODE,
                date_first_schedulable=PILOT_SCHEDULE_START_DATE,
                date_last_schedulable=PILOT_SCHEDULE_END_DATE,
                date_first_eligible=PILOT_ELIGIBILITY_START_DATE,
                date_last_eligible=PILOT_ELIGIBILITY_END_DATE,
            )
        exam_auth.exam_run = runs_by_course_id[exam_auth.course_id]
        exam_auth.save(update_fields=('exam_run',))


class Migration(migrations.Migration):

    dependencies = [
        ('exams', '0007_add_exam_run'),
    ]

    operations = [
        migrations.RunPython(populate_pilot_runs, migrations.RunPython.noop)
    ]
#!/usr/bin/python3
# File: test_meeus.py (repo: mcoatanhay/meeuscalc, license: MIT)
# -*- coding: utf-8 -*-
# File: test_meeus.py
# Author: Marc COATANHAY
"""
Tests for the meeus module.
"""

# Module imports
try:
    import mes_modules_path
except:
    pass
from meeus import *
import incertitudes.incert as incert
import unittest

# Constant and global variable definitions

# Function and class definitions
class MeeusTest(unittest.TestCase):
    def test_avantjj0(self):
        self.assertFalse(avantjj0(-4712, 1, 1.5))
        self.assertTrue(avantjj0(-4712, 1, 1))
        # self.assertRaises(avantjj0, [])

    def test_coordonnees_moyennes(self):
        afficher_calculs = False
        RA = (incert.i(10) + incert.i(9)/60 + incert.i("32.5")/3600)*15
        DE = incert.i(11) + incert.i(51)/60 + incert.i("33")/3600
        reponses = {'RAf (°)': RA, 'DEf (°)': DE}
        resultat = coordonnees_moyennes(3982, 2022.0)
        if afficher_calculs:
            print()
            print("************************************")
            print("* test_coordonnees_moyennes        *")
            print("************************************")
            for ligne in resultat['calculs']:
                print(ligne)
            print('RAf (h) :', incert.tosexag(resultat['RAf (°)']/15))
            print('DEf (°) :', incert.tosexag(resultat['DEf (°)']))
        del resultat['calculs']
        self.assertEqual(resultat, reponses)

    def test_coordonnees_moyennes_rigoureuses(self):
        afficher_calculs = False
        RA = (incert.i(10) + incert.i(9)/60 + incert.i("32.48")/3600)*15
        DE = incert.i(11) + incert.i(51)/60 + incert.i("31.9")/3600
        reponses = {'RAf (°)': RA, 'DEf (°)': DE}
        resultat = coordonnees_moyennes_rigoureuses(3982, 2022.0)
        if afficher_calculs:
            print()
            print("*****************************************")
            print("* test_coordonnees_moyennes rigoureuses *")
            print("*****************************************")
            for ligne in resultat['calculs']:
                print(ligne)
            print('RAf (h) :', incert.tosexag(resultat['RAf (°)']/15))
            print('DEf (°) :', incert.tosexag(resultat['DEf (°)']))
        del resultat['calculs']
        self.assertEqual(resultat, reponses)

    def test_coordonnees_moyennes2(self):
        afficher_calculs = False
        RA = (incert.i(10) + incert.i(9)/60 + incert.i("32.48")/3600)*15
        DE = incert.i(11) + incert.i(51)/60 + incert.i("31.9")/3600
        reponses = {'RAf (°)': RA, 'DEf (°)': DE}
        resultat = coordonnees_moyennes2(3982, 2022.0)
        if afficher_calculs:
            print()
            print("************************************")
            print("* test_coordonnees_moyennes2       *")
            print("************************************")
            for ligne in resultat['calculs']:
                print(ligne)
            print('RAf (h) :', incert.tosexag(resultat['RAf (°)']/15))
            print('DEf (°) :', incert.tosexag(resultat['DEf (°)']))
        del resultat['calculs']
        self.assertEqual(resultat, reponses)

    def test_coordonnees_moyennes3(self):
        afficher_calculs = False
        RA = (incert.i(10) + incert.i(9)/60 + incert.i("32.47")/3600)*15
        DE = incert.i(11) + incert.i(51)/60 + incert.i("31.9")/3600
        reponses = {'RAf (°)': RA, 'DEf (°)': DE}
        resultat = coordonnees_moyennes3(3982, 2022.0)
        if afficher_calculs:
            print()
            print("*************************************")
            print("* test_coordonnees_moyennes3        *")
            print("*************************************")
            for ligne in resultat['calculs']:
                print(ligne)
            print('RAf (h) :', incert.tosexag(resultat['RAf (°)']/15))
            print('DEf (°) :', incert.tosexag(resultat['DEf (°)']))
        del resultat['calculs']
        # self.assertEqual(resultat, reponses)

    def test_coordonnees_moyennes_comparaisons(self):
        afficher_calculs = False
        resultats = []
        resultats.append(coordonnees_moyennes(3982, 2022.0))
        resultats.append(coordonnees_moyennes_rigoureuses(3982, 2022.0))
        resultats.append(coordonnees_moyennes2(3982, 2022.0))
        resultats.append(coordonnees_moyennes3(3982, 2022.0))
        if afficher_calculs:
            print()
            print("**************************************")
            print("* test_coordonnees_moyennes_synthèse *")
            print("**************************************")
            for resultat in resultats:
                RAfh = incert.tosexag(resultat['RAf (°)']/15)
                print(RAfh.valeur, format((RAfh.incert*3).s, '>10.3f'))
            for resultat in resultats:
                DEfd = incert.tosexag(resultat['DEf (°)'])
                print(DEfd.valeur, format((DEfd.incert*3).s, '>10.3f'))
        for i in range(0, 4):
            del resultats[i]['calculs']
            if i > 0:
                self.assertEqual(resultats[0], resultats[i])

    def test_parametres_precession(self):
        afficher_calculs = False
        reponses = [
            {'m_alpha (s)': 3.071, 'n_alpha (s)': 1.337, 'n_delta (")': 20.06},
            {'m_alpha (s)': 3.073, 'n_alpha (s)': 1.337, 'n_delta (")': 20.05},
            {'m_alpha (s)': 3.075, 'n_alpha (s)': 1.336, 'n_delta (")': 20.04},
            {'m_alpha (s)': 3.077, 'n_alpha (s)': 1.336, 'n_delta (")': 20.03},
            {'m_alpha (s)': 3.079, 'n_alpha (s)': 1.335, 'n_delta (")': 20.03}]
        resultat = []
        if afficher_calculs:
            print()
            print("**************************************")
            print("* test_parametres_precession         *")
            print("**************************************")
        for i in range(-2, 3):
            parametres_i = parametres_precession(incert.i(i))
            resultat.append(parametres_i)
            if afficher_calculs:
                print('----------- siècle :', 2000 + 100*i)
                print('m_alpha (s) :', parametres_i['m_alpha (s)'])
                print('n_alpha (s) :', parametres_i['n_alpha (s)'])
                print('n_delta (") :', parametres_i['n_delta (")'])
        self.assertEqual(reponses, resultat)


if __name__ == "__main__":
    unittest.main()
8f103ba566459ea07e70a5039f59dd82746a1559 | 684 | py | Python | Medium/264. Ugly Number II/solution (1).py | czs108/LeetCode-Solutions | 889f5b6a573769ad077a6283c058ed925d52c9ec | ["MIT"] | stars: 3 (2020-05-09 to 2022-03-11) | forks: 1 (2022-03-11)

# 264. Ugly Number II
# Runtime: 173 ms, faster than 54.94% of Python3 online submissions for Ugly Number II.
# Memory Usage: 14.2 MB, less than 73.79% of Python3 online submissions for Ugly Number II.
class Solution:
    # Three Pointers
    def nthUglyNumber(self, n: int) -> int:
        nums = [1]
        p2, p3, p5 = 0, 0, 0
        for _ in range(1, n):
            n2 = 2 * nums[p2]
            n3 = 3 * nums[p3]
            n5 = 5 * nums[p5]
            nums.append(min(n2, n3, n5))
            if nums[-1] == n2:
                p2 += 1
            if nums[-1] == n3:
                p3 += 1
            if nums[-1] == n5:
                p5 += 1
        return nums[-1]
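The three-pointer recurrence in the solution above can be exercised standalone; the following is a sketch as a plain function (the function name and driver are mine, not part of the submission):

```python
def nth_ugly_number(n: int) -> int:
    """Return the n-th ugly number (a positive integer whose only
    prime factors are 2, 3 and 5); the sequence starts 1, 2, 3, 4, 5, 6, 8..."""
    nums = [1]
    p2, p3, p5 = 0, 0, 0
    for _ in range(1, n):
        n2, n3, n5 = 2 * nums[p2], 3 * nums[p3], 5 * nums[p5]
        nums.append(min(n2, n3, n5))
        # Advance every pointer that produced the minimum, so duplicates
        # (e.g. 6 = 2*3 = 3*2) are appended only once.
        if nums[-1] == n2:
            p2 += 1
        if nums[-1] == n3:
            p3 += 1
        if nums[-1] == n5:
            p5 += 1
    return nums[-1]

print(nth_ugly_number(10))  # → 12
```

Advancing all matching pointers (three separate `if`s rather than `if/elif`) is what keeps the list duplicate-free without an extra set.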
8f12a3e97e343d879a1bf557daf38e1570274bd0 | 22,069 | py | Python | bench/dash.py | mlcgp/bench | c2f2fb31db4c76f98037e3e3e5be952547c00141 | ["MIT"]

import pandas as pd
import dash
import os
import base64
from inflection import humanize
from pathlib import Path
from sqlalchemy import create_engine, select, Table, MetaData
from sqlalchemy.orm import Session
from dash import dcc, html, Input, Output
from dash.exceptions import PreventUpdate
import dash_bootstrap_components as dbc
import plotly.express as px
import plotly.graph_objects as go


class Frame:
    def __init__(self):
        self.host = os.getenv("BENCH_DB_HOST")
        self.port = os.getenv("BENCH_DB_PORT")
        self.user = os.getenv("BENCH_DB_USER")
        self.dbname = os.getenv("BENCH_DB_NAME")
        self.password = os.getenv("BENCH_DB_PASSWORD")

    def engine(self):
        engine = create_engine(
            f"postgresql://{self.user}:{self.password}@{self.host}:{self.port}/{self.dbname}"
        )
        return engine

    def dataframe(self):
        connection = self.engine().connect()
        metadata = MetaData()
        metrics = Table("metrics", metadata, autoload_with=self.engine())
        stock = Table("stock", metadata, autoload_with=self.engine())
        query = select([stock, metrics]).join(
            stock, stock.columns.id == metrics.columns.stock_id
        )
        result = (connection.execute(query)).fetchall()
        raw_df = pd.DataFrame(result)
        raw_df.columns = result[0].keys()
        cols = [humanize(col) for col in list(raw_df.columns)]
        df = pd.DataFrame(result)
        df.columns = cols
        df.drop(
            ["Id", "Last updated", "Id 1", "Stock", "Period key"], axis=1, inplace=True
        )
        return df


class DashApp:
    def __init__(self, data, port=8050):
        self.data = data
        self.port = port

    def run(self, df):
        app = dash.Dash(external_stylesheets=[dbc.themes.ZEPHYR])
        app.title = "Bench"
        logo = str(Path("bench/assets/banner.png").resolve())
        encoded_image = base64.b64encode(open(logo, "rb").read())
        companies = list(df["Company"].unique())
        tickers = list(df["Symbol"].unique())
        company_list = dict(zip(companies, tickers))
        metrics_list = list(df.loc[:, "Gross margin":"Cash conversion cycle"].columns)
        year_list = sorted(list(df["Filing year"].unique()))
        quarter_list = sorted(list(df["Filing quarter"].unique()))

        stock_dropdown = [
            dbc.Row(
                [
                    dbc.Label("Symbol"),
                    dcc.Dropdown(
                        id="ticker-dropdown",
                        options=[
                            {"label": k, "value": company_list[k]} for k in company_list
                        ],
                        multi=True,
                    ),
                ],
                className="mb-3",
            ),
            dbc.Row(
                [
                    dbc.Label("Period"),
                    dcc.Dropdown(
                        id="period-dropdown",
                        options=[
                            {"label": "Annual", "value": "annual"},
                            {"label": "Quarterly", "value": "quarterly"},
                        ],
                    ),
                ],
                className="mb-3",
            ),
        ]

        benchmark = [
            dbc.Row(
                [
                    dbc.Col(
                        [
                            dbc.Row(
                                [
                                    dbc.Col([dbc.Label("Previous")]),
                                ]
                            ),
                            dbc.Row(
                                [
                                    dbc.Col(
                                        [
                                            dbc.Label("Year"),
                                            dcc.Dropdown(
                                                id="previous-year",
                                                options=[
                                                    {"label": val, "value": val}
                                                    for val in year_list
                                                ],
                                                placeholder="Year",
                                                style={
                                                    "font-size": "9px",
                                                    "width": "105%",
                                                },
                                            ),
                                        ],
                                        className="mb-3",
                                    ),
                                    dbc.Col(
                                        [
                                            dbc.Label("Quarter"),
                                            dcc.Dropdown(
                                                id="previous-quarter",
                                                options=[
                                                    {
                                                        "label": val,
                                                        "value": val,
                                                    }
                                                    for val in quarter_list
                                                ],
                                                placeholder="Q",
                                                style={
                                                    "font-size": "9px",
                                                    "width": "40%",
                                                },
                                            ),
                                        ],
                                        className="mb-3",
                                    ),
                                ]
                            ),
                        ]
                    ),
                    dbc.Col(
                        [
                            dbc.Row(
                                [
                                    dbc.Col([dbc.Label("Current")]),
                                ]
                            ),
                            dbc.Row(
                                [
                                    dbc.Col(
                                        [
                                            dbc.Label("Year"),
                                            dcc.Dropdown(
                                                id="current-year",
                                                options=[
                                                    {"label": val, "value": val}
                                                    for val in year_list
                                                ],
                                                placeholder="Year",
                                                style={
                                                    "font-size": "9px",
                                                    "width": "105%",
                                                },
                                            ),
                                        ],
                                        className="mb-3",
                                    ),
                                    dbc.Col(
                                        [
                                            dbc.Label("Quarter"),
                                            dcc.Dropdown(
                                                id="current-quarter",
                                                options=[
                                                    {
                                                        "label": val,
                                                        "value": val,
                                                    }
                                                    for val in quarter_list
                                                ],
                                                placeholder="Q",
                                                style={
                                                    "font-size": "9px",
                                                    "width": "40%",
                                                },
                                            ),
                                        ],
                                        className="mb-3",
                                    ),
                                ]
                            ),
                        ]
                    ),
                ]
            )
        ]

        metrics_dropdown = [
            dbc.Row(
                [
                    dcc.Dropdown(
                        id="metrics-dropdown",
                        options=[{"label": val, "value": val} for val in metrics_list],
                    ),
                ],
                className="mb-3",
            ),
        ]

        app.layout = dbc.Container(
            [
                dbc.Row(
                    [
                        dbc.Row(
                            [
                                dbc.Col(
                                    [
                                        html.Img(
                                            src="data:image/png;base64,{}".format(
                                                encoded_image.decode()
                                            ),
                                            style={
                                                "width": "30%",
                                                "float": "left",
                                            },
                                        )
                                    ],
                                    width=True,
                                )
                            ],
                            align="end",
                        ),
                        html.Hr(),
                        dbc.Col(
                            [
                                dbc.Row(
                                    [
                                        dbc.Col(
                                            width=100,
                                            children=dbc.Card(
                                                [
                                                    dbc.CardHeader("Parameters"),
                                                    dbc.CardBody(stock_dropdown),
                                                ],
                                                style={
                                                    "class": "card border-light mb-3"
                                                },
                                            ),
                                        )
                                    ],
                                    className="mb-3",
                                ),
                                dbc.Row(
                                    [
                                        dbc.Col(
                                            width=100,
                                            children=dbc.Card(
                                                [
                                                    dbc.CardHeader("Benchmark"),
                                                    dbc.CardBody(benchmark),
                                                ],
                                                style={
                                                    "class": "card border-light mb-3"
                                                },
                                            ),
                                        )
                                    ],
                                    className="mb-3",
                                ),
                                dbc.Row(
                                    [
                                        dbc.Col(
                                            width=100,
                                            children=dbc.Card(
                                                [
                                                    dbc.CardHeader("Metrics"),
                                                    dbc.CardBody(metrics_dropdown),
                                                ],
                                                style={
                                                    "class": "card border-light mb-3"
                                                },
                                            ),
                                        )
                                    ],
                                    className="mb-3",
                                ),
                            ],
                            className="col-4",
                        ),
                        dbc.Col(
                            [
                                dbc.Row(
                                    [
                                        html.Div(
                                            id="graph-container",
                                            children=[
                                                dcc.Graph(
                                                    id="bench-scatterplot",
                                                    style={
                                                        "position": "center",
                                                        "width": "100%",
                                                        "height": "100%",
                                                    },
                                                    config={"displayLogo": "False"},
                                                    responsive=True,
                                                )
                                            ],
                                            style={
                                                "class": "mb-3",
                                                "height": "80%",
                                                "position": "center",
                                            },
                                        ),
                                        html.Div(
                                            id="alert-container",
                                            children=[
                                                dbc.Alert(
                                                    "Select values to display the chart.",
                                                    color="warning",
                                                    style={
                                                        "color": "#856404",
                                                        "background-color": "#fff3cd",
                                                        "border-color": "#ffeeba",
                                                    },
                                                )
                                            ],
                                        ),
                                    ],
                                    style={"class": "mb-3", "height": "100%"},
                                )
                            ],
                            className="col-8",
                        ),
                    ],
                )
            ],
            fluid=True,
        )

        @app.callback(
            Output("previous-quarter", "disabled"),
            Output("current-quarter", "disabled"),
            Input("period-dropdown", "value"),
        )
        def disable_quarterly(value):
            if value == "annual":
                return [True, True]
            return [False, False]

        @app.callback(
            Output("graph-container", "style"),
            Output("alert-container", "style"),
            Input("ticker-dropdown", "value"),
            Input("period-dropdown", "value"),
            Input("previous-year", "value"),
            Input("previous-quarter", "value"),
            Input("current-year", "value"),
            Input("current-quarter", "value"),
            Input("metrics-dropdown", "value"),
        )
        def hide_containers(
            ticker_input,
            period_input,
            previous_year_input,
            previous_quarter_input,
            current_year_input,
            current_quarter_input,
            metrics_input,
        ):
            if period_input == "annual":
                if all(
                    [
                        ticker_input,
                        any([previous_year_input, current_year_input]),
                        metrics_input,
                    ]
                ):
                    return {"display": "block"}, {"display": "none"}
                else:
                    return {"display": "none"}, {"display": "block"}
            elif period_input == "quarterly":
                if all(
                    [
                        ticker_input,
                        previous_year_input,
                        previous_quarter_input,
                        current_year_input,
                        current_quarter_input,
                        metrics_input,
                    ]
                ):
                    return {"display": "block"}, {"display": "none"}
                else:
                    return {"display": "none"}, {"display": "block"}
            else:
                return {"display": "none"}, {"display": "block"}

        @app.callback(
            Output("bench-scatterplot", "figure"),
            Input("ticker-dropdown", "value"),
            Input("period-dropdown", "value"),
            Input("previous-year", "value"),
            Input("previous-quarter", "value"),
            Input("current-year", "value"),
            Input("current-quarter", "value"),
            Input("metrics-dropdown", "value"),
        )
        def update_figure(
            tickers,
            period,
            previous_year,
            previous_quarter,
            current_year,
            current_quarter,
            metric,
        ):
            if not all([tickers, metric, period]):
                raise PreventUpdate
            else:
                dff = df[df["Symbol"].isin(tickers)]
                if period == "quarterly":
                    dfff_curr = dff[
                        (dff["Period type"] == period)
                        & (dff["Filing year"] == current_year)
                        & (dff["Filing quarter"] == current_quarter)
                    ]
                    dfff_prev = dff[
                        (dff["Period type"] == period)
                        & (dff["Filing year"] == previous_year)
                        & (dff["Filing quarter"] == previous_quarter)
                    ]
                else:
                    dfff_curr = dff[
                        (dff["Period type"] == period)
                        & (dff["Filing year"] == current_year)
                    ]
                    dfff_prev = dff[
                        (dff["Period type"] == period)
                        & (dff["Filing year"] == previous_year)
                    ]
                metric_filter = f"{metric}"
                colors = [
                    "#323A59",
                    "#4F5B8C",
                    "#778DAA",
                    "#9C7975",
                    "#33121B",
                    "#690B14",
                    "#DA3626",
                    "#E65F25",
                    "#F38D20",
                    "#FDB913",
                ]
                fig = px.scatter(
                    dfff_curr,
                    y=metric_filter,
                    color="Symbol",
                    color_discrete_sequence=colors,
                    hover_data=[
                        "Symbol",
                        "Filing year",
                    ],
                ).update_traces(
                    legendgroup="symbol",
                    legendgrouptitle_text="",
                    hovertemplate="%{customdata[0]}<br>Year: %{customdata[1]}<br>%{y:.1f}",
                )
                fig2 = go.Figure(
                    fig.add_traces(
                        (
                            px.scatter(
                                dfff_prev,
                                y=metric_filter,
                                color="Symbol",
                                color_discrete_sequence=colors,
                                hover_data=[
                                    "Symbol",
                                    "Filing year",
                                ],
                            ).update_traces(
                                marker={"symbol": "circle-open"},
                                hovertemplate="%{customdata[0]}<br>Year: %{customdata[1]}<br>%{y:.1f}",
                            )
                        )._data
                    )
                )
                fig.update_layout(
                    {
                        "plot_bgcolor": "rgba(0,0,0,0)",
                        "paper_bgcolor": "rgba(0,0,0,0)",
                        "xaxis": dict(visible=False),
                    }
                )
                fig.update_traces(marker={"size": 12})
                fig.update_xaxes(
                    showspikes=True,
                    spikethickness=1,
                    spikesnap="cursor",
                    spikemode="across",
                )
                fig.update_yaxes(
                    showspikes=True,
                    spikethickness=1,
                    spikemode="across",
                    spikesnap="cursor",
                )
                return fig

        app.run_server(port=self.port)
8f176f2b9ecbfc54e76e27799f6b813572d7f17c | 555 | py | Python | schema/views.py | leVirve-arxiv/OuO | 9a6a1ef50e6aeef8d0b84d1a1a377e5f19050ac2 | ["MIT"]

from schema.models import Member
from django.shortcuts import render


def save_userdata(backend, user, response, *args, **kwargs):
    if backend.name == 'facebook':
        try:
            profile = Member.objects.get(user_id=user.id)
        except Member.DoesNotExist:
            profile = Member(user_id=user.id)
        profile.uuid = response.get('id')
        profile.name = response.get('name')
        profile.save()


def index(request):
    return render(request, 'index.html')


def login(request):
    return render(request, 'index.html')
8f1cde6246d9ccea2e294e7fd4abb9c033eccaa5 | 352 | py | Python | pymajorme/helpers/constraints.py | danielkupco/PyMaJORME | 0cc0f5d98b32524474cf05919d3fccb3926393b2 | ["MIT"] | stars: 1 | issues: 11
class Constraints(object):
    ''' Contains all of the primary and foreign key constraint
        names for the given entity as tuples of entities and
        relations which are part of constraints '''

    def __init__(self, pk_constraints, fk_constraints):
        self.pk_constraints = pk_constraints
        self.fk_constraints = fk_constraints
8f3522751074b59b6cbb9d1db44c15311f2cfb48 | 3,863 | py | Python | .install/.backup/platform/gcutil/lib/google_compute_engine/gcutil_lib/windows_password_test.py | bopopescu/google-cloud-sdk | b34e6a18f1e89673508166acce816111c3421e4b | ["Apache-2.0"] | forks: 1 (2020-07-24)

# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Unit tests for the windows_password module."""

import path_initializer
path_initializer.InitSysPath()

import re
import unittest

from gcutil_lib import gcutil_errors
from gcutil_lib import mock_get_pass
from gcutil_lib import windows_password


class WindowsPasswordTest(unittest.TestCase):

  def testGetPasswordMismatch(self):
    num_getpass_calls = [0]
    mismatch_round = 2
    strong_password = '!password1'

    def _GetPassMismatch(_):
      num_getpass_calls[0] += 1
      if (num_getpass_calls[0] - 1) // 2 < mismatch_round:
        return 'password0' if num_getpass_calls[0] % 2 == 0 else 'password1'
      else:
        return strong_password

    with mock_get_pass.MockGetPass(_GetPassMismatch):
      password = windows_password.GetPassword('type password')
    self.assertEquals(strong_password, password)
    self.assertEquals((mismatch_round + 1) * 2, num_getpass_calls[0])

    num_getpass_calls = [0]
    mismatch_round = 3
    with mock_get_pass.MockGetPass(_GetPassMismatch):
      self._AssertGetPasswordErrorMessageMatchesPattern(
          'Passwords do not match')

  def testGetPasswordWeakPassword(self):
    weak_round = 2
    num_getpass_calls = [0]
    strong_password = '!password1'

    def _GetPassWeakPassword(_):
      num_getpass_calls[0] += 1
      if (num_getpass_calls[0] - 1) // 2 < weak_round:
        return 'weak_password'
      else:
        return strong_password

    with mock_get_pass.MockGetPass(_GetPassWeakPassword):
      password = windows_password.GetPassword('type password')
    self.assertEquals(strong_password, password)
    self.assertEquals((weak_round + 1) * 2, num_getpass_calls[0])

    num_getpass_calls = [0]
    weak_round = 3
    with mock_get_pass.MockGetPass(_GetPassWeakPassword):
      self._AssertGetPasswordErrorMessageMatchesPattern(
          'must contain at least 3 types of characters')

  def _AssertGetPasswordErrorMessageMatchesPattern(self, error_message_pattern):
    try:
      windows_password.GetPassword('type password')
      self.fail('No exception thrown')
    except gcutil_errors.CommandError as e:
      self.assertFalse(re.search(error_message_pattern, e.message) is None)

  def testValidateStrongPasswordRequirement(self):
    def _AssertErrorMessageMatchesPattern(password, error_message_pattern):
      try:
        windows_password.ValidateStrongPasswordRequirement(
            password)
        self.fail('No exception thrown')
      except gcutil_errors.CommandError as e:
        self.assertFalse(re.search(error_message_pattern, e.message) is None)

    # Password too short
    regexp = r'must be at least \d+ characters long'
    _AssertErrorMessageMatchesPattern('!Ab1234', regexp)

    # Password does not contain enough categories of chars
    regexp = 'must contain at least 3 types of characters'
    _AssertErrorMessageMatchesPattern('a1234567', regexp)
    _AssertErrorMessageMatchesPattern('Aabcdefg', regexp)
    _AssertErrorMessageMatchesPattern('!abcdefg', regexp)
    _AssertErrorMessageMatchesPattern('!1234567', regexp)

    # Password contains 'gceadmin' account name
    regexp = 'cannot contain the user account name'
    _AssertErrorMessageMatchesPattern('Ab1GceAdmin!', regexp)


if __name__ == '__main__':
  unittest.main()
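The password policy these tests exercise (minimum length, at least 3 of 4 character categories, no account name) can be sketched as a standalone checker. This is my approximation of the rule inferred from the test cases above, not gcutil's actual implementation; the function name, the `min_length` default, and the simplified account-name check are assumptions:

```python
import re


def meets_strength_requirement(password, account_name="gceadmin", min_length=8):
    """Hypothetical re-sketch of the Windows password policy tested above:
    minimum length, at least 3 of the 4 character categories (lowercase,
    uppercase, digit, symbol), and no occurrence of the account name."""
    if len(password) < min_length:
        return False
    if account_name.lower() in password.lower():
        return False
    categories = [r'[a-z]', r'[A-Z]', r'[0-9]', r'[^a-zA-Z0-9]']
    # Count how many categories appear at least once in the password.
    return sum(1 for c in categories if re.search(c, password)) >= 3

print(meets_strength_requirement('!password1'))  # → True
```

The real validator raises `CommandError` with a message instead of returning a boolean, which is why the tests match on error-message patterns.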
8f38235907b9eb58f5cbcd6b95c8ecc50d95b91f | 309 | py | Python | chat/urls.py | Rutujakadam0204/video_conferencing | 7c0ea87d16195bc565fc7fd7f389a431a3e47df2 | ["MIT"]

from django.contrib import admin
from django.urls import path
# from django.contrib.auth.decorators import login_required
# from rest_framework.urlpatterns import format_suffix_patterns
from .views import *

urlpatterns = [
    # path('admin/', admin.site.urls),
    path('', main_view, name='main_view'),
]
8f38956818d7d26413d81247226d22c1605a0678 | 6,153 | py | Python | chrome/common/extensions/docs/server2/app_yaml_helper_test.py | iplo/Chain | 8bc8943d66285d5258fffc41bed7c840516c4422 | ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"] | stars: 231 (iplo/Chain) | issues: 1, forks: 268 (j4ckfrost/android_external_chromium_org)

#!/usr/bin/env python
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

import unittest

from app_yaml_helper import AppYamlHelper
from extensions_paths import SERVER2
from host_file_system_provider import HostFileSystemProvider
from mock_file_system import MockFileSystem
from object_store_creator import ObjectStoreCreator
from test_file_system import MoveTo, TestFileSystem
from test_util import DisableLogging

_ExtractVersion, _IsGreater, _GenerateAppYaml = (
    AppYamlHelper.ExtractVersion,
    AppYamlHelper.IsGreater,
    AppYamlHelper.GenerateAppYaml)


class AppYamlHelperTest(unittest.TestCase):
  def testExtractVersion(self):
    def run_test(version):
      self.assertEqual(version, _ExtractVersion(_GenerateAppYaml(version)))

    run_test('0')
    run_test('0-0')
    run_test('0-0-0')
    run_test('1')
    run_test('1-0')
    run_test('1-0-0')
    run_test('1-0-1')
    run_test('1-1-0')
    run_test('1-1-1')
    run_test('2-0-9')
    run_test('2-0-12')
    run_test('2-1')
    run_test('2-1-0')
    run_test('2-11-0')
    run_test('3-1-0')
    run_test('3-1-3')
    run_test('3-12-0')

  def testIsGreater(self):
    def assert_is_greater(lhs, rhs):
      self.assertTrue(_IsGreater(lhs, rhs), '%s is not > %s' % (lhs, rhs))
      self.assertFalse(_IsGreater(rhs, lhs),
                       '%s should not be > %s' % (rhs, lhs))

    assert_is_greater('0-0', '0')
    assert_is_greater('0-0-0', '0')
    assert_is_greater('0-0-0', '0-0')
    assert_is_greater('1', '0')
    assert_is_greater('1', '0-0')
    assert_is_greater('1', '0-0-0')
    assert_is_greater('1-0', '0-0')
    assert_is_greater('1-0-0-0', '0-0-0')
    assert_is_greater('2-0-12', '2-0-9')
    assert_is_greater('2-0-12', '2-0-9-0')
    assert_is_greater('2-0-12-0', '2-0-9')
    assert_is_greater('2-0-12-0', '2-0-9-0')
    assert_is_greater('2-1', '2-0-9')
    assert_is_greater('2-1', '2-0-12')
    assert_is_greater('2-1-0', '2-0-9')
    assert_is_greater('2-1-0', '2-0-12')
    assert_is_greater('3-1-0', '2-1')
    assert_is_greater('3-1-0', '2-1-0')
    assert_is_greater('3-1-0', '2-11-0')
    assert_is_greater('3-1-3', '3-1-0')
    assert_is_greater('3-12-0', '3-1-0')
    assert_is_greater('3-12-0', '3-1-3')
    assert_is_greater('3-12-0', '3-1-3-0')

  @DisableLogging('warning')
  def testInstanceMethods(self):
    test_data = {
      'app.yaml': _GenerateAppYaml('1-0'),
      'app_yaml_helper.py': 'Copyright notice etc'
    }

    updates = []
    # Pass a specific file system at head to the HostFileSystemProvider so that
    # we know it's always going to be backed by a MockFileSystem. The Provider
    # may decide to wrap it in caching etc.
    file_system_at_head = MockFileSystem(
        TestFileSystem(test_data, relative_to=SERVER2))

    def apply_update(update):
      update = MoveTo(SERVER2, update)
      file_system_at_head.Update(update)
      updates.append(update)

    def host_file_system_constructor(branch, revision=None):
      self.assertEqual('trunk', branch)
      self.assertTrue(revision is not None)
      return MockFileSystem.Create(
          TestFileSystem(test_data, relative_to=SERVER2), updates[:revision])

    object_store_creator = ObjectStoreCreator.ForTest()
    host_file_system_provider = HostFileSystemProvider(
        object_store_creator,
        default_trunk_instance=file_system_at_head,
        constructor_for_test=host_file_system_constructor)
    helper = AppYamlHelper(object_store_creator, host_file_system_provider)

    def assert_is_up_to_date(version):
      self.assertTrue(helper.IsUpToDate(version),
                      '%s is not up to date' % version)
      self.assertRaises(ValueError,
                        helper.GetFirstRevisionGreaterThan, version)

    self.assertEqual(0, helper.GetFirstRevisionGreaterThan('0-5-0'))
    assert_is_up_to_date('1-0-0')
    assert_is_up_to_date('1-5-0')

    # Revision 1.
    apply_update({
      'app.yaml': _GenerateAppYaml('1-5-0')
    })
    self.assertEqual(0, helper.GetFirstRevisionGreaterThan('0-5-0'))
    self.assertEqual(1, helper.GetFirstRevisionGreaterThan('1-0-0'))
    assert_is_up_to_date('1-5-0')
    assert_is_up_to_date('2-5-0')

    # Revision 2.
    apply_update({
      'app_yaml_helper.py': 'fixed a bug'
    })
    self.assertEqual(0, helper.GetFirstRevisionGreaterThan('0-5-0'))
    self.assertEqual(1, helper.GetFirstRevisionGreaterThan('1-0-0'))
    assert_is_up_to_date('1-5-0')
    assert_is_up_to_date('2-5-0')

    # Revision 3.
    apply_update({
      'app.yaml': _GenerateAppYaml('1-6-0')
    })
    self.assertEqual(0, helper.GetFirstRevisionGreaterThan('0-5-0'))
    self.assertEqual(1, helper.GetFirstRevisionGreaterThan('1-0-0'))
    self.assertEqual(3, helper.GetFirstRevisionGreaterThan('1-5-0'))
    assert_is_up_to_date('2-5-0')

    # Revision 4.
    apply_update({
      'app.yaml': _GenerateAppYaml('1-8-0')
    })
    # Revision 5.
    apply_update({
      'app.yaml': _GenerateAppYaml('2-0-0')
    })
    # Revision 6.
    apply_update({
      'app.yaml': _GenerateAppYaml('2-2-0')
    })
    # Revision 7.
    apply_update({
      'app.yaml': _GenerateAppYaml('2-4-0')
    })
    # Revision 8.
    apply_update({
      'app.yaml': _GenerateAppYaml('2-6-0')
    })

    self.assertEqual(0, helper.GetFirstRevisionGreaterThan('0-5-0'))
    self.assertEqual(1, helper.GetFirstRevisionGreaterThan('1-0-0'))
    self.assertEqual(3, helper.GetFirstRevisionGreaterThan('1-5-0'))
    self.assertEqual(5, helper.GetFirstRevisionGreaterThan('1-8-0'))
    self.assertEqual(6, helper.GetFirstRevisionGreaterThan('2-0-0'))
    self.assertEqual(6, helper.GetFirstRevisionGreaterThan('2-1-0'))
    self.assertEqual(7, helper.GetFirstRevisionGreaterThan('2-2-0'))
    self.assertEqual(7, helper.GetFirstRevisionGreaterThan('2-3-0'))
    self.assertEqual(8, helper.GetFirstRevisionGreaterThan('2-4-0'))
    self.assertEqual(8, helper.GetFirstRevisionGreaterThan('2-5-0'))
    assert_is_up_to_date('2-6-0')
    assert_is_up_to_date('2-7-0')


if __name__ == '__main__':
  unittest.main()
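The ordering that the `testIsGreater` cases pin down can be reproduced with a small standalone comparator. This is a sketch consistent with those assertions, not the actual `AppYamlHelper.IsGreater` code; the key observation is that a missing component compares lower than zero, so `'0-0' > '0'`:

```python
def is_greater(lhs, rhs):
    """Compare dashed app.yaml version strings, e.g. '2-0-12' > '2-0-9'.

    Sketch of the semantics exercised by testIsGreater above (not the
    real implementation): pad the shorter version with -1 so that any
    explicit trailing component, even 0, outranks an absent one.
    """
    lp = [int(part) for part in lhs.split('-')]
    rp = [int(part) for part in rhs.split('-')]
    width = max(len(lp), len(rp))
    lp += [-1] * (width - len(lp))
    rp += [-1] * (width - len(rp))
    # Python compares lists lexicographically, element by element.
    return lp > rp

print(is_greater('2-0-12', '2-0-9'))  # → True
```

Padding with `-1` rather than `0` is what makes `'0-0'` strictly greater than `'0'` while keeping ordinary numeric ordering for equal-length versions.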
8f4c733b84d84a0351bf8d9d1333251f23372afb | 2,507 | py | Python | test_extra_credit.py | robertavram/tournament | ec77d898006f15aa2c1bd1836cfc7655c6587b01 | ["MIT"]

from tournament import *
from random import randint
from math import log, ceil


def setupTournament(tname):
    createTournament(tname)
    return


def registerPlayers(tournament="Default Tournament", tplayers=10):
    """Registers a list of randomly generated names. """
    # Potential first names:
    pFN = ["Eden ", "Sharilyn ", "Merissa ", "Dalia ",
           "Ema ", "Dede ", "Raul ", "Genaro ", "Raelene ", "Merrie "]
    # Potential last names:
    pLN = ["Alcantar", "Armstrong", "Brafford",
           "Loughran", "Stager", "Saia", "Lyall",
           "Tuma", "Mccarville", "Sollars"]
    for i in range(tplayers):
        name = pFN[randint(0, 9)] + pLN[randint(0, 9)]
        registerPlayer(name, tournament)
    return


def gameon(lpp, tournament):
    """Play the games with a random winner or loser"""
    print "Pairs for this round: \n {} \n ".format(lpp)
    for pair in lpp:
        # Random winner and loser
        if not pair[1]:
            # Making sure that if the pair contains a bye we don't put the bye
            # as winner.
            reportMatch(pair[0], pair[1], tournament)
            continue
        if randint(0, 1):
            reportMatch(pair[0], pair[1], tournament, True)
        else:
            r_win = randint(0, 1)
            r_loser = (r_win + 1) % 2
            reportMatch(pair[r_win], pair[r_loser], tournament)


def testReportMatches(tournament="Default Tournament", tot_players=4):
    """ Registers players in a tournament,
    figures out the minimum number of rounds to be played in order to find a winner,
    makes the pairings based on swissSupPairings,
    plays the games using gameon,
    prints the final results"""
    deleteMatches(tournament)
    deletePlayers(tournament)
    # Register n players
    registerPlayers(tournament, tot_players)
    rounds = int(ceil(log(tot_players, 2)))
    print "PLAYING {} ROUNDS".format(rounds)
    for i in range(rounds):
        pairings = swissSupPairings(tournament)
        gameon(pairings, tournament)
    standings = playerSupStandings(tournament)
    print "\n FINAL RESULT FOR '{0}'\n".format(tournament)
    for row in standings:
        print row


if __name__ == '__main__':
    tournaments = [["World Cup", 10], ["Private Tournament", 7]]
    for t in tournaments:
        print "\n\n TESTING TOURNAMENT '{}' WITH {} PLAYERS".format(t[0], t[1])
        setupTournament(t[0])
        testReportMatches(t[0], t[1])
    print "Success! All tests pass!"
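The round count in `testReportMatches` is `ceil(log2(n))`: the minimum number of Swiss rounds needed to separate a single leader from `n` players. A standalone check of that formula (independent of the tournament module; the helper name is mine):

```python
from math import log, ceil


def min_rounds(tot_players):
    # Same expression as testReportMatches uses: ceil(log2(n)) rounds
    # suffice because each round can halve the number of undefeated players.
    return int(ceil(log(tot_players, 2)))

print(min_rounds(10))  # → 4
```

So the "World Cup" tournament with 10 players plays 4 rounds and the 7-player "Private Tournament" plays 3.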
8f4d8e9c469beff91ba244881b1a589e7e632122 | 12,932 | py | Python | python/prizes.py | camfindlay/govhackaustralia.github.io | 3e4c1eb7d04a922ba7e6853ee5411af1c5d0b193 | ["CC-BY-4.0"] | stars: 1 | forks: 1

import fnmatch
import codecs
import tablib
import os
import requests
import urlparse
import shutil
import frontmatter
import yaml
import io
class PrizeSpreadsheets(object):
"""
"""
def __init__(self, dir_path):
self.dir_path = dir_path
def get_spreadsheets(self):
sheets = []
for file in os.listdir(self.dir_path):
if file.endswith(".xlsx") and file.startswith("~") == False:
new_filename = "%s.xlsx" % (self.get_region(file))
os.rename(os.path.join(self.dir_path, file), os.path.join(self.dir_path, new_filename))
# print new_filename
sheets.append({
"filename": new_filename,
"region": self.get_region(file),
"sheet": PrizeSpreadsheet(os.path.join(self.dir_path, new_filename))
})
return sheets
def get_region(self, file):
regions = ["AUSTRALIA", "ACT", "NSW", "NT", "QLD", "SA", "TAS", "VIC", "WA", "NZ"]
if " " not in file:
region = os.path.splitext(file)[0]
else:
region = file.split(" ")[0].upper()
if region not in regions:
raise ValueError("Region '%s' is not valid." % (region))
if region == "NATIONAL":
return "AUSTRALIA"
return region
class PrizeSpreadsheet(object):
"""
"""
def __init__(self, file_path):
self.file_path = file_path
self.read_file()
def read_file(self):
from openpyxl import load_workbook
# print "file_path: %s" % (self.file_path)
self.wb = load_workbook(self.file_path)
def get_prize_sheet(self, index=0):
# print self.wb.get_sheet_names()
self.ws = self.wb.get_sheet_by_name(self.wb.get_sheet_names()[index])
return self.ws
def get_headers(self):
headers = []
for row in self.ws.iter_rows("A1:Z1"):
for cell in row:
				if cell.value is None:
break
else:
col_name = "".join(cell.value.split()).lower()
if "eligibilitycriteria" in col_name or "eligiblitycriteria" in col_name:
col_name = "eligibilitycriteria"
headers.append(col_name)
return headers
def is_row_empty(self, row):
for cell in row:
if cell.value is not None:
return False
return True
def sheet_to_dict(self, ws):
rows = []
headers = self.get_headers()
print headers
# print
for row in ws.iter_rows(row_offset=1):
# print row
# Consider a single empty row to be the end of the available data
			if self.is_row_empty(row):
# print "Row is empty"
break
tmp = {}
for idx, cell in enumerate(row):
# print "%s: %s" % (idx, cell.value)
if idx >= len(headers):
break
else:
if type(cell.value) is unicode:
tmp[headers[idx]] = cell.value.strip()
else:
tmp[headers[idx]] = cell.value
rows.append(tmp)
# print
return rows
# Config
datadir = "python/data/prizes"
prizesdir = "_prizes/2016/"
organisationsdir = "_organisations/"
eventsdir = "_locations/"
# tmpdir = "python/tmp/"
# Init
# For seeing if the mentor's organisations exists
organisation_names = []
for root, dirnames, filenames in os.walk(organisationsdir):
for filename in fnmatch.filter(filenames, "*.md"):
organisation_names.append(filename.split(".")[0])
# For matching mentors to events
event_names = []
event_md_files = {}
for root, dirnames, filenames in os.walk(eventsdir):
for filename in fnmatch.filter(filenames, "*.md"):
event_name = filename.split(".")[0]
event_names.append(event_name)
event_md_files[event_name] = os.path.join(root, filename)
# Ingest available prize sheets for UPSERTing
gids = [] # GIDs used so far for sanity checking
validation_errors = []
sheets = PrizeSpreadsheets(datadir).get_spreadsheets()
for file in sheets:
print "Spreadsheet: %s" % (file["filename"])
print "Region: %s" % (file["region"])
ws = file["sheet"].get_prize_sheet(0)
rows = file["sheet"].sheet_to_dict(ws)
print "Prize Count: %s" % (len(rows))
print
print "Processing prizes..."
print
for idx, row in enumerate(rows):
# Assign our prize a globally unique id
# if "prizename" not in row:
# print row
# exit()
if row["prizename"] is None:
# @TODO
errmsg = "%s: Prize #'%s' has no name" % (file["region"].lower(), idx)
# raise ValueError(errmsg)
validation_errors.append(errmsg)
print errmsg
continue
else:
row["prizename"] = row["prizename"].replace(u'\u2013', "-").encode("utf-8")
gid = file["region"].lower() + "-" + row["prizename"].lower().strip().replace("/", " or ").replace(" ", "-").replace("'", "")
if gid in gids: # Hacky
gid = gid + "-2"
if gid in gids:
raise ValueError("GID '%s' is already in use." % (gid))
else:
gids.append(gid)
print row["prizename"]
print gid
print
if row["prizetype"] is None:
errmsg = "%s: Prize '%s' has no type set." % (file["region"].lower(), row["prizename"])
# raise ValueError(errmsg)
validation_errors.append(errmsg)
print errmsg
continue
jurisdiction = file["region"].lower()
if row["prizetype"].lower() == "international":
jurisdiction = "international"
prize = {
"name": row["prizename"],
"title": row["prizename"],
"gid": gid,
"jurisdiction": jurisdiction,
"type": row["prizetype"].title()
}
# Attach non-national prizes to their events
if file["region"] != "AUSTRALIA":
if row["prizelevelregionwideoreventspecific"] == "Event only":
if "eventspecificlocation" not in row:
raise ValueError("Event-only prize nominated without any accompanying event specified.")
if row["eventspecificlocation"] is None:
# @TODO
errmsg = "%s: Prize '%s' is an Event prize, but no event locations provided." % (file["region"].lower(), row["prizename"])
# raise ValueError(errmsg)
validation_errors.append(errmsg)
print errmsg
continue
event_gid = row["eventspecificlocation"].replace(" ", "-").replace(",", "").lower()
event_gid_original = event_gid
# Hacky fix
if event_gid == "mount-gambier":
event_gid = "mount-gambier-youth"
elif event_gid == "all-brisbane-events":
event_gid = "brisbane"
				if event_gid not in event_names:
					errmsg = "%s: Event GID '%s' does not exist." % (file["region"].lower(), event_gid)
					# record the error, then fail hard
					validation_errors.append(errmsg)
					raise ValueError(errmsg)
else:
print "For Event: %s" % (event_gid)
post = frontmatter.load(event_md_files[event_gid])
prize["category"] = "local"
prize["events"] = [post.metadata["gid"]]
# Hacky fix
if event_gid_original == "all-brisbane-events":
prize["events"].append("brisbane-youth")
prize["events"].append("brisbane-maker")
if post.metadata["gid"] != event_gid:
print "WARNING: Event .md file does not match event gid. %s, %s" % (event_gid, post.metadata["gid"])
else:
prize["category"] = "state"
else:
prize["category"] = "australia"
# Attach sponsoring organisations
prize["organisation_title"] = row["sponsoredby"].strip().replace(" Prize", "")
organisation_gid = row["sponsoredby"].lower().replace(" ", "-").replace(",", "").strip()
if organisation_gid in organisation_names:
prize["organisation"] = organisation_gid
else:
# print "WARNING: Could not resolve organisation: %s (%s)" % (row["sponsoredby"], organisation_gid)
pass
# If a prize already exists, merge the latest info over the top
prize_md_dir = os.path.join(prizesdir, file["region"].lower())
prize_md_file = os.path.join(prize_md_dir, "%s.md" % (gid))
if os.path.exists(prize_md_file):
# print "NOTICE: Found an existing prize. Merging new data."
existing_prize = frontmatter.load(prize_md_file)
existing_prize.metadata.update(prize)
prize = existing_prize.metadata
# Convert prize $$$ value to an integer
estimatedprizevalue = ""
if type(row["estimateprizevalue$"]) is unicode and row["estimateprizevalue$"].strip().replace("$", "").isdigit():
estimatedprizevalue = int(row["estimateprizevalue$"].strip().replace("$", ""))
elif type(row["estimateprizevalue$"]) is float or type(row["estimateprizevalue$"]) is long:
estimatedprizevalue = int(row["estimateprizevalue$"])
elif row["estimateprizevalue$"] is not None:
estimatedprizevalue = row["estimateprizevalue$"].strip()
# Fixing up minor stuff
if row["prizecategorydescription"] is None:
row["prizecategorydescription"] = unicode("")
if row["prizereward"] is None:
row["prizereward"] = unicode("")
if row["eligibilitycriteria"] is None:
row["eligibilitycriteria"] = unicode("")
# print prize
# print row
print
print "---"
print
# continue
if not os.path.exists(prize_md_dir):
os.makedirs(prize_md_dir)
with io.open(prize_md_file, "w", encoding="utf-8") as f:
f.write(u'---\n')
f.write(unicode(yaml.safe_dump(prize, width=200, default_flow_style=False, encoding="utf-8", allow_unicode=True), "utf-8"))
f.write(u'---\n')
f.write(u'\n')
f.write(unicode(row["prizecategorydescription"].replace("|", "\n").rstrip()))
f.write(u'\n\n')
f.write(u'# Prize\n')
f.write(unicode(str(row["prizereward"]).replace("|", "\n").replace(".0", "").rstrip()))
f.write(u'\n\n')
f.write(u'# Eligibility Criteria\n')
f.write(unicode(row["eligibilitycriteria"].replace("|", "\n").rstrip()))
# print "\n"
print "############################################################"
# print "\n\n"
if len(validation_errors) > 0:
for i in validation_errors:
print i
# ---
# name: Prize 1
# id: prize_1
# photo_url: https://static.pexels.com/photos/3084/person-woman-park-music-large.jpg
# jurisdiction: australia
# type: Prize
# organisations:
# - organisation_1
# themes:
# - theme_1
# - theme_3
# datasets:
# - dataset_1
# - dataset_3
# dataportals:
# - dataportal_1
# ---
# Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam at ornare risus, at dignissim sapien. Sed eget est mi. Ut lacinia ornare tellus commodo sagittis. Integer euismod eleifend velit, eget dictum leo sagittis at.
# # Prize Details
# Phasellus rutrum euismod turpis elementum ornare. Donec ut risus id ante gravida molestie. Integer cursus tempus porta. Sed vitae nunc quis nibh dapibus aliquet vel sed dolor. Donec id risus ut ipsum fermentum cursus quis sed massa. Nulla sit amet blandit orci, dapibus condimentum augue. Fusce suscipit purus et ultricies fermentum.
# # Requirements
# Mauris at est urna. Aenean ut elit venenatis augue dictum viverra:
# - **Nulla facilisi.** Donec vel justo odio. Vivamus consequat hendrerit arcu vel vestibulum. Proin malesuada mauris vitae nulla iaculis fringilla.
# - **Proin tempor tempus ipsum id bibendum.** Duis vehicula nisi vel bibendum lacinia.
# - **Suspendisse libero dui**, hendrerit vitae eleifend sed, cursus ut tellus. Vivamus tristique, lectus in ullamcorper interdum, orci nisi vestibulum nisi, ac luctus est mi quis justo.
# - **Phasellus tempor laoreet felis a porta.** Aenean in sodales odio. Curabitur interdum bibendum orci, vitae hendrerit eros tempus at. | 37.484058 | 335 | 0.565496 | 1,432 | 12,932 | 5.00838 | 0.266061 | 0.017847 | 0.01464 | 0.005577 | 0.133714 | 0.110429 | 0.097323 | 0.059398 | 0.050474 | 0.035973 | 0 | 0.004354 | 0.3073 | 12,932 | 345 | 336 | 37.484058 | 0.796271 | 0.197108 | 0 | 0.172727 | 0 | 0 | 0.164788 | 0.022347 | 0 | 0 | 0 | 0.002899 | 0 | 0 | null | null | 0.004545 | 0.05 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
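The region and gid derivation in prizes.py above is pure string manipulation, so it can be exercised outside the script. A minimal Python 3 sketch (hypothetical helper names; it reimplements `get_region` and the gid slug line, with the `NATIONAL` mapping checked before validation so it is reachable):

```python
import os


def get_region(filename, regions=("AUSTRALIA", "ACT", "NSW", "NT", "QLD",
                                  "SA", "TAS", "VIC", "WA", "NZ")):
    """Derive the region code from a spreadsheet filename."""
    if " " not in filename:
        region = os.path.splitext(filename)[0]
    else:
        region = filename.split(" ")[0].upper()
    # map NATIONAL first, otherwise validation would reject it
    if region == "NATIONAL":
        return "AUSTRALIA"
    if region not in regions:
        raise ValueError("Region '%s' is not valid." % region)
    return region


def prize_gid(region, prize_name):
    """Build a globally unique prize id using the same slug rules as above."""
    slug = (prize_name.lower().strip()
            .replace("/", " or ").replace(" ", "-").replace("'", ""))
    return region.lower() + "-" + slug


print(get_region("NSW prizes.xlsx"))                  # NSW
print(prize_gid("NSW", "Best Hacker's/Team Prize"))   # nsw-best-hackers-or-team-prize
```

The same inputs fed through the script would yield the same gid; only the collision suffix (`-2`) is omitted here.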
8f55227cf0237dd09fda9cb47b6ed01106b23c08 | 734 | py | Python | shop/migrations/0004_auto_20191223_1511.py | majestylink/majestyAccencis | 41bdde6f9982980609f93a8b44bcaf06cc5f6ea6 | [
"MIT"
] | null | null | null | shop/migrations/0004_auto_20191223_1511.py | majestylink/majestyAccencis | 41bdde6f9982980609f93a8b44bcaf06cc5f6ea6 | [
"MIT"
] | null | null | null | shop/migrations/0004_auto_20191223_1511.py | majestylink/majestyAccencis | 41bdde6f9982980609f93a8b44bcaf06cc5f6ea6 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.7 on 2019-12-23 14:11
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('shop', '0003_auto_20191223_1507'),
]
operations = [
migrations.AddField(
model_name='item',
name='label_type',
field=models.CharField(choices=[('P', 'primary'), ('S', 'secondary'), ('D', 'danger')], default='P', max_length=1, verbose_name='Product Label Type'),
preserve_default=False,
),
migrations.AlterField(
model_name='item',
name='label',
field=models.CharField(blank=True, max_length=10, null=True, verbose_name='Product Label'),
),
]
| 29.36 | 162 | 0.588556 | 81 | 734 | 5.197531 | 0.654321 | 0.042755 | 0.061758 | 0.08076 | 0.104513 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06308 | 0.265668 | 734 | 24 | 163 | 30.583333 | 0.717996 | 0.061308 | 0 | 0.222222 | 1 | 0 | 0.15575 | 0.033479 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
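The `choices` list in the migration above maps stored one-letter codes to display labels; at runtime Django resolves them with `get_label_type_display()`, which reduces to a plain dict lookup. A stdlib-only sketch of that behaviour (not Django itself):

```python
LABEL_TYPE_CHOICES = [("P", "primary"), ("S", "secondary"), ("D", "danger")]


def label_type_display(code, choices=LABEL_TYPE_CHOICES):
    """Return the human-readable label for a stored code; unknown codes pass through."""
    return dict(choices).get(code, code)


print(label_type_display("P"))  # primary
```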
8f55a41e038080f5e759107ce256226edf627325 | 356 | py | Python | train/demo.py | Ethan16/python_misc | 29cf2fdbd7529a05bcf35768e0244e634fe2ae7a | [
"Apache-2.0"
] | 1 | 2019-05-04T09:26:29.000Z | 2019-05-04T09:26:29.000Z | train/demo.py | Ethan16/python_misc | 29cf2fdbd7529a05bcf35768e0244e634fe2ae7a | [
"Apache-2.0"
] | null | null | null | train/demo.py | Ethan16/python_misc | 29cf2fdbd7529a05bcf35768e0244e634fe2ae7a | [
"Apache-2.0"
] | null | null | null | class Demo(object):
def __init__(self,a=None,*args,**kwargs):
self.a=a
#import pdb;pdb.set_trace()
def get_a(self):
return self.a
@property
def b(self):
b="b2b"
return b
if __name__ == '__main__':
demo=Demo('int a','int b','int_c',d='init_d')
print demo.get_a()
print demo.b | 23.733333 | 50 | 0.539326 | 54 | 356 | 3.240741 | 0.481481 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004049 | 0.30618 | 356 | 15 | 51 | 23.733333 | 0.704453 | 0.073034 | 0 | 0 | 0 | 0 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
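demo.py above is Python 2 (`print` statements, the intermediate `b` string); the point it demonstrates — a method must be called with `()` while a `@property` is accessed without — looks like this in Python 3 (a sketch, not part of the original repo):

```python
class Demo:
    def __init__(self, a=None, *args, **kwargs):
        # extra positional/keyword arguments are accepted and ignored
        self.a = a

    def get_a(self):
        # plain method: must be invoked as demo.get_a()
        return self.a

    @property
    def b(self):
        # property: accessed as demo.b, no parentheses
        return "b2b"


demo = Demo("int a", "int b", "int_c", d="init_d")
print(demo.get_a())  # int a
print(demo.b)        # b2b
```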
8f62c6cee92460f2bc1d289c73585eb313eeb447 | 5,190 | py | Python | knowledge/SSHExec.py | procter-gamble-tech/pentest-report | f9e9d8f5dc76316fd5a1fe43956d60d1efd1aacc | [
"MIT"
] | 2 | 2021-02-18T13:46:06.000Z | 2021-04-30T05:11:19.000Z | knowledge/SSHExec.py | procter-gamble-tech/pentest-report | f9e9d8f5dc76316fd5a1fe43956d60d1efd1aacc | [
"MIT"
] | null | null | null | knowledge/SSHExec.py | procter-gamble-tech/pentest-report | f9e9d8f5dc76316fd5a1fe43956d60d1efd1aacc | [
"MIT"
] | 1 | 2021-02-18T13:45:37.000Z | 2021-02-18T13:45:37.000Z | #!/usr/bin/python3
from StateAction import StateAction
import paramiko
import socket
import argparse
class SSHExec(StateAction):
""" Runs an ssh command on all user/hosts """
def __init__(self, cmd_list, cmd_name, mode='foreach_host', users=None):
""" cmd list - these commands will be run over SSH """
""" cmd_name - under this alias the joint result of all commands will be stored in db (e.g. "shadow search"). You'll use it for search later. Two command lists of identical name will not be repeated."""
""" mode - 'foreach_userhost': run once per host, 'foreach_host': run once per user/host """
        if mode not in ('foreach_userhost', 'foreach_host'):
            raise ValueError("mode must be 'foreach_userhost' or 'foreach_host', got %r" % (mode,))
if(mode == 'foreach_host'):
super().__init__('ssh_exec', ['host', 'port', 'username', 'password', 'cmd_name'], ['host', 'cmd_name'])
if(mode == 'foreach_userhost'):
super().__init__('ssh_exec', ['host', 'port', 'username', 'password', 'cmd_name'], ['host', 'username', 'cmd_name'])
self.cmd_list = cmd_list
self.cmd_name = cmd_name
self.users = users
def emit(self):
userhosts = list(self.utils.knowledge.find({'type': 'state_action', 'subtype': 'result', 'result.status': 1, 'pentest_id': self.utils.pentest_id, 'name': 'ssh_scanner'}))
userhosts = self.strip_list(userhosts,['input.host', 'input.port', 'input.username', 'input.password'])
userhostcommand = [{**uh, **{'cmd_list':self.cmd_list, 'cmd_name': self.cmd_name}} for uh in userhosts if (self.users==None or uh['username'] in self.users)]
# print(str(userhostcommand))
return userhostcommand
def work(self, params):
target_ssh=paramiko.SSHClient()
# tell the ssh client to automatically add any missing keys
target_ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
# connect to the target box via ssh
print("Connecting to {} as {}/{}".format(params['host'], params['username'], params['password']))
target_ssh.connect(params['host'],username=params['username'],password=params['password'])
# chan = target_ssh.get_transport().open_session()
commands = params['cmd_list']
results = []
#if no errors, execute commands on the target
for command in commands:
try:
# if target_ssh.get_transport().is_active():
# NOTE THE TIMEOUT !!!
stdin, stdout, stderr = target_ssh.exec_command(command, timeout=60.0)
print("Executing `{}`".format( command ))
stdoutstr = stdout.read().strip().decode()
stderrstr = stderr.read().strip().decode()
print("Result: ")
print(stdoutstr)
print(stderrstr)
results.append({'cmd': command, 'stdout': stdoutstr, 'stderr': stderrstr, 'exit_status': stdout.channel.recv_exit_status()})
except (Exception) as g:
print(g)
                    return {'status':2, 'exception': str(g)}
# Close the target ssh session - this would be faster if you looped before closing
target_ssh.close()
last_exit_status = stdout.channel.recv_exit_status()
return {'status': 1 if last_exit_status == 0 else 0, 'cmd_results': results, 'last_exit_status': last_exit_status}
except (paramiko.BadHostKeyException, paramiko.AuthenticationException, paramiko.SSHException, socket.error) as e:
print("Failed: {}".format(str(e)))
            return {'status':0, 'error': str(e)}
parser = argparse.ArgumentParser(description='Executes ssh commands against all known hosts / credentials')
parser.add_argument('-m', '--mode', dest='mode', type=str, default='foreach_host', choices=['foreach_host', 'foreach_userhost'], help="determines if you are interested in successful execution by any user or all users. E.g. 'id' is interesting for every user (foreach_userhost). 'uname -a' for any user (foreach_host). Default: foreach_host.")
parser.add_argument('-U', '--user', type=str, nargs='*', help='Connect only as a given username. Can provide many usernames (e.g. -U john tom alice)' )
parser.add_argument('name', type=str, help='command label. Used to check if identical command was not executed before' )
parser.add_argument('cmd', type=str, nargs='+', help='list of shell commands to be executed' )
parser.add_argument('-r', '--retry', action='store_true', default=False, help='Rerun all, regardles of prior run or prior success.' )
parser.add_argument('-rr', '--remove-and-retry', action='store_true', default=False, help='--retry, with additional removal of successful results from prior runs' )
args = parser.parse_args()
# print(args)
ssh = SSHExec(args.cmd, args.name, users=args.user)
ssh.run(retry=(args.retry or args.remove_and_retry) , retry_successful=(args.retry or args.remove_and_retry), remove_previous=args.remove_and_retry)
| 55.806452 | 342 | 0.634489 | 648 | 5,190 | 4.935185 | 0.337963 | 0.021889 | 0.031895 | 0.013133 | 0.106942 | 0.095685 | 0.095685 | 0.031895 | 0.031895 | 0.031895 | 0 | 0.002503 | 0.230058 | 5,190 | 92 | 343 | 56.413043 | 0.797798 | 0.091329 | 0 | 0.067797 | 0 | 0.033898 | 0.274545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050847 | false | 0.118644 | 0.067797 | 0 | 0.20339 | 0.118644 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8f63a5304c263db8cd4106423faef2227b9b49fb | 1,803 | py | Python | tf_quant_finance/rates/swap_curve_common.py | slowy07/tf-quant-finance | 0976f720fb58a2d7bfd863640c12a2425cd2f94f | [
"Apache-2.0"
] | 3,138 | 2019-07-24T21:43:17.000Z | 2022-03-30T12:11:09.000Z | tf_quant_finance/rates/swap_curve_common.py | slowy07/tf-quant-finance | 0976f720fb58a2d7bfd863640c12a2425cd2f94f | [
"Apache-2.0"
] | 63 | 2019-09-07T19:16:03.000Z | 2022-03-29T19:29:40.000Z | tf_quant_finance/rates/swap_curve_common.py | slowy07/tf-quant-finance | 0976f720fb58a2d7bfd863640c12a2425cd2f94f | [
"Apache-2.0"
] | 423 | 2019-07-26T21:28:05.000Z | 2022-03-26T13:07:44.000Z | # Lint as: python3
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Common utilities and data structures for swap curve construction."""
from tf_quant_finance import types
from tf_quant_finance import utils
__all__ = [
'SwapCurveBuilderResult'
]
@utils.dataclass
class SwapCurveBuilderResult:
"""Swap curve calibration results.
Attributes:
times: Rank 1 real `Tensor`. Times for the computed rates.
rates: Rank 1 `Tensor` of the same dtype as `times`. The inferred zero
rates.
discount_factors: Rank 1 `Tensor` of the same dtype as `times`. The inferred
discount factors.
initial_rates: Rank 1 `Tensor` of the same dtype as `times`. The initial
guess for the rates.
converged: Scalar boolean `Tensor`. Whether the procedure converged.
failed: Scalar boolean `Tensor`. Whether the procedure failed.
iterations: Scalar int32 `Tensor`. Number of iterations performed.
objective_value: Scalar real `Tensor`. The objective function at the optimal
      solution.
"""
times: types.RealTensor
rates: types.RealTensor
discount_factors: types.RealTensor
initial_rates: types.RealTensor
converged: types.BoolTensor
failed: types.BoolTensor
iterations: types.IntTensor
objective_value: types.RealTensor
| 34.673077 | 80 | 0.749861 | 244 | 1,803 | 5.483607 | 0.487705 | 0.044843 | 0.024664 | 0.029148 | 0.190583 | 0.154709 | 0.097907 | 0.097907 | 0.097907 | 0.097907 | 0 | 0.010163 | 0.181364 | 1,803 | 51 | 81 | 35.352941 | 0.896341 | 0.732668 | 0 | 0 | 0 | 0 | 0.051282 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
8f6b79727ab29b5b63cc9dd751d6412c518b2feb | 557 | py | Python | Module2/myprg1.py | vishalnalwa/DAT210x-master---Old | 0cbe91636285fd55059b5059cb131811ea207850 | [
"MIT"
] | null | null | null | Module2/myprg1.py | vishalnalwa/DAT210x-master---Old | 0cbe91636285fd55059b5059cb131811ea207850 | [
"MIT"
] | null | null | null | Module2/myprg1.py | vishalnalwa/DAT210x-master---Old | 0cbe91636285fd55059b5059cb131811ea207850 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Jun 19 17:14:36 2017
@author: m037382
"""
import pandas as pd
df = pd.read_html('http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/2015/seasontype/2')
df1 = pd.concat(df)
df1.columns = ['RK', 'PLAYER', 'TEAM', 'GP','G','A','PTS','+/-',' PIM','PTS/G','SOG','PCT','GWG','PP-G','PP-A','SH-G','SH-A']
df1= df1[df1.RK != "RK"]
df1= df1.drop('RK',axis=1)
df1=df1.dropna()
df1=df1.reset_index(drop=True)
print df1
print len(df1.PCT.unique())
print df1.loc[15, 'GP'] + df1.loc[16, 'GP'] | 29.315789 | 125 | 0.61939 | 101 | 557 | 3.386139 | 0.633663 | 0.087719 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087475 | 0.096948 | 557 | 19 | 126 | 29.315789 | 0.592445 | 0.037702 | 0 | 0 | 0 | 0.090909 | 0.324895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8f751c06b143e725cfe778cbe9ed3c2d667f4adc | 907 | py | Python | Plugins/Aspose-Slides-Java-for-Jython/asposeslides/WorkingWithPresentation/Zoom.py | tienph91/Aspose.Slides-for-Java | 874a8245c6e1ae227393644d9bd35d5e65e07d52 | [
"MIT"
] | 27 | 2016-10-25T13:19:25.000Z | 2022-03-03T04:13:53.000Z | Plugins/Aspose-Slides-Java-for-Jython/asposeslides/WorkingWithPresentation/Zoom.py | tienph91/Aspose.Slides-for-Java | 874a8245c6e1ae227393644d9bd35d5e65e07d52 | [
"MIT"
] | 9 | 2017-03-14T13:02:17.000Z | 2021-11-25T13:22:20.000Z | Plugins/Aspose-Slides-Java-for-Jython/asposeslides/WorkingWithPresentation/Zoom.py | tienph91/Aspose.Slides-for-Java | 874a8245c6e1ae227393644d9bd35d5e65e07d52 | [
"MIT"
] | 38 | 2016-04-07T16:37:29.000Z | 2022-01-17T06:35:14.000Z | from asposeslides import Settings
from com.aspose.slides import Presentation
from com.aspose.slides import SaveFormat
from com.aspose.slides import XpsOptions
class Zoom:
def __init__(self):
dataDir = Settings.dataDir + 'WorkingWithPresentation/Zoom/'
# Create an instance of Presentation class
pres = Presentation()
# Setting View Properties of Presentation
pres.getViewProperties().getSlideViewProperties().setScale(50) # zoom value in percentages for slide view
        pres.getViewProperties().getNotesViewProperties().setScale(50) # zoom value in percentages for notes view
# Save the presentation as a PPTX file
save_format = SaveFormat
pres.save(dataDir + "Zoom.pptx", save_format.Pptx)
print "Set zoom value, please check the output file."
if __name__ == '__main__':
Zoom() | 34.884615 | 127 | 0.693495 | 102 | 907 | 6.029412 | 0.509804 | 0.034146 | 0.063415 | 0.092683 | 0.209756 | 0.087805 | 0 | 0 | 0 | 0 | 0 | 0.008608 | 0.231533 | 907 | 26 | 128 | 34.884615 | 0.873745 | 0.23484 | 0 | 0 | 0 | 0 | 0.132075 | 0.04209 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.266667 | null | null | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8f76e085c8945f356c1b5166c9e3418d881f3e6b | 3,649 | py | Python | python/common/fileIO/pcsPath.py | CountZer0/PipelineConstructionSet | 0aa73a8a63c72989b2d1c677efd78dad4388d335 | [
"BSD-3-Clause"
] | 21 | 2015-04-27T05:01:36.000Z | 2021-11-22T13:45:14.000Z | python/common/fileIO/pcsPath.py | 0xb1dd1e/PipelineConstructionSet | 621349da1b6d1437e95d0c9e48ee9f36d59f19fd | [
"BSD-3-Clause"
] | null | null | null | python/common/fileIO/pcsPath.py | 0xb1dd1e/PipelineConstructionSet | 621349da1b6d1437e95d0c9e48ee9f36d59f19fd | [
"BSD-3-Clause"
] | 7 | 2015-04-11T11:37:19.000Z | 2020-05-22T09:49:04.000Z | '''
Author: Jason.Parks
Created: April 17, 2012
Module: common.fileIO.Path
Purpose: custom Path class wrapper
'''
from common.fileIO.path import *
import os, stat
import sys
class Path(path):
'''
PipelineConstructionSet sub-class of pymel's path class (which came from Jason Orendorff's implementation in Python 2.2)
'''
def toP4Path(self):
''' Convert path to perforce path '''
self = self.makePretty()
# return '//depot/%s' % self[3:]
return '//%s' % self[3:]
def makePretty(self, lastSlash=True, backward=False):
''' cleans up the look of the paths
Params:
lastSlash: keep or remove the last slash in a directory path
backward: replace with backward or forward slashes
Returns: Path() class object
'''
slashChr = '/'
if backward:
slashChr = '\\'
# check for initial double forward slash
network = False
if self[:2] == '//' or self[:2] == '\\\\':
network = True
# convert to forward or backward
for unused in range(0, 4):
# Removal of double forward slashes
self = self.replace('//', slashChr)
self = self.replace('\\', slashChr)
self = self.replace('/', slashChr)
# check tail if dir
if os.path.isdir(self):
if self[-1:] == '/':
if not lastSlash:
self = self[:-1]
else:
if lastSlash:
self = self + slashChr
# reinstate initial double forward slash
if network:
if backward:
self = '\\%s' % self
else:
self = '/%s' % self
return Path(self)
@property
def pretty(self):
''' Property, do not use ()'''
return self.makePretty()
@property
def prettyWOSlash(self):
''' Property, do not use ()'''
return self.makePretty(lastSlash=False)
@property
def isbin(self):
"""
Property, do not use ()
Description: Tests for binary?
Inputs: path
Returns: True/False
"""
fd = open(self, 'rb')
for b in fd.read():
if ord(b) > 127:
fd.close()
return True
fd.close()
return False
@property
def isReadOnly(self):
''' Property, do not use ()'''
fileAtt = os.stat(self)[0]
if (not fileAtt & stat.S_IWRITE):
return 1
else:
return 0
@property
def isWritable(self):
''' Property, do not use ()'''
fileAtt = os.stat(self)[0]
if (not fileAtt & stat.S_IREAD):
return 1
else:
return 0
def makeReadOnly(self, _dir=False):
''' make file(s) readOnly
Params:
_dir: if process entire directory
Returns: True/False
'''
success = True
if _dir:
# check for dir passed
if self.isdir:
for _file in self.walk():
os.chmod(_file, stat.S_IREAD)
if self.isWritable:
success = False
else:
print "%s is not a directory"
else:
# process single
if self.exists:
os.chmod(self, stat.S_IREAD)
if self.isWritable:
success = False
else:
print "%s does not exists" % self
return success
def makeWritable(self, _dir=False):
''' make file(s) non-readOnly
Params:
_dir: if process entire directory
Returns: True/False
'''
success = True
if _dir:
# check for dir passed
if self.isdir:
for _file in self.walk():
os.chmod(_file, stat.S_IWRITE)
if self.isReadOnly:
success = False
else:
print "%s is not a directory"
else:
# process single
if self.exists:
os.chmod(self, stat.S_IWRITE)
if self.isReadOnly:
success = False
else:
print "%s does not exists" % self
return success
	def modulePath(self):
		return self in sys.path
def string(self):
return str(self)
print "common.fileIO.Path imported"
| 20.272222 | 121 | 0.624007 | 496 | 3,649 | 4.558468 | 0.262097 | 0.026537 | 0.03096 | 0.037594 | 0.435648 | 0.41088 | 0.392304 | 0.392304 | 0.326404 | 0.326404 | 0 | 0.010271 | 0.252946 | 3,649 | 179 | 122 | 20.385475 | 0.819149 | 0.072075 | 0 | 0.476636 | 0 | 0 | 0.054286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.037383 | null | null | 0.046729 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
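`makePretty`'s core job above is slash normalisation while preserving a leading `//` on network (UNC) paths. A standalone Python 3 sketch of that logic (hypothetical helper; it omits the class's directory trailing-slash handling, which needs the filesystem):

```python
def make_pretty(p, backward=False):
    """Normalise slashes in a path string, preserving a leading // or \\\\ (network share)."""
    slash = "\\" if backward else "/"
    network = p[:2] in ("//", "\\\\")
    for _ in range(4):  # same repeated-pass cleanup as the class above
        p = p.replace("//", slash).replace("\\", slash).replace("/", slash)
    if network:
        p = slash + p   # reinstate the collapsed leading separator
    return p


print(make_pretty("C:/foo\\bar//baz"))  # C:/foo/bar/baz
print(make_pretty("//server/share"))    # //server/share
```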
8f7c29f9a92e05ecd92699cd4528d8f8ee852948 | 1,324 | py | Python | pytorch_forecasting/metrics/__init__.py | smetam/pytorch-forecasting | 0a5e13fc9eefab00f43b002d08808b235582ed3d | [
"MIT"
] | null | null | null | pytorch_forecasting/metrics/__init__.py | smetam/pytorch-forecasting | 0a5e13fc9eefab00f43b002d08808b235582ed3d | [
"MIT"
] | 2 | 2020-07-26T11:38:16.000Z | 2020-08-16T21:08:53.000Z | pytorch_forecasting/metrics/__init__.py | smetam/pytorch-forecasting | 0a5e13fc9eefab00f43b002d08808b235582ed3d | [
"MIT"
] | null | null | null | """
Metrics for (mulit-horizon) timeseries forecasting.
"""
from pytorch_forecasting.metrics.base_metrics import (
DistributionLoss,
Metric,
MultiHorizonMetric,
MultiLoss,
MultivariateDistributionLoss,
convert_torchmetric_to_pytorch_forecasting_metric,
)
from pytorch_forecasting.metrics.distributions import (
BetaDistributionLoss,
ImplicitQuantileNetworkDistributionLoss,
LogNormalDistributionLoss,
MQF2DistributionLoss,
MultivariateNormalDistributionLoss,
NegativeBinomialDistributionLoss,
NormalDistributionLoss,
)
from pytorch_forecasting.metrics.point import MAE, MAPE, MASE, RMSE, SMAPE, CrossEntropy, PoissonLoss, TweedieLoss
from pytorch_forecasting.metrics.quantile import QuantileLoss
__all__ = [
"MultiHorizonMetric",
"DistributionLoss",
"MultivariateDistributionLoss",
"MultiLoss",
"Metric",
"convert_torchmetric_to_pytorch_forecasting_metric",
"MAE",
"MAPE",
"MASE",
"PoissonLoss",
"TweedieLoss",
"CrossEntropy",
"SMAPE",
"RMSE",
"BetaDistributionLoss",
"NegativeBinomialDistributionLoss",
"NormalDistributionLoss",
"LogNormalDistributionLoss",
"MultivariateNormalDistributionLoss",
"ImplicitQuantileNetworkDistributionLoss",
"QuantileLoss",
"MQF2DistributionLoss",
]
| 27.020408 | 114 | 0.755287 | 86 | 1,324 | 11.406977 | 0.44186 | 0.110092 | 0.089704 | 0.118247 | 0.089704 | 0.089704 | 0 | 0 | 0 | 0 | 0 | 0.0018 | 0.160876 | 1,324 | 48 | 115 | 27.583333 | 0.881188 | 0.03852 | 0 | 0 | 0 | 0 | 0.303557 | 0.181028 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.093023 | 0 | 0.093023 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8f7fd280f59517d4cbc989bc26eb77bef14be853 | 1,885 | py | Python | tests/events/test_consumer.py | promulo/pinga | 185e8fc533882106f4f263a141d6ce829f6d3c39 | [
"MIT"
] | null | null | null | tests/events/test_consumer.py | promulo/pinga | 185e8fc533882106f4f263a141d6ce829f6d3c39 | [
"MIT"
] | 21 | 2020-12-20T02:00:45.000Z | 2020-12-24T14:32:54.000Z | tests/events/test_consumer.py | promulo/pinga | 185e8fc533882106f4f263a141d6ce829f6d3c39 | [
"MIT"
] | null | null | null | import json
from unittest.mock import ANY, MagicMock, call, patch
from pinga.events.consumer import Consumer
@patch("pinga.events.consumer.get_db_conn")
@patch("pinga.events.consumer.save_event")
@patch("pinga.events.consumer.get_logger")
@patch(
"pinga.events.consumer.KafkaConsumer",
return_value=[
MagicMock(value=b"hello"),
MagicMock(value=b"world"),
MagicMock(value=b"")
]
)
def test_consumer_consume_invalid_events(mock_kafka, mock_logger, mock_save, mock_db_conn):
consumer = Consumer()
consumer.consume()
mock_logger().error.assert_has_calls([
call("Received invalid message: 'hello', skipping"),
call("Received invalid message: 'world', skipping"),
call("Received invalid message: '', skipping")
])
mock_save.assert_not_called()
@patch("pinga.events.consumer.get_db_conn")
@patch("pinga.events.consumer.save_event")
@patch("pinga.events.consumer.KafkaConsumer")
def test_consumer_consume_happy_path(mock_kafka, mock_save, mock_db_conn):
event_1 = {
"url": "http://example.org/404",
"status": "down",
"httpStatus": 404,
"responseTimeSeconds": 0.160735
}
event_2 = {
"url": "a.a",
"status": "error",
"errorMessage": "some error"
}
mock_kafka.return_value = [
MagicMock(value=json.dumps(event_1).encode("utf-8")),
MagicMock(value=json.dumps(event_2).encode("utf-8"))
]
consumer = Consumer()
consumer.consume()
mock_save.assert_has_calls([call(ANY, event_1), call(ANY, event_2)])
@patch("pinga.events.consumer.get_db_conn")
@patch("pinga.events.consumer.create_events_table")
@patch("pinga.events.consumer.KafkaConsumer")
def test_consumer_table_created(mock_kafka, mock_create_table, mock_db_conn):
_ = Consumer()
mock_create_table.assert_called_once_with(mock_db_conn())
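The tests above lean on `unittest.mock`'s ordered call assertions. A minimal self-contained sketch of that pattern (the `logger` mock here is a stand-in, not the pinga logger):

```python
from unittest.mock import MagicMock, call

# Hypothetical logger stand-in, mirroring the assertion pattern in the tests above.
logger = MagicMock()
for msg in ("hello", "world", ""):
    logger.error("Received invalid message: '%s', skipping" % msg)

# With any_order=False (the default) the listed calls must appear consecutively,
# though extra calls may occur before or after them.
logger.error.assert_has_calls([
    call("Received invalid message: 'hello', skipping"),
    call("Received invalid message: 'world', skipping"),
    call("Received invalid message: '', skipping"),
])
print(logger.error.call_count)  # → 3
```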
| 29.920635 | 91 | 0.690716 | 237 | 1,885 | 5.232068 | 0.274262 | 0.097581 | 0.168548 | 0.193548 | 0.473387 | 0.255645 | 0.255645 | 0.255645 | 0.191129 | 0.191129 | 0 | 0.013359 | 0.166048 | 1,885 | 62 | 92 | 30.403226 | 0.775445 | 0 | 0 | 0.215686 | 0 | 0 | 0.311936 | 0.180902 | 0 | 0 | 0 | 0 | 0.078431 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56ae8dd61b1ca2359d4320007e8cf692e7b80936 | 3,437 | py | Python | q1pulse/lang/loops.py | sldesnoo-Delft/q1pulse | f5123b5c1e0dfbb59512d282ec7e3fb833e58b95 | [
"MIT"
] | 1 | 2021-11-12T09:40:14.000Z | 2021-11-12T09:40:14.000Z | q1pulse/lang/loops.py | sldesnoo-Delft/q1pulse | f5123b5c1e0dfbb59512d282ec7e3fb833e58b95 | [
"MIT"
] | null | null | null | q1pulse/lang/loops.py | sldesnoo-Delft/q1pulse | f5123b5c1e0dfbb59512d282ec7e3fb833e58b95 | [
"MIT"
] | null | null | null | from numbers import Number
from .math_expressions import get_dtype
from .register import Register
from .exceptions import Q1ValueError, Q1TypeError
class LoopVar(Register):
def __init__(self, name, loop, **kwargs):
super().__init__(name, **kwargs)
self._loop = loop
class Loop:
def __init__(self, loop_number, n, var_type=None, local=False):
self._loop_number = loop_number
self._n = n
self.label = f'local_{loop_number}' if local else f'loop_{loop_number}'
self._loop_reg = LoopVar(f'_cnt{self._loop_number}',
self, local=local, dtype=int)
if var_type:
reg_name = f'_var{self._loop_number}'
self._loopvar = LoopVar(reg_name, self, dtype=var_type)
else:
self._loopvar = None
@property
def n(self):
return self._n
@property
def loopvar(self):
return self._loopvar
def __repr__(self):
return f'repeat({self._n}):'
class RangeLoop(Loop):
def __init__(self, loop_number, start_or_n, stop=None, step=None):
if stop is None:
n = start_or_n
self._start = 0
self._stop = n
self._step = 1
else:
self._start = start_or_n
self._stop = stop
self._step = 1 if step is None else step
# loop is end exclusive
n = (self._stop-1 - self._start) // self._step
n = max(0, n+1)
super().__init__(loop_number, n, var_type=int)
@property
def start(self):
return self._start
@property
def step(self):
return self._step
def __repr__(self):
return f'loop_range({self._start}, {self._stop}, {self._step}):{self.loopvar}'
class LinspaceLoop(Loop):
def __init__(self, loop_number, start, stop, n, endpoint=True):
super().__init__(loop_number, n, var_type=float)
        if max(start, stop) > 1.0 or min(start, stop) < -1.0:
raise Q1ValueError('value out of range [-1.0, 1.0]')
self._start = start
self._stop = stop
self._endpoint = endpoint
step_divisor = (n-1) if endpoint else n
self._step = (stop - start)/step_divisor
@property
def start(self):
return self._start
@property
def step(self):
return self._step
def __repr__(self):
endpoint = ', endpoint=False' if not self._endpoint else ''
return f'loop_linspace({self._start}, {self._stop}, {self.n}{endpoint}):{self.loopvar}'
class ArrayLoop(Loop):
def __init__(self, loop_number, values):
dtype = get_dtype(values[0])
super().__init__(loop_number, len(values), var_type=dtype)
self.values = values
        for value in values[1:]:
            if get_dtype(value) != dtype:
                raise Q1TypeError('Array values must all be same type')
if dtype == float:
literal_values = [value for value in values if isinstance(value, Number)]
if max(literal_values) > 1.0 or min(literal_values) < -1.0:
raise Q1ValueError('value out of range [-1.0, 1.0]')
self._table_label = f'_table{self._loop_number}'
self._data_ptr = Register(f'_ptr{self._loop_number}')
@property
def loopvar(self):
return self._loopvar
def __repr__(self):
return f'loop_array({self.values}):{self.loopvar}'
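The end-exclusive iteration count computed in `RangeLoop.__init__` can be checked against Python's own `range`. This sketch re-implements just that formula (assuming a positive step, as the class appears to):

```python
def range_loop_n(start, stop, step=1):
    # Mirrors RangeLoop above: the loop is end-exclusive, so the last
    # reachable index is stop - 1; n counts how many steps fit.
    n = (stop - 1 - start) // step
    return max(0, n + 1)

# Agrees with len(range(...)) for positive steps, including the empty case.
for args in [(0, 10, 1), (2, 10, 3), (5, 5, 1), (0, 7, 2)]:
    assert range_loop_n(*args) == len(range(*args))
print(range_loop_n(2, 10, 3))  # → 3  (indices 2, 5, 8)
```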
| 30.6875 | 95 | 0.600233 | 453 | 3,437 | 4.242826 | 0.169978 | 0.078044 | 0.065557 | 0.031217 | 0.30437 | 0.278356 | 0.244537 | 0.185224 | 0.185224 | 0.185224 | 0 | 0.012648 | 0.286878 | 3,437 | 111 | 96 | 30.963964 | 0.771522 | 0.00611 | 0 | 0.329545 | 0 | 0.011364 | 0.130053 | 0.072935 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.045455 | 0.113636 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
56b14293ce8e3f37b76431fb8d2ed7001e30c62e | 563 | py | Python | apis/betterself/v1/signout/views.py | jeffshek/betterself | 51468253fc31373eb96e0e82189b9413f3d76ff5 | [
"MIT"
] | 98 | 2017-07-29T14:26:36.000Z | 2022-02-28T04:10:15.000Z | apis/betterself/v1/signout/views.py | jeffshek/betterself | 51468253fc31373eb96e0e82189b9413f3d76ff5 | [
"MIT"
] | 1,483 | 2017-05-30T00:05:56.000Z | 2022-03-31T12:37:06.000Z | apis/betterself/v1/signout/views.py | lawrendran/betterself | 51468253fc31373eb96e0e82189b9413f3d76ff5 | [
"MIT"
] | 13 | 2017-11-08T00:02:35.000Z | 2022-02-28T04:10:32.000Z | from django.contrib.auth import logout
from django.views.generic import RedirectView
class SessionLogoutView(RedirectView):
"""
A view that will logout a user out and redirect to homepage.
"""
permanent = False
query_string = True
pattern_name = 'home'
def get_redirect_url(self, *args, **kwargs):
"""
Logout user and redirect to target url.
"""
if self.request.user.is_authenticated():
logout(self.request)
return super(SessionLogoutView, self).get_redirect_url(*args, **kwargs)
| 28.15 | 79 | 0.664298 | 67 | 563 | 5.477612 | 0.626866 | 0.054496 | 0.070845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241563 | 563 | 19 | 80 | 29.631579 | 0.859485 | 0.17762 | 0 | 0 | 0 | 0 | 0.009434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
56b4a1d9e88e438a1a646637bff86e026e3985e3 | 564 | py | Python | service_catalog/tables/global_hook_tables.py | LaudateCorpus1/squest | 98304f20c1d966fb3678d348ffd7c5be438bb6be | [
"Apache-2.0"
] | 112 | 2021-04-21T08:52:55.000Z | 2022-03-01T15:09:19.000Z | service_catalog/tables/global_hook_tables.py | LaudateCorpus1/squest | 98304f20c1d966fb3678d348ffd7c5be438bb6be | [
"Apache-2.0"
] | 216 | 2021-04-21T09:06:47.000Z | 2022-03-30T14:21:28.000Z | service_catalog/tables/global_hook_tables.py | LaudateCorpus1/squest | 98304f20c1d966fb3678d348ffd7c5be438bb6be | [
"Apache-2.0"
] | 21 | 2021-04-20T13:53:54.000Z | 2022-03-30T21:43:04.000Z | from django_tables2 import TemplateColumn
from service_catalog.models import GlobalHook
from Squest.utils.squest_table import SquestTable
class GlobalHookTable(SquestTable):
state = TemplateColumn(template_name='custom_columns/global_hook_state.html')
actions = TemplateColumn(template_name='custom_columns/global_hook_actions.html', orderable=False)
class Meta:
model = GlobalHook
attrs = {"id": "global_hook_table", "class": "table squest-pagination-tables"}
fields = ("name", "model", "state", "job_template", "actions")
| 37.6 | 102 | 0.753546 | 64 | 564 | 6.421875 | 0.53125 | 0.072993 | 0.126521 | 0.155718 | 0.238443 | 0.238443 | 0.238443 | 0 | 0 | 0 | 0 | 0.002066 | 0.141844 | 564 | 14 | 103 | 40.285714 | 0.847107 | 0 | 0 | 0 | 0 | 0 | 0.289007 | 0.177305 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
56b6f5d648b635feba668eb6da3c0a9f3f8d3fbc | 388 | py | Python | django/axf/app/migrations/0004_remove_usermodel_ticket.py | zhang15780/web_project | 820708ae68f4d1bc06cdde4a86e40a5457c11df8 | [
"Apache-2.0"
] | null | null | null | django/axf/app/migrations/0004_remove_usermodel_ticket.py | zhang15780/web_project | 820708ae68f4d1bc06cdde4a86e40a5457c11df8 | [
"Apache-2.0"
] | null | null | null | django/axf/app/migrations/0004_remove_usermodel_ticket.py | zhang15780/web_project | 820708ae68f4d1bc06cdde4a86e40a5457c11df8 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-05-08 01:38
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('app', '0003_usermodel_ticket'),
]
operations = [
migrations.RemoveField(
model_name='usermodel',
name='ticket',
),
]
| 19.4 | 46 | 0.610825 | 41 | 388 | 5.585366 | 0.780488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070922 | 0.273196 | 388 | 19 | 47 | 20.421053 | 0.741135 | 0.170103 | 0 | 0 | 1 | 0 | 0.122257 | 0.065831 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56b95aef4c51bc2cb8ee77c4f2107d9a962b6662 | 288 | py | Python | app/User/index.py | OpenHill/OpenHi | 92279362b6513e066dd10f6ccbff8ab8a30b066e | [
"Apache-2.0"
] | null | null | null | app/User/index.py | OpenHill/OpenHi | 92279362b6513e066dd10f6ccbff8ab8a30b066e | [
"Apache-2.0"
] | null | null | null | app/User/index.py | OpenHill/OpenHi | 92279362b6513e066dd10f6ccbff8ab8a30b066e | [
"Apache-2.0"
] | null | null | null | from app.models.DB.mainDB import Post, User
from . import user
from flask import render_template
@user.route("/")
def n():
postlist = Post.query.filter().first()
print(postlist.user.nikename)
userlist = User.query.filter().first()
print(userlist.post)
return "app"
| 20.571429 | 43 | 0.690972 | 39 | 288 | 5.076923 | 0.564103 | 0.080808 | 0.161616 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170139 | 288 | 13 | 44 | 22.153846 | 0.828452 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.5 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56bb6e027a6dbdead08ed1b439c286d109ad9a29 | 3,347 | py | Python | main.py | NilaiVemula/empty-my-fridge | 0a9e7c214126e8d6e658f0a1e5d3bf50d5e2618c | [
"MIT"
] | null | null | null | main.py | NilaiVemula/empty-my-fridge | 0a9e7c214126e8d6e658f0a1e5d3bf50d5e2618c | [
"MIT"
] | null | null | null | main.py | NilaiVemula/empty-my-fridge | 0a9e7c214126e8d6e658f0a1e5d3bf50d5e2618c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Main script for Empty My Fridge project. Provides a CLI for the Spoonacular API.
"""
import requests
from rich.console import Console
import config
def get_missing_ingredients(available_ingredients, amount_recipes):
"""
:param available_ingredients: comma separated list of ingredients the user has
:type available_ingredients: str
:param amount_recipes: number of recipes that the user wants
:type amount_recipes: int
:return: None
"""
# explanation of API parameters: https://spoonacular.com/food-api/docs#Search-Recipes-by-Ingredients
# ingredients (string): A comma-separated list of ingredients that the recipes should contain.
# number (number): The maximum number of recipes to return (between 1 and 100). Defaults to 10.
# limitLicense (bool): Whether the recipes should have an open license that allows display with proper attribution.
# ranking (number): Whether to maximize used ingredients (1) or minimize missing ingredients (2) first.
# ignorePantry (bool): Whether to ignore typical pantry items, such as water, salt, flour, etc.
parameters = {
'apiKey': config.API_KEY,
'ingredients': available_ingredients,
'number': amount_recipes,
'ranking': 2,
'ignorePantry': True
}
r = requests.get('https://api.spoonacular.com/recipes/findByIngredients/', params=parameters)
data = r.json()
recipes = []
missing_ingredients = []
links = []
for recipe in data:
# get recipe information
param = {
'apiKey': config.API_KEY,
'includeNutrition': False
}
a = requests.get('https://api.spoonacular.com/recipes/' + str(recipe['id']) + '/information', params=param)
d = a.json()
recipes.append(recipe['title'])
list_of_missing = recipe['missedIngredients']
missing = []
for item in list_of_missing:
missing.append(item['name'])
missing_ingredients.append(', '.join(map(str, missing)))
links.append(d['sourceUrl'])
return recipes, missing_ingredients, links
def main():
# initialize the rich environment for command line formatting
console = Console()
# get user inputs
ingredients = console.input("Enter the ingredients you have (each separated by a comma and a space): ")
amount_recipes = int(console.input("How many recipes do you want to see? "))
while amount_recipes < 1:
amount_recipes = int(console.input("Please enter 1 or higher: "))
# call method to get results
recipes, missing, links = get_missing_ingredients(ingredients, amount_recipes)
# format output
# unpack results and format into table
from rich.table import Table
# initialize table
table = Table(title='Recipes you can make with ' + ingredients)
table.add_column("Recipe Name", style="cyan")
table.add_column("Missing Ingredients", style="magenta")
table.add_column("Recipe Link", style="green")
# load data
for recipe, missing_ingredients, link in zip(recipes, missing, links):
table.add_row(recipe, missing_ingredients, link)
# FIXME: make full links and ingredient list show up in smaller windows
console.print(table)
if __name__ == "__main__":
main()
| 34.505155 | 119 | 0.679414 | 411 | 3,347 | 5.437956 | 0.411192 | 0.072483 | 0.021477 | 0.017897 | 0.088591 | 0.035794 | 0.035794 | 0 | 0 | 0 | 0 | 0.004975 | 0.219301 | 3,347 | 96 | 120 | 34.864583 | 0.850364 | 0.360323 | 0 | 0.041667 | 0 | 0 | 0.206814 | 0 | 0 | 0 | 0 | 0.010417 | 0 | 1 | 0.041667 | false | 0 | 0.083333 | 0 | 0.145833 | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56ca1e265ad52858ba29fd79aa5275c28f935321 | 1,452 | py | Python | UmrConf.py | pzia/useful-meeting-reminder | 41b1bf6b0d57f8930700999503c1f0a66d78f0a6 | [
"MIT"
] | 2 | 2018-03-26T05:25:03.000Z | 2019-12-15T18:18:54.000Z | UmrConf.py | pzia/useful-meeting-reminder | 41b1bf6b0d57f8930700999503c1f0a66d78f0a6 | [
"MIT"
] | null | null | null | UmrConf.py | pzia/useful-meeting-reminder | 41b1bf6b0d57f8930700999503c1f0a66d78f0a6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#config
import configparser
#sys & os
import sys
import os.path
#set logging
import logging
logging.basicConfig(filename=os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'umr.log'),level=logging.DEBUG)
#Set locale
#FIXME : force French, should be in conf ?
import locale
locale.setlocale(locale.LC_TIME, 'fr_FR')
#Initialisation to None
gconfig = None
#config helpers
def localpath():
"""Path of launcher, supposed to be the root of the tree"""
return(os.path.dirname(os.path.abspath(sys.argv[0])))
def get_config(cname = 'umr.ini'):
"""Load config as a dict"""
#Configuration
global gconfig
    if gconfig is None:
        logging.info("Load config")
        gconfig = configparser.ConfigParser()
        # read_file replaces the deprecated readfp; the with-block also
        # closes the file handle, which the original left open.
        with open(os.path.join(localpath(), cname)) as f:
            gconfig.read_file(f)
#logging.info("Config loaded, user %s" % gconfig.get('User', 'login'))
return gconfig
def get_path(pathname_config, filename_config = None):
"""Helper to get pathname from config, and create if necessary"""
#check path
conf_path = os.path.join(localpath(), gconfig.get('Path', pathname_config))
if not os.path.exists(conf_path) :
logging.debug("Creating %s", conf_path)
os.mkdir(conf_path)
if filename_config is not None:
filename = gconfig.get("Files", filename_config)
conf_path = os.path.join(conf_path, filename)
return(conf_path)
gconfig = get_config()
| 28.470588 | 120 | 0.685262 | 203 | 1,452 | 4.817734 | 0.384236 | 0.06135 | 0.0409 | 0.030675 | 0.106339 | 0.06953 | 0.06953 | 0.06953 | 0.06953 | 0 | 0 | 0.002523 | 0.181129 | 1,452 | 50 | 121 | 29.04 | 0.820017 | 0.249311 | 0 | 0 | 0 | 0 | 0.046992 | 0 | 0 | 0 | 0 | 0.02 | 0 | 1 | 0.111111 | false | 0 | 0.185185 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56ca763bb9fa3b6b3fb0286e4da0963af3f62d4c | 1,469 | py | Python | booking/models.py | thanderoy/HotelBookingSys | 5e2f769f75158ba271dfd10ed3850c4ca1364df3 | [
"BSD-2-Clause"
] | null | null | null | booking/models.py | thanderoy/HotelBookingSys | 5e2f769f75158ba271dfd10ed3850c4ca1364df3 | [
"BSD-2-Clause"
] | 1 | 2021-11-23T08:44:21.000Z | 2021-11-23T08:44:21.000Z | booking/models.py | thanderoy/HotelBookingSys | 5e2f769f75158ba271dfd10ed3850c4ca1364df3 | [
"BSD-2-Clause"
] | null | null | null | from django.db import models
from django.conf import settings
from django.urls import reverse
# Create your models here.
class Room(models.Model):
ROOM_CATEGORIES = (
('BZS', 'BUSINESS SUITE'),
('TNS', 'TWIN SUITE'),
('EXS', 'EXECUTIVE SUITE'),
('SGB', 'SINGLE BED'),
)
room_number = models.IntegerField(null=True, blank=True)
category = models.CharField(choices=ROOM_CATEGORIES, max_length=3)
beds = models.IntegerField(null=True, blank=True)
capacity = models.IntegerField(null=True, blank=True)
price = models.FloatField(null=True, blank=True)
    image_url = models.CharField(max_length=1000, null=True, blank=True)
def __str__(self):
return f'{self.room_number}.{self.category} with {self.beds} bed(s) for {self.capacity} person(s) @ KSH. {self.price}'
class Booking(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
room = models.ForeignKey(Room, on_delete=models.CASCADE)
check_in = models.DateTimeField()
check_out = models.DateTimeField()
def __str__(self):
return f'{self.user} has booked {self.room} from {self.check_in} to {self.check_out}'
def get_category(self):
categories = dict(self.room.ROOM_CATEGORIES)
category = categories.get(self.room.category)
return category
def cancel_booking(self):
return reverse('booking:CancelBookingView', args=[self.pk, ])
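`get_category` works because the model stores the raw three-letter code while `ROOM_CATEGORIES` pairs each code with its label; the lookup can be sketched without Django:

```python
# Same choices tuple as the Room model above.
ROOM_CATEGORIES = (
    ('BZS', 'BUSINESS SUITE'),
    ('TNS', 'TWIN SUITE'),
    ('EXS', 'EXECUTIVE SUITE'),
    ('SGB', 'SINGLE BED'),
)

def get_category(code):
    # dict() over a tuple of pairs maps code -> label, mirroring
    # Booking.get_category; unknown codes resolve to None.
    return dict(ROOM_CATEGORIES).get(code)

print(get_category('EXS'))  # → EXECUTIVE SUITE
print(get_category('XXX'))  # → None
```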
| 36.725 | 127 | 0.6855 | 188 | 1,469 | 5.218085 | 0.393617 | 0.040775 | 0.066259 | 0.086646 | 0.149847 | 0.149847 | 0 | 0 | 0 | 0 | 0 | 0.004188 | 0.187202 | 1,469 | 40 | 128 | 36.725 | 0.81742 | 0.016338 | 0 | 0.064516 | 0 | 0.064516 | 0.186288 | 0.040859 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.096774 | 0.096774 | 0.774194 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
56d9ce9e3fcbd7a37b87e6ee3644f57b43b23284 | 3,110 | py | Python | mopidy_frontend_for_adafruit_charlcdplate/Frontend.py | 9and3r/mopidy-frontend-for-adafruit-charlcdplate | 696e5f1c9174371907ef38ab3e00347ec7fb28e7 | [
"Apache-2.0"
] | null | null | null | mopidy_frontend_for_adafruit_charlcdplate/Frontend.py | 9and3r/mopidy-frontend-for-adafruit-charlcdplate | 696e5f1c9174371907ef38ab3e00347ec7fb28e7 | [
"Apache-2.0"
] | null | null | null | mopidy_frontend_for_adafruit_charlcdplate/Frontend.py | 9and3r/mopidy-frontend-for-adafruit-charlcdplate | 696e5f1c9174371907ef38ab3e00347ec7fb28e7 | [
"Apache-2.0"
] | null | null | null | import threading
from time import sleep
from mopidy import core
from MainScreen import MainScreen
from InputManager import InputManager
from DisplayObject import DisplayObject
import pykka
class FrontendAdafruitCharLCDPlate(pykka.ThreadingActor, core.CoreListener):
def __init__(self, config, core):
super(FrontendAdafruitCharLCDPlate, self).__init__()
self.input_manager = InputManager()
self.display_object = DisplayObject()
if True:
import Adafruit_CharLCD as LCD
self.display = LCD.Adafruit_CharLCDPlate()
else:
from web_socket_lcd_simulator import WebSockectLCDSimulator
self.display = WebSockectLCDSimulator()
self.main_screen = MainScreen(core)
self.running = True
def on_start(self):
# Add newline
self.display.set_color(1.0, 0.0, 0.0)
self.display.create_char(0, [16, 16, 16, 16, 16, 16, 0, 0])
self.display.create_char(1, [24, 24, 24, 24, 24, 24, 0, 0])
self.display.create_char(2, [28, 28, 28, 28, 28, 28, 0, 0])
self.display.create_char(3, [30, 30, 30, 30, 30, 30, 0, 0])
self.display.create_char(4, [31, 31, 31, 31, 31, 31, 0, 0])
try:
self.display.on_start()
except AttributeError:
pass
t = threading.Thread(target=self.start_working)
t.start()
def on_stop(self):
self.running = False
try:
self.display.on_stop()
except AttributeError:
pass
def send_screen_update(self):
self.display.clear()
self.display.message(self.display_object.getString())
def start_working(self):
while self.running:
self.update()
sleep(0.03)
def update(self):
# Check inputs
for event in self.input_manager.update(self.display):
print event
self.main_screen.input_event(event)
if self.main_screen.check_and_update(self.display_object, True) or self.display_object.update():
self.send_screen_update()
# Events
def playback_state_changed(self, old_state, new_state):
self.main_screen.playback_state_changed(old_state, new_state)
def track_playback_started(self, tl_track):
self.main_screen.track_playback_started(tl_track)
def track_playback_ended(self, tl_track, time_position):
self.main_screen.track_playback_ended(tl_track, time_position)
def track_playback_paused(self, tl_track, time_position):
self.main_screen.track_playback_paused(tl_track, time_position)
def track_playback_resumed(self, tl_track, time_position):
self.main_screen.track_playback_resumed(tl_track, time_position)
def seeked(self, time_position):
self.main_screen.seeked(time_position)
def volume_changed(self, volume):
self.main_screen.volume_changed(volume)
def stream_title_changed(self, title):
self.main_screen.stream_title_changed(title)
def playlists_loaded(self):
self.main_screen.playlists_loaded()
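The five `create_char` bitmaps registered in `on_start` are progress-bar glyphs: each byte is one 5-pixel row, with bit 4 as the leftmost column. A small sketch renders them as ASCII art (an illustration only, not part of the frontend):

```python
def render(rows):
    # Format each row as 5 bits and draw lit pixels as '#'.
    return ['{:05b}'.format(r).replace('1', '#').replace('0', '.') for r in rows]

print(render([16, 16, 16, 16, 16, 16, 0, 0])[0])  # → #....  (1 column lit)
print(render([31, 31, 31, 31, 31, 31, 0, 0])[0])  # → #####  (all 5 columns lit)
```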
| 29.339623 | 104 | 0.668167 | 396 | 3,110 | 5 | 0.255051 | 0.094444 | 0.084848 | 0.057576 | 0.237374 | 0.169192 | 0.111111 | 0.075758 | 0.075758 | 0.075758 | 0 | 0.035548 | 0.240193 | 3,110 | 105 | 105 | 29.619048 | 0.80237 | 0.009968 | 0 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.028571 | 0.128571 | null | null | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56dd60f209869d7af1b845822dbd7aa5a4dd5e06 | 1,038 | gyp | Python | mine.gyp | indutny/mine.uv | 2187dc6b4f13c5c71d852bfb4234b3168225013b | [
"Unlicense",
"MIT"
] | 6 | 2015-01-07T06:34:57.000Z | 2021-09-02T15:18:14.000Z | mine.gyp | indutny/mine.uv | 2187dc6b4f13c5c71d852bfb4234b3168225013b | [
"Unlicense",
"MIT"
] | null | null | null | mine.gyp | indutny/mine.uv | 2187dc6b4f13c5c71d852bfb4234b3168225013b | [
"Unlicense",
"MIT"
] | null | null | null | {
"targets": [{
"target_name": "mine.uv",
"type": "executable",
"dependencies": [
"mine.uv-lib",
],
"sources": [
"src/main.c",
],
}, {
"target_name": "mine.uv-lib",
"type": "<(library)",
"include_dirs": [ "src" ],
"dependencies": [
"deps/uv/uv.gyp:libuv",
"deps/openssl/openssl.gyp:openssl",
"deps/zlib/zlib.gyp:zlib",
],
"direct_dependent_settings": {
"include_dirs": [ "src" ],
},
"sources": [
"src/client.c",
"src/client-handshake.c",
"src/client-protocol.c",
"src/format/anvil-encode.c",
"src/format/anvil-parse.c",
"src/format/nbt-common.c",
"src/format/nbt-encode.c",
"src/format/nbt-parse.c",
"src/format/nbt-utils.c",
"src/format/nbt-value.c",
"src/protocol/framer.c",
"src/protocol/parser.c",
"src/utils/buffer.c",
"src/utils/common.c",
"src/utils/string.c",
"src/server.c",
"src/session.c",
"src/world.c",
],
}]
}
| 21.183673 | 41 | 0.513487 | 123 | 1,038 | 4.284553 | 0.341463 | 0.129032 | 0.132827 | 0.12334 | 0.068311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264933 | 1,038 | 48 | 42 | 21.625 | 0.690695 | 0 | 0 | 0.227273 | 0 | 0 | 0.589595 | 0.314066 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56e908fc2dbcd320e835739f93b05700c0192f60 | 1,059 | py | Python | module/duplicates.py | aiskov/storekeeper | 4413b523377974b69b57f514177f232a19269497 | [
"Apache-2.0"
] | null | null | null | module/duplicates.py | aiskov/storekeeper | 4413b523377974b69b57f514177f232a19269497 | [
"Apache-2.0"
] | null | null | null | module/duplicates.py | aiskov/storekeeper | 4413b523377974b69b57f514177f232a19269497 | [
"Apache-2.0"
] | null | null | null | import os
from hashlib import sha224
def search_duplicates(target, options=None):
digests = dict()
for root, dirs, files in os.walk(target):
for item in files:
path = '%s/%s' % (root, item)
hash_digest = _create_digest(path)
if hash_digest not in digests:
digests[hash_digest] = set()
digests[hash_digest].add(path)
            if options is not None and getattr(options, 'verbose', False) and len(digests[hash_digest]) > 1:
print 'Duplicates found by hash: %s' % hash_digest
for file_path in digests[hash_digest]:
print ' %s' % file_path
return [value for key, value in digests.iteritems() if len(value) > 1]
def _create_digest(target, buffer_size=50000000):
hash_sum = sha224()
with open(target) as f:
while True:
data = f.read(buffer_size)
if not data:
break
hash_sum.update(data)
return '[%s]->%s' % (os.path.getsize(target), hash_sum.hexdigest())
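`_create_digest` hashes a file in fixed-size chunks so large files never have to fit in memory. A Python 3 sketch of the same idea (the module above is Python 2), checked against hashing the whole payload at once:

```python
import hashlib
import os
import tempfile

def chunked_sha224(path, buffer_size=4096):
    # Same idea as _create_digest above: feed the hash in fixed-size chunks.
    h = hashlib.sha224()
    with open(path, 'rb') as f:
        while True:
            data = f.read(buffer_size)
            if not data:
                break
            h.update(data)
    return h.hexdigest()

payload = b'x' * 10000  # spans multiple 4096-byte chunks
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

digest = chunked_sha224(path)
os.remove(path)
print(digest == hashlib.sha224(payload).hexdigest())  # → True
```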
| 26.475 | 90 | 0.576959 | 135 | 1,059 | 4.385185 | 0.422222 | 0.118243 | 0.114865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022253 | 0.321058 | 1,059 | 39 | 91 | 27.153846 | 0.801113 | 0 | 0 | 0 | 0 | 0 | 0.05104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.08 | null | null | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56ee3be0b0c7e78b9f55c418420f0b8ca2d9d317 | 1,430 | py | Python | Game/migrations/0005_auto_20181108_1248.py | rusito-23/InvestSimulator | 59c501813fc42e2a6a44d93ecffefbe070108144 | [
"MIT"
] | null | null | null | Game/migrations/0005_auto_20181108_1248.py | rusito-23/InvestSimulator | 59c501813fc42e2a6a44d93ecffefbe070108144 | [
"MIT"
] | 6 | 2020-06-05T20:06:51.000Z | 2022-03-11T23:42:56.000Z | Game/migrations/0005_auto_20181108_1248.py | rusito-23/InvestSimulator | 59c501813fc42e2a6a44d93ecffefbe070108144 | [
"MIT"
] | 1 | 2019-06-24T19:54:12.000Z | 2019-06-24T19:54:12.000Z | # Generated by Django 2.1.2 on 2018-11-08 12:48
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('Game', '0004_auto_20181106_1255'),
]
operations = [
migrations.CreateModel(
name='Loan',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('loaned', models.FloatField(default=-1)),
('due_date', models.DateField()),
('borrower', models.ForeignKey(on_delete=django.db.models.deletion.DO_NOTHING, to='Game.Wallet')),
],
),
migrations.CreateModel(
name='LoanOffer',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('loaned', models.FloatField(default=-1)),
('interest_rate', models.FloatField(default=-1)),
('days', models.IntegerField(default=-1)),
('lender', models.ForeignKey(on_delete=django.db.models.deletion.DO_NOTHING, to='Game.Wallet')),
],
),
migrations.AddField(
model_name='loan',
name='offer',
field=models.ForeignKey(on_delete=django.db.models.deletion.DO_NOTHING, to='Game.LoanOffer'),
),
]
| 36.666667 | 114 | 0.576923 | 146 | 1,430 | 5.527397 | 0.424658 | 0.049566 | 0.069393 | 0.109046 | 0.536555 | 0.536555 | 0.536555 | 0.536555 | 0.536555 | 0.536555 | 0 | 0.033915 | 0.278322 | 1,430 | 38 | 115 | 37.631579 | 0.748062 | 0.031469 | 0 | 0.40625 | 1 | 0 | 0.104121 | 0.016631 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
56f371e5353c0afeca86a59339a84ebab13cdb12 | 3,119 | py | Python | tests_manual/db_connection_3_FROM_CONFIG.py | BrainAnnex/brain-annex | 07701ba0309c448e9030a19a10dca4d73c155afe | [
"MIT"
] | null | null | null | tests_manual/db_connection_3_FROM_CONFIG.py | BrainAnnex/brain-annex | 07701ba0309c448e9030a19a10dca4d73c155afe | [
"MIT"
] | 3 | 2021-12-19T03:58:42.000Z | 2022-02-11T07:40:46.000Z | tests_manual/db_connection_3_FROM_CONFIG.py | BrainAnnex/brain-annex | 07701ba0309c448e9030a19a10dca4d73c155afe | [
"MIT"
] | null | null | null | # Test of ability to connect to the Neo4j database from the credentials
# in config file(s) variables NEO4J_HOST, NEO4J_USER, NEO4J_PASSWORD
from configparser import ConfigParser
from BrainAnnex.modules.neo_access.neo_access import NeoAccess
print("About to test the database connection, using the credentials STORED in the configuration file(s)\n")
# IMPORT AND VALIDATE THE CONFIGURABLE PARAMETERS #
def extract_par(name, d, display=True) -> str:
if name not in d:
raise Exception(f"The configuration file needs a line with a value for {name}")
value = d[name]
if display:
print(f"{name}: {value}")
else:
        print(f"{name}: *********")
    return value


#####

config = ConfigParser()

# Attempt to import parameters from the default config file first, then from 'config.ini'
# (possibly overwriting some or all values from the default config file)
found_files = config.read(['../config.defaults.ini', '../config.ini'])
#print("found_files: ", found_files)    # This will be a list of the names of the files that were found

if found_files == []:
    raise Exception("No configuration files found!  Make sure to have a 'config.ini' file in the same folder as main.py")

if found_files == ['config.defaults.ini']:
    raise Exception("Only found a DEFAULT version of the config file ('config.defaults.ini'); make sure to duplicate it, name it 'config.ini' and personalize it")

if found_files == ['config.ini']:
    print("A local, personalized version of the config file was found ('config.ini'); all configuration will be based on this file")
else:
    print("Two config files found: anything in 'config.ini' will override any counterpart in 'config.defaults.ini'")

#print("Sections found in config file(s): ", config.sections())

if "SETTINGS" not in config:
    raise Exception("Incorrectly set up configuration file - the following line should be present at the top: [SETTINGS]")

SETTINGS = config['SETTINGS']

NEO4J_HOST = extract_par("NEO4J_HOST", SETTINGS)
NEO4J_USER = extract_par("NEO4J_USER", SETTINGS)
NEO4J_PASSWORD = extract_par("NEO4J_PASSWORD", SETTINGS, display=False)
MEDIA_FOLDER = extract_par("MEDIA_FOLDER", SETTINGS)
UPLOAD_FOLDER = extract_par("UPLOAD_FOLDER", SETTINGS)
PORT_NUMBER = extract_par("PORT_NUMBER", SETTINGS)    # The Flask default is 5000

try:
    PORT_NUMBER = int(PORT_NUMBER)
except Exception:
    raise Exception(f"The passed value for PORT_NUMBER ({PORT_NUMBER}) is not an integer as expected")

# END OF CONFIGURATION IMPORT

if not NEO4J_HOST \
        or not NEO4J_USER \
        or not NEO4J_PASSWORD:
    print("To run this test, ALL of the following variables must be set in the config file(s): NEO4J_HOST, NEO4J_USER, NEO4J_PASSWORD. "
          "Test skipped")
else:
    # Attempt to connect to the Neo4j database with credentials taken from the config file(s)
    obj = NeoAccess(host=NEO4J_HOST,
                    credentials=(NEO4J_USER, NEO4J_PASSWORD),
                    debug=True, autoconnect=True)
    print("Version of the Neo4j driver: ", obj.version())

print("End of test")
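The two-file read above depends on `ConfigParser` merging sources in order, with values read later overriding values read earlier, section by section. A minimal self-contained sketch of that behavior (the section contents are made-up stand-ins for `config.defaults.ini` and `config.ini`):

```python
from configparser import ConfigParser

config = ConfigParser()

# Stand-in for config.defaults.ini (read first)
config.read_string("""
[SETTINGS]
NEO4J_HOST = bolt://localhost:7687
PORT_NUMBER = 5000
""")

# Stand-in for config.ini (read second): overrides PORT_NUMBER, keeps NEO4J_HOST
config.read_string("""
[SETTINGS]
PORT_NUMBER = 5123
""")

print(config["SETTINGS"]["NEO4J_HOST"])   # bolt://localhost:7687
print(config["SETTINGS"]["PORT_NUMBER"])  # 5123
```

`config.read([...])` with a list of filenames merges the same way, which is why the local file can override only the defaults it cares about.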
56ff2ba473246165aa39a267af57e3ffba8e8382 | 766 | py | Python | tests/docs/test_swportal.py | Saanidhyavats/dffml | 5664a75aa7fcd5921b1a3e2da9203d94ed960286 | ["MIT"] | 3 | 2021-03-08T18:41:21.000Z | 2021-06-05T20:15:14.000Z | tests/docs/test_swportal.py | NikhilBartwal/dffml | 16180144f388924d9e5840c4aa80d08970af5e60 | ["MIT"] | 1 | 2021-03-20T07:05:55.000Z | 2021-03-20T07:05:55.000Z | tests/docs/test_swportal.py | NikhilBartwal/dffml | 16180144f388924d9e5840c4aa80d08970af5e60 | ["MIT"] | 1 | 2021-04-19T23:58:26.000Z | 2021-04-19T23:58:26.000Z |

import pathlib
import unittest
import platform

from dffml import AsyncTestCase
from dffml.util.testing.consoletest.cli import main as consoletest


@unittest.skipIf(
    platform.system() in ["Windows", "Darwin"],
    f"Does not work on {platform.system()}",
)
class TestSWPortal(AsyncTestCase):
    async def test_readme(self):
        await consoletest(
            [
                str(
                    pathlib.Path(__file__).parents[2]
                    / "examples"
                    / "swportal"
                    / "README.rst"
                ),
                "--setup",
                str(
                    pathlib.Path(__file__).parent
                    / "swportal_consoletest_test_setup.py"
                ),
            ]
        )
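The test above builds the README path by climbing the repository tree with `pathlib`. A small sketch of that path arithmetic (the `/repo/...` layout is a made-up example; `parents[2]` is three levels above the file itself):

```python
import pathlib

# Hypothetical repo layout mirroring how test_readme() locates README.rst
p = pathlib.PurePosixPath("/repo/tests/docs/test_swportal.py")

print(p.parents[0])  # /repo/tests/docs  (the containing directory)
print(p.parents[2])  # /repo             (the repository root)
print(p.parents[2] / "examples" / "swportal" / "README.rst")  # /repo/examples/swportal/README.rst
```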
7100d1bf6005dd52b07fe32271eb797c1309b6c4 | 1,038 | py | Python | alipay/create_direct_pay_by_user/dpn/models.py | freeyoung/django-alipay3 | 34b8ea95229ab03eddc299be7da6219bb56ef25f | ["MIT"] | null | null | null | alipay/create_direct_pay_by_user/dpn/models.py | freeyoung/django-alipay3 | 34b8ea95229ab03eddc299be7da6219bb56ef25f | ["MIT"] | null | null | null | alipay/create_direct_pay_by_user/dpn/models.py | freeyoung/django-alipay3 | 34b8ea95229ab03eddc299be7da6219bb56ef25f | ["MIT"] | null | null | null |

# -*- coding:utf-8 -*-
from django.db import models

from alipay import conf
from alipay.models import AliPayBaseModel
from alipay.create_direct_pay_by_user.dpn.signals import alipay_dpn_flagged, alipay_dpn_successful


class AliPayDPN(AliPayBaseModel):
    """
    AliPay DPN
    """
    gmt_close = models.DateTimeField(blank=True, null=True)
    extra_common_param = models.CharField(blank=True, null=True, max_length=256)
    out_channel_type = models.CharField(blank=True, null=True, max_length=256)
    out_channel_amount = models.CharField(blank=True, null=True, max_length=256)
    out_channel_inst = models.CharField(blank=True, null=True, max_length=256)

    class Meta:
        db_table = 'alipay_dpn'
        verbose_name = 'AliPay DPN'

    def send_signals(self):
        if self.notify_type != 'trade_status_sync':
            return
        if self.is_transaction():
            if self.flag:
                alipay_dpn_flagged.send(sender=self)
            else:
                alipay_dpn_successful.send(sender=self)
71014c38dd434377dbc86a1671b95f35defbc373 | 2,555 | py | Python | app/api/v2/views/user_views.py | MbuguaCaleb/Questioner-V2-API | 9e3a5593250a12b74ad5dcbe220827040fa3d676 | ["MIT"] | null | null | null | app/api/v2/views/user_views.py | MbuguaCaleb/Questioner-V2-API | 9e3a5593250a12b74ad5dcbe220827040fa3d676 | ["MIT"] | 1 | 2019-01-19T13:10:02.000Z | 2019-01-19T13:10:02.000Z | app/api/v2/views/user_views.py | MbuguaCaleb/Questioner-V2-API | 9e3a5593250a12b74ad5dcbe220827040fa3d676 | ["MIT"] | null | null | null |

from flask import jsonify, request
from ...v2 import version_2 as v2
from ..schemas.user_schema import UserSchema
from ..models.user_model import User

db = User()


@v2.route('/', methods=['GET'])
@v2.route('/welcome', methods=['GET'])
def index():
    return jsonify({'status': 200, 'message': 'Welcome !! A meetup Application:'}), 200


@v2.route('/register', methods=['POST'])
def register():
    """ Function to register new user """
    json_data = request.get_json()

    # No data has been provided
    if not json_data:
        return jsonify({'status': 400, 'message': 'No Sign up data provided'}), 400

    # Check if request is valid
    data, errors = UserSchema().load(json_data)
    if errors:
        return jsonify({'status': 400, 'message': 'Invalid data. Please fill all the required fields', 'errors': errors}), 400

    # Checking if the username exists
    if db.exists('username', data['username']):
        return jsonify({'status': 409, 'message': 'Username already exists'}), 409

    # Checking if the email exists
    if db.exists('email', data['email']):
        return jsonify({'status': 409, 'message': 'Email already exists'}), 409

    # Save new user and get result
    new_user = db.save(data)
    result = UserSchema(exclude=['password']).dump(new_user).data

    return jsonify({
        'status': 201,
        'message': 'User created successfully',
        'data': result,
    }), 201


@v2.route('/login', methods=['POST'])
def login():
    json_data = request.get_json()

    # Check if the request contains any data
    if not json_data:
        return jsonify({'status': 400, 'message': 'No data provided! Please supply your login credentials'}), 400

    # Check if credentials have been passed
    data, errors = UserSchema().load(json_data, partial=True)
    if errors:
        return jsonify({'status': 400, 'message': 'Invalid data. Please fill all required fields', 'errors': errors}), 400

    try:
        username = data['username']
        password = data['password']
    except KeyError:
        return jsonify({'status': 400, 'message': 'Invalid credentials. Please confirm them!'}), 400

    # Check if username exists
    if not db.exists('username', username):
        return jsonify({'status': 404, 'message': 'User not found'}), 404

    user = db.find('username', username)

    # Checking if password matches
    db.checkpassword(user['password'], password)

    return jsonify({
        'status': 200,
        'message': 'User logged in successfully',
    }), 200
7108effd9b9b381b5c4d66c7863ecea3a015fab6 | 305 | py | Python | settings/__init__.py | xuehuiping/DCA | 4d71593c78e59f035305e1983d92146c5156789d | ["MIT"] | 5 | 2019-10-07T09:25:25.000Z | 2019-11-05T08:40:44.000Z | settings/__init__.py | xuehuiping/DCA | 4d71593c78e59f035305e1983d92146c5156789d | ["MIT"] | 5 | 2019-10-31T02:51:55.000Z | 2019-12-31T07:09:49.000Z | settings/__init__.py | QtNN/DCA | b48be18dab45006abc80b94445622893d0c142ba | ["MIT"] | 2 | 2020-03-23T18:11:27.000Z | 2020-12-14T10:48:00.000Z |

"""Helpers to build the blocks of the training
pipeline for user-defined parameters.
"""
from .setup_xp import load_params, set_device, get_embedders
from .model import build_multi_agt_summarizer

__all__ = [
    "build_multi_agt_summarizer",
    "load_params",
    "set_device",
    "get_embedders"
]
7108f5884575eb480313b1e686e0da4928d0f2be | 350 | py | Python | relogic/logickit/serving/__init__.py | Impavidity/relogic | f647106e143cd603b95b63e06ea530cdd516aefe | ["MIT"] | 24 | 2019-07-20T02:10:21.000Z | 2022-03-15T07:13:07.000Z | relogic/logickit/serving/__init__.py | One-paper-luck/relogic | f647106e143cd603b95b63e06ea530cdd516aefe | ["MIT"] | 3 | 2019-11-28T04:19:25.000Z | 2019-11-30T23:29:19.000Z | relogic/logickit/serving/__init__.py | One-paper-luck/relogic | f647106e143cd603b95b63e06ea530cdd516aefe | ["MIT"] | 5 | 2019-11-27T03:12:07.000Z | 2021-12-08T11:45:43.000Z |

from flask import Flask, jsonify, request
app = Flask(__name__)


@app.route('/')
def index():
    return jsonify("Hello World")


class Server(object):
    def __init__(self, trainer=None):
        self.trainer = trainer

    def start(self):
        app.run(host="0.0.0.0", port=5000, debug=True)


if __name__ == "__main__":
    server = Server()
    server.start()
7114c66935c88efa75790e08dde08f63c3013e83 | 28,361 | py | Python | data_processor.py | ZouJoshua/deeptext_project | efb4db34a4a22951bba4ae724d05c41958bbd347 | ["Apache-2.0"] | 2 | 2021-03-01T06:37:27.000Z | 2021-04-07T09:40:55.000Z | data_processor.py | ZouJoshua/deeptext_project | efb4db34a4a22951bba4ae724d05c41958bbd347 | ["Apache-2.0"] | 5 | 2020-09-26T01:16:58.000Z | 2022-02-10T01:49:30.000Z | data_processor.py | ZouJoshua/deeptext_project | efb4db34a4a22951bba4ae724d05c41958bbd347 | ["Apache-2.0"] | null | null | null |

#!/usr/bin/env python
# coding:utf8
# Copyright (c) 2018, Tencent. All rights reserved
# This file contains the DataProcessor class, which can:
# 1. Generate dict from train text data:
# Text format: label\t[(token )+]\t[(char )+]\t[(custom_feature )+].
# Label could be flattened or hierarchical which is separated by "--".
# 2. Convert data to tfrecord according to dict and config.
# 3. Provide input_fn for estimator train and serving.
import codecs
import json
import os
import sys
from collections import Counter
import tensorflow as tf
import util
from config import Config
class DataProcessor(object):
VOCAB_UNKNOWN = "_UNK"
VOCAB_PADDING = "_PAD"
# Text format: label\t[(token )+]\t[(char )+]\t[(custom_feature )+].
LINE_SPLIT_NUMBER = 4
# Index of line, also index in dict list
LABEL_INDEX = 0
TOKEN_INDEX = 1
CHAR_INDEX = 2
CUSTOM_FEATURE_INDEX = 3
USE_SPECIAL_LABEL = False
SPECIAL_LABEL_LIST = ["社会", "时政", "国际", "军事"]
SPECIAL_LABEL = "|".join(SPECIAL_LABEL_LIST)
# TODO(marvinmu): check config
def __init__(self, config, logger=None):
self.config = config
if logger:
self.logger = logger
else:
self.logger = util.Logger(config)
self.dict_names = ["label", "token", "char", "custom_feature",
"token_ngram", "char_ngram", "char_in_token"]
self.dict_files = []
for dict_name in self.dict_names:
self.dict_files.append(
self.config.data.dict_dir + "/" + dict_name + ".dict")
self.label_dict_file = self.dict_files[0]
# Should keep all labels
self.min_count = [0, self.config.feature_common.min_token_count,
self.config.feature_common.min_char_count,
self.config.var_len_feature.min_custom_feature_count,
self.config.var_len_feature.min_token_ngram_count,
self.config.var_len_feature.min_char_ngram_count,
self.config.feature_common.min_char_count_in_token]
# Should keep all labels
self.max_dict_size = \
[1000 * 1000, self.config.feature_common.max_token_dict_size,
self.config.feature_common.max_char_dict_size,
self.config.var_len_feature.max_custom_feature_dict_size,
self.config.var_len_feature.max_token_ngram_dict_size,
self.config.var_len_feature.max_char_ngram_dict_size,
self.config.feature_common.max_char_in_token_dict_size]
# Label and custom feature has no max_sequence_length.
self.max_sequence_length = \
[0, self.config.fixed_len_feature.max_token_sequence_length,
self.config.fixed_len_feature.max_char_sequence_length, 0]
# Label and custom feature has no ngram.
self.ngram_list = [0, self.config.var_len_feature.token_ngram,
self.config.var_len_feature.char_ngram, 0]
self.label_map = dict()
self.token_map = dict()
self.char_map = dict()
self.custom_feature_map = dict()
self.token_gram_map = dict()
self.char_gram_map = dict()
self.char_in_token_map = dict()
self.dict_list = [self.label_map, self.token_map, self.char_map,
self.custom_feature_map, self.token_gram_map,
self.char_gram_map, self.char_in_token_map]
self.id_to_label_map = dict()
self.id_to_token_map = dict()
self.id_to_char_map = dict()
self.id_to_custom_feature_map = dict()
self.id_to_token_gram_map = dict()
self.id_to_char_gram_map = dict()
self.id_to_char_in_token_map = dict()
self.id_to_vocab_dict_list = [
self.id_to_label_map, self.id_to_token_map,
self.id_to_char_map, self.id_to_custom_feature_map,
self.id_to_token_gram_map, self.id_to_char_gram_map,
self.id_to_char_in_token_map]
self.train_text_file, self.validate_text_file, self.test_text_file = \
self.config.data.train_text_file, \
self.config.data.validate_text_file, \
self.config.data.test_text_file
self.tfrecord_files = [
self.config.data.tfrecord_dir + "/" + os.path.basename(
self.train_text_file) + ".tfrecord",
self.config.data.tfrecord_dir + "/" + os.path.basename(
self.validate_text_file) + ".tfrecord",
self.config.data.tfrecord_dir + "/" + os.path.basename(
self.test_text_file) + ".tfrecord"]
self.train_file, self.validate_file, self.test_file = \
self.tfrecord_files
self.feature_files = [
self.config.data.tfrecord_dir + "/" + os.path.basename(
self.train_text_file) + ".feature",
self.config.data.tfrecord_dir + "/" + os.path.basename(
self.validate_text_file) + ".feature",
self.config.data.tfrecord_dir + "/" + os.path.basename(
self.test_text_file) + ".feature"]
(self.train_feature_file, self.validate_feature_file,
self.test_feature_file) = self.feature_files
self.pretrained_embedding_files = [
"", config.feature_common.token_pretrained_embedding_file,
config.feature_common.char_pretrained_embedding_file,
config.var_len_feature.custom_feature_pretrained_embedding_file, ]
self.int_list_column = ["fixed_len_token", "var_len_token",
"char_in_token", "char_in_token_real_len",
"fixed_len_char", "var_len_char",
"var_len_token_ngram", "var_len_char_ngram",
"var_len_custom_feature"]
self.int_column = ["token_fixed_real_len", "char_fixed_real_len"]
self.float_column = ["token_var_real_len", "char_var_real_len",
"token_ngram_var_real_len",
"char_ngram_var_real_len",
"custom_feature_var_real_len"]
def _save_dict(self, dict_file, counter, name):
"""Save all vocab to file.
Args:
dict_file: File to save to.
counter: Vocab counts.
name: Dict name.
"""
dict_list = counter.most_common()
dict_file = codecs.open(dict_file, "w", encoding=util.CHARSET)
# Save _UNK for vocab not in dict and _PAD for padding
        count = 1000 * 1000    # Must be bigger than min count
if name != "label":
dict_list = [(self.VOCAB_PADDING, count),
(self.VOCAB_UNKNOWN, count)] + dict_list
for vocab in dict_list:
dict_file.write("%s\t%d\n" % (vocab[0], vocab[1]))
dict_file.close()
self.logger.info("Total count of %s: %d" % (name, len(dict_list)))
def _load_dict(self, dict_map, id_to_vocab_dict_map, dict_file, min_count,
max_dict_size, name):
"""Load dict according to params.
Args:
dict_map: Vocab dict map.
id_to_vocab_dict_map: Id to vocab dict map.
dict_file: File to load.
min_count: Vocab whose count is equal or greater than min_count
will be loaded.
max_dict_size: Load top max_dict_size vocabs sorted by count.
name: Dict name.
Returns:
dict.
"""
if not os.path.exists(dict_file):
self.logger.warn("Not exists %s for %s" % (dict_file, name))
else:
for line in codecs.open(dict_file, "r", encoding=util.CHARSET):
vocab = line.strip("\n").split("\t")
try:
temp = vocab[1]
except IndexError:
continue
if int(temp) >= min_count:
index = len(dict_map)
dict_map[vocab[0]] = index
id_to_vocab_dict_map[index] = vocab[0]
if len(dict_map) >= max_dict_size:
self.logger.warn(
"Reach the max size(%d) of %s, ignore the rest" % (
max_dict_size, name))
break
self.logger.info("Load %d vocab of %s" % (len(dict_map), name))
def load_all_dict(self):
"""Load all dict.
"""
for i, dict_name in enumerate(self.dict_names):
self._load_dict(self.dict_list[i], self.id_to_vocab_dict_list[i],
self.dict_files[i], self.min_count[i],
self.max_dict_size[i], dict_name)
def _generate_dict(self, text_file_list):
"""Generate dict and label dict given train text file.
Save all vocab to files and load dicts.
Text format: label\t[(token )+]\t[(char )+]\t[(feature )+].
Label could be flattened or hierarchical which is separated by "--".
Args:
text_file_list:
Text file list, usually only contain train text file.
"""
sample_size = 0
label_counter = Counter()
token_counter = Counter()
char_in_token_counter = Counter()
token_ngram_counter = Counter()
char_counter = Counter()
char_ngram_counter = Counter()
custom_feature_counter = Counter()
counters = [label_counter, token_counter, char_counter,
custom_feature_counter, token_ngram_counter,
char_ngram_counter, char_in_token_counter]
for text_file in text_file_list:
self.logger.info("Generate dict using text file %s" % text_file)
for line in codecs.open(text_file, "r", encoding='utf8'):
content = line.strip("\n").split('\t')
if len(content) != self.LINE_SPLIT_NUMBER:
self.logger.error("Wrong line: %s" % line)
continue
sample_size += 1
for i, _ in enumerate(content):
vocabs = content[i].strip().split(" ")
counters[i].update(vocabs)
if i == self.LABEL_INDEX:
                        # Special handling: also count the merged special label
if vocabs[0] in self.SPECIAL_LABEL_LIST and self.USE_SPECIAL_LABEL:
vocabs = [self.SPECIAL_LABEL]
counters[i].update(vocabs)
# If vocab is token, extract char info of each token
if i == self.TOKEN_INDEX:
char_in_token = []
for vocab in vocabs:
char_in_token.extend(vocab)
char_in_token_counter.update(char_in_token)
if self.ngram_list[i] > 1:
ngram_list = []
for j in range(2, self.ngram_list[i] + 1):
ngram_list.extend(["".join(vocabs[k:k + j]) for k in
range(len(vocabs) - j + 1)])
counters[i + 3].update(ngram_list)
for counter in counters:
if "" in counter.keys():
counter.pop("")
self.logger.info("sample size: %d" % sample_size)
for i, dict_name in enumerate(self.dict_names):
self._save_dict(self.dict_files[i], counters[i], self.dict_names[i])
def _get_vocab_id_list(self, dict_map, vocabs, ngram, sequence_length,
max_var_length, ngram_dict_map=None,
char_in_token_map=None,
max_char_sequence_length_per_token=-1):
"""Convert vocab string list to vocab id list.
Args:
dict_map: Dict used to map string to id.
vocabs: Vocab string list.
ngram: Ngram to use (if bigger than 1).
sequence_length: List length for fixed length vocab id list.
max_var_length: List length for var length vocab id list.
ngram_dict_map: Ngram dict map.
max_char_sequence_length_per_token:
Useful when using char to get token embedding.
Returns:
fixed length vocab id list,
real length of fixed vocab id list,
var length vocab id list,
ngram string list.
"""
if len(dict_map) == 0 or len(vocabs) == 0:
return [], 0, [], [], [], []
vocabs_iter = [x for x in vocabs if x in dict_map]
var_len_vocabs = [dict_map[x] for x in vocabs_iter]
if len(var_len_vocabs) > max_var_length:
var_len_vocabs = var_len_vocabs[0:max_var_length]
if not var_len_vocabs:
var_len_vocabs.append(dict_map[self.VOCAB_UNKNOWN])
if len(vocabs) > sequence_length:
vocabs = vocabs[0:sequence_length]
fixed_len_vocabs = []
fixed_len_vocabs.extend(
[dict_map[x] if x in dict_map else dict_map[self.VOCAB_UNKNOWN]
for x in vocabs])
fixed_real_len = len(fixed_len_vocabs)
if fixed_real_len < sequence_length:
fixed_len_vocabs.extend([dict_map[self.VOCAB_PADDING]] * (
sequence_length - len(fixed_len_vocabs)))
ngram_list = []
if ngram > 1:
ngram_list_str = []
for i in range(2, ngram + 1):
ngram_list_str.extend(["".join(vocabs[j:j + i]) for j in
range(len(vocabs) - i + 1)])
ngram_iter = [x for x in ngram_list_str if x in ngram_dict_map]
ngram_list = [ngram_dict_map[x] for x in ngram_iter]
if not ngram_list:
ngram_list.append(ngram_dict_map[self.VOCAB_UNKNOWN])
char_in_token = []
char_in_token_real_len = []
if max_char_sequence_length_per_token > 0:
length = 0
for vocab in vocabs:
length += 1
chars = []
chars.extend(
[char_in_token_map[x] if x in char_in_token_map else
char_in_token_map[self.VOCAB_UNKNOWN] for x in vocab])
if len(chars) > max_char_sequence_length_per_token:
chars = chars[0:max_char_sequence_length_per_token]
char_in_token_real_len.append(len(chars))
if len(chars) < max_char_sequence_length_per_token:
chars.extend([char_in_token_map[self.VOCAB_PADDING]] * (
max_char_sequence_length_per_token - len(chars)))
char_in_token.extend(chars)
while length < sequence_length:
length += 1
char_in_token.extend(
[char_in_token_map[self.VOCAB_PADDING]] *
max_char_sequence_length_per_token)
char_in_token_real_len.append(0)
return (fixed_len_vocabs, fixed_real_len, var_len_vocabs, ngram_list,
char_in_token, char_in_token_real_len)
def _get_features_from_text(self, text, has_label=True):
"""Parse text to features that model can use.
Args:
text: Input text
has_label: If true, result will contain label.
Returns:
Features that model can use.
"""
content = text.split('\t')
if len(content) != self.LINE_SPLIT_NUMBER:
self.logger.error("Wrong format line: %s" % text)
return None
label_string = content[self.LABEL_INDEX]
if has_label and label_string not in self.label_map:
self.logger.error("Wrong label of line: %s" % text)
return None
token = content[self.TOKEN_INDEX].strip().split(" ")
(fixed_len_token, token_fixed_real_len, var_len_token,
var_len_token_ngram, char_in_token, char_in_token_real_len) = \
self._get_vocab_id_list(
self.token_map, token, self.config.var_len_feature.token_ngram,
self.config.fixed_len_feature.max_token_sequence_length,
self.config.var_len_feature.max_var_token_length,
self.token_gram_map, self.char_in_token_map,
self.config.fixed_len_feature.max_char_length_per_token)
chars = content[self.CHAR_INDEX].strip().split(" ")
(fixed_len_char, char_fixed_real_len, var_len_char, var_len_char_ngram,
_, _) = self._get_vocab_id_list(
self.char_map, chars, self.config.var_len_feature.char_ngram,
self.config.fixed_len_feature.max_char_sequence_length,
self.config.var_len_feature.max_var_char_length,
self.char_gram_map)
custom_features = content[self.CUSTOM_FEATURE_INDEX].strip().split(
" ")
_, _, var_len_custom_feature, _, _, _ = self._get_vocab_id_list(
self.custom_feature_map, custom_features, 0, 0, 0,
self.config.var_len_feature.max_var_custom_feature_length, None)
feature_sample = dict({
"fixed_len_token": fixed_len_token,
"token_fixed_real_len": token_fixed_real_len,
"var_len_token": var_len_token,
"token_var_real_len": len(var_len_token),
"char_in_token": char_in_token,
"char_in_token_real_len": char_in_token_real_len,
"var_len_token_ngram": var_len_token_ngram,
"token_ngram_var_real_len": len(var_len_token_ngram),
"fixed_len_char": fixed_len_char,
"char_fixed_real_len": char_fixed_real_len,
"var_len_char": var_len_char,
"char_var_real_len": len(var_len_char),
"var_len_char_ngram": var_len_char_ngram,
"char_ngram_var_real_len": len(var_len_char_ngram),
"var_len_custom_feature": var_len_custom_feature,
"custom_feature_var_real_len": len(var_len_custom_feature)
})
if has_label:
label = self.label_map[content[0]]
if content[self.LABEL_INDEX] in self.SPECIAL_LABEL_LIST and self.USE_SPECIAL_LABEL:
label = self.label_map[self.SPECIAL_LABEL]
feature_sample["label"] = label
return feature_sample
def _convert_features_to_tfexample(self, feature_sample,
has_label=True):
"""Convert feature sample to tf.example
Args:
feature_sample: Feature sample.
has_label: If true, result will contain label
Returns:
tf.example
"""
if not feature_sample:
return None
tfexample = tf.train.Example()
for name in self.int_list_column:
tfexample.features.feature[name].int64_list.value.extend(
feature_sample[name])
for name in self.int_column:
tfexample.features.feature[name].int64_list.value.append(
feature_sample[name])
for name in self.float_column:
tfexample.features.feature[name].float_list.value.append(
feature_sample[name])
if has_label:
tfexample.features.feature["label"].int64_list.value.append(
feature_sample["label"])
return tfexample
def get_tfexample_from_text(self, text, has_label=True):
feature_sample = self._get_features_from_text(text, has_label)
tfexample = self._convert_features_to_tfexample(feature_sample,
has_label)
return tfexample, feature_sample
def _get_tfrecord_from_text_file(self, text_file, tfrecord_file,
feature_file):
"""Get tfrecord from text file.
Text format: label\t[(token )+]\t[(char )+]\t[(feature )+].
Label could be flattened or hierarchical which is separated by "--".
Args:
text_file: Text file.
tfrecord_file: Tfrecord file to write.
feature_file: Feature file, will save feature sample for debug.
For validate and test evaluation
"""
self.logger.info("Get tfrecord from text file %s" % text_file)
writer = tf.python_io.TFRecordWriter(tfrecord_file)
sample_size = 0
with codecs.open(feature_file, "w",
encoding=util.CHARSET) as label_file:
for line in codecs.open(text_file, "r", encoding='utf8'):
tfexample, feature_sample = self.get_tfexample_from_text(line)
if tfexample is not None:
feature_str = json.dumps(feature_sample, ensure_ascii=False)
label_file.write(
self.id_to_label_map[feature_sample["label"]] + "\t" +
feature_str + "\n")
writer.write(tfexample.SerializeToString())
sample_size += 1
writer.close()
self.logger.info(
"Text file %s has sample %d" % (text_file, sample_size))
def process_from_text_file(self, use_exists_dict=False):
"""Process text data to tfrecord for training and generate dicts.
"""
if not os.path.exists(self.config.data.tfrecord_dir):
os.makedirs(self.config.data.tfrecord_dir)
if not os.path.exists(self.config.data.dict_dir):
os.makedirs(self.config.data.dict_dir)
if use_exists_dict:
self.load_all_dict()
else:
self._generate_dict([self.config.data.train_text_file])
# If using pretrained embedding, dict can be generated by all text file.
# when repeating the result in the paper of textcnn, the following code
# should be used.
# self._generate_dict([self.config.data.train_text_file,
# self.config.data.validate_text_file,
# self.config.data.test_text_file])
self.load_all_dict()
text_files = [self.config.data.train_text_file,
self.config.data.validate_text_file,
self.config.data.test_text_file]
for i, text_file in enumerate(text_files):
self._get_tfrecord_from_text_file(text_file, self.tfrecord_files[i],
self.feature_files[i])
@staticmethod
def _get_feature_spec(has_label):
"""Feature map to parse tf.example
Args:
has_label: If true, feature map include label
Return:
feature map
"""
feature_spec = dict({
"fixed_len_token": tf.VarLenFeature(dtype=tf.int64),
"token_fixed_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.int64),
"var_len_token": tf.VarLenFeature(dtype=tf.int64),
"token_var_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.float32),
"char_in_token": tf.VarLenFeature(dtype=tf.int64),
"char_in_token_real_len": tf.VarLenFeature(dtype=tf.int64),
"var_len_token_ngram": tf.VarLenFeature(dtype=tf.int64),
"token_ngram_var_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.float32),
"fixed_len_char": tf.VarLenFeature(dtype=tf.int64),
"char_fixed_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.int64),
"var_len_char": tf.VarLenFeature(dtype=tf.int64),
"char_var_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.float32),
"var_len_char_ngram": tf.VarLenFeature(dtype=tf.int64),
"char_ngram_var_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.float32),
"var_len_custom_feature": tf.VarLenFeature(dtype=tf.int64),
"custom_feature_var_real_len": tf.FixedLenFeature(shape=(1,),
dtype=tf.float32),
})
if has_label:
feature_spec["label"] = tf.FixedLenFeature(shape=(1,),
dtype=tf.int64)
return feature_spec
def check_tfrecord(self, file_names, field_name, dtype=tf.int32):
"""Check one field of tfrecord
Args:
file_names: List of file names.
field_name: Field to check.
dtype: Field data type.
"""
filename_queue = tf.train.string_input_producer(file_names,
shuffle=False)
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(serialized_example,
self._get_feature_spec(True))
feature = tf.cast(features[field_name], dtype)
check_file = codecs.open("tf_check.txt", "w", encoding=util.CHARSET)
        gpu_options = tf.GPUOptions(allow_growth=True)    # allocate GPU memory on demand
        with tf.Session(config=tf.ConfigProto(
                device_count={"CPU": 12},
                inter_op_parallelism_threads=1,
                intra_op_parallelism_threads=1,
                gpu_options=gpu_options,
        )) as sess:
init_op = tf.global_variables_initializer()
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
            check_file.write(str(feature.eval()))
coord.request_stop()
coord.join(threads)
check_file.close()
def _parse_tfexample(self, example, mode=tf.estimator.ModeKeys.TRAIN):
"""Parse input example.
Args:
example: Tf.example.
mode: Estimator mode.
Return:
parsed feature and label.
"""
parsed = tf.parse_single_example(example, self._get_feature_spec(True))
parsed = self._sparse_to_dense(parsed)
label = None
if mode != tf.estimator.ModeKeys.PREDICT:
label = parsed.pop("label")
return parsed, label
def _sparse_to_dense(self, parsed_example):
for key in self.int_list_column:
if "var" not in key:
parsed_example[key] = tf.sparse_tensor_to_dense(
parsed_example[key])
return parsed_example
def dataset_input_fn(self, mode, input_file, batch_size, num_epochs=1):
"""Input function using tf.dataset for estimator
Args:
mode: input mode of tf.estimator.ModeKeys.{TRAIN, EVAL, PREDICT}.
input_file: Input tfrecord file.
batch_size: Batch size for model.
num_epochs: Number epoch.
Returns:
tf.dataset
"""
dataset = tf.data.TFRecordDataset(input_file)
dataset = dataset.map(self._parse_tfexample)
if mode != tf.estimator.ModeKeys.PREDICT:
dataset = dataset.shuffle(
buffer_size=self.config.data.shuffle_buffer)
dataset = dataset.repeat(num_epochs)
dataset = dataset.batch(batch_size)
return dataset
def serving_input_receiver_fn(self):
"""Input function for session server
Returns:
input_receiver_fn for session server
"""
serialized_tf_example = tf.placeholder(
dtype=tf.string, name='input_example_tensor')
receiver_tensors = {'examples': serialized_tf_example}
parsed_example = tf.parse_example(serialized_tf_example,
self._get_feature_spec(False))
parsed_example = self._sparse_to_dense(parsed_example)
return tf.estimator.export.ServingInputReceiver(parsed_example,
receiver_tensors)
def main(_):
config = Config(config_file=sys.argv[1])
data_processor = DataProcessor(config)
data_processor.process_from_text_file()
if __name__ == '__main__':
tf.app.run()
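Two of the core transformations in `DataProcessor` — n-gram expansion in `_generate_dict` and truncate-then-pad id conversion in `_get_vocab_id_list` — can be sketched without TensorFlow. The tiny vocabulary dict below is a made-up example:

```python
from collections import Counter

def extract_ngrams(vocabs, max_ngram):
    # Join every run of 2..max_ngram consecutive vocabs, as _generate_dict does
    ngrams = []
    for n in range(2, max_ngram + 1):
        ngrams.extend("".join(vocabs[k:k + n]) for k in range(len(vocabs) - n + 1))
    return ngrams

def to_fixed_len(tokens, dict_map, seq_len, unk="_UNK", pad="_PAD"):
    # Map to ids (unknown tokens -> _UNK), truncate to seq_len, then pad with _PAD
    ids = [dict_map.get(t, dict_map[unk]) for t in tokens[:seq_len]]
    real_len = len(ids)
    ids.extend([dict_map[pad]] * (seq_len - real_len))
    return ids, real_len

counter = Counter(extract_ngrams(["a", "b", "c"], 3))
print(sorted(counter))  # ['ab', 'abc', 'bc']

dict_map = {"_PAD": 0, "_UNK": 1, "a": 2, "b": 3}
print(to_fixed_len(["a", "b", "x"], dict_map, 5))  # ([2, 3, 1, 0, 0], 3)
```

The real class additionally records the pre-padding length (`fixed_real_len`) so the model can mask out the `_PAD` positions, which is what the second return value mirrors here.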
71156411fe36becbc71f0c141f19cc2f60370ca9 | 538 | py | Python | openerp/addons/web_diagram/__openerp__.py | ntiufalara/openerp7 | 903800da0644ec0dd9c1dcd34205541f84d45fe4 | [
"MIT"
] | 3 | 2016-01-29T14:39:49.000Z | 2018-12-29T22:42:00.000Z | openerp/addons/web_diagram/__openerp__.py | ntiufalara/openerp7 | 903800da0644ec0dd9c1dcd34205541f84d45fe4 | [
"MIT"
] | 2 | 2016-03-23T14:29:41.000Z | 2017-02-20T17:11:30.000Z | openerp/addons/web_diagram/__openerp__.py | ntiufalara/openerp7 | 903800da0644ec0dd9c1dcd34205541f84d45fe4 | [
"MIT"
] | null | null | null | {
    'name': 'OpenERP Web Diagram',
    'category': 'Hidden',
    'description': """
Openerp Web Diagram view.
=========================
""",
    'version': '2.0',
    'depends': ['web'],
    'js': [
        'static/lib/js/raphael.js',
        'static/lib/js/jquery.mousewheel.js',
        'static/src/js/vec2.js',
        'static/src/js/graph.js',
        'static/src/js/diagram.js',
    ],
    'css': [
        'static/src/css/base_diagram.css',
    ],
    'qweb': [
        'static/src/xml/*.xml',
    ],
    'auto_install': True,
}
# ============================================================================
# Xiuzhen_MongoDB.py  (cby3149/Novel_sender, MIT)
# ============================================================================
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
import smtplib
from email.mime.text import MIMEText
import time
import pymongo

# -------------------------------------------------
client = pymongo.MongoClient()
db = client.noval
collection = db.xiuzhen
# -------------------------------------------------
t = 1
c = 0
while True:
    t += 1
    url = 'https://www.booktxt.net/1_1439/'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) ' +
                      'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'}
    r = requests.get(url, headers=headers).content
    bs = BeautifulSoup(r, 'lxml', from_encoding='utf-8')
    for each in bs.find_all(class_="box_con"):
        for i in each.find_all('a'):
            href = i.get('href')
            title = i.string
            href = url + href
            if href.find('html') == -1:
                pass
            else:
                if collection.find_one({'Link': href}) is None:
                    response = requests.get(href, headers=headers).content
                    r = BeautifulSoup(response, 'lxml', from_encoding='utf-8')
                    content = r.find_all(id="content")[0]
                    content = str(content).replace('<br/>', '\n')
                    content = content.replace('<div id="content">', '')
                    mail_host = "smtp.gmail.com"
                    mail_user = "Email account"
                    mail_pass = "Password"
                    sender = 'Same as mail_user'
                    receivers = ['Receiver email address']
                    message = MIMEText(content, 'plain', 'utf-8')
                    message['From'] = "{}".format(sender)
                    message['To'] = ",".join(receivers)
                    message['Subject'] = title
                    try:
                        smtpObj = smtplib.SMTP_SSL(mail_host, 465)
                        smtpObj.login(mail_user, mail_pass)
                        smtpObj.sendmail(sender, receivers, message.as_string())
                        print("mail has been sent successfully.")
                    except smtplib.SMTPException as e:
                        print(e)
                    collection.insert_one({'Title': title, 'Link': href})
                    c = c + 1
    if c == 0:
        print("running " + str(t) + " times, no new chapter found")
    else:
        print("running " + str(t) + " times, found new chapter")
        c = 0
    time.sleep(86400)
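The mail-sending block above can be exercised without an SMTP server: the `MIMEText` assembly is pure stdlib and the headers can be inspected before `smtplib` ever connects. A small sketch of that assembly (the addresses here are placeholders, as in the script itself):

```python
from email.mime.text import MIMEText

title = "Chapter 1"
content = "chapter text..."
sender = "sender@example.com"        # placeholder, stands in for mail_user
receivers = ["reader@example.com"]   # placeholder receiver list

# build the message exactly as the crawler does
message = MIMEText(content, 'plain', 'utf-8')
message['From'] = "{}".format(sender)
message['To'] = ",".join(receivers)
message['Subject'] = title

print(message['Subject'])  # Chapter 1
```

Inspecting the message this way is a cheap check that the subject and recipient headers are populated before adding real credentials.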
# ============================================================================
# storops/vnx/xmlapi_parser.py  (storops, Apache-2.0)
# ============================================================================
# coding=utf-8
# Copyright (c) 2015 EMC Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import unicode_literals

import logging
import re

import six
from xml import etree

__author__ = 'Jay Xu'

log = logging.getLogger(__name__)


class XMLAPIParser(object):
    def __init__(self):
        # The following Boolean acts as the flag for the common sub-element.
        # For instance:
        #     <CifsServers>
        #         <li> server_1 </li>
        #     </CifsServers>
        #     <Alias>
        #         <li> interface_1 </li>
        #     </Alias>
        self.tag = None
        self.elt = {}
        self.stack = []

    @staticmethod
    def _delete_ns(tag):
        i = tag.find('}')
        if i >= 0:
            tag = tag[i + 1:]
        return tag

    def parse(self, xml):
        result = {
            'type': None,
            'taskId': None,
            'maxSeverity': None,
            'objects': [],
            'problems': [],
        }

        events = ("start", "end")
        context = etree.ElementTree.iterparse(six.BytesIO(xml.encode('utf-8')),
                                              events=events)
        for action, elem in context:
            self.tag = self._delete_ns(elem.tag)
            func = self._get_func(action, self.tag)
            self.track_stack(action, elem)
            if func in vars(XMLAPIParser):
                if action == 'start':
                    eval('self.' + func)(elem, result)
                elif action == 'end':
                    eval('self.' + func)(elem, result)
        return result

    def track_stack(self, action, elem):
        if action == 'start':
            self.stack.append(elem)
        elif action == 'end':
            self.stack.pop()

    @staticmethod
    def _get_func(action, tag):
        if tag == 'W2KServerData':
            return action + '_' + 'w2k_server_data'
        temp_list = re.sub(r"([A-Z])", r" \1", tag).split()
        if temp_list:
            func_name = action + '_' + '_'.join(temp_list)
        else:
            func_name = action + '_' + tag
        return func_name.lower()

    @staticmethod
    def _copy_property(source, target):
        for key in source:
            target[key] = source[key]

    @classmethod
    def _append_elm_property(cls, elm, result, identifier):
        for obj in result['objects']:
            if cls.has_identifier(obj, elm, identifier):
                for key, value in elm.attrib.items():
                    obj[key] = value

    @staticmethod
    def has_identifier(obj, elm, identifier):
        return (identifier in obj and
                identifier in elm.attrib and
                elm.attrib[identifier] == obj[identifier])

    def _append_element(self, elm, result, identifier):
        sub_elm = {}
        self._copy_property(elm.attrib, sub_elm)

        for obj in result['objects']:
            if self.has_identifier(obj, elm, identifier):
                if self.tag in obj:
                    obj[self.tag].append(sub_elm)
                else:
                    obj[self.tag] = [sub_elm]

    def start_task_response(self, elm, result):
        result['type'] = 'TaskResponse'
        self._copy_property(elm.attrib, result)

    @staticmethod
    def start_fault(_, result):
        result['type'] = 'Fault'

    def _parent_tag(self):
        if len(self.stack) >= 2:
            parent = self.stack[-2]
            ret = self._delete_ns(parent.tag)
        else:
            ret = None
        return ret

    def start_status(self, elm, result):
        parent_tag = self._parent_tag()
        if parent_tag == 'TaskResponse':
            result['maxSeverity'] = elm.attrib['maxSeverity']
        elif parent_tag in ['MoverStatus', 'Vdm', 'MoverHost']:
            self.elt['maxSeverity'] = elm.attrib['maxSeverity']

    def start_query_status(self, elm, result):
        result['type'] = 'QueryStatus'
        self._copy_property(elm.attrib, result)

    def start_problem(self, elm, result):
        self.elt = {}
        self._copy_property(elm.attrib, self.elt)
        result['problems'].append(self.elt)

    def start_description(self, elm, _):
        self.elt['Description'] = elm.text

    def start_action(self, elm, _):
        self.elt['Action'] = elm.text

    def start_diagnostics(self, elm, _):
        self.elt['Diagnostics'] = elm.text

    def start_file_system(self, elm, result):
        self._as_object(elm, result)

    def start_file_system_capacity_info(self, elm, result):
        identifier = 'fileSystem'
        self._append_elm_property(elm, result, identifier)

    def start_storage_pool(self, elm, result):
        self._as_object(elm, result)

    def start_system_storage_pool_data(self, elm, _):
        self._copy_property(elm.attrib, self.elt)

    def start_mover(self, elm, result):
        self._as_object(elm, result)

    def start_mover_host(self, elm, result):
        self._as_object(elm, result)

    def start_nfs_export(self, elm, result):
        self._as_object(elm, result)

    def _as_object(self, elm, result):
        self.elt = {}
        self._copy_property(elm.attrib, self.elt)
        result['objects'].append(self.elt)

    def start_mover_status(self, elm, result):
        identifier = 'mover'
        self._append_elm_property(elm, result, identifier)

    def start_mover_route(self, elm, result):
        self._append_element(elm, result, 'mover')

    def start_mover_deduplication_settings(self, elm, result):
        self._append_element(elm, result, 'mover')

    def start_mover_dns_domain(self, elm, result):
        self._append_element(elm, result, 'mover')

    def start_mover_interface(self, elm, result):
        self._append_element(elm, result, 'mover')

    def start_logical_network_device(self, elm, result):
        self._append_element(elm, result, 'mover')

    def start_vdm(self, elm, result):
        self._as_object(elm, result)

    def _add_element(self, name, item):
        if name not in self.elt:
            self.elt[name] = []
        self.elt[name].append(item)

    def start_li(self, elm, _):
        parent_tag = self._parent_tag()
        host_nodes = ('AccessHosts', 'RwHosts', 'RoHosts', 'RootHosts')
        if parent_tag == 'CifsServers':
            self._add_element('CifsServers', elm.text)
        elif parent_tag == 'Aliases':
            self._add_element('Aliases', elm.text)
        elif parent_tag == 'Interfaces':
            self._add_element('Interfaces', elm.text)
        elif parent_tag in host_nodes:
            if parent_tag not in self.elt:
                self.elt[parent_tag] = []
            self.elt[parent_tag].append(elm.text)

    def start_cifs_server(self, elm, result):
        self._as_object(elm, result)

    def start_w2k_server_data(self, elm, _):
        self._copy_property(elm.attrib, self.elt)

    def start_cifs_share(self, elm, result):
        self._as_object(elm, result)

    def start_checkpoint(self, elm, result):
        self._as_object(elm, result)

    def start_ro_file_system_hosts(self, elm, _):
        self._copy_property(elm.attrib, self.elt)

    def start_standalone_server_data(self, elm, _):
        self._copy_property(elm.attrib, self.elt)

    def start_fibre_channel_device_data(self, elm, _):
        self._copy_attrib_to_parent(elm)

    def start_network_device_data(self, elm, _):
        self._copy_attrib_to_parent(elm)

    def _copy_attrib_to_parent(self, elm):
        if len(self.stack) >= 2:
            parent = self.stack[-2]
            for k, v in elm.attrib.items():
                parent.attrib[k] = v

    def start_mover_motherboard(self, elm, result):
        self._append_element(elm, result, 'moverHost')

    def end_physical_device(self, elm, result):
        self._append_element(elm, result, 'moverHost')

    def start_fc_descriptor(self, elm, result):
        self._append_element(elm, result, 'moverHost')

    def start_mount(self, elm, result):
        self._as_object(elm, result)
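`_get_func` maps each XML tag to a handler name by splitting the CamelCase tag on capital letters and lower-casing the result; that is how a `<MoverStatus>` start event dispatches to `start_mover_status`. The same regex-based conversion can be checked standalone (a stand-alone copy of the logic, not an import from storops):

```python
import re

def handler_name(action, tag):
    # split the CamelCase tag into words at each capital letter
    temp_list = re.sub(r"([A-Z])", r" \1", tag).split()
    if temp_list:
        func_name = action + '_' + '_'.join(temp_list)
    else:
        func_name = action + '_' + tag
    return func_name.lower()

print(handler_name('start', 'MoverStatus'))  # start_mover_status
```

This is also why `W2KServerData` needs a special case in `_get_func`: the digit-bearing acronym would otherwise split into `w2_k_server_data`.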
# ============================================================================
# hello.py  (pangtouyu/flasky4a, MIT)
# ============================================================================
from flask import Flask, render_template
from flask.ext.script import Manager
from flask.ext.bootstrap import Bootstrap
from flask.ext.moment import Moment
from flask.ext.wtf import Form
from wtforms import StringField, SubmitField
from wtforms.validators import Required

app = Flask(__name__)
app.config['SECRET_KEY'] = 'hard to guess string'

manager = Manager(app)
bootstrap = Bootstrap(app)
moment = Moment(app)


class NameForm(Form):
    name = StringField('What is your name?', validators=[Required()])
    submit = SubmitField('Submit')


@app.errorhandler(404)
def page_not_found(e):
    return render_template('404.html'), 404


@app.errorhandler(500)
def internal_server_error(e):
    return render_template('500.html'), 500


@app.route('/', methods=['GET', 'POST'])
def index():
    name = None
    form = NameForm()
    if form.validate_on_submit():
        name = form.name.data
        form.name.data = ''
    return render_template('index.html', form=form, name=name)


if __name__ == '__main__':
    manager.run()
# ============================================================================
# crawler.py  (langzeyu/book-crawler, MIT)
# ============================================================================
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import os
import hashlib
import urllib2
import urlparse
import zipfile
import logging
import re
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), 'lib'))

from tornado import template
from BeautifulSoup import BeautifulSoup
from scrapy.selector import HtmlXPathSelector
from rules import *
import codecs
import encodings

encodings.aliases.aliases['gb2312'] = 'gb18030'
encodings.aliases.aliases['gbk'] = 'gb18030'

user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.44 Safari/534.24"
tpl_dir = os.path.join(os.path.dirname(__file__), 'tpl')
data_dir = os.path.join(os.path.dirname(__file__), 'data')

iswindows = 'win32' in sys.platform.lower() or 'win64' in sys.platform.lower()
isosx = 'darwin' in sys.platform.lower()
isfreebsd = 'freebsd' in sys.platform.lower()
islinux = not(iswindows or isosx or isfreebsd)

if iswindows:
    kindlegen = os.path.join(os.path.dirname(__file__), 'lib', 'kindlegen.exe')
else:
    kindlegen = os.path.join(os.path.dirname(__file__), 'lib', 'kindlegen')


def get_contents(url, referer=None):
    """docstring for get_contents"""
    try:
        req = urllib2.Request(url)
        req.add_header('User-Agent', user_agent)
        if referer is not None:
            req.add_header('Referer', referer)
        response = urllib2.urlopen(req)
        data = response.read()
        response.close()
        response = None
        del response
        return data
    except Exception, e:
        logging.error("get %s failed!" % url)
        return None


class Book(object):
    """docstring for Book"""
    id = ""
    name = ""
    author = ''
    cover = None
    chapter_list = []
    source = ""
    book_dir = ""
    chapters_dir = ""
    index = 1

    def __init__(self, source=None, id=None):
        if id is None:
            self.id = hashlib.sha1(source).hexdigest()
        else:
            self.id = id
        self.source = source
        self.book_dir = os.path.join(data_dir, self.id)
        self.chapters_dir = os.path.join(self.book_dir, 'chapters')
        if os.path.isfile(os.path.join(self.book_dir, 'bookname')):
            fp = open(os.path.join(self.book_dir, 'bookname'), 'r')
            self.name = fp.read()
            fp.close()

    def init(self):
        """docstring for init"""
        if os.path.isdir(self.book_dir) is False:
            os.mkdir(self.book_dir)
        if os.path.isdir(self.chapters_dir) is False:
            os.mkdir(self.chapters_dir)

    def lock(self):
        """docstring for lock"""
        fp = open(os.path.join(self.book_dir, 'lock'), 'w')
        fp.write('lock')
        fp.close()

    def unlock(self):
        """docstring for unlock"""
        lock_file = os.path.join(self.book_dir, 'lock')
        if os.path.isfile(lock_file):
            os.unlink(lock_file)

    def set_name(self, name):
        self.name = name
        self.render("bookname", os.path.join(self.book_dir, "bookname"), name=self.name)

    @property
    def is_lock(self):
        """docstring for is_lock"""
        return os.path.isfile(os.path.join(self.book_dir, 'lock'))

    @property
    def is_exists(self):
        """docstring for exists"""
        return os.path.isdir(self.book_dir)

    @property
    def is_ready(self):
        """docstring for is_ready"""
        if os.path.isfile(os.path.join(self.book_dir, "book.mobi")) \
                and os.path.isfile(os.path.join(self.book_dir, "book.epub")):
            return True
        else:
            return False

    @property
    def mobi(self):
        mobi = os.path.join(self.book_dir, "book.mobi")
        if os.path.isfile(mobi):
            return os.path.join(self.book_dir, "book.mobi")  # "%s/book.mobi" % self.id
        else:
            return None

    @property
    def epub(self):
        """docstring for get_epub"""
        epub = os.path.join(self.book_dir, "book.epub")
        if os.path.isfile(epub):
            return os.path.join(self.book_dir, "book.epub")  # "%s/book.epub" % self.id
        else:
            return None

    def add_chapter(self, title, content, source=None, index=None):
        """docstring for fname"""
        if index is None:
            index = self.index
            self.index += 1
        file_name = "chapter_%04d.html" % index
        self.render("chapter.html", os.path.join(self.chapters_dir, file_name), title=title, content=content)
        self.chapter_list.append({
            'index': index,
            'title': title,
            'file': file_name
        })

    def collect(self):
        pass

    def create(self):
        """docstring for create_index"""
        self.render("toc.html", os.path.join(self.book_dir, "toc.html"), title=self.name, chapters=self.chapter_list)
        self.render("toc.ncx", os.path.join(self.book_dir, "toc.ncx"), title=self.name, chapters=self.chapter_list)
        self.render("content.opf", os.path.join(self.book_dir, "content.opf"), title=self.name, author=self.author, chapters=self.chapter_list, cover=self.cover)
        self.render("cover.html", os.path.join(self.book_dir, "cover.html"), title=self.name, author=self.author, cover=self.cover)
        self.render("style.css", os.path.join(self.book_dir, "style.css"))

    def render(self, tpl_name, file_name, **kargs):
        """docstring for render"""
        content = self.render_string(tpl_name, id=self.id, **kargs)
        fp = open(file_name, 'w')
        fp.write(content)
        fp.close()

    def render_string(self, tpl_name, **kargs):
        """docstring for render_string"""
        return template.Loader(tpl_dir).load(tpl_name).generate(**kargs)

    def set_cover(self, data):
        """docstring for set_cover"""
        fp = open(os.path.join(self.book_dir, "cover.jpg"), 'w')
        fp.write(data)
        fp.close()
        self.cover = "cover.jpg"

    def toEpub(self):
        """docstring for toEpub"""
        epub_file = os.path.join(self.book_dir, "book.epub")
        zip = zipfile.ZipFile(epub_file, mode='w', compression=zipfile.ZIP_DEFLATED)
        zip.write(os.path.join(tpl_dir, 'mimetype'), 'mimetype', zipfile.ZIP_STORED)
        zip.write(os.path.join(tpl_dir, 'META-INF', 'container.xml'), 'META-INF/container.xml')
        zip.write(os.path.join(tpl_dir, 'style.css'), "style.css")
        zip.write(os.path.join(self.book_dir, 'content.opf'), "content.opf")
        zip.write(os.path.join(self.book_dir, 'cover.html'), "cover.html")
        zip.write(os.path.join(self.book_dir, 'toc.html'), "toc.html")
        zip.write(os.path.join(self.book_dir, 'toc.ncx'), "toc.ncx")
        for chapter in self.chapter_list:
            zip.write(os.path.join(self.chapters_dir, chapter['file']), "chapters/%s" % chapter['file'])
        zip.close()

    def toPdf(self):
        """docstring for toPdf"""
        pass

    def toText(self):
        """docstring for toText"""
        pass

    def toMobi(self):
        """docstring for toMobi"""
        if os.path.isfile(kindlegen):
            os.system('%s %s -o "%s"' % (kindlegen, os.path.join(self.book_dir, "content.opf"), "book.mobi"))


class Crawler(object):
    """docstring for Crawler"""
    remove_tags = ['script', 'object', 'video', 'embed', 'iframe', 'noscript', 'img']
    remove_attrs = ['title', 'width', 'height', 'onclick', 'onload']

    def __init__(self, url):
        self.url = url

    def write_file(self, data, filename='t.html'):
        """docstring for write_file"""
        fp = open(filename, 'w')
        fp.write(data)
        fp.close()

    def collect(self):
        """docstring for run"""
        url_obj = urlparse.urlparse(self.url)
        if url_obj.hostname in Rules:
            rule = Rules[url_obj.hostname]
        else:
            logging.error("rule not found!")
            return

        if 'url_validate' in rule and not re.match(rule['url_validate'], self.url):
            logging.error("url invalid!")
            return

        logging.debug("start collect index...")
        html = get_contents(self.url)
        if html is None:
            logging.error("get %s failed!" % self.url)
            return

        book = Book(self.url)
        if book.is_lock:
            logging.info("book is queueing")
            return
        else:
            book.init()
            # book.lock()

        if 'encoding' in rule:
            encoding = rule['encoding']
        else:
            encoding = 'utf-8'

        soup = BeautifulSoup(html, fromEncoding=encoding)
        html = soup.renderContents('utf-8')
        hxs = HtmlXPathSelector(text=html)
        # self.write_file(html)
        book_name = hxs.select(rule['book_name']).extract()
        if type(book_name) is list:
            book_name = book_name[0].strip()
        elif not book_name:
            book_name = soup.html.head.title.string
        book.set_name(book_name)

        if 'book_author' in rule:
            book_author = hxs.select(rule['book_author']).extract()
            if type(book_author) is list:
                book.author = book_author[0].strip()
            elif type(book_author) is str:
                book.author = book_author

        if 'book_cover' in rule:
            cover_url = hxs.select(rule['book_cover']).extract()
            if type(cover_url) is list:
                cover_url = cover_url[0]
            if cover_url:
                cover_data = get_contents(cover_url, self.url)
                book.set_cover(cover_data)

        if 'chapter_url' in rule:
            if 'book_id' in rule:
                book_id = rule['book_id'](self.url)
                chapter_url = rule['chapter_url'] % book_id
            else:
                chapter_url = hxs.select(rule['chapter_url']).extract()
                if type(chapter_url) is list:
                    chapter_url = chapter_url[0]
            html = get_contents(chapter_url)
            soup = BeautifulSoup(html, fromEncoding=encoding)
            html = soup.renderContents('utf-8')
            hxs = HtmlXPathSelector(text=html)
            chapter_list = hxs.select(rule['chapter_list']).extract()
        else:
            chapter_list = hxs.select(rule['chapter_list']).extract()

        logging.debug("start collect %s" % book.name)
        logging.debug("start collect chapters...")
        for chapter in chapter_list:
            try:
                subsoup = BeautifulSoup(chapter)
                a = subsoup.find('a')
                if a['href'].partition("://")[0] in ('http', 'https'):
                    chapter_url = a['href']
                else:
                    chapter_url = urlparse.urljoin(self.url, a['href'])
                chapter_content = get_contents(chapter_url, self.url)
                if chapter_content is None:
                    continue
                subsoup = BeautifulSoup(chapter_content, fromEncoding=encoding)
                chapter_content = subsoup.renderContents('utf-8')
                hxs = HtmlXPathSelector(text=chapter_content)
                chapter_title = hxs.select(rule['chapter_title']).extract()
                if type(chapter_title) is list:
                    chapter_title = chapter_title[0].strip()
                else:
                    chapter_title = subsoup.html.head.title.string
                if 'chapter_content_url' in rule:
                    chapter_content_url = hxs.select(rule['chapter_content_url']).extract()
                    if type(chapter_content_url) is list and len(chapter_content_url) > 0:
                        chapter_content_url = chapter_content_url[0].strip()
                    # elif not chapter_content_url:
                    else:
                        continue
                    chapter_content = get_contents(chapter_content_url, chapter_url)
                    subsoup = BeautifulSoup(chapter_content, fromEncoding=encoding)
                    chapter_content = subsoup.renderContents('utf-8')
                else:
                    chapter_content = hxs.select(rule['chapter_content']).extract()
                    if type(chapter_content) is list:
                        chapter_content = ''.join(chapter_content).strip()
                if not chapter_content:
                    continue
                if 'chapter_content_filter' in rule:
                    chapter_content = rule['chapter_content_filter'](chapter_content)
                subsoup = BeautifulSoup(chapter_content)
                for tag in subsoup.findAll(self.remove_tags):
                    tag.extract()
                for tag in list(subsoup.findAll(attrs={"style": "display:none"})):
                    tag.extract()
                book.add_chapter(chapter_title, subsoup.renderContents('utf-8'))
            except Exception, e:
                logging.error(e)

        logging.debug("start building...")
        book.create()
        book.toEpub()
        book.toMobi()
        book.unlock()


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s:%(msecs)03d %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M')
    # Crawler("http://read.dangdang.com/book_15062").collect()
    # Crawler("http://read.360buy.com/5033/index.html").collect()
    Crawler("http://www.qidian.com/BookReader/1887208.aspx").collect()
# ============================================================================
# library/test/test_compiler/sbs_code_tests/95_annotation_global.py
# (skybison, CNRI-Python-GPL-Compatible)
# ============================================================================
# Copyright (c) Facebook, Inc. and its affiliates. (http://www.facebook.com)
def f():
    (some_global): int
    print(some_global)
# EXPECTED:
[
    ...,
    LOAD_CONST(Code((1, 0))),
    LOAD_CONST('f'),
    MAKE_FUNCTION(0),
    STORE_NAME('f'),
    LOAD_CONST(None),
    RETURN_VALUE(0),

    CODE_START('f'),
    ~LOAD_CONST('int'),
]
# ============================================================================
# src/chapter2/exercise8.py  (group9BCS1/BCS-2021, MIT)
# ============================================================================
#This program computes compound interest
#Prompt the user to input the initial investment
C = int(input('Enter the initial amount of an investment(C): '))
#Prompt the user to input the yearly rate of interest
r = float(input('Enter the yearly rate of interest(r): '))
#Prompt the user to input the number of years until maturation
t = int(input('Enter the number of years until maturation(t): '))
#Prompt the user to input the number of times the interest is compounded per year
n = int(input('Enter the number of times the interest is compounded per year(n): '))
#This is the formula to compute the compound interest. It is printed to the nearest penny
p = str(round(C * (((1 + (r/n)) ** (t*n))), 2))
#This outputs the compound interest to the nearest penny
print('The final value of the investment to the nearest penny is: ', p)
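The same formula, wrapped in a function (a refactoring for illustration, not part of the exercise), is easy to spot-check against a known result: 1000 at 5% yearly interest compounded monthly for 10 years grows to about 1647.01.

```python
def compound_value(C, r, t, n):
    # C * (1 + r/n)^(t*n), rounded to the nearest penny
    return round(C * (1 + r / n) ** (t * n), 2)

print(compound_value(1000, 0.05, 10, 12))  # 1647.01
```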
# ============================================================================
# misc/kwcheck.py  (zzahti/skytools, 0BSD)
# ============================================================================
#! /usr/bin/env python
import sys
import re

import pkgloader
pkgloader.require('skytools', '3.0')
import skytools.quoting

kwmap = skytools.quoting._ident_kwmap

fn = "/opt/src/pgsql/postgresql/src/include/parser/kwlist.h"
if len(sys.argv) == 2:
    fn = sys.argv[1]

rc = re.compile(r'PG_KEYWORD[(]"(.*)" , \s* \w+ , \s* (\w+) [)]', re.X)
data = open(fn, 'r').read()
full_map = {}
cur_map = {}

print "== new =="
for kw, cat in rc.findall(data):
    full_map[kw] = 1
    if cat == 'UNRESERVED_KEYWORD':
        continue
    if cat == 'COL_NAME_KEYWORD':
        continue
    cur_map[kw] = 1
    if kw not in kwmap:
        print kw, cat
        kwmap[kw] = 1

print "== obsolete =="
kws = kwmap.keys()
kws.sort()
for k in kws:
    if k not in full_map:
        print k, '(not in full_map)'
    elif k not in cur_map:
        print k, '(not in cur_map)'

print "== full list =="
ln = ""
for k in kws:
    ln += '"%s":1, ' % k
    if len(ln) > 70:
        print ln.strip()
        ln = ""
print ln.strip()
# ============================================================================
# tests/main.py  (tdegeus/enstat, MIT)
# ============================================================================
import unittest
from collections import defaultdict

import numpy as np

import enstat.mean


class Test_mean(unittest.TestCase):
    """
    tests
    """

    def test_scalar(self):
        """
        Basic test of "mean" and "std" using a random sample.
        """

        average = enstat.scalar()
        average.add_sample(np.array(1.0))
        self.assertFalse(np.isnan(average.mean()))
        self.assertTrue(np.isnan(average.std()))
        average.add_sample(np.array(1.0))
        self.assertFalse(np.isnan(average.mean()))
        self.assertFalse(np.isnan(average.std()))

    def test_scalar_division(self):
        """
        Check for zero division.
        """

        average = enstat.scalar()
        a = np.random.random(50 * 20).reshape(50, 20)

        for i in range(a.shape[0]):
            average.add_sample(a[i, :])

        self.assertTrue(np.isclose(average.mean(), np.mean(a)))
        self.assertTrue(np.isclose(average.std(), np.std(a), rtol=1e-3))

    def test_static(self):
        """
        Basic test of "mean" and "std" using a random sample.
        """

        average = enstat.static()
        a = np.random.random(35 * 50 * 20).reshape(35, 50, 20)

        for i in range(a.shape[0]):
            average.add_sample(a[i, :, :])

        self.assertTrue(np.allclose(average.mean(), np.mean(a, axis=0)))
        self.assertTrue(np.allclose(average.std(), np.std(a, axis=0), rtol=5e-1, atol=1e-3))
        self.assertTrue(average.shape() == a.shape[1:])
        self.assertTrue(average.size() == np.prod(a.shape[1:]))

    def test_static_ravel(self):
        """
        Like :py:func:`test_static` but with a test of `ravel`.
        """

        arraylike = enstat.static()
        scalar = enstat.scalar()
        a = np.random.random(35 * 50 * 20).reshape(35, 50, 20)

        for i in range(a.shape[0]):
            arraylike.add_sample(a[i, :, :])
            scalar.add_sample(a[i, :, :])

        flat = arraylike.ravel()

        self.assertTrue(np.allclose(flat.mean(), np.mean(a)))
        self.assertTrue(np.allclose(flat.std(), np.std(a), rtol=5e-1, atol=1e-3))
        self.assertTrue(np.allclose(flat.mean(), scalar.mean()))
        self.assertTrue(np.allclose(flat.std(), scalar.std(), rtol=5e-1, atol=1e-3))

    def test_static_division(self):
        """
        Check for zero division.
        """

        average = enstat.static()
        average.add_sample(np.array([1.0]))
        self.assertFalse(np.isnan(average.mean()))
        self.assertTrue(np.isnan(average.std()))
        average.add_sample(np.array([1.0]))
        self.assertFalse(np.isnan(average.mean()))
        self.assertFalse(np.isnan(average.std()))

    def test_static_mask(self):

        average = enstat.static()
        a = np.random.random(35 * 50 * 20).reshape(35, 50, 20)
        m = np.random.random(35 * 50 * 20).reshape(35, 50, 20) > 0.8

        for i in range(a.shape[0]):
            average.add_sample(a[i, :, :], m[i, :, :])

        self.assertTrue(
            np.isclose(
                np.sum(average.first()) / np.sum(average.norm()),
                np.mean(a[np.logical_not(m)]),
            )
        )
        self.assertTrue(
            np.isclose(
                np.sum(average.first()) / np.sum(average.norm()),
                np.mean(a[np.logical_not(m)]),
            )
        )
)
)
self.assertTrue(np.all(np.equal(average.norm(), np.sum(np.logical_not(m), axis=0))))
def test_dynamic1d(self):
average = enstat.dynamic1d()
average.add_sample(np.array([1, 2, 3]))
average.add_sample(np.array([1, 2, 3]))
average.add_sample(np.array([1, 2]))
average.add_sample(np.array([1]))
self.assertTrue(np.allclose(average.mean(), np.array([1, 2, 3])))
self.assertTrue(np.allclose(average.std(), np.array([0, 0, 0])))
self.assertEqual(average.shape(), (3,))
self.assertEqual(average.size(), 3)
class Test_defaultdict(unittest.TestCase):
"""
functionality
"""
def test_scalar(self):
average = defaultdict(enstat.scalar)
a = np.random.random(50 * 20).reshape(50, 20)
b = np.random.random(52 * 21).reshape(52, 21)
for i in range(a.shape[0]):
average["a"].add_sample(a[i, :])
for i in range(b.shape[0]):
average["b"].add_sample(b[i, :])
self.assertTrue(np.isclose(average["a"].mean(), np.mean(a)))
self.assertTrue(np.isclose(average["b"].mean(), np.mean(b)))
def test_static(self):
average = defaultdict(enstat.static)
a = np.random.random(35 * 50 * 20).reshape(35, 50, 20)
b = np.random.random(37 * 52 * 21).reshape(37, 52, 21)
for i in range(a.shape[0]):
average["a"].add_sample(a[i, :, :])
for i in range(b.shape[0]):
average["b"].add_sample(b[i, :, :])
self.assertTrue(np.allclose(average["a"].mean(), np.mean(a, axis=0)))
self.assertTrue(np.allclose(average["b"].mean(), np.mean(b, axis=0)))
self.assertTrue(average["a"].shape() == a.shape[1:])
self.assertTrue(average["b"].shape() == b.shape[1:])
if __name__ == "__main__":
unittest.main()
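The tests above treat `enstat.scalar` as a streaming accumulator: samples are added one at a time and `mean()`/`std()` are queried afterwards. A minimal standalone sketch of that accumulator idea (not enstat's actual implementation; the class and attribute names are illustrative):

```python
class RunningScalar:
    """Accumulate count, sum, and sum of squares; derive mean and variance."""

    def __init__(self):
        self.norm = 0       # number of samples seen
        self.first = 0.0    # running sum
        self.second = 0.0   # running sum of squares

    def add_sample(self, value):
        self.norm += 1
        self.first += value
        self.second += value * value

    def mean(self):
        return self.first / self.norm

    def variance(self):
        # unbiased sample variance; defined once there are >= 2 samples
        m = self.mean()
        return (self.second - self.norm * m * m) / (self.norm - 1)


avg = RunningScalar()
for v in [1.0, 2.0, 3.0, 4.0]:
    avg.add_sample(v)
print(avg.mean())      # 2.5
print(avg.variance())  # 5/3 ~= 1.6667
```

Because only three numbers are stored, the accumulator never needs to hold the whole sample in memory, which is the point of the streaming interface the tests exercise.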
71351ec8e090e943f24c2a7631465b81d875953c | 1,570 | py | Python | build/lib/longform/models.py | mattlinares/longform | 3539948a10a0c1e52debaf796b6f9f6e050f94d3 | [
"MIT"
] | null | null | null | build/lib/longform/models.py | mattlinares/longform | 3539948a10a0c1e52debaf796b6f9f6e050f94d3 | [
"MIT"
] | null | null | null | build/lib/longform/models.py | mattlinares/longform | 3539948a10a0c1e52debaf796b6f9f6e050f94d3 | [
"MIT"
] | null | null | null |

from django.db import models
from django.conf import settings

from modelcluster.fields import ParentalKey

from wagtail.core.models import Page, Orderable
from wagtail.core.fields import StreamField
from wagtail.wagtailadmin.edit_handlers import (
    FieldPanel, StreamFieldPanel, InlinePanel, PageChooserPanel
)
from wagtail.wagtailimages.edit_handlers import ImageChooserPanel
from wagtail.wagtailsearch import index

from .blocks import LongformBlock


class LongformPage(Page):
    body = StreamField(LongformBlock())
    introduction = models.CharField(max_length=255)
    background_image = models.ForeignKey(
        settings.WAGTAILIMAGES_IMAGE_MODEL,
        null=True, blank=True,
        on_delete=models.SET_NULL,
        related_name='+',
    )

    search_fields = Page.search_fields + [
        index.SearchField('body'),
    ]

    content_panels = Page.content_panels + [
        FieldPanel('introduction'),
        ImageChooserPanel('background_image'),
        StreamFieldPanel('body'),
    ]

    subpage_types = []

    def get_template(self, request, *args, **kwargs):
        if request.is_ajax():
            return self.ajax_template or self.template
        else:
            return 'longform/longform_page.html'

    def get_context(self, request):
        context = super().get_context(request)
        if request.GET.get('accessible'):
            context.update(render_accessible=True)
        return context

    class Meta:
        verbose_name = "Longform Page"
        verbose_name_plural = "Longform Pages"
        abstract = True
713f8728f170e98a510f966331e3baefa1955731 | 1,930 | py | Python | plugin.program.super.favourites/viewer.py | TheWardoctor/wardoctors-repo | 893f646d9e27251ffc00ca5f918e4eb859a5c8f0 | [
"Apache-2.0"
] | 1 | 2019-03-05T09:37:15.000Z | 2019-03-05T09:37:15.000Z | plugin.program.super.favourites/viewer.py | TheWardoctor/wardoctors-repo | 893f646d9e27251ffc00ca5f918e4eb859a5c8f0 | [
"Apache-2.0"
] | null | null | null | plugin.program.super.favourites/viewer.py | TheWardoctor/wardoctors-repo | 893f646d9e27251ffc00ca5f918e4eb859a5c8f0 | [
"Apache-2.0"
] | 1 | 2021-11-05T20:48:09.000Z | 2021-11-05T20:48:09.000Z |

#
# Copyright (C) 2014-2015
# Sean Poyser (seanpoyser@gmail.com)
#
# This Program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This Program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with XBMC; see the file COPYING. If not, write to
# the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
# http://www.gnu.org/copyleft/gpl.html
#

import xbmcgui
import xbmcaddon


class Viewer(xbmcgui.WindowXMLDialog):
    ACTION_EXIT = (9, 10, 247, 275, 61467, 216, 257, 61448)

    def __init__(self, *args, **kwargs):
        pass

    def onInit(self):
        self.getControl(1000).setImage(self.fanart)
        self.getControl(1100).setImage(self.thumb)
        self.setFocus(self.getControl(2000))

    def onClick(self, controlId):
        self.close()

    def onFocus(self, controlId):
        pass

    def onAction(self, action):
        actionId = action.getId()
        buttonId = action.getButtonCode()

        if buttonId in self.ACTION_EXIT or actionId in self.ACTION_EXIT:
            self.close()

        if actionId != 107:
            self.close()


def show(fanart, thumb, addon=None):
    try:
        if addon:
            path = xbmcaddon.Addon(addon).getAddonInfo('path')
        else:
            path = xbmcaddon.Addon().getAddonInfo('path')

        v = Viewer('viewer.xml', path, 'Default')
        v.fanart = fanart
        v.thumb = thumb
        v.doModal()
        del v
    except:
        pass
852f1cbba9e7736017df5f793b0b98d952024c1d | 677 | py | Python | src/namegen.py | Advik-B/Data-Generator | d7e21b3140b51ad5a8fc980777b10905bb6a9f2a | [
"MIT"
] | 1 | 2022-01-08T15:35:11.000Z | 2022-01-08T15:35:11.000Z | src/namegen.py | Advik-B/Data-Generator | d7e21b3140b51ad5a8fc980777b10905bb6a9f2a | [
"MIT"
] | null | null | null | src/namegen.py | Advik-B/Data-Generator | d7e21b3140b51ad5a8fc980777b10905bb6a9f2a | [
"MIT"
] | null | null | null |

import random


class Human_:
    """Base class for humans"""

    def setrandomgender(self, allow_other=False, **kwargs):
        if type(kwargs.get('custom_gender')) != dict:
            self.base_genders = [{'Male': ['He', 'Him']}, {'Female': ['She', 'Her']}]
        elif type(kwargs.get('custom_gender')) == dict:
            tmp = kwargs.get('custom_gender')
            if len(list(tmp)) == 1:
                self.gender = list(tmp)[0]
            else:
                self.gender = random.choice(list(tmp))
            self.refer_to_me_as = tmp[self.gender]


class Human(Human_):
    def __init__(self):
        self.age = random.randint(1, 101)
        self.gender  # note: bare attribute access; gender is only assigned by setrandomgender()
8532bab920d1825b8422ff7fb44acb95b58ad085 | 622 | gyp | Python | binding.gyp | dimshik100/Epoc.js | 240950392974d398776f76f08a91cfd71b095487 | [
"MIT"
] | 799 | 2016-07-31T19:54:59.000Z | 2022-03-21T05:55:15.000Z | binding.gyp | dimshik100/Epoc.js | 240950392974d398776f76f08a91cfd71b095487 | [
"MIT"
] | 8 | 2016-11-03T09:09:28.000Z | 2019-09-01T17:07:27.000Z | binding.gyp | dimshik100/Epoc.js | 240950392974d398776f76f08a91cfd71b095487 | [
"MIT"
] | 54 | 2017-05-23T19:36:33.000Z | 2021-12-29T15:11:12.000Z |

{
  "targets": [
    {
      "target_name": "index",
      "sources": [ "epoc.cc" ],
      "include_dirs": [
        "<!(node -e \"require('nan')\")"
      ],
      "conditions": [
        ['OS=="mac"', {
          "cflags": [ "-m64" ],
          "ldflags": [ "-m64" ],
          "xcode_settings": {
            "OTHER_CFLAGS": ["-ObjC++"],
            "ARCHS": [ "x86_64" ]
          },
          "link_settings": {
            "libraries": [
              "/Library/Frameworks/edk.framework/edk"
            ],
            "include_dirs": ["./lib/includes/", "./lib/"]
          }
        }]
      ]
    }
  ]
}
8532c99e8a8433241294b42f1d47108571d2bbf0 | 983 | py | Python | tokens.py | manuel-io/minicloud | bdf6a08709f5c9d0d92caddc1c719c438b6970ce | [
"MIT"
] | 1 | 2021-01-15T02:10:32.000Z | 2021-01-15T02:10:32.000Z | tokens.py | manuel-io/minicloud | bdf6a08709f5c9d0d92caddc1c719c438b6970ce | [
"MIT"
] | 3 | 2020-06-13T00:02:13.000Z | 2021-09-12T13:20:16.000Z | tokens.py | manuel-io/minicloud | bdf6a08709f5c9d0d92caddc1c719c438b6970ce | [
"MIT"
] | null | null | null |

import sys, psycopg2, psycopg2.extras


def generate(user_id, db):
    try:
        with db.cursor(cursor_factory=psycopg2.extras.DictCursor) as cursor:
            cursor.execute("""
              INSERT INTO minicloud_auths (user_id) VALUES (%s) RETURNING token
            """, [int(user_id)])

            data = cursor.fetchone()
            db.commit()
            return data['token']

    except Exception as e:
        db.rollback()
        # bug fix: sys.stderr is a file object, not callable; use its write() method
        sys.stderr.write('%s\n' % 'No token generated!')
        return None


def revoke(user_id, db):
    try:
        with db.cursor(cursor_factory=psycopg2.extras.DictCursor) as cursor:
            cursor.execute("""
              DELETE FROM minicloud_auths WHERE user_id = %s
            """, [user_id])

            db.commit()
            sys.stderr.write('X-Auth-Token revoked\n')
            return True

    except Exception as e:
        db.rollback()
        sys.stderr.write('Revoke X-Auth-Token failed: %s\n' % str(e))
        return False
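Both functions follow the same shape: do the work through a cursor, `commit()` on success, `rollback()` and report on failure. A self-contained sketch of that commit/rollback pattern using the stdlib `sqlite3` module instead of psycopg2 (the table name and token scheme here are made up for the example, not minicloud's actual schema):

```python
import sqlite3


def generate_token(user_id, db):
    """Insert a row with a random token and return it; None on failure."""
    try:
        cur = db.cursor()
        cur.execute(
            "INSERT INTO auths (user_id, token) VALUES (?, lower(hex(randomblob(16))))",
            (int(user_id),),
        )
        cur.execute("SELECT token FROM auths WHERE rowid = ?", (cur.lastrowid,))
        token = cur.fetchone()[0]
        db.commit()          # success: make the insert durable
        return token
    except Exception:
        db.rollback()        # failure: leave the database untouched
        return None


db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE auths (user_id INTEGER, token TEXT)")
token = generate_token(7, db)
print(len(token))  # 32 hex characters (16 random bytes)
```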
85340618b43f73553925b20a11d7ffb839d7dd94 | 903 | py | Python | gorden_crawler/spiders/item_lacoste.py | Enmming/gorden_cralwer | 3c279e4f80eaf90f3f03acd31b75cf991952adee | [
"Apache-2.0"
] | 2 | 2019-02-22T13:51:08.000Z | 2020-08-03T14:01:30.000Z | gorden_crawler/spiders/item_lacoste.py | Enmming/gorden_cralwer | 3c279e4f80eaf90f3f03acd31b75cf991952adee | [
"Apache-2.0"
] | null | null | null | gorden_crawler/spiders/item_lacoste.py | Enmming/gorden_cralwer | 3c279e4f80eaf90f3f03acd31b75cf991952adee | [
"Apache-2.0"
] | 1 | 2020-08-03T14:01:32.000Z | 2020-08-03T14:01:32.000Z |

# -*- coding: utf-8 -*-
from scrapy.spiders import Spider
from scrapy.selector import Selector
from gorden_crawler.items import BaseItem, ImageItem, SkuItem, Color
from scrapy import Request
from scrapy_redis.spiders import RedisSpider
import re
import execjs
import json
from gorden_crawler.spiders.shiji_base import ItemSpider
from gorden_crawler.spiders.lacoste import LacosteSpider, LacosteBaseSpider


class ItemLacosteSpider(ItemSpider, LacosteBaseSpider):
    name = "item_lacoste"
    allowed_domains = ["lacoste.com"]
    # In production start_urls stays empty; the spider is fed URLs through redis
    start_urls = []
    base_url = 'http://www.lacoste.com'

    '''Concrete parsing rules'''
    def parse(self, response):
        itemB = {}
        itemB['type'] = 'base'
        itemB['from_site'] = 'lacoste'
        itemB['url'] = response.url
        # ls = LacosteSpider()

        return self.handle_parse_item(response, itemB)
8536fa3da81b7b0e167c676d3c1b9ae1a7592c0b | 662 | py | Python | Server/app/schema/mutations/user/register.py | Team-SeeTo/SeeTo-Backend | 19990cd6f4895e773eaa504f7b7a07ddbb5856e5 | [
"Apache-2.0"
] | 4 | 2018-06-18T06:50:12.000Z | 2018-11-15T00:08:24.000Z | Server/app/schema/mutations/user/register.py | Team-SeeTo/SeeTo-Backend | 19990cd6f4895e773eaa504f7b7a07ddbb5856e5 | [
"Apache-2.0"
] | null | null | null | Server/app/schema/mutations/user/register.py | Team-SeeTo/SeeTo-Backend | 19990cd6f4895e773eaa504f7b7a07ddbb5856e5 | [
"Apache-2.0"
] | null | null | null |
import graphene

from app.models import User


class RegisterMutation(graphene.Mutation):
    class Arguments(object):
        email = graphene.String()
        username = graphene.String()
        password = graphene.String()

    is_success = graphene.Boolean()
    message = graphene.String()

    @staticmethod
    def mutate(root, info, **kwargs):
        try:
            new_user = User(**kwargs)
            new_user.save()
        except Exception as e:
            print(str(e))
            return RegisterMutation(is_success=False, message="Registration Failure")

        return RegisterMutation(is_success=True, message="Successfully registered")
853a017f65875a9f0f760904e9bc237db20aee8e | 1,498 | py | Python | usaspending_api/references/migrations/0055_auto_20170319_1841.py | toolness/usaspending-api | ed9a396e20a52749f01f43494763903cc371f9c2 | [
"CC0-1.0"
] | 1 | 2021-06-17T05:09:00.000Z | 2021-06-17T05:09:00.000Z | usaspending_api/references/migrations/0055_auto_20170319_1841.py | toolness/usaspending-api | ed9a396e20a52749f01f43494763903cc371f9c2 | [
"CC0-1.0"
] | null | null | null | usaspending_api/references/migrations/0055_auto_20170319_1841.py | toolness/usaspending-api | ed9a396e20a52749f01f43494763903cc371f9c2 | [
"CC0-1.0"
] | null | null | null |

# -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2017-03-19 18:41
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('references', '0054_auto_20170308_2100'),
    ]

    operations = [
        migrations.CreateModel(
            name='ObjectClass',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('major_object_class', models.CharField(db_index=True, max_length=2)),
                ('major_object_class_name', models.CharField(max_length=100)),
                ('object_class', models.CharField(db_index=True, max_length=3)),
                ('object_class_name', models.CharField(max_length=60)),
                ('direct_reimbursable', models.CharField(blank=True, db_index=True, max_length=1, null=True)),
                ('direct_reimbursable_name', models.CharField(blank=True, max_length=50, null=True)),
                ('create_date', models.DateTimeField(auto_now_add=True, null=True)),
                ('update_date', models.DateTimeField(auto_now=True, null=True)),
            ],
            options={
                'db_table': 'object_class',
                'managed': True,
            },
        ),
        migrations.AlterUniqueTogether(
            name='objectclass',
            unique_together=set([('object_class', 'direct_reimbursable')]),
        ),
    ]
853c2817f044574374a2b109c19369e6b2a57e22 | 557 | py | Python | bioinformatics_stronghold/subs.py | Wytamma/Rosalind | d3ffecc324f806df79c43bc9f89128b5d2a57dfb | [
"MIT"
] | null | null | null | bioinformatics_stronghold/subs.py | Wytamma/Rosalind | d3ffecc324f806df79c43bc9f89128b5d2a57dfb | [
"MIT"
] | null | null | null | bioinformatics_stronghold/subs.py | Wytamma/Rosalind | d3ffecc324f806df79c43bc9f89128b5d2a57dfb | [
"MIT"
] | null | null | null |

SAMPLE_DATASET = """GATATATGCATATACTT
ATAT
"""
SAMPLE_OUTPUT = """2 4 10"""


def kmer_generator(string, n):
    """returns a generator for kmers of length n"""
    # stop at len - n + 1 so every window really has length n
    return (string[i : i + n] for i in range(0, len(string) - n + 1))


def solution(dataset: list) -> str:
    s, t = map(lambda x: x.strip(), dataset)  # clean
    locs = [
        str(i) for i, kmer in enumerate(kmer_generator(s, len(t)), 1) if kmer == t
    ]  # process
    return " ".join(locs)  # report


def test_solution():
    assert solution(SAMPLE_DATASET.splitlines(True)) == SAMPLE_OUTPUT
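`kmer_generator` is a plain sliding window over the string; a quick illustration with short made-up sequences (list form rather than a generator, for easy printing):

```python
def kmers(s, n):
    # every length-n window of s, left to right
    return [s[i:i + n] for i in range(len(s) - n + 1)]


print(kmers("GATAT", 4))  # ['GATA', 'ATAT']
print(kmers("ATAT", 2))   # ['AT', 'TA', 'AT']
```

Enumerating these windows with a 1-based index is exactly how `solution` reports the motif positions.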
8547dd8fc01f4fe84f90c45c7ee6972d940d2598 | 656 | py | Python | py/testdir_single_jvm/test_failswith512chunk.py | gigliovale/h2o | be350f3f2c2fb6f135cc07c41f83fd0e4f521ac1 | [
"Apache-2.0"
] | 882 | 2015-05-22T02:59:21.000Z | 2022-02-17T05:02:48.000Z | py/testdir_single_jvm/test_failswith512chunk.py | VonRosenchild/h2o-2 | be350f3f2c2fb6f135cc07c41f83fd0e4f521ac1 | [
"Apache-2.0"
] | 1 | 2015-01-14T23:54:56.000Z | 2015-01-15T20:04:17.000Z | py/testdir_single_jvm/test_failswith512chunk.py | VonRosenchild/h2o-2 | be350f3f2c2fb6f135cc07c41f83fd0e4f521ac1 | [
"Apache-2.0"
] | 392 | 2015-05-22T17:04:11.000Z | 2022-02-22T09:04:39.000Z |

import unittest, time, sys

# not needed, but in case you move it down to subdir
sys.path.extend(['.','..','../..','py'])
import h2o, h2o_cmd, h2o_import as h2i
import h2o_browse as h2b


class Basic(unittest.TestCase):
    def tearDown(self):
        h2o.check_sandbox_for_errors()

    @classmethod
    def setUpClass(cls):
        h2o.init()

    @classmethod
    def tearDownClass(cls):
        h2o.tear_down_cloud()

    def test_fail1_100x1100(self):
        parseResult = h2i.import_parse(bucket='smalldata', path='fail1_100x11000.csv.gz', schema='put',
            timeoutSecs=60, retryDelaySecs=0.15)


if __name__ == '__main__':
    h2o.unit_main()
855f7cb89772ea3d4d64fcd054158faff330adb1 | 438 | py | Python | dash_oyku/models.py | efebuyuk/jd_intern_project | a8db7ddc8f434d6148b743f6c8c6b071f29e1a41 | [
"MIT"
] | 1 | 2021-06-28T07:20:03.000Z | 2021-06-28T07:20:03.000Z | dash_oyku/models.py | efebuyuk/jd_intern_project | a8db7ddc8f434d6148b743f6c8c6b071f29e1a41 | [
"MIT"
] | null | null | null | dash_oyku/models.py | efebuyuk/jd_intern_project | a8db7ddc8f434d6148b743f6c8c6b071f29e1a41 | [
"MIT"
] | null | null | null |

from django.db import models
from django.db.models import IntegerField, Model, JSONField


class notebook(models.Model):
    name = models.CharField(max_length=500)
    cell_count = models.IntegerField(default=0)
    code_cell_count = models.IntegerField(default=0)
    language = models.CharField(max_length=6)
    dataset = JSONField()
    function = JSONField()
    library = JSONField()
85731618bb69d50a2638e91ab9c3f9baea8598f6 | 950 | py | Python | demo/classification/fast_neural_network/mnist.py | DanielMorales9/LinearRegression | 7c905e5317a2eb4cc3b2cab275bbcec8e9db57f9 | [
"MIT"
] | null | null | null | demo/classification/fast_neural_network/mnist.py | DanielMorales9/LinearRegression | 7c905e5317a2eb4cc3b2cab275bbcec8e9db57f9 | [
"MIT"
] | null | null | null | demo/classification/fast_neural_network/mnist.py | DanielMorales9/LinearRegression | 7c905e5317a2eb4cc3b2cab275bbcec8e9db57f9 | [
"MIT"
] | null | null | null |

from classification import FastNeuralNetwork
import os
import scipy.io as scio
from random import random
from numpy import mean, zeros, reshape, ceil
from numpy.random import shuffle, rand
from chart import NNChart

nnc = NNChart()

path = os.path.dirname(os.path.abspath(__file__))
data = scio.loadmat(path + "/data/data1.mat")
x = data['X']
y = data['y']
y -= 1
y %= 10

num = rand(64) * len(x)
training = x[num.astype(int)]
nnc.display(training)

split = int(round(random() * len(x)))

# Stack the labels as an extra column so one shuffle keeps rows and labels aligned
X = zeros((x.shape[0], x.shape[1] + 1))
X[:, :x.shape[1]] = x
X[:, -1] = y.T
shuffle(X)
x = X[:, :x.shape[1]]
y = X[:, -1]  # bug fix: labels live in the last column of X, not of the feature slice x
y = y.astype(int)

hidden_units = 25
fnn = FastNeuralNetwork()
fnn.fit(x[:split, :], y[:split], hidden_units)
predictions = fnn.predict(x[split:, :])
theta = fnn.model
print "Accuracy: {}%".format(mean(predictions == y[split:]) * 100)
th1 = reshape(theta[:fnn.hl * (fnn.il + 1)], (fnn.hl, (fnn.il + 1)), order='F')
nnc.display(th1[:9, 1:])
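The script relies on a common trick: append `y` as one extra column of `X` so a single `shuffle()` keeps each row's features and label together. A small self-contained illustration of that trick with made-up data (Python 3 / modern NumPy API, unlike the Python 2 script above):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.random((6, 3))          # 6 samples, 3 features
y = np.arange(6)                # label i marks original row i

xy = np.column_stack([x, y])    # labels ride along as the last column
rng.shuffle(xy)                 # shuffles rows only (axis 0)

x_shuffled = xy[:, :-1]
y_shuffled = xy[:, -1].astype(int)

# Every shuffled row still carries its own label
ok = all(np.allclose(x_shuffled[i], x[y_shuffled[i]]) for i in range(6))
print(ok)  # True
```

Shuffling `x` and `y` separately would break this pairing, which is exactly the failure mode the stacked-column approach avoids.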
85783967cfc0d695bb89fab6270b7c499143c02d | 643 | py | Python | python base/mail.py | umaremdy/reishy | 060b26812cac88c0333c914fab409c3ca6647278 | [
"CC-BY-3.0"
] | null | null | null | python base/mail.py | umaremdy/reishy | 060b26812cac88c0333c914fab409c3ca6647278 | [
"CC-BY-3.0"
] | null | null | null | python base/mail.py | umaremdy/reishy | 060b26812cac88c0333c914fab409c3ca6647278 | [
"CC-BY-3.0"
] | null | null | null |

from flask_mail import Mail, Message
from flask import Flask

app = Flask(__name__)

app.config['MAIL_SERVER'] = 'smtp.gmail.com'
app.config['MAIL_PORT'] = 465
app.config['MAIL_USERNAME'] = 'antoprince001@gmail.com'
app.config['MAIL_PASSWORD'] = 'gboyilovemylife100%'
app.config['MAIL_USE_TLS'] = False
app.config['MAIL_USE_SSL'] = True

mail = Mail(app)


@app.route("/")
def index():
    msg = Message('Hello', sender='antoprince001@gmail.com', recipients=['hemuhema2000@gmail.com', 'hinduabisundaram@gmail.com'])
    msg.body = "PATREC Contract created"
    mail.send(msg)
    return "Sent"


if __name__ == '__main__':
    app.run(debug=True)
858477fb81ec87aea406b9c3e37a4cb272d6dd6a | 441 | py | Python | scripts/hw-diags.py | 2Shirt/WizardK | 82a2e7f85c80a52f892c1553e7a45ec0174e7bc6 | [
"MIT"
] | null | null | null | scripts/hw-diags.py | 2Shirt/WizardK | 82a2e7f85c80a52f892c1553e7a45ec0174e7bc6 | [
"MIT"
] | 178 | 2017-11-17T19:14:31.000Z | 2021-12-15T07:43:29.000Z | scripts/hw-diags.py | 2Shirt/WizardK | 82a2e7f85c80a52f892c1553e7a45ec0174e7bc6 | [
"MIT"
] | 1 | 2017-11-17T19:32:36.000Z | 2017-11-17T19:32:36.000Z |

#!/usr/bin/env python3
"""WizardKit: Hardware Diagnostics"""
# pylint: disable=invalid-name
# vim: sts=2 sw=2 ts=2

from docopt import docopt

import wk


if __name__ == '__main__':
  try:
    docopt(wk.hw.diags.DOCSTRING)
  except SystemExit:
    print('')
    wk.std.pause('Press Enter to exit...')
    raise

  try:
    wk.hw.diags.main()
  except SystemExit:
    raise
  except: #pylint: disable=bare-except
    wk.std.major_exception()
8584ae1c5aab7faa348a572830afee0432e91071 | 1,055 | py | Python | slot/w/wand.py | mattkw/dl | 45bfc28ad9ff827045a3734730deb893a2436c09 | [
"Apache-2.0"
] | null | null | null | slot/w/wand.py | mattkw/dl | 45bfc28ad9ff827045a3734730deb893a2436c09 | [
"Apache-2.0"
] | null | null | null | slot/w/wand.py | mattkw/dl | 45bfc28ad9ff827045a3734730deb893a2436c09 | [
"Apache-2.0"
] | null | null | null |

import slot
from slot import *


class wand5b2p2(WeaponBase):
    ele = ['all']
    wt = 'wand'
    att = 470
    s3 = {
        "dmg": 4 * 2.44,
        "sp": 8757,
        "startup": 0.1,
        "recovery": 1.9,
    }


class wand4b1(WeaponBase):
    ele = ['flame', 'wind', 'shadow']
    wt = 'wand'
    att = 372
    s3 = {
        "dmg": 9.84,
        "sp": 8453,
        "startup": 0.1,
        "recovery": 1.9,
    }


class wand5b10(WeaponBase):
    ele = ['flame', 'wind', 'shadow']
    wt = 'wand'
    att = 454
    s3 = {
    }


class wand5b1(WeaponBase):
    ele = ['flame', 'wind', 'shadow']
    wt = 'wand'
    att = 528
    s3 = {
    }


class wand5b2(WeaponBase):
    ele = ['water', 'light']
    wt = 'wand'
    att = 573
    s3 = {
        "dmg": 4 * 2.71,
        "sp": 8757,
        "startup": 0.1,
        "recovery": 1.9,
    }


flame = wand5b1
wind = wand5b1
shadow = wand5b1
water = wand5b2
light = wand5b2
8588525112b3883bbd9b701dec095f2efcbfd29f | 519 | py | Python | tests/server_generator/test_bottle_server_generator.py | mikaeilorfanian/client_server_generator | 784245563d3633396e01ba918197a1fe41a2205b | [
"MIT"
] | null | null | null | tests/server_generator/test_bottle_server_generator.py | mikaeilorfanian/client_server_generator | 784245563d3633396e01ba918197a1fe41a2205b | [
"MIT"
] | 5 | 2017-08-24T13:39:01.000Z | 2017-08-25T07:16:58.000Z | tests/server_generator/test_bottle_server_generator.py | mikaeilorfanian/client_server_generator | 784245563d3633396e01ba918197a1fe41a2205b | [
"MIT"
] | null | null | null |

import requests

from server_generator.server_generator import MagicRouter
from tests.utils import start_test_bottle_app


class HelloNameURLHandler(MagicRouter):
    route_name = 'hello_name'

    def handler(self):
        return 'hello {}'.format(self.name)


def test_setup():
    route = HelloNameURLHandler()
    start_test_bottle_app(route)
    # url = URL('hello_name')
    # url.get()
    r = requests.get('http://localhost:8080/hello/world')
    assert r.status_code == 200
    assert 'hello world' in r.text
85909d74fe38a6ffcecf31ffda0296f5cfcf1a2e | 208 | py | Python | uniqueOccurrences.py | hazardinho/LeetcodeSolutions | 3f7fee882c1cdc83ecf7c9fd05d2a7f1afb130e6 | [
"MIT"
] | 1 | 2020-05-21T09:29:48.000Z | 2020-05-21T09:29:48.000Z | uniqueOccurrences.py | Ich1goSan/LeetcodeSolutions | 3f7fee882c1cdc83ecf7c9fd05d2a7f1afb130e6 | [
"MIT"
] | null | null | null | uniqueOccurrences.py | Ich1goSan/LeetcodeSolutions | 3f7fee882c1cdc83ecf7c9fd05d2a7f1afb130e6 | [
"MIT"
] | null | null | null | from typing import List
def uniqueOccurrences(arr: List[int]) -> bool:
m = {}
for i in arr:
if i in m:
m[i] += 1
else:
m[i] = 1
return len(m.values()) == len(set(m.values())) | 26 | 52 | 0.447115 | 31 | 208 | 3 | 0.580645 | 0.064516 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015504 | 0.379808 | 208 | 8 | 53 | 26 | 0.705426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
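The hand-rolled counting above can be written more compactly with `collections.Counter`; a sketch of the same check:

```python
from collections import Counter

def unique_occurrences(arr):
    # Counter builds the value -> occurrence-count map in one step.
    counts = Counter(arr).values()
    # All counts are unique iff deduplicating them loses nothing.
    return len(counts) == len(set(counts))

print(unique_occurrences([1, 2, 2, 1, 1, 3]))  # True: counts are 3, 2, 1
print(unique_occurrences([1, 2]))              # False: both occur once
```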
859208b5a93d0e79afb42da91a5ef97b75f19467 | 1,559 | py | Python | bonfire_led_pwm_tb.py | bonfireprocessor/bonfire-pwm | 2e28cb5c19454343122fc26589da86f17613f204 | [
"MIT"
] | null | null | null | bonfire_led_pwm_tb.py | bonfireprocessor/bonfire-pwm | 2e28cb5c19454343122fc26589da86f17613f204 | [
"MIT"
] | null | null | null | bonfire_led_pwm_tb.py | bonfireprocessor/bonfire-pwm | 2e28cb5c19454343122fc26589da86f17613f204 | [
"MIT"
] | null | null | null | from myhdl import *
from bonfire_led_pwm import bonfire_led_pwm
#from bonfire_led_pwm4 import bonfire_led_pwm4
from wishbone_bundle import *
from ClkDriver import *
numChannels=4
values=(2,0x00205070,0x00ffffff,0x00000000,0x00deadbe)
@block
def led_pwm_tb():
wb_bus = Wishbone_bundle(True,8,0,32,False,False)
red_v=Signal(intbv(0)[numChannels:])
green_v=Signal(intbv(0)[numChannels:])
blue_v=Signal(intbv(0)[numChannels:])
clock = Signal(bool(0))
reset = ResetSignal(0, active=1, isasync=False)
clk_driver= ClkDriver(clock)
dut=bonfire_led_pwm(wb_bus,red_v,green_v,blue_v,clock,reset,numChannels)
start=Signal(bool(0))
finish=Signal(bool(0))
write_inst=wb_bus.simulation_writer(start,0,values,finish,clock,reset)
@instance
def stimulus():
reset.next=True
yield delay(40)
reset.next=False
yield delay(40)
#Trigger the simulation writer
start.next=True
yield finish
# for i in range(4):
# #print i
# yield wb_sim_write(wb_bus,clock,i,values[i])
yield delay(20)
        for i in range(1):
            for i in range(len(values)):
                yield wb_sim_read(wb_bus,clock,i)
                print(i, wb_bus.db_read)
                if values[i] != wb_bus.db_read:
                    print("Error at address", i)
return instances()
inst=led_pwm_tb()
#inst.convert(hdl='VHDL',name='bonfire_led_pwm_tb',path='tb')
#inst.analyze_convert()
inst.config_sim(trace=True)
inst.run_sim(50000)
| 21.957746 | 76 | 0.652341 | 227 | 1,559 | 4.286344 | 0.370044 | 0.035971 | 0.053443 | 0.040082 | 0.098664 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045264 | 0.234766 | 1,559 | 70 | 77 | 22.271429 | 0.770327 | 0.151379 | 0 | 0.052632 | 0 | 0 | 0.012186 | 0 | 0 | 0 | 0.030465 | 0 | 0 | 0 | null | null | 0 | 0.105263 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
859249876ca19a024249e015e76aec296b5a74be | 699 | py | Python | setup.py | andypbarrett/Sunlight_under_seaice | 2c42c4f562ce27089a1d24cd4ff1740c5b7fa6b6 | [
"MIT"
] | 3 | 2020-10-29T20:21:53.000Z | 2021-08-17T18:52:23.000Z | setup.py | andypbarrett/Sunlight_under_seaice | 2c42c4f562ce27089a1d24cd4ff1740c5b7fa6b6 | [
"MIT"
] | null | null | null | setup.py | andypbarrett/Sunlight_under_seaice | 2c42c4f562ce27089a1d24cd4ff1740c5b7fa6b6 | [
"MIT"
] | 1 | 2021-02-26T17:33:23.000Z | 2021-02-26T17:33:23.000Z | from setuptools import setup
setup(
# Needed to silence warnings (and to be a worthwhile package)
name='sunderseaice',
url='',
author='Andy Barrett',
author_email='andypbarrett@gmail.com',
# Needed to actually package something
packages=setuptools.find_packages(),
# Needed for dependencies
install_requires=['numpy','matplotlib','cartopy','pandas','xarray'],
# *strongly* suggested for sharing
version='0.1',
# The license can be anything you like
license='MIT',
description='Tools for Sunlight Under Sea Ice project',
# We will also need a readme eventually (there will be a warning)
# long_description=open('README.txt').read(),
)
| 33.285714 | 72 | 0.692418 | 87 | 699 | 5.517241 | 0.781609 | 0.033333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003552 | 0.194564 | 699 | 20 | 73 | 34.95 | 0.849023 | 0.426323 | 0 | 0 | 0 | 0 | 0.320611 | 0.05598 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8594791d879179d2b9776fd3591fb68f07fc02f7 | 1,611 | py | Python | src/dx/share/scraper/crawl.py | lmmx/dx | 063e8f8cfc24dfdf09a12001b58b4017a75ea3e8 | [
"MIT"
] | null | null | null | src/dx/share/scraper/crawl.py | lmmx/dx | 063e8f8cfc24dfdf09a12001b58b4017a75ea3e8 | [
"MIT"
] | 2 | 2021-01-03T16:22:11.000Z | 2021-02-07T08:41:57.000Z | src/dx/share/scraper/crawl.py | lmmx/dx | 063e8f8cfc24dfdf09a12001b58b4017a75ea3e8 | [
"MIT"
] | null | null | null | from .parse_topics import topics
from .url_utils import base_url
from .time_utils import StopWatch
from .soup_processor import soup_from_response
from .soup_structure import AMSBookInfoPage
from sys import stderr
from time import sleep
__all__ = ["crawl"]
def crawl(book_GET_func, initialise_at=1, volumes=None, sort=True, dry_run=False):
pages = []
parsed_pages = []
book_url_iterator = book_GET_func(initialise_at, volumes, sort, dry_run)
try:
for page in book_url_iterator:
url_subpath = page.url[len(base_url)-1:]
if page.ok:
#TODO: process the page
print(f"GET success: '{url_subpath}'", file=stderr)
pages.append(page)
# Process the results here!
if not dry_run:
soup = soup_from_response(page)
try:
parsed = AMSBookInfoPage(soup)
except Exception as e:
print(f"Caught {type(e).__name__}: '{e}'", file=stderr)
parsed = e # Do this so as to append it and store the exception
finally:
parsed_pages.append(parsed)
else:
print(f"GET failure: '{url_subpath}'", file=stderr)
except KeyboardInterrupt:
# Graceful early exit
n_res = len(pages)
s = "s" if n_res > 1 else ""
print(f" » » » Crawler killed (got {n_res} page{s})", file=stderr)
return pages, parsed_pages
def __main__():
crawl()
if __name__ == "__main__":
__main__()
| 35.021739 | 87 | 0.577281 | 199 | 1,611 | 4.40201 | 0.427136 | 0.027397 | 0.03653 | 0.047945 | 0.052511 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002791 | 0.332713 | 1,611 | 45 | 88 | 35.8 | 0.809302 | 0.073867 | 0 | 0.052632 | 0 | 0 | 0.097446 | 0 | 0 | 0 | 0 | 0.022222 | 0 | 1 | 0.052632 | false | 0 | 0.184211 | 0 | 0.263158 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
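`crawl()` above returns whatever pages it has gathered when a `KeyboardInterrupt` arrives; the graceful-early-exit pattern in miniature:

```python
def collect(iterable):
    results = []
    try:
        for item in iterable:
            results.append(item)
    except KeyboardInterrupt:
        pass  # keep the partial results gathered before the interrupt
    return results

print(collect(range(5)))  # [0, 1, 2, 3, 4]
```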
8597bb717cd9dbb832a4f653849811a40ea39f93 | 1,224 | py | Python | app/schemas/Recipe.py | koyokakijakarta/foodrecipe | cb8e3ce17140c2b2c524af6838f53359ec7f6dbd | [
"MIT"
] | 2 | 2021-08-02T13:40:56.000Z | 2021-08-04T22:18:02.000Z | app/schemas/Recipe.py | koyokakijakarta/foodrecipes | cb8e3ce17140c2b2c524af6838f53359ec7f6dbd | [
"MIT"
] | null | null | null | app/schemas/Recipe.py | koyokakijakarta/foodrecipes | cb8e3ce17140c2b2c524af6838f53359ec7f6dbd | [
"MIT"
] | null | null | null | from app import ma
from app.models import Recipe as RecipeModels
from app.schemas.Response import ResponseSchema
from marshmallow import validate, ValidationError, EXCLUDE, post_load
def validate_image(image):
allowed_extensions = ['png', 'jpg', 'jpeg', 'gif']
if "." not in image or ("." in image and image.rsplit(".", 1)[1].lower() not in allowed_extensions):
raise ValidationError("Image type must be a png, jpg, jpeg, or gif")
class RecipeSchema(ma.Schema):
class Meta:
unknown = EXCLUDE
id = ma.Int()
user = ma.Str()
name = ma.Str(required=True, validate=validate.Length(min=1))
country = ma.Str(required=True, validate=validate.Length(min=1))
category = ma.Str(required=True, validate=validate.Length(min=1))
image = ma.Str(required=True, validate=validate_image)
class RecipeSchema2(RecipeSchema):
description = ma.Str(required=True)
class Recipe(RecipeSchema2):
id_user = ma.Int(required=True)
@post_load
def create_recipe(self, data, **_):
return RecipeModels.Recipe(**data)
class RecipeAll(ResponseSchema):
data = ma.Nested(RecipeSchema, many=True)
class RecipeDetail(ResponseSchema):
data = ma.Nested(RecipeSchema2)
| 29.142857 | 104 | 0.706699 | 160 | 1,224 | 5.35 | 0.4 | 0.035047 | 0.075935 | 0.099299 | 0.189252 | 0.189252 | 0.150701 | 0.150701 | 0.150701 | 0 | 0 | 0.007882 | 0.170752 | 1,224 | 41 | 105 | 29.853659 | 0.835468 | 0 | 0 | 0 | 0 | 0 | 0.048203 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0.035714 | 0.821429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
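A standalone sketch of the extension check performed by `validate_image` above; `ValidationError` here is a stand-in for marshmallow's exception class so the snippet runs without the library:

```python
class ValidationError(Exception):
    pass

ALLOWED_EXTENSIONS = ['png', 'jpg', 'jpeg', 'gif']

def validate_image(image):
    # Reject names without a dot, or whose final extension is not allowed.
    if "." not in image or image.rsplit(".", 1)[1].lower() not in ALLOWED_EXTENSIONS:
        raise ValidationError("Image type must be a png, jpg, jpeg, or gif")

validate_image("photo.PNG")  # passes: the comparison is case-insensitive
try:
    validate_image("malware.exe")
except ValidationError as e:
    print(e)
```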
859aafab5afb402fc156c1e0795b29e20656f384 | 237 | py | Python | python3/hamming_distance.py | joshiaj7/CodingChallenges | f95dd79132f07c296e074d675819031912f6a943 | [
"MIT"
] | 1 | 2020-10-08T09:17:40.000Z | 2020-10-08T09:17:40.000Z | python3/hamming_distance.py | joshiaj7/CodingChallenges | f95dd79132f07c296e074d675819031912f6a943 | [
"MIT"
] | null | null | null | python3/hamming_distance.py | joshiaj7/CodingChallenges | f95dd79132f07c296e074d675819031912f6a943 | [
"MIT"
] | null | null | null | # leetcode
class Solution:
def hammingDistance(self, x: int, y: int) -> int:
ans = 0
xor = bin(x^y)[2:]
for l in xor:
if l == '1':
ans += 1
return ans | 19.75 | 53 | 0.392405 | 29 | 237 | 3.206897 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.493671 | 237 | 12 | 54 | 19.75 | 0.741667 | 0.033755 | 0 | 0 | 0 | 0 | 0.004386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
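The XOR-and-count approach above can be condensed to a one-liner; a standalone sketch:

```python
def hamming_distance(x, y):
    # Each 1-bit in x ^ y marks a position where the two numbers differ.
    return bin(x ^ y).count('1')

print(hamming_distance(1, 4))  # 2 (001 vs 100)
print(hamming_distance(3, 1))  # 1 (011 vs 001)
```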
85a3072aae9e3b8d27f8bf5d2ee3424715f3b0cd | 1,161 | py | Python | hrsalespipes/api_integrations/migrations/0001_initial.py | hanztura/hrsalespipes | 77accf3132726ced05d84fa2a41891b841f310b8 | [
"Apache-2.0"
] | 3 | 2020-03-26T12:43:43.000Z | 2021-05-10T14:35:51.000Z | hrsalespipes/api_integrations/migrations/0001_initial.py | hanztura/hrsalespipes | 77accf3132726ced05d84fa2a41891b841f310b8 | [
"Apache-2.0"
] | 5 | 2021-04-08T21:15:15.000Z | 2022-02-10T11:03:12.000Z | hrsalespipes/api_integrations/migrations/0001_initial.py | hanztura/hrsalespipes | 77accf3132726ced05d84fa2a41891b841f310b8 | [
"Apache-2.0"
] | 1 | 2022-01-30T19:24:48.000Z | 2022-01-30T19:24:48.000Z | # Generated by Django 2.2.10 on 2020-04-30 07:25
from django.db import migrations, models
import django_extensions.db.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='LinkedinApi',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('code', models.TextField(blank=True)),
('state', models.CharField(blank=True, max_length=200)),
('access_token', models.TextField(blank=True)),
('expires_in', models.SmallIntegerField(blank=True, null=True)),
],
options={
'ordering': ('-modified', '-created'),
'get_latest_by': 'modified',
'abstract': False,
},
),
]
| 35.181818 | 124 | 0.587425 | 112 | 1,161 | 5.946429 | 0.535714 | 0.054054 | 0.081081 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.279931 | 1,161 | 32 | 125 | 36.28125 | 0.773923 | 0.039621 | 0 | 0 | 1 | 0 | 0.116801 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
85a35bbea18ed77b4c16abc34395e75e650abec6 | 2,352 | py | Python | aitools/proofs/builtin_provers.py | OneManEquipe/thinkerino | 4b9508156371643f31d1d26608e42e3aafcb0153 | [
"MIT"
] | 1 | 2020-05-02T19:44:18.000Z | 2020-05-02T19:44:18.000Z | aitools/proofs/builtin_provers.py | OneManEquipe/aitools | 4b9508156371643f31d1d26608e42e3aafcb0153 | [
"MIT"
] | 29 | 2019-08-07T17:49:03.000Z | 2021-08-31T10:25:00.000Z | aitools/proofs/builtin_provers.py | OneManEquipe/thinkerino | 4b9508156371643f31d1d26608e42e3aafcb0153 | [
"MIT"
] | null | null | null | import logging
from aitools.logic.core import Expression, Variable, LogicObject
from aitools.logic.language import Language
from aitools.logic.unification import Substitution
from aitools.logic.utils import VariableSource
from aitools.proofs.components import HandlerArgumentMode, HandlerSafety
from aitools.proofs.knowledge_base import KnowledgeBase
from aitools.proofs.language import Implies, Not
from aitools.proofs.provers import TruthSubstitutionPremises, Prover, TruthSubstitution
logger = logging.getLogger(__name__)
language = Language()
async def restricted_modus_ponens(formula: LogicObject, substitution: Substitution, kb: KnowledgeBase):
"""Restricted backward version of modus ponens, which won't perform recursive proof of implications"""
v = VariableSource(language=language)
if not (isinstance(formula, Expression) and formula.children[0] == Implies):
rule_pattern = Implies(v.premise, formula)
async for rule_proof in kb.async_prove(rule_pattern, previous_substitution=substitution):
premise = rule_proof.substitution.get_bound_object_for(v.premise)
async for premise_proof in kb.async_prove(premise, previous_substitution=rule_proof.substitution):
yield TruthSubstitutionPremises(truth=True, substitution=premise_proof.substitution,
premises=(rule_proof, premise_proof))
RestrictedModusPonens = Prover(
listened_formula=Variable(language=language), handler=restricted_modus_ponens, argument_mode=HandlerArgumentMode.RAW,
pass_substitution_as=..., pass_knowledge_base_as='kb', pure=True, safety=HandlerSafety.SAFE
)
async def closed_world_assumption(formula: LogicObject, substitution: Substitution, kb: KnowledgeBase):
language = Language()
v = VariableSource(language=language)
match = Substitution.unify(formula, Not(v.P))
if match is not None:
try:
await kb.async_prove(match.get_bound_object_for(v.P)).__anext__()
except StopAsyncIteration:
return TruthSubstitution(True, substitution)
ClosedWorldAssumption = Prover(
listened_formula=Variable(language=language), handler=closed_world_assumption, argument_mode=HandlerArgumentMode.RAW,
pass_substitution_as=..., pass_knowledge_base_as='kb', pure=True, safety=HandlerSafety.SAFE
)
| 47.04 | 121 | 0.770408 | 262 | 2,352 | 6.721374 | 0.354962 | 0.049972 | 0.036343 | 0.0477 | 0.28393 | 0.241908 | 0.177172 | 0.118115 | 0.118115 | 0.118115 | 0 | 0.000501 | 0.15051 | 2,352 | 49 | 122 | 48 | 0.880881 | 0 | 0 | 0.162162 | 0 | 0 | 0.001778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.054054 | 0.243243 | 0 | 0.27027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
85a8eea592075db3dcccddbe695f43785edf8bf5 | 5,334 | py | Python | cryptongo.py | manu9812/MongoDB | d8cdb217e00f598747d0eee3780c7edff63f25c8 | [
"Apache-2.0"
] | null | null | null | cryptongo.py | manu9812/MongoDB | d8cdb217e00f598747d0eee3780c7edff63f25c8 | [
"Apache-2.0"
] | null | null | null | cryptongo.py | manu9812/MongoDB | d8cdb217e00f598747d0eee3780c7edff63f25c8 | [
"Apache-2.0"
] | null | null | null | import pymongo
import requests
from collections import OrderedDict
from hashlib import sha512
API_URL = 'https://api.coinmarketcap.com/v2/ticker/'
API_URL_START = 'https://api.coinmarketcap.com/v2/ticker/?start={}'
API_URL_LISTINGS = 'https://api.coinmarketcap.com/v2/listings/'
def get_db_connection(uri):
"""
Define la conexión a la BD.
MongoClient por defecto se conecta al localhost.
:param uri: URI de conexión.
:return: BD a utilizar.
"""
client = pymongo.MongoClient(uri)
return client.cryptongo
def get_hash(value):
"""
:param value: String con todos los valores de los campos del documento concatenados.
:return: String hash generado con sha-512.
"""
return sha512(value.encode('utf-8')).hexdigest()
def field_name(field):
"""
:param field: Tupla que tiene el nombre y valor de un campo del documento original.
:return: El nombre del campo.
"""
return field[0]
def remove_element_dictionary(dictionary, key):
"""
Un diccionario puede ser dinámico, por lo cuál se utiliza este método para eliminar un elemento del mismo.
:param dictionary:
:param key:
:return: Diccionario con el elemento eliminado.
"""
r = dict(dictionary)
del r[key]
return r
def get_ticker_hash(ticker_data):
"""
Genera el hash a partir de cada uno de los valores del documento.
El documento (o diccionario) será ordenado alfabéticamente según su key.
Como no se está ordenando los elementos del subdocumento 'quotes' se elimina temporalmente del diccionario
para no tenerlo en cuenta al momento de la creación del hash.
Si existe un valor diferente para el campo last_updated indica que los valores del subdocumento quotes también
cambió, por lo cuál este documento será almacenado posteriormente en la BD.
:param ticker_data: Documento de la criptomoneda.
:return: La función que genera el hash según el string con todos los valores del documento concatenados.
"""
ticker_data = remove_element_dictionary(ticker_data, 'quotes')
ticker_data = OrderedDict(sorted(ticker_data.items(), key=field_name))
    # Concatenate all of the dictionary's ordered values into a single string.
ticker_value = ''
for _, value in ticker_data.items():
ticker_value += str(value)
return get_hash(ticker_value)
def get_cryptocurrencies_from_api(position):
"""
De la API de CoinMarketCap se obtiene los documentos desde una posición inicial.
La API por cada endpoint sólo permite 100 documentos.
:return: Resultado de la consulta en formato json.
"""
url = API_URL_START.format(position)
r = requests.get(url)
if r.status_code == 200:
result = r.json()
return result
    raise Exception('API Error')
def get_num_cryptocurrencies_from_api():
"""
De la API de CoinMarketCap se obtiene la cantidad de criptomonedas existentes.
:return: Valor entero con la cantidad de documentos.
"""
r = requests.get(API_URL_LISTINGS)
if r.status_code == 200:
result = r.json()
return result['metadata']['num_cryptocurrencies']
raise Exception('API Error')
def check_if_exists(db_connection, ticker_hash):
"""
Verifica si la información ya existe en la BD (por medio de un hash).
La BD almacenará un historico de las criptomonedas.
:param db_connection: Conexión a la BD.
:param ticker_hash: Hash del documento generado previamente.
:return: Verdadero si el documento ya se encuentra, Falso si no.
"""
if db_connection.tickers.find_one({'ticker_hash': ticker_hash}):
return True
return False
def save_ticker(db_connection, ticker_data=None):
"""
Almacena el documento en la BD siempre y cuando no exista.
Se identifica si el documento ya fue almacenado o no por la generación de un hash.
:param db_connection:
:param ticker_data: Datos del documento.
:return: Verdadero si almacena el documento.
"""
    # Avoid any work if there is no data.
if not ticker_data:
return False
ticker_hash = get_ticker_hash(ticker_data)
if check_if_exists(db_connection, ticker_hash):
return False
ticker_data['ticker_hash'] = ticker_hash
    # Store the document in the Mongo DB via insert_one()
db_connection.tickers.insert_one(ticker_data)
return True
if __name__ == '__main__':
"""
Para crear la conexión ssh del contenedor de Docker,
es necesario que el programa inicialmente se este ejecutando contantemente.
import time
while True:
time.sleep(300)
"""
connection = get_db_connection('mongodb://crypto-mongodb-dev:27017/')
num_cryptocurrencies = get_num_cryptocurrencies_from_api()
    print('\nCurrent cryptocurrencies on Coin Market Cap: {}'.format(num_cryptocurrencies))
cont = 0
for i in range(1, num_cryptocurrencies, 100):
tickers = get_cryptocurrencies_from_api(i)
tickers_data = tickers['data']
for value in tickers_data.values():
if save_ticker(connection, value):
cont += 1
        print('Tickers stored... {}'.format(cont)) if cont > 0 else print('...')
    print('\nTotal cryptocurrencies stored: {}'.format(cont))
| 31.011628 | 114 | 0.701162 | 721 | 5,334 | 5.054092 | 0.327323 | 0.035675 | 0.017563 | 0.019759 | 0.137761 | 0.088913 | 0.057629 | 0.021405 | 0.021405 | 0.021405 | 0 | 0.009111 | 0.218035 | 5,334 | 171 | 115 | 31.192982 | 0.864541 | 0.425572 | 0 | 0.171875 | 0 | 0 | 0.141874 | 0.013384 | 0 | 0 | 0 | 0.023392 | 0 | 1 | 0.140625 | false | 0 | 0.0625 | 0 | 0.390625 | 0.046875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
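The deduplication scheme in `get_ticker_hash` (drop `quotes`, sort by key, concatenate values, sha-512) can be exercised standalone; the ticker fields below are hypothetical:

```python
from collections import OrderedDict
from hashlib import sha512

def ticker_hash(ticker):
    # Drop the unsorted 'quotes' subdocument, then order the remaining
    # fields by key so the concatenation is deterministic.
    fields = OrderedDict(sorted((k, v) for k, v in ticker.items() if k != 'quotes'))
    concatenated = ''.join(str(v) for v in fields.values())
    return sha512(concatenated.encode('utf-8')).hexdigest()

a = ticker_hash({'name': 'BTC', 'rank': 1, 'quotes': {'USD': 9000}})
b = ticker_hash({'rank': 1, 'quotes': {'USD': 9500}, 'name': 'BTC'})
print(a == b)  # True: key order and the 'quotes' values do not affect the hash
```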
85b0e1fc58fb93fa89127aff65fd7e5a22d465df | 2,994 | py | Python | subs/data.py | Alimo1029/Alimo1029-s-Blog | 41d28f83f2a475669535108f2c7f78713193d86c | [
"MIT"
] | null | null | null | subs/data.py | Alimo1029/Alimo1029-s-Blog | 41d28f83f2a475669535108f2c7f78713193d86c | [
"MIT"
] | null | null | null | subs/data.py | Alimo1029/Alimo1029-s-Blog | 41d28f83f2a475669535108f2c7f78713193d86c | [
"MIT"
] | null | null | null | # Database operations
# Import modules
import pymysql
# Initialization function
def _init():
    # Connect to the database
db = pymysql.connect(
host='localhost',
port=3306,
user='Blog',
passwd='20041123',
db='alimo1029-blog',
charset='utf8'
)
    # Create a cursor
cursor = db.cursor()
return db, cursor
class C_user:
def addUser(db, cursor, username, password, mail):
        # Add a new user
try:
args = (username, password, mail)
sql = "INSERT INTO users(username, password, mail) VALUES (%s,%s,%s)"
# print(sql)
cursor.execute(sql, args)
db.commit()
return True
except:
            # On error, roll back
db.rollback()
return False
def searchUser(db, cursor, username, password, mail, need):
"""need = 0代表登录模式
need = 1代表注册模式
need = 2代表单纯查询
"""
        # Search the database
args = (username, password)
L_sql = "SELECT username FROM users WHERE username = %s AND password = %s"
R_sql = "SELECT username FROM users WHERE username = %s"
S_sql = "SELECT username FROM users WHERE username = %s"
# print(L_sql)
if (need == 0):
            # Login mode
cursor.execute(L_sql, args)
sehData = cursor.fetchone()
if (sehData != None):
                # Username/password combination found
# print('ok')
# print(sehData)
sehData = None
return True
else:
                # Not found: wrong username or password
# print('sorry')
L_sql = None
return False
elif (need == 1):
            # Registration mode
cursor.execute(R_sql, username)
sehData = cursor.fetchone()
if (sehData == None):
                # Name unused, registration allowed
C_user.addUser(db, cursor, username, password, mail)
#db.commit()
return True
else:
                # Name taken, registration refused
sehData = None
return False
elif (need == 2):
            # Plain lookup mode
cursor.execute(S_sql, username)
sehData = cursor.fetchone()
db.commit()
if (sehData == None):
                # No such user
return 0
else:
                # User exists
sehData = None
return 1
else:
return False
def delUser(db, cursor, username):
        # Delete the given user
sql = "DELETE FROM users WHERE username = %s"
try:
cursor.execute(sql, username)
db.commit()
return True
except:
return False
# db, cursor = _init()
# print(C_user.addUser(db, cursor, 'limo1029', '20041123'))
# print(C_user.searchUser(db, cursor, 'limo1029asd', '20041123', '1282160815@qq.com', 1))
# print(C_user.delUser(db, cursor, 'limo1029')) | 27.981308 | 90 | 0.454242 | 279 | 2,994 | 4.820789 | 0.322581 | 0.05948 | 0.074349 | 0.065428 | 0.350186 | 0.191822 | 0.089219 | 0.089219 | 0 | 0 | 0 | 0.038717 | 0.447896 | 2,994 | 107 | 91 | 27.981308 | 0.774955 | 0.151971 | 0 | 0.430769 | 0 | 0 | 0.123629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061538 | false | 0.123077 | 0.015385 | 0 | 0.276923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
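`C_user` above always passes query parameters separately from the SQL text (pymysql's `%s` placeholders), which prevents SQL injection. The same pattern, shown with the stdlib `sqlite3` driver (`?` placeholders) so it runs without a MySQL server; table and values are hypothetical:

```python
import sqlite3

db = sqlite3.connect(':memory:')
cursor = db.cursor()
cursor.execute("CREATE TABLE users (username TEXT, password TEXT, mail TEXT)")
# Parameters are bound by the driver, never interpolated into the SQL string.
cursor.execute("INSERT INTO users(username, password, mail) VALUES (?, ?, ?)",
               ("alice", "s3cret", "alice@example.com"))
db.commit()
cursor.execute("SELECT username FROM users WHERE username = ? AND password = ?",
               ("alice", "s3cret"))
print(cursor.fetchone())  # ('alice',)
```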
85b1527e81795a67da561b6fc694b9544c6fff5c | 1,832 | py | Python | wt/pygardena/rest_api.py | geeak/wt.pygardena | d2c6e838b837d87e3e622bc78551c9023444256d | [
"Apache-2.0"
] | null | null | null | wt/pygardena/rest_api.py | geeak/wt.pygardena | d2c6e838b837d87e3e622bc78551c9023444256d | [
"Apache-2.0"
] | null | null | null | wt/pygardena/rest_api.py | geeak/wt.pygardena | d2c6e838b837d87e3e622bc78551c9023444256d | [
"Apache-2.0"
] | null | null | null | import requests
from urllib.parse import urljoin
class RestAPI(requests.Session):
"""
Encapsulates all REST calls to the Gardena API.
"""
# The base URL for all Gardena requests. Change this if you want to fire the requests against other services
base_url = "https://sg-api.dss.husqvarnagroup.net/sg-1/"
def __init__(self, *args, **kw):
super().__init__(*args, **kw)
self.headers["Accept"] = "application/json"
def request(self, method, url, *args, **kw):
return super().request(method, urljoin(self.base_url, url), *args, **kw)
def post_sessions(self, email_address, password):
"""
Posts session data to the server.
"""
response = self.post('sessions', json={
'sessions': {
'email': email_address,
'password': password,
}
})
return response.json()
def get_locations(self, user_id):
"""
Gets all the locations that are associated with a given user ID.
"""
response = self.get("locations", params={
'user_id': user_id,
})
return response.json()
def get_devices(self, location_id):
"""
Loads the devices from a given location ID.
"""
response = self.get('devices', params={
'locationId': location_id
})
return response.json()
def post_command(self, device, command_name, parameters=None):
data = {'name': command_name}
if parameters is not None:
data['parameters'] = parameters
url = 'devices/{device.id}/abilities/{device.category}/command'.format(device=device)
return self.post(url, params={
'locationId': device.location.id
}, json=data) | 32.140351 | 112 | 0.576419 | 205 | 1,832 | 5.039024 | 0.404878 | 0.023233 | 0.052275 | 0.060987 | 0.070668 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000783 | 0.302948 | 1,832 | 57 | 113 | 32.140351 | 0.808144 | 0.162664 | 0 | 0.171429 | 0 | 0 | 0.142069 | 0.037931 | 0 | 0 | 0 | 0 | 0 | 1 | 0.171429 | false | 0.057143 | 0.057143 | 0.028571 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
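`request()` above resolves relative paths against `base_url` with `urljoin`; it is worth noting how that resolution behaves, since a leading slash discards the `/sg-1/` prefix:

```python
from urllib.parse import urljoin

base_url = "https://sg-api.dss.husqvarnagroup.net/sg-1/"
print(urljoin(base_url, "sessions"))
# https://sg-api.dss.husqvarnagroup.net/sg-1/sessions
print(urljoin(base_url, "/sessions"))
# https://sg-api.dss.husqvarnagroup.net/sessions
```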
85c689c0c223234396fe26601b1d33f6c48083a4 | 1,699 | py | Python | classes.py | ploggingdev/python_learn | 0b056c0a57e21d9a0c86eb8bbd6e14b236e4d215 | [
"MIT"
] | 10 | 2017-06-23T07:30:52.000Z | 2019-09-11T20:14:08.000Z | classes.py | ploggingdev/python_learn | 0b056c0a57e21d9a0c86eb8bbd6e14b236e4d215 | [
"MIT"
] | null | null | null | classes.py | ploggingdev/python_learn | 0b056c0a57e21d9a0c86eb8bbd6e14b236e4d215 | [
"MIT"
] | 11 | 2017-01-12T15:34:17.000Z | 2019-07-12T11:16:16.000Z | import copy
class Point:
"""A class to represent a point"""
details = "Represent points"
def __init__(self,x=0,y=0):
self.x=x
self.y=y
def __str__(self):
return "x={} y={}".format(self.x,self.y)
def get_sum(self):
"""Return sum of x and y components"""
return self.x + self.y
def add_point(self,to_add):
"""Update self by add x and y component of new_point"""
self.x += to_add.x
self.y += to_add.y
    def __add__(self,new_point):
        # An operator should not mutate its operands; return a new Point.
        return Point(self.x + new_point.x, self.y + new_point.y)
def print_point(point):
"""print contents of given point"""
print("x={} y={}".format(point.x,point.y))
point = Point(10,20)
print(point)
print(point.details)
print(point.get_sum())
new_point = Point(2,4)
print(new_point)
point.add_point(new_point)
print(point)
print_point(new_point)
copied_point = copy.copy(point)
copied_point.x = 5
copied_point.y = 5
print(point)
print(copied_point)
print(copied_point + new_point)
class Vehicle(object):
"""Represents a vehicle"""
def __init__(self,engine_power=100):
self.engine_power = engine_power
def __str__(self):
return "Engine power : {}HP".format(self.engine_power)
def print_name(self):
print("From vehicle")
class Car(Vehicle):
"""Represents a car"""
def __init__(self,wheels=4):
super().__init__()
self.wheels = wheels
def __str__(self):
return "Engine power : {}HP , Wheels = {}".format(self.engine_power,self.wheels)
    def print_name(self):
        print("From car")
car = Car()
print(car)
car.print_name() | 20.719512 | 88 | 0.616833 | 248 | 1,699 | 3.971774 | 0.197581 | 0.073096 | 0.030457 | 0.048731 | 0.150254 | 0.123858 | 0.123858 | 0 | 0 | 0 | 0 | 0.010903 | 0.244261 | 1,699 | 82 | 89 | 20.719512 | 0.756231 | 0.105356 | 0 | 0.192308 | 0 | 0 | 0.073826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.019231 | 0.057692 | 0.423077 | 0.326923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
85c7248c5f7b74ca201de147525cc98df11221a7 | 5,814 | py | Python | dusty/systems/docker/testing_image.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 421 | 2015-06-02T16:29:59.000Z | 2021-06-03T18:44:42.000Z | dusty/systems/docker/testing_image.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 404 | 2015-06-02T20:23:42.000Z | 2019-08-21T16:59:41.000Z | dusty/systems/docker/testing_image.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 16 | 2015-06-16T17:21:02.000Z | 2020-03-27T02:27:09.000Z | from __future__ import absolute_import
import docker
from ...compiler.compose import container_code_path, get_volume_mounts
from ...compiler.spec_assembler import get_expanded_libs_specs
from ...log import log_to_client
from ...command_file import dusty_command_file_name, lib_install_commands_for_app_or_lib
from .common import spec_for_service
from . import get_docker_client
from ... import constants
class ImageCreationError(Exception):
def __init__(self, code):
self.code = code
message = 'Run exited with code {}'.format(code)
super(ImageCreationError, self).__init__(message)
def _ensure_base_image(app_or_lib_name):
testing_spec = _testing_spec(app_or_lib_name)
log_to_client('Getting the base image for the new image')
docker_client = get_docker_client()
if 'image' in testing_spec:
_ensure_image_pulled(testing_spec['image'])
return testing_spec['image']
elif 'build' in testing_spec:
image_tag = 'dusty_testing_base/image'
log_to_client('Need to build the base image based off of the Dockerfile here: {}'.format(testing_spec['build']))
try:
docker_client.remove_image(image=image_tag)
except:
log_to_client('Not able to remove image {}'.format(image_tag))
docker_client.build(path=testing_spec['build'], tag=image_tag)
return image_tag
def _ensure_image_pulled(image_name):
docker_client = get_docker_client()
full_image_name = image_name
if ':' not in image_name:
full_image_name = '{}:latest'.format(image_name)
for image in docker_client.images():
if full_image_name in image['RepoTags']:
break
else:
split = image_name.split(':')
repo, tag = split[0], 'latest' if len(split) == 1 else split[1]
docker_client.pull(repo, tag, insecure_registry=True)
def _get_split_volumes(volumes):
split_volumes = []
for volume in volumes:
volume_list = volume.split(':')
split_volumes.append({'host_location': volume_list[0],
'container_location': volume_list[1]})
return split_volumes
def _get_create_container_volumes(split_volumes):
return [volume_dict['container_location'] for volume_dict in split_volumes]
def _get_create_container_binds(split_volumes):
binds_dict = {}
for volume_dict in split_volumes:
binds_dict[volume_dict['host_location']] = {'bind': volume_dict['container_location'], 'ro': False}
return binds_dict
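The three helpers above turn `host:container` volume strings into the volumes list and binds mapping that docker-py expects; a standalone sketch of that transformation (paths are hypothetical):

```python
def split_volumes(volumes):
    # Each entry is a 'host:container' pair.
    return [{'host_location': host, 'container_location': cont}
            for host, cont in (v.split(':') for v in volumes)]

def container_paths(split):
    return [d['container_location'] for d in split]

def binds(split):
    return {d['host_location']: {'bind': d['container_location'], 'ro': False}
            for d in split}

split = split_volumes(['/repos/app:/gc/app', '/repos/lib:/gc/lib'])
print(container_paths(split))      # ['/gc/app', '/gc/lib']
print(binds(split)['/repos/app'])  # {'bind': '/gc/app', 'ro': False}
```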
def _create_tagged_image(base_image_tag, new_image_tag, app_or_lib_name):
docker_client = get_docker_client()
command = _get_test_image_setup_command(app_or_lib_name)
split_volumes = _get_split_volumes(get_volume_mounts(app_or_lib_name, get_expanded_libs_specs(), test=True))
create_container_volumes = _get_create_container_volumes(split_volumes)
create_container_binds = _get_create_container_binds(split_volumes)
container = docker_client.create_container(image=base_image_tag,
command=command,
volumes=create_container_volumes,
host_config=docker.utils.create_host_config(binds=create_container_binds))
docker_client.start(container=container['Id'])
log_to_client('Running commands to create new image:')
for line in docker_client.logs(container['Id'], stdout=True, stderr=True, stream=True):
log_to_client(line.strip())
exit_code = docker_client.wait(container['Id'])
if exit_code:
raise ImageCreationError(exit_code)
new_image = docker_client.commit(container=container['Id'])
try:
docker_client.remove_image(image=new_image_tag)
except:
log_to_client('Not able to remove image {}'.format(new_image_tag))
docker_client.tag(image=new_image['Id'], repository=new_image_tag, force=True)
docker_client.remove_container(container=container['Id'], v=True)
def _testing_spec(app_or_lib_name):
expanded_specs = get_expanded_libs_specs()
return spec_for_service(app_or_lib_name, expanded_specs)['test']
def test_image_name(app_or_lib_name):
return "dusty/test_{}".format(app_or_lib_name)
def _get_test_image_setup_command(app_or_lib_name):
return 'sh {}/{}'.format(constants.CONTAINER_COMMAND_FILES_DIR, dusty_command_file_name(app_or_lib_name))
def test_image_exists(app_or_lib_name):
image_name = test_image_name(app_or_lib_name)
docker_client = get_docker_client()
images = docker_client.images()
for image in images:
# Need to be careful, RepoTags can be explicitly set to None
repo_tags = image.get('RepoTags') or []
if image_name in repo_tags or '{}:latest'.format(image_name) in repo_tags:
return True
return False
def create_test_image(app_or_lib_name):
"""
Create a new test image by applying changes to the base image specified
in the app or lib spec
"""
log_to_client('Creating the testing image')
base_image_tag = _ensure_base_image(app_or_lib_name)
new_image_name = test_image_name(app_or_lib_name)
_create_tagged_image(base_image_tag, new_image_name, app_or_lib_name)
def update_test_image(app_or_lib_name):
"""
Apply updates to an existing testing image that has already been created
by Dusty - updating this test image should be quicker than creating a new
test image from the base image in the spec
"""
log_to_client('Updating the testing image')
if not test_image_exists(app_or_lib_name):
create_test_image(app_or_lib_name)
return
test_image_tag = test_image_name(app_or_lib_name)
_create_tagged_image(test_image_tag, test_image_tag, app_or_lib_name)
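The volume plumbing in `_get_split_volumes` / `_get_create_container_binds` can be exercised in isolation. A minimal standalone sketch of the same split-and-bind logic (hypothetical helper names, not part of the dusty package):

```python
def split_volumes(volumes):
    # 'host:container' strings -> list of dicts, mirroring _get_split_volumes above
    split = []
    for volume in volumes:
        host_location, container_location = volume.split(':', 1)
        split.append({'host_location': host_location,
                      'container_location': container_location})
    return split

def binds_for(split):
    # Mirror _get_create_container_binds: host path -> docker bind spec
    return {d['host_location']: {'bind': d['container_location'], 'ro': False}
            for d in split}

volumes = ['/tmp/app:/app', '/tmp/cache:/cache']
binds = binds_for(split_volumes(volumes))
print(binds['/tmp/app']['bind'])  # -> /app
```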
# ---- ios_code_generator/models/__init__.py (banxi1988/iOSCodeGenerator, MIT) ----
# -*- coding: utf-8 -*-
__author__ = 'banxi'

from .core import *


def _auto_import_models():
    import os
    import importlib
    searchpath = os.path.dirname(__file__)
    exclude_modules = ['__init__', 'core']
    for filename in os.listdir(searchpath):
        if not filename.endswith('.py'):
            continue
        module_name = filename[:-3]
        if module_name in exclude_modules:
            continue
        importlib.import_module("." + module_name, 'ios_code_generator.models')

# Model generators register themselves via a decorator, so a generator is only
# registered once its module has been imported; import them all automatically here.
_auto_import_models()
# ---- Projeto_Capitulo/randomQuizGenerator.py (alexcarlos06/Automatize_Tarefas_Ma-antes, MIT) ----
#! python3
# randomQuizGenerator.py - Creates quizzes with the questions and answer
# options in random order, along with answer keys containing the answers.
import random

captals = {'Acre (AC)': 'Rio Branco', 'Alagoas (AL)': 'Maceió', 'Amapá (AP)': 'Macapá', 'Amazonas (AM)': 'Manaus',
           'Bahia (BA)': 'Salvador', 'Ceará (CE)': 'Fortaleza', 'Distrito Federal (DF)': 'Brasília',
           'Espírito Santo (ES)': 'Vitória', 'Goiás (GO)': 'Goiânia', 'Maranhão (MA)': 'São Luís',
           'Mato Grosso (MT)': 'Cuiabá', 'Mato Grosso do Sul (MS)': 'Campo Grande',
           'Minas Gerais (MG)': 'Belo Horizonte', 'Pará (PA)': 'Belém', 'Paraíba (PB)': 'João Pessoa',
           'Paraná (PR)': 'Curitiba', 'Pernambuco (PE)': 'Recife', 'Piauí (PI)': 'Teresina',
           'Rio de Janeiro (RJ)': 'Rio de Janeiro', 'Rio Grande do Norte (RN)': 'Natal',
           'Rio Grande do Sul (RS)': 'Porto Alegre', 'Rondônia (RO)': 'Porto Velho', 'Roraima (RR)': 'Boa Vista',
           'Santa Catarina (SC)': 'Florianópolis', 'São Paulo (SP)': 'São Paulo', 'Sergipe (SE)': 'Aracaju',
           'Tocantins (TO)': 'Palmas'}

# Generate the quiz files, one per loop iteration.
for quizNum in range(1):
    # Create the quiz and answer-key files
    quizFile = open(f'captalsquiz{quizNum + 1}.txt', 'w', encoding='utf-8')
    answerKeyFile = open(f'capital_answers{quizNum + 1}.txt', 'w', encoding='utf-8')
    # Write the quiz header
    quizFile.write('Name: \n\nDate: \n\nPeriod: \n\n')
    quizFile.write(f'{"*" * 20} State Captals Quiz (Form {quizNum + 1})')
    quizFile.write('\n\n')
    # Shuffle the list of states
    states = list(captals.keys())
    random.shuffle(states)
    # Loop over all the states, generating one question for each
    for questionNum, _ in enumerate(states):
        correctAnswer = captals[states[questionNum]]
        wrongAnswers = list(captals.values())
        del wrongAnswers[wrongAnswers.index(correctAnswer)]
        wrongAnswers = random.sample(wrongAnswers, 3)
        answerOptions = wrongAnswers + [correctAnswer]
        random.shuffle(answerOptions)
        # Write the question and its answer options to the quiz file
        quizFile.write(f'{questionNum + 1}. What is the captal of {states[questionNum]}? \n')
        for i in range(4):
            quizFile.write(f'\t( {"ABCD"[i]} ) {answerOptions[i]}\n')
        quizFile.write('\n')
        # Write the answer-key entry to its file
        answerKeyFile.write(f'{questionNum + 1}. {"ABCD"[answerOptions.index(correctAnswer)]}\n')
    quizFile.close()
    answerKeyFile.close()
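The wrong-answer sampling inside the question loop can be factored into a standalone helper; a small sketch (illustrative names, seeded for reproducibility):

```python
import random

def answer_options(correct, all_answers, k=3, seed=0):
    # Pick k wrong answers, mix in the correct one, then shuffle,
    # mirroring the sample/shuffle steps in the loop above.
    rng = random.Random(seed)
    wrong = [a for a in all_answers if a != correct]
    options = rng.sample(wrong, k) + [correct]
    rng.shuffle(options)
    return options

opts = answer_options('Manaus', ['Manaus', 'Salvador', 'Recife', 'Natal', 'Palmas'])
print(len(opts), 'Manaus' in opts)  # -> 4 True
```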
# ---- pycmbs/utils/download.py (pygeo/pycmbs, MIT) ----
# -*- coding: utf-8 -*-
"""
This file is part of pyCMBS.
(c) 2012- Alexander Loew
For COPYING and LICENSE details, please refer to the LICENSE file
"""
import os
from pycmbs.data import Data
import tempfile
def get_example_directory():
""" returns directory where this file is located """
testfile = 'xxxxdownload_test.txt'
# by default use examples directory
r = os.path.dirname(os.path.realpath(__file__))
# check if one can write into the directory
try:
f = open(r + os.sep + testfile, 'w')
f.write('test')
f.close()
os.remove(r + os.sep + testfile)
except:
# if no write access then
r = tempfile.mkdtemp()
if r[-1] != os.sep:
r += os.sep
return r
def get_example_data_directory():
""" returns directory where the example data should be """
return get_example_directory() + 'example_data' + os.sep
def get_sample_file(name='air', return_object=True):
"""
returns Data object of example file including or the filename
with the full path. If the file is not existing yet,
then it will be downloaded.
Parameters
----------
name : str
specifies which type of sample file should be returned
['air','rain']
return_object : bool
return Data object if True, otherwise the filename is returned
"""
files = {
'air': {'name' : 'air.mon.mean.nc', 'url': 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.derived/surface/air.mon.mean.nc', 'variable': 'air'},
'rain': {'name' : 'pr_wtr.eatm.mon.mean.nc', 'url': 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.derived/surface/pr_wtr.eatm.mon.mean.nc', 'variable': 'pr_wtr'}
}
if name not in files.keys():
raise ValueError('Invalid sample file')
tdir = get_example_data_directory()
fname = tdir + files[name]['name']
# download data if not existing yet
if not os.path.exists(fname):
url = files[name]['url']
_download_file(url, tdir)
if not os.path.exists(fname):
print fname
raise ValueError('Download failed!')
# ... here everything should be fine
if return_object:
return Data(fname, files[name]['variable'], read=True)
else:
return fname
def _download_file(url, tdir):
""" download URL to target directory tdir """
if not os.path.exists(tdir):
os.makedirs(tdir)
print('Downloading file ... this might take a few minutes')
print('URL: ' + url)
curdir=os.getcwd()
os.chdir(tdir)
os.system('wget --ftp-user=anonymous --ftp-password=nix ' + url)
os.chdir(curdir)
print('Downloading finished!')
# ---- requestForProposals.py (converj/reasonSurvey, Apache-2.0) ----
# Import external modules.
from google.appengine.ext import ndb
import logging

# Import local modules.
from configuration import const as conf
from constants import Constants

const = Constants()
const.MAX_RETRY = 3


# Parent key: none
class RequestForProposals(ndb.Model):
    title = ndb.StringProperty()
    detail = ndb.StringProperty()
    creator = ndb.StringProperty()
    allowEdit = ndb.BooleanProperty()
    # Store the frozen-flag in the request-for-proposals record, because storing it in the
    # link-key record may be inconsistent if multiple link-keys exist, and the link-key
    # record is designed to be constant.
    freezeUserInput = ndb.BooleanProperty(default=False)
    # Experimental
    hideReasons = ndb.BooleanProperty(default=False)


@ndb.transactional(retries=const.MAX_RETRY)
def setEditable(requestId, editable):
    logging.debug('setEditable() editable={}'.format(editable))
    requestRecord = RequestForProposals.get_by_id(int(requestId))
    requestRecord.allowEdit = editable
    requestRecord.put()
# ---- RelevantRecommendation.py (ma-zhiyuan/DynameDB-data-transfer, Apache-2.0) ----
# -*- coding: utf8 -*-
from __future__ import print_function  # Python 2/3 compatibility
import boto3
import time
import json
import decimal
import datetime
from boto3.dynamodb.conditions import Key, Attr
from botocore.exceptions import ClientError


class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            if o % 1 > 0:
                return float(o)
            else:
                return int(o)
        return super(DecimalEncoder, self).default(o)


class DetailInfoOnline(object):
    def __init__(self, table_kind, region_name, endpoint_url, table_name):
        self._dynamodb = boto3.resource(table_kind, region_name=region_name, endpoint_url=endpoint_url)
        self._table = self._dynamodb.Table(table_name)

    def query_item(self, docid):
        try:
            response = self._table.query(
                KeyConditionExpression=Key('id').eq(docid)
            )
            if len(response) > 0:
                return response['Items']
            else:
                return []
        except Exception as e:
            print(str(e))
            return []

    def scan_all_with_wait(self):
        try:
            pe = "id, info"
            response = self._table.scan(
                ProjectionExpression=pe,
            )
            result = []
            while "Items" in response and 0 < len(response["Items"]):
                if 0 != len(response["Items"]):
                    result += response["Items"]
                if "LastEvaluatedKey" not in response:
                    break
                lek = response["LastEvaluatedKey"]
                response = self._table.scan(
                    ProjectionExpression=pe,
                    ExclusiveStartKey=lek
                )
                time.sleep(2)
                print(len(result))
            return result
        except Exception as e:
            print(str(e))
            return []

    def scan_all(self):
        try:
            pe = "userId, anonymId"
            response = self._table.scan(
                ProjectionExpression=pe,
            )
            result = []
            while "Items" in response and 0 < len(response["Items"]):
                if 0 != len(response["Items"]):
                    print(response["Items"])
                    result += response["Items"]
                if "LastEvaluatedKey" not in response:
                    break
                lek = response["LastEvaluatedKey"]
                print(lek)
                response = self._table.scan(
                    ProjectionExpression=pe,
                    ExclusiveStartKey=lek
                )
            return result
        except Exception as e:
            print(str(e))
            return []

    def query_mul_item(self, docid_list):
        try:
            fe = Attr("id").is_in(docid_list)
            response = self._table.scan(
                FilterExpression=fe
            )
            result = []
            while "Items" in response and 0 < len(response["Items"]):
                if 0 != len(response["Items"]):
                    result += response["Items"]
                if "LastEvaluatedKey" not in response:
                    break
                lek = response['LastEvaluatedKey']
                response = self._table.scan(
                    FilterExpression=fe,
                    ExclusiveStartKey=lek
                )
            return result
        except Exception as e:
            print(str(e))
            return []


if __name__ == "__main__":
    print("start...")
    data_obj = DetailInfoOnline('dynamodb', 'ap-northeast-2', "https://dynamodb.ap-northeast-2.amazonaws.com", 'MXNewsUserDev')
    res = data_obj.scan_all()
    print(res[0])
    print("end...")
# ---- library/docker/docker.py (Kafkamorph/shutit, MIT) ----
#Copyright (C) 2014 OpenBet Limited
#
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is furnished
#to do so, subject to the following conditions:
#
#The above copyright notice and this permission notice shall be included in all
#copies or substantial portions of the Software.
#
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
#FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
#COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
#IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
#CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from shutit_module import ShutItModule


class docker(ShutItModule):

    def build(self, shutit):
        shutit.send('echo deb http://archive.ubuntu.com/ubuntu precise universe > /etc/apt/sources.list.d/universe.list')
        shutit.send('apt-get update -qq')
        shutit.install('iptables')
        shutit.install('ca-certificates')
        shutit.install('lxc')
        shutit.install('curl')
        shutit.install('aufs-tools')
        shutit.send('pushd /usr/bin')
        shutit.send('curl https://get.docker.io/builds/Linux/x86_64/docker-latest > docker')
        shutit.send('chmod +x docker')
        wrapdocker = """cat > /usr/bin/wrapdocker << 'END'
#!/bin/bash
# First, make sure that cgroups are mounted correctly.
CGROUP=/sys/fs/cgroup
[ -d $CGROUP ] ||
    mkdir $CGROUP
mountpoint -q $CGROUP ||
    mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
        echo "Could not make a tmpfs mount. Did you use -privileged?"
        exit 1
    }
if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security
then
    mount -t securityfs none /sys/kernel/security || {
        echo "Could not mount /sys/kernel/security."
        echo "AppArmor detection and -privileged mode might break."
    }
fi
# Mount the cgroup hierarchies exactly as they are in the parent system.
for SUBSYS in $(cut -d: -f2 /proc/1/cgroup)
do
    [ -d $CGROUP/$SUBSYS ] || mkdir $CGROUP/$SUBSYS
    mountpoint -q $CGROUP/$SUBSYS ||
        mount -n -t cgroup -o $SUBSYS cgroup $CGROUP/$SUBSYS
    # The two following sections address a bug which manifests itself
    # by a cryptic "lxc-start: no ns_cgroup option specified" when
    # trying to start containers within a container.
    # The bug seems to appear when the cgroup hierarchies are not
    # mounted on the exact same directories in the host, and in the
    # container.
    # Named, control-less cgroups are mounted with "-o name=foo"
    # (and appear as such under /proc/<pid>/cgroup) but are usually
    # mounted on a directory named "foo" (without the "name=" prefix).
    # Systemd and OpenRC (and possibly others) both create such a
    # cgroup. To avoid the aforementioned bug, we symlink "foo" to
    # "name=foo". This shouldn't have any adverse effect.
    echo $SUBSYS | grep -q ^name= && {
        NAME=$(echo $SUBSYS | sed s/^name=//)
        ln -s $SUBSYS $CGROUP/$NAME
    }
    # Likewise, on at least one system, it has been reported that
    # systemd would mount the CPU and CPU accounting controllers
    # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu"
    # but on a directory called "cpu,cpuacct" (note the inversion
    # in the order of the groups). This tries to work around it.
    [ $SUBSYS = cpuacct,cpu ] && ln -s $SUBSYS $CGROUP/cpu,cpuacct
done
# Note: as I write those lines, the LXC userland tools cannot setup
# a "sub-container" properly if the "devices" cgroup is not in its
# own hierarchy. Let's detect this and issue a warning.
grep -q :devices: /proc/1/cgroup ||
    echo "WARNING: the 'devices' cgroup should be in its own hierarchy."
grep -qw devices /proc/1/cgroup ||
    echo "WARNING: it looks like the 'devices' cgroup is not mounted."
# Now, close extraneous file descriptors.
pushd /proc/self/fd >/dev/null
for FD in *
do
    case "$FD" in
    # Keep stdin/stdout/stderr
    [012])
        ;;
    # Nuke everything else
    *)
        eval exec "$FD>&-"
        ;;
    esac
done
popd >/dev/null
# If a pidfile is still around (for example after a container restart),
# delete it so that docker can start.
rm -rf /var/run/docker.pid
# If we were given a PORT environment variable, start as a simple daemon;
# otherwise, spawn a shell as well
if [ "$PORT" ]
then
    exec docker -d -H 0.0.0.0:$PORT
else
    docker -d &
    exec bash
fi
END"""
        shutit.send(wrapdocker)
        shutit.send('chmod +x /usr/bin/wrapdocker')
        start_docker = """cat > /root/start_docker.sh << 'END'
#!/bin/bash
/root/start_ssh_server.sh
docker -d &
/usr/bin/wrapdocker
echo "SSH Server up"
echo "Docker daemon running"
END"""
        shutit.send(start_docker)
        shutit.send('chmod +x /root/start_docker.sh')
        shutit.send('popd')
        return True

    def is_installed(self, shutit):
        return False


def module():
    return docker(
        'shutit.tk.docker.docker', 0.396,
        description='docker server (communicates with host\'s docker daemon)',
        depends=['shutit.tk.setup', 'shutit.tk.ssh_server.ssh_server']
    )
# ---- Tac Tac Toe/ttt.py (promitbasak/TicTacToe-Pygame, MIT) ----
import random
import time

CELLS = 9
PLAYERS = 2
CORNERS = [1, 3, 7, 9]
NON_CORNERS = [2, 4, 6, 8]

board = {}
for i in range(9):
    board[i + 1] = 0

signs = {0: " ", 1: "X", 2: "O"}
winner = None


def rpermutation(a):
    array = a[:]
    for _ in range(len(array)):
        yield array.pop(random.randint(0, len(array) - 1))
class player:
    def __init__(self, name, mark):
        self.name = name
        self.sign = "X" if mark == 1 else "O"
        self.mark = mark
        self.playings = []
        self.antimark = mark % 2 + 1

    def getturn(self):
        showboard()
        print(f"\n{self.name}'s Turn:")
        time.sleep(1)
        print(f"\n{self.name} is giving his turn", end="")
        for _ in range(6):
            print(".", end="")
            time.sleep(0.1)
        print()
class user(player):
    def getturn(self):
        showboard()
        print(f"\n{self.name}'s Turn:")
        while True:
            turnprompt = f"Enter where do you want to put {self.sign}\nEnter corresponding cell number [1-9]: "
            turn = getintinput(turnprompt, 1, 9)
            if board[turn] == 0:
                break
            elif board[turn] == self.mark:
                print("You have already used that cell, please choose another!!!")
            else:
                print("Opponent has already used that cell, please choose another!!!")
        return turn
class easy(player):
    def getturn(self):
        super().getturn()
        # Purely random play
        turn = random.choice(getemptycells())
        return turn
class medium(player):
    def getturn(self):
        super().getturn()
        # Complete one of our own lines if possible.
        for i in range(3):
            row = [board[i * 3 + 1], board[i * 3 + 2], board[i * 3 + 3]]
            if sum(row) == 2 * self.mark and (self.mark in row):
                try:
                    return cellvalidator(i * 3 + row.index(0) + 1)
                except Exception:
                    pass
            col = [board[i + 1], board[i + 4], board[i + 7]]
            if sum(col) == 2 * self.mark and (self.mark in col):
                try:
                    return cellvalidator(i + col.index(0) * 3 + 1)
                except Exception:
                    pass
        diag = [board[1], board[5], board[9]]
        if sum(diag) == 2 * self.mark and (self.mark in diag):
            try:
                return cellvalidator(diag.index(0) * 4 + 1)
            except Exception:
                pass
        antidiag = [board[3], board[5], board[7]]
        if sum(antidiag) == 2 * self.mark and (self.mark in antidiag):
            try:
                return cellvalidator(3 + antidiag.index(0) * 2)
            except Exception:
                pass
        # Otherwise block the opponent's line.
        for i in range(3):
            row = [board[i * 3 + 1], board[i * 3 + 2], board[i * 3 + 3]]
            if sum(row) == 2 * self.antimark and (self.antimark in row):
                try:
                    return cellvalidator(i * 3 + row.index(0) + 1)
                except Exception:
                    pass
            col = [board[i + 1], board[i + 4], board[i + 7]]
            if sum(col) == 2 * self.antimark and (self.antimark in col):
                try:
                    return cellvalidator(i + col.index(0) * 3 + 1)
                except Exception:
                    pass
        diag = [board[1], board[5], board[9]]
        if sum(diag) == 2 * self.antimark and (self.antimark in diag):
            try:
                return cellvalidator(diag.index(0) * 4 + 1)
            except Exception:
                pass
        antidiag = [board[3], board[5], board[7]]
        if sum(antidiag) == 2 * self.antimark and (self.antimark in antidiag):
            try:
                return cellvalidator(3 + antidiag.index(0) * 2)
            except Exception:
                pass
        # Fall back to a random empty cell.
        turn = random.choice(getemptycells())
        return turn
class hard(player):
    def getturn(self):
        super().getturn()
        # Complete one of our own lines if possible.
        for i in range(3):
            row = [board[i * 3 + 1], board[i * 3 + 2], board[i * 3 + 3]]
            if sum(row) == 2 * self.mark and (self.mark in row):
                try:
                    return cellvalidator(i * 3 + row.index(0) + 1)
                except Exception:
                    pass
            col = [board[i + 1], board[i + 4], board[i + 7]]
            if sum(col) == 2 * self.mark and (self.mark in col):
                try:
                    return cellvalidator(i + col.index(0) * 3 + 1)
                except Exception:
                    pass
        diag = [board[1], board[5], board[9]]
        if sum(diag) == 2 * self.mark and (self.mark in diag):
            try:
                return cellvalidator(diag.index(0) * 4 + 1)
            except Exception:
                pass
        antidiag = [board[3], board[5], board[7]]
        if sum(antidiag) == 2 * self.mark and (self.mark in antidiag):
            try:
                return cellvalidator(3 + antidiag.index(0) * 2)
            except Exception:
                pass
        # Otherwise block the opponent's line.
        for i in range(3):
            row = [board[i * 3 + 1], board[i * 3 + 2], board[i * 3 + 3]]
            if sum(row) == 2 * self.antimark and (self.antimark in row):
                try:
                    return cellvalidator(i * 3 + row.index(0) + 1)
                except Exception:
                    pass
            col = [board[i + 1], board[i + 4], board[i + 7]]
            if sum(col) == 2 * self.antimark and (self.antimark in col):
                try:
                    return cellvalidator(i + col.index(0) * 3 + 1)
                except Exception:
                    pass
        diag = [board[1], board[5], board[9]]
        if sum(diag) == 2 * self.antimark and (self.antimark in diag):
            try:
                return cellvalidator(diag.index(0) * 4 + 1)
            except Exception:
                pass
        antidiag = [board[3], board[5], board[7]]
        if sum(antidiag) == 2 * self.antimark and (self.antimark in antidiag):
            try:
                return cellvalidator(3 + antidiag.index(0) * 2)
            except Exception:
                pass
        # Prefer an empty corner adjacent to an occupied corner.
        for i in list(rpermutation(CORNERS)):
            if not board[i]:
                if sum([board[j] for j in getadjacentcorners(i)]):
                    try:
                        return cellvalidator(i)
                    except Exception:
                        pass
        if not board[5]:
            return 5
        if board[5] == self.mark:
            for i in list(rpermutation(NON_CORNERS)):
                if board[i] == self.mark:
                    try:
                        return cellvalidator(CELLS + 1 - i)
                    except Exception:
                        pass
        try:
            return cellvalidator(random.choice([i for i in getemptycells() if i in CORNERS]))
        except Exception:
            pass
        return cellvalidator(random.choice(getemptycells()))
class deadly(player):
def getturn(self):
super().getturn()
################# Priority ##################
# Aggressive
for i in range(3):
row = [board[i * 3 + 1], board[i * 3 + 2], board[i * 3 + 3]]
if sum(row) == 2 * self.mark and (self.mark in row):
try:
# print("1row")
return cellvalidator(i * 3 + row.index(0) + 1)
except:
pass
col = [board[i + 1], board[i + 4], board[i + 7]]
if sum(col) == 2 * self.mark and (self.mark in col):
try:
# print("1col")
return cellvalidator(i + col.index(0) * 3 + 1)
except:
pass
diag = [board[1], board[5], board[9]]
if sum(diag) == 2 * self.mark and (self.mark in diag):
try:
# print("1diag")
return cellvalidator(diag.index(0) * 4 + 1)
except:
pass
antidiag = [board[3], board[5], board[7]]
if sum(antidiag) == 2 * self.mark and (self.mark in antidiag):
try:
# print("1antidiag")
return cellvalidator(3 + antidiag.index(0) * 2)
except:
pass
# Defensive
for i in range(3):
row = [board[i * 3 + 1], board[i * 3 + 2], board[i * 3 + 3]]
if sum(row) == 2 * self.antimark and (self.antimark in row):
try:
# print("row")
return cellvalidator(i * 3 + row.index(0) + 1)
except:
pass
col = [board[i + 1], board[i + 4], board[i + 7]]
if sum(col) == 2 * self.antimark and (self.antimark in col):
try:
# print("col")
return cellvalidator(i + col.index(0) * 3 + 1)
except:
pass
diag = [board[1], board[5], board[9]]
if sum(diag) == 2 * self.antimark and (self.antimark in diag):
try:
# print("diag")
return cellvalidator(diag.index(0) * 4 + 1)
except:
pass
antidiag = [board[3], board[5], board[7]]
if sum(antidiag) == 2 * self.antimark and (self.antimark in antidiag):
try:
# print("antidiag")
return cellvalidator(3 + antidiag.index(0) * 2)
except:
pass
        ########################################
        emptycells = getemptycells()
        mycells = self.getmycells()
        oppenentcells = self.getoppenentcells()
        # Only move Defensive
        if len(emptycells) == 8:
            if sum([board[i] for i in CORNERS]) != 0:
                return 5
            elif 5 in oppenentcells:
                return random.choice(CORNERS)
        # Only move 2 Defensive
        if len(emptycells) % 2 == 0 and 5 in mycells:
            for i in list(rpermutation(NON_CORNERS)):
                try:
                    return cellvalidator(i)
                except:
                    pass
        # Aggressive
        if len(emptycells) == 9:
            return random.choice(CORNERS + [5] + [5])
        if len(emptycells) == 7 and (5 in mycells) and sum([board[i] for i in CORNERS]) != 0:
            for i in CORNERS:
                if i in oppenentcells:
                    try:
                        return cellvalidator(CELLS + 1 - i)
                    except:
                        pass
        if len(emptycells) % 2 != 0:
            if sum([board[i] for i in CORNERS]) != 0:
                for i in list(rpermutation(CORNERS)):
                    if not board[i] and sum([board[i] for i in getadjacentcorners(i)]):
                        adjcells = getadjacentcells(i)
                        if not (adjcells[0] in oppenentcells or adjcells[1] in oppenentcells):
                            try:
                                return cellvalidator(i)
                            except:
                                pass
            else:
                try:
                    # print("corner")
                    return cellvalidator(random.choice(CORNERS))
                except:
                    pass
            for i in list(rpermutation(CORNERS)):
                if not board[i]:
                    if sum([board[i] for i in getadjacentcorners(i)]):
                        try:
                            return cellvalidator(i)
                        except:
                            pass
            if not board[5]:
                return 5
        # Adjacent corners
        for i in list(rpermutation(CORNERS)):
            if not board[i]:
                if sum([board[i] for i in getadjacentcorners(i)]):
                    try:
                        return cellvalidator(i)
                    except:
                        pass
        # Non Corners for mid
        if len(emptycells) % 2 == 0:
            if board[5] == self.mark:
                for i in list(rpermutation(NON_CORNERS)):
                    if board[i] == self.mark:
                        # print("last but one")
                        try:
                            return cellvalidator(CELLS + 1 - i)
                        except:
                            pass
            # Corners
            try:
                # print("corner")
                return cellvalidator(random.choice([i for i in getemptycells() if i in CORNERS]))
            except:
                pass
        if board[5] == self.mark:
            for i in list(rpermutation(NON_CORNERS)):
                if board[i] == self.mark:
                    # print("last but one")
                    try:
                        return cellvalidator(CELLS + 1 - i)
                    except:
                        pass
        # print("last")
        return cellvalidator(random.choice(getemptycells()))
    def getmycells(self):
        return [i for i in range(1, 10) if board[i] == self.mark]

    def getoppenentcells(self):
        return [i for i in range(1, 10) if board[i] == self.antimark]
def getintinput(prompt, lower, upper):
    while True:
        print()
        try:
            num = int(input(prompt))
        except:
            print("You must enter one number from the options shown above!!!")
        else:
            if num < lower or num > upper:
                print("You must choose a number between {} and {}!!!".format(lower, upper))
            else:
                break
    return num
def headline():
    print("""
    ==============================
              TIC TAC TOE
    ==============================
    """)
    welcome = "Welcome to the game..."
    for i in welcome:
        print(i, end="")
        time.sleep(0.1)
    print("\n")
    while True:
        name = input("Enter your name: ")
        if len(name) != 0:
            break
        else:
            print("You must enter a name to play!!!")
    mark = getintinput("Choose your mark\n1. Cross (X) 2. Round (O): ", 1, 2)
    return name, mark
def init():
    diff = getintinput("Choose your difficulty\n1. Easy 2. Medium 3. Hard 4. Deadly: ", 1, 4)
    return diff
def showboard():
    print()
    print("   |   |   ")
    print(f" {signs[board[1]]} | {signs[board[2]]} | {signs[board[3]]} ")
    print("   |   |   ")
    for _ in range(11):
        print("—", end="")
    print("\n   |   |   ")
    print(f" {signs[board[4]]} | {signs[board[5]]} | {signs[board[6]]} ")
    print("   |   |   ")
    for _ in range(11):
        print("—", end="")
    print("\n   |   |   ")
    print(f" {signs[board[7]]} | {signs[board[8]]} | {signs[board[9]]} ")
    print("   |   |   ")
def getemptycells():
    return [i for i in range(1, 10) if board[i] == 0]


def cellvalidator(cell):
    if board[cell] == 0:
        return cell
    else:
        # print(f"Cell {cell} is occupied!!!")
        raise Exception()
def getadjacentcorners(cell):
    adjacent = CORNERS[:]
    adjacent.remove(cell)
    adjacent.remove(CELLS + 1 - cell)
    return adjacent


def getadjacentcells(cell):
    if cell < 5:
        return [cell * 2, 5 - cell]
    else:
        return [15 - cell, cell - 1]
def solve():
    for i in range(3):
        if board[i * 3 + 1] == board[i * 3 + 2] == board[i * 3 + 3] and board[i * 3 + 1] != 0:
            return board[i * 3 + 1]
        elif board[i + 1] == board[i + 4] == board[i + 7] and board[i + 1] != 0:
            return board[i + 1]
    if board[1] == board[5] == board[9] and board[1] != 0:
        return board[1]
    elif board[3] == board[5] == board[7] and board[3] != 0:
        return board[3]
    try:
        list(board.values()).index(0)
    except:
        return -1
def marker(cell, mark):
    # The original check `cell < 1 and cell > 10` could never be True;
    # valid cells are 1..9, so reject anything outside that range.
    if cell < 1 or cell > 9:
        print(f"Cell: {cell} does not exist!!!")
        raise Exception()
    elif board[cell] != 0:
        print(f"Cell: {cell} is occupied!!!")
        raise Exception()
    else:
        board[cell] = mark
def getwinner(winner, human, comp):
    showboard()
    if winner == -1:
        print("\nMatch Drawn", end="")
        for _ in range(3):
            print(".", end="")
            time.sleep(0.2)
    elif winner:
        if human.mark == winner:
            print(f"\n{human.name} Wins!", end="")
        else:
            print(f"\n{comp.name} Wins!", end="")
        for _ in range(15):
            print("!", end="")
            time.sleep(0.05)
    key = getintinput("\nDo you want to play again?\n1. Yes 2. No: ", 1, 2)
    board = {}
    for i in range(9):
        board[i + 1] = 0
    winner = None
    return key, board, winner
############### GAME ##################
name, mark = headline()
while True:
    diff = init()
    human = user(name, mark)
    if diff == 1:
        comp = easy("Computer", mark % 2 + 1)
    elif diff == 2:
        comp = medium("Computer", mark % 2 + 1)
    elif diff == 3:
        comp = hard("Computer", mark % 2 + 1)
    else:
        comp = deadly("Computer", mark % 2 + 1)
    if random.randint(0, 1):
        players = [human, comp]
    else:
        players = [comp, human]
    while True:
        for p in players:
            marker(p.getturn(), p.mark)
            winner = solve()
            if winner:
                break
        if winner:
            break
    key, board, winner = getwinner(winner, human, comp)
    if key == 2:
        print("")
        endstr = "Good Bye..."
        for i in endstr:
            print(i, end="")
            time.sleep(0.1)
        break
############### END ###################
| 33.359929 | 113 | 0.427212 | 2,051 | 18,815 | 3.912726 | 0.090687 | 0.048598 | 0.023925 | 0.016449 | 0.669907 | 0.651589 | 0.618193 | 0.591651 | 0.573084 | 0.541308 | 0 | 0.03599 | 0.441775 | 18,815 | 563 | 114 | 33.419183 | 0.727887 | 0.035982 | 0 | 0.681319 | 0 | 0.008791 | 0.066072 | 0.00345 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046154 | false | 0.079121 | 0.004396 | 0.006593 | 0.2 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a43b7b785ea2069cf30bf58f80e9cd62796231df | 387 | py | Python | setup.py | exord/cobmcmc | 162ba8d4fab35aa44bc8a4828eb51e25df13c4e2 | [
"MIT"
] | null | null | null | setup.py | exord/cobmcmc | 162ba8d4fab35aa44bc8a4828eb51e25df13c4e2 | [
"MIT"
] | null | null | null | setup.py | exord/cobmcmc | 162ba8d4fab35aa44bc8a4828eb51e25df13c4e2 | [
"MIT"
] | null | null | null | from distutils.core import setup
setup(name='cobmcmc',
description='Change-of-Basis, '
'a flexible Metropolis-Hastings MCMC algorithm.',
version='0.1.0',
author='Rodrigo F. Diaz',
author_email='rodrigo.diaz@unige.ch',
url='',
long_description='**COMING SOON**',
packages=['cobmcmc'],
requires=['numpy', 'scipy']
)
| 27.642857 | 67 | 0.589147 | 42 | 387 | 5.380952 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010345 | 0.250646 | 387 | 13 | 68 | 29.769231 | 0.768966 | 0 | 0 | 0 | 0 | 0 | 0.369509 | 0.054264 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a43edff2bca252c202547a4220b20784da0fb45a | 2,757 | py | Python | Assignment_3.py | Lee-Lilly/Embedded-python | 57a3da93be9ce27c5f196b76a3ad83340e44451c | [
"MIT"
] | null | null | null | Assignment_3.py | Lee-Lilly/Embedded-python | 57a3da93be9ce27c5f196b76a3ad83340e44451c | [
"MIT"
] | null | null | null | Assignment_3.py | Lee-Lilly/Embedded-python | 57a3da93be9ce27c5f196b76a3ad83340e44451c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# ---
# # Python Basics - Assignment 3 ToDo
#
# ---
# **Exercise 1**
#
# **Task 1** Define a function called **repeat_stuff** that takes in two inputs, **stuff**, and **num_repeats**.
#
# We will want to make this function print a string with stuff repeated num_repeats amount of times. For now, only put an empty print statement inside the function.
#
# In[6]:
def repeat_stuff(stuff, num_repeats):
    for i in range(num_repeats):
        print(stuff)


repeat_stuff(input("Input a word: "), int(input("Repeat how many times? ")))
# ---
# **Task 2** Outside of the function, call repeat_stuff.
#
# You can use the value "Row " for stuff and 3 for num_repeats.
# Change the print statement inside repeat_stuff to a **return** statement instead. It should **return stuff*num_repeats**
# Note: Multiplying a string just makes a new string with the old one repeated! For example: "na" * 6 -> "nananananana"
# Then, give the parameter **num_repeats** a default value of **10**.
# In[7]:
def repeat_stuff(stuff, num_repeats=10):
    return stuff * num_repeats


repeat_stuff(input("Input a word: "), int(input("Repeat how many times? ")))
# ---
# **Task 3** Add **repeat_stuff("Row ", 3)** and the string **"Your Boat. " ** together and save the result to a variable called **lyrics**.
#
# Create a variable called **song** and assign it the value of **repeat_stuff** called with the single input **lyrics**.
#
# Print song.
#
# Good job!!
# In[14]:
def repeat_stuff(stuff, num_repeats):
    return stuff * num_repeats


lyrics = repeat_stuff("Row ", 3) + "Your Boat! \n"
song = repeat_stuff(lyrics, 3)
print(song)
# ---
# **Exercise 2**
#
# The program reads the height in centimeters and then converts the height to feet and inches.
#
# Problem Solution
#
# 1. Take the height in centimeters and store it in a variable.
# 2. Convert the height in centimeters into inches and feet.
# 3. Print the length in inches and feet.
# 4. Exit.
#
# Hint!
# inches=0.394*cm
# feet=0.0328*cm
# In[1]:
def cm_feet(figure):
    return round(figure / 30.48, 2)


height = float(input("Enter your height in cm: "))
res = cm_feet(height)
print("You are", res, "feet tall.")
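# The solution above reports feet only; the exercise also asks for inches. A minimal
# sketch of the full conversion, using the exercise's own hint factor
# (inches = 0.394 * cm) — the function name `cm_to_feet_inches` is illustrative,
# not part of the assignment:

# In[2]:


def cm_to_feet_inches(cm):
    # Hint factor from the exercise: inches = 0.394 * cm
    total_inches = 0.394 * cm
    feet = int(total_inches // 12)          # whole feet
    inches = round(total_inches - feet * 12, 1)  # leftover inches
    return feet, inches


print(cm_to_feet_inches(180))  # 180 cm -> (5, 10.9)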
# ---
# **Exercise 3**
#
# This is a Python Program to check whether a number is positive or negative.
#
# Problem Solution
#
# 1. Take the value of the integer and store in a variable.
# 2. Use an if statement to determine whether the number is positive or negative.
# 3. Exit.
# In[1]:
def evaluate(num):
    if num < 0:
        print(num, "is negative.")
    elif num > 0:
        print(num, "is positive.")
    elif num == 0:
        print(num, "is zero.")


num = int(input("Enter an integer: "))
evaluate(num)
| 24.184211 | 164 | 0.665579 | 427 | 2,757 | 4.238876 | 0.344262 | 0.072928 | 0.049724 | 0.031492 | 0.241989 | 0.154144 | 0.118232 | 0.118232 | 0.118232 | 0.118232 | 0 | 0.020464 | 0.202394 | 2,757 | 113 | 165 | 24.39823 | 0.802638 | 0.628219 | 0 | 0.269231 | 0 | 0 | 0.192746 | 0 | 0 | 0 | 0 | 0.00885 | 0 | 1 | 0.192308 | false | 0 | 0 | 0.115385 | 0.307692 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
a4412e6d819800bfa4c95e8966db3d3ce6218c2c | 875 | py | Python | cogs/moderation.py | robonone/RoboNone | 8795367ee38fac2110b4b5fc6d1f733a07f7caa4 | [
"MIT"
] | 6 | 2019-03-16T05:55:14.000Z | 2019-04-05T10:10:37.000Z | cogs/moderation.py | robonone/RoboNone | 8795367ee38fac2110b4b5fc6d1f733a07f7caa4 | [
"MIT"
] | 6 | 2019-03-09T07:29:27.000Z | 2019-04-07T17:57:14.000Z | cogs/moderation.py | peteranytime/RoboNone | 8795367ee38fac2110b4b5fc6d1f733a07f7caa4 | [
"MIT"
] | 3 | 2019-06-07T11:08:33.000Z | 2021-02-13T07:33:10.000Z | import argparse
import copy
import datetime
import re
import shlex
from typing import Union
import time
import discord
from discord.ext import commands
class Arguments(argparse.ArgumentParser):
def error(self, message):
raise RuntimeError(message)
def setup(bot):
bot.add_cog(Moderation(bot))
def is_owner(ctx):
return ctx.author == ctx.guild.owner_id
class Moderation(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command(hidden=True)
@commands.check(is_owner)
async def sudo(self, ctx, who: Union[discord.Member, discord.User], *, command: str):
"""Run a command as another user."""
msg = copy.copy(ctx.message)
msg.author = who
msg.content = ctx.prefix + command
new_ctx = await self.bot.get_context(msg, cls=type(ctx))
await self.bot.invoke(new_ctx)
| 22.435897 | 89 | 0.684571 | 121 | 875 | 4.859504 | 0.487603 | 0.047619 | 0.040816 | 0.05102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210286 | 875 | 38 | 90 | 23.026316 | 0.850941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.333333 | 0.037037 | 0.592593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a4426592f5f9a887ec3c6cdd36ebe20492020363 | 484 | py | Python | magichour/validate/splitter.py | Lab41/magichour | 69c03e87fe5d067068f298a5cf9e005a97b337db | [
"Apache-2.0"
] | 34 | 2016-01-10T22:03:10.000Z | 2020-06-29T08:06:58.000Z | magichour/validate/splitter.py | Lab41/magichour | 69c03e87fe5d067068f298a5cf9e005a97b337db | [
"Apache-2.0"
] | 2 | 2017-07-24T16:45:42.000Z | 2017-10-20T20:18:49.000Z | magichour/validate/splitter.py | Lab41/magichour | 69c03e87fe5d067068f298a5cf9e005a97b337db | [
"Apache-2.0"
] | 20 | 2015-12-14T19:45:02.000Z | 2019-07-10T14:51:48.000Z | from random import shuffle
def splitRDD(rdd, weights=[0.8, 0.1, 0.1]):
train, validation, test = rdd.randomSplit(weights=weights)
return train, validation, test
def split(data, weights=[0.8, 0.1, 0.1]):
d = [x for x in data]
shuffle(d)
idx_train = len(d) * weights[0]
idx_validation = len(d) * (weights[0] + weights[1])
train = d[:idx_train]
validation = d[idx_train:idx_validation]
test = d[idx_validation:]
return train, validation, test
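# A quick sanity check of the ratio logic in split(), on a standalone copy of the
# function (the int() casts are needed because len(d) * weight is a float; the
# 100-element input is illustrative):

from random import shuffle


def split(data, weights=(0.8, 0.1, 0.1)):
    d = list(data)
    shuffle(d)
    idx_train = int(len(d) * weights[0])
    idx_validation = int(len(d) * (weights[0] + weights[1]))
    return d[:idx_train], d[idx_train:idx_validation], d[idx_validation:]


train, validation, test = split(range(100))
print(len(train), len(validation), len(test))  # 80 10 10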
| 26.888889 | 62 | 0.650826 | 75 | 484 | 4.12 | 0.32 | 0.10356 | 0.184466 | 0.064725 | 0.084142 | 0.084142 | 0.084142 | 0 | 0 | 0 | 0 | 0.039063 | 0.206612 | 484 | 17 | 63 | 28.470588 | 0.765625 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a442c144831d337a7da0927dc8fc3b093e18af2f | 1,085 | py | Python | newton.py | Robokkie/NA | bce44ecd6f64ecea79d256cad4115d57e80e9576 | [
"MIT"
] | null | null | null | newton.py | Robokkie/NA | bce44ecd6f64ecea79d256cad4115d57e80e9576 | [
"MIT"
] | null | null | null | newton.py | Robokkie/NA | bce44ecd6f64ecea79d256cad4115d57e80e9576 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
import math
from function import Function
import function
if __name__ == '__main__':
    print "please input the numbers a, b, c, d for the function"
    a, b, c, d = map(int, raw_input('input a,b,c,d=').split(","))
    func = Function(a, b, c, d)
    func.showinfo()
    while True:
        print "input x,r | x<r & f(x)*f(r)<0"
        x, r = map(float, raw_input('input x,r=').split(","))
        # Check the bracket as the prompt states: x < r and a sign change on [x, r].
        if x < r and func.fnf(x) * func.fnf(r) < 0:
            print "correct"
            break
        else:
            print "wrong numbers (x>r or f(x)*f(r)>=0)"
    th = float(raw_input('please input threshold:'))
    k = 0
    print "number of process, approximation, diff"
    print("{0:4d},{1:8.5f},{2}".format(k, x, "---"))
    approximations = []
    while True:
        k += 1
        xn = x - func.fnf(x) / func.diff(x)
        print("{0:4d},{1:8.5f},{2:8.5f}".format(k, xn, xn - x))
        temp = (k, x)
        approximations.append(temp)
        if math.fabs(xn - x) < th:
            break
        else:
            x = xn
    print "number of process = {0:4d}".format(k)
    print "the solution = {0:10.6f}".format(xn)
    function.write_csv("newton_results.csv", approximations)
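# The script above depends on the external Function class; the core iteration
# x_{n+1} = x_n - f(x_n)/f'(x_n) can be sketched standalone (the helper name
# `newton` and the sqrt(2) example are illustrative, not part of the script):

import math


def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method: iterate until successive approximations differ by < tol."""
    x = x0
    for _ in range(max_iter):
        xn = x - f(x) / df(x)
        if math.fabs(xn - x) < tol:
            return xn
        x = xn
    raise RuntimeError("did not converge")


# Solve x**2 - 2 = 0 from x0 = 1.0; converges quadratically to sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(round(root, 6))  # 1.414214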
a4441378c3135c8d9a9f00c4a3a64345ab77f794 | 6,482 | py | Python | fourq-master/impl/compare.py | rikard-sics/group-oscore-key | 463163334e42da86ecf26374ed89e2cb750f6e18 | [
"BSD-3-Clause"
] | 1 | 2022-01-07T13:46:08.000Z | 2022-01-07T13:46:08.000Z | fourq-master/impl/compare.py | rikard-sics/group-oscore-key | 463163334e42da86ecf26374ed89e2cb750f6e18 | [
"BSD-3-Clause"
] | null | null | null | fourq-master/impl/compare.py | rikard-sics/group-oscore-key | 463163334e42da86ecf26374ed89e2cb750f6e18 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
from random import getrandbits
from time import time
from fields import GFp2, GFp25519, p1271, p25519
import curve4q
import curve25519
# Adjust these if you want more/fewer samples
FIELD_TEST_LOOPS = 1000
DH_TEST_LOOPS = 100
def compare_fields():
    base_corpus = [getrandbits(256) for i in range(FIELD_TEST_LOOPS)]
    corpus1271 = [(x % p1271, (x >> 128) % p1271) for x in base_corpus]
    corpus25519 = [x % p25519 for x in base_corpus]

    def speedtest(f):
        # Test in GFp2
        tic = time()
        for i in range(0, FIELD_TEST_LOOPS-1):
            f(GFp2, i, corpus1271)
        toc = time()
        t1271 = 1000 * (toc - tic)

        # Test in GFp25519
        tic = time()
        for i in range(0, FIELD_TEST_LOOPS-1):
            f(GFp25519, i, corpus25519)
        toc = time()
        t25519 = 1000 * (toc - tic)
        return (t1271, t25519)

    tests = {
        "add": lambda field, i, corpus: field.add(corpus[i], corpus[i+1]),
        "mul": lambda field, i, corpus: field.mul(corpus[i], corpus[i+1]),
        "sqr": lambda field, i, corpus: field.sqr(corpus[i]),
        "inv": lambda field, i, corpus: field.inv(corpus[i])
    }

    print "===== Time for {} field operations =====".format(FIELD_TEST_LOOPS)
    print
    print "{:5s} {:>8s} {:>8s}".format("Op", "GFp2", "GFp25519")
    for name in tests:
        (t1271, t25519) = speedtest(tests[name])
        print "{:5s} {:8.2f}ms {:8.2f}ms".format(name, t1271, t25519)
    print
def compare_ops():
    opcounts = {}
    m = getrandbits(256)
    G = curve4q.AffineToR1(curve4q.Gx, curve4q.Gy)
    k = '77076d0a7318a57d3c16c17251b26645df4c2f87ebc0992ab177fba51db92c2a'.decode('hex')
    u = '0900000000000000000000000000000000000000000000000000000000000000'.decode('hex')
    G392 = curve4q.MUL_endo(392, G)
    T_windowed = curve4q.table_windowed(G)
    T_endo = curve4q.table_endo(G)
    T392_windowed = curve4q.table_windowed(G392)
    T392_endo = curve4q.table_endo(G392)

    # R1toR2
    GFp2.ctr_reset()
    Q = curve4q.R1toR2(G)
    opcounts["R1toR2"] = GFp2.ctr()

    # R1toR3
    GFp2.ctr_reset()
    Q = curve4q.R1toR3(G)
    opcounts["R1toR3"] = GFp2.ctr()

    # R2toR4
    G2 = curve4q.R1toR2(G)
    GFp2.ctr_reset()
    Q = curve4q.R2toR4(G2)
    opcounts["R2toR4"] = GFp2.ctr()

    # ADD_core
    P = curve4q.R1toR3(G)
    Q = curve4q.R1toR2(G)
    GFp2.ctr_reset()
    R = curve4q.ADD_core(P, Q)
    opcounts["ADD_core"] = GFp2.ctr()

    # ADD
    P = G
    Q = curve4q.R1toR2(G)
    GFp2.ctr_reset()
    R = curve4q.ADD(P, Q)
    opcounts["ADD"] = GFp2.ctr()

    # DBL
    GFp2.ctr_reset()
    Q = curve4q.DBL(G)
    opcounts["DBL"] = GFp2.ctr()

    # MUL_windowed
    GFp2.ctr_reset()
    Q = curve4q.MUL_windowed(m, G)
    opcounts["MUL_windowed"] = GFp2.ctr()

    # MUL_windowed_fixed
    GFp2.ctr_reset()
    Q = curve4q.MUL_windowed(m, G, table=T_windowed)
    opcounts["MUL_windowed_fixed"] = GFp2.ctr()

    # Phi
    GFp2.ctr_reset()
    phiP = curve4q.phi(G)
    opcounts["phi"] = GFp2.ctr()

    # Psi
    GFp2.ctr_reset()
    psiP = curve4q.psi(G)
    opcounts["psi"] = GFp2.ctr()

    # MUL_endo
    GFp2.ctr_reset()
    mP = curve4q.MUL_endo(m, G)
    opcounts["MUL_endo"] = GFp2.ctr()

    # MUL_endo_fixed
    GFp2.ctr_reset()
    Q = curve4q.MUL_endo(m, G, table=T_endo)
    opcounts["MUL_endo_fixed"] = GFp2.ctr()

    # DH_windowed
    GFp2.ctr_reset()
    mP = curve4q.DH_windowed(m, G[:2])
    opcounts["DH_windowed"] = GFp2.ctr()

    # DH_windowed_fixed
    GFp2.ctr_reset()
    mP = curve4q.DH_windowed(m, G[:2], table=T392_windowed)
    opcounts["DH_windowed_fixed"] = GFp2.ctr()

    # DH_endo
    GFp2.ctr_reset()
    mP = curve4q.DH_endo(m, G[:2])
    opcounts["DH_endo"] = GFp2.ctr()

    # DH_endo_fixed
    GFp2.ctr_reset()
    mP = curve4q.DH_endo(m, G[:2], table=T392_endo)
    opcounts["DH_endo_fixed"] = GFp2.ctr()

    # x25519
    GFp25519.ctr_reset()
    ku = curve25519.x25519(k, u)
    opcounts["x25519"] = GFp25519.ctr()

    rows = ["R1toR2", "R1toR3", "R2toR4", "ADD_core", "ADD", "DBL",
            "phi", "psi", "psiphi",
            "MUL_windowed", "MUL_windowed_fixed", "MUL_endo", "MUL_endo_fixed",
            "DH_windowed", "DH_windowed_fixed", "DH_endo", "DH_endo_fixed",
            "x25519"]

    print "===== Field operation count ====="
    print
    print "{:18s} {:>7s} {:>7s} {:>7s} {:>7s}".format("", "M", "S", "A", "I")
    for name in rows:
        if name not in opcounts:
            continue
        opctr = opcounts[name]
        print "{:20s} {:7.1f} {:7.1f} {:7.1f} {:7.1f}".format(name, opctr[2], opctr[1], opctr[0], opctr[3])
    print
def compare_time():
    G = (curve4q.Gx, curve4q.Gy)
    u = '09'.ljust(64, '0').decode('hex')
    G392 = curve4q.MUL_endo(392, curve4q.AffineToR1(curve4q.Gx, curve4q.Gy))
    T392_windowed = curve4q.table_windowed(G392)
    T392_endo = curve4q.table_endo(G392)

    coeff4q = [getrandbits(256) for i in range(DH_TEST_LOOPS)]
    coeff25519 = ["{:x}".format(m).ljust(64, '0').decode('hex') for m in coeff4q]

    time4q_windowed = 0
    time4q_windowed_fixed = 0
    time4q_endo = 0
    time4q_endo_fixed = 0
    time25519 = 0
    for i in range(len(coeff4q)):
        tic = time()
        mP = curve4q.DH_windowed(coeff4q[i], G)
        toc = time()
        time4q_windowed += toc - tic

        tic = time()
        mP = curve4q.DH_windowed(coeff4q[i], G, table=T392_windowed)
        toc = time()
        time4q_windowed_fixed += toc - tic

        tic = time()
        mP = curve4q.DH_endo(coeff4q[i], G)
        toc = time()
        time4q_endo += toc - tic

        tic = time()
        mP = curve4q.DH_endo(coeff4q[i], G, table=T392_endo)
        toc = time()
        time4q_endo_fixed += toc - tic

        tic = time()
        ku = curve25519.x25519(coeff25519[i], u)
        toc = time()
        time25519 += toc - tic

    print "===== Time for {} Diffie-Hellman operations =====".format(DH_TEST_LOOPS)
    print
    print "{:<28s} {:>7.2f}ms".format("Curve4Q (windowed)", 1000 * time4q_windowed)
    print "{:<28s} {:>7.2f}ms".format("Curve4Q (win fixed base)", 1000 * time4q_windowed_fixed)
    print "{:<28s} {:>7.2f}ms".format("Curve4Q (endomorphisms)", 1000 * time4q_endo)
    print "{:<28s} {:>7.2f}ms".format("Curve4Q (endo fixed base)", 1000 * time4q_endo_fixed)
    print "{:<28s} {:>7.2f}ms".format("Curve25519", 1000 * time25519)


if __name__ == "__main__":
    compare_fields()
    compare_ops()
    compare_time()
| 28.808889 | 107 | 0.597346 | 869 | 6,482 | 4.294591 | 0.155351 | 0.060021 | 0.051447 | 0.024384 | 0.426581 | 0.304126 | 0.229368 | 0.164523 | 0.164523 | 0.128081 | 0 | 0.119101 | 0.244832 | 6,482 | 224 | 108 | 28.9375 | 0.64331 | 0.03934 | 0 | 0.271605 | 0 | 0.006173 | 0.145504 | 0.020625 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.030864 | null | null | 0.104938 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4463b95c1d4eeb9a2a5beb88b29bf8801a14e7e | 1,005 | py | Python | var/spack/repos/builtin/packages/py-flit-core/package.py | zygyz/spack | e0f044561e9b3cb384bfb0f364555e5875b1a492 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 348 | 2015-12-30T04:05:54.000Z | 2022-02-21T10:57:53.000Z | var/spack/repos/builtin/packages/py-flit-core/package.py | LLNL/spack | 0b2507053e58ef4058631c10e5d9bf7ea2222c20 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 3,994 | 2015-12-09T09:43:00.000Z | 2017-11-04T02:46:39.000Z | var/spack/repos/builtin/packages/py-flit-core/package.py | hfp/spack | 0b2507053e58ef4058631c10e5d9bf7ea2222c20 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 359 | 2015-12-16T18:25:55.000Z | 2017-11-02T14:51:13.000Z | # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import glob
import os
import zipfile

from spack import *


class PyFlitCore(PythonPackage):
    """Distribution-building parts of Flit."""

    homepage = "https://github.com/takluyver/flit"
    url = "https://github.com/takluyver/flit/archive/refs/tags/3.3.0.tar.gz"

    maintainers = ['takluyver']

    version('3.3.0', sha256='f5340b268563dd408bf8e2df6dbc8d4d08bc76cdff0d8c7f8a4be94e5f01f22f')

    depends_on('python@3.4:', type=('build', 'run'))
    depends_on('py-toml', type=('build', 'run'))

    def build(self, spec, prefix):
        with working_dir('flit_core'):
            python('build_dists.py')

    def install(self, spec, prefix):
        wheel = glob.glob(os.path.join('flit_core', 'dist', '*.whl'))[0]
        with zipfile.ZipFile(wheel) as f:
            f.extractall(python_purelib)
| 30.454545 | 95 | 0.681592 | 129 | 1,005 | 5.255814 | 0.658915 | 0.032448 | 0.041298 | 0.067847 | 0.079646 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067389 | 0.173134 | 1,005 | 32 | 96 | 31.40625 | 0.748496 | 0.224876 | 0 | 0 | 0 | 0.055556 | 0.324675 | 0.083117 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |