hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3cd083fcbe9d9a5b10140d9dc0aeef9db24a13dd | 309 | py | Python | ocrd/validator/ocrd_tool_validator.py | saw-leipzig/pyocrd | a346f472d3dfff1fecc14eda7b8088ab882ee8ca | [
"Apache-2.0"
] | null | null | null | ocrd/validator/ocrd_tool_validator.py | saw-leipzig/pyocrd | a346f472d3dfff1fecc14eda7b8088ab882ee8ca | [
"Apache-2.0"
] | null | null | null | ocrd/validator/ocrd_tool_validator.py | saw-leipzig/pyocrd | a346f472d3dfff1fecc14eda7b8088ab882ee8ca | [
"Apache-2.0"
] | null | null | null | from ..constants import OCRD_TOOL_SCHEMA
from .json_validator import JsonValidator
#
# -------------------------------------------------
#
class OcrdToolValidator(JsonValidator):
@staticmethod
def validate_json(obj, schema=OCRD_TOOL_SCHEMA):
return JsonValidator.validate_json(obj, schema)
| 23.769231 | 55 | 0.650485 | 29 | 309 | 6.689655 | 0.551724 | 0.082474 | 0.14433 | 0.216495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12945 | 309 | 12 | 56 | 25.75 | 0.72119 | 0.158576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
3cd8bdb0825abd530d408097519713a5a2607861 | 379 | py | Python | slybot/slybot/plugins/scrapely_annotations/extraction/__init__.py | hackrush01/portia | c7414034361fecada76e1693666674c274b0421a | [
"BSD-3-Clause"
] | 6,390 | 2015-01-01T17:05:13.000Z | 2022-03-31T08:20:12.000Z | slybot/slybot/plugins/scrapely_annotations/extraction/__init__.py | hackrush01/portia | c7414034361fecada76e1693666674c274b0421a | [
"BSD-3-Clause"
] | 442 | 2015-01-04T17:32:20.000Z | 2022-03-15T21:21:23.000Z | slybot/slybot/plugins/scrapely_annotations/extraction/__init__.py | hackrush01/portia | c7414034361fecada76e1693666674c274b0421a | [
"BSD-3-Clause"
] | 1,288 | 2015-01-09T05:54:20.000Z | 2022-03-31T03:21:51.000Z | from .container_extractors import (
BaseContainerExtractor, ContainerExtractor, RepeatedContainerExtractor,
RepeatedFieldsExtractor
)
from .extractors import SlybotIBLExtractor, TemplatePageMultiItemExtractor
from .pageparsing import (
parse_template, SlybotTemplatePage, SlybotTemplatePageParser
)
from .region_extractors import BaseExtractor, SlybotRecordExtractor
| 37.9 | 75 | 0.85752 | 26 | 379 | 12.384615 | 0.692308 | 0.149068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102902 | 379 | 9 | 76 | 42.111111 | 0.947059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.444444 | 0 | 0.444444 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
3ce9b549214d32dee8d4c1e3f89bccf7ac3443fb | 211 | py | Python | pnz.py | Prashant269/python | facf2683c20ace046e8c2adcd7fe96aad609331d | [
"bzip2-1.0.6"
] | null | null | null | pnz.py | Prashant269/python | facf2683c20ace046e8c2adcd7fe96aad609331d | [
"bzip2-1.0.6"
] | null | null | null | pnz.py | Prashant269/python | facf2683c20ace046e8c2adcd7fe96aad609331d | [
"bzip2-1.0.6"
] | null | null | null | # To check whether a number is positive, negative, or zero
num = int(input("enter any number"))
if num > 0:
    print(num, "is a positive number")
elif num < 0:
    print(num, "is a negative number")
else:
    print(num, "is zero")
| 23.444444 | 45 | 0.668246 | 37 | 211 | 3.810811 | 0.486486 | 0.170213 | 0.212766 | 0.234043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01227 | 0.227488 | 211 | 8 | 46 | 26.375 | 0.852761 | 0.203791 | 0 | 0 | 0 | 0 | 0.408805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.428571 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
59951424e7b4e3ebbc716cd31c20625cc26d750b | 33 | py | Python | dexter/phys/phys.py | WJM96/dexter | ca338e654bf37b7b9e53cf461a52d46eb2c80dea | [
"MIT"
] | null | null | null | dexter/phys/phys.py | WJM96/dexter | ca338e654bf37b7b9e53cf461a52d46eb2c80dea | [
"MIT"
] | null | null | null | dexter/phys/phys.py | WJM96/dexter | ca338e654bf37b7b9e53cf461a52d46eb2c80dea | [
"MIT"
] | null | null | null | class classname(object):
pass | 16.5 | 24 | 0.727273 | 4 | 33 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 33 | 2 | 25 | 16.5 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
59a54bdaf3d93c56829f22b2bcbe64992951f825 | 5,662 | py | Python | App/model/auditLog.py | Stvchm9703/flask-api-example | 712ee2f17f9dd41f4c40b5aed3bcf42d942c07e2 | [
"CC0-1.0"
] | null | null | null | App/model/auditLog.py | Stvchm9703/flask-api-example | 712ee2f17f9dd41f4c40b5aed3bcf42d942c07e2 | [
"CC0-1.0"
] | null | null | null | App/model/auditLog.py | Stvchm9703/flask-api-example | 712ee2f17f9dd41f4c40b5aed3bcf42d942c07e2 | [
"CC0-1.0"
] | null | null | null | from App import db
from . import Serializer
from datetime import datetime
class auditLog(db.Model, Serializer):
# table audit_log : table model
__tablename__ = 'audit_logs'
# log id
id = db.Column('id', db.Integer(), primary_key=True, autoincrement=True)
# unit_id / uno_id
uno_id = db.Column('uno_id', db.VARCHAR(length=45))
# audit log lookup to code
code_id = db.Column('code_id', db.VARCHAR(length=45))
# logging time
timestamp = db.Column('timestamp', db.TIMESTAMP(), nullable=False)
# logging status
status = db.Column('status', db.VARCHAR(length=45))
# extra-info
user_id = db.Column('user_id', db.VARCHAR(length=45))
mission_id = db.Column('mission_id', db.VARCHAR(length=45))
waypoint_id = db.Column('waypoint_id', db.String(255))
# initialization
def __init__(self, uno_id, code_id,
status, audit_time, user_id,
mission_id, waypoint_id
):
self.uno_id = uno_id
self.code_id = code_id
self.status = status
self.timestamp = audit_time  # populate the non-nullable 'timestamp' column
self.user_id = user_id
self.mission_id = mission_id
self.waypoint_id = waypoint_id
# serialize db object to json
def serialize(self):
d = Serializer.serialize(self)
try:
d['timestamp'] = d['timestamp'].strftime('%Y-%m-%d %H:%M:%S.%f')
except BaseException:
d['timestamp'] = str(d['timestamp'])
return d
class auditLogLookup(db.Model, Serializer):
# table audit_log_lookups : table model
__tablename__ = 'audit_log_lookups'
# rec-id
id = db.Column('id', db.Integer(), primary_key=True, autoincrement=True)
# code-id
code_id = db.Column('code_id', db.VARCHAR(length=45))
# status code
status = db.Column('status', db.VARCHAR(length=45))
# status description
description = db.Column('description', db.VARCHAR(length=45))
# log type
type = db.Column('type', db.VARCHAR(length=45))
# severity level
severity = db.Column('severity', db.VARCHAR(length=45))
# initialization
def __init__(self, code_id, status, description, type, severity):
self.code_id = code_id
self.status = status
self.description = description
self.type = type
self.severity = severity
# serialize db object to json
def serialize(self):
d = Serializer.serialize(self)
return d
class vAlertLog(db.Model, Serializer):
# view table alert-logs-v
__tablename__ = 'alert_logs_v'
# unit_name / uno_id
uno_id = db.Column('uno_id', db.VARCHAR())
# unit_name / uno_name
uno_name = db.Column('uno_name', db.VARCHAR())
# code_id
code_id = db.Column('code_id', db.VARCHAR())
# logging time
timestamp = db.Column('timestamp', db.TIMESTAMP(),primary_key=True)
# status description
display = db.Column('display', db.VARCHAR())
# severity level
severity = db.Column('severity', db.VARCHAR())
# serialize db object to json
def serialize(self):
d = Serializer.serialize(self)
try:
d['timestamp'] = d['timestamp'].strftime('%Y-%m-%d %H:%M:%S.%f')
except BaseException:
d['timestamp'] = str(d['timestamp'])
return d
class vAlertLiveLog(db.Model, Serializer):
# view table alert-logs-live-v
__tablename__ = 'alert_logs_live_v'
# unit_name / uno_id
uno_id = db.Column('uno_id', db.VARCHAR(length=45))
# unit_name / uno_name
uno_name = db.Column('uno_name', db.VARCHAR(length=45))
# code_id
code_id = db.Column('code_id', db.VARCHAR(length=45))
# status
status = db.Column('status', db.VARCHAR(length=45))
# logging time
timestamp = db.Column('timestamp', db.TIMESTAMP(), primary_key=True)
# status description
display = db.Column('display', db.VARCHAR(length=45))
# severity level
severity = db.Column('severity', db.VARCHAR(length=45))
# serialize db object to json
def serialize(self):
d = Serializer.serialize(self)
try:
d['timestamp'] = d['timestamp'].strftime('%Y-%m-%d %H:%M:%S.%f')
except BaseException:
d['timestamp'] = str(d['timestamp'])
return d
class vEventLog(db.Model, Serializer):
# view table event-logs-v
__tablename__ = 'event_logs_v'
# unit_name / uno_id
uno_id = db.Column('uno_id', db.VARCHAR(length=45))
# unit_name / uno_name
uno_name = db.Column('uno_name', db.VARCHAR(length=45))
# code_id
code_id = db.Column('code_id', db.VARCHAR(length=45))
# logging time
timestamp = db.Column('timestamp', db.TIMESTAMP(), primary_key=True)
# status description
display = db.Column('display', db.VARCHAR(length=45))
# severity level
severity = db.Column('severity', db.VARCHAR(length=45))
# serialize db object to json
def serialize(self):
d = Serializer.serialize(self)
try:
d['timestamp'] = d['timestamp'].strftime('%Y-%m-%d %H:%M:%S.%f')
except BaseException:
d['timestamp'] = str(d['timestamp'])
return d
class unoLookup(db.Model, Serializer):
# table uno_lookups
__tablename__ = 'uno_lookups'
# rec_id
id = db.Column('id', db.Integer(), primary_key=True, autoincrement=True)
# unit_name / uno_id
uno_id = db.Column('uno_id', db.VARCHAR(length=45))
# unit_name / uno_name
uno_name = db.Column('description', db.VARCHAR(length=45))
# serialize db object to json
def serialize(self):
d = Serializer.serialize(self)
return d
| 33.305882 | 76 | 0.62893 | 749 | 5,662 | 4.583445 | 0.108144 | 0.083892 | 0.100495 | 0.113895 | 0.786484 | 0.75852 | 0.741043 | 0.727352 | 0.703175 | 0.663851 | 0 | 0.011327 | 0.235959 | 5,662 | 169 | 77 | 33.502959 | 0.782247 | 0.155246 | 0 | 0.607843 | 0 | 0 | 0.118082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.029412 | 0 | 0.637255 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ab8d1e6fd57e31494a7128980c8ffac15eda5498 | 8,162 | py | Python | TOPdeskPy/base.py | TwinkelToe/TOPdeskPy | a8952019711a79538199d1a4f88faa0952eabff0 | [
"MIT"
] | 2 | 2020-07-23T15:15:03.000Z | 2022-02-16T20:06:20.000Z | TOPdeskPy/base.py | TwinkelToe/TOPdeskPy | a8952019711a79538199d1a4f88faa0952eabff0 | [
"MIT"
] | null | null | null | TOPdeskPy/base.py | TwinkelToe/TOPdeskPy | a8952019711a79538199d1a4f88faa0952eabff0 | [
"MIT"
] | null | null | null | import re, base64
from . import _incident
from . import _person
from . import _utils
from . import _operator
# from .incident import incident
class connect:
# Created API-version 3.0.5
def __init__(self, topdesk_url, topdesk_username, topdesk_password):
self._topdesk_url = topdesk_url
self._credpair = (base64.b64encode((topdesk_username + ':' + topdesk_password).encode("utf-8"))).decode("utf-8")
self._partial_content_container = []
self.incident = _incident.incident(self._topdesk_url, self._credpair)
self.person = _person.person(self._topdesk_url, self._credpair)
self.utils = _utils.utils(self._topdesk_url, self._credpair)
self.department = self._department(self._topdesk_url, self._credpair)
self.branche = self._branche(self._topdesk_url, self._credpair)
self.location = self._location(self._topdesk_url, self._credpair)
self.supplier = self._supplier(self._topdesk_url, self._credpair)
self.operatorgroup = self._operatorgroup(self._topdesk_url, self._credpair)
self.operator = _operator.operator(self._topdesk_url, self._credpair)
self.budgetholder = self._budgetholder(self._topdesk_url, self._credpair)
self.operational_activities = self._operational_activities(self._topdesk_url, self._credpair)
class _operatorgroup:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get_operators(self, operatorgroup_id):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/operatorgroups/id/{}/operators".format(operatorgroup_id)))
def get_list(self, archived=False, page_size=100, query=None):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/operatorgroups", archived, page_size, query))
def get_id_operatorgroup(self, query):
result = self.get_list()
canidates = list()
for operatorgroup in result:
if re.match(rf"(.+)?{query}(.+)?", operatorgroup['groupName'], re.IGNORECASE):
canidates.append(operatorgroup['id'])
return self.utils.print_lookup_canidates(canidates)
def create(self, groupName, **kwargs):
kwargs['groupName'] = groupName
return self.utils.handle_topdesk_response(self.utils.post_to_topdesk("/tas/api/operatorgroups", (self.utils.add_id_jsonbody(**kwargs))))
def update(self, operatorgroup_id, **kwargs):
return self.utils.handle_topdesk_response(self.utils.put_to_topdesk("/tas/api/operatorgroups/id/{}".format(operatorgroup_id), self.utils.add_id_jsonbody(**kwargs)))
def archive(self, operatorgroup_id, reason_id=None):
    # default param to None so the call below never hits a NameError when reason_id is absent
    param = {'id': reason_id} if reason_id else None
    return self.utils.handle_topdesk_response(self.utils.put_to_topdesk("/tas/api/operatorgroups/id/{}/archive".format(operatorgroup_id), param))
def unarchive(self, operatorgroup_id):
return self.utils.handle_topdesk_response(self.utils.put_to_topdesk("/tas/api/operatorgroups/id/{}/unarchive".format(operatorgroup_id), None))
class _supplier:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get(self, id):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/suppliers/{}".format(id)))
def get_list(self, archived=False, page_size=100, query=None):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/suppliers", archived, page_size, query))
class _location:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get_list(self, archived=False, page_size=100, query=None):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/locations", archived, page_size, query))
def get(self, id):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/locations/id/{}".format(id)))
class _branche:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get_list(self, archived=False, page_size=100, query=None):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/branches", archived, page_size, query))
def get(self, id):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/branches/id/{}".format(id)))
def create(self, name, **kwargs):
kwargs['name'] = name
return self.utils.handle_topdesk_response(self.utils.post_to_topdesk("/tas/api/branches", self.utils.add_id_jsonbody(**kwargs)))
def update(self, branche_id, **kwargs):
return self.utils.handle_topdesk_response(self.utils.put_to_topdesk("/tas/api/branches/id/{}".format(branche_id), self.utils.add_id_jsonbody(**kwargs)))
class _operational_activities:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get_list(self, **kwargs):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/operationalActivities", extended_uri=kwargs))
def get(self, id):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/operationalActivities/{}".format(id)))
class _department:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get_list(self, archived=False, page_size=100):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/departments", archived, page_size))
def create(self, name, **kwargs):
kwargs['name'] = name
return self.utils.handle_topdesk_response(self.utils.post_to_topdesk("/tas/api/departments", self.utils.add_id_jsonbody(**kwargs)))
class _budgetholder:
def __init__(self, topdesk_url, credpair):
self._topdesk_url = topdesk_url
self._credpair = credpair
self.utils = _utils.utils(self._topdesk_url, self._credpair)
def get_list(self):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/budgetholders"))
def create(self, name, **kwargs):
kwargs['name'] = name
return self.utils.handle_topdesk_response(self.utils.post_to_topdesk("/tas/api/branches", self.utils.add_id_jsonbody(**kwargs)))
def get_countries(self):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/countries"))
def get_archiving_reasons(self):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/archiving-reasons"))
def get_timespent_reasons(self):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/timespent-reasons"))
def get_permissiongroups(self):
return self.utils.handle_topdesk_response(self.utils.request_topdesk("/tas/api/permissiongroups"))
def notification(self, title, **kwargs):
kwargs['title'] = title
return self.utils.handle_topdesk_response(self.utils.post_to_topdesk("/tas/api/tasknotifications/custom", self.utils.add_id_jsonbody(**kwargs)))
if __name__ == "__main__":
pass
| 49.168675 | 176 | 0.688434 | 986 | 8,162 | 5.380325 | 0.103448 | 0.11197 | 0.089727 | 0.107823 | 0.738549 | 0.721018 | 0.656739 | 0.630914 | 0.630914 | 0.62017 | 0 | 0.003956 | 0.194805 | 8,162 | 165 | 177 | 49.466667 | 0.803256 | 0.006861 | 0 | 0.377049 | 0 | 0 | 0.085894 | 0.058744 | 0 | 0 | 0 | 0 | 0 | 1 | 0.278689 | false | 0.02459 | 0.040984 | 0.155738 | 0.598361 | 0.008197 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
aba87f992a1e6167a86879159d7bb2f8a26cb1a8 | 52 | py | Python | src/flipr_api/__main__.py | AntorFr/flipr-api | 907192ea1bee194fa4b05bf328f506c30c4fb59a | [
"MIT"
] | null | null | null | src/flipr_api/__main__.py | AntorFr/flipr-api | 907192ea1bee194fa4b05bf328f506c30c4fb59a | [
"MIT"
] | null | null | null | src/flipr_api/__main__.py | AntorFr/flipr-api | 907192ea1bee194fa4b05bf328f506c30c4fb59a | [
"MIT"
] | null | null | null | """CLI usage of the API."""
# TODO: to be completed
| 17.333333 | 27 | 0.634615 | 9 | 52 | 3.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 52 | 2 | 28 | 26 | 0.785714 | 0.846154 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.5 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
abe43e329d6ee334636a04bb6937e68a692e8901 | 40 | py | Python | exercises/tournament/tournament.py | wonhyeongseo/python | ccd399510a58ad42d03420e43de67893f55dd411 | [
"MIT"
] | 2 | 2018-11-24T01:00:38.000Z | 2019-02-05T07:32:44.000Z | exercises/tournament/tournament.py | toroad/python | ce085c81a82ae5fb460fe166323dbbaa5a2588c5 | [
"MIT"
] | null | null | null | exercises/tournament/tournament.py | toroad/python | ce085c81a82ae5fb460fe166323dbbaa5a2588c5 | [
"MIT"
] | 1 | 2021-12-29T19:26:23.000Z | 2021-12-29T19:26:23.000Z | def tally(tournament_results):
pass
| 13.333333 | 30 | 0.75 | 5 | 40 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 2 | 31 | 20 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
e60ae4de7ce12c96b9d0450289db2091ac012164 | 73 | py | Python | wxcloudrun/cooperation_manage/__init__.py | nixiaopan/nipan | 4e57e7f25089bae64a233c2e3ba49b703147183e | [
"MIT"
] | null | null | null | wxcloudrun/cooperation_manage/__init__.py | nixiaopan/nipan | 4e57e7f25089bae64a233c2e3ba49b703147183e | [
"MIT"
] | null | null | null | wxcloudrun/cooperation_manage/__init__.py | nixiaopan/nipan | 4e57e7f25089bae64a233c2e3ba49b703147183e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
@author: nixiaopan
@time: 2022/4/9 22:17
'''
| 12.166667 | 23 | 0.534247 | 11 | 73 | 3.545455 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 0.164384 | 73 | 5 | 24 | 14.6 | 0.459016 | 0.863014 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e6140889640c305ed293d90538a505f5e852b156 | 194 | py | Python | alfredworkflow/settings.py | ktal90/SmartThings-Alfred | b30bbd27982533daf0eb556e48ce673ec11ebdf1 | [
"MIT"
] | 2 | 2019-10-23T13:28:54.000Z | 2022-01-26T02:37:25.000Z | alfredworkflow/settings.py | ktal90/SmartThings-Alfred | b30bbd27982533daf0eb556e48ce673ec11ebdf1 | [
"MIT"
] | null | null | null | alfredworkflow/settings.py | ktal90/SmartThings-Alfred | b30bbd27982533daf0eb556e48ce673ec11ebdf1 | [
"MIT"
] | 2 | 2019-11-12T06:46:10.000Z | 2022-01-12T19:57:42.000Z | PROTOCOL = "https"
HOSTNAME = "graph.api.smartthings.com"
PORT = 443
SERVER_PORT = 2222
CLIENT_ID = 'cb87b758-af81-11e2-981c-9989e5deaab7'
CLIENT_SECRET = 'e06c70fa-af81-11e2-981c-9989e5deaab7'
| 27.714286 | 54 | 0.778351 | 26 | 194 | 5.692308 | 0.769231 | 0.108108 | 0.162162 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.092784 | 194 | 6 | 55 | 32.333333 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0.525773 | 0.5 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0521c58a2350b549ed8a0e08bcf67a00362dcad5 | 211 | py | Python | fileshack/templatetags/staticfiles.py | peterkuma/fileshackproject | ac8a5b4ae943f28c8a34d7795cf5fee5b23e2db0 | [
"MIT"
] | 16 | 2015-01-21T16:04:59.000Z | 2022-02-09T11:09:21.000Z | fileshack/templatetags/staticfiles.py | peterkuma/fileshackproject | ac8a5b4ae943f28c8a34d7795cf5fee5b23e2db0 | [
"MIT"
] | 1 | 2021-08-03T10:47:47.000Z | 2021-08-03T10:47:47.000Z | fileshack/templatetags/staticfiles.py | peterkuma/fileshackproject | ac8a5b4ae943f28c8a34d7795cf5fee5b23e2db0 | [
"MIT"
] | 3 | 2015-08-06T18:31:44.000Z | 2018-01-08T20:54:29.000Z | from urllib.parse import urljoin
from django import template
from django.conf import settings
register = template.Library()
@register.simple_tag
def static(path):
return urljoin(settings.STATIC_URL, path)
| 21.1 | 45 | 0.800948 | 29 | 211 | 5.758621 | 0.62069 | 0.11976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127962 | 211 | 9 | 46 | 23.444444 | 0.907609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
5562cd340004ebfeb4631753a7f0d2c5c6826cc2 | 46 | py | Python | web-app/backend/tests/assets/__init__.py | bayesimpact/representative-population-generator | 078ef492dc282f68ce81c1f33fab029f4382ea98 | [
"Apache-2.0"
] | null | null | null | web-app/backend/tests/assets/__init__.py | bayesimpact/representative-population-generator | 078ef492dc282f68ce81c1f33fab029f4382ea98 | [
"Apache-2.0"
] | null | null | null | web-app/backend/tests/assets/__init__.py | bayesimpact/representative-population-generator | 078ef492dc282f68ce81c1f33fab029f4382ea98 | [
"Apache-2.0"
] | null | null | null | """Assests needed for testing the backend."""
| 23 | 45 | 0.717391 | 6 | 46 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 46 | 1 | 46 | 46 | 0.825 | 0.847826 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
558b7195a66a65e5fa430d0aaebd57d9d355fb17 | 5,645 | py | Python | tests/test_cli.py | u8slvn/hyperfocus | 6eee37de78162bb404b8d34eea6220abedf58de6 | [
"MIT"
] | null | null | null | tests/test_cli.py | u8slvn/hyperfocus | 6eee37de78162bb404b8d34eea6220abedf58de6 | [
"MIT"
] | null | null | null | tests/test_cli.py | u8slvn/hyperfocus | 6eee37de78162bb404b8d34eea6220abedf58de6 | [
"MIT"
] | null | null | null | from pathlib import Path
import pytest
from freezegun import freeze_time
from typer.testing import CliRunner
from hyperfocus import __app_name__, __version__
from hyperfocus.cli import hyperfocus_app
from tests.conftest import pytest_regex
runner = CliRunner()
def test_main_cmd_version(cli_config):
result = runner.invoke(hyperfocus_app, ["--version"])
expected = f"{__app_name__} version {__version__}\n"
assert expected == result.stdout
@pytest.mark.dependency()
@freeze_time("2012-12-21")
def test_call_main_cmd_without_init(cli_config):
result = runner.invoke(hyperfocus_app, [])
assert result.stdout == "Config does not exist, please run init command first\n"
assert result.exit_code == 1
@pytest.mark.dependency()
@freeze_time("2012-12-21")
def test_init_cmd(cli_config, tmp_test_dir):
db_test_path = Path(str(tmp_test_dir)) / "db_test.sqlite"
result = runner.invoke(hyperfocus_app, ["init"], input=f"{db_test_path}\n")
pattern = pytest_regex(
r"\? Database location \[(.*)\]: (.*)\n"
r"ℹ\(init\) Config file created successfully in (.*)\n"
r"ℹ\(init\) Database initialized successfully in (.*)\n"
)
assert pattern == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_init_cmd"])
@freeze_time("2012-12-21")
def test_main_cmd_with_no_tasks(cli_config):
result = runner.invoke(hyperfocus_app, [])
expected = (
"✨ Fri, 21 December 2012\n"
"✨ A new day starts, good luck!\n\n"
"No tasks yet for today...\n"
)
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_init_cmd"])
@freeze_time("2012-12-21")
def test_add_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["add"], input="Test\nTest\n")
expected = (
"? Task title: Test\n"
"? Task description (optional): Test\n"
"✔(created) Task #1 ⬢ Test ⊕\n"
)
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_main_cmd_with_tasks(cli_config):
result = runner.invoke(hyperfocus_app, [])
expected = " # tasks\n--- --------\n 1 ⬢ Test ⊕ \n"
assert result.stdout == expected
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_done_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["done", "1"])
expected = "✔(updated) Task #1 ⬢ Test ⊕\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_done_non_existing_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["done", "9"])
expected = "✘(not found) Task id does not exist\n"
assert expected == result.stdout
assert result.exit_code == 1
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_reset_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["reset", "1"])
expected = "✔(updated) Task #1 ⬢ Test ⊕\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_reset_task_cmd"])
@freeze_time("2012-12-21")
def test_reset_task_cmd_second_time(cli_config):
result = runner.invoke(hyperfocus_app, ["reset", "1"])
expected = "▼(no change) Task #1 ⬢ Test ⊕\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_block_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["block", "1"])
expected = "✔(updated) Task #1 ⬢ Test ⊕\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_delete_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["delete", "1"])
expected = "✔(updated) Task #1 ⬢ Test ⊕\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_delete_task_cmd"])
@freeze_time("2012-12-21")
def test_main_cmd_with_deleted_task(cli_config):
result = runner.invoke(hyperfocus_app, [])
expected = "No tasks yet for today...\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_update_task_with_no_id_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["reset"], input="1")
expected = (
" # tasks\n"
"--- --------\n"
" 1 ⬢ Test ⊕ \n\n"
"? Reset task: 1\n"
"✔(updated) Task #1 ⬢ Test ⊕\n"
)
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_show_task_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["show", "1"])
    expected = "Task: #1 ⬢ Test\nTest\n"
assert expected == result.stdout
assert result.exit_code == 0
@pytest.mark.dependency(depends=["test_add_task_cmd"])
@freeze_time("2012-12-21")
def test_show_task_with_no_id_cmd(cli_config):
result = runner.invoke(hyperfocus_app, ["show"], input="1")
expected = (
" # tasks\n"
"--- --------\n"
" 1 ⬢ Test ⊕ \n\n"
"? Show task details: 1\n"
"Task: #1 ⬢ Test\n"
"Test\n"
)
assert expected == result.stdout
assert result.exit_code == 0
| 28.948718 | 84 | 0.671391 | 817 | 5,645 | 4.440636 | 0.134639 | 0.036659 | 0.079383 | 0.123484 | 0.770673 | 0.745865 | 0.744212 | 0.722161 | 0.702867 | 0.60226 | 0 | 0.035276 | 0.176439 | 5,645 | 194 | 85 | 29.097938 | 0.738223 | 0 | 0 | 0.535211 | 0 | 0 | 0.23791 | 0 | 0 | 0 | 0 | 0 | 0.21831 | 1 | 0.112676 | false | 0 | 0.049296 | 0 | 0.161972 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
559ad45b921e5df3391af22f3673f045dc78d811 | 15 | py | Python | country/india.py | HarshitaRastogi/LearningPython | b49082d639d13bf405ec34b957e8b82386dc7429 | [
"Apache-2.0"
] | null | null | null | country/india.py | HarshitaRastogi/LearningPython | b49082d639d13bf405ec34b957e8b82386dc7429 | [
"Apache-2.0"
] | null | null | null | country/india.py | HarshitaRastogi/LearningPython | b49082d639d13bf405ec34b957e8b82386dc7429 | [
"Apache-2.0"
] | null | null | null | print("India")
| 7.5 | 14 | 0.666667 | 2 | 15 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 15 | 1 | 15 | 15 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
55a86f611743ba7694924f89fe9344a9cb78f045 | 190 | py | Python | readabilipy/extractors/__init__.py | ezedonovan/ReadabiliPy | 67b0d73525662657ad52db513e3d2af622d13ef0 | [
"MIT"
] | 590 | 2022-02-11T11:10:29.000Z | 2022-03-31T23:31:05.000Z | readabilipy/extractors/__init__.py | ezedonovan/ReadabiliPy | 67b0d73525662657ad52db513e3d2af622d13ef0 | [
"MIT"
] | 58 | 2018-11-20T14:11:59.000Z | 2021-12-08T06:20:02.000Z | readabilipy/extractors/__init__.py | ezedonovan/ReadabiliPy | 67b0d73525662657ad52db513e3d2af622d13ef0 | [
"MIT"
] | 19 | 2022-02-11T13:41:06.000Z | 2022-03-25T14:16:02.000Z | from .extract_date import extract_date, ensure_iso_date_format
from .extract_title import extract_title
__all__ = [
'extract_date',
'extract_title',
'ensure_iso_date_format',
]
| 21.111111 | 62 | 0.768421 | 25 | 190 | 5.2 | 0.36 | 0.253846 | 0.2 | 0.292308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152632 | 190 | 8 | 63 | 23.75 | 0.807453 | 0 | 0 | 0 | 0 | 0 | 0.247368 | 0.115789 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
55b58c7a0b1cfe89f8b441196f0804774ee89a9e | 101 | py | Python | tests/__init__.py | tamerh/biobtreePy | a657e921a2cc4e2b67ae32dc1ae83dd3daaef7ff | [
"MIT"
] | 2 | 2019-10-31T11:04:14.000Z | 2020-12-04T02:26:19.000Z | tests/__init__.py | tamerh/biobtreePy | a657e921a2cc4e2b67ae32dc1ae83dd3daaef7ff | [
"MIT"
] | null | null | null | tests/__init__.py | tamerh/biobtreePy | a657e921a2cc4e2b67ae32dc1ae83dd3daaef7ff | [
"MIT"
] | null | null | null | # import os
# import sys
# sys.path.insert(0, os.path.realpath(os.path.join(__file__, "..", "..")))
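The commented-out lines above compute the repository root (two directories up from `tests/__init__.py`) and prepend it to `sys.path` so the local package is importable. A small runnable sketch of just the path arithmetic; `posixpath` is used here so the result is deterministic, whereas the original uses `os.path.realpath`, which also resolves symlinks:

```python
import posixpath


def project_root(file_path):
    # Go two levels up from a file path and normalize the result,
    # mirroring os.path.realpath(os.path.join(__file__, "..", "..")).
    return posixpath.normpath(posixpath.join(file_path, "..", ".."))


assert project_root("/repo/tests/__init__.py") == "/repo"
```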
| 20.2 | 74 | 0.623762 | 15 | 101 | 3.933333 | 0.6 | 0.20339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 0.118812 | 101 | 4 | 75 | 25.25 | 0.651685 | 0.920792 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
55b5ad4667a1855685f01560412b8e10a621e5d9 | 290 | py | Python | THU--Deep_Learning/homework-3/layers/__init__.py | AlbertMillan/THU--ACM_2019-2021 | fc5b3cc7efcd8066e04cf6da6785c6051cd665fb | [
"MIT"
] | 4 | 2020-12-29T14:49:44.000Z | 2021-09-17T07:46:28.000Z | Deep Learning/Assignments/Assignment 3/layers/__init__.py | Sahandfer/Tsinghua | 0c1944aee8395e57429f27d9b0dd478460dcc8b3 | [
"CC-BY-4.0"
] | null | null | null | Deep Learning/Assignments/Assignment 3/layers/__init__.py | Sahandfer/Tsinghua | 0c1944aee8395e57429f27d9b0dd478460dcc8b3 | [
"CC-BY-4.0"
] | 2 | 2020-12-26T03:05:53.000Z | 2020-12-29T14:51:51.000Z | # -*- encoding: utf-8 -*-
from layers.fc_layer import FCLayer
from layers.relu_layer import ReLULayer
from layers.conv_layer import ConvLayer
from layers.pooling_layer import MaxPoolingLayer
from layers.reshape_layer import ReshapeLayer
from layers.dropout_layer import DropoutLayer | 36.25 | 49 | 0.824138 | 39 | 290 | 5.974359 | 0.487179 | 0.257511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003937 | 0.124138 | 290 | 8 | 50 | 36.25 | 0.913386 | 0.07931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e96f8220bfbfae8a02a4642fb7993c11b74ba759 | 394 | py | Python | nntools/dataset/__init__.py | ClementPla/NNTools | 61562be2d931a7f720ceee1bd91a37a2b9a329af | [
"MIT"
] | null | null | null | nntools/dataset/__init__.py | ClementPla/NNTools | 61562be2d931a7f720ceee1bd91a37a2b9a329af | [
"MIT"
] | null | null | null | nntools/dataset/__init__.py | ClementPla/NNTools | 61562be2d931a7f720ceee1bd91a37a2b9a329af | [
"MIT"
] | null | null | null | from nntools.dataset.classif_dataset import ClassificationDataset
from nntools.dataset.image_tools import nntools_wrapper
from nntools.dataset.multi_image_dataset import MultiImageDataset
from nntools.dataset.seg_dataset import SegmentationDataset
from nntools.dataset.tools import Composition
from nntools.dataset.utils import get_segmentation_class_count, class_weighting, \
random_split
| 49.25 | 82 | 0.883249 | 49 | 394 | 6.877551 | 0.44898 | 0.195846 | 0.320475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07868 | 394 | 7 | 83 | 56.285714 | 0.928375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e9d7b8109e5180605b6c400c126b29e5c4ddf47f | 132 | py | Python | src/utils/dataset/__init__.py | andompesta/omnitext | da6467b6cd9086b2278f7a1560596261f125800e | [
"MIT"
] | null | null | null | src/utils/dataset/__init__.py | andompesta/omnitext | da6467b6cd9086b2278f7a1560596261f125800e | [
"MIT"
] | null | null | null | src/utils/dataset/__init__.py | andompesta/omnitext | da6467b6cd9086b2278f7a1560596261f125800e | [
"MIT"
] | null | null | null | from .classification import get_classify_dataset, classify_collate
__all__ = [
"get_classify_dataset",
"classify_collate"
] | 22 | 66 | 0.780303 | 14 | 132 | 6.642857 | 0.571429 | 0.236559 | 0.387097 | 0.55914 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143939 | 132 | 6 | 67 | 22 | 0.823009 | 0 | 0 | 0 | 0 | 0 | 0.270677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
75b3570f1de17be56b17c377a667fc93844e7da2 | 5,034 | py | Python | tests/decoding/test_decode_single_type.py | vaporyco/eth-abi | 9e02579d3cfe93ae8dd1247d589e2574f1047bad | [
"MIT"
] | null | null | null | tests/decoding/test_decode_single_type.py | vaporyco/eth-abi | 9e02579d3cfe93ae8dd1247d589e2574f1047bad | [
"MIT"
] | null | null | null | tests/decoding/test_decode_single_type.py | vaporyco/eth-abi | 9e02579d3cfe93ae8dd1247d589e2574f1047bad | [
"MIT"
] | null | null | null | import pytest
from eth_utils import (
decode_hex,
)
from eth_abi.abi import decode_single
@pytest.mark.parametrize(
'input,expected',
(
('0000000000000000000000000000000000000000000000000000000000000015', 21),
('0000000000000000000000000000000000000000000000000000000000000001', 1),
('ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff', -1),
('ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9c', -100),
)
)
def test_0x_prefix_optional(input, expected):
output = decode_single('int256', input)
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
('0000000000000000000000000000000000000000000000000000000000000015', 21),
('0000000000000000000000000000000000000000000000000000000000000001', 1),
('ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff', -1),
('ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9c', -100),
)
)
def test_int8_decoding(input, expected):
output = decode_single('int8', input)
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
('0x0000000000000000000000000000000000000000000000000000000000000015', 21),
('0x0000000000000000000000000000000000000000000000000000000000000001', 1),
('0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff', -1),
('0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9c', -100),
)
)
def test_decode_int256(input, expected):
output = decode_single('int256', input)
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
('0000000000000000000000000000000000000000000000000000000000000015', 21),
('0000000000000000000000000000000000000000000000000000000000000001', 1),
('ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff', -1),
('ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9c', -100),
)
)
def test_decode_accepts_bytes(input, expected):
output = decode_single('int256', decode_hex(input))
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
('0x0000000000000000000000000000000000000000000000000000000000000015', 21),
('0x0000000000000000000000000000000000000000000000000000000000000001', 1),
('0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff', 2 ** 256 - 1),
        ('0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9c', 2 ** 256 - 100),
)
)
def test_decode_uint256(input, expected):
output = decode_single('uint256', input)
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
('0x0000000000000000000000000000000000000000000000000000000000000001', True),
('0x0000000000000000000000000000000000000000000000000000000000000000', False),
)
)
def test_decode_bool(input, expected):
output = decode_single('bool', input)
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
(
'0x7465737400000000000000000000000000000000000000000000000000000000',
b'test\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
),
(
'0x6162636465666768696a6b6c6d6e6f707172737475767778797a000000000000',
b'abcdefghijklmnopqrstuvwxyz\x00\x00\x00\x00\x00\x00',
),
(
'0x3031323334353637383921402324255e262a2829000000000000000000000000',
b'0123456789!@#$%^&*()\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
),
(
'0x6162630000000000616263000000000000000000000000000000000000000000',
b'abc\x00\x00\x00\x00\x00abc\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
),
)
)
def test_decode_bytes32(input, expected):
output = decode_single('bytes32', input)
assert output == expected
@pytest.mark.parametrize(
'input,expected',
(
(
'0x0000000000000000000000000000000000000000000000000000000000000000',
'0x0000000000000000000000000000000000000000',
),
(
'0x000000000000000000000000c305c901078781c232a2a521c2af7980f8385ee9',
'0xc305c901078781c232a2a521c2af7980f8385ee9',
),
(
'0x0000000000000000000000000005c901078781c232a2a521c2af7980f8385ee9',
'0x0005c901078781c232a2a521c2af7980f8385ee9',
),
(
'0x000000000000000000000000c305c901078781c232a2a521c2af7980f8385000',
'0xc305c901078781c232a2a521c2af7980f8385000',
),
(
'0x0000000000000000000000000005c901078781c232a2a521c2af7980f8385000',
'0x0005c901078781c232a2a521c2af7980f8385000',
),
)
)
def test_decode_address(input, expected):
output = decode_single('address', input)
assert output == expected
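The int256 cases above rely on two's-complement interpretation of a 32-byte big-endian word. A standard-library sketch of that decoding rule (an illustration only, not eth_abi's actual implementation):

```python
def decode_int256(hex_word):
    # Accept the word with or without a 0x prefix, as the tests above do.
    if hex_word.startswith("0x"):
        hex_word = hex_word[2:]
    # Interpret the 32 bytes as a big-endian signed (two's complement) integer.
    return int.from_bytes(bytes.fromhex(hex_word), byteorder="big", signed=True)


assert decode_int256("00" * 31 + "15") == 21
assert decode_int256("0x" + "ff" * 32) == -1
assert decode_int256("ff" * 31 + "9c") == -100
```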
| 34.479452 | 132 | 0.713151 | 313 | 5,034 | 11.370607 | 0.204473 | 0.111267 | 0.154257 | 0.188817 | 0.597359 | 0.553807 | 0.538353 | 0.538353 | 0.538353 | 0.48862 | 0 | 0.418195 | 0.187723 | 5,034 | 145 | 133 | 34.717241 | 0.452189 | 0 | 0 | 0.401575 | 0 | 0.023622 | 0.543306 | 0.51172 | 0 | 0 | 0.290822 | 0 | 0.062992 | 1 | 0.062992 | false | 0 | 0.023622 | 0 | 0.086614 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
75c20ee1d2ba58fd280f72db60b13d2de83ed64e | 54 | py | Python | upnp_redirect/output/local_output.py | pataquets/py_upnp_redirect | 4f0721da8b67c5f0a8e8ead4df0843382defbe03 | [
"MIT"
] | 1 | 2020-06-12T02:46:56.000Z | 2020-06-12T02:46:56.000Z | upnp_redirect/output/local_output.py | pataquets/py_upnp_redirect | 4f0721da8b67c5f0a8e8ead4df0843382defbe03 | [
"MIT"
] | 2 | 2020-03-21T03:49:04.000Z | 2022-02-28T23:18:48.000Z | upnp_redirect/output/local_output.py | pataquets/py_upnp_redirect | 4f0721da8b67c5f0a8e8ead4df0843382defbe03 | [
"MIT"
] | 1 | 2020-03-12T19:42:34.000Z | 2020-03-12T19:42:34.000Z | def create_local_output(output_args):
return None
| 18 | 37 | 0.796296 | 8 | 54 | 5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 54 | 2 | 38 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
75c37175269d21b76362ae6a9ba3d963366ea967 | 133 | py | Python | quizesApp/admin.py | i3dprogrammer/Django-GraduationProject | 2445b9f773638a344be8625f0d9ef6c149e68015 | [
"MIT"
] | null | null | null | quizesApp/admin.py | i3dprogrammer/Django-GraduationProject | 2445b9f773638a344be8625f0d9ef6c149e68015 | [
"MIT"
] | null | null | null | quizesApp/admin.py | i3dprogrammer/Django-GraduationProject | 2445b9f773638a344be8625f0d9ef6c149e68015 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Quiz, QuizComplete
admin.site.register(Quiz)
admin.site.register(QuizComplete)
| 22.166667 | 38 | 0.827068 | 18 | 133 | 6.111111 | 0.555556 | 0.163636 | 0.309091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090226 | 133 | 5 | 39 | 26.6 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f94b467982aaf9a098e8f3e5f1ed7db920ae6da0 | 454 | py | Python | db_create.py | spizeck/aircraftlogs | 34581fc8ed4d0161e36b9348126066d053528f62 | [
"MIT"
] | 1 | 2021-09-13T02:34:32.000Z | 2021-09-13T02:34:32.000Z | db_create.py | spizeck/aircraftlogs | 34581fc8ed4d0161e36b9348126066d053528f62 | [
"MIT"
] | null | null | null | db_create.py | spizeck/aircraftlogs | 34581fc8ed4d0161e36b9348126066d053528f62 | [
"MIT"
] | 2 | 2016-08-22T01:59:31.000Z | 2021-07-24T01:34:58.000Z | # from config import SQLALCHEMY_DATABASE_URI
# from config import SQLALCHEMY_MIGRATE_REPO
from app import db
# import os.path
db.create_all()
# if not os.path.exists(SQLALCHEMY_MIGRATE_REPO):
# api.create(SQLALCHEMY_MIGRATE_REPO, 'database repository')
# api.version_control(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
# else:
# api.version_control(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO, api.version(SQLALCHEMY_MIGRATE_REPO)) | 45.4 | 113 | 0.819383 | 61 | 454 | 5.754098 | 0.360656 | 0.290598 | 0.358974 | 0.148148 | 0.336182 | 0.336182 | 0.336182 | 0.336182 | 0.336182 | 0 | 0 | 0 | 0.101322 | 454 | 10 | 113 | 45.4 | 0.860294 | 0.887665 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f9600e7ccf3b257d2e2e8f262b29a713dc3c1b35 | 26 | py | Python | crslab/model/policy/pmi/__init__.py | hcmus-nlp-chatbot/CRSLab | b3ab262a4ad93cbae98fe66541eb735377768a35 | [
"MIT"
] | 315 | 2021-01-05T06:31:57.000Z | 2022-03-16T21:12:23.000Z | crslab/model/policy/pmi/__init__.py | hcmus-nlp-chatbot/CRSLab | b3ab262a4ad93cbae98fe66541eb735377768a35 | [
"MIT"
] | 23 | 2021-01-09T05:43:26.000Z | 2022-03-28T21:05:49.000Z | crslab/model/policy/pmi/__init__.py | hcmus-nlp-chatbot/CRSLab | b3ab262a4ad93cbae98fe66541eb735377768a35 | [
"MIT"
] | 71 | 2021-01-05T06:31:59.000Z | 2022-03-06T06:30:35.000Z | from .pmi import PMIModel
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f9646efff499deae3b83de24eed1f90967b89ec0 | 83 | py | Python | ipfs_io/exceptions.py | esovetkin/ipfs_io | 9b0050ab4b25a1d3a4b400d445211fa45c1fb8ae | [
"MIT"
] | null | null | null | ipfs_io/exceptions.py | esovetkin/ipfs_io | 9b0050ab4b25a1d3a4b400d445211fa45c1fb8ae | [
"MIT"
] | null | null | null | ipfs_io/exceptions.py | esovetkin/ipfs_io | 9b0050ab4b25a1d3a4b400d445211fa45c1fb8ae | [
"MIT"
] | null | null | null | class FAILED_FILE(Exception):
pass
class FAILED_METADATA(Exception):
pass
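A short usage sketch of the two exception classes above; the `fetch` function and its error messages are made up for illustration:

```python
class FAILED_FILE(Exception):
    pass


class FAILED_METADATA(Exception):
    pass


def fetch(kind):
    # Hypothetical fetch that signals which stage failed.
    if kind == "file":
        raise FAILED_FILE("could not fetch file")
    raise FAILED_METADATA("could not fetch metadata")


try:
    fetch("file")
except FAILED_FILE as exc:
    message = str(exc)

assert message == "could not fetch file"
```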
| 13.833333 | 33 | 0.746988 | 10 | 83 | 6 | 0.6 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180723 | 83 | 5 | 34 | 16.6 | 0.882353 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
f99f408338eb0e6c5ecf1df6b1dd3e51ee3b3d81 | 120 | py | Python | Databaselayer/IFetchUserType.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | Databaselayer/IFetchUserType.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | Databaselayer/IFetchUserType.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | import hashlib, os
import logging
class IFetchUserType:
    def getUserType_DBL(self, email):
        raise NotImplementedError
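The class above acts as an interface: callers program against it, and concrete implementations override the method. A runnable sketch of that pattern (`MockFetcher` and its lookup rule are hypothetical):

```python
class IFetchUserType:
    def getUserType_DBL(self, email):
        raise NotImplementedError


class MockFetcher(IFetchUserType):
    # Illustrative override; the real database-backed lookup lives elsewhere.
    def getUserType_DBL(self, email):
        return "employer" if email.endswith("@company.com") else "candidate"


assert MockFetcher().getUserType_DBL("hr@company.com") == "employer"

# Calling the base class directly raises, flagging the missing override.
base_raises = False
try:
    IFetchUserType().getUserType_DBL("hr@company.com")
except NotImplementedError:
    base_raises = True
assert base_raises
```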
| 20 | 62 | 0.816667 | 14 | 120 | 6.928571 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 120 | 5 | 63 | 24 | 0.932692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f9c5fcbaa485e251b34eb830bc2292fc2037b136 | 268 | py | Python | Adapters/Azure/Config/__init__.py | tbarlow12/azure-resource-scanner | 3e5f592e016a3f8a633634df3ba502c4901805ef | [
"MIT"
] | null | null | null | Adapters/Azure/Config/__init__.py | tbarlow12/azure-resource-scanner | 3e5f592e016a3f8a633634df3ba502c4901805ef | [
"MIT"
] | null | null | null | Adapters/Azure/Config/__init__.py | tbarlow12/azure-resource-scanner | 3e5f592e016a3f8a633634df3ba502c4901805ef | [
"MIT"
] | null | null | null | from .azure_credential_config import AzureCredentialConfig
from .azure_resource_config import AzureResourceServiceConfig
from .azure_storage_config import AzureStorageConfig
from .azure_cosmosdb_config import AzureCosmosDbConfig
from .azure_config import AzureConfig
| 44.666667 | 61 | 0.902985 | 29 | 268 | 8.034483 | 0.448276 | 0.193133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078358 | 268 | 5 | 62 | 53.6 | 0.94332 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
dde4a977b445258b506ae67a5d7ac01f453f693d | 138 | py | Python | src/api/user/claims.py | jmilosze/wfrp-hammergen | b9508901a928fd5cdf2bbd9453c6fcafb28c9ec8 | [
"Apache-2.0"
] | 1 | 2022-01-31T07:58:20.000Z | 2022-01-31T07:58:20.000Z | src/api/user/claims.py | jmilosze/wfrp-hammergen | b9508901a928fd5cdf2bbd9453c6fcafb28c9ec8 | [
"Apache-2.0"
] | 1 | 2022-01-04T21:54:19.000Z | 2022-01-04T21:54:19.000Z | src/api/user/claims.py | jmilosze/wfrp-hammergen | b9508901a928fd5cdf2bbd9453c6fcafb28c9ec8 | [
"Apache-2.0"
] | null | null | null | from enum import Enum
class UserClaims(Enum):
USER = "user"
PASSWORD_RESET = "password_reset"
MASTER_ADMIN = "master_admin"
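A brief sketch of how an Enum like UserClaims is typically used, e.g. mapping a raw string claim from a token back to a member (standard library only):

```python
from enum import Enum


class UserClaims(Enum):
    USER = "user"
    PASSWORD_RESET = "password_reset"
    MASTER_ADMIN = "master_admin"


# Value lookup: calling the Enum resolves a raw claim string to a member.
claim = UserClaims("password_reset")
assert claim is UserClaims.PASSWORD_RESET
assert claim.value == "password_reset"
```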
| 17.25 | 37 | 0.702899 | 17 | 138 | 5.470588 | 0.588235 | 0.27957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210145 | 138 | 7 | 38 | 19.714286 | 0.853211 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
fb0f6db6dfc6f3b280c9dc8e76acfb7aea30de03 | 332 | py | Python | vk_air/objects/server.py | sultan1k/vk_air | 110864a370914afebbeb4fa27ee382a5bbbed516 | [
"MIT"
] | 5 | 2021-12-02T14:41:54.000Z | 2022-02-17T18:20:36.000Z | vk_air/objects/server.py | sultan1k/vk_air | 110864a370914afebbeb4fa27ee382a5bbbed516 | [
"MIT"
] | null | null | null | vk_air/objects/server.py | sultan1k/vk_air | 110864a370914afebbeb4fa27ee382a5bbbed516 | [
"MIT"
] | null | null | null | class MessagesUploadServer:
def __init__(self, obj):
self.obj = obj
@property
def upload_url(self):
return self.obj.get('upload_url')
@property
def album_id(self):
return self.obj.get('album_id')
@property
def group_id(self):
return self.obj.get('group_id') | 22.133333 | 41 | 0.596386 | 42 | 332 | 4.47619 | 0.333333 | 0.18617 | 0.223404 | 0.271277 | 0.340426 | 0.234043 | 0 | 0 | 0 | 0 | 0 | 0 | 0.292169 | 332 | 15 | 42 | 22.133333 | 0.8 | 0 | 0 | 0.25 | 0 | 0 | 0.078078 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.25 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
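The class above wraps the raw dict returned by the API and exposes its fields as read-only properties; `dict.get` makes missing keys come back as None instead of raising. A self-contained sketch of the same pattern with a made-up payload:

```python
class UploadServer:
    # Same dict-wrapping pattern as MessagesUploadServer above.
    def __init__(self, obj):
        self.obj = obj

    @property
    def upload_url(self):
        return self.obj.get('upload_url')

    @property
    def album_id(self):
        return self.obj.get('album_id')


server = UploadServer({'upload_url': 'https://example.com/upload', 'album_id': 7})
assert server.upload_url == 'https://example.com/upload'
assert server.album_id == 7
# Missing keys fall back to None rather than raising KeyError.
assert UploadServer({}).upload_url is None
```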
349bb5d0c7947150d5c02133ec0cd30c1b07a95f | 77 | py | Python | gll/range_tree.py | 0x1997/gll | 51b9f0491d9a55b2f827c883f2afae8aac494da9 | [
"MIT"
] | null | null | null | gll/range_tree.py | 0x1997/gll | 51b9f0491d9a55b2f827c883f2afae8aac494da9 | [
"MIT"
] | null | null | null | gll/range_tree.py | 0x1997/gll | 51b9f0491d9a55b2f827c883f2afae8aac494da9 | [
"MIT"
] | null | null | null | # coding: utf8
class RangeTree(object):
def __init__(self):
pass
| 15.4 | 24 | 0.636364 | 9 | 77 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.25974 | 77 | 4 | 25 | 19.25 | 0.77193 | 0.155844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
34b1d2597b7d4200cc91d9e298c4aacd3ce563be | 64 | py | Python | sru/__init__.py | kzjeef/sru | a7c30233e2b7bde13187843a7d2fd2dcdd409c48 | [
"MIT"
] | 1 | 2018-03-13T03:00:28.000Z | 2018-03-13T03:00:28.000Z | sru/__init__.py | kzjeef/sru | a7c30233e2b7bde13187843a7d2fd2dcdd409c48 | [
"MIT"
] | null | null | null | sru/__init__.py | kzjeef/sru | a7c30233e2b7bde13187843a7d2fd2dcdd409c48 | [
"MIT"
] | 1 | 2019-04-23T07:46:23.000Z | 2019-04-23T07:46:23.000Z | from .version import __version__
from .cuda_functional import *
| 21.333333 | 32 | 0.828125 | 8 | 64 | 6 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 64 | 2 | 33 | 32 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
34c5497e6d4bf336b4833ebc3cc13f256687be09 | 53 | py | Python | hellowoeld.py | suhasd89/pypypython | bcf41db0a80935ff0399749a8a9fdf270e8d7975 | [
"MIT"
] | null | null | null | hellowoeld.py | suhasd89/pypypython | bcf41db0a80935ff0399749a8a9fdf270e8d7975 | [
"MIT"
] | null | null | null | hellowoeld.py | suhasd89/pypypython | bcf41db0a80935ff0399749a8a9fdf270e8d7975 | [
"MIT"
] | null | null | null | print('this is a program for my first github repository')
| 26.5 | 52 | 0.792453 | 8 | 53 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132075 | 53 | 1 | 53 | 53 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0.811321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
34fa9c96af30b51bb870a2a6f75ee274516564e8 | 101 | py | Python | rplugin/python3/ultest/handler/parsers/__init__.py | jpserra/vim-ultest | 06f965a62c32906f220c37e7b758a275d6a992f6 | [
"MIT"
] | 313 | 2020-12-16T18:27:23.000Z | 2022-03-29T00:41:15.000Z | rplugin/python3/ultest/handler/parsers/__init__.py | jpserra/vim-ultest | 06f965a62c32906f220c37e7b758a275d6a992f6 | [
"MIT"
] | 99 | 2020-11-04T07:47:08.000Z | 2022-03-29T07:07:56.000Z | rplugin/python3/ultest/handler/parsers/__init__.py | jpserra/vim-ultest | 06f965a62c32906f220c37e7b758a275d6a992f6 | [
"MIT"
] | 17 | 2021-03-07T19:05:43.000Z | 2022-03-02T01:35:53.000Z | from .file import FileParser, Position
from .output import OutputParser, OutputPatterns, ParseResult
| 33.666667 | 61 | 0.841584 | 11 | 101 | 7.727273 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108911 | 101 | 2 | 62 | 50.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
55184465ff1509bc3437017ef7ebfdc97048e934 | 91 | py | Python | view_demo/manytomany/views.py | maoxuelin083/Django-Study | 6eb332a97c898b11e6d1c1faf80dbb14f4f835c6 | [
"Apache-2.0"
] | null | null | null | view_demo/manytomany/views.py | maoxuelin083/Django-Study | 6eb332a97c898b11e6d1c1faf80dbb14f4f835c6 | [
"Apache-2.0"
] | null | null | null | view_demo/manytomany/views.py | maoxuelin083/Django-Study | 6eb332a97c898b11e6d1c1faf80dbb14f4f835c6 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def add(request):
pass | 13 | 35 | 0.736264 | 13 | 91 | 5.153846 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197802 | 91 | 7 | 36 | 13 | 0.917808 | 0.252747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
9b27990d8c58e4e77d16dc611b75baf7b7ee4092 | 96 | py | Python | weld/pandas_weld/tests/core/__init__.py | radujica/data-analysis-pipelines | 64a6e5613cb1ab2ba2eb2f763c2aa1e3bc5e0d3b | [
"MIT"
] | 5 | 2018-03-05T13:19:35.000Z | 2020-11-17T15:59:41.000Z | weld/pandas_weld/tests/core/__init__.py | radujica/data-analysis-pipelines | 64a6e5613cb1ab2ba2eb2f763c2aa1e3bc5e0d3b | [
"MIT"
] | 1 | 2021-06-01T22:27:44.000Z | 2021-06-01T22:27:44.000Z | weld/pandas_weld/tests/core/__init__.py | radujica/data-analysis-pipelines | 64a6e5613cb1ab2ba2eb2f763c2aa1e3bc5e0d3b | [
"MIT"
] | null | null | null | from indexes import *
from test_frame import DataFrameTests
from test_series import SeriesTests
| 24 | 37 | 0.864583 | 13 | 96 | 6.230769 | 0.615385 | 0.197531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 3 | 38 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9b56cb7d63eb58f9971b7de0fab8d2266cdb461b | 258 | py | Python | datahub/dataset/company_future_interest_countries/pagination.py | alixedi/data-hub-api-cd-poc | a5e5ea45bb496c0d2a06635864514af0c7d4291a | [
"MIT"
] | null | null | null | datahub/dataset/company_future_interest_countries/pagination.py | alixedi/data-hub-api-cd-poc | a5e5ea45bb496c0d2a06635864514af0c7d4291a | [
"MIT"
] | 16 | 2020-04-01T15:25:35.000Z | 2020-04-14T14:07:30.000Z | datahub/dataset/company_future_interest_countries/pagination.py | alixedi/data-hub-api-cd-poc | a5e5ea45bb496c0d2a06635864514af0c7d4291a | [
"MIT"
] | null | null | null | from datahub.dataset.core.pagination import DatasetCursorPagination
class CompanyFutureInterestCountriesDatasetViewCursorPagination(DatasetCursorPagination):
"""
Cursor Pagination for CompanyFutureInterestCountries
"""
ordering = ('id', )
| 25.8 | 89 | 0.794574 | 16 | 258 | 12.8125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135659 | 258 | 9 | 90 | 28.666667 | 0.919283 | 0.20155 | 0 | 0 | 0 | 0 | 0.010526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9b8cd857aa53a03f36f2c1059dfc26bdedcb847c | 113 | py | Python | src/torchprune/torchprune/method/messi/__init__.py | dani3l125/torchprune | f2589ec7514bd531ddaa7da3aed6388bb13712d3 | [
"MIT"
] | 74 | 2021-03-05T01:25:00.000Z | 2022-03-26T06:15:32.000Z | src/torchprune/torchprune/method/messi/__init__.py | dani3l125/torchprune | f2589ec7514bd531ddaa7da3aed6388bb13712d3 | [
"MIT"
] | 4 | 2021-05-25T06:01:22.000Z | 2022-01-24T22:38:09.000Z | src/torchprune/torchprune/method/messi/__init__.py | dani3l125/torchprune | f2589ec7514bd531ddaa7da3aed6388bb13712d3 | [
"MIT"
] | 7 | 2021-03-24T14:14:32.000Z | 2022-02-19T17:27:56.000Z | # flake8: noqa: F403,F401
"""The package for (generalized) Messi."""
from .messi_net import MessiNet, MessiNet5
| 22.6 | 42 | 0.734513 | 15 | 113 | 5.466667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 0.132743 | 113 | 4 | 43 | 28.25 | 0.755102 | 0.539823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9b9ebf1561be54fccd431bfa74a8da99b6f4a3f9 | 168 | py | Python | myproject/blog/admin.py | ChoiDongKyu96/myFirstDynamicBlog | df6c270c9b25caf310505d0faf99739a858bd612 | [
"MIT"
] | null | null | null | myproject/blog/admin.py | ChoiDongKyu96/myFirstDynamicBlog | df6c270c9b25caf310505d0faf99739a858bd612 | [
"MIT"
] | null | null | null | myproject/blog/admin.py | ChoiDongKyu96/myFirstDynamicBlog | df6c270c9b25caf310505d0faf99739a858bd612 | [
"MIT"
] | null | null | null | from django.contrib import admin
from blog.models import Post  # import the Post model from models.py
# Register your models here.
admin.site.register(Post)  # register Post with the admin page
| 28 | 56 | 0.785714 | 27 | 168 | 4.888889 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 168 | 5 | 57 | 33.6 | 0.916667 | 0.434524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
32d891418365ae240d7f5299538cc3dbba05b5ec | 592 | py | Python | tests/basics/sys_getsizeof.py | iotctl/pycopy | eeb841afea61b19800d054b3b289729665fc9aa4 | [
"MIT"
] | 663 | 2018-12-30T00:17:59.000Z | 2022-03-14T05:03:41.000Z | tests/basics/sys_getsizeof.py | iotctl/pycopy | eeb841afea61b19800d054b3b289729665fc9aa4 | [
"MIT"
] | 41 | 2019-06-06T08:31:19.000Z | 2022-02-13T16:53:41.000Z | tests/basics/sys_getsizeof.py | iotctl/pycopy | eeb841afea61b19800d054b3b289729665fc9aa4 | [
"MIT"
] | 60 | 2019-06-01T04:25:00.000Z | 2022-02-25T01:47:31.000Z | # test sys.getsizeof() function
# note - float test is in ../float/
import sys
try:
sys.getsizeof
except AttributeError:
print('SKIP')
raise SystemExit
print(sys.getsizeof(1) >= 2)
print(sys.getsizeof("") >= 2)
print(sys.getsizeof((1, 2)) >= 2)
print(sys.getsizeof([1, 2]) >= 2)
print(sys.getsizeof({1: 2}) >= 2)
class A:
pass
print(sys.getsizeof(A()) > 0)
try:
assert sys.getsizeof(set()) >= 2
except NameError:
pass
# Only test deque if we have it
try:
from ucollections import deque
assert sys.getsizeof(deque((), 1)) > 0
except ImportError:
pass
| 17.939394 | 42 | 0.646959 | 86 | 592 | 4.453488 | 0.395349 | 0.313316 | 0.266319 | 0.18799 | 0.208877 | 0.159269 | 0.159269 | 0.159269 | 0.159269 | 0.159269 | 0 | 0.033613 | 0.195946 | 592 | 32 | 43 | 18.5 | 0.771008 | 0.157095 | 0 | 0.26087 | 0 | 0 | 0.008081 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0 | true | 0.130435 | 0.130435 | 0 | 0.173913 | 0.304348 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
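The `sys_getsizeof.py` test above probes for `sys.getsizeof` with a `try`/`except AttributeError` before using it, since minimal ports may not provide the function. A hedged sketch of the same feature-detection pattern wrapped in a reusable helper (the name `sizeof_or_default` and its `default` parameter are assumptions, not part of the test suite):

```python
import sys

def sizeof_or_default(obj, default=-1):
    # Feature-detect sys.getsizeof, mirroring the probe in the test above;
    # some minimal Python ports do not provide it.
    getsizeof = getattr(sys, "getsizeof", None)
    if getsizeof is None:
        return default
    try:
        return getsizeof(obj)
    except TypeError:
        # Objects that refuse to report a size fall back to the default.
        return default

print(sizeof_or_default(1) > 0)
```

On CPython the helper simply delegates to `sys.getsizeof`; on a port without it, callers get the sentinel instead of an `AttributeError`.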
fd4fb7004090c63c7e2fe21c0a5d7151dcdeddf2 | 843 | py | Python | articlequality/utilities/__init__.py | kevinbazira/articlequality | 04e8dc62b5e9f005bd272ffe48d8fa1d5e80b41a | [
"MIT"
] | 23 | 2019-01-14T11:11:41.000Z | 2021-12-12T23:30:55.000Z | articlequality/utilities/__init__.py | kevinbazira/articlequality | 04e8dc62b5e9f005bd272ffe48d8fa1d5e80b41a | [
"MIT"
] | 78 | 2018-11-28T22:45:52.000Z | 2022-02-11T13:47:13.000Z | articlequality/utilities/__init__.py | kevinbazira/articlequality | 04e8dc62b5e9f005bd272ffe48d8fa1d5e80b41a | [
"MIT"
] | 23 | 2018-10-13T05:49:58.000Z | 2022-01-21T18:28:31.000Z | """
This module implements a set of utilities for extracting labeling events, text
and features from the command-line. When the articlequality python package is
installed, an `articlequality` utility should be available from the command line.
Run `revscoring -h` for more information:
Article Quality CLI
===================
articlequality
++++++++++++++
.. automodule:: articlequality.articlequality
Sub-utilities
=============
extract_from_text
+++++++++++++++++
.. automodule:: articlequality.utilities.extract_from_text
extract_labelings
+++++++++++++++++
.. automodule:: articlequality.utilities.extract_labelings
extract_text
++++++++++++
.. automodule:: articlequality.utilities.extract_text
fetch_text
++++++++++
.. automodule:: articlequality.utilities.fetch_text
score
+++++
.. automodule:: articlequality.utilities.score
"""
| 22.783784 | 79 | 0.708185 | 84 | 843 | 6.988095 | 0.511905 | 0.245315 | 0.28109 | 0.189097 | 0.149915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102017 | 843 | 36 | 80 | 23.416667 | 0.775429 | 0.989324 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
bd0a1800739e8bbb1a8c94708349612cb1d1bf71 | 440 | py | Python | trpg_bot/logic/DiceLogic.py | k-seta/trpg-bot | 5cb31418d62f7ac2ce83aa187d17dbec601152ae | [
"MIT"
] | 2 | 2020-05-16T14:26:10.000Z | 2020-05-18T20:03:45.000Z | trpg_bot/logic/DiceLogic.py | k-seta/trpg-bot | 5cb31418d62f7ac2ce83aa187d17dbec601152ae | [
"MIT"
] | 40 | 2020-05-04T14:06:35.000Z | 2021-05-06T13:59:29.000Z | trpg_bot/logic/DiceLogic.py | k-seta/trpg-bot | 5cb31418d62f7ac2ce83aa187d17dbec601152ae | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#-*- coding:utf-8 -*-
import random
class DiceLogic:
def __init__(self):
pass
@staticmethod
def roll(amount, size):
return [random.randint(1, size) for i in range(amount)]
@staticmethod
def roll_d66():
# All 36 ordered two-d6 results mapped to sorted digit pairs; the duplicate entries preserve the correct d66 probability weights.
return random.choice([11, 12, 13, 14, 15, 16, 12, 22, 23, 24, 25, 26, 13, 23, 33, 34, 35, 36, 14, 24, 34, 44, 45, 46, 15, 25, 35, 45, 55, 56, 16, 26, 36, 46, 56, 66])
| 24.444444 | 174 | 0.575 | 73 | 440 | 3.39726 | 0.671233 | 0.120968 | 0.153226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231707 | 0.254545 | 440 | 17 | 175 | 25.882353 | 0.52439 | 0.090909 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0.1 | 0.1 | 0.2 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
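`DiceLogic.roll_d66` above draws from a 36-entry table in which each unordered pair appears once per ordering, so the duplicates encode the d66 probability weights. A hypothetical equivalent formulation (the name `roll_d66_sorted` is an assumption) rolls two dice and puts the smaller result in the tens place, which yields the same distribution without the lookup table:

```python
import random

def roll_d66_sorted():
    # Roll two d6 and sort the digits; each unordered non-double pair is
    # produced by two orderings, matching the weights of the 36-entry
    # table in DiceLogic.roll_d66.
    low, high = sorted(random.randint(1, 6) for _ in range(2))
    return low * 10 + high
```

Every result is a two-digit value whose tens digit never exceeds its ones digit, e.g. a roll of (5, 2) yields 25.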
1fef9096eb6a36c0b1681c5b19a7168103ad46c3 | 192 | py | Python | __init__.py | ffhan/lingua | 1f7bea099dac93696d5d2ebb8d76926efe5ceda4 | [
"MIT"
] | null | null | null | __init__.py | ffhan/lingua | 1f7bea099dac93696d5d2ebb8d76926efe5ceda4 | [
"MIT"
] | null | null | null | __init__.py | ffhan/lingua | 1f7bea099dac93696d5d2ebb8d76926efe5ceda4 | [
"MIT"
] | null | null | null | """
Lingua is a library that enables easy and fast language and grammar creation, checking and parsing.
"""
from . import form, automata, misc, command_tester, SimEnka, SimPa, MinDka, grammar
| 38.4 | 99 | 0.765625 | 27 | 192 | 5.407407 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151042 | 192 | 4 | 100 | 48 | 0.895706 | 0.515625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
95245384830fbadcafac8a49473c8c1ad2a56d12 | 165 | py | Python | example-challenge/submission_format.py | rush2406/vipriors-challenges-toolkit | ff2d6b944ff4aebb0d3ec9bb9fb8d8459850ccb6 | [
"BSD-3-Clause"
] | 56 | 2020-03-12T19:33:56.000Z | 2022-03-10T14:44:43.000Z | example-challenge/submission_format.py | rush2406/vipriors-challenges-toolkit | ff2d6b944ff4aebb0d3ec9bb9fb8d8459850ccb6 | [
"BSD-3-Clause"
] | 42 | 2020-04-12T10:13:24.000Z | 2021-10-11T11:27:24.000Z | example-challenge/submission_format.py | rush2406/vipriors-challenges-toolkit | ff2d6b944ff4aebb0d3ec9bb9fb8d8459850ccb6 | [
"BSD-3-Clause"
] | 20 | 2020-04-01T11:00:37.000Z | 2022-03-04T00:25:42.000Z | """
Your method that takes model predictions and saves them in the submission
format goes here.
"""
def save_as_submissions(model_predictions, filepath):
pass | 27.5 | 71 | 0.781818 | 24 | 165 | 5.25 | 0.875 | 0.253968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 165 | 6 | 72 | 27.5 | 0.9 | 0.569697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
95345e7d05e6f0f2f0d63623684ab330a212d3d4 | 125 | py | Python | src/cfec/constraints/_one_hot.py | LoGosX/counterfactuals | b52fb03ec1a8caa28e15e3422762d7b3b755efb6 | [
"MIT"
] | null | null | null | src/cfec/constraints/_one_hot.py | LoGosX/counterfactuals | b52fb03ec1a8caa28e15e3422762d7b3b755efb6 | [
"MIT"
] | null | null | null | src/cfec/constraints/_one_hot.py | LoGosX/counterfactuals | b52fb03ec1a8caa28e15e3422762d7b3b755efb6 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
@dataclass
class OneHot:
name: str
start_column: int
end_column: int
| 13.888889 | 34 | 0.68 | 15 | 125 | 5.533333 | 0.8 | 0.216867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28 | 125 | 8 | 35 | 15.625 | 0.922222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
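The `OneHot` dataclass above records only a name and a column span for a one-hot-encoded feature. A hedged sketch of how such a constraint might be consumed (the helper `decode_one_hot` and the assumption that `end_column` is exclusive are illustrative, not part of the library):

```python
from dataclasses import dataclass

@dataclass
class OneHot:
    name: str
    start_column: int
    end_column: int

def decode_one_hot(row, constraint):
    # Hypothetical helper: slice the one-hot block out of a feature row
    # (treating end_column as exclusive) and return the index of the
    # active category within it.
    block = row[constraint.start_column:constraint.end_column]
    return block.index(1)

oh = OneHot(name="color", start_column=2, end_column=5)
print(decode_one_hot([0.3, 7, 0, 1, 0], oh))  # → 1
```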
20f95baf62b788518184d8d4b78bef14b2a23854 | 314 | py | Python | taotao-cloud-python/taotao-cloud-oldboy/day61-student-manager-system-upload-ORM/day61/app01/admin.py | shuigedeng/taotao-cloud-paren | 3d281b919490f7cbee4520211e2eee5da7387564 | [
"Apache-2.0"
] | 47 | 2021-04-13T10:32:13.000Z | 2022-03-31T10:30:30.000Z | taotao-cloud-python/taotao-cloud-oldboy/day61-student-manager-system-upload-ORM/day61/app01/admin.py | shuigedeng/taotao-cloud-paren | 3d281b919490f7cbee4520211e2eee5da7387564 | [
"Apache-2.0"
] | 1 | 2021-11-01T07:41:04.000Z | 2021-11-01T07:41:10.000Z | taotao-cloud-python/taotao-cloud-oldboy/day61-student-manager-system-upload-ORM/day61/app01/admin.py | shuigedeng/taotao-cloud-paren | 3d281b919490f7cbee4520211e2eee5da7387564 | [
"Apache-2.0"
] | 21 | 2021-04-13T10:32:17.000Z | 2022-03-26T07:43:22.000Z | from django.contrib import admin
from app01 import models
# admin.site.register(models.UserInfo)
# admin.site.register(models.Part)
# admin.site.register(models.User)
# admin.site.register(models.Tag)
# admin.site.register(models.UserToTag)
admin.site.register(models.Person)
admin.site.register(models.UserType)
| 28.545455 | 39 | 0.802548 | 44 | 314 | 5.727273 | 0.363636 | 0.25 | 0.472222 | 0.638889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006826 | 0.066879 | 314 | 10 | 40 | 31.4 | 0.853242 | 0.547771 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
1f287702502be962a8c271fb0b6dcbd4ca62d0bc | 65 | py | Python | password_expire/__init__.py | cash/django-password-expire | 6692f1807fcbf32c6577fa4a76d199cd127aa21d | [
"BSD-3-Clause"
] | null | null | null | password_expire/__init__.py | cash/django-password-expire | 6692f1807fcbf32c6577fa4a76d199cd127aa21d | [
"BSD-3-Clause"
] | null | null | null | password_expire/__init__.py | cash/django-password-expire | 6692f1807fcbf32c6577fa4a76d199cd127aa21d | [
"BSD-3-Clause"
] | null | null | null | default_app_config = 'password_expire.apps.PasswordExpireConfig'
| 32.5 | 64 | 0.876923 | 7 | 65 | 7.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046154 | 65 | 1 | 65 | 65 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0.630769 | 0.630769 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
1f8554db632e7228f4d9afb1ae7ea7dd615f42c8 | 54 | py | Python | createPackage/__init__.py | Sreekiranar/createPackage | c4c05ec098519be9927eb8cb1ad0709e961c877f | [
"MIT"
] | 1 | 2020-03-29T18:22:53.000Z | 2020-03-29T18:22:53.000Z | createPackage/__init__.py | Sreekiranar/createPackage | c4c05ec098519be9927eb8cb1ad0709e961c877f | [
"MIT"
] | null | null | null | createPackage/__init__.py | Sreekiranar/createPackage | c4c05ec098519be9927eb8cb1ad0709e961c877f | [
"MIT"
] | null | null | null | from .createPackage import *
from .templates import *
| 18 | 28 | 0.777778 | 6 | 54 | 7 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 54 | 2 | 29 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2f688bc232b56b2c6aaaef533cf5cb96c1918b47 | 102 | py | Python | blockchain-py/errors.py | andybi7676/blockchain-py | 96be123b43240e74ce9b8519824c94790c1e59ee | [
"Apache-2.0"
] | null | null | null | blockchain-py/errors.py | andybi7676/blockchain-py | 96be123b43240e74ce9b8519824c94790c1e59ee | [
"Apache-2.0"
] | null | null | null | blockchain-py/errors.py | andybi7676/blockchain-py | 96be123b43240e74ce9b8519824c94790c1e59ee | [
"Apache-2.0"
] | null | null | null | class NotEnoughFundsError(Exception):
pass
class NotFoundTransaction(Exception):
pass
| 14.571429 | 38 | 0.72549 | 8 | 102 | 9.25 | 0.625 | 0.351351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215686 | 102 | 6 | 39 | 17 | 0.925 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
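The two empty exception classes in `errors.py` exist so callers can raise and catch domain-specific failures instead of returning error codes. A minimal usage sketch, assuming a hypothetical `spend` function (not part of the project):

```python
class NotEnoughFundsError(Exception):
    pass

def spend(balance, amount):
    # Hypothetical caller of the error class above: signal insufficient
    # funds with a domain-specific exception rather than a sentinel value.
    if amount > balance:
        raise NotEnoughFundsError(
            "need %d but only %d available" % (amount, balance))
    return balance - amount

try:
    spend(10, 25)
except NotEnoughFundsError as e:
    print("rejected:", e)
```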
2f69e255016630d7a392702a84148c68451d4cb7 | 35 | py | Python | myfirst.py | Shashikant9198/My_first | 05443df92c6e0a747a5cb8c6aacbad1acbbeefb5 | [
"MIT"
] | null | null | null | myfirst.py | Shashikant9198/My_first | 05443df92c6e0a747a5cb8c6aacbad1acbbeefb5 | [
"MIT"
] | null | null | null | myfirst.py | Shashikant9198/My_first | 05443df92c6e0a747a5cb8c6aacbad1acbbeefb5 | [
"MIT"
] | null | null | null | print("Myfirst")
print(" Dheeraj")
| 11.666667 | 17 | 0.685714 | 4 | 35 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 2 | 18 | 17.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
2f7ea3d6f37bfaf582b3b829da439bf48774811a | 189 | py | Python | app/forms/__init__.py | Depado/intrepid-frontend | 18e66525e2d22298ed325fa21b4e34bccedcd051 | [
"WTFPL"
] | 1 | 2015-11-08T10:13:46.000Z | 2015-11-08T10:13:46.000Z | app/forms/__init__.py | Depado/intrepid-frontend | 18e66525e2d22298ed325fa21b4e34bccedcd051 | [
"WTFPL"
] | null | null | null | app/forms/__init__.py | Depado/intrepid-frontend | 18e66525e2d22298ed325fa21b4e34bccedcd051 | [
"WTFPL"
] | null | null | null | # -*- coding: utf-8 -*-
from interface_forms import LoginForm, SettingForm, TermForm
from scenario_ip_form import IpForm
from nmap_form import NmapForm
from type_ip_form import TypeIpForm
| 27 | 60 | 0.814815 | 27 | 189 | 5.481481 | 0.666667 | 0.202703 | 0.162162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006061 | 0.126984 | 189 | 6 | 61 | 31.5 | 0.890909 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
85cf0825c88ae4bff433f60d93e0a80da872bbd2 | 99,961 | py | Python | test/integration/fslogic_test/fslogic_test.py | onedata/oneclient | 46b2ca465bd4785f1b9a63aa7ce7fc5b2faaf130 | [
"MIT"
] | 4 | 2016-02-15T15:52:28.000Z | 2021-03-31T11:04:22.000Z | test/integration/fslogic_test/fslogic_test.py | onedata/oneclient | 46b2ca465bd4785f1b9a63aa7ce7fc5b2faaf130 | [
"MIT"
] | 11 | 2016-04-01T15:30:19.000Z | 2021-12-13T13:18:18.000Z | test/integration/fslogic_test/fslogic_test.py | onedata/oneclient | 46b2ca465bd4785f1b9a63aa7ce7fc5b2faaf130 | [
"MIT"
] | 3 | 2016-08-26T17:38:06.000Z | 2021-12-12T20:10:59.000Z | from __future__ import print_function
import random
__author__ = "Konrad Zemek"
__copyright__ = """(C) 2015 ACK CYFRONET AGH,
This software is released under the MIT license cited in 'LICENSE.txt'."""
import os
import sys
from threading import Thread
from multiprocessing import Pool
import time
import math
import json
import pytest
from stat import *
import xml.etree.ElementTree as ET
script_dir = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.dirname(script_dir))
from test_common import *
# noinspection PyUnresolvedReferences
from environment import appmock, common, docker
# noinspection PyUnresolvedReferences
import fslogic
# noinspection PyUnresolvedReferences
from proto import messages_pb2, fuse_messages_pb2, event_messages_pb2, \
common_messages_pb2, stream_messages_pb2
SYNCHRONIZE_BLOCK_PRIORITY_IMMEDIATE = 32
@pytest.yield_fixture
def endpoint(appmock_client):
app = appmock_client.tcp_endpoint(443)
yield app
appmock_client.reset_tcp_history()
@pytest.yield_fixture(scope="function")
def fl(endpoint):
fsl = fslogic.FsLogicProxy(endpoint.ip, endpoint.port, 10000, 5*60, "")
yield fsl
fsl.stop()
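The `fl` fixture above follows pytest's yield-fixture shape: construct the proxy, yield it to the test, then call `stop()` on teardown. The same setup/teardown contract can be sketched with `contextlib.contextmanager`; `FakeProxy` below is a stand-in for `fslogic.FsLogicProxy`, not the real class:

```python
from contextlib import contextmanager

class FakeProxy:
    # Stand-in for fslogic.FsLogicProxy: only records whether stop() ran.
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

@contextmanager
def proxy():
    # Same shape as the fl fixture above: code before the yield is setup,
    # code after it is teardown, and the finally clause guarantees the
    # teardown runs even if the body raises.
    fsl = FakeProxy()
    try:
        yield fsl
    finally:
        fsl.stop()

with proxy() as p:
    assert not p.stopped
```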
@pytest.yield_fixture
def fl_dircache(endpoint):
fsl = fslogic.FsLogicProxy(endpoint.ip, endpoint.port,
25, # Max metadata cache size
3, # Directory cache expires after 3 seconds
"")
yield fsl
fsl.stop()
@pytest.yield_fixture
def fl_archivematica(endpoint):
fsl = fslogic.FsLogicProxy(endpoint.ip, endpoint.port,
10000, 5*60, "--enable-archivematica")
yield fsl
fsl.stop()
@pytest.yield_fixture
def fl_onlyfullreplicas(endpoint):
fsl = fslogic.FsLogicProxy(endpoint.ip, endpoint.port,
10000, 5*60, "--only-full-replicas")
yield fsl
fsl.stop()
@pytest.fixture
def uuid():
return random_str()
@pytest.fixture
def parentUuid():
return random_str()
@pytest.fixture
def stat(endpoint, fl, uuid):
response = prepare_attr_response(uuid, fuse_messages_pb2.REG)
with reply(endpoint, response):
return fl.getattr(uuid)
@pytest.fixture
def parentStat(endpoint, fl, parentUuid):
response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)
with reply(endpoint, response):
return fl.getattr(parentUuid)
def prepare_file_blocks(blocks=()):
file_blocks = []
for file_block in blocks:
block = common_messages_pb2.FileBlock()
if len(file_block) == 2:
offset, block_size = file_block
else:
offset, block_size, storage_id, file_id = file_block
block.storage_id = storage_id
block.file_id = file_id
block.offset = offset
block.size = block_size
file_blocks.append(block)
return file_blocks
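`prepare_file_blocks` accepts either `(offset, size)` pairs or full `(offset, size, storage_id, file_id)` tuples and normalizes both into block messages. The same normalization can be shown without protobuf; `Block` below is a plain stand-in for `common_messages_pb2.FileBlock`, and the helper name is assumed:

```python
from collections import namedtuple

# Plain stand-in for common_messages_pb2.FileBlock, so the normalization
# logic can be demonstrated without a protobuf dependency.
Block = namedtuple("Block", "offset size storage_id file_id")

def normalize_blocks(blocks):
    # Mirrors prepare_file_blocks above: 2-tuples carry only placement,
    # 4-tuples additionally pin the block to a storage and file id.
    out = []
    for b in blocks:
        if len(b) == 2:
            offset, size = b
            storage_id, file_id = None, None
        else:
            offset, size, storage_id, file_id = b
        out.append(Block(offset, size, storage_id, file_id))
    return out
```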
def prepare_sync_response(uuid, data, blocks):
location = prepare_location(uuid, blocks)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_location_changed.file_location.CopyFrom(location)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_sync_and_checksum_response(uuid, data, blocks, checksum):
location = prepare_location(uuid, blocks)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.sync_response.checksum = checksum
server_response.fuse_response.sync_response.file_location_changed.file_location.CopyFrom(location)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_partial_sync_response(uuid, data, blocks, start, end):
location = prepare_location(uuid, blocks)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_location_changed.file_location.CopyFrom(location)
server_response.fuse_response.file_location_changed.change_beg_offset = start
server_response.fuse_response.file_location_changed.change_end_offset = end
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_sync_eagain_response(uuid, data, blocks):
location = prepare_location(uuid, blocks)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_location.CopyFrom(location)
server_response.fuse_response.status.code = common_messages_pb2.Status.eagain
return server_response
def prepare_sync_ecanceled_response(uuid, data, blocks):
location = prepare_location(uuid, blocks)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_location.CopyFrom(location)
server_response.fuse_response.status.code = common_messages_pb2.Status.ecanceled
return server_response
def prepare_sync_request(uuid, offset, size):
block = common_messages_pb2.FileBlock()
block.offset = offset
block.size = size
req = fuse_messages_pb2.SynchronizeBlock()
req.uuid = uuid
req.block.CopyFrom(block)
client_request = messages_pb2.ClientMessage()
client_request.fuse_request.synchronize_block.CopyFrom(req)
return client_request
def prepare_sync_and_fetch_checksum_request(uuid, offset, size):
block = common_messages_pb2.FileBlock()
block.offset = offset
block.size = size
req = fuse_messages_pb2.SynchronizeBlockAndComputeChecksum()
req.uuid = uuid
req.block.CopyFrom(block)
client_request = messages_pb2.ClientMessage()
client_request.fuse_request.synchronize_block_and_compute_checksum.CopyFrom(req)
return client_request
def prepare_attr_response(uuid, filetype, size=None, parent_uuid=None, name='filename'):
repl = fuse_messages_pb2.FileAttr()
repl.uuid = uuid
if parent_uuid:
repl.parent_uuid = parent_uuid
repl.name = name
repl.mode = random.randint(0, 1023)
repl.uid = random.randint(0, 20000)
repl.gid = random.randint(0, 20000)
repl.mtime = int(time.time()) - random.randint(0, 1000000)
repl.atime = repl.mtime - random.randint(0, 1000000)
repl.ctime = repl.atime - random.randint(0, 1000000)
repl.type = filetype
repl.size = size if size else random.randint(0, 1000000000)
repl.owner_id = ''
repl.provider_id = ''
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_attr.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_attr_response_mode(uuid, filetype, mode, parent_uuid=None):
repl = fuse_messages_pb2.FileAttr()
repl.uuid = uuid
if parent_uuid:
repl.parent_uuid = parent_uuid
repl.name = 'filename'
repl.mode = mode
repl.uid = random.randint(0, 20000)
repl.gid = random.randint(0, 20000)
repl.mtime = int(time.time()) - random.randint(0, 1000000)
repl.atime = repl.mtime - random.randint(0, 1000000)
repl.ctime = repl.atime - random.randint(0, 1000000)
repl.type = filetype
repl.owner_id = ''
repl.provider_id = ''
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_attr.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_readlink_response(uuid, link):
repl = fuse_messages_pb2.Symlink()
repl.link = link
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.symlink.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_fsstat_response(uuid, space_id, storage_count=1, size=1024*1024, occupied=1024):
repl = fuse_messages_pb2.FSStats()
repl.space_id = space_id
storages = []
for i in range(0, storage_count):
storage = fuse_messages_pb2.StorageStats()
storage.storage_id = "storage_"+str(i)
storage.size = size
storage.occupied = occupied
storages.append(storage)
repl.storage_stats.extend(storages)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.fs_stats.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_helper_response():
repl = fuse_messages_pb2.HelperParams()
repl.helper_name = 'null'
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.helper_params.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_location(uuid, blocks=()):
file_blocks = prepare_file_blocks(blocks)
repl = fuse_messages_pb2.FileLocation()
repl.uuid = uuid
repl.space_id = 'space1'
repl.storage_id = 'storage1'
repl.file_id = 'file1'
repl.provider_id = 'provider1'
repl.blocks.extend(file_blocks)
repl.version = 1
return repl
def prepare_location_response(uuid, blocks=()):
location = prepare_location(uuid, blocks)
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_location.CopyFrom(location)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_rename_response(new_uuid):
repl = fuse_messages_pb2.FileRenamed()
repl.new_uuid = new_uuid
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_renamed.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_processing_status_response(status):
repl = messages_pb2.ProcessingStatus()
repl.code = status
server_response = messages_pb2.ServerMessage()
server_response.processing_status.CopyFrom(repl)
return server_response
def prepare_open_response(handle_id='handle_id'):
repl = fuse_messages_pb2.FileOpened()
repl.handle_id = handle_id
server_response = messages_pb2.ServerMessage()
server_response.fuse_response.file_opened.CopyFrom(repl)
server_response.fuse_response.status.code = common_messages_pb2.Status.ok
return server_response
def prepare_file_children_attr_response(parent_uuid, prefix, count):
child_attrs = []
for i in range(count):
f = prepare_attr_response(random_str(), fuse_messages_pb2.REG, 1, parent_uuid).\
fuse_response.file_attr
f.name = prefix+str(i)
child_attrs.append(f)
response = fuse_messages_pb2.FileChildrenAttrs()
response.child_attrs.extend(child_attrs)
return response
def prepare_events(evt_list):
evts = event_messages_pb2.Events()
evts.events.extend(evt_list)
msg = messages_pb2.ServerMessage()
msg.events.CopyFrom(evts)
return msg
def prepare_file_attr_changed_event(uuid, type, size, parent_uuid, mode=None):
attr = fuse_messages_pb2.FileAttr()
attr.uuid = uuid
attr.name = 'filename'
attr.mode = mode if mode else random_int(upper_bound=0777)
attr.uid = random_int(upper_bound=20000)
attr.gid = random_int(upper_bound=20000)
attr.mtime = int(time.time()) - random_int(upper_bound=1000000)
attr.atime = attr.mtime - random_int(upper_bound=1000000)
attr.ctime = attr.atime - random_int(upper_bound=1000000)
attr.type = type
if size:
attr.size = size
attr.owner_id = ''
attr.provider_id = ''
attr.parent_uuid = parent_uuid
attr_evt = event_messages_pb2.FileAttrChangedEvent()
attr_evt.file_attr.CopyFrom(attr)
evt = event_messages_pb2.Event()
evt.file_attr_changed.CopyFrom(attr_evt)
return prepare_events([evt])
def prepare_file_renamed_event(uuid, new_uuid, new_name, new_parent_uuid):
top_entry = common_messages_pb2.FileRenamedEntry()
top_entry.old_uuid = uuid
top_entry.new_uuid = new_uuid
top_entry.new_name = new_name
top_entry.new_parent_uuid = new_parent_uuid
rename_evt = event_messages_pb2.FileRenamedEvent()
rename_evt.top_entry.CopyFrom(top_entry)
evt = event_messages_pb2.Event()
evt.file_renamed.CopyFrom(rename_evt)
return prepare_events([evt])
def do_open(endpoint, fl, uuid, size=None, blocks=(), handle_id='handle_id'):
attr_response = prepare_attr_response(uuid, fuse_messages_pb2.REG,
size=size)
location_response = prepare_location_response(uuid, blocks)
open_response = prepare_open_response(handle_id)
with reply(endpoint, [attr_response,
location_response,
open_response]):
handle = fl.open(uuid, 0)
assert handle >= 0
return handle
def do_open_cached(endpoint, fl, uuid, size=None, blocks=(), handle_id='handle_id'):
location_response = prepare_location_response(uuid, blocks)
open_response = prepare_open_response(handle_id)
with reply(endpoint, [location_response,
open_response]):
handle = fl.open(uuid, 0)
assert handle >= 0
return handle
def do_release(endpoint, fl, uuid, fh):
fsync_response = messages_pb2.ServerMessage()
fsync_response.fuse_response.status.code = common_messages_pb2.Status.ok
release_response = messages_pb2.ServerMessage()
release_response.fuse_response.status.code = common_messages_pb2.Status.ok
result = None
with reply(endpoint, [fsync_response,
release_response]) as queue:
fl.release(uuid, fh)
result = queue
return result
def get_stream_id_from_location_subscription(subscription_message_data):
location_subsc = messages_pb2.ClientMessage()
location_subsc.ParseFromString(subscription_message_data)
return location_subsc.message_stream.stream_id
def test_statfs_should_get_storage_size(appmock_client, endpoint, fl, uuid):
block_size = 4096
response = prepare_fsstat_response(uuid, "space_1", 1, 1000*block_size, 21*block_size)
with reply(endpoint, [response]) as queue:
statfs = fl.statfs(uuid)
queue.get()
assert statfs.bsize == block_size
assert statfs.frsize == block_size
assert statfs.blocks == 1000
assert statfs.bavail == 1000-21
def test_statfs_should_report_empty_free_space_on_overoccupied_storage(appmock_client, endpoint, fl, uuid):
block_size = 4096
response = prepare_fsstat_response(uuid, "space_1", 2, 10*block_size, 20*block_size)
with reply(endpoint, [response]) as queue:
statfs = fl.statfs(uuid)
queue.get()
assert statfs.bsize == block_size
assert statfs.frsize == block_size
assert statfs.blocks == 2*10
assert statfs.bavail == 0


def test_getattrs_should_get_attrs(appmock_client, endpoint, fl, uuid, parentUuid):
    response = prepare_attr_response(uuid, fuse_messages_pb2.REG, 1, parentUuid)

    parentParentUuid = random_str()
    parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR,
                                            None, parentParentUuid)

    with reply(endpoint, [response,
                          parent_response]) as queue:
        stat = fl.getattr(uuid)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')

    fuse_request = client_message.fuse_request
    assert fuse_request.file_request.HasField('get_file_attr')
    assert fuse_request.file_request.context_guid == uuid

    repl = response.fuse_response.file_attr
    assert repl.uuid == uuid
    assert stat.atime == repl.atime
    assert stat.mtime == repl.mtime
    assert stat.ctime == repl.ctime
    assert stat.gid == repl.gid
    assert stat.uid == repl.uid
    assert stat.mode == repl.mode | fslogic.regularMode()
    assert stat.size == repl.size
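The final mode assertion reflects how a POSIX `st_mode` word combines file-type bits with permission bits; `fslogic.regularMode()` presumably contributes the regular-file type bits (S_IFREG). The stdlib `stat` module illustrates the same composition (aliased here to avoid shadowing the `stat` fixture):

```python
import stat as stat_mod

def compose_mode(perm_bits, type_bits=stat_mod.S_IFREG):
    # A full st_mode is the OR of the file-type bits and the
    # permission bits, which is what the assertion above checks.
    return type_bits | perm_bits

mode = compose_mode(0o644)
assert stat_mod.S_ISREG(mode)            # the type bits survive the OR
assert stat_mod.S_IMODE(mode) == 0o644   # so do the permission bits
```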


def test_getattrs_should_pass_errors(appmock_client, endpoint, fl, uuid):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.enoent

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.getattr(uuid)

    assert 'No such file or directory' in str(excinfo.value)


def test_getattrs_should_cache_attrs(appmock_client, endpoint, fl, uuid, parentUuid):
    attr_response = prepare_attr_response(uuid, fuse_messages_pb2.REG, 1, parentUuid)
    attr_parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    # FsLogic should first request the FileAttr for uuid, and then
    # request the FileAttr for its parent since it isn't cached.
    # After that it should create subscriptions on the parent for the
    # metadata changes in that directory.
    with reply(endpoint, [attr_response, attr_parent_response]):
        stat = fl.getattr(uuid)

    assert fl.metadata_cache_contains(uuid)
    assert fl.metadata_cache_contains(parentUuid)

    # This should return the attr without any calls to Oneprovider
    new_stat = fl.getattr(uuid)

    assert stat == new_stat


def test_mkdir_should_mkdir(appmock_client, endpoint, fl):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, [response, getattr_response]) as queue:
        fl.mkdir('parentUuid', 'name', 0o123)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.context_guid == 'parentUuid'
    assert file_request.HasField('create_dir')

    create_dir = file_request.create_dir
    assert create_dir.name == 'name'
    assert create_dir.mode == 0o123
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid


def test_mkdir_should_recreate_dir(appmock_client, endpoint, fl):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)

    def mkdir(getattr_response_param):
        response = messages_pb2.ServerMessage()
        response.fuse_response.status.code = common_messages_pb2.Status.ok

        with reply(endpoint, [response,
                              getattr_response_param]) as queue:
            fl.mkdir('parentUuid', 'name', 0o123)
            client_message = queue.get()

        assert client_message.HasField('fuse_request')
        assert client_message.fuse_request.HasField('file_request')

        file_request = client_message.fuse_request.file_request
        assert file_request.context_guid == 'parentUuid'
        assert file_request.HasField('create_dir')

        create_dir = file_request.create_dir
        assert create_dir.name == 'name'
        assert create_dir.mode == 0o123
        assert file_request.context_guid == \
            getattr_response.fuse_response.file_attr.uuid

    mkdir(getattr_response)

    response_ok = messages_pb2.ServerMessage()
    response_ok.fuse_response.status.code = common_messages_pb2.Status.ok
    with reply(endpoint, [getattr_response, response_ok]) as queue:
        fl.unlink('parentUuid', 'name')

    getattr_response2 = prepare_attr_response('parentUuid2', fuse_messages_pb2.DIR)
    mkdir(getattr_response2)


def test_mkdir_should_pass_mkdir_errors(appmock_client, endpoint, fl):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, [getattr_response, response]):
            fl.mkdir('parentUuid', 'filename', 0o123)

    assert 'Operation not permitted' in str(excinfo.value)


def test_rmdir_should_rmdir(appmock_client, endpoint, fl, uuid):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)
    ok = messages_pb2.ServerMessage()
    ok.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, [getattr_response, ok]) as queue:
        fl.rmdir('parentUuid', 'name')
        queue.get()
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('delete_file')
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid

    with pytest.raises(RuntimeError) as excinfo:
        fl.getattr(uuid)
    assert 'No such file or directory' in str(excinfo.value)


def test_rmdir_should_pass_rmdir_errors(appmock_client, endpoint, fl, uuid):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)
    getattr_parent_response = \
        prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, [getattr_response, response]):
            fl.rmdir('parentUuid', 'filename')

    assert 'Operation not permitted' in str(excinfo.value)


def test_rename_should_rename_file_with_different_uuid(appmock_client, endpoint, fl, uuid):
    getattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.REG, 1024, 'parentUuid')
    getattr_parent_response = \
        prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    getattr_newparent_response = \
        prepare_attr_response('newParentUuid', fuse_messages_pb2.DIR)
    rename_response = prepare_rename_response('newUuid')

    #
    # Prepare first response with 5 files
    #
    repl = prepare_file_children_attr_response('parentUuid', "afiles-", 5)
    repl.is_last = True
    readdir_response = messages_pb2.ServerMessage()
    readdir_response.fuse_response.file_children_attrs.CopyFrom(repl)
    readdir_response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, [getattr_response,
                          getattr_parent_response,
                          getattr_newparent_response,
                          readdir_response,
                          rename_response]) as queue:
        # Ensure the source file is cached
        fl.getattr(uuid)
        queue.get()

        # Ensure the target directory is cached
        d = fl.opendir('newParentUuid')
        fl.readdir('newParentUuid', 100, 0)
        fl.releasedir('newParentUuid', d)
        queue.get()

        # Move the file to new directory
        fl.rename('parentUuid', 'filename', 'newParentUuid', 'newName')
        queue.get()
        queue.get()
        client_message = queue.get()

    assert not fl.metadata_cache_contains(uuid)
    assert fl.metadata_cache_contains('newUuid')

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('rename')

    rename = file_request.rename
    assert rename.target_parent_uuid == 'newParentUuid'
    assert rename.target_name == 'newName'
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid


def test_rename_should_rename_file_with_the_same_uuid(appmock_client, endpoint, fl, uuid):
    getattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.REG, 1024, 'parentUuid')
    getattr_parent_response = \
        prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    getattr_newparent_response = \
        prepare_attr_response('newParentUuid', fuse_messages_pb2.DIR)
    rename_response = prepare_rename_response(uuid)

    #
    # Prepare first response with 5 files
    #
    repl = prepare_file_children_attr_response('parentUuid', "afiles-", 5)
    repl.is_last = True
    readdir_response = messages_pb2.ServerMessage()
    readdir_response.fuse_response.file_children_attrs.CopyFrom(repl)
    readdir_response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, [getattr_response,
                          getattr_parent_response,
                          getattr_newparent_response,
                          readdir_response,
                          rename_response]) as queue:
        # Ensure the source file is cached
        fl.getattr(uuid)
        queue.get()

        # Ensure the target directory is cached
        d = fl.opendir('newParentUuid')
        fl.readdir('newParentUuid', 100, 0)
        fl.releasedir('newParentUuid', d)
        queue.get()

        fl.rename('parentUuid', 'filename', 'newParentUuid', 'newName')
        queue.get()
        queue.get()
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('rename')

    rename = file_request.rename
    assert rename.target_parent_uuid == 'newParentUuid'
    assert rename.target_name == 'newName'
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid


def test_rename_should_rename_directory(appmock_client, endpoint, fl, uuid):
    getattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.DIR, 1234, 'parentUuid', 'name')
    getattr_parent_response = \
        prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    getattr_newparent_response = \
        prepare_attr_response('newParentUuid', fuse_messages_pb2.DIR)
    getattr_newattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.DIR, 1234, 'parentUuid', 'name')
    rename_response = prepare_rename_response('newUuid')

    with reply(endpoint, [getattr_response,
                          getattr_parent_response,
                          rename_response,
                          getattr_newattr_response]) as queue:
        fl.rename('parentUuid', 'name', 'newParentUuid', 'newName')
        queue.get()
        queue.get()
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('rename')

    rename = file_request.rename
    assert rename.target_parent_uuid == 'newParentUuid'
    assert rename.target_name == 'newName'
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid


def test_rename_event_should_ignore_uncached_files(appmock_client, endpoint, fl, uuid):
    evt = prepare_file_renamed_event(
        uuid, uuid, 'newName', 'newParentUuid')

    with send(endpoint, [evt]):
        pass

    assert not fl.metadata_cache_contains(uuid)
    assert not fl.metadata_cache_contains('newParentUuid')


def test_rename_event_should_update_old_file_parent_cache(appmock_client, endpoint, fl):
    parentUuid = 'parentUuid'
    getattr_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)
    getattr_newattr_response = \
        prepare_attr_response('newUuid', fuse_messages_pb2.DIR, 1234, 'parentUuid', 'name')

    #
    # Prepare first response with 3 files
    #
    dir_size = 3
    repl = prepare_file_children_attr_response(parentUuid, "afiles-", dir_size)
    repl.is_last = True
    readdir_response = messages_pb2.ServerMessage()
    readdir_response.fuse_response.file_children_attrs.CopyFrom(repl)
    readdir_response.fuse_response.status.code = common_messages_pb2.Status.ok

    # When adding the first directory entry, the client will make sure that the
    # parent attributes are also cached
    getattr_parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          readdir_response]) as queue:
        d = fl.opendir(parentUuid)
        children_chunk = fl.readdir(parentUuid, chunk_size, offset)
        _ = queue.get()
        fl.releasedir(parentUuid, d)

    assert len(children_chunk) == len(['.', '..']) + dir_size

    uuid = repl.child_attrs[0].uuid
    evt = prepare_file_renamed_event(
        uuid, 'newUuid', 'newName', 'newParentUuid')

    with send(endpoint, [evt]):
        wait_until(lambda: not fl.metadata_cache_contains(uuid))

    assert fl.metadata_cache_contains('parentUuid')
    assert not fl.metadata_cache_contains('newUuid')
    assert not fl.metadata_cache_contains('newParentUuid')

    children_chunk = []
    with reply(endpoint, [getattr_newattr_response]) as queue:
        children_chunk = fl.readdir(parentUuid, chunk_size, offset)

    assert len(children_chunk) == len(['.', '..']) + dir_size - 1
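These event tests poll with a `wait_until` helper because events are delivered asynchronously. A minimal sketch of such a helper (the real one presumably lives in the shared test utilities; the timeout and interval values here are illustrative):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns truthy or `timeout` elapses.

    Fails with AssertionError on timeout, so a hanging expectation
    fails the test instead of blocking it forever.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return
        time.sleep(interval)
    assert condition(), "condition not met within %.1fs" % timeout
```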


def test_rename_event_should_update_new_file_parent_cache(appmock_client, endpoint, fl, uuid):
    parentUuid = 'newParentUuid'
    getattr_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    #
    # Prepare first response with 3 files
    #
    dir_size = 3
    repl = prepare_file_children_attr_response(parentUuid, "afiles-", dir_size)
    repl.is_last = True
    readdir_response = messages_pb2.ServerMessage()
    readdir_response.fuse_response.file_children_attrs.CopyFrom(repl)
    readdir_response.fuse_response.status.code = common_messages_pb2.Status.ok

    # When adding the first directory entry, the client will make sure that the
    # parent attributes are also cached
    getattr_parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          readdir_response]) as queue:
        d = fl.opendir(parentUuid)
        children_chunk = fl.readdir(parentUuid, chunk_size, offset)
        _ = queue.get()
        fl.releasedir(parentUuid, d)

    assert len(children_chunk) == len(['.', '..']) + dir_size

    evt = prepare_file_renamed_event(
        uuid, 'newUuid', 'afiles-NEW', 'newParentUuid')
    getattr_new_response = \
        prepare_attr_response('newUuid', fuse_messages_pb2.REG, 1234,
                              'newParentUuid', 'afiles-NEW')

    with send(endpoint, [evt]):
        pass

    with reply(endpoint, [getattr_new_response]) as queue:
        wait_until(lambda: fl.metadata_cache_contains('newUuid'))

        assert not fl.metadata_cache_contains(uuid)
        assert fl.metadata_cache_contains('newParentUuid')

        children_chunk = fl.readdir(parentUuid, chunk_size, offset)

    assert len(children_chunk) == len(['.', '..']) + dir_size + 1
    assert 'afiles-NEW' in children_chunk


def test_rename_should_update_cache(appmock_client, endpoint, fl, uuid):
    parentUuid = 'parentUuid'
    newParentUuid = 'newParentUuid'
    dir_size = 3
    offset = 0
    chunk_size = 100

    #
    # Prepare first response with 3 files
    #
    repl = prepare_file_children_attr_response(parentUuid, "afiles-", dir_size)
    repl.is_last = True
    readdir_response = messages_pb2.ServerMessage()
    readdir_response.fuse_response.file_children_attrs.CopyFrom(repl)
    readdir_response.fuse_response.status.code = common_messages_pb2.Status.ok

    # When adding the first directory entry, the client will make sure that the
    # parent attributes are also cached
    getattr_parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    #
    # Prepare second response with 3 files
    #
    repl_new = prepare_file_children_attr_response(newParentUuid, "bfiles-", dir_size)
    repl_new.is_last = True
    readdir_new_response = messages_pb2.ServerMessage()
    readdir_new_response.fuse_response.file_children_attrs.CopyFrom(repl_new)
    readdir_new_response.fuse_response.status.code = common_messages_pb2.Status.ok

    # When adding the first directory entry, the client will make sure that the
    # parent attributes are also cached
    getattr_newparent_response = prepare_attr_response(newParentUuid, fuse_messages_pb2.DIR)

    rename_response = prepare_rename_response('newUuid')

    with reply(endpoint, [getattr_parent_response,
                          readdir_response,
                          getattr_newparent_response,
                          readdir_new_response,
                          rename_response]) as queue:
        # Ensure the source directory is cached
        d = fl.opendir(parentUuid)
        fl.readdir(parentUuid, chunk_size, offset)
        fl.releasedir(parentUuid, d)

        # Ensure the target directory is cached
        d = fl.opendir(newParentUuid)
        fl.readdir(newParentUuid, chunk_size, offset)
        fl.releasedir(newParentUuid, d)

        # Rename the file
        fl.rename(parentUuid, 'afiles-0', newParentUuid, 'afiles-NEW')

        assert fl.metadata_cache_contains('newUuid')

        children_chunk = fl.readdir(parentUuid, chunk_size, offset)
        assert len(children_chunk) == len(['.', '..']) + dir_size - 1
        assert 'afiles-0' not in children_chunk

        children_chunk = fl.readdir(newParentUuid, chunk_size, offset)
        assert len(children_chunk) == len(['.', '..']) + dir_size + 1
        assert 'afiles-NEW' in children_chunk


def test_rename_should_pass_rename_errors(appmock_client, endpoint, fl, uuid):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, [getattr_response,
                              response]):
            fl.rename('parentUuid', 'name', 'newParentUuid', 'newName')

    assert 'Operation not permitted' in str(excinfo.value)


def test_chmod_should_change_mode(appmock_client, endpoint, fl, uuid):
    getattr_parent_response = \
        prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    getattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.REG, 1024, 'parentUuid')
    ok_response = messages_pb2.ServerMessage()
    ok_response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, [ok_response,
                          getattr_parent_response,
                          ok_response,
                          getattr_response]) as queue:
        fl.chmod(uuid, 0o123)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('change_mode')

    change_mode = file_request.change_mode
    assert change_mode.mode == 0o123
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid


def test_chmod_should_change_cached_mode(appmock_client, endpoint, fl, uuid, parentUuid):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.REG, 1, parentUuid)
    getattr_parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    with reply(endpoint, [getattr_response,
                          getattr_parent_response]):
        stat = fl.getattr(uuid)

    assert stat.mode == getattr_response.fuse_response.file_attr.mode | \
        fslogic.regularMode()

    appmock_client.reset_tcp_history()

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok
    with reply(endpoint, [response, response]):
        fl.chmod(uuid, 0o356)
        stat = fl.getattr(uuid)

    assert stat.mode == 0o356 | fslogic.regularMode()


def test_chmod_should_pass_chmod_errors(appmock_client, endpoint, fl, uuid):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.enoent

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.chmod(uuid, 0o312)

    assert 'No such file or directory' in str(excinfo.value)


def test_utime_should_update_times(appmock_client, endpoint, fl, uuid, stat):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, response) as queue:
        fl.utime(uuid)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('update_times')

    update_times = file_request.update_times
    assert update_times.atime == update_times.mtime
    assert update_times.atime == update_times.ctime
    assert update_times.atime <= time.time()
    assert file_request.context_guid == uuid


def test_utime_should_change_cached_times(appmock_client, endpoint, fl, uuid, parentUuid):
    getattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.REG, 1, parentUuid)
    getattr_parent_response = \
        prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    with reply(endpoint, [getattr_response,
                          getattr_parent_response]):
        stat = fl.getattr(uuid)

    assert stat.atime == getattr_response.fuse_response.file_attr.atime
    assert stat.mtime == getattr_response.fuse_response.file_attr.mtime

    appmock_client.reset_tcp_history()

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok
    with reply(endpoint, response):
        fl.utime(uuid)

    # Re-read the cached attributes, which should now carry the updated times
    stat = fl.getattr(uuid)

    assert stat.atime != getattr_response.fuse_response.file_attr.atime
    assert stat.mtime != getattr_response.fuse_response.file_attr.mtime


def test_utime_should_update_times_with_buf(appmock_client, endpoint, fl, uuid, stat):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    ubuf = fslogic.Ubuf()
    ubuf.actime = 54321
    ubuf.modtime = 12345

    with reply(endpoint, response) as queue:
        fl.utime_buf(uuid, ubuf)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('update_times')

    update_times = file_request.update_times
    assert update_times.atime == ubuf.actime
    assert update_times.mtime == ubuf.modtime
    assert file_request.context_guid == uuid


def test_utime_should_pass_utime_errors(appmock_client, endpoint, fl, uuid, stat):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.utime(uuid)
    assert 'Operation not permitted' in str(excinfo.value)

    ubuf = fslogic.Ubuf()
    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.utime_buf(uuid, ubuf)
    assert 'Operation not permitted' in str(excinfo.value)


def test_readdir_should_read_dir(appmock_client, endpoint, fl, stat):
    uuid = 'parentUuid'
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)

    #
    # Prepare first response with 5 files
    #
    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    repl1.is_last = False
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare second response with another 5 files
    #
    repl2 = prepare_file_children_attr_response(uuid, "bfiles-", 5)
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1,
                          response2]) as queue:
        d = fl.opendir(uuid)
        children_chunk = fl.readdir(uuid, chunk_size, offset)
        _ = queue.get()
        fl.releasedir(uuid, d)

    assert len(children_chunk) == 12

    time.sleep(2)

    #
    # After the last request the value should be available
    # from readdir cache, without any communication with provider
    #
    for i in range(3):
        with reply(endpoint, []) as queue:
            d = fl.opendir(uuid)
            children_chunk = fl.readdir(uuid, 5, 0)
            fl.releasedir(uuid, d)
            assert len(children_chunk) == 5
        time.sleep(1)


def test_readdir_should_skip_incomplete_replicas(appmock_client, endpoint, fl_onlyfullreplicas):
    uuid = 'parentUuid'
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)

    #
    # Prepare first response with 5 files
    #
    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    for f in repl1.child_attrs:
        f.fully_replicated = True
    repl1.is_last = False
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare second response with another 5 files
    #
    repl2 = prepare_file_children_attr_response(uuid, "bfiles-", 5)
    for f in repl2.child_attrs:
        f.fully_replicated = False
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1,
                          response2]) as queue:
        d = fl_onlyfullreplicas.opendir(uuid)
        children_chunk = fl_onlyfullreplicas.readdir(uuid, chunk_size, offset)
        _ = queue.get()
        fl_onlyfullreplicas.releasedir(uuid, d)

    assert len(children_chunk) == 5+2

    time.sleep(2)

    #
    # After the last request the value should be available
    # from readdir cache, without any communication with provider
    #
    for i in range(3):
        with reply(endpoint, []) as queue:
            d = fl_onlyfullreplicas.opendir(uuid)
            children_chunk = fl_onlyfullreplicas.readdir(uuid, 5, 0)
            fl_onlyfullreplicas.releasedir(uuid, d)
            assert len(children_chunk) == 5
        time.sleep(1)


def test_readdir_should_handle_fileattrchanged_event(appmock_client, endpoint, fl, parentUuid, stat):
    getattr_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    #
    # Prepare first response with 3 files
    #
    dir_size = 3
    repl = prepare_file_children_attr_response(parentUuid, "afiles-", dir_size)
    repl.is_last = True
    readdir_response = messages_pb2.ServerMessage()
    readdir_response.fuse_response.file_children_attrs.CopyFrom(repl)
    readdir_response.fuse_response.status.code = common_messages_pb2.Status.ok

    # When adding the first directory entry, the client will make sure that the
    # parent attributes are also cached
    getattr_parent_response = prepare_attr_response(parentUuid, fuse_messages_pb2.DIR)

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          readdir_response]) as queue:
        d = fl.opendir(parentUuid)
        children_chunk = fl.readdir(parentUuid, chunk_size, offset)
        _ = queue.get()
        fl.releasedir(parentUuid, d)

    assert len(children_chunk) == len(['.', '..']) + dir_size

    #
    # After readdir is complete, file attributes are available from cache
    #
    file_uuid = repl.child_attrs[0].uuid
    attr = fl.getattr(file_uuid)

    evt = prepare_file_attr_changed_event(
        file_uuid, fuse_messages_pb2.REG, 12345, 'parentUuid')

    with send(endpoint, [evt]):
        pass

    time.sleep(1)

    # After the Oneprovider sends a FileAttrChanged event, the file should be
    # updated in the cache
    attr = fl.getattr(file_uuid)
    assert attr.size == 12345


def test_readdir_should_return_unique_entries(endpoint, fl, stat):
    uuid = 'parentUuid'
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)

    #
    # Prepare first response with 5 files
    #
    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    repl1.is_last = False
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare second response with the same 5 files
    #
    repl2 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1,
                          response2]) as queue:
        d = fl.opendir(uuid)
        children_chunk = fl.readdir(uuid, chunk_size, offset)
        _ = queue.get()
        fl.releasedir(uuid, d)
        children.extend(children_chunk)

    assert len(children) == 5 + 2
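The unique-entries test above expects the client to deduplicate names when consecutive readdir chunks overlap. A pure-Python sketch of order-preserving deduplication across chunks (illustrative only, not the client's actual C++ implementation):

```python
def merge_readdir_chunks(chunks):
    """Merge readdir chunks into one listing, preserving order and
    dropping names already seen in an earlier chunk."""
    seen = set()
    merged = []
    for chunk in chunks:
        for name in chunk:
            if name not in seen:
                seen.add(name)
                merged.append(name)
    return merged
```

Feeding it two identical five-entry chunks plus the `.`/`..` entries yields seven unique names, matching the `5 + 2` assertion.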


def test_readdir_should_pass_readdir_errors(appmock_client, endpoint, fl, stat):
    uuid = 'parentUuid'
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, [getattr_response,
                              response]):
            d = fl.opendir(uuid)
            fl.readdir(uuid, 1024, 0)
            fl.releasedir(uuid, d)

    assert 'Operation not permitted' in str(excinfo.value)


def test_readdir_should_not_get_stuck_on_errors(appmock_client, endpoint, fl, stat):
    uuid = 'parentUuid'
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR)

    response0 = messages_pb2.ServerMessage()
    response0.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, [getattr_response,
                              response0]):
            d = fl.opendir(uuid)
            fl.readdir(uuid, 1024, 0)
            fl.releasedir(uuid, d)

    assert 'Operation not permitted' in str(excinfo.value)

    #
    # Prepare first response with 5 files
    #
    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    repl1.is_last = False
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare second response with another 5 files
    #
    repl2 = prepare_file_children_attr_response(uuid, "bfiles-", 5)
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [response1,
                          response2]) as queue:
        d = fl.opendir(uuid)
        children_chunk = fl.readdir(uuid, chunk_size, offset)
        _ = queue.get()
        fl.releasedir(uuid, d)

    assert len(children_chunk) == 12


def test_metadatacache_should_ignore_changes_on_deleted_files(appmock_client, endpoint, fl):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)

    #
    # Prepare readdir response with 1 file
    #
    repl1 = prepare_file_children_attr_response('parentUuid', "afiles-", 1)
    repl1.is_last = True
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1]) as queue:
        d = fl.opendir('parentUuid')
        children_chunk = fl.readdir('parentUuid', chunk_size, offset)
        _ = queue.get()
        fl.releasedir('parentUuid', d)
        children.extend(children_chunk)

    assert len(children) == 1+2

    time.sleep(1)

    assert fl.metadata_cache_size() == 1+1

    afiles_0_uuid = repl1.child_attrs[0].uuid
    fl.getattr(afiles_0_uuid)

    #
    # Remove file 'afiles-0'
    #
    ok = messages_pb2.ServerMessage()
    ok.fuse_response.status.code = common_messages_pb2.Status.ok
    with reply(endpoint, [ok]) as queue:
        fl.unlink('parentUuid', 'afiles-0')

    time.sleep(1)

    evt = prepare_file_attr_changed_event(
        afiles_0_uuid, fuse_messages_pb2.REG, None, 'parentUuid', 0o655)

    with send(endpoint, [evt]):
        pass

    time.sleep(1)

    with pytest.raises(RuntimeError) as excinfo:
        fl.getattr(afiles_0_uuid)
    assert 'No such file or directory' in str(excinfo.value)


def test_metadatacache_should_ignore_changes_on_deleted_directories(appmock_client, endpoint, fl):
    getattr_response = prepare_attr_response(
        'parentUuid', fuse_messages_pb2.DIR, None, 'parentParentUuid', 'dir1')
    getattr_parent_response = prepare_attr_response(
        'parentParentUuid', fuse_messages_pb2.DIR, None)

    #
    # Prepare readdir response with 1 file
    #
    repl1 = prepare_file_children_attr_response('parentUuid', "afiles-", 1)
    repl1.is_last = True
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          getattr_parent_response,
                          response1]) as queue:
        d = fl.opendir('parentUuid')
        children_chunk = fl.readdir('parentUuid', chunk_size, offset)
        _ = queue.get()
        fl.releasedir('parentUuid', d)
        children.extend(children_chunk)

    assert len(children) == 1+2

    time.sleep(1)

    assert fl.metadata_cache_size() == 2+1

    #
    # Remove directory 'dir1'
    #
    ok = messages_pb2.ServerMessage()
    ok.fuse_response.status.code = common_messages_pb2.Status.ok
    with reply(endpoint, [ok]) as queue:
        fl.unlink('parentParentUuid', 'dir1')

    evt = prepare_file_attr_changed_event(
        'parentUuid', fuse_messages_pb2.DIR, 0, 'parentParentUuid')

    with send(endpoint, [evt]):
        pass

    time.sleep(1)

    with pytest.raises(RuntimeError) as excinfo:
        fl.getattr('parentUuid')
    assert 'No such file or directory' in str(excinfo.value)


def test_metadatacache_should_keep_open_file_metadata(appmock_client, endpoint, fl):
    parent = 'parentUuid'
    name = 'a.txt'
    uuid1 = 'uuid1'
    uuid2 = 'uuid2'
    size = 1024
    blocks = [(0, 10)]
    handle_id = 'handle_id'

    ok = messages_pb2.ServerMessage()
    ok.fuse_response.status.code = common_messages_pb2.Status.ok

    # Create a file and open it, then delete it while open, then perform a
    # read and release the file
    attr_parent_response = prepare_attr_response(parent, fuse_messages_pb2.DIR)
    attr_response = prepare_attr_response(uuid1, fuse_messages_pb2.REG,
                                          size, parent, name)
    location_response = prepare_location_response(uuid1, blocks)
    open_response = prepare_open_response(handle_id)

    with reply(endpoint, [attr_response,
                          location_response,
                          attr_parent_response,
                          open_response]):
        fh = fl.open(uuid1, 0)
        assert fh >= 0

    assert fl.metadata_cache_contains(uuid1)
    assert fl.metadata_cache_contains(parent)

    with reply(endpoint, [ok]) as queue:
        fl.unlink(parent, name)

    assert not fl.metadata_cache_contains(uuid1)
    assert 5 == len(fl.read(uuid1, fh, 0, 5))

    do_release(endpoint, fl, uuid1, fh)

    # Repeat the same steps again with a different file with the same name
    # in the same directory
    attr_response = prepare_attr_response(uuid2, fuse_messages_pb2.REG,
                                          size, parent, name)
    location_response = prepare_location_response(uuid2, blocks)
    open_response = prepare_open_response(handle_id)

    with reply(endpoint, [attr_response,
                          location_response,
                          open_response]):
        fh = fl.open(uuid2, 0)
        assert fh >= 0

    with reply(endpoint, [ok]) as queue:
        fl.unlink(parent, name)

    assert 5 == len(fl.read(uuid2, fh, 0, 5))

    do_release(endpoint, fl, uuid2, fh)


def test_metadatacache_should_drop_expired_directories(appmock_client, endpoint, fl_dircache):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)

    #
    # Prepare readdir response with 10 files
    #
    repl1 = prepare_file_children_attr_response('parentUuid', "afiles-", 10)
    repl1.is_last = True

    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1]) as queue:
        d = fl_dircache.opendir('parentUuid')
        children_chunk = fl_dircache.readdir('parentUuid', chunk_size, offset)
        _ = queue.get()
        fl_dircache.releasedir('parentUuid', d)
        children.extend(children_chunk)

    assert len(children) == 10 + 2

    time.sleep(1)

    assert fl_dircache.metadata_cache_size() == 10 + 1

    # Wait past the directory cache expiry, which is 3 seconds
    time.sleep(5)

    assert fl_dircache.metadata_cache_size() == 1


def test_metadatacache_should_drop_expired_directories_and_keep_parent_entries(appmock_client, endpoint, fl_dircache):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)

    #
    # Prepare readdir response with 3 directories and 3 files in each directory
    #
    repl1 = prepare_file_children_attr_response('parentUuid', "parents-", 3)
    repl1.is_last = True
    parent_uuid_0 = repl1.child_attrs[0].uuid
    repl1.child_attrs[0].type = fuse_messages_pb2.DIR
    parent_uuid_1 = repl1.child_attrs[1].uuid
    repl1.child_attrs[1].type = fuse_messages_pb2.DIR
    parent_uuid_2 = repl1.child_attrs[2].uuid
    repl1.child_attrs[2].type = fuse_messages_pb2.DIR

    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    repl2 = prepare_file_children_attr_response(parent_uuid_0, "children-", 3)
    repl2.is_last = True
    child_uuid_0 = repl2.child_attrs[0].uuid
    child_uuid_1 = repl2.child_attrs[1].uuid
    child_uuid_2 = repl2.child_attrs[2].uuid

    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # First list the top directory
    #
    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1]) as queue:
        d = fl_dircache.opendir('parentUuid')
        children_chunk = fl_dircache.readdir('parentUuid', chunk_size, offset)
        _ = queue.get()
        fl_dircache.releasedir('parentUuid', d)
        children.extend(children_chunk)

    # Note: list.sort() returns None, so comparing its results would always
    # pass - compare sorted copies instead
    assert sorted(children) == sorted(['.', '..', 'parents-0', 'parents-1', 'parents-2'])

    #
    # Now list the 'parents-0' directory
    #
    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [response2]) as queue:
        d = fl_dircache.opendir(parent_uuid_0)
        children_chunk = fl_dircache.readdir(parent_uuid_0, chunk_size, offset)
        _ = queue.get()
        fl_dircache.releasedir(parent_uuid_0, d)
        children.extend(children_chunk)

    assert fl_dircache.metadata_cache_contains(child_uuid_0)
    assert fl_dircache.metadata_cache_contains(child_uuid_1)
    assert fl_dircache.metadata_cache_contains(child_uuid_2)

    # Wait so that the contents of 'parents-0' are invalidated, but keep
    # the top directory active
    time.sleep(2)
    attr = fl_dircache.getattr(parent_uuid_1)
    time.sleep(2)
    attr = fl_dircache.getattr(parent_uuid_1)
    time.sleep(2)
    attr = fl_dircache.getattr(parent_uuid_1)

    assert not fl_dircache.metadata_cache_contains(child_uuid_0)
    assert not fl_dircache.metadata_cache_contains(child_uuid_1)
    assert not fl_dircache.metadata_cache_contains(child_uuid_2)
    assert fl_dircache.metadata_cache_contains(parent_uuid_2)
    assert fl_dircache.metadata_cache_contains(parent_uuid_1)
    assert fl_dircache.metadata_cache_contains(parent_uuid_0)


def test_metadatacache_should_prune_when_size_exceeded(appmock_client, endpoint, fl_dircache):
    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)

    #
    # Prepare readdir response with 20 files
    #
    repl1 = prepare_file_children_attr_response('parentUuid', "afiles-", 20)
    repl1.is_last = True

    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response,
                          response1]) as queue:
        d = fl_dircache.opendir('parentUuid')
        children_chunk = fl_dircache.readdir('parentUuid', chunk_size, offset)
        _ = queue.get()
        fl_dircache.releasedir('parentUuid', d)
        children.extend(children_chunk)

    assert len(children) == 20 + 2

    time.sleep(1)

    assert fl_dircache.metadata_cache_size() == 21

    getattr_response2 = prepare_attr_response('parentUuid2', fuse_messages_pb2.DIR)

    repl2 = prepare_file_children_attr_response('parentUuid2', "bfiles-", 10)
    repl2.is_last = True

    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    children = []
    offset = 0
    chunk_size = 50
    with reply(endpoint, [getattr_response2,
                          response2]) as queue:
        d = fl_dircache.opendir('parentUuid2')
        children_chunk = fl_dircache.readdir('parentUuid2', chunk_size, offset)
        _ = queue.get()
        fl_dircache.releasedir('parentUuid2', d)
        children.extend(children_chunk)

    assert len(children) == 10 + 2

    time.sleep(1)

    assert not fl_dircache.metadata_cache_contains(repl1.child_attrs[0].uuid)
    assert fl_dircache.metadata_cache_contains(repl2.child_attrs[0].uuid)
    assert fl_dircache.metadata_cache_size() == 11


def test_link_should_create_hard_link(appmock_client, endpoint, fl, uuid, parentUuid):
    name = random_str()
    attr_response = prepare_attr_response(
        uuid, fuse_messages_pb2.LNK, size=0,
        parent_uuid=parentUuid, name=name)

    with reply(endpoint, attr_response) as queue:
        fl.link(uuid, parentUuid, name)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('make_link')


def test_symlink_should_create_symbolic_link(appmock_client, endpoint, fl, uuid, parentUuid):
    name = random_str()
    link = random_str()
    attr_response = prepare_attr_response(
        uuid, fuse_messages_pb2.SYMLNK, size=len(link),
        parent_uuid=parentUuid, name=name)

    with reply(endpoint, attr_response) as queue:
        fl.symlink(parentUuid, name, link)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('make_symlink')


def test_readlink_should_read_symbolic_link(appmock_client, endpoint, fl, uuid):
    link = random_str()
    readlink_response = prepare_readlink_response(uuid, link)

    with reply(endpoint, readlink_response) as queue:
        link_result = fl.readlink(uuid)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('read_symlink')

    assert link == link_result


def test_mknod_should_create_multiple_files(appmock_client, endpoint, fl, uuid, parentUuid, parentStat):
    getattr_responses = []
    for i in range(0, 100):
        getattr_responses.append(prepare_attr_response(
            uuid + '_' + str(i), fuse_messages_pb2.REG, size=0,
            parent_uuid=parentUuid, name='filename_' + str(i)))

    with reply(endpoint, getattr_responses) as queue:
        for i in range(0, 100):
            fl.mknod(parentUuid, 'filename_' + str(i), 0664 | S_IFREG)
            client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('make_file')

    make_file = file_request.make_file
    assert make_file.name == 'filename_99'
    assert make_file.mode == 0664
    assert file_request.context_guid == parentUuid


def test_mknod_should_make_new_location(appmock_client, endpoint, fl, uuid, parentUuid, parentStat):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.REG, 10, parentUuid)

    with reply(endpoint, [getattr_response]) as queue:
        fl.mknod(parentUuid, 'childName', 0762 | S_IFREG)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('make_file')

    make_file = file_request.make_file
    assert make_file.name == 'childName'
    assert make_file.mode == 0762
    assert file_request.context_guid == parentUuid


def test_mknod_should_pass_location_errors(appmock_client, endpoint, fl, parentUuid, parentStat):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.mknod(parentUuid, 'childName', 0123)
    assert 'Operation not permitted' in str(excinfo.value)


def test_mknod_should_throw_on_unsupported_file_type(endpoint, fl, parentUuid, parentStat):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        fl.mknod(parentUuid, 'childName', 0664 | S_IFSOCK)
    assert 'Operation not supported' in str(excinfo.value)

    with pytest.raises(RuntimeError) as excinfo:
        fl.mknod(parentUuid, 'childName', 0664 | S_IFBLK)
    assert 'Operation not supported' in str(excinfo.value)

    with pytest.raises(RuntimeError) as excinfo:
        fl.mknod(parentUuid, 'childName', 0664 | S_IFDIR)
    assert 'Operation not supported' in str(excinfo.value)

    with pytest.raises(RuntimeError) as excinfo:
        fl.mknod(parentUuid, 'childName', 0664 | S_IFCHR)
    assert 'Operation not supported' in str(excinfo.value)

    with pytest.raises(RuntimeError) as excinfo:
        fl.mknod(parentUuid, 'childName', 0664 | S_IFIFO)
    assert 'Operation not supported' in str(excinfo.value)


def test_read_should_read(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, blocks=[(0, 10)])

    assert 5 == len(fl.read(uuid, fh, 0, 5))

    do_release(endpoint, fl, uuid, fh)


def test_read_should_fetch_file_location_after_closing_file(appmock_client, endpoint, fl, uuid):
    blocks = [(0, 10)]
    fh = do_open(endpoint, fl, uuid, blocks=blocks)

    assert 5 == len(fl.read(uuid, fh, 0, 5))

    do_release(endpoint, fl, uuid, fh)

    fh = do_open_cached(endpoint, fl, uuid, blocks=blocks)

    assert 5 == len(fl.read(uuid, fh, 0, 5))

    do_release(endpoint, fl, uuid, fh)


def test_read_should_read_zero_on_eof(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(0, 10)])

    assert 10 == len(fl.read(uuid, fh, 0, 12))
    assert 0 == len(fl.read(uuid, fh, 10, 2))

    do_release(endpoint, fl, uuid, fh)


def test_read_should_pass_helper_errors(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(0, 10)])

    with pytest.raises(RuntimeError) as excinfo:
        fl.failHelper()
        fl.read(uuid, fh, 0, 10)
    assert 'Owner died' in str(excinfo.value)

    do_release(endpoint, fl, uuid, fh)


def test_write_should_write(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(0, 10)])

    assert 5 == fl.write(uuid, fh, 0, 5)

    do_release(endpoint, fl, uuid, fh)


def test_write_should_change_file_size(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=5, blocks=[(0, 5)])

    assert 20 == fl.write(uuid, fh, 10, 20)

    stat = fl.getattr(uuid)
    assert 30 == stat.size

    do_release(endpoint, fl, uuid, fh)


def test_write_should_pass_helper_errors(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(0, 10)])

    with pytest.raises(RuntimeError) as excinfo:
        fl.failHelper()
        fl.write(uuid, fh, 0, 10)
    assert 'Owner died' in str(excinfo.value)

    do_release(endpoint, fl, uuid, fh)


def test_truncate_should_truncate(appmock_client, endpoint, fl, uuid, stat):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok
    location_response = prepare_location_response(uuid)

    with reply(endpoint, [response,
                          location_response]) as queue:
        fl.truncate(uuid, 4)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('truncate')

    truncate = file_request.truncate
    assert truncate.size == 4
    assert file_request.context_guid == uuid


def test_truncate_should_truncate_open_file(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(0, 10)])

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, [response, response]) as queue:
        fl.truncate(uuid, 0)
        client_message = queue.get()

    do_release(endpoint, fl, uuid, fh)


def test_truncate_should_pass_truncate_errors(appmock_client, endpoint, fl, uuid):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.REG)
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.eperm

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, [getattr_response,
                              response]):
            fl.truncate(uuid, 3)
    assert 'Operation not permitted' in str(excinfo.value)


def test_readdir_big_directory(appmock_client, endpoint, fl, uuid, stat):
    chunk_size = 2500
    children_num = 10 * chunk_size

    getattr_response = prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)

    # Prepare an array of responses of appropriate sizes to client
    # requests
    responses = [getattr_response]
    for i in xrange(0, children_num / chunk_size):
        repl = fuse_messages_pb2.FileChildrenAttrs()
        for j in xrange(0, chunk_size):
            link = prepare_attr_response(uuid, fuse_messages_pb2.REG, 1024, 'parentUuid').\
                fuse_response.file_attr
            link.uuid = "childUuid_" + str(i) + "_" + str(j)
            link.name = "file_" + str(i) + "_" + str(j)
            repl.child_attrs.extend([link])

        response = messages_pb2.ServerMessage()
        response.fuse_response.file_children_attrs.CopyFrom(repl)
        response.fuse_response.status.code = common_messages_pb2.Status.ok
        responses.append(response)

    # Prepare an empty response returned after the entire directory has
    # been fetched by FsLogic
    empty_repl = fuse_messages_pb2.FileChildrenAttrs()
    empty_repl.child_attrs.extend([])
    empty_repl.is_last = True
    empty_response = messages_pb2.ServerMessage()
    empty_response.fuse_response.file_children_attrs.CopyFrom(empty_repl)
    empty_response.fuse_response.status.code = common_messages_pb2.Status.ok

    responses.append(empty_response)

    assert len(responses) == 1 + children_num / chunk_size + 1

    children = []
    offset = 0
    with reply(endpoint, responses) as queue:
        d = fl.opendir('parentUuid')
        while True:
            children_chunk = fl.readdir('parentUuid', chunk_size, offset)
            client_message = queue.get()
            children.extend(children_chunk)
            if len(children_chunk) < chunk_size:
                break
            offset += len(children_chunk)
        fl.releasedir('parentUuid', d)

    assert len(children) == children_num + 2


def test_write_should_save_blocks(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=5)

    assert 5 == fl.write(uuid, fh, 0, 5)
    assert 5 == len(fl.read(uuid, fh, 0, 10))

    do_release(endpoint, fl, uuid, fh)


def test_read_should_read_partial_content(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])

    data = fl.read(uuid, fh, 6, 4)
    assert len(data) == 4

    do_release(endpoint, fl, uuid, fh)


def test_read_should_request_synchronization(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])
    sync_response = prepare_sync_response(uuid, '', [(0, 10)])

    appmock_client.reset_tcp_history()
    with reply(endpoint, sync_response) as queue:
        fl.read(uuid, fh, 2, 5)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('synchronize_block')

    block = common_messages_pb2.FileBlock()
    block.offset = 2
    block.size = 8

    sync = file_request.synchronize_block
    assert sync.block == block
    assert sync.priority == SYNCHRONIZE_BLOCK_PRIORITY_IMMEDIATE
    assert file_request.context_guid == uuid

    do_release(endpoint, fl, uuid, fh)


def test_read_should_fetch_location_on_invalid_checksum(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[])
    fl.set_needs_data_consistency_check(True)
    blocks = [(2, 5)]

    responses = []
    # Because the first read is smaller than the minimum default sync range
    # (1MB), the client will request a sync from the specified offset (2) to
    # the end of the file, as the file is smaller than 1MB
    responses.append(prepare_sync_response(uuid, '', [(2, 10 - 2)]))
    responses.append(prepare_sync_and_checksum_response(uuid, '', blocks, 'badchecksum'))
    responses.append(prepare_location_response(uuid, blocks))
    responses.append(prepare_location_response(uuid, blocks))

    appmock_client.reset_tcp_history()
    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, responses) as queue:
            fl.read(uuid, fh, 2, 5)
            client_message = queue.get()

    fl.set_needs_data_consistency_check(False)

    assert "Input/output error" in str(excinfo.value)

    do_release(endpoint, fl, uuid, fh)


def test_read_should_retry_request_synchronization(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])

    responses = []
    responses.append(prepare_sync_eagain_response(uuid, '', [(2, 8)]))
    responses.append(prepare_sync_response(uuid, '', [(2, 8)]))

    appmock_client.reset_tcp_history()
    with reply(endpoint, responses) as queue:
        fl.read(uuid, fh, 2, 5)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('synchronize_block')

    block = common_messages_pb2.FileBlock()
    block.offset = 2
    block.size = 8

    sync = file_request.synchronize_block
    assert sync.block == block
    assert sync.priority == SYNCHRONIZE_BLOCK_PRIORITY_IMMEDIATE
    assert file_request.context_guid == uuid

    do_release(endpoint, fl, uuid, fh)


def test_read_should_retry_canceled_synchronization_request(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])

    responses = []
    responses.append(prepare_sync_ecanceled_response(uuid, '', [(2, 8)]))
    responses.append(prepare_sync_ecanceled_response(uuid, '', [(2, 8)]))
    responses.append(prepare_sync_response(uuid, '', [(2, 8)]))

    appmock_client.reset_tcp_history()
    with reply(endpoint, responses) as queue:
        fl.read(uuid, fh, 2, 5)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('synchronize_block')

    block = common_messages_pb2.FileBlock()
    block.offset = 2
    block.size = 8

    sync = file_request.synchronize_block
    assert sync.block == block
    assert sync.priority == SYNCHRONIZE_BLOCK_PRIORITY_IMMEDIATE
    assert file_request.context_guid == uuid

    do_release(endpoint, fl, uuid, fh)


def test_read_should_not_retry_request_synchronization_too_many_times(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])

    responses = []
    responses.append(prepare_sync_eagain_response(uuid, '', [(2, 8)]))
    responses.append(prepare_sync_eagain_response(uuid, '', [(2, 8)]))
    responses.append(prepare_sync_eagain_response(uuid, '', [(2, 8)]))
    responses.append(prepare_sync_eagain_response(uuid, '', [(2, 8)]))

    appmock_client.reset_tcp_history()
    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, responses) as queue:
            fl.read(uuid, fh, 2, 5)
            client_message = queue.get()
    assert 'Resource temporarily unavailable' in str(excinfo.value)

    do_release(endpoint, fl, uuid, fh)


def test_read_should_continue_reading_after_synchronization(appmock_client,
                                                            endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])
    sync_response = prepare_sync_response(uuid, '', [(0, 10)])

    appmock_client.reset_tcp_history()
    with reply(endpoint, sync_response):
        assert 5 == len(fl.read(uuid, fh, 2, 5))

    do_release(endpoint, fl, uuid, fh)


def test_read_should_continue_reading_after_synchronization_partial(appmock_client,
                                                                    endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[(4, 6)])
    sync_response = prepare_partial_sync_response(uuid, '', [(0, 10)], 0, 10)

    appmock_client.reset_tcp_history()
    with reply(endpoint, sync_response):
        assert 5 == len(fl.read(uuid, fh, 2, 5))

    do_release(endpoint, fl, uuid, fh)


def test_read_should_open_file_block_once(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[
        (0, 5, 'storage1', 'file1'), (5, 5, 'storage2', 'file2')])

    fl.expect_call_sh_open("file1", 1)
    fl.expect_call_sh_open("file2", 1)

    assert 5 == len(fl.read(uuid, fh, 0, 5))
    assert 5 == len(fl.read(uuid, fh, 5, 5))

    assert 5 == len(fl.read(uuid, fh, 0, 5))
    assert 5 == len(fl.read(uuid, fh, 0, 5))
    assert 5 == len(fl.read(uuid, fh, 5, 5))
    assert 5 == len(fl.read(uuid, fh, 5, 5))

    assert fl.verify_and_clear_expectations()

    do_release(endpoint, fl, uuid, fh)


def test_release_should_release_open_file_blocks(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=10, blocks=[
        (0, 5, 'storage1', 'file1'), (5, 5, 'storage2', 'file2')])

    assert 5 == len(fl.read(uuid, fh, 0, 5))
    assert 5 == len(fl.read(uuid, fh, 5, 5))

    fl.expect_call_sh_release('file1', 1)
    fl.expect_call_sh_release('file2', 1)

    do_release(endpoint, fl, uuid, fh)

    assert fl.verify_and_clear_expectations()


@pytest.mark.skip()
def test_release_should_pass_helper_errors(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=5, blocks=[
        (0, 5, 'storage1', 'file1')])

    with pytest.raises(RuntimeError) as excinfo:
        fl.failHelper()
        fl.read(uuid, fh, 0, 5)

    fl.expect_call_sh_release('file1', 1)
    do_release(endpoint, fl, uuid, fh)
    assert 'Owner died' in str(excinfo.value)


def test_release_should_send_release_message(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=0)
    sent_messages = do_release(endpoint, fl, uuid, fh)

    sent_messages.get()  # skip the fsync message
    client_message = sent_messages.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')
    assert client_message.fuse_request.file_request.HasField('release')


def test_release_should_send_fsync_message(appmock_client, endpoint, fl, uuid):
    fh = do_open(endpoint, fl, uuid, size=0)
    sent_messages = do_release(endpoint, fl, uuid, fh)
    client_message = sent_messages.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')
    assert client_message.fuse_request.file_request.HasField('fsync')


def test_fslogic_should_handle_processing_status_message(appmock_client, endpoint, fl, uuid):
    getattr_response = \
        prepare_attr_response(uuid, fuse_messages_pb2.DIR, 0, 'parentUuid', 'name')
    getattr_newuuid_response = \
        prepare_attr_response('newUuid', fuse_messages_pb2.DIR, 0, 'parentUuid', 'name')
    getattr_parent_response = \
        prepare_attr_response('parentUuid', fuse_messages_pb2.DIR)
    getattr_newparent_response = \
        prepare_attr_response('newParentUuid', fuse_messages_pb2.DIR)
    rename_response = prepare_rename_response('newUuid')

    processing_status_responses = \
        [prepare_processing_status_response(messages_pb2.IN_PROGRESS)
         for _ in range(5)]

    responses = [getattr_response, getattr_parent_response]
    responses.extend(processing_status_responses)
    responses.append(rename_response)
    responses.append(getattr_newuuid_response)

    with reply(endpoint, responses) as queue:
        fl.rename('parentUuid', 'name', 'newParentUuid', 'newName')
        queue.get()
        queue.get()
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('rename')

    rename = file_request.rename
    assert rename.target_parent_uuid == 'newParentUuid'
    assert rename.target_name == 'newName'
    assert file_request.context_guid == \
        getattr_response.fuse_response.file_attr.uuid


def prepare_listxattr_response(uuid):
    repl = fuse_messages_pb2.XattrList()
    repl.names.extend(["xattr1", "xattr2", "xattr3", "xattr4"])

    server_response = messages_pb2.ServerMessage()
    server_response.fuse_response.xattr_list.CopyFrom(repl)
    server_response.fuse_response.status.code = common_messages_pb2.Status.ok

    return server_response


def test_listxattrs_should_return_listxattrs(endpoint, fl, uuid):
    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.REG)
    listxattr_response = prepare_listxattr_response(uuid)

    listxattrs = []
    with reply(endpoint, [listxattr_response,
                          getattr_response]) as queue:
        listxattrs = fl.listxattr(uuid)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('list_xattr')
    assert file_request.context_guid == uuid

    assert listxattr_response.fuse_response.status.code == common_messages_pb2.Status.ok
    assert "xattr1" in set(listxattrs)
    assert "xattr2" in set(listxattrs)
    assert "xattr3" in set(listxattrs)
    assert "xattr4" in set(listxattrs)


def prepare_getxattr_response(uuid, name, value):
    repl = fuse_messages_pb2.Xattr()
    repl.name = name
    repl.value = value

    server_response = messages_pb2.ServerMessage()
    server_response.fuse_response.xattr.CopyFrom(repl)
    server_response.fuse_response.status.code = common_messages_pb2.Status.ok

    return server_response


def test_getxattr_should_return_xattr(endpoint, fl, uuid):
    xattr_name = "org.onedata.acl"
    xattr_value = "READ | WRITE | DELETE"
    response = prepare_getxattr_response(uuid, xattr_name, xattr_value)

    xattr = None
    with reply(endpoint, response) as queue:
        xattr = fl.getxattr(uuid, xattr_name)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('get_xattr')

    assert xattr.name == xattr_name
    assert xattr.value == xattr_value


def test_getxattr_should_return_enoattr_for_invalid_xattr(endpoint, fl, uuid):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.enodata

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.getxattr(uuid, "org.onedata.dontexist")
    assert 'No data available' in str(excinfo.value)


def test_setxattr_should_set_xattr(endpoint, fl, uuid):
    xattr_name = "org.onedata.acl"
    xattr_value = "READ | WRITE | DELETE"

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, response) as queue:
        fl.setxattr(uuid, xattr_name, xattr_value, False, False)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('set_xattr')
    assert file_request.set_xattr.HasField('xattr')

    assert file_request.set_xattr.xattr.name == xattr_name
    assert file_request.set_xattr.xattr.value == xattr_value


def test_setxattr_should_set_xattr_with_binary_data(endpoint, fl, uuid):
    xattr_name = "org.onedata.acl"
    xattr_value = b'BEGINSTRINGWITHNULLS\x00\x0F\x00\x0F\x00\x0F\x00\x0F\x00\x0F\x00\x0FENDSTRINGWITHNULLS'

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, response) as queue:
        fl.setxattr(uuid, xattr_name, xattr_value, False, False)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('set_xattr')
    assert file_request.set_xattr.HasField('xattr')

    assert file_request.set_xattr.xattr.name == xattr_name
    assert file_request.set_xattr.xattr.value == xattr_value


def test_setxattr_should_set_xattr_with_long_value(endpoint, fl, uuid):
    xattr_name = "org.onedata.acl"
    # A long (several KB) xattr value - the original hardcoded literal was
    # broken across a line break; only its length matters for this test, so
    # generate an equivalent long value instead
    xattr_value = "askljdhflajkshdfjklhasjkldfha" * 120

    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, response) as queue:
        fl.setxattr(uuid, xattr_name, xattr_value, False, False)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.HasField('set_xattr')
    assert file_request.set_xattr.HasField('xattr')

    assert file_request.set_xattr.xattr.name == xattr_name
    assert file_request.set_xattr.xattr.value == xattr_value

def test_removexattr_should_remove_xattr(endpoint, fl, uuid):
    xattr_name = "org.onedata.acl"
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.ok

    with reply(endpoint, response) as queue:
        fl.removexattr(uuid, xattr_name)
        client_message = queue.get()

    assert client_message.HasField('fuse_request')
    assert client_message.fuse_request.HasField('file_request')

    file_request = client_message.fuse_request.file_request
    assert file_request.context_guid == uuid

    remove_xattr_request = file_request.remove_xattr
    assert remove_xattr_request.HasField('name')
    assert remove_xattr_request.name == xattr_name

def test_removexattr_should_return_enoattr_for_invalid_xattr(endpoint, fl, uuid):
    response = messages_pb2.ServerMessage()
    response.fuse_response.status.code = common_messages_pb2.Status.enodata

    with pytest.raises(RuntimeError) as excinfo:
        with reply(endpoint, response):
            fl.removexattr(uuid, "org.onedata.dontexist")

    assert 'No data available' in str(excinfo.value)

def test_readdir_should_handle_archivematica_metadata(appmock_client, endpoint,
                                                      fl_archivematica):
    parentUuid = 'parentParentUuid'
    uuid = 'parentUuid'

    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR,
                                             parent_uuid=parentUuid)
    getattr_response_parent = prepare_attr_response(parentUuid,
                                                    fuse_messages_pb2.DIR)

    #
    # Prepare first response with 5 files
    #
    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    repl1.is_last = False
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare second response with another 5 files
    #
    repl2 = prepare_file_children_attr_response(uuid, "bfiles-", 5)
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    offset = 0
    chunk_size = 50
    uuid_am = uuid + ".__onedata_archivematica"

    with reply(endpoint, [getattr_response,
                          getattr_response_parent,
                          response1,
                          response2]) as queue:
        parentAttr = fl_archivematica.lookup(parentUuid,
                                             "test.__onedata_archivematica")
        d = fl_archivematica.opendir(uuid_am)
        children_chunk = fl_archivematica.readdir(uuid_am, chunk_size, offset)
        _ = queue.get()
        fl_archivematica.releasedir(uuid_am, d)

    assert len(children_chunk) == 14
    assert "processingMCP.xml" in children_chunk
    assert "metadata" in children_chunk

def test_read_should_read_archivematica_processingmcp(appmock_client, endpoint,
                                                      fl_archivematica):
    parentUuid = 'parentParentUuid'
    uuid = 'parentUuid'

    getattr_response = prepare_attr_response(
        uuid, fuse_messages_pb2.DIR, parent_uuid=parentUuid, name="test")
    getattr_response_parent = prepare_attr_response(parentUuid,
                                                    fuse_messages_pb2.DIR)

    #
    # Prepare first response with 5 files
    #
    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 5)
    repl1.is_last = False
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare second response with another 5 files
    #
    repl2 = prepare_file_children_attr_response(uuid, "bfiles-", 5)
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    #
    # Prepare __archivematica metadata response
    #
    xattr_name = "onedata_json"
    xattr_value = """
    {
        "__onedata": {
            "__archivematica": {
                "processingMCP": {
                    "preconfiguredChoices": {
                        "preconfiguredChoice": [
                            {
                                "appliesTo": "1ba589db-88d1-48cf-bb1a-a5f9d2b17378",
                                "goToChain": "0a24787c-00e3-4710-b324-90e792bfb484"
                            },
                            {
                                "appliesTo": "1c2550f1-3fc0-45d8-8bc4-4c06d720283b",
                                "goToChain": "0a24787c-00e3-4710-b324-90e792bfb484"
                            },
                            {
                                "appliesTo": "5e58066d-e113-4383-b20b-f301ed4d751c",
                                "goToChain": "4500f34e-f004-4ccf-8720-5c38d0be2254"
                            },
                            {
                                "appliesTo": "01c651cb-c174-4ba4-b985-1d87a44d6754",
                                "goToChain": "ecfad581-b007-4612-a0e0-fcc551f4057f"
                            }
                        ]
                    }
                }
            }
        }
    }
    """
    am_metadata_response = prepare_getxattr_response(uuid, xattr_name, xattr_value)

    offset = 0
    chunk_size = 50

    with reply(endpoint, [getattr_response,
                          getattr_response_parent,
                          response1,
                          response2,
                          am_metadata_response]) as queue:
        parentAttr = fl_archivematica.lookup(parentUuid,
                                             "test.__onedata_archivematica")

        # First try to open the 'test' directory as is
        d = fl_archivematica.opendir(uuid)
        children_chunk = fl_archivematica.readdir(uuid, chunk_size, offset)
        _ = queue.get()
        fl_archivematica.releasedir(uuid, d)

        assert len(children_chunk) == 12
        assert "processingMCP.xml" not in children_chunk

        # Now try to open the 'test' directory in Archivematica mode
        uuid_am = uuid + ".__onedata_archivematica"
        d = fl_archivematica.opendir(uuid_am)
        children_chunk = fl_archivematica.readdir(uuid_am, chunk_size, offset)
        _ = queue.get()
        fl_archivematica.releasedir(uuid_am, d)

        assert len(children_chunk) == 14
        assert "processingMCP.xml" in children_chunk

        assert fl_archivematica.metadata_cache_contains(uuid + "-processing-mcp")
        assert fl_archivematica.metadata_cache_contains(uuid + "-metadata")

        attr = fl_archivematica.lookup(uuid_am, "processingMCP.xml")

        fh = fl_archivematica.open(uuid + "-processing-mcp", 0)
        data = fl_archivematica.read(uuid + "-processing-mcp", fh, 0, 4096)
        fl_archivematica.release(uuid + "-processing-mcp", fh)

        processing_mcp = ET.fromstring(data)
        assert processing_mcp.tag == "processingMCP"

        preconfigured_choices = processing_mcp.find("preconfiguredChoices")
        assert preconfigured_choices.tag == "preconfiguredChoices"

        choice_list = preconfigured_choices.findall("preconfiguredChoice")
        assert len(choice_list) == 4

def test_read_should_read_archivematica_metadata_json(appmock_client, endpoint,
                                                      fl_archivematica):
    parentUuid = 'parentParentUuid'
    uuid = 'parentUuid'

    getattr_response = prepare_attr_response(uuid, fuse_messages_pb2.DIR,
                                             parent_uuid=parentUuid)
    getattr_response_parent = prepare_attr_response(parentUuid,
                                                    fuse_messages_pb2.DIR)

    repl1 = prepare_file_children_attr_response(uuid, "afiles-", 1)
    repl1.is_last = True
    dir1_response = prepare_attr_response(
        "dir1Uuid", fuse_messages_pb2.DIR, parent_uuid=uuid, name='dir1')
    repl1.child_attrs.extend([dir1_response.fuse_response.file_attr])
    response1 = messages_pb2.ServerMessage()
    response1.fuse_response.file_children_attrs.CopyFrom(repl1)
    response1.fuse_response.status.code = common_messages_pb2.Status.ok

    xattr_list1 = fuse_messages_pb2.XattrList()
    xattr_list1.names.extend(["onedata_json", "dc.license"])
    xattr_list1_response = messages_pb2.ServerMessage()
    xattr_list1_response.fuse_response.xattr_list.CopyFrom(xattr_list1)
    xattr_list1_response.fuse_response.status.code = common_messages_pb2.Status.ok

    xattr_list2 = fuse_messages_pb2.XattrList()
    xattr_list2.names.extend(["onedata_json"])
    xattr_list2_response = messages_pb2.ServerMessage()
    xattr_list2_response.fuse_response.xattr_list.CopyFrom(xattr_list2)
    xattr_list2_response.fuse_response.status.code = common_messages_pb2.Status.ok

    xattr_value1 = """{"dc.language":"CSV","dc.identifier":"123"}"""
    metadata_response1 = prepare_getxattr_response(uuid, "onedata_json",
                                                   xattr_value1)

    xattr_value2 = "CC-0"
    metadata_response2 = prepare_getxattr_response(uuid, "dc.license",
                                                   xattr_value2)

    xattr_value3 = """{"dc.language":"CSV","dc.identifier":"456"}"""
    metadata_response3 = prepare_getxattr_response(uuid, "onedata_json",
                                                   xattr_value3)

    repl2 = prepare_file_children_attr_response("dir1Uuid", "bfiles-", 1)
    repl2.is_last = True
    response2 = messages_pb2.ServerMessage()
    response2.fuse_response.file_children_attrs.CopyFrom(repl2)
    response2.fuse_response.status.code = common_messages_pb2.Status.ok

    offset = 0
    chunk_size = 50
    uuid_am = uuid + ".__onedata_archivematica"

    with reply(endpoint, [getattr_response,
                          getattr_response_parent,
                          response1,
                          response2,
                          xattr_list1_response,
                          metadata_response1,
                          metadata_response2,
                          xattr_list2_response,
                          metadata_response3]) as queue:
        parentAttr = fl_archivematica.lookup(parentUuid,
                                             "test.__onedata_archivematica")

        d = fl_archivematica.opendir(uuid_am)
        children_chunk = fl_archivematica.readdir(uuid_am, chunk_size, offset)
        _ = queue.get()
        fl_archivematica.releasedir(uuid_am, d)

        assert len(children_chunk) == 6
        assert "metadata" in children_chunk
        assert fl_archivematica.metadata_cache_contains(uuid + "-metadata")

        metadataUuid = uuid + "-metadata"
        d = fl_archivematica.opendir(metadataUuid)
        children_chunk = fl_archivematica.readdir(metadataUuid, chunk_size, 0)
        fl_archivematica.releasedir(metadataUuid, d)

        assert len(children_chunk) == 3
        assert "metadata.json" in list(children_chunk)

        attr = fl_archivematica.lookup(metadataUuid, "metadata.json")

        fh = fl_archivematica.open(metadataUuid + "-metadata-json", 0)
        data = fl_archivematica.read(metadataUuid + "-metadata-json", fh, 0, 1024)
        fl_archivematica.release(metadataUuid + "-metadata-json", fh)

        am_metadata = json.loads(data)
        assert len(am_metadata) == 2
        assert am_metadata[0]["filename"] == "objects/dir1/bfiles-0"
        assert am_metadata[1]["filename"] == "objects/afiles-0"


# ==== test.py (repo: farmjenny/Farm_Jenny_Installer, license: MIT) ====
from farmjennycellular import farmjennycellular
farmjennycellular.FarmJennyTest.tellmeyourokay()


# ==== 2454.py (repo: gabzin/uri, license: MIT) ====
p, r = map(int, input().split())
if p == 0:
    print("C")
else:
    print("B" if r == 0 else "A")
| 18.6 | 38 | 0.55914 | 20 | 93 | 2.6 | 0.65 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.139785 | 93 | 4 | 39 | 23.25 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
c09b2bd3ab3b111f9da019eef9ad75df9eb6cb5c | 65 | py | Python | src/test/resources/files/docstring.py | rendner/py-prefix-fstring-plugin | c2e2ca7cca1b3833e988543fda5bce05c6860309 | [
"MIT"
] | null | null | null | src/test/resources/files/docstring.py | rendner/py-prefix-fstring-plugin | c2e2ca7cca1b3833e988543fda5bce05c6860309 | [
"MIT"
] | null | null | null | src/test/resources/files/docstring.py | rendner/py-prefix-fstring-plugin | c2e2ca7cca1b3833e988543fda5bce05c6860309 | [
"MIT"
] | 1 | 2021-05-24T09:32:06.000Z | 2021-05-24T09:32:06.000Z | def do_nothing():
    """A docstring {<caret>"""
    return None
| 16.25 | 30 | 0.584615 | 8 | 65 | 4.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 65 | 3 | 31 | 21.666667 | 0.74 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
8d08697213c9dc33c4d6ae615ec15ee3bf294923 | 164 | py | Python | newsresearch/db_models/postgre_helpers.py | fernandosoto138/news-research | 73ffc0d260a87fac12727a062ff235b7001449f1 | [
"MIT"
] | null | null | null | newsresearch/db_models/postgre_helpers.py | fernandosoto138/news-research | 73ffc0d260a87fac12727a062ff235b7001449f1 | [
"MIT"
] | null | null | null | newsresearch/db_models/postgre_helpers.py | fernandosoto138/news-research | 73ffc0d260a87fac12727a062ff235b7001449f1 | [
"MIT"
] | null | null | null | from time import gmtime, strftime


class PostgreHelpers(object):
    @staticmethod
    def get_timestamp():
        return strftime("%Y-%m-%d %H:%M:%S", gmtime())
| 20.5 | 54 | 0.652439 | 21 | 164 | 5.047619 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 164 | 7 | 55 | 23.428571 | 0.80303 | 0 | 0 | 0 | 0 | 0 | 0.103659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
23c25e95c7aac1d20952bde313e0178d66e3d41c | 377 | py | Python | analyzer/expression/assignment_expression_syntax.py | vbondarevsky/ones_analyzer | ab8bff875192db238ed17c20d61c9fa5b55c3fa8 | [
"MIT"
] | 12 | 2017-11-23T07:04:13.000Z | 2022-03-01T21:06:56.000Z | analyzer/expression/assignment_expression_syntax.py | vbondarevsky/analyzer_test | ab8bff875192db238ed17c20d61c9fa5b55c3fa8 | [
"MIT"
] | 2 | 2017-06-25T21:32:32.000Z | 2017-11-19T19:05:40.000Z | analyzer/expression/assignment_expression_syntax.py | vbondarevsky/analyzer_test | ab8bff875192db238ed17c20d61c9fa5b55c3fa8 | [
"MIT"
] | 5 | 2017-11-21T08:24:56.000Z | 2021-08-17T23:21:18.000Z | from analyzer.syntax_kind import SyntaxKind


class AssignmentExpressionSyntax(object):
    def __init__(self, left, operator_token, right):
        self.kind = SyntaxKind.AssignmentExpression
        self.left = left
        self.operator_token = operator_token
        self.right = right

    def __str__(self):
        return f"{self.left}{self.operator_token}{self.right}"
| 29 | 62 | 0.70557 | 43 | 377 | 5.883721 | 0.465116 | 0.205534 | 0.126482 | 0.166008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 377 | 12 | 63 | 31.416667 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.116711 | 0.116711 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0.111111 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
f195df6ee42fbfd187287616f98e9b6c1e4e497a | 164 | py | Python | tests/test_dfluxc.py | andybroth/RANS | f168792f63ed9e055941eda0869cb09ea30c3cb5 | [
"MIT"
] | null | null | null | tests/test_dfluxc.py | andybroth/RANS | f168792f63ed9e055941eda0869cb09ea30c3cb5 | [
"MIT"
] | 10 | 2021-11-12T19:39:44.000Z | 2021-12-20T19:45:09.000Z | tests/test_dfluxc.py | andybroth/RANS | f168792f63ed9e055941eda0869cb09ea30c3cb5 | [
"MIT"
] | 1 | 2022-03-23T02:26:34.000Z | 2022-03-23T02:26:34.000Z | '''
import dfluxc_fort
print(dfluxc_fort.__doc__)
# here is what the call should look like
dfluxc_fort.dfluxc(ny,il,jl,w,p,porj,fw,radi,radj,rfil,vis0)
'''


# ==== catalyst/rl/agents/__init__.py (repo: stalkermustang/catalyst, license: MIT) ====
# flake8: noqa
from .actor import *
from .critic import *


# ==== point_in_polygon.py (repo: Kaaos/kaaosgit, license: MIT) ====
# Winding number point-in-polygon algorithm (as per Dan Sunday, 2001)
# Function to test if a point is left or right of, or on, an edge.
# - Inputs: 3 Points (edge_point_a, edge_point_b, point to test)
# - A point is a tuple that has at least (X,Y) coordinates. Other values (z, h, n, m, ...) can exist but are not used.
# - X & Y must have values and their indices must be X:0, Y:1
# - Edge is considered to be an infinite line passing through edge points
# - Returns:
# > 0 for point that is left of the edge
# = 0 for point that is on the edge
# < 0 for point that is right of the edge
def is_left(edge_point_a, edge_point_b, point):
    return ((edge_point_b[0] - edge_point_a[0]) * (point[1] - edge_point_a[1]) - (point[0] - edge_point_a[0]) * (edge_point_b[1] - edge_point_a[1]))
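
# A standalone sanity check of the sign convention described above; the
# coordinates below are illustrative, not taken from this module's test data.

```python
def is_left(edge_point_a, edge_point_b, point):
    # Cross product of (b - a) x (p - a); the sign tells which side of the
    # infinite line through a and b the point p lies on.
    return ((edge_point_b[0] - edge_point_a[0]) * (point[1] - edge_point_a[1])
            - (point[0] - edge_point_a[0]) * (edge_point_b[1] - edge_point_a[1]))

# Edge from (0, 0) to (0, 1) points "up"; negative x is to its left.
assert is_left((0, 0), (0, 1), (-1, 0)) > 0   # left of the edge
assert is_left((0, 0), (0, 1), (1, 0)) < 0    # right of the edge
assert is_left((0, 0), (0, 1), (0, 5)) == 0   # on the infinite edge line
```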
# Function to parse WKT geometries to tuples:
def geometryparser(wkt):
    ret = None
    # Point:
    if (wkt[:5].upper() == "POINT"):
        ret = wkt[5:].strip("() ").split(" ")
        ret = (float(ret[0]), float(ret[1]))
    # Polygon:
    elif (wkt[:7].upper() == "POLYGON"):
        tmp_array = []
        ret = wkt[7:].strip("() ").replace("(", "").replace(")", "")
        for i in ret.split(","):
            tmp = i.strip().split(" ")
            tmp_array.append((float(tmp[0]), float(tmp[1])))
        ret = tuple(tmp_array)
    # Other geometry types are not supported, terminate:
    else:
        print("Invalid geometry type. Exiting.")
        exit()
    return ret
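
# For reference, the POINT branch above can be exercised in isolation; this is
# a minimal re-implementation (the helper name is illustrative, not part of
# this module):

```python
def parse_point_wkt(wkt):
    # Same parsing steps as the POINT branch of geometryparser() above:
    # drop the "POINT" keyword, strip parentheses/spaces, split on the space.
    coords = wkt[5:].strip("() ").split(" ")
    return (float(coords[0]), float(coords[1]))

print(parse_point_wkt("POINT (30 10)"))  # (30.0, 10.0)
```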
# Winding number point-in-polygon test
# Inputs: a point and a polygon in WKT format
# - Both inputs must be in the same CRS to get sane results
# Returns:
# - True (point is inside polygon) or
# - False (point is outside of polygon)
# A point that is exactly on the edge of polygon is considered to be either inside or outside
# - in order to achieve constant behavior I recommend using another algorithm to catch points on the edge
def point_in_polygon(point_wkt, polygon_wkt):
    counter = 0  # Winding number counter
    ret = True   # Return value

    # Convert WKT geometries to tuples:
    point = geometryparser(point_wkt)
    polygon = geometryparser(polygon_wkt)

    # Bounding box check to rule out obvious outside cases:
    xarray = []  # Array to hold polygon x-coordinates
    yarray = []  # Array to hold polygon y-coordinates
    # Append all x/y coordinates to corresponding array:
    for i in polygon:
        xarray.append(i[0])
        yarray.append(i[1])
    # Bounding box: ((minx, miny), (maxx, maxy))
    bbox = ((min(xarray), min(yarray)), (max(xarray), max(yarray)))
    # If point is outside of bounding box, return False:
    if (point[0] < bbox[0][0] or point[0] > bbox[1][0] or point[1] < bbox[0][1] or point[1] > bbox[1][1]):
        return False

    # Point is known to be inside bounding box, make final check using winding number algorithm:
    for i in range(len(polygon) - 1):
        if (polygon[i][1] <= point[1]):  # Polygon vertex Y <= Point Y
            if (polygon[i+1][1] > point[1]):
                if (is_left(polygon[i], polygon[i+1], point) > 0):  # Point is left of edge
                    counter += 1
        else:  # Polygon vertex Y > Point Y
            if (polygon[i+1][1] <= point[1]):
                if (is_left(polygon[i], polygon[i+1], point) < 0):  # Point is right of edge
                    counter -= 1

    if (counter == 0):
        ret = False
    return ret
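
# For intuition, the same winding-number rule applied to a plain coordinate
# ring (no WKT parsing), using a unit square; the helper name and values are
# illustrative:

```python
def winding_number(point, ring):
    # ring: closed list of (x, y) vertices, first vertex repeated as the last.
    px, py = point
    wn = 0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        # Same cross-product test as is_left() in the module above.
        cross = (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)
        if y1 <= py:
            if y2 > py and cross > 0:    # upward crossing, point left of edge
                wn += 1
        elif y2 <= py and cross < 0:     # downward crossing, point right of edge
            wn -= 1
    return wn

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(winding_number((0.5, 0.5), square))  # 1 (non-zero: inside)
print(winding_number((2.0, 0.5), square))  # 0 (outside)
```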

#
# Test example:
#

# A somewhat complex polygon:
wkt_geom_polygon = "Polygon ((-0.80220646178092958 -1.01168126825198179, -0.80219483697104244 -1.01166312654649992, -0.80222164760598791 -1.0116559770438478, -0.80223415923562902 -1.0116899371814454, -0.80221986023032488 -1.01171138568940155, -0.80217696321441212 -1.01171674781639065, -0.80213406619849936 -1.01170066143542337, -0.80210725556355389 -1.01167742555180418, -0.80211976719319511 -1.01162022953058717, -0.80216266420910787 -1.0115826946416635, -0.8022502456165963 -1.01157197038768532, -0.80231280376480241 -1.01161844215492414, -0.8023521260293891 -1.01172925944603187, -0.80232889014576969 -1.01177751858893372, -0.80224667086527024 -1.01181326610219435, -0.80209831868523873 -1.01183113985882467, -0.80202146153172849 -1.01178288071592282, -0.8019481791295443 -1.01168814980578237, -0.8020053751507612 -1.01156660826069622, -0.80216266420910787 -1.01150226273682708, -0.80227348150021571 -1.01150226273682708, -0.802409322050606 -1.01154873450406591, -0.80248975395544242 -1.01171496044072762, -0.80246651807182301 -1.01193123289595444, -0.80231459114046544 -1.01196876778487788, -0.80208759443126065 -1.01199379104416032, -0.80195711600785946 -1.01192765814462837, -0.80185881034639273 -1.01176321958362947, -0.80178910269553461 -1.01155945875804409, -0.80199465089678301 -1.01140395707536057, -0.8021644515847709 -1.01139144544571935, -0.80235391340505213 -1.01139323282138238, -0.8025630363576266 -1.0114522162182622, -0.80267742840006051 -1.01182935248316164, -0.80251835196605081 -1.01211890734057253, -0.80221986023032488 -1.01219397711841985, -0.80204827216667396 -1.01215644222949619, -0.80185523559506666 -1.01209924620827918, -0.80174441830395882 -1.01202238905476882, -0.80161930200754672 -1.01173462157302096, -0.80156925548898195 -1.01157911989033744, -0.80169437178539404 -1.01135212318113266, -0.80205542166932609 -1.0112699039006332, -0.80241289680193206 -1.01124488064135076, -0.80282041845310292 -1.01132888729751325, -0.802881189225646 -1.0116899371814454, 
-0.80282220582876596 -1.01205456181670339, -0.80262738188149563 -1.01234232929845125, -0.80233961439974788 -1.01241382432497251, -0.80205005954233699 -1.01244599708690708, -0.80183378708711039 -1.01241382432497251, -0.80164611264249219 -1.01230300703386455, -0.80175514255793701 -1.0121743159861265, -0.80212870407151027 -1.01227083427192999, -0.80230386688648725 -1.01227619639891908, -0.80243434530988844 -1.01221721300203926, -0.80261665762751755 -1.01217610336178954, -0.80278824569116836 -1.01204026281139914, -0.80286331546901568 -1.0116970866840973, -0.80280969419912473 -1.01135391055679569, -0.80259878387088723 -1.01143791721295795, -0.80241289680193206 -1.01126275439798108, -0.80226275724623752 -1.01137893381607813, -0.80205899642065215 -1.01128062815461139, -0.80193030537291399 -1.01142898033464301, -0.80188919573266426 -1.01132888729751325, -0.80170152128804617 -1.01136284743511085, -0.80177480369023035 -1.01153801025008772, -0.8016747106531007 -1.01160414314961988, -0.80164611264249219 -1.01154873450406591, -0.80160321562657944 -1.01155945875804409, -0.80166577377478554 -1.01173283419735793, -0.80175692993360004 -1.01160593052528291, -0.80180340170083886 -1.01175607008097734, -0.80174084355263275 -1.01170066143542337, -0.80168364753141585 -1.01176858171061856, -0.80179089007119764 -1.01193659502294353, -0.80180876382782795 -1.01177751858893372, -0.80189098310832729 -1.01197055516054091, -0.80182306283313221 -1.01194910665258475, -0.80193924225122915 -1.01210997046225737, -0.80193566749990308 -1.01197770466319303, -0.80220019909803153 -1.01204562493838823, -0.80217517583874909 -1.01209388408129008, -0.80203576053703274 -1.01207601032465977, -0.80204112266402183 -1.01203311330874701, -0.8019714150131636 -1.01202238905476882, -0.8019767771401527 -1.01208494720297493, -0.80199286352111998 -1.01212426946756162, -0.80206793329896731 -1.01213499372153981, -0.80214300307681452 -1.01214929272684406, -0.80221986023032488 -1.01215108010250709, -0.80223594661129205 
-1.01201881430344276, -0.80232531539444363 -1.01207958507598583, -0.80238429879132356 -1.01200809004946457, -0.80242540843157328 -1.01208137245164886, -0.80249332870676837 -1.01195625615523688, -0.80261487025185452 -1.0118668873720853, -0.80252550146870294 -1.0118668873720853, -0.80253801309834416 -1.01179717972122707, -0.80260950812486542 -1.0118043292238792, -0.80259342174389814 -1.01172032256771671, -0.80253622572268113 -1.0117650069592925, -0.80253086359569203 -1.01164882754119567, -0.80258448486558298 -1.01162559165757626, -0.80255409947931144 -1.01151298699080527, -0.80250584033640959 -1.01167385080047811, -0.80247366757447514 -1.01154515975273984, -0.80250941508773566 -1.01153801025008772, -0.80251298983906172 -1.01146651522356645, -0.8023521260293891 -1.01143612983729492, -0.80242362105591025 -1.01151298699080527, -0.80224488348960721 -1.0114575783452513, -0.80230207951082422 -1.01144506671561007, -0.80213406619849936 -1.01142898033464301, -0.80199107614545695 -1.01147545210188161, -0.80193566749990308 -1.01150405011249012, -0.80208938180692368 -1.01149868798550102, -0.80189455785965336 -1.01160950527660898, -0.80188204623001214 -1.01153086074743559, -0.80182663758445827 -1.01157375776334835, -0.80194460437821824 -1.01174534582699915, -0.80197498976448967 -1.01184007673713983, -0.80205899642065215 -1.01190799701233503, -0.80218590009272728 -1.01193659502294353, -0.80229314263250906 -1.0119008475096829, -0.80234676390240001 -1.01184007673713983, -0.80240574729927994 -1.0118615252450962, -0.80233425227275879 -1.01187939900172652, -0.8023521260293891 -1.01191514651498715, -0.80241647155325813 -1.01191157176366109, -0.80243077055856238 -1.01182935248316164, -0.80238072403999749 -1.01176143220796644, -0.80243255793422541 -1.0117650069592925, -0.80245221906651876 -1.01182220298050951, -0.80246830544748604 -1.01172747207036884, -0.80238608616698659 -1.01161486740359807, -0.80231459114046544 -1.01155230925539197, -0.80222164760598791 -1.01154158500141378, 
-0.80208223230427156 -1.01157018301202228, -0.80202146153172849 -1.01162380428191323, -0.80202682365871758 -1.01170781093807549, -0.80208759443126065 -1.01177573121327069, -0.80213406619849936 -1.01178645546724888, -0.80218411271706425 -1.01178824284291191, -0.80225918249491146 -1.0117703690862816, -0.802289567881183 -1.01172032256771671, -0.80227526887587874 -1.01166670129782599, -0.80225203299225933 -1.01163095378456536, -0.80222164760598791 -1.01161844215492414, -0.80218411271706425 -1.01161844215492414, -0.80214479045247755 -1.01162737903323929, -0.80215730208211877 -1.01165776441951083, -0.80214836520380361 -1.01169351193277146, -0.80217696321441212 -1.01167206342481508, -0.80220646178092958 -1.01168126825198179),(-0.8019086685880964 -1.01227003708684093, -0.80211546843829507 -1.01231891705143329, -0.80209290845463699 -1.0123865970024073, -0.80202898850093929 -1.01233395704053875, -0.80194626856085982 -1.01240163699151275, -0.8017883486752535 -1.01233395704053875, -0.80192746857447805 -1.01233395704053875, -0.80180338866435885 -1.01222491711952478, -0.8019086685880964 -1.01227003708684093),(-0.80222147073090533 -1.01166225446434477, -0.80221913420779201 -1.01166588905585431, -0.80222475917084257 -1.01166909095789848, -0.80221852844254038 -1.01166718712425063, -0.80222934567917614 -1.01167930242928272, -0.80221705729835802 -1.01166770635160908, -0.80223133605071717 -1.01168449470286781, -0.80221567269206862 -1.01166848519264696, -0.80221679768467868 -1.01167835051245869, -0.80221472077524469 -1.01166874480632618, -0.80221013426691112 -1.01167809089877947, -0.80221333616895529 -1.0116679659652883, -0.80220632659961533 -1.01167367746623205, -0.80221333616895529 -1.01166588905585431, -0.80220217278074724 -1.01166727366214371, -0.80221454769945844 -1.01166415829799261, -0.8022074515922254 -1.01166147562330688, -0.80221697076046483 -1.01166415829799261, -0.80221273040370367 -1.01165948525176597, -0.80221852844254038 -1.01166363907063417, -0.80221959273487042 
-1.01165673515177645, -0.80222013779984191 -1.01166243108072829, -0.80222177299475639 -1.0116569259245165, -0.80222147073090533 -1.01166225446434477),(-0.80219832664382118 -1.01168744067163008, -0.80221286041232787 -1.01170446594330921, -0.80218130137214205 -1.01171152520229812, -0.80214102207085236 -1.01169865243590662, -0.80211444603701176 -1.01167456790523858, -0.80212981030657582 -1.01162266158914349, -0.8021738268626244 -1.01158653479314142, -0.80224732620621486 -1.01157822978256617, -0.80230296977706872 -1.01162889034707493, -0.80233702032042697 -1.0117210759644597, -0.80230795278341382 -1.01176882977526694, -0.80232124080033418 -1.01172522846974711, -0.80227763949481434 -1.01178502454588859, -0.80229674101913728 -1.01173893173719631, -0.80230130877495365 -1.01167290690312339, -0.80225521596626137 -1.01160812782063703, -0.8022456652040999 -1.01159151779948653, -0.8022161824165579 -1.01159442455318782, -0.80225106346097375 -1.01161767858279839, -0.80218794538060223 -1.01159816180794659, -0.80216344559940544 -1.01161269557645328, -0.802136454315036 -1.0116247378417873, -0.80214849658037002 -1.01165588163144426, -0.8021335475613347 -1.01164010211135147, -0.80212233579705816 -1.01167207640206591, -0.80213894581820855 -1.01165671213250175, -0.8021285645549896 -1.0116778899094685, -0.80215181858460016 -1.01166003413673189, -0.80214434407508239 -1.01169699143379144, -0.80216510660152041 -1.01169989818749295, -0.8021788098689695 -1.01167830515999735, -0.80218379287531461 -1.01170695744648187, -0.80219832664382118 -1.01168744067163008),(-0.8022433482322352 -1.01178653670723451, -0.80224930376792725 -1.0117984477786186, -0.80211282274165108 -1.0118212773321047, -0.80201505269737317 -1.01175675902877416, -0.80197981577786193 -1.0116768555915725, -0.80201455640273223 -1.01158355219906371, -0.80210240055418991 -1.01153590791352732, -0.80224235564295321 -1.01151407094932311, -0.80237982925851137 -1.01154781898491142, -0.80244037720471384 -1.01163516684172827, 
-0.80242548836548377 -1.01164360385062513, -0.80235253305325616 -1.01155129304739844, -0.80222200756267203 -1.01153094496711726, -0.8020577340364996 -1.01156717447591049, -0.80200711198311714 -1.01163318166316407, -0.80201257122416814 -1.01171507027892993, -0.80204483037583341 -1.01176271456446631, -0.80209743760777996 -1.01179646260005462, -0.80217535586641764 -1.01180241813574656, -0.8022433482322352 -1.01178653670723451),(-0.80232754547945684 -1.01188191831216678, -0.80233532015781794 -1.01187346757481778, -0.80239109502432138 -1.01186096048354113, -0.80234850330808238 -1.01184608718580682, -0.80232044686008364 -1.01187718589925124, -0.80229408055955476 -1.01191403111409306, -0.80218388294452359 -1.01194479179804331, -0.8023299116859145 -1.01192924244132132, -0.80216833358780137 -1.01196372144970526, -0.80240292605661001 -1.01193701711968242, -0.80234478498364881 -1.01192079170397231, -0.80232754547945684 -1.01188191831216678),(-0.80212296017562135 -1.0119453696139622, -0.80209504026715461 -1.01197794284050668, -0.80197095178508027 -1.0119151230464567, -0.80181894339453919 -1.01160257518223196, -0.801846087749993 -1.01162041290153004, -0.80193217413443207 -1.01176854352700629, -0.80196629846700251 -1.01185462991144548, -0.80205471151048047 -1.01192908300068996, -0.80212296017562135 -1.0119453696139622),(-0.80194497080282157 -1.01151828371449803, -0.80203431721967655 -1.01151634140108815, -0.8019051533779189 -1.01158432237043416, -0.80189252834075464 -1.01149691826698929, -0.80194497080282157 -1.01151828371449803),(-0.80234800083537372 -1.01140854300683913, -0.80233440464150452 -1.01142311035741339, -0.80237422206640729 -1.01148720669993986, -0.80231012572388094 -1.01146292778231617, -0.80232177960434026 -1.0114211680440035, -0.80227322176909299 -1.01141631226047868, -0.80234800083537372 -1.01140854300683913),(-0.80260147273536431 -1.01163773598920614, -0.8026529440407264 -1.01180186147234186, -0.80259273232501982 -1.01169697654820778, -0.80255194374341221 
-1.01172999587617607, -0.80260147273536431 -1.01163773598920614),(-0.80238432738899723 -1.012056879621412, -0.80242415765192165 -1.01210402564691426, -0.80236238010126348 -1.01214873308489062, -0.80224045072496453 -1.01212597293464812, -0.80224288931249044 -1.01206013107144677, -0.80235262575115951 -1.01210808995945767, -0.80238432738899723 -1.012056879621412),(-0.80255474239946767 -1.01222776429069206, -0.80265428980875708 -1.01217161790067323, -0.80281376884147104 -1.01202900008727692, -0.80281025608744649 -1.01205569701786358, -0.80265032576132234 -1.01229049069510668, -0.80248785443633097 -1.01229189130997721, -0.80204382817997655 -1.01242247662259688, -0.80255474239946767 -1.01222776429069206))"
# A point inside polygon:
wkt_geom_point = "Point (-0.8022201557511377 -1.01166223862933169)"
# Check and report:
print("Input point: ", wkt_geom_point, " \nPoint is inside polygon? ", point_in_polygon(wkt_geom_point, wkt_geom_polygon))
| 161.95098 | 12,656 | 0.780858 | 1,752 | 16,519 | 7.337329 | 0.385274 | 0.007001 | 0.004667 | 0.003034 | 0.090004 | 0.019137 | 0.019137 | 0.011824 | 0.011824 | 0.011824 | 0 | 0.715073 | 0.099159 | 16,519 | 101 | 12,657 | 163.554455 | 0.14878 | 0.107028 | 0 | 0.086957 | 0 | 0.021739 | 0.868838 | 0.397498 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0 | 0.021739 | 0.152174 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
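The snippet above calls a `point_in_polygon` helper that is not shown in this row. A minimal sketch of one way such a helper could work, using even-odd ray casting on plain `(x, y)` vertex lists (the WKT parsing done in the original is assumed to happen elsewhere; the function name and signature here are illustrative, and interior rings like those in the polygon above are ignored):

```python
def point_in_polygon(point, polygon):
    """Even-odd ray casting: count crossings of a rightward ray from the point."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

An odd number of crossings means the point lies inside the outer ring; each crossing toggles the flag.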
7b04604c8fddc65bc17606e0c89dbb8dd4c15f5b | 209 | py | Python | bg_utils/__init__.py | recommend-games/board-game-utils | 31d64c952bf3ac907ce0b0853af8f8a3b6454f04 | [
"MIT"
] | 1 | 2021-05-17T11:15:42.000Z | 2021-05-17T11:15:42.000Z | bg_utils/__init__.py | recommend-games/board-game-utils | 31d64c952bf3ac907ce0b0853af8f8a3b6454f04 | [
"MIT"
] | null | null | null | bg_utils/__init__.py | recommend-games/board-game-utils | 31d64c952bf3ac907ce0b0853af8f8a3b6454f04 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Initialisations."""
from .__version__ import VERSION, __version__
from .recommend import recommend_games
from .transformers import make_transformer, matrix_to_dataframe, transform
| 26.125 | 74 | 0.784689 | 23 | 209 | 6.608696 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005376 | 0.110048 | 209 | 7 | 75 | 29.857143 | 0.811828 | 0.186603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7b0f7138edd82a5c963cff32dcaab9d6e6afdacc | 112 | py | Python | user_management/management.py | zonradkuse/city-tour-recsys | 46e9d7726a58fb5bfb1585393e0564dcc6ef69df | [
"MIT"
] | null | null | null | user_management/management.py | zonradkuse/city-tour-recsys | 46e9d7726a58fb5bfb1585393e0564dcc6ef69df | [
"MIT"
] | null | null | null | user_management/management.py | zonradkuse/city-tour-recsys | 46e9d7726a58fb5bfb1585393e0564dcc6ef69df | [
"MIT"
] | null | null | null | def add_user(name):
pass
def add_user_rating(uid, item_id, rating):
pass
def get_user(name):
pass
| 12.444444 | 42 | 0.678571 | 19 | 112 | 3.736842 | 0.526316 | 0.169014 | 0.28169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223214 | 112 | 8 | 43 | 14 | 0.816092 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
9eb85055af573f470b958c7a46079752aa23388d | 22 | py | Python | tests/__init__.py | Parasgupta44/py_holiday_calendar | 18ecc88b3638a1b126e159f96a31a88e517f45f1 | [
"MIT"
] | 3 | 2022-01-11T17:26:25.000Z | 2022-01-18T06:57:28.000Z | tests/__init__.py | Parasgupta44/py_holiday_calendar | 18ecc88b3638a1b126e159f96a31a88e517f45f1 | [
"MIT"
] | null | null | null | tests/__init__.py | Parasgupta44/py_holiday_calendar | 18ecc88b3638a1b126e159f96a31a88e517f45f1 | [
"MIT"
] | null | null | null | # Just add some tests
| 11 | 21 | 0.727273 | 4 | 22 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 1 | 22 | 22 | 0.941176 | 0.863636 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9ec1ac023dc99bad5d9314a3469f666eb1724f6e | 376 | py | Python | control/clients/__init__.py | mwmajew/Tonic | f09b9f95ac5281e9299638e4bd513d31ef703bc9 | [
"MIT"
] | 91 | 2019-03-21T22:27:34.000Z | 2022-02-17T09:19:09.000Z | control/clients/__init__.py | mwmajew/Tonic | f09b9f95ac5281e9299638e4bd513d31ef703bc9 | [
"MIT"
] | 19 | 2018-11-30T16:53:15.000Z | 2021-01-04T20:32:51.000Z | control/clients/__init__.py | mwmajew/Tonic | f09b9f95ac5281e9299638e4bd513d31ef703bc9 | [
"MIT"
] | 11 | 2019-07-02T13:07:43.000Z | 2021-12-17T04:43:00.000Z | from clients.client import Client
from clients.steering_client import QTSteeringClient, QTSteeringMotor
from clients.video_client import MultiVideoClient, VideoClient, QTVideoClient
from clients.imu_client import MultiImuClient, ImuWorker, QTImuClient
from clients.slam_client import QtSlamClient
from clients.sink import ClientSink
from clients.odo_client import QTOdoClient
| 47 | 77 | 0.880319 | 45 | 376 | 7.244444 | 0.466667 | 0.236196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087766 | 376 | 7 | 78 | 53.714286 | 0.950437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7b514fb73095e212d60154e6c69b84073f17991a | 25 | py | Python | api/__init__.py | kumetix/python-flask-microservice | 0777fb07bec1c0d1ee578b0346b2c23efb83135c | [
"Apache-2.0"
] | 611 | 2017-03-06T20:06:07.000Z | 2022-03-29T01:42:48.000Z | api/__init__.py | dhgdhg/python-flask-microservice | 0777fb07bec1c0d1ee578b0346b2c23efb83135c | [
"Apache-2.0"
] | 7 | 2017-05-29T09:18:32.000Z | 2019-12-27T00:44:12.000Z | api/__init__.py | dhgdhg/python-flask-microservice | 0777fb07bec1c0d1ee578b0346b2c23efb83135c | [
"Apache-2.0"
] | 222 | 2017-03-09T15:07:39.000Z | 2022-01-14T12:32:10.000Z | from api.room import Room | 25 | 25 | 0.84 | 5 | 25 | 4.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
7b6a5397de903337bce01072f73e62476881b8ac | 79 | py | Python | address.py | marygovory/python_training | 5ba12d7ac8813215e0392a85b2ef4f314aae38b3 | [
"Apache-2.0"
] | null | null | null | address.py | marygovory/python_training | 5ba12d7ac8813215e0392a85b2ef4f314aae38b3 | [
"Apache-2.0"
] | null | null | null | address.py | marygovory/python_training | 5ba12d7ac8813215e0392a85b2ef4f314aae38b3 | [
"Apache-2.0"
] | null | null | null | class Contact:
def __init__(self, contact):
self.contact = contact
| 19.75 | 32 | 0.658228 | 9 | 79 | 5.333333 | 0.555556 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.253165 | 79 | 3 | 33 | 26.333333 | 0.813559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
7b9d0708dd9b56e6a35f4bd99167f785f9825f9b | 355 | py | Python | DataTypeApp.py | ZnoKunG/PythonProject | 388b5dfeb0161aee66094e7b2ecc2d6ed13588bd | [
"MIT"
] | null | null | null | DataTypeApp.py | ZnoKunG/PythonProject | 388b5dfeb0161aee66094e7b2ecc2d6ed13588bd | [
"MIT"
] | null | null | null | DataTypeApp.py | ZnoKunG/PythonProject | 388b5dfeb0161aee66094e7b2ecc2d6ed13588bd | [
"MIT"
] | null | null | null | print("Welcome to Sommai Shop")
print("-----------------------")
print("Apple x", 3, 30,"THB -",3*30,"THB")
print("Tea x", 2, 25, "THB -",2*25, "THB")
print("Notebook x", 1, 50, "THB -", 1*50, "THB")
print("Yogurt x", 3, 20, "THB -", 3*20, "THB")
print("-----------------------")
print("Total", 3*30+2*25+1*50+3*20, "THB")
print("Thank you : Sommai Shop") | 39.444444 | 48 | 0.495775 | 59 | 355 | 2.983051 | 0.355932 | 0.227273 | 0.102273 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11465 | 0.115493 | 355 | 9 | 49 | 39.444444 | 0.44586 | 0 | 0 | 0.222222 | 0 | 0 | 0.452247 | 0.129213 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
7bb20e72bb94030cb3e6c0e9443656590941d88d | 152 | py | Python | tests/v2/plugins_bad_cmd/cmd_plug_bad_run.py | tombry/virlutils | e98136b4e88c456828f2d0496c14f851f2627a46 | [
"MIT"
] | 133 | 2018-07-01T06:08:49.000Z | 2022-03-26T15:22:21.000Z | tests/v2/plugins_bad_cmd/cmd_plug_bad_run.py | tombry/virlutils | e98136b4e88c456828f2d0496c14f851f2627a46 | [
"MIT"
] | 76 | 2018-06-28T16:41:57.000Z | 2022-03-26T17:23:06.000Z | tests/v2/plugins_bad_cmd/cmd_plug_bad_run.py | tombry/virlutils | e98136b4e88c456828f2d0496c14f851f2627a46 | [
"MIT"
] | 43 | 2018-06-27T20:40:52.000Z | 2022-02-22T06:16:11.000Z | from virl.api.plugin import CommandPlugin
class TestBadCmdPlugin(CommandPlugin, command="test-bad-cmd"):
def run():
print("TEST COMMAND")
| 21.714286 | 62 | 0.717105 | 18 | 152 | 6.055556 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164474 | 152 | 6 | 63 | 25.333333 | 0.858268 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.75 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
c86e2b97b2e9219c0e732406cd0cc89a130535a4 | 38 | py | Python | purescript_show_python/ffi/Data/Symbol.py | thautwarm/purescript-show-python | e40134362653baa2f5feffced579504d7fe7f7ba | [
"MIT"
] | null | null | null | purescript_show_python/ffi/Data/Symbol.py | thautwarm/purescript-show-python | e40134362653baa2f5feffced579504d7fe7f7ba | [
"MIT"
] | 1 | 2020-02-24T16:38:30.000Z | 2020-02-24T16:38:30.000Z | purescript_show_python/ffi/Data/Symbol.py | purescript-python/purescript-show-python | e40134362653baa2f5feffced579504d7fe7f7ba | [
"MIT"
] | null | null | null | def unsafeCoerce(arg):
return arg
| 12.666667 | 22 | 0.710526 | 5 | 38 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 38 | 2 | 23 | 19 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
c898be0f909fde94a7e223d05c9832e68ac18333 | 109 | py | Python | parser/api/v1/__init__.py | amrbz/alexa | 4f1a0d53bed20c8cff5069e8ad6157f3aa6aa406 | [
"MIT"
] | 16 | 2019-08-08T17:08:04.000Z | 2022-02-02T07:04:59.000Z | parser/api/v1/__init__.py | amrbz/alexa | 4f1a0d53bed20c8cff5069e8ad6157f3aa6aa406 | [
"MIT"
] | 15 | 2021-05-08T21:26:08.000Z | 2022-01-29T12:52:34.000Z | parser/api/v1/__init__.py | amrbz/alexa | 4f1a0d53bed20c8cff5069e8ad6157f3aa6aa406 | [
"MIT"
] | 3 | 2019-09-30T09:27:01.000Z | 2020-11-10T15:32:37.000Z | from sanic import Blueprint
from .parser import parser
api_v1 = Blueprint.group(parser, url_prefix='/v1')
| 15.571429 | 50 | 0.770642 | 16 | 109 | 5.125 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.137615 | 109 | 6 | 51 | 18.166667 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0.028037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
c8e450b1d76c54dbe7ea542ce303e95a766c13d5 | 240 | py | Python | python/dpu_utils/codeutils/__init__.py | mayank-sfdc/dpu-utils | 72cac67490ccf19fede6b86269c57843549b526c | [
"MIT"
] | 61 | 2019-05-14T15:09:19.000Z | 2022-03-31T18:47:41.000Z | python/dpu_utils/codeutils/__init__.py | mayank-sfdc/dpu-utils | 72cac67490ccf19fede6b86269c57843549b526c | [
"MIT"
] | 16 | 2019-06-04T13:59:59.000Z | 2022-03-14T09:49:19.000Z | python/dpu_utils/codeutils/__init__.py | mayank-sfdc/dpu-utils | 72cac67490ccf19fede6b86269c57843549b526c | [
"MIT"
] | 32 | 2019-07-05T08:55:59.000Z | 2022-02-19T03:37:02.000Z | from .identifiersplitting import split_identifier_into_parts
from .lattice import CSharpLattice, Lattice, LatticeVocabulary
from .keywords.keywordlist import get_language_keywords
from .filesuffix import language_candidates_from_suffix
| 48 | 63 | 0.879167 | 27 | 240 | 7.518519 | 0.62963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091667 | 240 | 4 | 64 | 60 | 0.931193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c8f06160bfa3b15f24a174525d94d6ef8717f88f | 76 | py | Python | scripts/prime.py | abrahamrhoffman/math | 9d49b74b437dfd0ed8f1b90fd9a61e24e3e01f1b | [
"Apache-2.0"
] | null | null | null | scripts/prime.py | abrahamrhoffman/math | 9d49b74b437dfd0ed8f1b90fd9a61e24e3e01f1b | [
"Apache-2.0"
] | null | null | null | scripts/prime.py | abrahamrhoffman/math | 9d49b74b437dfd0ed8f1b90fd9a61e24e3e01f1b | [
"Apache-2.0"
] | null | null | null |
def prime(n):
return all(n%j for j in range(2, int(n**0.5)+1)) and n>1
| 19 | 60 | 0.578947 | 20 | 76 | 2.2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.210526 | 76 | 3 | 61 | 25.333333 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
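The trial-division test above only checks divisors up to the integer square root of `n`, with `n > 1` excluding 0 and 1. A short usage illustration (not part of the original file):

```python
def prime(n):
    # Trial division up to int(sqrt(n)); any zero remainder makes all() False
    return all(n % j for j in range(2, int(n**0.5) + 1)) and n > 1

primes_under_20 = [n for n in range(20) if prime(n)]
# → [2, 3, 5, 7, 11, 13, 17, 19]
```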
cdcb1c85a4718c515d72f463da7dd3e050f9593f | 24 | py | Python | favorite-animals/josh_smith.py | jasonjstewart/learn-git | 47a49c4efb31054610999a56a0f282213dff1178 | [
"MIT"
] | 1 | 2021-09-29T18:48:12.000Z | 2021-09-29T18:48:12.000Z | favorite-animals/josh_smith.py | jasonjstewart/learn-git | 47a49c4efb31054610999a56a0f282213dff1178 | [
"MIT"
] | 21 | 2021-09-27T17:19:45.000Z | 2021-09-30T04:07:26.000Z | favorite-animals/josh_smith.py | jasonjstewart/learn-git | 47a49c4efb31054610999a56a0f282213dff1178 | [
"MIT"
] | 192 | 2021-09-27T17:10:51.000Z | 2021-10-05T03:06:36.000Z | print("I like hamsters") | 24 | 24 | 0.75 | 4 | 24 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 24 | 1 | 24 | 24 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
cde1a9a86ab5790b1c26d431ad65896108511a35 | 98 | py | Python | _apicheck/apicheck/exceptions.py | sundayayandele/apicheck | ab91f567d67547b92b8e94824a29dcd5993b769e | [
"Apache-2.0"
] | 2 | 2019-05-31T09:56:59.000Z | 2019-05-31T11:28:50.000Z | _apicheck/apicheck/exceptions.py | sundayayandele/apicheck | ab91f567d67547b92b8e94824a29dcd5993b769e | [
"Apache-2.0"
] | 3 | 2022-02-07T03:37:37.000Z | 2022-03-02T03:38:13.000Z | _apicheck/apicheck/exceptions.py | sundayayandele/apicheck | ab91f567d67547b92b8e94824a29dcd5993b769e | [
"Apache-2.0"
] | 1 | 2021-07-18T15:01:22.000Z | 2021-07-18T15:01:22.000Z | class APICheckException(Exception):
pass
class APICheckFormatException(Exception):
pass
| 14 | 41 | 0.77551 | 8 | 98 | 9.5 | 0.625 | 0.342105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 98 | 6 | 42 | 16.333333 | 0.926829 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
cde9434c697b1f31830a45c8b0713fe29bf5cd3a | 34 | py | Python | background/history.py | USCMediaImpact/mip-dashboard | 19f3c732a13cd3bd7a6623c354bc0a0a0fcf3c33 | [
"Apache-2.0"
] | null | null | null | background/history.py | USCMediaImpact/mip-dashboard | 19f3c732a13cd3bd7a6623c354bc0a0a0fcf3c33 | [
"Apache-2.0"
] | null | null | null | background/history.py | USCMediaImpact/mip-dashboard | 19f3c732a13cd3bd7a6623c354bc0a0a0fcf3c33 | [
"Apache-2.0"
] | null | null | null | import debug
debug.run_history()
| 8.5 | 19 | 0.794118 | 5 | 34 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 3 | 20 | 11.333333 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
cdf23b4cb1a77561ca1b176313425f1750050a1a | 2,534 | py | Python | tests/test_char_encoder.py | lvyufeng/Elmo_mindspore | 2002b477738a8a805f3fa1f02af8163724795489 | [
"MIT"
] | 1 | 2021-03-25T14:32:17.000Z | 2021-03-25T14:32:17.000Z | tests/test_char_encoder.py | lvyufeng/elmo_mindspore | 2002b477738a8a805f3fa1f02af8163724795489 | [
"MIT"
] | null | null | null | tests/test_char_encoder.py | lvyufeng/elmo_mindspore | 2002b477738a8a805f3fa1f02af8163724795489 | [
"MIT"
] | null | null | null | import unittest
import mindspore
import numpy as np
from mindspore import Tensor
from elmo.modules.embedding import CharacterEncoder
from mindspore import context
class TestCharEncoder(unittest.TestCase):
char_cnn = {'activation': 'relu',
'embedding': {'dim': 16},
'filters': [[1, 32],
[2, 32],
[3, 64],
[4, 128],
[5, 256],
[6, 512],
[7, 1024]],
'max_characters_per_token': 50,
'n_characters': 261,
'n_highway': 2}
def test_char_encoder(self):
context.set_context(mode=context.PYNATIVE_MODE, device_target='Ascend')
cnn_options = self.char_cnn
filters = cnn_options['filters']
n_filters = sum(f[1] for f in filters)
max_chars = cnn_options['max_characters_per_token']
char_embed_dim = cnn_options['embedding']['dim']
n_chars = cnn_options['n_characters']
activation = cnn_options['activation']
n_highway = cnn_options.get('n_highway')
projection_dim = 512
        char_embedding = CharacterEncoder(filters, n_filters, max_chars, char_embed_dim, n_chars, n_highway, projection_dim, activation)
# (batch_size, sequence_length, max_chars)
inputs = Tensor(np.random.randn(3, 20, max_chars), mindspore.int32)
token_embedding = char_embedding(inputs)
# (num_layers, seq_length, batch_size, hidden_size)
assert token_embedding.shape == (3, 20, projection_dim)
def test_char_encoder_graph_mode(self):
cnn_options = self.char_cnn
filters = cnn_options['filters']
n_filters = sum(f[1] for f in filters)
max_chars = cnn_options['max_characters_per_token']
char_embed_dim = cnn_options['embedding']['dim']
n_chars = cnn_options['n_characters']
activation = cnn_options['activation']
n_highway = cnn_options.get('n_highway')
projection_dim = 512
context.set_context(mode=context.GRAPH_MODE, device_target='Ascend')
        char_embedding = CharacterEncoder(filters, n_filters, max_chars, char_embed_dim, n_chars, n_highway, projection_dim, activation)
# (batch_size, sequence_length, max_chars)
inputs = Tensor(np.random.randn(3, 20, max_chars), mindspore.int32)
token_embedding = char_embedding(inputs)
# (num_layers, seq_length, batch_size, hidden_size)
assert token_embedding.shape == (3, 20, projection_dim)
| 40.222222 | 153 | 0.655485 | 313 | 2,534 | 4.98722 | 0.252396 | 0.089686 | 0.038437 | 0.053812 | 0.743113 | 0.707239 | 0.707239 | 0.707239 | 0.707239 | 0.707239 | 0 | 0.03013 | 0.240331 | 2,534 | 63 | 154 | 40.222222 | 0.780779 | 0.071429 | 0 | 0.530612 | 0 | 0 | 0.10132 | 0.030651 | 0 | 0 | 0 | 0 | 0.040816 | 1 | 0.040816 | false | 0 | 0.122449 | 0 | 0.204082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a80ebaa459a5dd89e433f4e796805e64eb7d2f9d | 242 | py | Python | gui_app/enum/StatusCode.py | cloudconductor/cloud_conductor_gui | 8929815adfc6442daed5d320bfb7942d6917c3ea | [
"Apache-2.0"
] | null | null | null | gui_app/enum/StatusCode.py | cloudconductor/cloud_conductor_gui | 8929815adfc6442daed5d320bfb7942d6917c3ea | [
"Apache-2.0"
] | null | null | null | gui_app/enum/StatusCode.py | cloudconductor/cloud_conductor_gui | 8929815adfc6442daed5d320bfb7942d6917c3ea | [
"Apache-2.0"
] | null | null | null | from enum import Enum
class Environment(Enum):
CREATE_COMPLETE = 'CREATE_COMPLETE'
PENDING = 'PENDING'
ERROR = 'ERROR'
class Blueprint(Enum):
CREATE_COMPLETE = 'CREATE_COMPLETE'
PENDING = 'PENDING'
ERROR = 'ERROR'
| 17.285714 | 39 | 0.677686 | 26 | 242 | 6.153846 | 0.384615 | 0.35 | 0.225 | 0.3 | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 | 0 | 0 | 0 | 0.223141 | 242 | 13 | 40 | 18.615385 | 0.851064 | 0 | 0 | 0.666667 | 0 | 0 | 0.223141 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 1 | 0.111111 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
a828950848a342e83cd3c02ff56774492dbbebfc | 64 | py | Python | finitewave/core/tissue/__init__.py | ArsOkenov/Finitewave | 14274d74be824a395b47a5c53ba18188798ab70d | [
"MIT"
] | null | null | null | finitewave/core/tissue/__init__.py | ArsOkenov/Finitewave | 14274d74be824a395b47a5c53ba18188798ab70d | [
"MIT"
] | null | null | null | finitewave/core/tissue/__init__.py | ArsOkenov/Finitewave | 14274d74be824a395b47a5c53ba18188798ab70d | [
"MIT"
] | null | null | null | from finitewave.core.tissue.cardiac_tissue import CardiacTissue
| 32 | 63 | 0.890625 | 8 | 64 | 7 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 64 | 1 | 64 | 64 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a82dc24c22b1677f0caf6564e9d0c94f2b354686 | 117 | py | Python | tensorboard_reducer/__init__.py | p-enel/tensorboard-reducer | d8258cd25cc9d809b4e733cbc6ba4f9f245dc8a2 | [
"MIT"
] | 26 | 2021-04-15T04:23:37.000Z | 2022-03-18T03:17:03.000Z | tensorboard_reducer/__init__.py | p-enel/tensorboard-reducer | d8258cd25cc9d809b4e733cbc6ba4f9f245dc8a2 | [
"MIT"
] | 8 | 2021-05-27T13:22:32.000Z | 2022-02-16T08:36:27.000Z | tensorboard_reducer/__init__.py | p-enel/tensorboard-reducer | d8258cd25cc9d809b4e733cbc6ba4f9f245dc8a2 | [
"MIT"
] | 3 | 2021-07-17T15:01:54.000Z | 2022-01-12T22:23:44.000Z | from .load import load_tb_events
from .main import main, reduce_events
from .write import write_csv, write_tb_events
| 29.25 | 45 | 0.837607 | 20 | 117 | 4.6 | 0.45 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119658 | 117 | 3 | 46 | 39 | 0.893204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b54084330c021a3f93e3b0ce6d8603164c1f3b7c | 43 | py | Python | jungle_book/encyclopedia/__init__.py | EmilTheSadCat/jungle-book-app | 9b92ba8b889943ce3636a88d538a59ee339869f1 | [
"MIT"
] | null | null | null | jungle_book/encyclopedia/__init__.py | EmilTheSadCat/jungle-book-app | 9b92ba8b889943ce3636a88d538a59ee339869f1 | [
"MIT"
] | null | null | null | jungle_book/encyclopedia/__init__.py | EmilTheSadCat/jungle-book-app | 9b92ba8b889943ce3636a88d538a59ee339869f1 | [
"MIT"
] | null | null | null | from jungle_book.plant.models import Plant
| 21.5 | 42 | 0.860465 | 7 | 43 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
b54f714eb9b7c4deec86f8e86175c7e26386a569 | 153 | py | Python | src/utils.py | codezakh/pykinship | 9d894d81ad6acd8229806fd0fb7e646ffb7ea475 | [
"MIT"
] | null | null | null | src/utils.py | codezakh/pykinship | 9d894d81ad6acd8229806fd0fb7e646ffb7ea475 | [
"MIT"
] | null | null | null | src/utils.py | codezakh/pykinship | 9d894d81ad6acd8229806fd0fb7e646ffb7ea475 | [
"MIT"
] | null | null | null | from pathlib import Path
def mkdir(din):
"""
:param din directory to make
"""
Path(din).absolute().mkdir(exist_ok=True, parents=True)
| 15.3 | 59 | 0.640523 | 21 | 153 | 4.619048 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 153 | 9 | 60 | 17 | 0.815126 | 0.183007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b586836a867844140fd09704c95f2ac492f98378 | 83 | py | Python | drf_irelation/__init__.py | justjew/irelation | 6383752e1b80655200143ae33bb2a55b3ca195ed | [
"MIT"
] | null | null | null | drf_irelation/__init__.py | justjew/irelation | 6383752e1b80655200143ae33bb2a55b3ca195ed | [
"MIT"
] | 1 | 2021-01-20T16:07:16.000Z | 2021-01-20T16:08:54.000Z | drf_irelation/__init__.py | justjew/irelation | 6383752e1b80655200143ae33bb2a55b3ca195ed | [
"MIT"
] | 1 | 2021-03-09T14:21:38.000Z | 2021-03-09T14:21:38.000Z | from .irelation import RelatedField, get_related_object, get_relation_from_request
| 41.5 | 82 | 0.891566 | 11 | 83 | 6.272727 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072289 | 83 | 1 | 83 | 83 | 0.896104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b5ab551b69c073ada0cf640bd1e3caa5568a290d | 43 | py | Python | timesynth/signals/ode.py | swight-prc/TimeSynth | 9b10a276e90fee145c9f69c15195d028c78214bf | [
"MIT"
] | 242 | 2016-11-03T21:26:46.000Z | 2022-03-30T21:33:50.000Z | timesynth/signals/ode.py | swight-prc/TimeSynth | 9b10a276e90fee145c9f69c15195d028c78214bf | [
"MIT"
] | 18 | 2017-04-06T18:47:36.000Z | 2020-11-30T18:18:56.000Z | timesynth/signals/ode.py | swight-prc/TimeSynth | 9b10a276e90fee145c9f69c15195d028c78214bf | [
"MIT"
] | 56 | 2017-08-31T14:32:50.000Z | 2022-03-21T01:03:41.000Z | # Stub for Ordinary Differential Equations
| 21.5 | 42 | 0.837209 | 5 | 43 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 43 | 1 | 43 | 43 | 0.972973 | 0.930233 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a92811c929aca57f055b4a75fa6e750bd303d5ee | 4,501 | py | Python | examples/librelist/tests/handlers/bounce_tests.py | medecau/lamson | e78520b857384462b9eecdedfc0f8c2e57cdd00a | [
"BSD-3-Clause"
] | null | null | null | examples/librelist/tests/handlers/bounce_tests.py | medecau/lamson | e78520b857384462b9eecdedfc0f8c2e57cdd00a | [
"BSD-3-Clause"
] | null | null | null | examples/librelist/tests/handlers/bounce_tests.py | medecau/lamson | e78520b857384462b9eecdedfc0f8c2e57cdd00a | [
"BSD-3-Clause"
] | null | null | null | from nose.tools import *
from lamson.testing import *
from lamson.mail import MailRequest
from lamson.routing import Router
from app.handlers.admin import module_and_to
from app.model import mailinglist
from handlers import admin_tests
from email.utils import parseaddr
from lamson import bounce
from config import settings
sender = admin_tests.sender
list_addr = admin_tests.list_addr
client = admin_tests.client
def setup():
clear_queue(queue_dir=settings.BOUNCE_ARCHIVE)
def create_bounce(To, From):
msg = MailRequest("fakepeer", From, To, open("tests/bounce.msg").read())
assert msg.is_bounce()
msg.bounce.final_recipient = From
msg.bounce.headers['Final-Recipient'] = From
msg.bounce.original['from'] = From
msg.bounce.original['to'] = To
msg.bounce.original.To = set([To])
msg.bounce.original.From = From
return msg
def test_hard_bounce_disables_user():
# get them into a posting state
admin_tests.test_existing_user_posts_message()
assert_in_state('app.handlers.admin', list_addr, sender, 'POSTING')
clear_queue()
assert mailinglist.find_subscriptions(sender, list_addr)
# force them to HARD bounce
msg = create_bounce(list_addr, sender)
Router.deliver(msg)
assert_in_state('app.handlers.admin', list_addr, sender, 'BOUNCING')
assert_in_state('app.handlers.bounce', list_addr, sender, 'BOUNCING')
assert not delivered('unbounce'), "A HARD bounce should be silent."
assert_equal(len(queue(queue_dir=settings.BOUNCE_ARCHIVE).keys()), 1)
assert not mailinglist.find_subscriptions(sender, list_addr)
# make sure that any attempts to post return a "you're bouncing dude" message
unbounce = client.say(list_addr, 'So anyway as I was saying.', 'unbounce')
    assert_in_state('app.handlers.admin', list_addr, sender, 'BOUNCING')
    # now have them try to unbounce
    msg = client.say(unbounce['from'], "Please put me back on, I'll be good.",
                     'unbounce-confirm')
    # handle the bounce confirmation
    client.say(msg['from'], "Confirmed to unbounce.", 'noreply')
    # alright they should be in the unbounce state for the global bounce handler
    assert_in_state('app.handlers.bounce', list_addr, sender,
                    'UNBOUNCED')
    # and they need to be back to POSTING for regular operations
    assert_in_state('app.handlers.admin', list_addr, sender, 'POSTING')
    assert mailinglist.find_subscriptions(sender, list_addr)
    # and make sure that only the original bounce is in the bounce archive
    assert_equal(len(queue(queue_dir=settings.BOUNCE_ARCHIVE).keys()), 1)


def test_soft_bounce_tells_them():
    setup()
    # get them into a posting state
    admin_tests.test_existing_user_posts_message()
    assert_in_state('app.handlers.admin', list_addr, sender, 'POSTING')
    clear_queue()
    assert mailinglist.find_subscriptions(sender, list_addr)
    # force them to soft bounce
    msg = create_bounce(list_addr, sender)
    msg.bounce.primary_status = (3, bounce.PRIMARY_STATUS_CODES[u'3'])
    assert msg.bounce.is_soft()
    Router.deliver(msg)
    assert_in_state('app.handlers.admin', list_addr, sender, 'BOUNCING')
    assert_in_state('app.handlers.bounce', list_addr, sender, 'BOUNCING')
    assert delivered('unbounce'), "Looks like unbounce didn't go out."
    assert_equal(len(queue(queue_dir=settings.BOUNCE_ARCHIVE).keys()), 1)
    assert not mailinglist.find_subscriptions(sender, list_addr)
    # make sure that any attempts to post return a "you're bouncing dude" message
    unbounce = client.say(list_addr, 'So anyway as I was saying.', 'unbounce')
    assert_in_state('app.handlers.admin', list_addr, sender, 'BOUNCING')
    # now have them try to unbounce
    msg = client.say(unbounce['from'], "Please put me back on, I'll be good.",
                     'unbounce-confirm')
    # handle the bounce confirmation
    client.say(msg['from'], "Confirmed to unbounce.", 'noreply')
    # alright they should be in the unbounce state for the global bounce handler
    assert_in_state('app.handlers.bounce', list_addr, sender,
                    'UNBOUNCED')
    # and they need to be back to POSTING for regular operations
    assert_in_state('app.handlers.admin', list_addr, sender, 'POSTING')
    assert mailinglist.find_subscriptions(sender, list_addr)
    # and make sure that only the original bounce is in the bounce archive
    assert_equal(len(queue(queue_dir=settings.BOUNCE_ARCHIVE).keys()), 1)
# === forms/tokens/token.py (willgeorgetaylor/guten-tagger, MIT) ===
from abc import abstractmethod, ABCMeta

class Token(object):
    __metaclass__ = ABCMeta

    def __init__(self, form, language):
        self._form = form
        self._language = language

    @abstractmethod
    def generate_flags(self):
        pass

    def wrap_flags(self, flags):
        return self, flags

    def __repr__(self):
        return self.form

    @property
    def form(self):
        return self._form

    @property
    def language(self):
        return self._language
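For context, a minimal sketch of how a concrete subclass might satisfy the abstract interface. The `NameToken` class and its flag logic are invented for illustration, and the Python 3 `metaclass=` form stands in for the file's Python 2 `__metaclass__` idiom:

```python
from abc import ABCMeta, abstractmethod


class Token(object, metaclass=ABCMeta):
    def __init__(self, form, language):
        self._form = form
        self._language = language

    @abstractmethod
    def generate_flags(self):
        pass


class NameToken(Token):
    """Hypothetical concrete subclass, for illustration only."""

    def generate_flags(self):
        # flag capitalized surface forms
        return ['capitalized'] if self._form[:1].isupper() else []


print(NameToken('Alice', 'en').generate_flags())  # → ['capitalized']
```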
# === ctsb/problems/tests/__init__.py (paula-gradu/ctsb, Apache-2.0) ===
# problems/tests init file
# === posthog/test/test_feature_flag.py (rightlyip/posthog, MIT) ===
from posthog.models import Cohort, FeatureFlag, Person
from posthog.models.feature_flag import FeatureFlagMatch
from posthog.test.base import BaseTest


class TestFeatureFlag(BaseTest):
    def test_blank_flag(self):
        # Blank feature flags now default to be released for everyone
        feature_flag = self.create_feature_flag()
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertEqual(feature_flag.matches("another_id"), FeatureFlagMatch())

    def test_rollout_percentage(self):
        feature_flag = self.create_feature_flag(filters={"groups": [{"rollout_percentage": 50}]})
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))

    def test_empty_group(self):
        feature_flag = self.create_feature_flag(filters={"groups": [{}]})
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertEqual(feature_flag.matches("another_id"), FeatureFlagMatch())

    def test_null_rollout_percentage(self):
        feature_flag = self.create_feature_flag(filters={"groups": [{"properties": [], "rollout_percentage": None}]})
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())

    def test_complicated_flag(self):
        Person.objects.create(
            team=self.team, distinct_ids=["test_id"], properties={"email": "test@posthog.com"},
        )
        feature_flag = self.create_feature_flag(
            filters={
                "groups": [
                    {
                        "properties": [
                            {"key": "email", "type": "person", "value": "test@posthog.com", "operator": "exact"}
                        ],
                        "rollout_percentage": 100,
                    },
                    {"rollout_percentage": 50},
                ]
            }
        )
        self.assertEqual(feature_flag.matches("test_id"), FeatureFlagMatch())
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))

    def test_multi_property_filters(self):
        Person.objects.create(
            team=self.team, distinct_ids=["example_id"], properties={"email": "tim@posthog.com"},
        )
        Person.objects.create(
            team=self.team, distinct_ids=["another_id"], properties={"email": "example@example.com"},
        )
        feature_flag = self.create_feature_flag(
            filters={
                "groups": [
                    {"properties": [{"key": "email", "value": "tim@posthog.com"}]},
                    {"properties": [{"key": "email", "value": "example@example.com"}]},
                ]
            }
        )
        with self.assertNumQueries(1):
            self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        with self.assertNumQueries(1):
            self.assertEqual(feature_flag.matches("another_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("false_id"))

    def test_user_in_cohort(self):
        Person.objects.create(team=self.team, distinct_ids=["example_id"], properties={"$some_prop": "something"})
        cohort = Cohort.objects.create(
            team=self.team, groups=[{"properties": {"$some_prop": "something"}}], name="cohort1"
        )
        cohort.calculate_people(use_clickhouse=False)
        feature_flag = self.create_feature_flag(
            filters={"groups": [{"properties": [{"key": "id", "value": cohort.pk, "type": "cohort"}]}]}
        )
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))

    def test_legacy_rollout_percentage(self):
        feature_flag = self.create_feature_flag(rollout_percentage=50)
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))

    def test_legacy_property_filters(self):
        Person.objects.create(
            team=self.team, distinct_ids=["example_id"], properties={"email": "tim@posthog.com"},
        )
        Person.objects.create(
            team=self.team, distinct_ids=["another_id"], properties={"email": "example@example.com"},
        )
        feature_flag = self.create_feature_flag(filters={"properties": [{"key": "email", "value": "tim@posthog.com"}]})
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))

    def test_legacy_rollout_and_property_filter(self):
        Person.objects.create(
            team=self.team, distinct_ids=["example_id"], properties={"email": "tim@posthog.com"},
        )
        Person.objects.create(
            team=self.team, distinct_ids=["another_id"], properties={"email": "tim@posthog.com"},
        )
        Person.objects.create(
            team=self.team, distinct_ids=["id_number_3"], properties={"email": "example@example.com"},
        )
        feature_flag = self.create_feature_flag(
            rollout_percentage=50,
            filters={"properties": [{"key": "email", "value": "tim@posthog.com", "type": "person"}]},
        )
        with self.assertNumQueries(1):
            self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))
        self.assertIsNone(feature_flag.matches("id_number_3"))

    def test_legacy_user_in_cohort(self):
        Person.objects.create(team=self.team, distinct_ids=["example_id"], properties={"$some_prop": "something"})
        cohort = Cohort.objects.create(
            team=self.team, groups=[{"properties": {"$some_prop": "something"}}], name="cohort1"
        )
        cohort.calculate_people(use_clickhouse=False)
        feature_flag = self.create_feature_flag(
            filters={"properties": [{"key": "id", "value": cohort.pk, "type": "cohort"}]}
        )
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch())
        self.assertIsNone(feature_flag.matches("another_id"))

    def test_variants(self):
        feature_flag = self.create_feature_flag(
            filters={
                "groups": [{"properties": [], "rollout_percentage": None}],
                "multivariate": {
                    "variants": [
                        {"key": "first-variant", "name": "First Variant", "rollout_percentage": 50},
                        {"key": "second-variant", "name": "Second Variant", "rollout_percentage": 25},
                        {"key": "third-variant", "name": "Third Variant", "rollout_percentage": 25},
                    ],
                },
            }
        )
        self.assertEqual(feature_flag.matches("11"), FeatureFlagMatch(variant="first-variant"))
        self.assertEqual(feature_flag.matches("example_id"), FeatureFlagMatch(variant="second-variant"))
        self.assertEqual(feature_flag.matches("3"), FeatureFlagMatch(variant="third-variant"))

    def create_feature_flag(self, **kwargs):
        return FeatureFlag.objects.create(
            team=self.team, name="Beta feature", key="beta-feature", created_by=self.user, **kwargs
        )
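The percentage-rollout behavior these tests exercise boils down to deterministic hashing of the distinct id: the same id always lands in the same bucket, so `matches` is stable across calls. A self-contained illustrative sketch (the function name and hashing constants are invented, not PostHog's actual scheme):

```python
import hashlib


def simple_rollout_match(key, distinct_id, rollout_percentage):
    """Deterministically map a (flag key, user id) pair to [0, 1) and
    compare it against the rollout percentage.

    Illustrative only -- not PostHog's real hashing implementation.
    """
    digest = hashlib.sha1(('%s.%s' % (key, distinct_id)).encode('utf-8')).hexdigest()
    # use the leading 15 hex digits as a fixed-precision fraction
    bucket = int(digest[:15], 16) / float(0xFFFFFFFFFFFFFFF)
    return bucket < (rollout_percentage / 100.0)


# The same id always gets the same answer:
print(simple_rollout_match('beta-feature', 'example_id', 0))  # → False
print(simple_rollout_match('beta-feature', 'example_id', 50))
```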
# === p3_collab-compet/network_separate.py (cshreyastech/deep-reinforcement-learning-v2, MIT) ===
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def hidden_init(layer):
    fan_in = layer.weight.data.size()[0]
    lim = 1. / np.sqrt(fan_in)
    return (-lim, lim)


class Actor(nn.Module):
    """Actor (Policy) Model."""

    def __init__(self, state_size, action_size, h1=256, h2=128, h3=128, h4=64):
        """Initialize parameters and build model.

        Params
        ======
            state_size (int): Dimension of each state
            action_size (int): Dimension of each action
            h1-h4 (int): Hidden layer sizes
        """
        super(Actor, self).__init__()
        self.bn = nn.BatchNorm1d(state_size)
        self.fc1 = nn.Linear(state_size, h1)
        self.bn1 = nn.BatchNorm1d(h1)
        self.fc2 = nn.Linear(h1, h2)
        self.bn2 = nn.BatchNorm1d(h2)
        self.fc3 = nn.Linear(h2, h3)
        self.bn3 = nn.BatchNorm1d(h3)
        self.fc4 = nn.Linear(h3, h4)
        self.bn4 = nn.BatchNorm1d(h4)
        self.fc5 = nn.Linear(h4, action_size)
        self.reset_parameters()

    def reset_parameters(self):
        self.fc1.weight.data.uniform_(*hidden_init(self.fc1))
        self.fc2.weight.data.uniform_(*hidden_init(self.fc2))
        self.fc3.weight.data.uniform_(*hidden_init(self.fc3))
        self.fc4.weight.data.uniform_(*hidden_init(self.fc4))
        self.fc5.weight.data.uniform_(-3e-3, 3e-3)

    def forward(self, state):
        """Build an actor (policy) network that maps states -> actions."""
        x = F.relu(self.fc1(self.bn(state)))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = torch.tanh(self.fc5(x))
        return x


class Critic(nn.Module):
    """Critic (Value) Model."""

    def __init__(self, state_size, action_size, h1=512, h2=256, h3=128, h4=64):
        """Initialize parameters and build model.

        Params
        ======
            state_size (int): Dimension of each state
            action_size (int): Dimension of each action
            h1-h4 (int): Hidden layer sizes
        """
        super(Critic, self).__init__()
        self.bn = nn.BatchNorm1d(state_size)
        self.fc1 = nn.Linear(state_size + action_size, h1)
        self.bn1 = nn.BatchNorm1d(h1)
        self.fc2 = nn.Linear(h1, h2)
        self.bn2 = nn.BatchNorm1d(h2)
        self.fc3 = nn.Linear(h2, h3)
        self.bn3 = nn.BatchNorm1d(h3)
        self.fc4 = nn.Linear(h3, h4)
        self.bn4 = nn.BatchNorm1d(h4)
        self.fc5 = nn.Linear(h4, 1)
        self.reset_parameters()

    def reset_parameters(self):
        self.fc1.weight.data.uniform_(*hidden_init(self.fc1))
        self.fc2.weight.data.uniform_(*hidden_init(self.fc2))
        self.fc3.weight.data.uniform_(*hidden_init(self.fc3))
        self.fc4.weight.data.uniform_(*hidden_init(self.fc4))
        self.fc5.weight.data.uniform_(-3e-3, 3e-3)

    def forward(self, state, action1, action2):
        """Build a critic (value) network that maps (state, action) pairs -> Q-values."""
        x = torch.cat((self.bn(state), action1, action2), dim=1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        return self.fc5(x)
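The `hidden_init` helper above implements the standard fan-in scaled uniform initialization, bounding each weight by ±1/√fan_in. A standalone arithmetic sketch (the function name here is invented for illustration):

```python
import numpy as np


def hidden_init_bounds(fan_in):
    # fan-in scaled uniform bounds, as in hidden_init: lim = 1 / sqrt(fan_in)
    lim = 1.0 / np.sqrt(fan_in)
    return (-lim, lim)


# For a 256-unit input layer the bounds work out to +/- 1/16:
print(hidden_init_bounds(256))  # → (-0.0625, 0.0625)
```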
# === data/CorrFusionModel.py (RuiTan/CorrFusionNet, MIT) ===
import torch
from torch import *
import torch.nn.functional as F


class CorrFusion(nn.Module):
    def __init__(self):
        super(CorrFusion, self).__init__()
# === python/testData/inspections/PyAugmentAssignmentInspection/mult.py (Tasemo/intellij-community, Apache-2.0) ===
<weak_warning descr="Assignment can be replaced with an augmented assignment">var = var * 2</weak_warning>
# === src/graph_transpiler/webdnn/optimizer/sub_rules/__init__.py (steerapi/webdnn, MIT) ===
from webdnn.optimizer.sub_rules import concat_zero_padding
from webdnn.optimizer.sub_rules import constant_folding
from webdnn.optimizer.sub_rules import conv_filter_pruning
from webdnn.optimizer.sub_rules import convolution2d_svd_compression
from webdnn.optimizer.sub_rules import dump_graph
from webdnn.optimizer.sub_rules import elementwise_kernel_fusion
from webdnn.optimizer.sub_rules import merge_tensordot_and_elementwise_mul
from webdnn.optimizer.sub_rules import remove_no_effect_operator
from webdnn.optimizer.sub_rules import remove_redundant_operator
from webdnn.optimizer.sub_rules import replace_convolution_by_im2col
from webdnn.optimizer.sub_rules import replace_deconvolution_by_col2im
from webdnn.optimizer.sub_rules import replace_linear_by_tensordot
from webdnn.optimizer.sub_rules import replace_scalar_operator
from webdnn.optimizer.sub_rules import simplify_associative_operator
from webdnn.optimizer.sub_rules import simplify_elementwise_sequence
from webdnn.optimizer.sub_rules import update_inplace_attribute
# === h5glance/__main__.py (aparamon/h5glance, BSD-3-Clause) ===
from .terminal import main

main()
# === Prototype/core/dataset/__init__.py (marc-ortuno/Vocal-Percussion-Classification-for-Real-Time-Context, MIT) ===
from .reader import read_dataset, get_dataset
from .dataset_analyzer import dataset_analyzer

# === src/environment/__init__.py (Qnubo-Tech/epydemic, AFL-3.0) ===
from .agent import Agent
from src.environment.status import Status
from src.environment.disease import Disease, Immunity, Infection
from src.environment.society import Society
from src.environment.mobility import MobilityFactory, MobilityType
# === gbvision/gui/feed_window.py (computerboy0555/GBVision, Apache-2.0) ===
from .opencv_window import OpenCVWindow

class FeedWindow(OpenCVWindow):
    """
    a basic window class
    """
# === captioner/modeling/model.py (eyadgaran/caption-it, MIT) ===
'''
Module to define the model(s) used
'''
__author__ = 'Elisha Yadgaran'

from simpleml.models import SklearnModel, KerasEncoderDecoderStateClassifier, KerasEncoderDecoderStatelessClassifier
from simpleml.models.external_models import ExternalModelMixin
from sklearn.feature_extraction.text import CountVectorizer
from keras.layers import Dense, Embedding, LSTM, TimeDistributed, Masking, Input, \
    RepeatVector, concatenate, Lambda
import keras.backend as K
import numpy as np
import logging

LOGGER = logging.getLogger(__name__)

class WrappedSklearnCountVectorizer(CountVectorizer, ExternalModelMixin):
    def __init__(self, pad_token="#PAD#", unknown_token="um",
                 start_token="#START#", end_token="#END#", **kwargs):
        super(WrappedSklearnCountVectorizer, self).__init__(**kwargs)
        self.pad_token = pad_token
        self.unknown_token = unknown_token
        self.start_token = start_token
        self.end_token = end_token

        self.pad_index = 0
        self.start_index = 1
        self.unknown_index = 2
        self.end_index = 3

    def get_index(self, token):
        return self.vocabulary_.get(token, self.vocabulary_.get(self.unknown_token))

    def get_token(self, index):
        if not hasattr(self, 'reverse_vocab'):
            self.reverse_vocab = {v: k for k, v in self.vocabulary_.items()}
        return self.reverse_vocab.get(index)

    def fit(self, X, *args, **kwargs):
        super(WrappedSklearnCountVectorizer, self).fit(X, *args, **kwargs)
        vocab = self.vocabulary_
        special_tokens = [self.pad_token, self.start_token, self.unknown_token, self.end_token]
        for token in special_tokens:
            vocab.pop(token, None)
        self.vocabulary_ = {token: index for index, token in enumerate(special_tokens + list(vocab))}

        # sanity check
        for token, index in zip(special_tokens, [self.pad_index, self.start_index, self.unknown_index, self.end_index]):
            assert(index == self.vocabulary_.get(token))

    def predict(self, X):
        '''
        Assume X is an ndarray of tokens
        '''
        return np.apply_along_axis(self.index_tokens, 0, X)

    def index_tokens(self, token_list):
        '''
        Index list of tokens
        '''
        return [self.get_index(token) for token in token_list]

    def humanize_token_indices(self, index_list):
        '''
        Tokenize list of indices
        '''
        return ' '.join(
            self.get_token(index) for index in index_list
            if index not in [self.pad_index, self.start_index, self.end_index]
        ).strip()

class TextProcessor(SklearnModel):
    def _create_external_model(self, **kwargs):
        return WrappedSklearnCountVectorizer(**kwargs)

    def inverse_transform(self, *args):
        return self.external_model.humanize_token_indices(*args)

    @property
    def initial_response(self):
        '''
        When a request first comes in, only the start token is available.
        Subsequent output tokens are recursively fed back into the neural net
        '''
        return np.array([self.external_model.start_index])

class ImageContextCaptionDecoder(KerasEncoderDecoderStateClassifier):
    '''
    Networks used for training and predicting a seq2seq caption using
    an input image as the initial decoder state (does not feed the image in at
    every timestep, only the states)

    Dynamically creates inference network with training weights before predicting
    (real-time recurrent behavior is slightly different)
    '''
    def build_network(self, model, **kwargs):
        '''
        training network

        Input:
            X = [image embedding, tokenized_caption]
            y = [shifted_tokenized_caption]
        Output:
            y = [predicted_tokenized_captions]
        '''
        IMG_EMBED_SIZE = 2048  # InceptionV3 output
        IMG_EMBED_BOTTLENECK = 120
        WORD_EMBED_SIZE = 100
        LSTM_UNITS = 300
        LOGIT_BOTTLENECK = 120
        VOCABULARY_SIZE = kwargs.get('vocabulary_size')
        CAPTION_LENGTH = kwargs.get('pad_length') - 1  # Subtract one because we dont predict the start token
        PAD_INDEX = kwargs.get('pad_index')

        # Save config values for later
        new_configs = {
            'IMG_EMBED_SIZE': IMG_EMBED_SIZE,
            'IMG_EMBED_BOTTLENECK': IMG_EMBED_BOTTLENECK,
            'WORD_EMBED_SIZE': WORD_EMBED_SIZE,
            'LSTM_UNITS': LSTM_UNITS,
            'LOGIT_BOTTLENECK': LOGIT_BOTTLENECK,
            'VOCABULARY_SIZE': VOCABULARY_SIZE,
            'CAPTION_LENGTH': CAPTION_LENGTH,
            'PAD_INDEX': PAD_INDEX,
        }
        self.config.update(new_configs)

        ###############
        # Image Input #
        ###############
        # [batch_size, IMG_EMBED_SIZE] of CNN image features
        image_input = Input(shape=(IMG_EMBED_SIZE,), dtype='float32', name='image_input')
        # we use bottleneck here to reduce the number of parameters
        # image embedding -> bottleneck
        img_bottleneck = Dense(IMG_EMBED_BOTTLENECK, activation='elu', name='image_bottleneck')(image_input)
        # image embedding bottleneck -> lstm initial state
        initial_image_state = Dense(LSTM_UNITS, activation='elu', name='image_context')(img_bottleneck)

        #################
        # Caption Input #
        #################
        # [batch_size, time steps] of word ids
        caption_input = Input(shape=(CAPTION_LENGTH,), dtype='int32', name='caption_input')
        # Mask padding
        padding_mask = Masking(mask_value=PAD_INDEX)(caption_input)
        # word -> embedding
        caption_embeddings = Embedding(VOCABULARY_SIZE, WORD_EMBED_SIZE, name='caption_embedding')(padding_mask)

        ###########
        # Decoder #
        ###########
        # lstm cell
        decoder_output = LSTM(LSTM_UNITS, return_sequences=True, name='decoder_lstm')(
            caption_embeddings, initial_state=(initial_image_state, initial_image_state))
        # we use bottleneck here to reduce model complexity
        # lstm output -> logits bottleneck
        decoder_bottleneck = Dense(LOGIT_BOTTLENECK, activation="elu", name='decoder_bottleneck')(decoder_output)
        # logits bottleneck -> logits for next token prediction
        # Generate it for each timestamp independently
        next_token_prediction = Dense(VOCABULARY_SIZE, activation='softmax', name='prediction')(decoder_bottleneck)

        model = model([image_input, caption_input], next_token_prediction)
        model.compile(loss='sparse_categorical_crossentropy',
                      optimizer='adam',
                      metrics=[])

        # print(model.summary())
        # from keras.utils.vis_utils import plot_model
        # plot_model(model, to_file='model.png', show_shapes=True)

        return model

    def build_inference_network(self, model):
        '''
        Inference network - Differs from training one so gets established dynamically
        at inference time

        Input:
            X = [image embedding, tokenized_caption]
            y = [shifted_tokenized_caption]
        Output:
            y = [predicted_tokenized_captions]
        '''
        IMG_EMBED_SIZE = self.config.get('IMG_EMBED_SIZE')
        IMG_EMBED_BOTTLENECK = self.config.get('IMG_EMBED_BOTTLENECK')
        WORD_EMBED_SIZE = self.config.get('WORD_EMBED_SIZE')
        LSTM_UNITS = self.config.get('LSTM_UNITS')
        LOGIT_BOTTLENECK = self.config.get('LOGIT_BOTTLENECK')
        VOCABULARY_SIZE = self.config.get('VOCABULARY_SIZE')

        #################
        # Encoder Input #
        #################
        # [batch_size, IMG_EMBED_SIZE] of CNN image features
        image_input = Input(shape=(IMG_EMBED_SIZE,), dtype='float32', name='image_input')
        # we use bottleneck here to reduce the number of parameters
        # image embedding -> bottleneck
        img_bottleneck = Dense(IMG_EMBED_BOTTLENECK, activation='elu', name='image_bottleneck')(image_input)
        # image embedding bottleneck -> lstm initial state
        initial_image_state = Dense(LSTM_UNITS, activation='elu', name='image_context')(img_bottleneck)

        encoder_model = model(image_input, initial_image_state)
        encoder_model.compile(loss='sparse_categorical_crossentropy',
                              optimizer='adam')

        #################
        # Decoder Input #
        #################
        # [batch_size, time steps] of word ids
        caption_input = Input(shape=(None,), dtype='int32', name='caption_input')
        decoder_state_h_input = Input(shape=(LSTM_UNITS,), dtype='float32', name='decoder_h_input')
        decoder_state_c_input = Input(shape=(LSTM_UNITS,), dtype='float32', name='decoder_c_input')
        # word -> embedding
        caption_embeddings = Embedding(VOCABULARY_SIZE, WORD_EMBED_SIZE, name='caption_embeddings')(caption_input)
        # lstm cell
        decoder_output, state_h, state_c = LSTM(LSTM_UNITS, return_state=True, name='decoder_lstm')(
            caption_embeddings, initial_state=[decoder_state_h_input, decoder_state_c_input])
        # we use bottleneck here to reduce model complexity
        # lstm output -> logits bottleneck
        decoder_bottleneck = Dense(LOGIT_BOTTLENECK, activation="elu", name='decoder_bottleneck')(decoder_output)
        # logits bottleneck -> logits for next token prediction
        next_token_prediction = Dense(VOCABULARY_SIZE, activation='softmax', name='prediction')(decoder_bottleneck)

        decoder_model = model([caption_input, decoder_state_h_input, decoder_state_c_input], [next_token_prediction, state_h, state_c])
        decoder_model.compile(loss='sparse_categorical_crossentropy',
                              optimizer='adam')

        return encoder_model, decoder_model
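At inference time an encoder/decoder pair like this is typically driven with a greedy decoding loop: encode the image once into an initial state, then recursively feed each predicted token (and the returned LSTM states) back into the decoder until the end token appears. A rough, self-contained sketch; the stub models, token indices, and function names below are invented stand-ins for the compiled Keras models, not the real SimpleML objects:

```python
import numpy as np

START, END = 1, 3  # placeholder special-token indices


def greedy_decode(encoder_predict, decoder_predict, image_embedding, max_len=20):
    """Greedy decoding: feed each predicted token back into the decoder."""
    # encoder: image embedding -> initial lstm state (used for both h and c)
    state = encoder_predict(image_embedding)
    h, c = state, state
    tokens = [START]
    for _ in range(max_len):
        # decode one step from the last token and the carried states
        probs, h, c = decoder_predict(np.array([[tokens[-1]]]), h, c)
        next_token = int(np.argmax(probs))
        tokens.append(next_token)
        if next_token == END:
            break
    return tokens


# Toy stand-ins for the compiled models (illustration only):
def fake_encoder(img):
    return np.zeros(4)


def fake_decoder(token, h, c):
    # always predict the END token
    return np.array([0.1, 0.1, 0.1, 0.7]), h, c


print(greedy_decode(fake_encoder, fake_decoder, np.zeros(8)))  # → [1, 3]
```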

class ImageCaptionDecoder(KerasEncoderDecoderStatelessClassifier):
    '''
    Networks used for training and predicting a seq2seq caption using
    an input image in every timestep

    Dynamically creates inference network with training weights before predicting
    (real-time recurrent behavior is slightly different)
    '''
    def build_network(self, model, **kwargs):
        '''
        training network

        Input:
            X = [image embedding, tokenized_caption]
            y = [shifted_tokenized_caption]
        Output:
            y = [predicted_tokenized_captions]
        '''
        IMG_EMBED_SIZE = 2048  # InceptionV3 output
        IMG_EMBED_BOTTLENECK = 120
        WORD_EMBED_SIZE = 100
        LSTM_UNITS = 300
        LOGIT_BOTTLENECK = 120
        VOCABULARY_SIZE = kwargs.get('vocabulary_size')
        CAPTION_LENGTH = kwargs.get('pad_length') - 1  # Subtract one because we dont predict the start token
        PAD_INDEX = kwargs.get('pad_index')

        # Save config values for later
        new_configs = {
            'IMG_EMBED_SIZE': IMG_EMBED_SIZE,
            'IMG_EMBED_BOTTLENECK': IMG_EMBED_BOTTLENECK,
            'WORD_EMBED_SIZE': WORD_EMBED_SIZE,
            'LSTM_UNITS': LSTM_UNITS,
            'LOGIT_BOTTLENECK': LOGIT_BOTTLENECK,
            'VOCABULARY_SIZE': VOCABULARY_SIZE,
            'CAPTION_LENGTH': CAPTION_LENGTH,
            'PAD_INDEX': PAD_INDEX,
        }
        self.config.update(new_configs)

        ###############
        # Image Input #
        ###############
        # [batch_size, IMG_EMBED_SIZE] of CNN image features
        image_input = Input(shape=(IMG_EMBED_SIZE,), dtype='float32', name='image_input')
        # we use bottleneck here to reduce the number of parameters
        # image embedding -> bottleneck
        img_bottleneck = Dense(IMG_EMBED_BOTTLENECK, activation='elu', name='image_bottleneck')(image_input)
        # Repeat image to feed at every timestep
        img_repeated = RepeatVector(CAPTION_LENGTH, name='image_repeated')(img_bottleneck)

        #################
        # Caption Input #
        #################
        # [batch_size, time steps] of word ids
        caption_input = Input(shape=(CAPTION_LENGTH,), dtype='int32', name='caption_input')
        # Mask padding
        padding_mask = Masking(mask_value=PAD_INDEX)(caption_input)
        # word -> embedding
        caption_embeddings = Embedding(VOCABULARY_SIZE, WORD_EMBED_SIZE, name='caption_embedding')(padding_mask)
        # lstm
        caption_encoding = LSTM(LSTM_UNITS, return_sequences=True, name='caption_encoding')(caption_embeddings)
        # we use bottleneck here to reduce model complexity
        # lstm output -> logits bottleneck
        caption_bottleneck = Dense(LOGIT_BOTTLENECK, activation="elu", name='caption_bottleneck')(caption_encoding)

        ###########
        # Decoder #
        ###########
        # merge inputs
        merged_encoding = concatenate([img_repeated, caption_bottleneck])
        # lstm cell
        decoder_output = LSTM(LSTM_UNITS, return_sequences=True, name='decoder_lstm')(merged_encoding)
        # we use bottleneck here to reduce model complexity
# lstm output -> logits bottleneck
decoder_bottleneck = Dense(LOGIT_BOTTLENECK, activation="elu", name='decoder_bottleneck')(decoder_output)
# logits bottleneck -> logits for next token prediction
# Generate it for each timestamp independently
next_token_prediction = Dense(VOCABULARY_SIZE, activation='softmax', name='prediction')(decoder_bottleneck)
model = model([image_input, caption_input], next_token_prediction)
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=[])
# print(model.summary())
# from keras.utils.vis_utils import plot_model
# plot_model(model, to_file='concat_model.png', show_shapes=True)
return model
def build_inference_network(self, model):
'''
Inference network - Differs from training one so gets established dynamically
at inference time
Input:
X = [image embedding, tokenized_caption]
y = [shifted_tokenized_caption]
Output:
y = [predicted_tokenized_captions]
'''
IMG_EMBED_SIZE = self.config.get('IMG_EMBED_SIZE')
IMG_EMBED_BOTTLENECK = self.config.get('IMG_EMBED_BOTTLENECK')
WORD_EMBED_SIZE = self.config.get('WORD_EMBED_SIZE')
LSTM_UNITS = self.config.get('LSTM_UNITS')
LOGIT_BOTTLENECK = self.config.get('LOGIT_BOTTLENECK')
VOCABULARY_SIZE = self.config.get('VOCABULARY_SIZE')
###############
# Image Input #
###############
# [batch_size, IMG_EMBED_SIZE] of CNN image features
image_input = Input(shape=(IMG_EMBED_SIZE,), dtype='float32', name='image_input')
# we use bottleneck here to reduce the number of parameters
# image embedding -> bottleneck
img_bottleneck = Dense(IMG_EMBED_BOTTLENECK, activation='elu', name='image_bottleneck')(image_input)
encoder_model = model(image_input, img_bottleneck)
encoder_model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam')
#################
# Decoder Input #
#################
# [batch_size, time steps] of word ids
caption_input = Input(shape=(None,), dtype='float32', name='caption_input')
decoder_image_input = Input(shape=(IMG_EMBED_BOTTLENECK,), dtype='float32', name='decoder_image_input')
# Repeat image to feed at every timestep
def repeat_vector(args):
layer_to_repeat, sequence_layer = args
return RepeatVector(K.shape(sequence_layer)[1], name='image_repeated')(layer_to_repeat)
img_repeated = Lambda(repeat_vector, output_shape=(None, IMG_EMBED_BOTTLENECK))([decoder_image_input, caption_input])
# word -> embedding
caption_embeddings = Embedding(VOCABULARY_SIZE, WORD_EMBED_SIZE, name='caption_embedding')(caption_input)
# lstm
caption_encoding = LSTM(LSTM_UNITS, return_sequences=True, name='caption_encoding')(caption_embeddings)
# we use bottleneck here to reduce model complexity
# lstm output -> logits bottleneck
caption_bottleneck = Dense(LOGIT_BOTTLENECK, activation="elu", name='caption_bottleneck')(caption_encoding)
###########
# Decoder #
###########
# merge inputs
merged_encoding = concatenate([img_repeated, caption_bottleneck])
# lstm cell
decoder_output = LSTM(LSTM_UNITS, name='decoder_lstm')(merged_encoding)
# we use bottleneck here to reduce model complexity
# lstm output -> logits bottleneck
decoder_bottleneck = Dense(LOGIT_BOTTLENECK, activation="elu", name='decoder_bottleneck')(decoder_output)
# logits bottleneck -> logits for next token prediction
# Generate it for each timestamp independently
next_token_prediction = Dense(VOCABULARY_SIZE, activation='softmax', name='prediction')(decoder_bottleneck)
decoder_model = model([caption_input, decoder_image_input], next_token_prediction)
decoder_model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam')
return encoder_model, decoder_model
| 40.67366 | 135 | 0.648977 | 1,912 | 17,449 | 5.65272 | 0.133368 | 0.025167 | 0.019985 | 0.01758 | 0.749537 | 0.739452 | 0.732791 | 0.727239 | 0.714101 | 0.698557 | 0 | 0.005027 | 0.247579 | 17,449 | 428 | 136 | 40.768692 | 0.818189 | 0.233652 | 0 | 0.56044 | 0 | 0 | 0.113932 | 0.01485 | 0 | 0 | 0 | 0 | 0.005495 | 1 | 0.082418 | false | 0 | 0.038462 | 0.016484 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f546c84d98ef07bf9e93ed93a998b32be4a97eac | 70 | py | Python | examples/tdd_19113/calculator.py | jfgrimm/ResearchCodingClub.github.io | 705f0c420b6e88cc420124f13c9a3cd9aebfd71a | [
"MIT"
] | 5 | 2017-08-20T09:29:00.000Z | 2019-08-06T12:38:42.000Z | examples/tdd_19113/calculator.py | jfgrimm/ResearchCodingClub.github.io | 705f0c420b6e88cc420124f13c9a3cd9aebfd71a | [
"MIT"
] | 2 | 2021-02-03T17:22:15.000Z | 2021-09-22T14:27:30.000Z | examples/tdd_19113/calculator.py | jfgrimm/ResearchCodingClub.github.io | 705f0c420b6e88cc420124f13c9a3cd9aebfd71a | [
"MIT"
] | 1 | 2021-02-17T15:39:42.000Z | 2021-02-17T15:39:42.000Z | def add(lhs, rhs):
"Adds the rhs to the lhs"
return lhs + rhs
| 17.5 | 29 | 0.6 | 13 | 70 | 3.230769 | 0.615385 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3 | 70 | 3 | 30 | 23.333333 | 0.857143 | 0.328571 | 0 | 0 | 0 | 0 | 0.328571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
f55f6dece3561ee12abcadf6ec1f93273af0ae3e | 52 | py | Python | main.py | MarvvanPal/foundations-project-template | e5ac79fb65192df63add6e92e7a35f108d17d8ba | [
"MIT"
] | 3 | 2021-04-12T14:56:31.000Z | 2021-10-30T07:05:49.000Z | main.py | MarvvanPal/foundations-project-template | e5ac79fb65192df63add6e92e7a35f108d17d8ba | [
"MIT"
] | 1 | 2021-04-08T14:20:40.000Z | 2021-04-08T14:20:40.000Z | main.py | MarvvanPal/foundations-project-template | e5ac79fb65192df63add6e92e7a35f108d17d8ba | [
"MIT"
] | 29 | 2021-04-08T13:09:38.000Z | 2021-10-31T11:08:03.000Z | from great_project.website import app # noqa: F401
| 26 | 51 | 0.788462 | 8 | 52 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0.153846 | 52 | 1 | 52 | 52 | 0.840909 | 0.192308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f565a16f31e119e746c73a1df9e4aa8c5c9bc7c4 | 107 | py | Python | config.py | shermike/bloopress | 4edb543dc8ea9aad27140edb68f396a06e7e25d4 | [
"MIT"
] | null | null | null | config.py | shermike/bloopress | 4edb543dc8ea9aad27140edb68f396a06e7e25d4 | [
"MIT"
] | null | null | null | config.py | shermike/bloopress | 4edb543dc8ea9aad27140edb68f396a06e7e25d4 | [
"MIT"
] | null | null | null | '''
Created on 18.11.2016
@author: Mike
'''
token = '246218053:AAFhSDVfEU30wZOnoSB-s98lVp3XU1uhXGg' | 15.285714 | 55 | 0.700935 | 11 | 107 | 6.818182 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255556 | 0.158879 | 107 | 7 | 55 | 15.285714 | 0.577778 | 0.336449 | 0 | 0 | 0 | 0 | 0.775862 | 0.775862 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f58b4d799c2aa12c86b96fe7129e2777b736dcc5 | 22 | py | Python | Prueba/hola.py | GoldenDiamonds/Estadia2019 | 653d37be8b81772db2d0336a361945e38d13e0c5 | [
"MIT"
] | null | null | null | Prueba/hola.py | GoldenDiamonds/Estadia2019 | 653d37be8b81772db2d0336a361945e38d13e0c5 | [
"MIT"
] | null | null | null | Prueba/hola.py | GoldenDiamonds/Estadia2019 | 653d37be8b81772db2d0336a361945e38d13e0c5 | [
"MIT"
] | null | null | null | print ("HOla tortoise" | 22 | 22 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 22 | 1 | 22 | 22 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0.565217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
f59d64ae2d7705abdf77f777d2c8e1763b94e7db | 25 | py | Python | src/deepke/name_entity_re/standard/models/__init__.py | johncolezhang/DeepKE | ea4552ec42cb003a835f00fc14fb454f9a9a7183 | [
"MIT"
] | 710 | 2021-08-01T16:43:59.000Z | 2022-03-31T08:39:17.000Z | src/deepke/name_entity_re/standard/models/__init__.py | johncolezhang/DeepKE | ea4552ec42cb003a835f00fc14fb454f9a9a7183 | [
"MIT"
] | 66 | 2019-06-09T12:14:31.000Z | 2021-07-27T05:54:35.000Z | src/deepke/name_entity_re/standard/models/__init__.py | johncolezhang/DeepKE | ea4552ec42cb003a835f00fc14fb454f9a9a7183 | [
"MIT"
] | 183 | 2018-09-07T06:57:13.000Z | 2021-08-01T08:50:15.000Z | from .InferBert import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.