hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0713bf1d16fde855bda0ed021b030d08feadd022 | 3,486 | py | Python | selfdrive/car/chrysler/radar_interface.py | 919bot/Tessa | 9b48ff9020e8fb6992fc78271f2720fd19e01093 | [
"MIT"
] | 85 | 2019-06-14T17:51:31.000Z | 2022-02-09T22:18:20.000Z | selfdrive/car/chrysler/radar_interface.py | 919bot/Tessa | 9b48ff9020e8fb6992fc78271f2720fd19e01093 | [
"MIT"
] | 4 | 2020-04-12T21:34:03.000Z | 2020-04-15T22:22:15.000Z | selfdrive/car/chrysler/radar_interface.py | 919bot/Tessa | 9b48ff9020e8fb6992fc78271f2720fd19e01093 | [
"MIT"
] | 73 | 2018-12-03T19:34:42.000Z | 2020-07-27T05:10:23.000Z | #!/usr/bin/env python3
import os
from opendbc.can.parser import CANParser
from cereal import car
from selfdrive.car.interfaces import RadarInterfaceBase
RADAR_MSGS_C = list(range(0x2c2, 0x2d4+2, 2)) # c_ messages 706,...,724
RADAR_MSGS_D = list(range(0x2a2, 0x2b4+2, 2)) # d_ messages
LAST_MSG = max(RADAR_MSGS_C + RADAR_MSGS_D)
NUMBER_MSGS = len(RADAR_MSGS_C) + len(RADAR_MSGS_D)
def _create_radar_can_parser():
dbc_f = 'chrysler_pacifica_2017_hybrid_private_fusion.dbc'
msg_n = len(RADAR_MSGS_C)
# list of [(signal name, message name or number, initial values), (...)]
# [('RADAR_STATE', 1024, 0),
# ('LONG_DIST', 1072, 255),
# ('LONG_DIST', 1073, 255),
# ('LONG_DIST', 1074, 255),
# ('LONG_DIST', 1075, 255),
# The factor and offset are applied by the dbc parsing library, so the
# default values should be after the factor/offset are applied.
signals = list(zip(['LONG_DIST'] * msg_n +
['LAT_DIST'] * msg_n +
['REL_SPEED'] * msg_n,
RADAR_MSGS_C * 2 + # LONG_DIST, LAT_DIST
RADAR_MSGS_D, # REL_SPEED
[0] * msg_n + # LONG_DIST
[-1000] * msg_n + # LAT_DIST
[-146.278] * msg_n)) # REL_SPEED default; a raw value of 0 maps to this after factor/offset
# TODO what are the checks actually used for?
# honda only checks the last message,
# toyota checks all the messages. Which do we want?
checks = list(zip(RADAR_MSGS_C +
RADAR_MSGS_D,
[20]*msg_n + # 20Hz (0.05s)
[20]*msg_n)) # 20Hz (0.05s)
return CANParser(os.path.splitext(dbc_f)[0], signals, checks, 1)
def _address_to_track(address):
if address in RADAR_MSGS_C:
return (address - RADAR_MSGS_C[0]) // 2
if address in RADAR_MSGS_D:
return (address - RADAR_MSGS_D[0]) // 2
raise ValueError("radar received unexpected address %d" % address)
class RadarInterface(RadarInterfaceBase):
def __init__(self, CP):
self.pts = {}
self.delay = 0 # Delay of radar #TUNE
self.rcp = _create_radar_can_parser()
self.updated_messages = set()
self.trigger_msg = LAST_MSG
def update(self, can_strings):
vls = self.rcp.update_strings(can_strings)
self.updated_messages.update(vls)
if self.trigger_msg not in self.updated_messages:
return None
ret = car.RadarData.new_message()
errors = []
if not self.rcp.can_valid:
errors.append("canError")
ret.errors = errors
for ii in self.updated_messages: # ii should be the message ID as a number
cpt = self.rcp.vl[ii]
trackId = _address_to_track(ii)
if trackId not in self.pts:
self.pts[trackId] = car.RadarData.RadarPoint.new_message()
self.pts[trackId].trackId = trackId
self.pts[trackId].aRel = float('nan')
self.pts[trackId].yvRel = float('nan')
self.pts[trackId].measured = True
if 'LONG_DIST' in cpt: # c_* message
self.pts[trackId].dRel = cpt['LONG_DIST'] # from front of car
# our lat_dist is positive to the right in car's frame.
# TODO what does yRel want?
self.pts[trackId].yRel = cpt['LAT_DIST'] # in car frame's y axis, left is positive
else: # d_* message
self.pts[trackId].vRel = cpt['REL_SPEED']
# We want a list, not a dictionary. Filter out LONG_DIST==0 because that means it's not valid.
ret.points = [x for x in self.pts.values() if x.dRel != 0]
self.updated_messages.clear()
return ret
| 37.085106 | 98 | 0.645439 | 520 | 3,486 | 4.126923 | 0.344231 | 0.062908 | 0.037279 | 0.029357 | 0.070829 | 0.031687 | 0 | 0 | 0 | 0 | 0 | 0.03624 | 0.240103 | 3,486 | 93 | 99 | 37.483871 | 0.773877 | 0.274527 | 0 | 0.03125 | 0 | 0 | 0.063651 | 0.019215 | 0 | 0 | 0.008006 | 0.010753 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
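The signals table built in `_create_radar_can_parser` above can be sketched standalone to see the `(signal name, message address, default value)` triples that `CANParser` receives:

```python
# Standalone sketch of the signals list from _create_radar_can_parser above.
# Message ID ranges, names, and defaults are taken directly from that file.
RADAR_MSGS_C = list(range(0x2c2, 0x2d4 + 2, 2))  # c_ messages: LONG_DIST, LAT_DIST
RADAR_MSGS_D = list(range(0x2a2, 0x2b4 + 2, 2))  # d_ messages: REL_SPEED
msg_n = len(RADAR_MSGS_C)

# zip() pairs names, addresses, and defaults positionally; each c_ address
# appears twice (once for LONG_DIST, once for LAT_DIST), each d_ address once.
signals = list(zip(['LONG_DIST'] * msg_n + ['LAT_DIST'] * msg_n + ['REL_SPEED'] * msg_n,
                   RADAR_MSGS_C * 2 + RADAR_MSGS_D,
                   [0] * msg_n + [-1000] * msg_n + [-146.278] * msg_n))

print(len(signals))  # 30 (3 signals x 10 messages)
print(signals[0])    # ('LONG_DIST', 706, 0)
```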
071afc12457e1373ac1b61126e3c5e710f213fb9 | 1,536 | py | Python | app/util/auth2.py | FSU-ACM/Contest-Server | 00a71cdcee1a7e4d4e4d8e33b5d6decf27f02313 | [
"MIT"
] | 8 | 2019-01-13T21:57:53.000Z | 2021-11-29T12:32:48.000Z | app/util/auth2.py | FSU-ACM/Contest-Server | 00a71cdcee1a7e4d4e4d8e33b5d6decf27f02313 | [
"MIT"
] | 73 | 2018-02-13T00:58:39.000Z | 2022-02-10T11:59:53.000Z | app/util/auth2.py | FSU-ACM/Contest-Server | 00a71cdcee1a7e4d4e4d8e33b5d6decf27f02313 | [
"MIT"
] | 4 | 2018-02-08T18:56:54.000Z | 2019-02-13T19:01:53.000Z | """ util.auth2: Authentication tools
This module is based on util.auth, except with the action
paradigm removed.
"""
from flask import session
from app.models import Account
from app.util import course as course_util
# Session keys
SESSION_EMAIL = 'email'
def create_account(email: str, password: str, first_name: str,
last_name: str, fsuid: str, course_list: list = []):
"""
Creates an account for a single user.
:email: Required, the email address of the user.
:password: Required, user's chosen password.
:first_name: Required, user's first name.
:last_name: Required, user's last name.
:fsuid: Optional, user's FSUID.
:course_list: Optional, courses being taken by user
:return: Account object.
"""
account = Account(
email=email,
first_name=first_name,
last_name=last_name,
fsuid=fsuid,
is_admin=False
)
# Set user's extra credit courses
course_util.set_courses(account, course_list)
account.set_password(password)
account.save()
return account
def get_account(email: str=None):
"""
Retrieves account via email (defaults to using session), otherwise
redirects to login page.
:email: Optional email string, if not provided will use session['email']
:return: Account if email is present in session, None otherwise.
"""
try:
email = email or session['email']
return Account.objects.get_or_404(email=email)
except Exception:  # missing session email, or lookup failure in get_or_404
return None
| 26.033898 | 76 | 0.670573 | 204 | 1,536 | 4.946078 | 0.406863 | 0.044599 | 0.038652 | 0.033697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003439 | 0.242839 | 1,536 | 58 | 77 | 26.482759 | 0.864144 | 0.464844 | 0 | 0 | 0 | 0 | 0.013605 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.086957 | 0.130435 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
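Note that `create_account` above declares `course_list: list = []` — a mutable default argument, which Python evaluates once at function definition, not per call. A minimal standalone sketch of the pitfall and the usual `None`-sentinel fix (function and course names are illustrative):

```python
def add_course_bad(course, course_list=[]):
    # The default list is created once, at def time, so every call that
    # relies on the default shares (and mutates) the same list object.
    course_list.append(course)
    return course_list

def add_course_good(course, course_list=None):
    # None sentinel: a fresh list is created on every call.
    if course_list is None:
        course_list = []
    course_list.append(course)
    return course_list

print(add_course_bad('COP3014'))   # ['COP3014']
print(add_course_bad('CDA3100'))   # ['COP3014', 'CDA3100']  <- state leaked between calls
print(add_course_good('COP3014'))  # ['COP3014']
print(add_course_good('CDA3100'))  # ['CDA3100']
```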
071d3f55a7b2c99140b70a77b17ee7b9f4ba705d | 602 | py | Python | config.py | yasminbraga/ufopa-reports | 6d8b213eb0dfce6775d0bb0fd277e8dc09da041c | [
"MIT"
] | null | null | null | config.py | yasminbraga/ufopa-reports | 6d8b213eb0dfce6775d0bb0fd277e8dc09da041c | [
"MIT"
] | null | null | null | config.py | yasminbraga/ufopa-reports | 6d8b213eb0dfce6775d0bb0fd277e8dc09da041c | [
"MIT"
] | 2 | 2019-11-24T13:30:35.000Z | 2022-01-12T11:47:11.000Z | import os
class Config:
CSRF_ENABLED = True
SECRET_KEY = 'your-very-very-secret-key'
SQLALCHEMY_DATABASE_URI = 'postgresql:///flask_template_dev'
SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_ECHO = True
class Development(Config):
ENV = 'development'
DEBUG = True
TESTING = False
class Production(Config):
ENV = 'production'
DEBUG = False
SQLALCHEMY_DATABASE_URI = os.getenv('DATABASE_URL', 'postgres://firhokdcdnfygz:93231d3f2ae1156cabfc40f7e4ba08587a77f68a5e2072fbcbbdb30150ba4bcb@ec2-107-22-253-158.compute-1.amazonaws.com:5432/df9c5vvl0s21da')
| 27.363636 | 212 | 0.752492 | 65 | 602 | 6.784615 | 0.661538 | 0.040816 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111765 | 0.152824 | 602 | 21 | 213 | 28.666667 | 0.752941 | 0 | 0 | 0 | 0 | 0.066667 | 0.403654 | 0.348837 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
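Config classes like those above are usually selected at startup from an environment variable; a hedged sketch of that selection (the `FLASK_ENV` lookup and `config_map` are illustrative, not part of this project — real Flask apps often pass the chosen class to `app.config.from_object()`):

```python
import os

class Config:
    CSRF_ENABLED = True
    SQLALCHEMY_TRACK_MODIFICATIONS = False

class Development(Config):
    ENV = 'development'
    DEBUG = True

class Production(Config):
    ENV = 'production'
    DEBUG = False

# Hypothetical mapping from environment name to config class.
config_map = {'development': Development, 'production': Production}

def pick_config():
    # Default to Production when FLASK_ENV is unset or unrecognized.
    return config_map.get(os.getenv('FLASK_ENV', 'production'), Production)

print(pick_config().ENV)
```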
071dbe42fd5b14449158462daf2a890df418a73d | 2,651 | py | Python | heat/api/openstack/v1/views/stacks_view.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 265 | 2015-01-02T09:33:22.000Z | 2022-03-26T23:19:54.000Z | heat/api/openstack/v1/views/stacks_view.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 8 | 2015-09-01T15:43:19.000Z | 2021-12-14T05:18:23.000Z | heat/api/openstack/v1/views/stacks_view.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 295 | 2015-01-06T07:00:40.000Z | 2021-09-06T08:05:06.000Z | #
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from heat.api.openstack.v1 import util
from heat.api.openstack.v1.views import views_common
from heat.rpc import api as rpc_api
_collection_name = 'stacks'
basic_keys = (
rpc_api.STACK_ID,
rpc_api.STACK_NAME,
rpc_api.STACK_DESCRIPTION,
rpc_api.STACK_STATUS,
rpc_api.STACK_STATUS_DATA,
rpc_api.STACK_CREATION_TIME,
rpc_api.STACK_DELETION_TIME,
rpc_api.STACK_UPDATED_TIME,
rpc_api.STACK_OWNER,
rpc_api.STACK_PARENT,
rpc_api.STACK_USER_PROJECT_ID,
rpc_api.STACK_TAGS,
)
def format_stack(req, stack, keys=None, include_project=False):
def transform(key, value):
if keys and key not in keys:
return
if key == rpc_api.STACK_ID:
yield ('id', value['stack_id'])
yield ('links', [util.make_link(req, value)])
if include_project:
yield ('project', value['tenant'])
elif key == rpc_api.STACK_ACTION:
return
elif (key == rpc_api.STACK_STATUS and
rpc_api.STACK_ACTION in stack):
# To avoid breaking API compatibility, we join RES_ACTION
# and RES_STATUS, so the API format doesn't expose the
# internal split of state into action/status
yield (key, '_'.join((stack[rpc_api.STACK_ACTION], value)))
else:
# TODO(zaneb): ensure parameters can be formatted for XML
# elif key == rpc_api.STACK_PARAMETERS:
# return key, json.dumps(value)
yield (key, value)
return dict(itertools.chain.from_iterable(
transform(k, v) for k, v in stack.items()))
def collection(req, stacks, count=None, include_project=False):
keys = basic_keys
formatted_stacks = [format_stack(req, s, keys, include_project)
for s in stacks]
result = {'stacks': formatted_stacks}
links = views_common.get_collection_links(req, formatted_stacks)
if links:
result['links'] = links
if count is not None:
result['count'] = count
return result
| 33.556962 | 78 | 0.659751 | 366 | 2,651 | 4.598361 | 0.387978 | 0.067736 | 0.117647 | 0.033274 | 0.058229 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003041 | 0.255753 | 2,651 | 78 | 79 | 33.987179 | 0.849975 | 0.311581 | 0 | 0.041667 | 0 | 0 | 0.028286 | 0 | 0 | 0 | 0 | 0.012821 | 0 | 1 | 0.0625 | false | 0 | 0.083333 | 0 | 0.229167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
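The `transform` generator plus `itertools.chain.from_iterable` pattern used in `format_stack` above can be exercised in isolation: each per-key generator may yield zero pairs (filtered keys), one pair (pass-through), or several (expanded keys), and the chain flattens them into one dict. A self-contained sketch with toy keys (names illustrative):

```python
import itertools

def transform(key, value):
    # Drop internal keys entirely, expand 'id' into two output pairs,
    # and pass everything else through unchanged.
    if key == 'action':
        return
    if key == 'id':
        yield ('id', value)
        yield ('links', ['/stacks/%s' % value])
    else:
        yield (key, value)

stack = {'id': 'abc123', 'action': 'CREATE', 'name': 'demo'}
out = dict(itertools.chain.from_iterable(transform(k, v) for k, v in stack.items()))
print(out)  # {'id': 'abc123', 'links': ['/stacks/abc123'], 'name': 'demo'}
```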
072012e3a0677e91ae06d829a2d1c70bfa487fe4 | 1,502 | py | Python | bot/constants/messages.py | aasw0ng/thornode-telegram-bot | 5f73b882381548f45fc9e690c6e4845def9600b7 | [
"MIT"
] | 15 | 2020-04-21T07:51:26.000Z | 2021-11-02T05:45:48.000Z | bot/constants/messages.py | aasw0ng/thornode-telegram-bot | 5f73b882381548f45fc9e690c6e4845def9600b7 | [
"MIT"
] | 78 | 2020-04-13T23:01:16.000Z | 2021-05-09T11:46:25.000Z | bot/constants/messages.py | aasw0ng/thornode-telegram-bot | 5f73b882381548f45fc9e690c6e4845def9600b7 | [
"MIT"
] | 5 | 2020-09-03T21:19:16.000Z | 2021-11-20T00:17:56.000Z | from enum import Enum
from constants.globals import HEALTH_EMOJIS
NETWORK_ERROR = '😱 There was an error while getting data 😱\nAn API endpoint is down!'
HEALTH_LEGEND = f'\n*Node health*:\n{HEALTH_EMOJIS[True]} - *healthy*\n{HEALTH_EMOJIS[False]} - *unhealthy*\n' \
f'{HEALTH_EMOJIS[None]} - *unknown*\n'
class NetworkHealthStatus(Enum):
INEFFICIENT = "Inefficient"
OVERBONDED = "Overbonded"
OPTIMAL = "Optimal"
UNDERBONDED = "Underbonded"
INSECURE = "Insecure"
NETWORK_HEALTHY_AGAIN = "The network is safe and efficient again! ✅"
def get_network_health_warning(network_health_status: NetworkHealthStatus) -> str:
severity = "🤒"
if network_health_status is NetworkHealthStatus.INSECURE:
severity = "💀"
elif network_health_status is NetworkHealthStatus.INEFFICIENT:
severity = "🦥"
return f"Network health is not optimal: {network_health_status.value} {severity}"
def get_node_healthy_again_message(node_data) -> str:
return f"⚕️Node is healthy again⚕️\nAddress: {node_data['node_address']}\nIP: {node_data['ip_address']}\n"
def get_node_health_warning_message(node_data) -> str:
return "⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️\n" \
f"Node is *not responding*!\nAddress: {node_data['node_address']}\nIP: {node_data['ip_address']}\n" \
"\nCheck its health immediately\n" \
"⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️"
| 35.761905 | 112 | 0.631158 | 212 | 1,502 | 4.646226 | 0.353774 | 0.079188 | 0.077157 | 0.032487 | 0.292386 | 0.162437 | 0.162437 | 0.162437 | 0.162437 | 0.162437 | 0 | 0 | 0.216378 | 1,502 | 41 | 113 | 36.634146 | 0.774002 | 0 | 0 | 0 | 0 | 0.115385 | 0.482345 | 0.169221 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.076923 | 0.076923 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0720bde47f5a6d668b162186b490b208d369a3a2 | 233 | py | Python | desktop/core/ext-py/pyasn1-0.1.8/pyasn1/compat/iterfunc.py | kokosing/hue | 2307f5379a35aae9be871e836432e6f45138b3d9 | [
"Apache-2.0"
] | 422 | 2015-01-08T14:08:08.000Z | 2022-02-07T11:47:37.000Z | desktop/core/ext-py/pyasn1-0.1.8/pyasn1/compat/iterfunc.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 581 | 2015-01-01T08:07:16.000Z | 2022-02-23T11:44:37.000Z | desktop/core/ext-py/pyasn1-0.1.8/pyasn1/compat/iterfunc.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 115 | 2015-01-08T14:41:00.000Z | 2022-02-13T12:31:17.000Z | from sys import version_info
if version_info[0] <= 2 and version_info[1] <= 4:
def all(iterable):
for element in iterable:
if not element:
return False
return True
else:
all = all
| 21.181818 | 49 | 0.579399 | 32 | 233 | 4.125 | 0.6875 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02649 | 0.351931 | 233 | 10 | 50 | 23.3 | 0.847682 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
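The compat shim above backports `all()` for Python versions before 2.5; its semantics, including the vacuous truth of an empty iterable, can be checked against the builtin:

```python
def all_compat(iterable):
    # Returns False at the first falsy element, True otherwise --
    # same short-circuit behavior as the builtin all().
    for element in iterable:
        if not element:
            return False
    return True

print(all_compat([1, 2, 3]))  # True
print(all_compat([1, 0, 3]))  # False
print(all_compat([]))         # True (vacuously true, matching the builtin)
```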
07229c65c61816346ca75d9d08af09c5eb62b6ff | 6,813 | py | Python | src/mf_horizon_client/client/pipelines/blueprints.py | MF-HORIZON/mf-horizon-python-client | 67a4a094767cb8e5f01956f20f5ca7726781614a | [
"MIT"
] | null | null | null | src/mf_horizon_client/client/pipelines/blueprints.py | MF-HORIZON/mf-horizon-python-client | 67a4a094767cb8e5f01956f20f5ca7726781614a | [
"MIT"
] | null | null | null | src/mf_horizon_client/client/pipelines/blueprints.py | MF-HORIZON/mf-horizon-python-client | 67a4a094767cb8e5f01956f20f5ca7726781614a | [
"MIT"
] | null | null | null | from enum import Enum
class BlueprintType(Enum):
"""
A blueprint is a pipeline template in horizon, and must be specified when creating a new pipeline
Nonlinear
===============================================================================================================
A nonlinear pipeline combines nonlinear feature generation and selection with a nonlinear regressor to generate
forecasts that are at a specific target in the future.
A number of different regressor types are available here:
1. Mondrian Forest. An adaptation of the probabilistic Mondrian Forest algorithm - https://arxiv.org/abs/1406.2673
Provides Bayesian-esque error bounds, and is our recommended nonlinear regressor of choice.
2. XG Boost
3. Random Forest.
The stages of a nonlinear pipeline are as follows:
A. Forecast Specification
B. Stationarization
C. Feature Generation
D. Feature Filtering
E. Feature Refinement
F. Nonlinear Backtesting
G. Nonlinear Prediction
Linear
===============================================================================================================
A linear pipeline combines feature generation with a linear regressor to generate
forecasts at a specific target in the future.
The regressor used is a Variational Bayesian Linear Regressor
The stages of a linear pipeline are as follows:
A. Forecast Specification
B. Stationarization
C. Nonlinear Feature Generation
D. Feature Filtering
E. Feature Refinement
F. Linear Backtesting
G. Linear Prediction
Fast Forecasting
===============================================================================================================
The fast forecasting pipeline is intended to be used as a quick assessment of a dataset's predictive performance
It is identical to the linear pipeline, but does not include Feature Refinement.
The stages of a linear pipeline are as follows:
A. Forecast Specification
B. Stationarization
C. Nonlinear Feature Generation
D. Feature Filtering
E. Linear Backtesting
F. Linear Prediction
Feature Selection
===============================================================================================================
The feature selection pipeline assumes that the input data set already encodes information about a signal's
past, such that a horizontal observation vector may be used in a traditional regression sense to map to a target
value at a point in the future.
Feat1 | Feat2 | Feat3 | .... | FeatP
Obs1 ------------------------------------- t
Obs2 ------------------------------------- t-1
Obs3 ------------------------------------- t-2
... .....................................
... .....................................
ObsN ------------------------------------- t-N
Two stages of feature selection are then used in order to maximize predictive performance of the feature set
on specified future points for a given target
The stages of a linear pipeline are as follows:
A. Forecast Specification
B. Feature Filtering
E. Feature Refinement
Feature Discovery
===============================================================================================================
The feature discovery pipeline discovers features to maximize performance for a particular forecast target,
at a specified point in the future. Unlike the feature selection pipeline, it does not assume that the signal
set has already encoded historical information about the original data's past.
The stages of a feature discovery pipeline are as follows:
A. Forecast Specification
B. Feature Generation
C. Feature Filtering
D. Feature Refinement
Signal Encoding
===============================================================================================================
One of Horizon's feature generation methods is to encode signals in the frequency domain, extracting historic
lags that will efficiently represent the information contained within them.
The signal encoding pipeline allows for this functionality to be isolated, where the output is a feature
set that has encoded past information about a signal that can be exported from the platform
The stages of a signal encoding pipeline are as follows:
A. Forecast Specification
B. Feature Generation
C. Feature Filtering
Stationarization
===============================================================================================================
Stationarize a signal set and specified target using Augmented Dickey-Fuller analysis, and a detrending method
for the specified target.
The stages of a stationarization pipeline are as follows:
A. Forecast Specification
B. Stationarization
Time-Series Regression
===============================================================================================================
Run Horizon's regression algorithms on a pre-encoded signal set.
Small Data Forecasting
===============================================================================================================
Time-series pipeline for small data. Does not contain any backtesting, and uses all the data for model training.
A. Forecast Specification
B. Stationarization
C. Linear Feature Generation
D. Feature Filtering
E. Feature Refinement
G. Linear Prediction
Variational Forecasting
===============================================================================================================
Creates a stacked lag-embedding matrix by combining a two-stage feature generation and selection process, with
lag-only feature generation.
A. Forecast Specification
B. Stationarization
C. Linear Feature Generation
D. Feature Filtering
E. Linear Feature Generation
F. Feature Filtering
G. Linear Backtesting
H. Linear Prediction
Custom
===============================================================================================================
Advanced: Contains only a forecast specification stage for adding stages manually.
N.B. There is no validation on stage addition.
"""
nonlinear = "nonlinear"
linear = "linear"
fast_forecasting = "fast_forecast"
feature_selection = "feature_selection"
feature_discovery = "feature_discovery"
signal_encoding = "signal_encoding"
stationarisation = "stationarisation"
time_series_regression = "regression"
variational_forecasting = "variational_forecasting"
custom = "custom"
small_data = "small_data"
| 39.842105 | 122 | 0.57405 | 697 | 6,813 | 5.591105 | 0.311334 | 0.05671 | 0.056454 | 0.053118 | 0.296638 | 0.283551 | 0.283551 | 0.253785 | 0.249423 | 0.214267 | 0 | 0.003441 | 0.189637 | 6,813 | 170 | 123 | 40.076471 | 0.702409 | 0.862322 | 0 | 0 | 0 | 0 | 0.304721 | 0.049356 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 1 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
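A hedged usage sketch of the enum above, with a trimmed-down copy of the class (the real client presumably serializes the member's value into a pipeline-creation request; the exact API call is not shown here):

```python
from enum import Enum

# Trimmed-down copy of BlueprintType for illustration.
class BlueprintType(Enum):
    nonlinear = "nonlinear"
    linear = "linear"
    fast_forecasting = "fast_forecast"

# Members round-trip between name and value, which is how a blueprint
# choice is typically carried in an API request body.
choice = BlueprintType.fast_forecasting
print(choice.name, choice.value)     # fast_forecasting fast_forecast
print(BlueprintType('linear').name)  # linear
```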
0723f800260b47fe29201f275a3497c9e0250212 | 6,758 | py | Python | pyChess/olaf/views.py | An-Alone-Cow/pyChess | 2729a3a89e4d7d79659488ecb1b0bff9cac281a3 | [
"MIT"
] | null | null | null | pyChess/olaf/views.py | An-Alone-Cow/pyChess | 2729a3a89e4d7d79659488ecb1b0bff9cac281a3 | [
"MIT"
] | 18 | 2017-02-05T17:52:41.000Z | 2017-02-16T09:04:39.000Z | pyChess/olaf/views.py | An-Alone-Cow/pyChess | 2729a3a89e4d7d79659488ecb1b0bff9cac281a3 | [
"MIT"
] | null | null | null | from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import User
from django.shortcuts import render
from django.urls import reverse
from django.http import HttpResponseRedirect, HttpResponse
from django.utils import timezone
from olaf.models import *
from olaf.forms import *
from olaf.utility import usertools
from olaf.chess.controller import proccess_move
def index ( request ):
args = {}
message = request.session.pop ( 'message', default = None )
if ( message is not None ):
args [ 'message' ] = message
if ( request.user.is_authenticated ):
if ( request.method == 'POST' ):
if ( request.POST.get ( 'game_id' ) is not None ):
game_id = request.POST.get ( 'game_id' )
if ( game_id == '-1' ):
game_id = usertools.new_game ( request )
request.session [ 'game_id' ] = game_id
else:
request.session.pop ( 'game_id', default = None )
f = lambda a : str ( a.date () ) + " - " + str ( a.hour ) + ":" + str ( a.minute ) + ":" + str ( a.second )
args [ 'game_list' ] = list ([str ( game.id ), f ( game.creation_time )] for game in request.user.userdata.game_history.filter ( result = 0 ).order_by ( '-creation_time' ) )
if ( request.session.get ( 'game_id' ) is not None ):
args [ 'game_board' ] = usertools.get_translated_game_board ( request )
else:
args [ 'game_board' ] = None
return render ( request, 'olaf/index_logged_in.html', args )
else:
args [ 'login_form' ] = LoginForm ()
args [ 'register_form' ] = RegisterForm ()
args [ 'score' ] = list ( [user.master.username, user.wins, user.loses, user.ties] for user in UserData.objects.filter ( is_active = True ) )
return render ( request, 'olaf/index_not_logged_in.html', args )
form_operation_dict = {
'login' : (
usertools.login_user,
LoginForm,
'olaf/login.html',
{},
'index',
{ 'message' : "You're logged in. :)"}
),
'register' : (
usertools.register_user,
RegisterForm,
'olaf/register.html',
{},
'index',
{ 'message' : "An activation email has been sent to you" }
),
'password_reset_request' : (
usertools.init_pass_reset_token,
ForgotPasswordUsernameOrEmailForm,
'olaf/password_reset_request.html',
{},
'index',
{ 'message' : "An email containing the password reset link will be sent to your email"}
),
'reset_password' : (
usertools.reset_password_action,
PasswordChangeForm,
'olaf/reset_password.html',
{},
'olaf:login',
{ 'message' : "Password successfully changed, you can login now" }
),
'resend_activation_email' : (
usertools.resend_activation_email,
ResendActivationUsernameOrEmailForm,
'olaf/resend_activation_email.html',
{},
'index',
{ 'message' : "Activation email successfully sent to your email" }
),
}
def form_operation ( request, oper, *args ):
func, FORM, fail_template, fail_args, success_url, success_args = form_operation_dict [ oper ]
if ( request.method == 'POST' ):
form = FORM ( request.POST )
if ( form.is_valid () ):
func ( request, form, *args )
for key in success_args:
request.session [ key ] = success_args [ key ]
return HttpResponseRedirect ( reverse ( success_url ) )
else:
form = FORM ()
message = request.session.pop ( 'message', default = None )
if ( message is not None ):
fail_args [ 'message' ] = message
fail_args [ 'form' ] = form
return render ( request, fail_template, fail_args )
# view functions
def login_user ( request ):
if ( request.user.is_authenticated ):
return HttpResponseRedirect ( reverse ( 'index' ) )
return form_operation ( request, 'login' )
def register_user ( request ):
if ( request.user.is_authenticated ):
return HttpResponseRedirect ( reverse ( 'index' ) )
return form_operation ( request, 'register' )
def password_reset_request ( request ):
if ( request.user.is_authenticated ):
return HttpResponseRedirect ( reverse ( 'index' ) )
return form_operation ( request, 'password_reset_request' )
def reset_password_action ( request, token ):
if ( request.user.is_authenticated ):
return HttpResponseRedirect ( reverse ( 'index' ) )
tk = ExpirableTokenField.objects.filter ( token = token ).first ()
if ( tk is None ):
request.session [ 'message' ] = "Broken link"
return HttpResponseRedirect ( reverse ( 'index' ) )
else:
if ( timezone.now () <= tk.expiration_time ):
return form_operation ( request, 'reset_password', token )
else:
request.session [ 'message' ] = "Link expired, try getting a new one"
return HttpResponseRedirect ( reverse ( 'olaf:reset_password' ) )
def activate_account ( request, token ):
if ( request.user.is_authenticated ):
return HttpResponseRedirect ( reverse ( 'index' ) )
tk = ExpirableTokenField.objects.filter ( token = token ).first ()
if ( tk is None ):
request.session [ 'message' ] = "Broken link"
return HttpResponseRedirect ( reverse ( 'index' ) )
else:
if ( timezone.now () <= tk.expiration_time ):
if ( tk.user.is_active ):
request.session [ 'message' ] = "Account already active"
return HttpResponseRedirect ( reverse ( 'index' ) )
else:
userdata = tk.user
userdata.is_active = True
userdata.save ()
request.session [ 'message' ] = "Your account has been activated successfully"
return HttpResponseRedirect ( reverse ( 'olaf:login' ) )
else:
request.session [ 'message' ] = "Link expired, try getting a new one"
return HttpResponseRedirect ( reverse ( 'olaf:resend_activation_email' ) )
def resend_activation_email ( request ):
if ( request.user.is_authenticated ):
return HttpResponseRedirect ( reverse ( 'index' ) )
return form_operation ( request, 'resend_activation_email' )
def logout_user ( request ):
usertools.logout_user ( request )
request.session [ 'message' ] = "Goodbye :)"
return HttpResponseRedirect ( reverse ( 'index' ) )
def scoreboard ( request ):
if ( request.method == 'POST' ):
username = request.POST.get ( 'username' )
user = User.objects.filter ( username = username ).first ()
if ( user is None ):
request.session [ 'message' ] = "User not found"
return HttpResponseRedirect ( reverse ( 'olaf:scoreboard' ) )
else:
return HttpResponseRedirect ( reverse ( 'olaf:user_profile', args = (username, ) ) )
else:
args = {}
message = request.session.pop ( 'message', default = None )
if ( message is not None ):
args [ 'message' ] = message
lst = [ (user.master.username, user.wins, user.loses, user.ties) for user in UserData.objects.filter ( is_active = True ) ]
args [ 'lst' ] = lst
if ( request.user.is_authenticated ):
args [ 'logged_in' ] = True
return render ( request, 'olaf/scoreboard.html', args )
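Several views above implement a "flash message": a one-shot string stored in the session and `pop()`-ed on the next request, so it displays exactly once. The same mechanic with a plain dict standing in for `request.session`:

```python
session = {}

def flash(session, text):
    # Store the one-shot message for the next request.
    session["message"] = text

def consume(session):
    # pop() with a default both reads and clears the message.
    return session.pop("message", None)

flash(session, "Goodbye :)")
first = consume(session)    # the stored message
second = consume(session)   # already consumed, so None
```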
def move ( request ):
proccess_move ( request )
return HttpResponseRedirect ( reverse ( 'index' ) )

# examples/morpho.py (jaideep-seth/PyOpenWorm, MIT license)
"""
How to load morphologies of certain cells from the database.
"""
#this is an expected failure right now, as morphology is not implemented
from __future__ import absolute_import
from __future__ import print_function
import PyOpenWorm as P
from PyOpenWorm.context import Context
from PyOpenWorm.worm import Worm
from six import StringIO
#Connect to database.
with P.connect('default.conf') as conn:
ctx = Context(ident="http://openworm.org/data", conf=conn.conf).stored
#Create a new Cell object to work with.
aval = ctx(Worm)().get_neuron_network().aneuron('AVAL')
#Get the morphology associated with the Cell. Returns a neuroml.Morphology object.
morph = aval._morphology()
out = StringIO()
morph.export(out, 0) # we're printing it here, but we would normally do something else with the morphology object.
out.seek(0)  # rewind the buffer first, otherwise read() returns an empty string
print(out.read())
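The script above exports into a `StringIO` buffer and then reads it back. That round-trip only yields data after rewinding, because writing leaves the stream position at the end. A minimal, PyOpenWorm-free sketch of the same mechanic:

```python
from io import StringIO

buf = StringIO()
buf.write("<morphology/>\n")  # stands in for morph.export(out, 0)
unrewound = buf.read()        # position is at the end, so nothing comes back
buf.seek(0)                   # rewind to the start
rewound = buf.read()          # now the full contents are returned
```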
# Python/libraries/recognizers-date-time/recognizers_date_time/date_time/italian/dateperiod_extractor_config.py (felaray/Recognizers-Text, MIT license)
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from typing import List, Pattern
from recognizers_text.utilities import RegExpUtility
from recognizers_number.number import BaseNumberParser
from recognizers_number.number.italian.extractors import ItalianIntegerExtractor, ItalianCardinalExtractor
from recognizers_number.number.italian.parsers import ItalianNumberParserConfiguration
from ...resources.base_date_time import BaseDateTime
from ...resources.italian_date_time import ItalianDateTime
from ..extractors import DateTimeExtractor
from ..base_duration import BaseDurationExtractor
from ..base_date import BaseDateExtractor
from ..base_dateperiod import DatePeriodExtractorConfiguration, MatchedIndex
from .duration_extractor_config import ItalianDurationExtractorConfiguration
from .date_extractor_config import ItalianDateExtractorConfiguration
from recognizers_text.extractor import Extractor
from recognizers_number import ItalianOrdinalExtractor, BaseNumberExtractor, ItalianCardinalExtractor
class ItalianDatePeriodExtractorConfiguration(DatePeriodExtractorConfiguration):
@property
def previous_prefix_regex(self) -> Pattern:
return self._previous_prefix_regex
@property
def check_both_before_after(self) -> bool:
return self._check_both_before_after
@property
def simple_cases_regexes(self) -> List[Pattern]:
return self._simple_cases_regexes
@property
def illegal_year_regex(self) -> Pattern:
return self._illegal_year_regex
@property
def year_regex(self) -> Pattern:
return self._year_regex
@property
def till_regex(self) -> Pattern:
return self._till_regex
@property
def followed_unit(self) -> Pattern:
return self._followed_unit
@property
def number_combined_with_unit(self) -> Pattern:
return self._number_combined_with_unit
@property
def past_regex(self) -> Pattern:
return self._past_regex
@property
def decade_with_century_regex(self) -> Pattern:
return self._decade_with_century_regex
@property
def future_regex(self) -> Pattern:
return self._future_regex
@property
def week_of_regex(self) -> Pattern:
return self._week_of_regex
@property
def month_of_regex(self) -> Pattern:
return self._month_of_regex
@property
def date_unit_regex(self) -> Pattern:
return self._date_unit_regex
@property
def in_connector_regex(self) -> Pattern:
return self._in_connector_regex
@property
def range_unit_regex(self) -> Pattern:
return self._range_unit_regex
@property
def date_point_extractor(self) -> DateTimeExtractor:
return self._date_point_extractor
@property
def integer_extractor(self) -> BaseNumberExtractor:
return self._integer_extractor
@property
def number_parser(self) -> BaseNumberParser:
return self._number_parser
@property
def duration_extractor(self) -> DateTimeExtractor:
return self._duration_extractor
@property
def now_regex(self) -> Pattern:
return self._now_regex
@property
def future_suffix_regex(self) -> Pattern:
return self._future_suffix_regex
@property
def ago_regex(self) -> Pattern:
return self._ago_regex
@property
def later_regex(self) -> Pattern:
return self._later_regex
@property
def less_than_regex(self) -> Pattern:
return self._less_than_regex
@property
def more_than_regex(self) -> Pattern:
return self._more_than_regex
@property
def duration_date_restrictions(self) -> [str]:
return self._duration_date_restrictions
@property
def year_period_regex(self) -> Pattern:
return self._year_period_regex
@property
def month_num_regex(self) -> Pattern:
return self._month_num_regex
@property
def century_suffix_regex(self) -> Pattern:
return self._century_suffix_regex
@property
def ordinal_extractor(self) -> BaseNumberExtractor:
return self._ordinal_extractor
@property
def cardinal_extractor(self) -> Extractor:
return self._cardinal_extractor
@property
def time_unit_regex(self) -> Pattern:
return self._time_unit_regex
@property
def within_next_prefix_regex(self) -> Pattern:
return self._within_next_prefix_regex
@property
def range_connector_regex(self) -> Pattern:
return self._range_connector_regex
@property
def day_regex(self) -> Pattern:
return self._day_regex
@property
def week_day_regex(self) -> Pattern:
return self._week_day_regex
@property
def relative_month_regex(self) -> Pattern:
return self._relative_month_regex
@property
def month_suffix_regex(self) -> Pattern:
return self._month_suffix_regex
@property
def past_prefix_regex(self) -> Pattern:
return self._past_prefix_regex
@property
def next_prefix_regex(self) -> Pattern:
return self._next_prefix_regex
@property
def this_prefix_regex(self) -> Pattern:
return self._this_prefix_regex
@property
def which_week_regex(self) -> Pattern:
return self._which_week_regex
@property
def rest_of_date_regex(self) -> Pattern:
return self._rest_of_date_regex
@property
def complex_date_period_regex(self) -> Pattern:
return self._complex_date_period_regex
@property
def week_day_of_month_regex(self) -> Pattern:
return self._week_day_of_month_regex
@property
def all_half_year_regex(self) -> Pattern:
return self._all_half_year_regex
def __init__(self):
self._all_half_year_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.AllHalfYearRegex)
self._week_day_of_month_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.WeekDayOfMonthRegex)
self._complex_date_period_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.ComplexDatePeriodRegex)
self._rest_of_date_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.RestOfDateRegex)
self._which_week_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.WhichWeekRegex)
self._this_prefix_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.ThisPrefixRegex)
self._next_prefix_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.NextSuffixRegex)
self._past_prefix_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.PastSuffixRegex)
self._month_suffix_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.MonthSuffixRegex)
self._relative_month_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.RelativeMonthRegex)
self._week_day_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.WeekDayRegex)
self._day_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.DayRegex)
self._range_connector_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.RangeConnectorRegex)
self._time_unit_regex = RegExpUtility.get_safe_reg_exp(ItalianDateTime.TimeUnitRegex)
self._previous_prefix_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.PastSuffixRegex)
self._check_both_before_after = ItalianDateTime.CheckBothBeforeAfter
self._simple_cases_regexes = [
RegExpUtility.get_safe_reg_exp(ItalianDateTime.SimpleCasesRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.BetweenRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.OneWordPeriodRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.MonthWithYear),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.MonthNumWithYear),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.YearRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.YearPeriodRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.WeekOfYearRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.WeekDayOfMonthRegex),
RegExpUtility.get_safe_reg_exp(
ItalianDateTime.MonthFrontBetweenRegex),
RegExpUtility.get_safe_reg_exp(
ItalianDateTime.MonthFrontSimpleCasesRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.QuarterRegex),
RegExpUtility.get_safe_reg_exp(
ItalianDateTime.QuarterRegexYearFront),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.SeasonRegex),
RegExpUtility.get_safe_reg_exp(
ItalianDateTime.LaterEarlyPeriodRegex),
RegExpUtility.get_safe_reg_exp(
ItalianDateTime.WeekWithWeekDayRangeRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.YearPlusNumberRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.DecadeWithCenturyRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.RelativeDecadeRegex)
]
self._check_both_before_after = ItalianDateTime.CheckBothBeforeAfter
self._illegal_year_regex = RegExpUtility.get_safe_reg_exp(
BaseDateTime.IllegalYearRegex)
self._year_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.YearRegex)
self._till_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.TillRegex)
self._followed_unit = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.FollowedDateUnit)
self._number_combined_with_unit = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.NumberCombinedWithDateUnit)
self._past_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.PastSuffixRegex)
self._future_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.NextSuffixRegex)
self._week_of_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.WeekOfRegex)
self._month_of_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.MonthOfRegex)
self._date_unit_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.DateUnitRegex)
self._within_next_prefix_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.WithinNextPrefixRegex)
self._in_connector_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.InConnectorRegex)
self._range_unit_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.RangeUnitRegex)
self.from_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.FromRegex)
self.connector_and_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.ConnectorAndRegex)
self.before_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.BeforeRegex2)
self._date_point_extractor = BaseDateExtractor(
ItalianDateExtractorConfiguration())
self._integer_extractor = ItalianIntegerExtractor()
self._number_parser = BaseNumberParser(
ItalianNumberParserConfiguration())
self._duration_extractor = BaseDurationExtractor(
ItalianDurationExtractorConfiguration())
self._now_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.NowRegex)
self._future_suffix_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.FutureSuffixRegex
)
self._ago_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.AgoRegex
)
self._later_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.LaterRegex
)
self._less_than_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.LessThanRegex
)
self._more_than_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.MoreThanRegex
)
self._duration_date_restrictions = ItalianDateTime.DurationDateRestrictions
self._year_period_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.YearPeriodRegex
)
self._month_num_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.MonthNumRegex
)
self._century_suffix_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.CenturySuffixRegex
)
self._ordinal_extractor = ItalianOrdinalExtractor()
self._cardinal_extractor = ItalianCardinalExtractor()
self._previous_prefix_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.PreviousPrefixRegex
)
self._cardinal_extractor = ItalianCardinalExtractor()
# TODO When the implementation for these properties is added, change the None values to their respective Regexps
self._time_unit_regex = None
def get_from_token_index(self, source: str) -> MatchedIndex:
match = self.from_regex.search(source)
if match:
return MatchedIndex(True, match.start())
return MatchedIndex(False, -1)
def get_between_token_index(self, source: str) -> MatchedIndex:
match = self.before_regex.search(source)
if match:
return MatchedIndex(True, match.start())
return MatchedIndex(False, -1)
def has_connector_token(self, source: str) -> bool:
return self.connector_and_regex.search(source) is not None
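`get_from_token_index` and `get_between_token_index` share one shape: search a compiled regex and report whether it matched along with the match offset. A self-contained sketch of that helper (the real `MatchedIndex` lives in `..base_dateperiod`; this stand-in only mirrors its two fields):

```python
import re
from typing import NamedTuple

class MatchedIndex(NamedTuple):
    matched: bool
    index: int

def get_token_index(pattern, source: str) -> MatchedIndex:
    # Returns (True, start offset) on a hit, (False, -1) otherwise.
    match = pattern.search(source)
    if match:
        return MatchedIndex(True, match.start())
    return MatchedIndex(False, -1)

# Illustrative pattern, not the actual ItalianDateTime.FromRegex:
from_regex = re.compile(r"\bfrom\b", re.IGNORECASE)
```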
# run_classifier.py (wj-Mcat/model-getting-started, Apache-2.0 license)
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BERT finetuning runner."""
from __future__ import annotations, absolute_import
import os
from typing import Dict, List
from transformers import (
AutoTokenizer, BertTokenizer,
BertForSequenceClassification, BertConfig,
Trainer, TrainingArguments,
PreTrainedTokenizer
)
from transformers.configuration_utils import PretrainedConfig
from src.schema import (
InputExample, InputFeatures, Config
)
from src.data_process import (
AgNewsDataProcessor
)
from config import create_logger
logger = create_logger()
def convert_single_example(
example_index: int, example: InputExample, label2id: Dict[str, int], max_seq_length: int, tokenizer: BertTokenizer
) -> InputFeatures:
"""Converts a single `InputExample` into a single `InputFeatures`.
example_index: used to log the first few examples for inspection
"""
parameters = {
"text":example.text_a,
"add_special_tokens":True,
"padding":True,
"max_length":max_seq_length,
"return_attention_mask":True,
"return_token_type_ids":True,
"return_length":True,
"verbose":True
}
if example.text_b:
parameters['text_pair'] = example.text_b
feature = tokenizer(**parameters)
input_feature = InputFeatures(
input_ids=feature['input_ids'],
attention_mask=feature['attention_mask'],
segment_ids=feature['token_type_ids'],
label_id=label2id[example.label],
is_real_example=True
)
if example_index < 5:
logger.info(f'*************************** Example {example_index} ***************************')
logger.info(example)
logger.info(input_feature)
logger.info('*************************** Example End ***************************')
return input_feature
def create_bert_for_sequence_classification_model(config: Config):
bert_config: BertConfig = BertConfig.from_pretrained(config.pretrained_model_name)
bert_config.num_labels = config.num_labels
model = BertForSequenceClassification(bert_config)
return model
def create_model(config: Config):
"""Creates a classification model."""
models = {
"bert-for-sequence-classification": create_bert_for_sequence_classification_model,
}
return models[config.model_name](config)
def convert_examples_to_features(
examples, label_list: List[str],
max_seq_length: int, tokenizer: PreTrainedTokenizer
):
"""Convert a set of `InputExample`s to a list of `InputFeatures`."""
label2id = {label: index for index, label in enumerate(label_list)}
features = []
for (ex_index, example) in enumerate(examples):
if ex_index % 200 == 0:
logger.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, label2id,
max_seq_length, tokenizer)
features.append(feature)
return features
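The `label2id` mapping built above is just an `enumerate()` over the task's label list, so label ids are stable as long as the list order is. With the four AG News classes (used here purely as an illustration of the shape):

```python
# Illustrative label list; the real one comes from processor.get_labels().
label_list = ["World", "Sports", "Business", "Sci/Tech"]

label2id = {label: index for index, label in enumerate(label_list)}
id2label = {index: label for label, index in label2id.items()}
```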
class SequenceClassificationTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
    # Keep `labels` in the inputs: BertForSequenceClassification only
    # returns a loss when `labels` is passed to its forward().
    outputs = model(**inputs)
    return (outputs.loss, outputs) if return_outputs else outputs.loss
def main():
# processors need to be updated
processors = {
'agnews-processor': AgNewsDataProcessor,
}
config: Config = Config.instance()
if not config.do_train and not config.do_eval and not config.do_predict:
raise ValueError(
"At least one of `do_train`, `do_eval` or `do_predict' must be True.")
bert_config = PretrainedConfig.from_pretrained(config.pretrained_model_name)
# Select the data processor that matches the task
task_name = config.task_name.lower()
if task_name not in processors:
raise ValueError("Task not found: %s" % (task_name))
processor = processors[task_name]()
label_list = processor.get_labels()
tokenizer = AutoTokenizer.from_pretrained(config.pretrained_model_name)
train_examples = None
num_train_steps = None
num_warmup_steps = None
if config.do_train:
train_examples: List[InputExample] = processor.get_train_examples(config.data_dir)
# TODO: build a training Dataset/DataLoader from train_examples
num_train_steps = int(
len(train_examples) / config.train_batch_size * config.epochs
)
num_warmup_steps = int(num_train_steps * config.warmup_proportion)
model = create_model(config=config)
training_arguments = TrainingArguments(
output_dir=config.output_dir,
overwrite_output_dir=True,
)
trainer = SequenceClassificationTrainer(
model=model,
)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPUs
if config.do_train:
train_file = os.path.join(config.output_dir, "train.tf_record")
file_based_convert_examples_to_features(
train_examples, label_list, config.max_seq_length, tokenizer, train_file)
tf.logging.info("***** Running training *****")
tf.logging.info(" Num examples = %d", len(train_examples))
tf.logging.info(" Batch size = %d", config.train_batch_size)
tf.logging.info(" Num steps = %d", num_train_steps)
train_input_fn = file_based_input_fn_builder(
input_file=train_file,
seq_length=config.max_seq_length,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
if config.do_eval:
eval_examples = processor.get_dev_examples(config.data_dir)
num_actual_eval_examples = len(eval_examples)
if config.use_tpu:
# TPU requires a fixed batch size for all batches, therefore the number
# of examples must be a multiple of the batch size, or else examples
# will get dropped. So we pad with fake examples which are ignored
# later on. These do NOT count towards the metric (all tf.metrics
# support a per-instance weight, and these get a weight of 0.0).
while len(eval_examples) % config.eval_batch_size != 0:
eval_examples.append(PaddingInputExample())
eval_file = os.path.join(config.output_dir, "eval.tf_record")
file_based_convert_examples_to_features(
eval_examples, label_list, config.max_seq_length, tokenizer, eval_file)
tf.logging.info("***** Running evaluation *****")
tf.logging.info(" Num examples = %d (%d actual, %d padding)",
len(eval_examples), num_actual_eval_examples,
len(eval_examples) - num_actual_eval_examples)
tf.logging.info(" Batch size = %d", config.eval_batch_size)
# This tells the estimator to run through the entire set.
eval_steps = None
# However, if running eval on the TPU, you will need to specify the
# number of steps.
if config.use_tpu:
assert len(eval_examples) % config.eval_batch_size == 0
eval_steps = int(len(eval_examples) // config.eval_batch_size)
eval_drop_remainder = True if config.use_tpu else False
eval_input_fn = file_based_input_fn_builder(
input_file=eval_file,
seq_length=config.max_seq_length,
is_training=False,
drop_remainder=eval_drop_remainder)
result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
output_eval_file = os.path.join(config.output_dir, "eval_results.txt")
with tf.gfile.GFile(output_eval_file, "w") as writer:
tf.logging.info("***** Eval results *****")
for key in sorted(result.keys()):
tf.logging.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
if config.do_predict:
predict_examples = processor.get_test_examples(config.data_dir)
num_actual_predict_examples = len(predict_examples)
if config.use_tpu:
# TPU requires a fixed batch size for all batches, therefore the number
# of examples must be a multiple of the batch size, or else examples
# will get dropped. So we pad with fake examples which are ignored
# later on.
while len(predict_examples) % config.predict_batch_size != 0:
predict_examples.append(PaddingInputExample())
predict_file = os.path.join(config.output_dir, "predict.tf_record")
file_based_convert_examples_to_features(predict_examples, label_list,
config.max_seq_length, tokenizer,
predict_file)
tf.logging.info("***** Running prediction*****")
tf.logging.info(" Num examples = %d (%d actual, %d padding)",
len(predict_examples), num_actual_predict_examples,
len(predict_examples) - num_actual_predict_examples)
tf.logging.info(" Batch size = %d", config.predict_batch_size)
predict_drop_remainder = True if config.use_tpu else False
predict_input_fn = file_based_input_fn_builder(
input_file=predict_file,
seq_length=config.max_seq_length,
is_training=False,
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
output_predict_file = os.path.join(config.output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
num_written_lines = 0
tf.logging.info("***** Predict results *****")
for (i, prediction) in enumerate(result):
probabilities = prediction["probabilities"]
if i >= num_actual_predict_examples:
break
output_line = "\t".join(
str(class_probability)
for class_probability in probabilities) + "\n"
writer.write(output_line)
num_written_lines += 1
assert num_written_lines == num_actual_predict_examples
if __name__ == "__main__":
main()
# utilidades/texto.py (DeadZombie14/chillMagicCarPygame, MIT license)
import pygame
class Texto:
def __init__(self, screen, text, x, y, text_size = 20, fuente = 'Calibri', italic = False, bold= False, subrayado= False, color = (250, 240, 230), bg = [] ):
self.screen = screen
fg = color
self.coord = x, y
#load font, prepare values
font = pygame.font.Font(None, 80)
size = font.size(text)
# Font
a_sys_font = pygame.font.SysFont(fuente, text_size)
# Italic
if italic:
    a_sys_font.set_italic(1)
# Bold
if bold:
    a_sys_font.set_bold(1)
# Underline
if subrayado:
    a_sys_font.set_underline(1)
# Build the text surface
if len(bg) > 1: # a background color was supplied
    ren = a_sys_font.render(text, 1, fg, bg)
else: # otherwise render with a transparent background
    ren = a_sys_font.render(text, 1, fg)
# self.size = x+size[0], y
self.text_rect = ren.get_rect()
self.text_rect.center = (x,y)
self.image = ren, (x,y)
screen.blit(ren, (x, y))
# Italic
if italic:
    a_sys_font.set_italic(0)
# Bold
if bold:
    a_sys_font.set_bold(0)
# Underline
if subrayado:
    a_sys_font.set_underline(0)
# self.image.blit(ren, self.text_rect)
# self.text_rect = (x, y),ren.get_size()
# text = str(self.counter)
# label = self.myfont.render(text, 1, (255,0,0))
# text_rect = label.get_rect()
# text_rect.center = (50,50)
# self.image.blit(label, text_rect)
pass
def getProperties(self):
return self.text_rect
def redraw(self):
self.screen.blit(self.image[0], self.image[1])
pass
##################### USAGE EXAMPLE ##############################
# texto1 = Texto(screen, 'Hola', 10, 10)
class TextArea():
def __init__(self, screen, text, x, y, fuente='Calibri', text_size = 20, color=pygame.Color('black')):
self.coord = x, y
font = pygame.font.SysFont(fuente, text_size)
words = [word.split(' ') for word in text.splitlines()] # 2D array where each row is a list of words.
space = font.size(' ')[0] # The width of a space.
max_width, max_height = screen.get_size()
pos = x,y
for line in words:
for word in line:
word_surface = font.render(word, 0, color)
word_width, word_height = word_surface.get_size()
if x + word_width >= max_width:
x = pos[0] # Reset the x.
y += word_height # Start on new row.
screen.blit(word_surface, (x, y))
x += word_width + space
x = pos[0] # Reset the x.
y += word_height # Start on new row.
self.size = word_width, word_height
pass
def getProperties(self):
return self.size, self.coord
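`TextArea`'s layout loop is a greedy word-wrap: advance `x` by each word's width plus a space, and reset to the left margin when the next word would overflow. The same logic in pure arithmetic, with widths supplied by a callable instead of a rendered surface (a sketch of the algorithm, not the pygame code itself):

```python
def wrap_words(words, word_width, space_width, max_width):
    """Greedy line-wrap mirroring TextArea's layout loop."""
    lines = [[]]
    x = 0
    for word in words:
        width = word_width(word)
        if x + width >= max_width and lines[-1]:
            lines.append([])  # start a new row, as TextArea resets x and bumps y
            x = 0
        lines[-1].append(word)
        x += width + space_width
    return lines
```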
##################### USAGE EXAMPLE ##############################
# textarea1 = TextArea(screen, 'Hola mundo que tal estas hoy')

# sktime/annotation/tests/test_all_annotators.py (Rubiel1/sktime, BSD-3-Clause license)
# -*- coding: utf-8 -*-
"""Tests for sktime annotators."""
import pandas as pd
import pytest
from sktime.registry import all_estimators
from sktime.utils._testing.estimator_checks import _make_args
ALL_ANNOTATORS = all_estimators(estimator_types="series-annotator", return_names=False)
@pytest.mark.parametrize("Estimator", ALL_ANNOTATORS)
def test_output_type(Estimator):
"""Test annotator output type."""
estimator = Estimator.create_test_instance()
args = _make_args(estimator, "fit")
estimator.fit(*args)
args = _make_args(estimator, "predict")
y_pred = estimator.predict(*args)
assert isinstance(y_pred, pd.Series)
| 28.434783 | 87 | 0.750765 | 83 | 654 | 5.674699 | 0.506024 | 0.050955 | 0.080679 | 0.089172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001764 | 0.133028 | 654 | 22 | 88 | 29.727273 | 0.828924 | 0.120795 | 0 | 0 | 0 | 0 | 0.062057 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
074b7ef708bdd483e5b790825c69a90db600e852 | 569 | py | Python | raspberry-pi-camera/cam.py | AlexMassin/mlh-react-vr-website | dc08788ccdecc9923b8dbfd31fa452cb83d214ae | [
"MIT"
] | 1 | 2019-05-19T03:37:26.000Z | 2019-05-19T03:37:26.000Z | raspberry-pi-camera/cam.py | AlexMassin/mlh-react-vr-website | dc08788ccdecc9923b8dbfd31fa452cb83d214ae | [
"MIT"
] | null | null | null | raspberry-pi-camera/cam.py | AlexMassin/mlh-react-vr-website | dc08788ccdecc9923b8dbfd31fa452cb83d214ae | [
"MIT"
] | 1 | 2019-10-02T20:18:54.000Z | 2019-10-02T20:18:54.000Z | from picamera import PiCamera
from time import sleep
import boto3
import os.path
import subprocess
s3 = boto3.client('s3')
bucket = 'cambucket21'
camera = PiCamera()
#camera.resolution(1920,1080)
x = 0
camerafile = x
while True:
if (x == 6):
x = 1
else:
x = x + 1
camera.start_preview()
camera.start_recording('/home/pi/' + str(x) + '.h264')
sleep(2)
camera.stop_recording()
camera.stop_preview()
subprocess.Popen("MP4Box -add " + str(x) + ".h264 " + str(x) +".mp4", shell=True)
sleep(1)
s3.upload_file('/home/pi/' + str(x) + '.mp4',bucket,'/home/pi/' + str(x) + '.mp4')
| 20.321429 | 82 | 0.671353 | 88 | 569 | 4.284091 | 0.477273 | 0.05305 | 0.071618 | 0.079576 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063136 | 0.137083 | 569 | 27 | 83 | 21.074074 | 0.704684 | 0.049209 | 0 | 0 | 0 | 0 | 0.138889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.217391 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0754a45f518b76cfc3fadb21e0d4b383c11aeb7f | 2,937 | py | Python | magma/operators.py | Kuree/magma | be2439aa897768c5810be72e3a55a6f772ac83cf | [
"MIT"
] | null | null | null | magma/operators.py | Kuree/magma | be2439aa897768c5810be72e3a55a6f772ac83cf | [
"MIT"
] | null | null | null | magma/operators.py | Kuree/magma | be2439aa897768c5810be72e3a55a6f772ac83cf | [
"MIT"
] | null | null | null | from magma import _BitType, BitType, BitsType, UIntType, SIntType
class MantleImportError(RuntimeError):
pass
class UndefinedOperatorError(RuntimeError):
pass
def raise_mantle_import_error_unary(self):
raise MantleImportError(
"Operators are not defined until mantle has been imported")
def raise_mantle_import_error_binary(self, other):
raise MantleImportError(
"Operators are not defined until mantle has been imported")
def define_raise_undefined_operator_error(type_str, operator, type_):
if type_ == "unary":
def wrapped(self):
raise UndefinedOperatorError(
f"{operator} is undefined for {type_str}")
else:
assert type_ == "binary"
def wrapped(self, other):
raise UndefinedOperatorError(
f"{operator} is undefined for {type_str}")
return wrapped
for op in ("__eq__", "__ne__"):
setattr(_BitType, op, raise_mantle_import_error_binary)
for op in (
"__and__",
"__or__",
"__xor__",
"__invert__",
"__add__",
"__sub__",
"__mul__",
"__div__",
"__lt__",
# __le__ skipped because it's used for assignment on inputs
# "__le__",
"__gt__",
"__ge__"
):
if op == "__invert__":
setattr(_BitType, op,
define_raise_undefined_operator_error("_BitType", op, "unary"))
else:
setattr(
_BitType, op,
define_raise_undefined_operator_error("_BitType", op, "binary"))
for op in ("__and__",
"__or__",
"__xor__",
"__invert__"
):
if op == "__invert__":
setattr(BitType, op, raise_mantle_import_error_unary)
else:
setattr(BitType, op, raise_mantle_import_error_binary)
for op in ("__and__",
"__or__",
"__xor__",
"__invert__",
"__lshift__",
"__rshift__",
):
if op == "__invert__":
setattr(BitsType, op, raise_mantle_import_error_unary)
else:
setattr(BitsType, op, raise_mantle_import_error_binary)
for op in ("__add__",
"__sub__",
"__mul__",
"__div__",
"__lt__",
# __le__ skipped because it's used for assignment on inputs
# "__le__",
"__gt__",
"__ge__"
):
setattr(BitsType, op,
define_raise_undefined_operator_error("BitsType", op, "binary"))
for op in ("__add__",
"__sub__",
"__mul__",
"__div__",
"__lt__",
# __le__ skipped because it's used for assignment on inputs
# "__le__",
"__gt__",
"__ge__"
):
setattr(SIntType, op, raise_mantle_import_error_binary)
setattr(UIntType, op, raise_mantle_import_error_binary)
| 26.459459 | 79 | 0.571672 | 286 | 2,937 | 5.003497 | 0.237762 | 0.069182 | 0.106918 | 0.138365 | 0.779874 | 0.708595 | 0.628232 | 0.602376 | 0.532495 | 0.436059 | 0 | 0 | 0.330609 | 2,937 | 110 | 80 | 26.7 | 0.727874 | 0.069459 | 0 | 0.619048 | 0 | 0 | 0.194424 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 1 | 0.059524 | false | 0.02381 | 0.178571 | 0 | 0.27381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0758c8b4614be9ea14ff7452e9accfcfb90b432b | 1,263 | py | Python | dvc/utils/stage.py | Abrosimov-a-a/dvc | 93280c937b9160003afb0d2f3fd473c03d6d9673 | [
"Apache-2.0"
] | null | null | null | dvc/utils/stage.py | Abrosimov-a-a/dvc | 93280c937b9160003afb0d2f3fd473c03d6d9673 | [
"Apache-2.0"
] | null | null | null | dvc/utils/stage.py | Abrosimov-a-a/dvc | 93280c937b9160003afb0d2f3fd473c03d6d9673 | [
"Apache-2.0"
] | null | null | null | import yaml
from ruamel.yaml import YAML
from ruamel.yaml.error import YAMLError
try:
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from dvc.exceptions import StageFileCorruptedError
from dvc.utils.compat import open
def load_stage_file(path):
with open(path, "r", encoding="utf-8") as fd:
return parse_stage(fd.read(), path)
def parse_stage(text, path):
try:
return yaml.load(text, Loader=SafeLoader) or {}
except yaml.error.YAMLError as exc:
raise StageFileCorruptedError(path, cause=exc)
def parse_stage_for_update(text, path):
"""Parses text into Python structure.
Unlike `parse_stage()` this returns ordered dicts, values have special
attributes to store comments and line breaks. This allows us to preserve
all of those upon dump.
This one is, however, several times slower than simple `parse_stage()`.
"""
try:
yaml = YAML()
return yaml.load(text) or {}
except YAMLError as exc:
raise StageFileCorruptedError(path, cause=exc)
def dump_stage_file(path, data):
with open(path, "w", encoding="utf-8") as fd:
yaml = YAML()
yaml.default_flow_style = False
yaml.dump(data, fd)
| 26.87234 | 76 | 0.69517 | 174 | 1,263 | 4.971264 | 0.465517 | 0.057803 | 0.03237 | 0.046243 | 0.224277 | 0.131792 | 0.131792 | 0.131792 | 0.131792 | 0 | 0 | 0.002024 | 0.217736 | 1,263 | 46 | 77 | 27.456522 | 0.873482 | 0.218527 | 0 | 0.25 | 0 | 0 | 0.0125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
075cb80186092395148f9c03498c024c22cfd0b5 | 793 | py | Python | utils/nlp.py | splovyt/SFPython-Project-Night | 50f20f581e074401d59d91457bac2a69631bef61 | [
"Apache-2.0"
] | 1 | 2019-04-17T18:02:59.000Z | 2019-04-17T18:02:59.000Z | utils/nlp.py | splovyt/SFPython-Project-Night | 50f20f581e074401d59d91457bac2a69631bef61 | [
"Apache-2.0"
] | null | null | null | utils/nlp.py | splovyt/SFPython-Project-Night | 50f20f581e074401d59d91457bac2a69631bef61 | [
"Apache-2.0"
] | null | null | null | import ssl
import nltk
from textblob import TextBlob
from nltk.corpus import stopwords
# set SSL
try:
_create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
pass
else:
ssl._create_default_https_context = _create_unverified_https_context
# download noun data (if required)
nltk.download('brown')
nltk.download('punkt')
nltk.download('stopwords')
def extract_nouns(sentence):
"""Extract the nouns from a sentence using the 'textblob' library."""
blob = TextBlob(sentence)
return blob.noun_phrases
def remove_stopwords(sentence):
"""Remove stopwords from a sentence and return the list of words."""
blob = TextBlob(sentence)
return [word for word in blob.words if word not in stopwords.words('english') and len(word)>2]
| 26.433333 | 98 | 0.760404 | 108 | 793 | 5.416667 | 0.444444 | 0.082051 | 0.071795 | 0.095727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001493 | 0.155107 | 793 | 29 | 99 | 27.344828 | 0.871642 | 0.211854 | 0 | 0.105263 | 0 | 0 | 0.042414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.052632 | 0.210526 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
075cf0dd079f839e7d44c9491837f8a19123cdd5 | 1,418 | py | Python | toolbox/core/management/commands/celery_beat_resource_scraper.py | akshedu/toolbox | 7c647433b68f1098ee4c8623f836f74785dc970c | [
"MIT"
] | null | null | null | toolbox/core/management/commands/celery_beat_resource_scraper.py | akshedu/toolbox | 7c647433b68f1098ee4c8623f836f74785dc970c | [
"MIT"
] | null | null | null | toolbox/core/management/commands/celery_beat_resource_scraper.py | akshedu/toolbox | 7c647433b68f1098ee4c8623f836f74785dc970c | [
"MIT"
] | null | null | null |
from django_celery_beat.models import PeriodicTask, IntervalSchedule
from django.core.management.base import BaseCommand
from django.db import IntegrityError
class Command(BaseCommand):
def handle(self, *args, **options):
try:
schedule_channel, created = IntervalSchedule.objects.get_or_create(
every=4,
period=IntervalSchedule.HOURS,
)
except IntegrityError as e:
pass
try:
schedule_video, created = IntervalSchedule.objects.get_or_create(
every=6,
period=IntervalSchedule.HOURS,
)
except IntegrityError as e:
pass
try:
PeriodicTask.objects.create(
interval=schedule_channel,
name='Scrape Channels',
task='toolbox.scraper.tasks.scrape_youtube_channels',
)
except IntegrityError as e:
pass
try:
PeriodicTask.objects.create(
interval=schedule_video,
name='Scrape Videos',
task='toolbox.scraper.tasks.scrape_youtube_videos',
)
except IntegrityError as e:
pass
| 32.227273 | 79 | 0.499295 | 110 | 1,418 | 6.309091 | 0.445455 | 0.115274 | 0.126801 | 0.132565 | 0.600865 | 0.56196 | 0.458213 | 0.325648 | 0.325648 | 0.204611 | 0 | 0.002535 | 0.443583 | 1,418 | 43 | 80 | 32.976744 | 0.87706 | 0 | 0 | 0.457143 | 0 | 0 | 0.081979 | 0.062191 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0.114286 | 0.085714 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0760aecd744d04b7a42ae02e90ca8b423ee0a619 | 2,834 | py | Python | ucscsdk/mometa/storage/StorageScsiLunRef.py | parag-may4/ucscsdk | 2ea762fa070330e3a4e2c21b46b157469555405b | [
"Apache-2.0"
] | 9 | 2016-12-22T08:39:25.000Z | 2019-09-10T15:36:19.000Z | ucscsdk/mometa/storage/StorageScsiLunRef.py | parag-may4/ucscsdk | 2ea762fa070330e3a4e2c21b46b157469555405b | [
"Apache-2.0"
] | 10 | 2017-01-31T06:59:56.000Z | 2021-11-09T09:14:37.000Z | ucscsdk/mometa/storage/StorageScsiLunRef.py | parag-may4/ucscsdk | 2ea762fa070330e3a4e2c21b46b157469555405b | [
"Apache-2.0"
] | 13 | 2016-11-14T07:42:58.000Z | 2022-02-10T17:32:05.000Z | """This module contains the general information for StorageScsiLunRef ManagedObject."""
from ...ucscmo import ManagedObject
from ...ucsccoremeta import UcscVersion, MoPropertyMeta, MoMeta
from ...ucscmeta import VersionMeta
class StorageScsiLunRefConsts():
pass
class StorageScsiLunRef(ManagedObject):
"""This is StorageScsiLunRef class."""
consts = StorageScsiLunRefConsts()
naming_props = set([u'id'])
mo_meta = MoMeta("StorageScsiLunRef", "storageScsiLunRef", "scsi-lun-ref-[id]", VersionMeta.Version131a, "InputOutput", 0x1f, [], ["read-only"], [u'storageLunReplica', u'storageLunSnapshot', u'storageScsiLun', u'storageVirtualDrive'], [], ["Get"])
prop_meta = {
"child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version131a, MoPropertyMeta.INTERNAL, None, None, None, r"""((deleteAll|ignore|deleteNonPresent),){0,2}(deleteAll|ignore|deleteNonPresent){0,1}""", [], []),
"dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version131a, MoPropertyMeta.READ_ONLY, 0x2, 0, 256, None, [], []),
"id": MoPropertyMeta("id", "id", "uint", VersionMeta.Version131a, MoPropertyMeta.NAMING, 0x4, None, None, None, [], []),
"ls_dn": MoPropertyMeta("ls_dn", "lsDn", "string", VersionMeta.Version131a, MoPropertyMeta.READ_ONLY, None, 0, 256, None, [], []),
"lun_name": MoPropertyMeta("lun_name", "lunName", "string", VersionMeta.Version131a, MoPropertyMeta.READ_ONLY, None, None, None, r"""[\-\.:_a-zA-Z0-9]{0,16}""", [], []),
"pn_dn": MoPropertyMeta("pn_dn", "pnDn", "string", VersionMeta.Version141a, MoPropertyMeta.READ_ONLY, None, 0, 256, None, [], []),
"profile_dn": MoPropertyMeta("profile_dn", "profileDn", "string", VersionMeta.Version131a, MoPropertyMeta.READ_ONLY, None, 0, 256, None, [], []),
"rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version131a, MoPropertyMeta.READ_ONLY, 0x8, 0, 256, None, [], []),
"status": MoPropertyMeta("status", "status", "string", VersionMeta.Version131a, MoPropertyMeta.READ_WRITE, 0x10, None, None, r"""((removed|created|modified|deleted),){0,3}(removed|created|modified|deleted){0,1}""", [], []),
}
prop_map = {
"childAction": "child_action",
"dn": "dn",
"id": "id",
"lsDn": "ls_dn",
"lunName": "lun_name",
"pnDn": "pn_dn",
"profileDn": "profile_dn",
"rn": "rn",
"status": "status",
}
def __init__(self, parent_mo_or_dn, id, **kwargs):
self._dirty_mask = 0
self.id = id
self.child_action = None
self.ls_dn = None
self.lun_name = None
self.pn_dn = None
self.profile_dn = None
self.status = None
ManagedObject.__init__(self, "StorageScsiLunRef", parent_mo_or_dn, **kwargs)
| 50.607143 | 251 | 0.642202 | 301 | 2,834 | 5.887043 | 0.315615 | 0.111738 | 0.162528 | 0.165914 | 0.235892 | 0.176072 | 0.119639 | 0.069977 | 0.069977 | 0.069977 | 0 | 0.03223 | 0.178899 | 2,834 | 55 | 252 | 51.527273 | 0.729265 | 0.040226 | 0 | 0 | 0 | 0.04878 | 0.24003 | 0.069055 | 0 | 0 | 0.006278 | 0 | 0 | 1 | 0.02439 | false | 0.02439 | 0.073171 | 0 | 0.268293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0761a4f4179e9679d7d567a51af6174207abac78 | 16,697 | py | Python | saxstools/fullsaxs.py | latrocinia/saxstools | 8e88474f62466b745791c0ccbb07c80a959880f3 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | saxstools/fullsaxs.py | latrocinia/saxstools | 8e88474f62466b745791c0ccbb07c80a959880f3 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | saxstools/fullsaxs.py | latrocinia/saxstools | 8e88474f62466b745791c0ccbb07c80a959880f3 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | from __future__ import print_function, absolute_import, division
from sys import stdout as _stdout
from time import time as _time
import numpy as np
try:
import pyfftw
pyfftw.interfaces.cache.enable()
pyfftw.interfaces.cache.set_keepalive_time(10)
rfftn = pyfftw.interfaces.numpy_fft.rfftn
irfftn = pyfftw.interfaces.numpy_fft.irfftn
except ImportError:
from numpy.fft import rfftn, irfftn
from disvis import volume
from disvis.points import dilate_points
from disvis.libdisvis import (rotate_image3d, dilate_points_add, longest_distance)
from powerfit.solutions import Solutions
from saxstools.saxs_curve import scattering_curve, create_fifj_lookup_table
from saxstools.helpers import coarse_grain
from saxstools.libsaxstools import calc_chi2
from saxstools.kernels import Kernels as saxs_Kernels
try:
import pyopencl as cl
import pyopencl.array as cl_array
import disvis.pyclfft
from disvis.kernels import Kernels
from disvis import pyclfft
except ImportError:
pass
class FullSAXS(object):
def __init__(self):
# parameters to be defined
self._receptor = None
self._ligand = None
# parameters with standard values
self.rotations = [[[1, 0, 0], [0, 1, 0], [0, 0, 1]]]
self.weights = None
self.voxelspacing = 1.0
self.interaction_radius = 2.5
self.max_clash = 100
self.min_interaction = 300
self.coarse_grain = True
self.beads_per_residue = 2
# CPU or GPU
self._queue = None
# unchangeable
self._data = {}
self._q = None
self._Iq = None
self._sq = None
@property
def receptor(self):
return self._receptor
@receptor.setter
def receptor(self, receptor):
self._receptor = receptor.duplicate()
@property
def ligand(self):
return self._ligand
@ligand.setter
def ligand(self, ligand):
self._ligand = ligand.duplicate()
@property
def rotations(self):
return self._rotations
@rotations.setter
def rotations(self, rotations):
rotmat = np.asarray(rotations, dtype=np.float64)
if rotmat.ndim != 3:
raise ValueError("Input should be a list of rotation matrices.")
self._rotations = rotmat
@property
def weights(self):
return self._weights
@weights.setter
def weights(self, weights):
self._weights = weights
@property
def interaction_radius(self):
return self._interaction_radius
@interaction_radius.setter
def interaction_radius(self, radius):
if radius <= 0:
raise ValueError("Interaction radius should be bigger than zero")
self._interaction_radius = radius
@property
def voxelspacing(self):
return self._voxelspacing
@voxelspacing.setter
def voxelspacing(self, voxelspacing):
self._voxelspacing = voxelspacing
@property
def max_clash(self):
return self._max_clash
@max_clash.setter
def max_clash(self, max_clash):
if max_clash < 0:
raise ValueError("Maximum allowed clashing volume cannot be negative")
self._max_clash = max_clash + 0.9
@property
def min_interaction(self):
return self._min_interaction
@min_interaction.setter
def min_interaction(self, min_interaction):
if min_interaction < 1:
raise ValueError("Minimum required interaction volume cannot be smaller than 1")
self._min_interaction = min_interaction + 0.9
@property
def queue(self):
return self._queue
@queue.setter
def queue(self, queue):
self._queue = queue
@property
def data(self):
return self._data
@property
def saxsdata(self):
return self._q, self._Iq, self._sq
@saxsdata.setter
def saxsdata(self, saxsdata):
self._q, self._Iq, self._sq = saxsdata
def _initialize(self):
# check if requirements are set
if any(x is None for x in (self.receptor, self.ligand)):
raise ValueError("Not all requirements are met for a search")
if self.weights is None:
self.weights = np.ones(self.rotations.shape[0], dtype=np.float64)
if len(self.weights) != len(self.rotations):
raise ValueError("")
d = self.data
# determine size for grid
shape = grid_shape(self.receptor.coor, self.ligand.coor, self.voxelspacing)
# calculate the interaction surface and core of the receptor
vdw_radii = self.receptor.vdw_radius
radii = vdw_radii + self.interaction_radius
d['rsurf'] = rsurface(self.receptor.coor, radii,
shape, self.voxelspacing)
d['rcore'] = rsurface(self.receptor.coor, vdw_radii,
shape, self.voxelspacing)
# keep track of some data for later calculations
d['origin'] = np.asarray(d['rcore'].origin, dtype=np.float64)
d['shape'] = d['rcore'].shape
d['start'] = d['rcore'].start
d['nrot'] = self.rotations.shape[0]
# set ligand center to the origin of the receptor map
# and make a grid of the ligand
radii = self.ligand.vdw_radius
d['lsurf'] = dilate_points((self.ligand.coor - self.ligand.center \
+ self.receptor.center), radii, volume.zeros_like(d['rcore']))
d['im_center'] = np.asarray((self.receptor.center - d['rcore'].origin)/self.voxelspacing, dtype=np.float64)
d['max_clash'] = self.max_clash/self.voxelspacing**3
d['min_interaction'] = self.min_interaction/self.voxelspacing**3
# SAXS data
d['q'] = self._q
d['targetIq'] = self._Iq
d['sq'] = self._sq
if self.coarse_grain:
e1, xyz1 = coarse_grain(self.receptor, bpr=self.beads_per_residue)
e2, xyz2 = coarse_grain(self.ligand, bpr=self.beads_per_residue)
else:
e1, xyz1 = self.receptor.elements, self.receptor.coor
e2, xyz2 = self.ligand.elements, self.ligand.coor
d['base_Iq'] = scattering_curve(self._q, e1, xyz1, bpr=self.beads_per_residue)
d['base_Iq'] += scattering_curve(self._q, e2, xyz2, bpr=self.beads_per_residue)
d['fifj'], d['rind'], d['lind'] = create_fifj_lookup_table(d['q'], e1, e2, bpr=self.beads_per_residue)
d['rxyz'] = xyz1
d['lxyz'] = xyz2 - self.ligand.center
d['chi2'] = np.zeros(d['rcore'].shape, dtype=np.float64)
d['best_chi2'] = np.zeros_like(d['chi2'])
def search(self):
self._initialize()
if self.queue is None:
self._cpu_init()
self._cpu_search()
else:
self._gpu_init()
self._gpu_search()
if _stdout.isatty():
print()
d = self.data
ind = d['best_chi2'] > 0
d['best_chi2'][ind] -= d['best_chi2'][ind].min()
best_chi2 = volume.Volume(d['best_chi2'], voxelspacing=self.voxelspacing, origin=d['origin'])
return Solutions(best_chi2, self.rotations, d['rot_ind'])
def _cpu_init(self):
self.cpu_data = {}
c = self.cpu_data
d = self.data
c['rcore'] = d['rcore'].array
c['rsurf'] = d['rsurf'].array
c['im_lsurf'] = d['lsurf'].array
c['lsurf'] = np.zeros_like(c['rcore'])
c['clashvol'] = np.zeros_like(c['rcore'])
c['intervol'] = np.zeros_like(c['rcore'])
c['interspace'] = np.zeros_like(c['rcore'], dtype=np.int64)
# complex arrays
c['ft_shape'] = list(d['shape'])
c['ft_shape'][-1] = d['shape'][-1]//2 + 1
c['ft_lsurf'] = np.zeros(c['ft_shape'], dtype=np.complex128)
c['ft_rcore'] = np.zeros(c['ft_shape'], dtype=np.complex128)
c['ft_rsurf'] = np.zeros(c['ft_shape'], dtype=np.complex128)
# initial calculations
c['ft_rcore'] = rfftn(c['rcore'])
c['ft_rsurf'] = rfftn(c['rsurf'])
c['rotmat'] = np.asarray(self.rotations, dtype=np.float64)
c['weights'] = np.asarray(self.weights, dtype=np.float64)
c['nrot'] = d['nrot']
c['shape'] = d['shape']
c['max_clash'] = d['max_clash']
c['min_interaction'] = d['min_interaction']
c['vlength'] = int(np.linalg.norm(self.ligand.coor - \
self.ligand.center, axis=1).max() + \
self.interaction_radius + 1.5)/self.voxelspacing
c['origin'] = d['origin']
# SAXS arrays
c['q'] = d['q']
c['targetIq'] = d['targetIq']
c['sq'] = d['sq']
c['base_Iq'] = d['base_Iq']
c['fifj'] = d['fifj']
c['rind'] = d['rind']
c['lind'] = d['lind']
c['rxyz'] = d['rxyz']
c['lxyz'] = d['lxyz']
c['chi2'] = d['chi2']
c['best_chi2'] = d['best_chi2']
c['rot_ind'] = np.zeros(d['shape'], dtype=np.int32)
c['Iq'] = np.zeros_like(c['targetIq'])
c['tmplxyz'] = np.zeros_like(c['lxyz'])
def _cpu_search(self):
d = self.data
c = self.cpu_data
time0 = _time()
for n in xrange(c['rotmat'].shape[0]):
# rotate ligand image
rotate_image3d(c['im_lsurf'], c['vlength'],
np.linalg.inv(c['rotmat'][n]), d['im_center'], c['lsurf'])
c['ft_lsurf'] = rfftn(c['lsurf']).conj()
c['clashvol'] = irfftn(c['ft_lsurf'] * c['ft_rcore'], s=c['shape'])
c['intervol'] = irfftn(c['ft_lsurf'] * c['ft_rsurf'], s=c['shape'])
np.logical_and(c['clashvol'] < c['max_clash'],
c['intervol'] > c['min_interaction'],
c['interspace'])
print('Number of complexes to analyze: ', c['interspace'].sum())
c['chi2'].fill(0)
calc_chi2(c['interspace'], c['q'], c['base_Iq'],
c['rind'], c['rxyz'], c['lind'], (np.mat(c['rotmat'][n])*np.mat(c['lxyz']).T).T,
c['origin'], self.voxelspacing,
c['fifj'], c['targetIq'], c['sq'], c['chi2'])
ind = c['chi2'] > c['best_chi2']
c['best_chi2'][ind] = c['chi2'][ind]
c['rot_ind'][ind] = n
if _stdout.isatty():
self._print_progress(n, c['nrot'], time0)
d['best_chi2'] = c['best_chi2']
d['rot_ind'] = c['rot_ind']
def _print_progress(self, n, total, time0):
m = n + 1
pdone = m/total
t = _time() - time0
_stdout.write('\r{:d}/{:d} ({:.2%}, ETA: {:d}s) '\
.format(m, total, pdone,
int(t/pdone - t)))
_stdout.flush()
def _gpu_init(self):
self.gpu_data = {}
g = self.gpu_data
d = self.data
q = self.queue
g['rcore'] = cl_array.to_device(q, float32array(d['rcore'].array))
g['rsurf'] = cl_array.to_device(q, float32array(d['rsurf'].array))
g['im_lsurf'] = cl.image_from_array(q.context, float32array(d['lsurf'].array))
g['sampler'] = cl.Sampler(q.context, False, cl.addressing_mode.CLAMP,
cl.filter_mode.LINEAR)
g['lsurf'] = cl_array.zeros_like(g['rcore'])
g['clashvol'] = cl_array.zeros_like(g['rcore'])
g['intervol'] = cl_array.zeros_like(g['rcore'])
g['interspace'] = cl_array.zeros(q, d['shape'], dtype=np.int32)
# complex arrays
g['ft_shape'] = list(d['shape'])
g['ft_shape'][0] = d['shape'][0]//2 + 1
g['ft_rcore'] = cl_array.zeros(q, g['ft_shape'], dtype=np.complex64)
g['ft_rsurf'] = cl_array.zeros_like(g['ft_rcore'])
g['ft_lsurf'] = cl_array.zeros_like(g['ft_rcore'])
g['ft_clashvol'] = cl_array.zeros_like(g['ft_rcore'])
g['ft_intervol'] = cl_array.zeros_like(g['ft_rcore'])
# allocate SAXS arrays
g['q'] = cl_array.to_device(q, float32array(d['q']))
g['targetIq'] = cl_array.to_device(q, float32array(d['targetIq']))
g['sq'] = cl_array.to_device(q, float32array(d['sq']))
g['base_Iq'] = cl_array.to_device(q, float32array(d['base_Iq']))
g['fifj'] = cl_array.to_device(q, float32array(d['fifj']))
g['rind'] = cl_array.to_device(q, d['rind'].astype(np.int32))
g['lind'] = cl_array.to_device(q, d['lind'].astype(np.int32))
g_rxyz = np.zeros((d['rxyz'].shape[0], 4), dtype=np.float32)
g_rxyz[:, :3] = d['rxyz'][:]
g_lxyz = np.zeros((d['lxyz'].shape[0], 4), dtype=np.float32)
g_lxyz[:, :3] = d['lxyz'][:]
g['rxyz'] = cl_array.to_device(q, g_rxyz)
g['lxyz'] = cl_array.to_device(q, g_lxyz)
g['rot_lxyz'] = cl_array.zeros_like(g['lxyz'])
g['chi2'] = cl_array.to_device(q, d['chi2'].astype(np.float32))
g['best_chi2'] = cl_array.to_device(q, d['best_chi2'].astype(np.float32))
g['rot_ind'] = cl_array.zeros(q, d['shape'], dtype=np.int32)
g['origin'] = np.zeros(4, dtype=np.float32)
g['origin'][:3] = d['origin'].astype(np.float32)
g['voxelspacing'] = np.float32(self.voxelspacing)
# kernels
g['k'] = Kernels(q.context)
g['saxs_k'] = saxs_Kernels(q.context)
g['k'].rfftn = pyclfft.RFFTn(q.context, d['shape'])
g['k'].irfftn = pyclfft.iRFFTn(q.context, d['shape'])
g['k'].rfftn(q, g['rcore'], g['ft_rcore'])
g['k'].rfftn(q, g['rsurf'], g['ft_rsurf'])
g['nrot'] = d['nrot']
g['max_clash'] = d['max_clash']
g['min_interaction'] = d['min_interaction']
def _gpu_search(self):
d = self.data
g = self.gpu_data
q = self.queue
k = g['k']
time0 = _time()
for n in xrange(g['nrot']):
k.rotate_image3d(q, g['sampler'], g['im_lsurf'],
self.rotations[n], g['lsurf'], d['im_center'])
k.rfftn(q, g['lsurf'], g['ft_lsurf'])
k.c_conj_multiply(q, g['ft_lsurf'], g['ft_rcore'], g['ft_clashvol'])
k.irfftn(q, g['ft_clashvol'], g['clashvol'])
k.c_conj_multiply(q, g['ft_lsurf'], g['ft_rsurf'], g['ft_intervol'])
k.irfftn(q, g['ft_intervol'], g['intervol'])
k.touch(q, g['clashvol'], g['max_clash'],
g['intervol'], g['min_interaction'],
g['interspace'])
g['saxs_k'].rotate_points(q, g['lxyz'], self.rotations[n], g['rot_lxyz'])
k.fill(q, g['chi2'], 0)
g['saxs_k'].calc_chi2(q, g['interspace'], g['q'], g['base_Iq'],
g['rind'], g['rxyz'], g['lind'], g['rot_lxyz'], g['origin'],
g['voxelspacing'], g['fifj'], g['targetIq'], g['sq'], g['chi2'])
g['saxs_k'].take_best(q, g['chi2'], g['best_chi2'], g['rot_ind'], n)
if _stdout.isatty():
self._print_progress(n, g['nrot'], time0)
self.queue.finish()
d['best_chi2'] = g['best_chi2'].get()
d['rot_ind'] = g['rot_ind'].get()
def rsurface(points, radius, shape, voxelspacing):
dimensions = [x*voxelspacing for x in shape]
origin = volume_origin(points, dimensions)
rsurf = volume.zeros(shape, voxelspacing, origin)
rsurf = dilate_points(points, radius, rsurf)
return rsurf
def volume_origin(points, dimensions):
center = points.mean(axis=0)
origin = [(c - d/2.0) for c, d in zip(center, dimensions)]
return origin
def grid_restraints(restraints, voxelspacing, origin, lcenter):
nrestraints = len(restraints)
g_restraints = np.zeros((nrestraints, 8), dtype=np.float64)
for n in range(nrestraints):
r_sel, l_sel, mindis, maxdis = restraints[n]
r_pos = (r_sel.center - origin)/voxelspacing
l_pos = (l_sel.center - lcenter)/voxelspacing
g_restraints[n, 0:3] = r_pos
g_restraints[n, 3:6] = l_pos
g_restraints[n, 6] = mindis/voxelspacing
g_restraints[n, 7] = maxdis/voxelspacing
return g_restraints
def grid_shape(points1, points2, voxelspacing):
shape = min_grid_shape(points1, points2, voxelspacing)
shape = [volume.radix235(x) for x in shape]
return shape
def min_grid_shape(points1, points2, voxelspacing):
# the minimal grid shape is the size of the fixed protein in
# each dimension and the longest diameter is the scanning chain
dimensions1 = points1.ptp(axis=0)
dimension2 = longest_distance(points2)
grid_shape = np.asarray(((dimensions1 + dimension2)/voxelspacing) + 10, dtype=np.int32)[::-1]
return grid_shape
def float32array(array_like):
return np.asarray(array_like, dtype=np.float32)
| 32.675147 | 115 | 0.581302 | 2,239 | 16,697 | 4.171952 | 0.129075 | 0.018735 | 0.012525 | 0.020876 | 0.22139 | 0.14902 | 0.091532 | 0.039503 | 0.036078 | 0.013061 | 0 | 0.018114 | 0.262682 | 16,697 | 510 | 116 | 32.739216 | 0.740638 | 0.033898 | 0 | 0.1 | 0 | 0 | 0.128305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002778 | 0.058333 | null | null | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0765d0b1f7f6046c9a5ec38c71317e234a345a45 | 270 | py | Python | pyrocco/__init__.py | joaopalmeiro/pyrocco | 4144f56d654500c3ec49cb04c06b98296004eafe | [
"MIT"
] | null | null | null | pyrocco/__init__.py | joaopalmeiro/pyrocco | 4144f56d654500c3ec49cb04c06b98296004eafe | [
"MIT"
] | 4 | 2021-05-31T16:44:16.000Z | 2021-05-31T17:08:04.000Z | pyrocco/__init__.py | joaopalmeiro/pyrocco | 4144f56d654500c3ec49cb04c06b98296004eafe | [
"MIT"
] | null | null | null | __package_name__ = "pyrocco"
__version__ = "0.1.0"
__author__ = "João Palmeiro"
__author_email__ = "jm.palmeiro@campus.fct.unl.pt"
__description__ = "A Python CLI to add the Party Parrot to a custom background image."
__url__ = "https://github.com/joaopalmeiro/pyrocco"
| 38.571429 | 86 | 0.766667 | 38 | 270 | 4.763158 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012552 | 0.114815 | 270 | 6 | 87 | 45 | 0.74477 | 0 | 0 | 0 | 0 | 0 | 0.588889 | 0.107407 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0768b4de117d71513a10b4439456e7226bc8f05f | 850 | py | Python | 2020/day08/machine.py | ingjrs01/adventofcode | c5e4f0158dac0efc2dbfc10167f2700693b41fea | [
"Apache-2.0"
] | null | null | null | 2020/day08/machine.py | ingjrs01/adventofcode | c5e4f0158dac0efc2dbfc10167f2700693b41fea | [
"Apache-2.0"
] | null | null | null | 2020/day08/machine.py | ingjrs01/adventofcode | c5e4f0158dac0efc2dbfc10167f2700693b41fea | [
"Apache-2.0"
] | null | null | null | class Machine():
def __init__(self):
self.pointer = 0
self.accum = 0
self.visited = []
def run(self,program):
salir = False
while (salir == False):
if (self.pointer in self.visited):
return False
if (self.pointer >= len(program)):
return True
self.visited.append(self.pointer)
incremento = 1
if (program[self.pointer][0] == "acc"):
self.accum += program[self.pointer][1]
if (program[self.pointer][0] == "jmp"):
incremento = program[self.pointer][1]
self.pointer += incremento
return True
def getVisited(self):
return self.visited
def getAccum(self):
return self.accum
| 22.368421 | 54 | 0.483529 | 84 | 850 | 4.845238 | 0.309524 | 0.243243 | 0.176904 | 0.088452 | 0.108108 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0.013917 | 0.408235 | 850 | 37 | 55 | 22.972973 | 0.795229 | 0 | 0 | 0.083333 | 0 | 0 | 0.007059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0.083333 | 0.416667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4acaa4e6a8b6fa3eb236788a62a84f44c80e376f | 3,451 | py | Python | ingenico/direct/sdk/domain/customer_token.py | Ingenico/direct-sdk-python3 | d2b30b8e8afb307153a1f19ac4c054d5344449ce | [
"Apache-2.0"
] | null | null | null | ingenico/direct/sdk/domain/customer_token.py | Ingenico/direct-sdk-python3 | d2b30b8e8afb307153a1f19ac4c054d5344449ce | [
"Apache-2.0"
] | 1 | 2021-03-30T12:55:39.000Z | 2021-04-08T08:23:27.000Z | ingenico/direct/sdk/domain/customer_token.py | Ingenico/direct-sdk-python3 | d2b30b8e8afb307153a1f19ac4c054d5344449ce | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
#
# This class was auto-generated from the API references found at
# https://support.direct.ingenico.com/documentation/api/reference/
#
from ingenico.direct.sdk.data_object import DataObject
from ingenico.direct.sdk.domain.address import Address
from ingenico.direct.sdk.domain.company_information import CompanyInformation
from ingenico.direct.sdk.domain.personal_information_token import PersonalInformationToken
class CustomerToken(DataObject):
    __billing_address = None
    __company_information = None
    __personal_information = None

    @property
    def billing_address(self) -> Address:
        """
        | Object containing billing address details

        Type: :class:`ingenico.direct.sdk.domain.address.Address`
        """
        return self.__billing_address

    @billing_address.setter
    def billing_address(self, value: Address):
        self.__billing_address = value

    @property
    def company_information(self) -> CompanyInformation:
        """
        | Object containing company information

        Type: :class:`ingenico.direct.sdk.domain.company_information.CompanyInformation`
        """
        return self.__company_information

    @company_information.setter
    def company_information(self, value: CompanyInformation):
        self.__company_information = value

    @property
    def personal_information(self) -> PersonalInformationToken:
        """
        Type: :class:`ingenico.direct.sdk.domain.personal_information_token.PersonalInformationToken`
        """
        return self.__personal_information

    @personal_information.setter
    def personal_information(self, value: PersonalInformationToken):
        self.__personal_information = value

    def to_dictionary(self):
        dictionary = super(CustomerToken, self).to_dictionary()
        if self.billing_address is not None:
            dictionary['billingAddress'] = self.billing_address.to_dictionary()
        if self.company_information is not None:
            dictionary['companyInformation'] = self.company_information.to_dictionary()
        if self.personal_information is not None:
            dictionary['personalInformation'] = self.personal_information.to_dictionary()
        return dictionary

    def from_dictionary(self, dictionary):
        super(CustomerToken, self).from_dictionary(dictionary)
        if 'billingAddress' in dictionary:
            if not isinstance(dictionary['billingAddress'], dict):
                raise TypeError('value \'{}\' is not a dictionary'.format(dictionary['billingAddress']))
            value = Address()
            self.billing_address = value.from_dictionary(dictionary['billingAddress'])
        if 'companyInformation' in dictionary:
            if not isinstance(dictionary['companyInformation'], dict):
                raise TypeError('value \'{}\' is not a dictionary'.format(dictionary['companyInformation']))
            value = CompanyInformation()
            self.company_information = value.from_dictionary(dictionary['companyInformation'])
        if 'personalInformation' in dictionary:
            if not isinstance(dictionary['personalInformation'], dict):
                raise TypeError('value \'{}\' is not a dictionary'.format(dictionary['personalInformation']))
            value = PersonalInformationToken()
            self.personal_information = value.from_dictionary(dictionary['personalInformation'])
        return self
| 41.578313 | 109 | 0.703854 | 331 | 3,451 | 7.151057 | 0.18429 | 0.091255 | 0.050275 | 0.058302 | 0.437262 | 0.370934 | 0.109421 | 0.069708 | 0.069708 | 0.069708 | 0 | 0.000364 | 0.204578 | 3,451 | 82 | 110 | 42.085366 | 0.861931 | 0.135903 | 0 | 0.056604 | 1 | 0 | 0.116183 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.150943 | false | 0 | 0.075472 | 0 | 0.396226 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ace72273e1ae90dc1c68aa24e3b23afcdc01695 | 2,141 | py | Python | djangosige/apps/cadastro/models/empresa.py | MateusMolina/lunoERP | 0880adb93b3a2d3169c6780efa60a229272f927a | [
"MIT"
] | null | null | null | djangosige/apps/cadastro/models/empresa.py | MateusMolina/lunoERP | 0880adb93b3a2d3169c6780efa60a229272f927a | [
"MIT"
] | null | null | null | djangosige/apps/cadastro/models/empresa.py | MateusMolina/lunoERP | 0880adb93b3a2d3169c6780efa60a229272f927a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
from django.db import models
from django.db.models.signals import post_delete
from django.dispatch import receiver
from .base import Pessoa
from djangosige.apps.login.models import Usuario
from djangosige.configs.settings import MEDIA_ROOT
def logo_directory_path(instance, filename):
    extension = os.path.splitext(filename)[1]
    return 'imagens/empresas/logo_{0}_{1}{2}'.format(instance.nome_razao_social, instance.id, extension)


class Empresa(Pessoa):
    logo_file = models.ImageField(
        upload_to=logo_directory_path, default='imagens/logo.png', blank=True, null=True)
    cnae = models.CharField(max_length=10, blank=True, null=True)
    iest = models.CharField(max_length=32, null=True, blank=True)

    class Meta:
        verbose_name = "Empresa"

    @property
    def caminho_completo_logo(self):
        if self.logo_file.name != 'imagens/logo.png':
            return os.path.join(MEDIA_ROOT, self.logo_file.name)
        else:
            return ''

    def save(self, *args, **kwargs):
        # Delete the previous logo if one already exists
        try:
            obj = Empresa.objects.get(id=self.id)
            if obj.logo_file != self.logo_file and obj.logo_file != 'imagens/logo.png':
                obj.logo_file.delete(save=False)
        except Empresa.DoesNotExist:
            pass
        super(Empresa, self).save(*args, **kwargs)

    def __unicode__(self):
        return u'%s' % self.nome_razao_social

    def __str__(self):
        return u'%s' % self.nome_razao_social


# Delete the logo file when the company is deleted
@receiver(post_delete, sender=Empresa)
def logo_post_delete_handler(sender, instance, **kwargs):
    # Do not delete the default image 'logo.png'
    if instance.logo_file != 'imagens/logo.png':
        instance.logo_file.delete(False)


class MinhaEmpresa(models.Model):
    m_empresa = models.ForeignKey(
        Empresa, on_delete=models.CASCADE, related_name='minha_empresa', blank=True, null=True)
    m_usuario = models.ForeignKey(
        Usuario, on_delete=models.CASCADE, related_name='empresa_usuario')
| 32.439394 | 105 | 0.666978 | 278 | 2,141 | 4.956835 | 0.388489 | 0.05225 | 0.040639 | 0.03701 | 0.123367 | 0.091437 | 0.044993 | 0.044993 | 0 | 0 | 0 | 0.005425 | 0.225128 | 2,141 | 65 | 106 | 32.938462 | 0.825196 | 0.061653 | 0 | 0.045455 | 0 | 0 | 0.069624 | 0.016503 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0.022727 | 0.159091 | 0.045455 | 0.590909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
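The `logo_directory_path` handler above builds a per-company upload path from the model instance. A standalone sketch of that logic, with a hypothetical stand-in object replacing the Django model instance:

```python
import os

def logo_directory_path(instance, filename):
    # identical logic to the upload_to handler above
    extension = os.path.splitext(filename)[1]
    return 'imagens/empresas/logo_{0}_{1}{2}'.format(
        instance.nome_razao_social, instance.id, extension)

class FakeEmpresa:
    # hypothetical stand-in for a saved Empresa model instance
    nome_razao_social = 'Acme'
    id = 7

print(logo_directory_path(FakeEmpresa(), 'logo.png'))
# → imagens/empresas/logo_Acme_7.png
```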
4ad04912e975ba67417ff28c203441d4697e2178 | 846 | py | Python | autocomplete/migrations/0001_initial.py | openshift-eng/art-dashboard-server | af4e78b3d2213c30038cf69de646f25fd57c9e3c | [
"Apache-2.0"
] | 1 | 2020-09-21T06:48:47.000Z | 2020-09-21T06:48:47.000Z | autocomplete/migrations/0001_initial.py | adarshtri/build_interface_server | af4e78b3d2213c30038cf69de646f25fd57c9e3c | [
"Apache-2.0"
] | 5 | 2021-02-05T19:43:08.000Z | 2021-06-04T23:23:29.000Z | autocomplete/migrations/0001_initial.py | openshift-eng/art-dashboard-server | af4e78b3d2213c30038cf69de646f25fd57c9e3c | [
"Apache-2.0"
] | 6 | 2021-02-06T07:21:37.000Z | 2021-06-07T12:40:37.000Z | # Generated by Django 3.0.7 on 2020-07-27 19:23
import build.models
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='AutoCompleteRecord',
            fields=[
                ('updated_at', build.models.UnixTimestampField(auto_created=True, null=True)),
                ('created_at', build.models.UnixTimestampField(auto_created=True, null=True)),
                ('log_autocomplete_record_id', models.AutoField(primary_key=True, serialize=False)),
                ('type', models.CharField(max_length=50)),
                ('value', models.CharField(max_length=300)),
            ],
            options={
                'db_table': 'log_autocomplete_record',
            },
        ),
    ]
| 29.172414 | 100 | 0.588652 | 83 | 846 | 5.843373 | 0.626506 | 0.068041 | 0.053608 | 0.127835 | 0.22268 | 0.22268 | 0.22268 | 0.22268 | 0.22268 | 0 | 0 | 0.033445 | 0.293144 | 846 | 28 | 101 | 30.214286 | 0.777592 | 0.053191 | 0 | 0 | 1 | 0 | 0.130163 | 0.061327 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ad19826ee08450eee4ee8d57542ce3dfd0b5399 | 636 | py | Python | unet3d/config.py | fcollman/pytorch-3dunet | 303336bfdc0234f075c70e0c59759d09bc4081b8 | [
"MIT"
] | null | null | null | unet3d/config.py | fcollman/pytorch-3dunet | 303336bfdc0234f075c70e0c59759d09bc4081b8 | [
"MIT"
] | null | null | null | unet3d/config.py | fcollman/pytorch-3dunet | 303336bfdc0234f075c70e0c59759d09bc4081b8 | [
"MIT"
] | null | null | null | import argparse
import os
import torch
import yaml
DEFAULT_DEVICE = 'cuda:0'
def load_config():
    parser = argparse.ArgumentParser(description='UNet3D training')
    parser.add_argument('--config', type=str, help='Path to the YAML config file', required=True)
    args = parser.parse_args()
    config = _load_config_yaml(args.config)
    # Get a device to train on
    device = config.get('device', DEFAULT_DEVICE)
    config['device'] = torch.device(device if torch.cuda.is_available() else "cpu")
    return config


def _load_config_yaml(config_file):
    return yaml.load(open(config_file, 'r'), Loader=yaml.FullLoader)
| 26.5 | 97 | 0.720126 | 88 | 636 | 5.045455 | 0.5 | 0.067568 | 0.058559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003766 | 0.165094 | 636 | 23 | 98 | 27.652174 | 0.832392 | 0.037736 | 0 | 0 | 0 | 0 | 0.119672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.266667 | 0.066667 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
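The `load_config` helper above combines an argparse `--config` flag with a file loader. A stdlib-only sketch of the same pattern — JSON stands in for YAML so it runs without PyYAML, and the temp file is written just for the demo:

```python
import argparse
import json
import os
import tempfile

def load_config(argv):
    # same shape as load_config above, minus the torch device selection
    parser = argparse.ArgumentParser(description='UNet3D training')
    parser.add_argument('--config', type=str, help='Path to the config file', required=True)
    args = parser.parse_args(argv)
    with open(args.config) as f:
        return json.load(f)

# hypothetical config file, written only for this demo
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({'device': 'cpu'}, f)
    path = f.name

config = load_config(['--config', path])
print(config.get('device', 'cuda:0'))  # → cpu
os.unlink(path)
```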
4ad2ee44fa3231c3be7b4de5ecea4010665c6467 | 738 | py | Python | A2/semcor_chunk.py | Rogerwlk/Natural-Language-Processing | e1c0499180cec49ac0060aad7f0da00b61cfac94 | [
"MIT"
] | null | null | null | A2/semcor_chunk.py | Rogerwlk/Natural-Language-Processing | e1c0499180cec49ac0060aad7f0da00b61cfac94 | [
"MIT"
] | null | null | null | A2/semcor_chunk.py | Rogerwlk/Natural-Language-Processing | e1c0499180cec49ac0060aad7f0da00b61cfac94 | [
"MIT"
] | null | null | null | from nltk.corpus import semcor
# wordnet is needed for the wn.synset fallback below; the original file omitted it
from nltk.corpus import wordnet as wn


class semcor_chunk:
    def __init__(self, chunk):
        self.chunk = chunk

    # returns the synset if applicable, otherwise returns None
    def get_syn_set(self):
        try:
            synset = self.chunk.label().synset()
            return synset
        except AttributeError:
            try:
                synset = wn.synset(self.chunk.label())
                return synset
            except:
                return None

    # returns a list of the words in the chunk
    def get_words(self):
        try:
            return self.chunk.leaves()
        except AttributeError:
            return self.chunk

# if __name__ == "__main__":
#     s = semcor.tagged_sents(tag='sem')[0]
#     for chunk in s:
#         a = semcor_chunk(chunk)
#         print a.get_syn_set()
#     for chunk in s:
#         a = semcor_chunk(chunk)
# print a.get_words() | 19.945946 | 58 | 0.682927 | 108 | 738 | 4.462963 | 0.388889 | 0.112033 | 0.037344 | 0.082988 | 0.153527 | 0.153527 | 0.153527 | 0.153527 | 0.153527 | 0.153527 | 0 | 0.001712 | 0.208672 | 738 | 37 | 59 | 19.945946 | 0.82363 | 0.398374 | 0 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.052632 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4ad31bb3fb3f281f7ca24b5d13a95985f1d2e610 | 868 | py | Python | deps/lib/python3.5/site-packages/netdisco/discoverables/samsung_tv.py | jfarmer08/hassio | 792a6071a97bb33857c14c9937946233c620035c | [
"MIT"
] | 78 | 2017-08-19T03:46:13.000Z | 2020-02-19T04:29:45.000Z | deps/lib/python3.5/site-packages/netdisco/discoverables/samsung_tv.py | jfarmer08/hassio | 792a6071a97bb33857c14c9937946233c620035c | [
"MIT"
] | 5 | 2017-08-21T16:33:08.000Z | 2018-06-21T18:37:18.000Z | deps/lib/python3.5/site-packages/netdisco/discoverables/samsung_tv.py | jfarmer08/hassio | 792a6071a97bb33857c14c9937946233c620035c | [
"MIT"
] | 13 | 2017-08-19T16:46:08.000Z | 2018-11-05T23:11:34.000Z | """Discover Samsung Smart TV services."""
from . import SSDPDiscoverable
from ..const import ATTR_NAME

# For some models, Samsung forces a [TV] prefix to the user-specified name.
FORCED_NAME_PREFIX = '[TV]'


class Discoverable(SSDPDiscoverable):
    """Add support for discovering Samsung Smart TV services."""

    def get_entries(self):
        """Get all the Samsung RemoteControlReceiver entries."""
        return self.find_by_st(
            "urn:samsung.com:device:RemoteControlReceiver:1")

    def info_from_entry(self, entry):
        """Get most important info, by default the description location."""
        info = super().info_from_entry(entry)

        # Strip the forced prefix, if present
        if info[ATTR_NAME].startswith(FORCED_NAME_PREFIX):
            info[ATTR_NAME] = info[ATTR_NAME][len(FORCED_NAME_PREFIX):].strip()
        return info
| 33.384615 | 79 | 0.691244 | 110 | 868 | 5.3 | 0.490909 | 0.054889 | 0.082333 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001449 | 0.205069 | 868 | 25 | 80 | 34.72 | 0.843478 | 0.361751 | 0 | 0 | 0 | 0 | 0.093985 | 0.086466 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4ad38b4a5080c2f9ece1062934512164a3b8e38a | 324 | py | Python | sapmi/employees/migrations/0002_remove_employee_phone_alt.py | Juhanostby/django-apotek-sapmi | 972a05ca9d54eed62b640572fcf582cc8751d15a | [
"MIT"
] | 1 | 2021-09-04T17:29:14.000Z | 2021-09-04T17:29:14.000Z | sapmi/employees/migrations/0002_remove_employee_phone_alt.py | Juhanostby/django-apotek-sapmi | 972a05ca9d54eed62b640572fcf582cc8751d15a | [
"MIT"
] | 1 | 2021-07-19T15:54:27.000Z | 2021-07-20T23:01:57.000Z | sapmi/employees/migrations/0002_remove_employee_phone_alt.py | Juhanostby/django-apotek-sapmi | 972a05ca9d54eed62b640572fcf582cc8751d15a | [
"MIT"
] | null | null | null | # Generated by Django 3.2.5 on 2021-12-21 19:42
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('employees', '0001_initial'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='employee',
            name='phone_alt',
        ),
    ]
| 18 | 47 | 0.58642 | 34 | 324 | 5.5 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0837 | 0.299383 | 324 | 17 | 48 | 19.058824 | 0.740088 | 0.138889 | 0 | 0 | 1 | 0 | 0.137184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ad5badf5fa7e630a25fb87b42b8e063138bfecd | 495 | py | Python | opencv/resizing.py | hackerman-101/Hacktoberfest-2022 | 839f28293930987da55f8a2414efaa1cf9676cc9 | [
"MIT"
] | 1 | 2022-02-22T17:13:54.000Z | 2022-02-22T17:13:54.000Z | opencv/resizing.py | hackerman-101/Hacktoberfest-2022 | 839f28293930987da55f8a2414efaa1cf9676cc9 | [
"MIT"
] | 11 | 2022-01-24T20:42:11.000Z | 2022-02-27T23:58:24.000Z | opencv/resizing.py | hackerman-101/Hacktoberfest-2022 | 839f28293930987da55f8a2414efaa1cf9676cc9 | [
"MIT"
] | null | null | null | import cv2 as cv
import numpy as np

cap = cv.VideoCapture(1)
print(cap.get(cv.CAP_PROP_FRAME_WIDTH))
print(cap.get(cv.CAP_PROP_FRAME_HEIGHT))
cap.set(3, 3000)
cap.set(4, 3000)
print(cap.get(cv.CAP_PROP_FRAME_WIDTH))
print(cap.get(cv.CAP_PROP_FRAME_HEIGHT))

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        cv.imshow("camVid", frame)
        if cv.waitKey(25) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
cv.destroyAllWindows()
| 18.333333 | 45 | 0.656566 | 80 | 495 | 3.9125 | 0.45 | 0.102236 | 0.140575 | 0.166134 | 0.389776 | 0.389776 | 0.389776 | 0.389776 | 0.389776 | 0.389776 | 0 | 0.037406 | 0.189899 | 495 | 26 | 46 | 19.038462 | 0.743142 | 0 | 0 | 0.315789 | 0 | 0 | 0.014141 | 0 | 0 | 0 | 0.008081 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ad5ce0b4290abab4891890ac501c3156152672b | 13,902 | py | Python | minibenchmarks/go.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | 1 | 2020-02-06T14:28:45.000Z | 2020-02-06T14:28:45.000Z | minibenchmarks/go.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | null | null | null | minibenchmarks/go.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | 1 | 2020-02-06T14:29:00.000Z | 2020-02-06T14:29:00.000Z | # from pypy-benchmarks/own/chaos.py, with some minor modifications
# (more output, took out the benchmark harness)
#
import random, math, sys, time

SIZE = 9
GAMES = 200
KOMI = 7.5
EMPTY, WHITE, BLACK = 0, 1, 2
SHOW = {EMPTY: '.', WHITE: 'o', BLACK: 'x'}
PASS = -1
MAXMOVES = SIZE*SIZE*3
TIMESTAMP = 0
MOVES = 0

def to_pos(x, y):
    return y * SIZE + x

def to_xy(pos):
    y, x = divmod(pos, SIZE)
    return x, y

class Square:
    def __init__(self, board, pos):
        self.board = board
        self.pos = pos
        self.timestamp = TIMESTAMP
        self.removestamp = TIMESTAMP
        self.zobrist_strings = [random.randrange(sys.maxint) for i in range(3)]

    def set_neighbours(self):
        x, y = self.pos % SIZE, self.pos / SIZE
        self.neighbours = []
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            newx, newy = x + dx, y + dy
            if 0 <= newx < SIZE and 0 <= newy < SIZE:
                self.neighbours.append(self.board.squares[to_pos(newx, newy)])

    def move(self, color):
        global TIMESTAMP, MOVES
        TIMESTAMP += 1
        MOVES += 1
        self.board.zobrist.update(self, color)
        self.color = color
        self.reference = self
        self.ledges = 0
        self.used = True
        for neighbour in self.neighbours:
            neighcolor = neighbour.color
            if neighcolor == EMPTY:
                self.ledges += 1
            else:
                neighbour_ref = neighbour.find(update=True)
                if neighcolor == color:
                    if neighbour_ref.reference.pos != self.pos:
                        self.ledges += neighbour_ref.ledges
                        neighbour_ref.reference = self
                    self.ledges -= 1
                else:
                    neighbour_ref.ledges -= 1
                    if neighbour_ref.ledges == 0:
                        neighbour.remove(neighbour_ref)
        self.board.zobrist.add()

    def remove(self, reference, update=True):
        self.board.zobrist.update(self, EMPTY)
        self.removestamp = TIMESTAMP
        if update:
            self.color = EMPTY
            self.board.emptyset.add(self.pos)
#            if color == BLACK:
#                self.board.black_dead += 1
#            else:
#                self.board.white_dead += 1
        for neighbour in self.neighbours:
            if neighbour.color != EMPTY and neighbour.removestamp != TIMESTAMP:
                neighbour_ref = neighbour.find(update)
                if neighbour_ref.pos == reference.pos:
                    neighbour.remove(reference, update)
                else:
                    if update:
                        neighbour_ref.ledges += 1

    def find(self, update=False):
        reference = self.reference
        if reference.pos != self.pos:
            reference = reference.find(update)
            if update:
                self.reference = reference
        return reference

    def __repr__(self):
        return repr(to_xy(self.pos))

class EmptySet:
    def __init__(self, board):
        self.board = board
        self.empties = range(SIZE*SIZE)
        self.empty_pos = range(SIZE*SIZE)

    def random_choice(self):
        choices = len(self.empties)
        while choices:
            i = int(random.random()*choices)
            pos = self.empties[i]
            if self.board.useful(pos):
                return pos
            choices -= 1
            self.set(i, self.empties[choices])
            self.set(choices, pos)
        return PASS

    def add(self, pos):
        self.empty_pos[pos] = len(self.empties)
        self.empties.append(pos)

    def remove(self, pos):
        self.set(self.empty_pos[pos], self.empties[len(self.empties)-1])
        self.empties.pop()

    def set(self, i, pos):
        self.empties[i] = pos
        self.empty_pos[pos] = i

class ZobristHash:
    def __init__(self, board):
        self.board = board
        self.hash_set = set()
        self.hash = 0
        for square in self.board.squares:
            self.hash ^= square.zobrist_strings[EMPTY]
        self.hash_set.clear()
        self.hash_set.add(self.hash)

    def update(self, square, color):
        self.hash ^= square.zobrist_strings[square.color]
        self.hash ^= square.zobrist_strings[color]

    def add(self):
        self.hash_set.add(self.hash)

    def dupe(self):
        return self.hash in self.hash_set

class Board:
    def __init__(self):
        self.squares = [Square(self, pos) for pos in range(SIZE*SIZE)]
        for square in self.squares:
            square.set_neighbours()
        self.reset()

    def reset(self):
        for square in self.squares:
            square.color = EMPTY
            square.used = False
        self.emptyset = EmptySet(self)
        self.zobrist = ZobristHash(self)
        self.color = BLACK
        self.finished = False
        self.lastmove = -2
        self.history = []
        self.white_dead = 0
        self.black_dead = 0

    def move(self, pos):
        square = self.squares[pos]
        if pos != PASS:
            square.move(self.color)
            self.emptyset.remove(square.pos)
        elif self.lastmove == PASS:
            self.finished = True
        if self.color == BLACK: self.color = WHITE
        else: self.color = BLACK
        self.lastmove = pos
        self.history.append(pos)

    def random_move(self):
        return self.emptyset.random_choice()

    def useful_fast(self, square):
        if not square.used:
            for neighbour in square.neighbours:
                if neighbour.color == EMPTY:
                    return True
        return False

    def useful(self, pos):
        global TIMESTAMP
        TIMESTAMP += 1
        square = self.squares[pos]
        if self.useful_fast(square):
            return True
        old_hash = self.zobrist.hash
        self.zobrist.update(square, self.color)
        empties = opps = weak_opps = neighs = weak_neighs = 0
        for neighbour in square.neighbours:
            neighcolor = neighbour.color
            if neighcolor == EMPTY:
                empties += 1
                continue
            neighbour_ref = neighbour.find()
            if neighbour_ref.timestamp != TIMESTAMP:
                if neighcolor == self.color:
                    neighs += 1
                else:
                    opps += 1
                neighbour_ref.timestamp = TIMESTAMP
                neighbour_ref.temp_ledges = neighbour_ref.ledges
            neighbour_ref.temp_ledges -= 1
            if neighbour_ref.temp_ledges == 0:
                if neighcolor == self.color:
                    weak_neighs += 1
                else:
                    weak_opps += 1
                    neighbour_ref.remove(neighbour_ref, update=False)
        dupe = self.zobrist.dupe()
        self.zobrist.hash = old_hash
        strong_neighs = neighs-weak_neighs
        strong_opps = opps-weak_opps
        return not dupe and \
               (empties or weak_opps or (strong_neighs and (strong_opps or weak_neighs)))

    def useful_moves(self):
        return [pos for pos in self.emptyset.empties if self.useful(pos)]

    def replay(self, history):
        for pos in history:
            self.move(pos)

    def score(self, color):
        if color == WHITE:
            count = KOMI + self.black_dead
        else:
            count = self.white_dead
        for square in self.squares:
            squarecolor = square.color
            if squarecolor == color:
                count += 1
            elif squarecolor == EMPTY:
                surround = 0
                for neighbour in square.neighbours:
                    if neighbour.color == color:
                        surround += 1
                if surround == len(square.neighbours):
                    count += 1
        return count

    def check(self):
        for square in self.squares:
            if square.color == EMPTY:
                continue

            members1 = set([square])
            changed = True
            while changed:
                changed = False
                for member in members1.copy():
                    for neighbour in member.neighbours:
                        if neighbour.color == square.color and neighbour not in members1:
                            changed = True
                            members1.add(neighbour)
            ledges1 = 0
            for member in members1:
                for neighbour in member.neighbours:
                    if neighbour.color == EMPTY:
                        ledges1 += 1
            root = square.find()

            #print 'members1', square, root, members1
            #print 'ledges1', square, ledges1

            members2 = set()
            for square2 in self.squares:
                if square2.color != EMPTY and square2.find() == root:
                    members2.add(square2)
            ledges2 = root.ledges
            #print 'members2', square, root, members1
            #print 'ledges2', square, ledges2

            assert members1 == members2
            assert ledges1 == ledges2, ('ledges differ at %r: %d %d' % (square, ledges1, ledges2))

        empties1 = set(self.emptyset.empties)

        empties2 = set()
        for square in self.squares:
            if square.color == EMPTY:
                empties2.add(square.pos)

    def __repr__(self):
        result = []
        for y in range(SIZE):
            start = to_pos(0, y)
            result.append(''.join([SHOW[square.color]+' ' for square in self.squares[start:start+SIZE]]))
        return '\n'.join(result)

class UCTNode:
    def __init__(self):
        self.bestchild = None
        self.pos = -1
        self.wins = 0
        self.losses = 0
        self.pos_child = [None for x in range(SIZE*SIZE)]
        self.parent = None

    def play(self, board):
        """ uct tree search """
        color = board.color
        node = self
        path = [node]
        while True:
            pos = node.select(board)
            if pos == PASS:
                break
            board.move(pos)
            child = node.pos_child[pos]
            if not child:
                child = node.pos_child[pos] = UCTNode()
                child.unexplored = board.useful_moves()
                child.pos = pos
                child.parent = node
                path.append(child)
                break
            path.append(child)
            node = child
        self.random_playout(board)
        self.update_path(board, color, path)

    def select(self, board):
        """ select move; unexplored children first, then according to uct value """
        if self.unexplored:
            i = random.randrange(len(self.unexplored))
            pos = self.unexplored[i]
            self.unexplored[i] = self.unexplored[len(self.unexplored)-1]
            self.unexplored.pop()
            return pos
        elif self.bestchild:
            return self.bestchild.pos
        else:
            return PASS

    def random_playout(self, board):
        """ random play until both players pass """
        for x in range(MAXMOVES): # XXX while not self.finished?
            if board.finished:
                break
            board.move(board.random_move())

    def update_path(self, board, color, path):
        """ update win/loss count along path """
        wins = board.score(BLACK) >= board.score(WHITE)
        for node in path:
            if color == BLACK: color = WHITE
            else: color = BLACK
            if wins == (color == BLACK):
                node.wins += 1
            else:
                node.losses += 1
            if node.parent:
                node.parent.bestchild = node.parent.best_child()

    def score(self):
        winrate = self.wins/float(self.wins+self.losses)
        parentvisits = self.parent.wins+self.parent.losses
        if not parentvisits:
            return winrate
        nodevisits = self.wins+self.losses
        return winrate + math.sqrt((math.log(parentvisits))/(5*nodevisits))

    def best_child(self):
        maxscore = -1
        maxchild = None
        for child in self.pos_child:
            if child and child.score() > maxscore:
                maxchild = child
                maxscore = child.score()
        return maxchild

    def best_visited(self):
        maxvisits = -1
        maxchild = None
        for child in self.pos_child:
#            if child:
#                print to_xy(child.pos), child.wins, child.losses, child.score()
            if child and (child.wins+child.losses) > maxvisits:
                maxvisits, maxchild = (child.wins+child.losses), child
        return maxchild

def user_move(board):
    while True:
        text = raw_input('?').strip()
        if text == 'p':
            return PASS
        if text == 'q':
            raise EOFError
        try:
            x, y = [int(i) for i in text.split()]
        except ValueError:
            continue
        if not (0 <= x < SIZE and 0 <= y < SIZE):
            continue
        pos = to_pos(x, y)
        if board.useful(pos):
            return pos

def computer_move(board):
    global MOVES
    pos = board.random_move()
    if pos == PASS:
        return PASS
    tree = UCTNode()
    tree.unexplored = board.useful_moves()
    nboard = Board()
    for game in range(GAMES):
        node = tree
        nboard.reset()
        nboard.replay(board.history)
        node.play(nboard)
#    print 'moves', MOVES
    return tree.best_visited().pos

def versus_cpu():
    print "versus_cpu"
    random.seed(1)
    board = Board()
    pos = computer_move(board)

def main(n):
    times = []
    for i in range(5):
        versus_cpu() # warmup
    for i in range(n):
        t1 = time.time()
        versus_cpu()
        t2 = time.time()
        times.append(t2 - t1)
    return times

if __name__ == "__main__":
    main(100)
| 31.310811 | 105 | 0.534743 | 1,574 | 13,902 | 4.639771 | 0.135324 | 0.023415 | 0.013419 | 0.014378 | 0.193208 | 0.117486 | 0.088457 | 0.056963 | 0.022457 | 0.011502 | 0 | 0.01218 | 0.368077 | 13,902 | 443 | 106 | 31.38149 | 0.819124 | 0.039203 | 0 | 0.224599 | 0 | 0 | 0.004028 | 0 | 0 | 0 | 0 | 0 | 0.005348 | 0 | null | null | 0.024064 | 0.002674 | null | null | 0.002674 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
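The UCT selection rule in `UCTNode.score` above is a win rate plus an exploration bonus that shrinks as a node is visited more. A standalone, Python-3 sketch of just that formula (the function name and arguments are illustrative, not from the benchmark):

```python
import math

def uct_score(wins, losses, parent_wins, parent_losses):
    # mirrors UCTNode.score above: exploitation term + exploration bonus
    winrate = wins / float(wins + losses)
    parentvisits = parent_wins + parent_losses
    if not parentvisits:
        return winrate
    nodevisits = wins + losses
    return winrate + math.sqrt(math.log(parentvisits) / (5 * nodevisits))

print(uct_score(3, 1, 10, 10))  # 0.75 plus a bonus that decays with visits
```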
4ad876898c7dcfaaa80fe53e3fa05c848775c82a | 1,828 | py | Python | bin/p3starcoordcheck.py | emkailu/PAT3DEM | 74e7a0f30179e49ea5c7da1bea893e21a3ed601a | [
"MIT"
] | null | null | null | bin/p3starcoordcheck.py | emkailu/PAT3DEM | 74e7a0f30179e49ea5c7da1bea893e21a3ed601a | [
"MIT"
] | null | null | null | bin/p3starcoordcheck.py | emkailu/PAT3DEM | 74e7a0f30179e49ea5c7da1bea893e21a3ed601a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os
import sys
import argparse
import pat3dem.star as p3s
import math


def main():
    progname = os.path.basename(sys.argv[0])
    usage = progname + """ [options] <coord star files>
    Output the coord star files after deleting duplicate particles
    """
    args_def = {'mindis': 150}
    parser = argparse.ArgumentParser()
    parser.add_argument("star", nargs='*', help="specify coord star files to be processed")
    parser.add_argument("-m", "--mindis", type=float, help="specify the minimum distance between particles in pixels, by default {}".format(args_def['mindis']))
    args = parser.parse_args()
    if len(sys.argv) == 1:
        print "usage: " + usage
        print "Please run '" + progname + " -h' for detailed options."
        sys.exit(1)
    # get default values
    for i in args_def:
        if args.__dict__[i] == None:
            args.__dict__[i] = args_def[i]
    # loop over all input files
    for star in args.star:
        star_dict = p3s.star_parse(star, 'data_')
        header = star_dict['data_'] + star_dict['loop_']
        header_len = len(header)
        basename = os.path.basename(os.path.splitext(star)[0])
        with open(star) as s_read:
            lines = s_read.readlines()[header_len:-1]
        #
        with open(basename + '_checked.star', 'w') as s_w:
            s_w.write(''.join(header))
            # use list of list to store x and y
            xy = []
            for line in lines:
                good = 1
                line = line.split()
                # get coord
                x, y = float(line[star_dict['_rlnCoordinateX']]), float(line[star_dict['_rlnCoordinateY']])
                for i in xy:
                    dis = math.sqrt((x - i[0])**2 + (y - i[1])**2)
                    if dis < args.mindis:
                        print 'Distance between ({},{}) and {} is {}. Discard.'.format(x, y, i, dis)
                        good = 0
                        break
                if good == 1:
                    s_w.write('{:>12} '.format(x) + '{:>12} \n'.format(y))
                    xy.append((x, y))
            s_w.write('\n')


if __name__ == '__main__':
    main()
| 30.466667 | 157 | 0.639497 | 280 | 1,828 | 4.021429 | 0.392857 | 0.035524 | 0.0373 | 0.030195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.193654 | 1,828 | 59 | 158 | 30.983051 | 0.748982 | 0.060722 | 0 | 0 | 0 | 0 | 0.239626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.104167 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
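The duplicate filter above keeps a coordinate only if it lies at least `mindis` pixels from every coordinate already kept. A standalone, Python-3 sketch of that core check (function name and sample points are illustrative):

```python
import math

def filter_coords(coords, mindis):
    # keep a particle only if it is >= mindis pixels from every kept one
    kept = []
    for x, y in coords:
        if all(math.hypot(x - kx, y - ky) >= mindis for kx, ky in kept):
            kept.append((x, y))
    return kept

print(filter_coords([(0, 0), (10, 0), (300, 0)], 150))  # → [(0, 0), (300, 0)]
```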
4ad89a5bebd4952730caed6adc03938d82e1dcd1 | 4,251 | py | Python | src/review_scraper.py | ryankirkland/voice-of-the-customer | 0214af45cc6aa76bfce64065f07c3f4781ee045e | [
"MIT"
] | null | null | null | src/review_scraper.py | ryankirkland/voice-of-the-customer | 0214af45cc6aa76bfce64065f07c3f4781ee045e | [
"MIT"
] | null | null | null | src/review_scraper.py | ryankirkland/voice-of-the-customer | 0214af45cc6aa76bfce64065f07c3f4781ee045e | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
import pandas as pd
import requests
import time
import sys
def reviews_scraper(asin_list, filename):
'''
Takes a list of asins, retrieves html for reviews page, and parses out key data points
Parameters
----------
List of ASINs (list of strings)
Returns:
-------
review information (list), reviews_df (Pandas DataFrame)
'''
asin_list = [asin_list]
print(asin_list)
reviews = []
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0", "Accept-Encoding":"gzip, deflate", "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "DNT":"1","Connection":"close", "Upgrade-Insecure-Requests":"1"}
for asin in asin_list:
print(f'Collecting reviews for {asin}')
passed_last_page = None
counter = 1
while (passed_last_page == None) and (counter <= 10):
print(len(reviews))
reviews_url = f'https://www.amazon.com/product-reviews/{asin}/ref=cm_cr_arp_d_viewopt_srt?ie=UTF8&reviewerType=all_reviews&sortBy=recent&pageNumber={counter}'
print(reviews_url)
rev = requests.get(reviews_url, headers=headers)
print(rev.status_code)
reviews_page_content = rev.content
review_soup = BeautifulSoup(reviews_page_content, features='lxml')
print(review_soup)
passed_last_page = review_soup.find('div', attrs={'class': 'a-section a-spacing-top-large a-text-center no-reviews-section'})
            if passed_last_page is None:
                for d in review_soup.findAll('div', attrs={'data-hook': 'review'}):
                    try:
                        date = ' '.join(d.find('span', attrs={'data-hook': 'review-date'}).text.split(' ')[-3:])
                    except AttributeError:
                        date = 'null'
                    try:
                        title = d.find('a', attrs={'data-hook': 'review-title'}).text
                    except AttributeError:
                        title = 'null'
                    product_tag = d.find('a', attrs={'data-hook': 'format-strip'})
                    try:
                        product = product_tag.text
                    except AttributeError:
                        product = 'null'
                    try:
                        # the ASIN sits inside the format-strip link's href;
                        # read it from the tag before .text replaces the tag object
                        review_asin = product_tag['href'].split('/')[3]
                    except (TypeError, KeyError, IndexError):
                        review_asin = asin
                    verified = d.find('span', attrs={'data-hook': 'avp-badge'})
                    verified = 'Not Verified' if verified is None else verified.text
                    try:
                        description = d.find('span', attrs={'data-hook': 'review-body'}).text
                    except AttributeError:
                        description = 'null'
                    try:
                        reviewer_name = d.find('span', attrs={'class': 'a-profile-name'}).text
                    except AttributeError:
                        reviewer_name = 'null'
                    try:
                        rating = float(d.find('span', attrs={'class': 'a-icon-alt'}).text[0:3])
                    except (AttributeError, ValueError):
                        rating = 'null'
                    reviews.append([review_asin, product, date, verified, title, description, reviewer_name, rating])
else:
pass
counter += 1
time.sleep(15)
reviews_df = pd.DataFrame(reviews, columns=['asin','product','date', 'verified', 'title', 'desc', 'reviewer_name', 'rating'])
    reviews_df.to_csv(f'data/reviews/{filename}')
    print(f'{len(reviews)} reviews for {len(asin_list)} asins stored successfully in {filename}')
return reviews, reviews_df
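The date field is recovered by keeping only the last three whitespace-separated tokens of the review-date span. A minimal sketch of that slicing, using a made-up sample string rather than a real scraped span:

```python
# Hypothetical review-date text; the real value comes from the scraped page.
raw = "Reviewed in the United States on July 4, 2020"
date = ' '.join(raw.split(' ')[-3:])
print(date)  # July 4, 2020
```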
if __name__ == '__main__':
    reviews_scraper(*sys.argv[1:])
# lumberdata/metadata.py (cglumberjack/lumber_metadata, MIT)
# noinspection PyUnresolvedReferences
import os
import re
# NOTE: get() below calls cgl_execute(), which is not imported in this module;
# it is assumed to be provided by the surrounding cglumberjack codebase.
# TODO I'm going to need to make a dictionary for my big list of stuff i care about and what's needed for
# every file type....
RAF = ['EXIF:LensModel', 'MakerNotes:RawImageHeight', 'MakerNotes:RawImageWidth', 'EXIF:CreateDate', 'EXIF:ModifyDate',
'EXIF:SerialNumber', 'Composite:Aperture', 'EXIF:FocalLength', 'EXIF:Make', 'EXIF:Model', 'EXIF:LensMake']
MOV = ['EXIF:LensModel', 'MakerNotes:RawImageHeight', 'MakerNotes:RawImageWidth', 'EXIF:CreateDate', 'EXIF:ModifyDate',
'EXIF:SerialNumber', 'Composite:Aperture', 'EXIF:FocalLength', 'EXIF:Make', 'EXIF:Model', 'EXIF:LensMake',
'QuickTime:VideoFrameRate', 'QuickTime:Duration']
R3D = ['ClipName', 'EdgeTC', 'EndEdgeTC', 'TotalFrames', 'FrameHeight', 'FrameWidth', 'Aperture', 'ISO', 'Date',
'AudioSlate', 'VideoSlate', 'Camera', 'CameraModel', 'CameraPIN', 'MediaSerialNumber', 'LensSerialNumber',
'FPS', 'AspectRatio', 'Kelvin', 'LensName', 'LensBrand', 'FocalLength', 'Shutter(deg)', 'SensorID', 'SensorName',
'Take']
def check_exiftool():
"""
checks if exiftool is installed.
:return:
"""
pass
def check_redline():
"""
checks if redline is installed
:return:
"""
pass
def check_ffprobe():
"""
checks if ffprobe is installed
:return:
"""
pass
def get(filein, tool='exiftool', print_output=False):
    """
    Due to issues with the exiftool module this is provided as a way to parse output directly
    from exiftool through the system commands and cgl_execute. For the moment it's only designed
    to get the lumberdata for a single file.
    :param filein: path to the media file to probe
    :param tool: which probe to run: 'exiftool' (default) or 'ffprobe'
    :param print_output: if True, echo the command output while it runs
    :return: dictionary containing lumberdata from exiftool
    """
ext = os.path.splitext(filein)[-1]
d = {}
if tool == 'exiftool':
command = r'exiftool %s' % filein
output = cgl_execute(command=command, verbose=False, print_output=print_output)
for each in output['printout']:
            key, value = re.split(r"\s+:\s+", each, maxsplit=1)
d[key] = value
return d
elif tool == 'ffprobe':
command = r'%s %s' % ('ffprobe', filein)
output = cgl_execute(command=command)
for each in output['printout']:
try:
                values = re.split(r":\s+", each)
key = values[0]
values.pop(0)
if 'Stream' in key:
split_v = values[1].split(',')
d['Image Size'] = split_v[2].split()[0]
d['Source Image Width'], d['Source Image Height'] = d['Image Size'].split('x')
d['Video Frame Rate'] = split_v[4].split(' fps')[0].replace(' ', '')
if 'Duration' in key:
d['Track Duration'] = '%s s' % values[0].split(',')[0]
value = ' '.join(values)
d[key] = value
except ValueError:
print('skipping %s' % each)
return d
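The exiftool branch splits each printed line on a colon padded by whitespace. A self-contained sketch of that key/value parsing, fed with invented sample lines rather than real exiftool output:

```python
import re

# Made-up lines in exiftool's "Key   : value" layout.
sample_printout = [
    "File Name                       : clip.mov",
    "MIME Type                       : video/quicktime",
]
d = {}
for each in sample_printout:
    # split once on the first padded colon so values may contain ':'
    key, value = re.split(r"\s+:\s+", each, maxsplit=1)
    d[key] = value
print(d["File Name"])  # clip.mov
```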
def get_red_data(filein):
"""
method for pulling lumberdata from r3d files. REDLINE is a command line interface from RED that is required
for this
https://www.red.com/downloads/options?itemInternalId=16144
:param filein:
:return:
"""
    file_, ext_ = os.path.splitext(filein)
    d = {}  # return an empty dict for non-R3D files instead of raising NameError
    if ext_.upper() == '.R3D':
        command = r'REDLINE --i %s --printMeta 1' % filein
for line in os.popen(command).readlines():
line = line.strip('\n')
line = line.replace('\t', '')
line = line.replace(' ', '')
try:
key_, value = line.split(':', 1)
if key_ != 'None':
d[key_] = value
except ValueError:
pass
return d
#!/usr/bin/python
# jug/subcommands/demo.py (rdenham/jug, MIT)
# -*- coding: utf-8 -*-
# Copyright (C) 2017, Luis Pedro Coelho <luis@luispedro.org>
# vim: set ts=4 sts=4 sw=4 expandtab smartindent:
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from . import SubCommand
__all__ = ['DemoCommand']
class DemoCommand(SubCommand):
'''Create demo directory.
'''
name = "demo"
def run(self, *args, **kwargs):
import os
from os import path
print('''
Jug will create a directory called 'jug-demo/' with a file called 'primes.py'
inside.
You can test jug by switching to that directory and running the commands:
jug status primes.py
followed by
jug execute primes.py
Upon termination of the process, results will be in a file called 'output.txt'.
PARALLEL USAGE
You can speed up the process by running several 'jug execute' in parallel:
jug execute primes.py &
jug execute primes.py &
jug execute primes.py &
jug execute primes.py &
TROUBLE SHOOTING:
Should you run into issues, you can run the internal tests for jug with
jug test-jug
FURTHER READING
The online documentation contains further reading. You can read the next
tutorial here:
http://jug.readthedocs.io/en/latest/decrypt-example.html
''')
if path.exists('jug-demo'):
print("Jug-demo previously created")
return
os.mkdir('jug-demo')
output = open('jug-demo/primes.py', 'wt')
output.write(r'''
from time import sleep
from jug import TaskGenerator
@TaskGenerator
def is_prime(n):
sleep(1.)
for j in range(2, n - 1):
if (n % j) == 0:
return False
return True
@TaskGenerator
def count_primes(ps):
return sum(ps)
@TaskGenerator
def write_output(n):
output = open('output.txt', 'wt')
output.write("Found {0} primes <= 100.\n".format(n))
output.close()
primes100 = []
for n in range(2, 101):
primes100.append(is_prime(n))
n_primes = count_primes(primes100)
write_output(n_primes)
''')
output.close()
demo = DemoCommand()
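The generated primes.py counts primes up to 100 by trial division. Stripped of the sleep and the TaskGenerator wrappers, the same logic can be checked directly; this is a standalone sketch, not part of the jug demo itself:

```python
def is_prime(n):
    # same trial division as the demo, without the 1-second sleep
    for j in range(2, n - 1):
        if (n % j) == 0:
            return False
    return True

n_primes = sum(is_prime(n) for n in range(2, 101))
print(n_primes)  # 25
```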
# src/python/nimbusml/internal/entrypoints/trainers_lightgbmbinaryclassifier.py (montehoover/NimbusML, MIT)
# - Generated by tools/entrypoint_compiler.py: do not edit by hand
"""
Trainers.LightGbmBinaryClassifier
"""
import numbers
from ..utils.entrypoints import EntryPoint
from ..utils.utils import try_set, unlist
def trainers_lightgbmbinaryclassifier(
training_data,
predictor_model=None,
number_of_iterations=100,
learning_rate=None,
number_of_leaves=None,
minimum_example_count_per_leaf=None,
feature_column_name='Features',
booster=None,
label_column_name='Label',
example_weight_column_name=None,
row_group_column_name=None,
normalize_features='Auto',
caching='Auto',
unbalanced_sets=False,
weight_of_positive_examples=1.0,
sigmoid=0.5,
evaluation_metric='Logloss',
maximum_bin_count_per_feature=255,
verbose=False,
silent=True,
number_of_threads=None,
early_stopping_round=0,
batch_size=1048576,
use_categorical_split=None,
handle_missing_value=True,
use_zero_as_missing_value=False,
minimum_example_count_per_group=100,
maximum_categorical_split_point_count=32,
categorical_smoothing=10.0,
l2_categorical_regularization=10.0,
seed=None,
parallel_trainer=None,
**params):
"""
**Description**
Train a LightGBM binary classification model.
:param number_of_iterations: Number of iterations. (inputs).
:param training_data: The data to be used for training (inputs).
:param learning_rate: Shrinkage rate for trees, used to prevent
over-fitting. Range: (0,1]. (inputs).
:param number_of_leaves: Maximum leaves for trees. (inputs).
:param minimum_example_count_per_leaf: Minimum number of
instances needed in a child. (inputs).
:param feature_column_name: Column to use for features (inputs).
:param booster: Which booster to use, can be gbtree, gblinear or
dart. gbtree and dart use tree based model while gblinear
uses linear function. (inputs).
:param label_column_name: Column to use for labels (inputs).
:param example_weight_column_name: Column to use for example
weight (inputs).
:param row_group_column_name: Column to use for example groupId
(inputs).
:param normalize_features: Normalize option for the feature
column (inputs).
:param caching: Whether trainer should cache input training data
(inputs).
:param unbalanced_sets: Use for binary classification when
training data is not balanced. (inputs).
:param weight_of_positive_examples: Control the balance of
positive and negative weights, useful for unbalanced classes.
A typical value to consider: sum(negative cases) /
sum(positive cases). (inputs).
:param sigmoid: Parameter for the sigmoid function. (inputs).
:param evaluation_metric: Evaluation metrics. (inputs).
:param maximum_bin_count_per_feature: Maximum number of bucket
bin for features. (inputs).
:param verbose: Verbose (inputs).
:param silent: Printing running messages. (inputs).
:param number_of_threads: Number of parallel threads used to run
LightGBM. (inputs).
:param early_stopping_round: Rounds of early stopping, 0 will
disable it. (inputs).
:param batch_size: Number of entries in a batch when loading
data. (inputs).
:param use_categorical_split: Enable categorical split or not.
(inputs).
:param handle_missing_value: Enable special handling of missing
value or not. (inputs).
:param use_zero_as_missing_value: Enable usage of zero (0) as
missing value. (inputs).
:param minimum_example_count_per_group: Minimum number of
instances per categorical group. (inputs).
:param maximum_categorical_split_point_count: Max number of
categorical thresholds. (inputs).
:param categorical_smoothing: Lapalace smooth term in categorical
feature spilt. Avoid the bias of small categories. (inputs).
:param l2_categorical_regularization: L2 Regularization for
categorical split. (inputs).
:param seed: Sets the random seed for LightGBM to use. (inputs).
:param parallel_trainer: Parallel LightGBM Learning Algorithm
(inputs).
:param predictor_model: The trained model (outputs).
"""
entrypoint_name = 'Trainers.LightGbmBinaryClassifier'
inputs = {}
outputs = {}
if number_of_iterations is not None:
inputs['NumberOfIterations'] = try_set(
obj=number_of_iterations,
none_acceptable=True,
is_of_type=numbers.Real)
if training_data is not None:
inputs['TrainingData'] = try_set(
obj=training_data,
none_acceptable=False,
is_of_type=str)
if learning_rate is not None:
inputs['LearningRate'] = try_set(
obj=learning_rate,
none_acceptable=True,
is_of_type=numbers.Real)
if number_of_leaves is not None:
inputs['NumberOfLeaves'] = try_set(
obj=number_of_leaves,
none_acceptable=True,
is_of_type=numbers.Real)
if minimum_example_count_per_leaf is not None:
inputs['MinimumExampleCountPerLeaf'] = try_set(
obj=minimum_example_count_per_leaf,
none_acceptable=True,
is_of_type=numbers.Real)
if feature_column_name is not None:
inputs['FeatureColumnName'] = try_set(
obj=feature_column_name,
none_acceptable=True,
is_of_type=str,
is_column=True)
if booster is not None:
inputs['Booster'] = try_set(
obj=booster,
none_acceptable=True,
is_of_type=dict)
if label_column_name is not None:
inputs['LabelColumnName'] = try_set(
obj=label_column_name,
none_acceptable=True,
is_of_type=str,
is_column=True)
if example_weight_column_name is not None:
inputs['ExampleWeightColumnName'] = try_set(
obj=example_weight_column_name,
none_acceptable=True,
is_of_type=str,
is_column=True)
if row_group_column_name is not None:
inputs['RowGroupColumnName'] = try_set(
obj=row_group_column_name,
none_acceptable=True,
is_of_type=str,
is_column=True)
if normalize_features is not None:
inputs['NormalizeFeatures'] = try_set(
obj=normalize_features,
none_acceptable=True,
is_of_type=str,
values=[
'No',
'Warn',
'Auto',
'Yes'])
if caching is not None:
inputs['Caching'] = try_set(
obj=caching,
none_acceptable=True,
is_of_type=str,
values=[
'Auto',
'Memory',
'None'])
if unbalanced_sets is not None:
inputs['UnbalancedSets'] = try_set(
obj=unbalanced_sets,
none_acceptable=True,
is_of_type=bool)
if weight_of_positive_examples is not None:
inputs['WeightOfPositiveExamples'] = try_set(
obj=weight_of_positive_examples,
none_acceptable=True,
is_of_type=numbers.Real)
if sigmoid is not None:
inputs['Sigmoid'] = try_set(
obj=sigmoid,
none_acceptable=True,
is_of_type=numbers.Real)
if evaluation_metric is not None:
inputs['EvaluationMetric'] = try_set(
obj=evaluation_metric,
none_acceptable=True,
is_of_type=str,
values=[
'None',
'Default',
'Logloss',
'Error',
'AreaUnderCurve'])
if maximum_bin_count_per_feature is not None:
inputs['MaximumBinCountPerFeature'] = try_set(
obj=maximum_bin_count_per_feature,
none_acceptable=True,
is_of_type=numbers.Real)
if verbose is not None:
inputs['Verbose'] = try_set(
obj=verbose,
none_acceptable=True,
is_of_type=bool)
if silent is not None:
inputs['Silent'] = try_set(
obj=silent,
none_acceptable=True,
is_of_type=bool)
if number_of_threads is not None:
inputs['NumberOfThreads'] = try_set(
obj=number_of_threads,
none_acceptable=True,
is_of_type=numbers.Real)
if early_stopping_round is not None:
inputs['EarlyStoppingRound'] = try_set(
obj=early_stopping_round,
none_acceptable=True,
is_of_type=numbers.Real)
if batch_size is not None:
inputs['BatchSize'] = try_set(
obj=batch_size,
none_acceptable=True,
is_of_type=numbers.Real)
if use_categorical_split is not None:
inputs['UseCategoricalSplit'] = try_set(
obj=use_categorical_split, none_acceptable=True, is_of_type=bool)
if handle_missing_value is not None:
inputs['HandleMissingValue'] = try_set(
obj=handle_missing_value,
none_acceptable=True,
is_of_type=bool)
if use_zero_as_missing_value is not None:
inputs['UseZeroAsMissingValue'] = try_set(
obj=use_zero_as_missing_value,
none_acceptable=True,
is_of_type=bool)
if minimum_example_count_per_group is not None:
inputs['MinimumExampleCountPerGroup'] = try_set(
obj=minimum_example_count_per_group,
none_acceptable=True,
is_of_type=numbers.Real,
valid_range={
'Inf': 0,
'Max': 2147483647})
if maximum_categorical_split_point_count is not None:
inputs['MaximumCategoricalSplitPointCount'] = try_set(
obj=maximum_categorical_split_point_count,
none_acceptable=True,
is_of_type=numbers.Real,
valid_range={
'Inf': 0,
'Max': 2147483647})
if categorical_smoothing is not None:
inputs['CategoricalSmoothing'] = try_set(
obj=categorical_smoothing,
none_acceptable=True,
is_of_type=numbers.Real, valid_range={'Min': 0.0})
if l2_categorical_regularization is not None:
inputs['L2CategoricalRegularization'] = try_set(
obj=l2_categorical_regularization,
none_acceptable=True,
is_of_type=numbers.Real, valid_range={'Min': 0.0})
if seed is not None:
inputs['Seed'] = try_set(
obj=seed,
none_acceptable=True,
is_of_type=numbers.Real)
if parallel_trainer is not None:
inputs['ParallelTrainer'] = try_set(
obj=parallel_trainer,
none_acceptable=True,
is_of_type=dict)
if predictor_model is not None:
outputs['PredictorModel'] = try_set(
obj=predictor_model, none_acceptable=False, is_of_type=str)
input_variables = {
x for x in unlist(inputs.values())
if isinstance(x, str) and x.startswith("$")}
output_variables = {
x for x in unlist(outputs.values())
if isinstance(x, str) and x.startswith("$")}
entrypoint = EntryPoint(
name=entrypoint_name, inputs=inputs, outputs=outputs,
input_variables=input_variables,
output_variables=output_variables)
return entrypoint
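Every input or output value that starts with `$` is treated as a graph variable by the final set comprehensions. A minimal standalone illustration of that collection step (the dict contents are invented, and `chain` stands in for the builder's own `unlist` helper):

```python
from itertools import chain

inputs = {'TrainingData': '$training_data', 'FeatureColumnName': 'Features'}
outputs = {'PredictorModel': '$model'}

# keep only string values that reference graph variables ("$...")
variables = {
    x for x in chain(inputs.values(), outputs.values())
    if isinstance(x, str) and x.startswith("$")
}
print(sorted(variables))  # ['$model', '$training_data']
```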
#!/usr/bin/env python
# ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_service_check.py (vsosrc/ambari, Apache-2.0)
'''
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
import os
from mock.mock import MagicMock, call, patch
from stacks.utils.RMFTestCase import *
import datetime, sys, socket
import resource_management.libraries.functions
@patch.object(resource_management.libraries.functions, "get_unique_id_and_date", new = MagicMock(return_value=''))
@patch("socket.socket", new = MagicMock())
class TestServiceCheck(RMFTestCase):
@patch("sys.exit")
def test_service_check_default(self, sys_exit_mock):
self.executeScript("2.0.6/services/HIVE/package/scripts/service_check.py",
classname="HiveServiceCheck",
command="service_check",
config_file="default.json"
)
self.assertResourceCalled('File', '/tmp/hcatSmoke.sh',
content = StaticFile('hcatSmoke.sh'),
mode = 0755,
)
self.assertResourceCalled('Execute', 'env JAVA_HOME=/usr/jdk64/jdk1.7.0_45 /tmp/hcatSmoke.sh hcatsmoke prepare',
logoutput = True,
path = ['/usr/sbin', '/usr/local/nin', '/bin', '/usr/bin'],
tries = 3,
user = 'ambari-qa',
environment = {'PATH' : os.environ['PATH'] + os.pathsep + "/usr/lib/hive/bin"},
try_sleep = 5,
)
self.assertResourceCalled('ExecuteHadoop', 'fs -test -e /apps/hive/warehouse/hcatsmoke',
logoutput = True,
user = 'hdfs',
conf_dir = '/etc/hadoop/conf',
keytab=UnknownConfigurationMock(),
kinit_path_local='/usr/bin/kinit',
bin_dir = '/usr/lib/hive/bin',
security_enabled=False
)
self.assertResourceCalled('Execute', ' /tmp/hcatSmoke.sh hcatsmoke cleanup',
logoutput = True,
path = ['/usr/sbin', '/usr/local/nin', '/bin', '/usr/bin'],
tries = 3,
user = 'ambari-qa',
environment = {'PATH' : os.environ['PATH'] + os.pathsep + "/usr/lib/hive/bin"},
try_sleep = 5,
)
self.assertNoMoreResources()
@patch("sys.exit")
def test_service_check_secured(self, sys_exit_mock):
self.executeScript("2.0.6/services/HIVE/package/scripts/service_check.py",
classname="HiveServiceCheck",
command="service_check",
config_file="secured.json"
)
self.assertResourceCalled('File', '/tmp/hcatSmoke.sh',
content = StaticFile('hcatSmoke.sh'),
mode = 0755,
)
self.assertResourceCalled('Execute', '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa; env JAVA_HOME=/usr/jdk64/jdk1.7.0_45 /tmp/hcatSmoke.sh hcatsmoke prepare',
logoutput = True,
path = ['/usr/sbin', '/usr/local/nin', '/bin', '/usr/bin'],
tries = 3,
user = 'ambari-qa',
environment = {'PATH' : os.environ['PATH'] + os.pathsep + "/usr/lib/hive/bin"},
try_sleep = 5,
)
self.assertResourceCalled('ExecuteHadoop', 'fs -test -e /apps/hive/warehouse/hcatsmoke',
logoutput = True,
user = 'hdfs',
conf_dir = '/etc/hadoop/conf',
keytab='/etc/security/keytabs/hdfs.headless.keytab',
kinit_path_local='/usr/bin/kinit',
security_enabled=True,
bin_dir = '/usr/lib/hive/bin',
principal='hdfs'
)
self.assertResourceCalled('Execute', '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa; /tmp/hcatSmoke.sh hcatsmoke cleanup',
logoutput = True,
path = ['/usr/sbin', '/usr/local/nin', '/bin', '/usr/bin'],
tries = 3,
user = 'ambari-qa',
environment = {'PATH' : os.environ['PATH'] + os.pathsep + "/usr/lib/hive/bin"},
try_sleep = 5,
)
self.assertNoMoreResources()
# -*- coding: utf-8 -*-
# design.py (StrangeArcturus/QtAndRequestParser-Project, MIT)
# Form implementation generated from reading ui file 'design.ui'
#
# Created by: PyQt5 UI code generator 5.15.4
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(650, 550)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(20, 10, 140, 13))
self.label.setObjectName("label")
self.song_title = QtWidgets.QLineEdit(self.centralwidget)
self.song_title.setGeometry(QtCore.QRect(90, 30, 113, 20))
self.song_title.setObjectName("song_title")
self.label_2 = QtWidgets.QLabel(self.centralwidget)
self.label_2.setGeometry(QtCore.QRect(20, 30, 60, 13))
self.label_2.setObjectName("label_2")
self.label_3 = QtWidgets.QLabel(self.centralwidget)
self.label_3.setGeometry(QtCore.QRect(220, 30, 80, 13))
self.label_3.setObjectName("label_3")
self.song_autor = QtWidgets.QLineEdit(self.centralwidget)
self.song_autor.setGeometry(QtCore.QRect(310, 30, 113, 20))
self.song_autor.setObjectName("song_autor")
self.label_4 = QtWidgets.QLabel(self.centralwidget)
self.label_4.setGeometry(QtCore.QRect(20, 90, 140, 13))
self.label_4.setObjectName("label_4")
self.orig_text = QtWidgets.QPlainTextEdit(self.centralwidget)
self.orig_text.setGeometry(QtCore.QRect(20, 150, 270, 340))
self.orig_text.setObjectName("orig_text")
self.label_5 = QtWidgets.QLabel(self.centralwidget)
self.label_5.setGeometry(QtCore.QRect(20, 120, 60, 13))
self.label_5.setObjectName("label_5")
self.trans_text = QtWidgets.QPlainTextEdit(self.centralwidget)
self.trans_text.setGeometry(QtCore.QRect(320, 150, 270, 340))
self.trans_text.setObjectName("trans_text")
self.label_6 = QtWidgets.QLabel(self.centralwidget)
self.label_6.setGeometry(QtCore.QRect(320, 120, 120, 13))
self.label_6.setObjectName("label_6")
self.get_text = QtWidgets.QPushButton(self.centralwidget)
self.get_text.setGeometry(QtCore.QRect(310, 70, 100, 23))
self.get_text.setObjectName("get_text")
self.pretty_flag = QtWidgets.QCheckBox(self.centralwidget)
self.pretty_flag.setGeometry(QtCore.QRect(20, 60, 250, 20))
self.pretty_flag.setObjectName("pretty_flag")
self.info = QtWidgets.QLabel(self.centralwidget)
self.info.setGeometry(QtCore.QRect(30, 500, 560, 13))
self.info.setText("")
self.info.setObjectName("info")
self.error_text = QtWidgets.QLabel(self.centralwidget)
self.error_text.setGeometry(QtCore.QRect(30, 520, 560, 20))
self.error_text.setText("")
self.error_text.setObjectName("error_text")
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "Project 1"))
        self.label.setText(_translate("MainWindow", "Enter the song details:"))
        self.label_2.setText(_translate("MainWindow", "Title:"))
        self.label_3.setText(_translate("MainWindow", "Artist:"))
        self.label_4.setText(_translate("MainWindow", "Retrieved song lyrics:"))
        self.label_5.setText(_translate("MainWindow", "Original:"))
        self.label_6.setText(_translate("MainWindow", "Russian translation:"))
        self.get_text.setText(_translate("MainWindow", "Fetch lyrics"))
        self.pretty_flag.setText(_translate("MainWindow", "Pretty text (no chorus labels)"))
# EP_2019/py_impl/main.py (Alisa-lisa/conferences, MIT)
from simulation.car import spawn_drivers
from simulation.passenger import spawn_passengers
from simulation.core import World, Clock
conf = {
"x": 100,
"y": 100,
"drivers": 200,
"users": 1000,
"start": "2019-07-08T00:00:00",
"end": "2019-07-08T00:01:00"
}
clock = Clock(conf["start"], conf["end"])
if __name__ == '__main__':
world = World([conf['x'], conf['y']], clock=clock)
world.register_drivers(spawn_drivers(conf["drivers"], conf['x'], conf['y']))
world.register_passengers(spawn_passengers(conf["users"], conf['x'], conf['y']))
world.run(log=False)
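The conf above pins the simulation to a one-minute window via ISO-8601 timestamps. Assuming Clock parses them the standard way, the span can be sanity-checked with datetime alone (Clock's actual parsing is not shown in this file):

```python
from datetime import datetime

start = datetime.fromisoformat("2019-07-08T00:00:00")
end = datetime.fromisoformat("2019-07-08T00:01:00")
print((end - start).total_seconds())  # 60.0
```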
#!/bin/python3
# Python/reverse_with_swap.py (avulaankith/Python, MIT)
import math
import os
import random
import re
import sys
#
# Complete the 'reverse_words_order_and_swap_cases' function below.
#
# The function is expected to return a STRING.
# The function accepts STRING sentence as parameter.
#
def reverse_words_order_and_swap_cases(sentence):
# Write your code here
l = []
st = ""
for i in sentence:
if i == " ":
l.append(st)
st = ""
else:
st += i.swapcase()
# continue
l.append(st)
st = ""
l.reverse()
news = ""
for i in range(len(l)):
if i != (len(l) - 1):
news += l[i] + " "
else:
news += l[i]
return news
sentence = input()
news = reverse_words_order_and_swap_cases(sentence)
print(news)
| 18.488372 | 67 | 0.566038 | 105 | 795 | 4.142857 | 0.466667 | 0.082759 | 0.117241 | 0.137931 | 0.236782 | 0.236782 | 0.170115 | 0 | 0 | 0 | 0 | 0.003711 | 0.322013 | 795 | 42 | 68 | 18.928571 | 0.80334 | 0.257862 | 0 | 0.259259 | 0 | 0 | 0.003442 | 0 | 0 | 0 | 0 | 0.02381 | 0 | 1 | 0.037037 | false | 0 | 0.185185 | 0 | 0.259259 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
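The reverse_words_order_and_swap_cases snippet above builds the result with manual accumulator strings; a more compact sketch of the same transformation (assuming words are separated by single spaces, as the original loop does):

```python
def reverse_words_order_and_swap_cases(sentence):
    # Split on single spaces, reverse the word order, and swap letter case.
    return " ".join(word.swapcase() for word in reversed(sentence.split(" ")))

print(reverse_words_order_and_swap_cases("Hello World"))  # wORLD hELLO
```

`str.swapcase` handles the per-character case flip, so no explicit character loop is needed.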
4af2b457e2a07435b2f1cbc51394d14794b7cb2f | 294 | py | Python | creeds/static/api1.py | MaayanLab/creeds | 7d580c91ca45c03e34bbc0d1928668f266ff13d9 | [
"CC0-1.0"
] | 2 | 2019-01-10T18:10:45.000Z | 2019-04-05T13:47:01.000Z | creeds/static/api1.py | MaayanLab/creeds | 7d580c91ca45c03e34bbc0d1928668f266ff13d9 | [
"CC0-1.0"
] | 1 | 2019-05-09T21:25:31.000Z | 2019-05-13T14:26:30.000Z | creeds/static/api1.py | MaayanLab/creeds | 7d580c91ca45c03e34bbc0d1928668f266ff13d9 | [
"CC0-1.0"
] | 2 | 2018-12-21T23:59:27.000Z | 2019-10-24T18:26:26.000Z | import json, requests
from pprint import pprint
CREEDS_URL = 'http://amp.pharm.mssm.edu/CREEDS/'
response = requests.get(CREEDS_URL + 'search', params={'q':'STAT3'})
if response.status_code == 200:
    pprint(response.json())
    # 'w' (text mode): json.dump writes str, which fails with 'wb' on Python 3
    json.dump(response.json(), open('api1_result.json', 'w'), indent=4)
| 32.666667 | 69 | 0.721088 | 43 | 294 | 4.837209 | 0.674419 | 0.086538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022556 | 0.095238 | 294 | 8 | 70 | 36.75 | 0.759399 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
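The record above dumps a parsed JSON response to disk; a minimal hedged sketch of that pattern, using text mode so json.dump's str output works on Python 3 as well (the data dict and file name are illustrative stand-ins for the real response):

```python
import json

data = {"search": "STAT3", "hits": 2}  # stand-in for response.json()

# Text mode ('w') is the portable choice: json.dump emits str, not bytes.
with open("api1_result.json", "w") as f:
    json.dump(data, f, indent=4)

# Round-trip to confirm the file contains the same structure.
with open("api1_result.json") as f:
    print(json.load(f) == data)  # True
```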
4af34453b4c3c543b26ea00b073366252fd5c89d | 355 | py | Python | admin/migrations/0041_course_color.py | rodlukas/UP-admin | 08f36de0773f39c6222da82016bf1384af2cce18 | [
"MIT"
] | 4 | 2019-07-19T17:39:04.000Z | 2022-03-22T21:02:15.000Z | admin/migrations/0041_course_color.py | rodlukas/UP-admin | 08f36de0773f39c6222da82016bf1384af2cce18 | [
"MIT"
] | 53 | 2019-08-04T14:25:40.000Z | 2022-03-26T20:30:55.000Z | admin/migrations/0041_course_color.py | rodlukas/UP-admin | 08f36de0773f39c6222da82016bf1384af2cce18 | [
"MIT"
] | 3 | 2020-03-09T07:11:03.000Z | 2020-09-11T01:22:50.000Z | # Generated by Django 2.2.3 on 2019-07-31 13:54
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [("admin", "0040_auto_20190718_0938")]
operations = [
migrations.AddField(
model_name="course", name="color", field=models.CharField(default="#000", max_length=7)
)
]
| 23.666667 | 99 | 0.661972 | 44 | 355 | 5.227273 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124555 | 0.208451 | 355 | 14 | 100 | 25.357143 | 0.69395 | 0.126761 | 0 | 0 | 1 | 0 | 0.13961 | 0.074675 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4af59f537fb6e3fa8f98dad4df206983a8ca37fd | 3,651 | py | Python | gengine/app/tests_old/test_groups.py | greck2908/gamification-engine | 4a74086bde4505217e4b9ba36349a427a7042b4b | [
"MIT"
] | 347 | 2015-03-03T14:25:59.000Z | 2022-03-09T07:46:31.000Z | gengine/app/tests_old/test_groups.py | greck2908/gamification-engine | 4a74086bde4505217e4b9ba36349a427a7042b4b | [
"MIT"
] | 76 | 2015-03-05T23:37:31.000Z | 2022-03-31T13:41:42.000Z | gengine/app/tests_old/test_groups.py | greck2908/gamification-engine | 4a74086bde4505217e4b9ba36349a427a7042b4b | [
"MIT"
] | 115 | 2015-03-04T23:47:25.000Z | 2021-12-24T06:24:06.000Z | # -*- coding: utf-8 -*-
from gengine.app.tests.base import BaseDBTest
from gengine.app.tests.helpers import create_user, update_user, delete_user, get_or_create_language
from gengine.metadata import DBSession
from gengine.app.model import AuthUser
class TestUserCreation(BaseDBTest):
def test_user_creation(self):
lang = get_or_create_language("en")
user = create_user(
lat = 12.1,
lng = 12.2,
#country = "RO",
#region = "Transylvania",
#city = "Cluj-Napoca",
timezone = "Europe/Bukarest",
language = "en",
additional_public_data = {
"first_name" : "Rudolf",
"last_name" : "Red Nose"
}
)
self.assertTrue(user.lat == 12.1)
self.assertTrue(user.lng == 12.2)
#self.assertTrue(user.country == "RO")
#self.assertTrue(user.region == "Transylvania")
#self.assertTrue(user.city == "Cluj-Napoca")
self.assertTrue(user.timezone == "Europe/Bukarest")
self.assertTrue(user.language_id == lang.id)
self.assertTrue(user.additional_public_data["first_name"] == "Rudolf")
self.assertTrue(user.additional_public_data["last_name"] == "Red Nose")
def test_user_updation(self):
lang = get_or_create_language("en")
user = create_user()
user = update_user(
user_id = user.id,
lat = 14.2,
lng = 16.3,
#country = "EN",
#region = "Transylvania",
#city = "Cluj-Napoca",
timezone = "Europe/Bukarest",
language = "en",
additional_public_data = {
"first_name" : "Rudolf",
"last_name" : "Red Nose"
}
)
# Correct cases
self.assertTrue(user.lat == 14.2)
self.assertTrue(user.lng == 16.3)
#self.assertTrue(user.country == "EN")
#self.assertTrue(user.region == "Transylvania")
#self.assertTrue(user.city == "Cluj-Napoca")
self.assertTrue(user.timezone == "Europe/Bukarest")
self.assertTrue(user.language_id == lang.id)
def test_user_deletion(self):
user1 = create_user()
# Create Second user
user2 = create_user(
lat=85.59,
lng=65.75,
#country="DE",
#region="Niedersachsen",
#city="Osnabrück",
timezone="Europe/Berlin",
language="de",
additional_public_data={
"first_name": "Michael",
"last_name": "Clarke"
},
friends=[1]
)
remaining_users = delete_user(
user_id = user1.id
)
# Correct cases
self.assertNotIn(user1.id, remaining_users)
self.assertEqual(user2.id, remaining_users[0].id)
def test_verify_password(self):
auth_user = AuthUser()
auth_user.password = "test12345"
auth_user.active = True
auth_user.email = "test@actidoo.com"
DBSession.add(auth_user)
iscorrect = auth_user.verify_password("test12345")
self.assertEqual(iscorrect, True)
def test_create_token(self):
user = create_user()
auth_user = AuthUser()
auth_user.user_id = user.id
auth_user.password = "test12345"
auth_user.active = True
auth_user.email = "test@actidoo.com"
DBSession.add(auth_user)
if auth_user.verify_password("test12345"):
token = auth_user.get_or_create_token()
self.assertNotEqual(token, None)
| 29.92623 | 99 | 0.564229 | 388 | 3,651 | 5.118557 | 0.25 | 0.11279 | 0.145015 | 0.050352 | 0.517623 | 0.437563 | 0.391742 | 0.391742 | 0.391742 | 0.391742 | 0 | 0.024077 | 0.317447 | 3,651 | 121 | 100 | 30.173554 | 0.772873 | 0.135032 | 0 | 0.371795 | 0 | 0 | 0.09001 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 1 | 0.064103 | false | 0.064103 | 0.051282 | 0 | 0.128205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ab0004198f8e66f5be455567544099aa471f9197 | 3,349 | py | Python | modules/helper/subtitles/subtitles.py | sdelcore/video-event-notifier-old | 16bd322f2b81efbb3e08e63ed407ab098d610c88 | [
"MIT"
] | null | null | null | modules/helper/subtitles/subtitles.py | sdelcore/video-event-notifier-old | 16bd322f2b81efbb3e08e63ed407ab098d610c88 | [
"MIT"
] | null | null | null | modules/helper/subtitles/subtitles.py | sdelcore/video-event-notifier-old | 16bd322f2b81efbb3e08e63ed407ab098d610c88 | [
"MIT"
] | null | null | null | import time
import srt
import re
import datetime
from mqtthandler import MQTTHandler
INIT_STATUS={
"video": {
"title": None,
"series_title": None,
"season": None,
"episode": None
},
"time": None,
"events": None
}
class SubtitleHandler:
subtitles = []
phrases = []
def __init__(self, broker):
self.mqtt = MQTTHandler(broker)
def parseSRT(self, srt_filename):
f=open(srt_filename, "r")
subtitle_generate = srt.parse(f.read())
f.close()
self.subtitles = list(subtitle_generate)
return self.subtitles
def parsePhrases(self, phrase_filename):
f=open(phrase_filename, "r")
lines = f.readlines()
for line in lines:
phrase = line.rstrip("\n\r").split("/")
self.phrases.append(phrase)
return self.phrases
def isPhraseInLine(self,phrase, sub, content):
sub_line = re.sub('[^A-Za-z0-9\s]+', '', str(content)).lower()
phrase = re.sub('[^A-Za-z0-9\s]+', '', str(phrase)).lower()
count = 0
while bool(re.search(phrase, sub_line)):
count += 1
sub_line = sub_line.replace(phrase, '', 1)
return count
def getEventTime(self,sub):
middle = sub.end - sub.start
between_sec = datetime.timedelta.total_seconds(middle) / 2
sec = between_sec + datetime.timedelta.total_seconds(sub.start)
return int(sec)
def matchEventToMovie(self, movie, subtitles, phrases, time_offset):
        # Build a fresh status dict so repeated calls do not mutate a shared template
        status = {
            "video": {"title": movie, "series_title": None, "season": None, "episode": None},
            "time": None,
            "events": None
        }
#TODO determine how to set up phrase data
for sub in subtitles:
c = sub.content.replace('\n', ' ')
c = c.split(" ")
firstpart, secondpart = " ".join(c[:len(c)//2]), " ".join(c[len(c)//2:])
mult = 0
for phrase in phrases:
line = phrase[0]
events = phrase[1]
mult += self.isPhraseInLine(line,sub,sub.content)
#f = self.isPhraseInLine(line,sub, firstpart)
#s = self.isPhraseInLine(line,sub, secondpart)
#if f + s == 0:
# mult += self.isPhraseInLine(line,sub,sub.content )
#else:
# mult += f+s
                # NOTE: this currently sums event counts over the entire subtitle.
                # To time events more precisely, split each subtitle into two halves
                # (firstpart/secondpart above), attribute matches in the first half
                # to the start of the subtitle and matches in the second half to the
                # end, and send a separate message for each half that matches.
            if mult > 0:  # won't work properly if events is greater than 1
status["time"] = self.getEventTime(sub) + time_offset
status["events"] = int(events) * mult
self.sendMessage(status)
#print(sub.content)
def sendMessage(self, msg):
self.mqtt.send(msg)
print(msg)
return msg
def isDone(self):
return True | 33.828283 | 133 | 0.561959 | 407 | 3,349 | 4.565111 | 0.371007 | 0.026911 | 0.047363 | 0.053821 | 0.142088 | 0.131324 | 0.089343 | 0.016146 | 0 | 0 | 0 | 0.007127 | 0.329651 | 3,349 | 99 | 134 | 33.828283 | 0.82049 | 0.221857 | 0 | 0 | 0 | 0 | 0.041683 | 0 | 0 | 0 | 0 | 0.010101 | 0 | 1 | 0.115942 | false | 0 | 0.072464 | 0.014493 | 0.318841 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab026e12b9cf96fdf582b2fdd6e78d761d952e59 | 5,709 | py | Python | misc/trac_plugins/IncludeMacro/includemacro/macros.py | weese/seqan | 1acb1688969c7b61497f2328af54b4d11228a484 | [
"BSD-3-Clause"
] | 1 | 2017-10-24T20:37:58.000Z | 2017-10-24T20:37:58.000Z | misc/trac_plugins/IncludeMacro/includemacro/macros.py | weese/seqan | 1acb1688969c7b61497f2328af54b4d11228a484 | [
"BSD-3-Clause"
] | 10 | 2015-03-02T16:45:39.000Z | 2015-06-23T14:02:13.000Z | misc/trac_plugins/IncludeMacro/includemacro/macros.py | weese/seqan | 1acb1688969c7b61497f2328af54b4d11228a484 | [
"BSD-3-Clause"
] | 2 | 2015-02-24T19:07:54.000Z | 2015-04-08T13:53:24.000Z | # TracIncludeMacro macros
import re
import urllib2
from StringIO import StringIO
from trac.core import *
from trac.wiki.macros import WikiMacroBase
from trac.wiki.formatter import system_message
from trac.wiki.model import WikiPage
from trac.mimeview.api import Mimeview, get_mimetype, Context
from trac.perm import IPermissionRequestor
from genshi.core import escape
from genshi.input import HTMLParser, ParseError
from genshi.filters.html import HTMLSanitizer
__all__ = ['IncludeMacro']
class IncludeMacro(WikiMacroBase):
"""A macro to include other resources in wiki pages.
More documentation to follow.
"""
implements(IPermissionRequestor)
# Default output formats for sources that need them
default_formats = {
'wiki': 'text/x-trac-wiki',
}
# IWikiMacroProvider methods
def expand_macro(self, formatter, name, content):
req = formatter.req # Shortcut.
safe_content = False # Whether or not to disable cleaning HTML.
args = [x.strip() for x in content.split(',')]
if len(args) == 1:
args.append(None)
        elif len(args) == 3:
            if not args[2].startswith('fragment='):
                msg = ('If three arguments are given, the last one must'
                       ' start with fragment=, but tag content was %s')
                return system_message(msg % content)
elif len(args) != 2:
return system_message('Invalid arguments "%s"'%content)
# Parse out fragment name.
fragment_name = None
if args[-1] and args[-1].startswith('fragment='):
fragment_name = args[-1][len('fragment='):]
args.pop()
if len(args) == 1:
args.append(None)
# Pull out the arguments
source, dest_format = args
try:
source_format, source_obj = source.split(':', 1)
except ValueError: # If no : is present, assume its a wiki page
source_format, source_obj = 'wiki', source
# Apply a default format if needed
if dest_format is None:
try:
dest_format = self.default_formats[source_format]
except KeyError:
pass
if source_format in ('http', 'https', 'ftp'):
# Since I can't really do recursion checking, and because this
# could be a source of abuse allow selectively blocking it.
# RFE: Allow blacklist/whitelist patterns for URLS. <NPK>
# RFE: Track page edits and prevent unauthorized users from ever entering a URL include. <NPK>
if not req.perm.has_permission('INCLUDE_URL'):
self.log.info('IncludeMacro: Blocking attempt by %s to include URL %s on page %s', req.authname, source, req.path_info)
return ''
try:
urlf = urllib2.urlopen(source)
out = urlf.read()
except urllib2.URLError, e:
return system_message('Error while retrieving file', str(e))
except TracError, e:
return system_message('Error while previewing', str(e))
ctxt = Context.from_request(req)
elif source_format == 'wiki':
# XXX: Check for recursion in page includes. <NPK>
if not req.perm.has_permission('WIKI_VIEW'):
return ''
page = WikiPage(self.env, source_obj)
if not page.exists:
return system_message('Wiki page %s does not exist'%source_obj)
out = page.text
ctxt = Context.from_request(req, 'wiki', source_obj)
elif source_format == 'source':
if not req.perm.has_permission('FILE_VIEW'):
return ''
repo = self.env.get_repository(authname=req.authname)
node = repo.get_node(source_obj)
out = node.get_content().read()
if dest_format is None:
dest_format = node.content_type or get_mimetype(source_obj, out)
ctxt = Context.from_request(req, 'source', source_obj)
# RFE: Add ticket: and comment: sources. <NPK>
# RFE: Add attachment: source. <NPK>
else:
return system_message('Unsupported include source %s'%source)
# If there was a fragment name given then find the fragment.
fragment = []
current_fragment_name = None
if fragment_name:
for line in out.splitlines():
res = re.search(r'FRAGMENT\(([^)]*)\)', line)
if res:
current_fragment_name = res.groups()[0]
else:
if current_fragment_name == fragment_name:
fragment.append(line)
out = '\n'.join(fragment)
# If we have a preview format, use it
if dest_format:
# We can trust the output and do not need to call the HTML sanitizer
# below. The HTML sanitization leads to whitespace being stripped.
safe_content = True
out = Mimeview(self.env).render(ctxt, dest_format, out, force_source=True)
# Escape if needed
if not safe_content and not self.config.getbool('wiki', 'render_unsafe_content', False):
try:
out = HTMLParser(StringIO(out)).parse() | HTMLSanitizer()
except ParseError:
out = escape(out)
return out
# IPermissionRequestor methods
def get_permission_actions(self):
yield 'INCLUDE_URL'
| 40.204225 | 135 | 0.584341 | 662 | 5,709 | 4.932024 | 0.353474 | 0.033078 | 0.040735 | 0.011026 | 0.091884 | 0.057887 | 0.031853 | 0 | 0 | 0 | 0 | 0.003392 | 0.328779 | 5,709 | 141 | 136 | 40.489362 | 0.848643 | 0.165878 | 0 | 0.148515 | 0 | 0 | 0.102691 | 0.004521 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009901 | 0.118812 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab0c1dad4d8d1784a1b379e00273da750a4c7145 | 871 | py | Python | AnimeSpider/spiders/AinmeLinkList.py | xiaowenwen1995/AnimeSpider | 11c676b772508fd4e14565a7adbfc7336d69b982 | [
"MIT"
] | 7 | 2020-02-26T15:58:13.000Z | 2021-11-14T15:48:01.000Z | AnimeSpider/spiders/AinmeLinkList.py | xiaowenwen1995/AnimeSpider | 11c676b772508fd4e14565a7adbfc7336d69b982 | [
"MIT"
] | 1 | 2020-07-23T06:44:19.000Z | 2020-07-23T16:12:28.000Z | AnimeSpider/spiders/AinmeLinkList.py | xiaowenwen1995/AnimeSpider | 11c676b772508fd4e14565a7adbfc7336d69b982 | [
"MIT"
] | 1 | 2021-04-01T09:22:51.000Z | 2021-04-01T09:22:51.000Z | # -*- coding: utf-8 -*-
import scrapy
import json
import os
import codecs
from AnimeSpider.items import AnimespiderItem
class AinmelinklistSpider(scrapy.Spider):
name = 'AinmeLinkList'
allowed_domains = ['bilibili.com']
start_urls = ['http://bilibili.com/']
def start_requests(self):
jsonpath = os.path.dirname(__file__) + '/output'
jsonfile = codecs.open('%s/AinmeList_items.json' % jsonpath, 'r', encoding='utf-8')
for line in jsonfile:
ainme = json.loads(line)
ainmename = ainme["name"]
url = ainme["link"].replace("//", "https://")
yield scrapy.Request(url=url, callback=self.parse, meta={'ainmename': ainmename})
def parse(self, response):
item = AnimespiderItem()
item["info_link"] = response.css(".media-title").xpath('@href').get()
yield item
| 32.259259 | 93 | 0.626866 | 98 | 871 | 5.479592 | 0.632653 | 0.014898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002946 | 0.220436 | 871 | 26 | 94 | 33.5 | 0.787923 | 0.02411 | 0 | 0 | 0 | 0 | 0.158019 | 0.027123 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.238095 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ab0f9c740d041f9e373bd50dbfcabb4859e2d6c4 | 950 | py | Python | api/migrations/0004_auto_20210107_2032.py | bartoszper/Django-REST-API-movierater | a145f087d9c59167ea3503dde5fa74ab7f3e3e72 | [
"MIT"
] | null | null | null | api/migrations/0004_auto_20210107_2032.py | bartoszper/Django-REST-API-movierater | a145f087d9c59167ea3503dde5fa74ab7f3e3e72 | [
"MIT"
] | null | null | null | api/migrations/0004_auto_20210107_2032.py | bartoszper/Django-REST-API-movierater | a145f087d9c59167ea3503dde5fa74ab7f3e3e72 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2021-01-07 19:32
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('api', '0003_auto_20210107_2010'),
]
operations = [
migrations.AlterField(
model_name='extrainfo',
name='rodzaj',
field=models.IntegerField(choices=[(2, 'Sci-Fi'), (0, 'Nieznany'), (5, 'Komedia'), (3, 'Dramat'), (1, 'Horror')], default=0),
),
migrations.CreateModel(
name='Recenzja',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('opis', models.TextField(default='')),
('gwizdki', models.IntegerField(default=5)),
('film', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='api.film')),
],
),
]
| 32.758621 | 137 | 0.570526 | 99 | 950 | 5.393939 | 0.656566 | 0.044944 | 0.052434 | 0.082397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054755 | 0.269474 | 950 | 28 | 138 | 33.928571 | 0.714697 | 0.047368 | 0 | 0.090909 | 1 | 0 | 0.120709 | 0.025471 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab102bf2e193beb384ebdf3e74b5f3f77d47c463 | 3,976 | py | Python | vendor/munkireport/firewall/scripts/firewall.py | menamegaly/MR | 18d042639d9b45ca81a9b58659f45c6e2c3ac87f | [
"MIT"
] | null | null | null | vendor/munkireport/firewall/scripts/firewall.py | menamegaly/MR | 18d042639d9b45ca81a9b58659f45c6e2c3ac87f | [
"MIT"
] | null | null | null | vendor/munkireport/firewall/scripts/firewall.py | menamegaly/MR | 18d042639d9b45ca81a9b58659f45c6e2c3ac87f | [
"MIT"
] | null | null | null | #!/usr/bin/python
"""
Firewall for munkireport.
By Tuxudo
Will return all details about how the firewall is configured
"""
import subprocess
import os
import sys
import platform
import re
import plistlib
import json
sys.path.insert(0,'/usr/local/munki')
sys.path.insert(0, '/usr/local/munkireport')
from munkilib import FoundationPlist
def get_firewall_info():
'''Uses system profiler to get firewall info for the machine.'''
cmd = ['/usr/sbin/system_profiler', 'SPFirewallDataType', '-xml']
proc = subprocess.Popen(cmd, shell=False, bufsize=-1,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, unused_error) = proc.communicate()
try:
plist = plistlib.readPlistFromString(output)
# system_profiler xml is an array
firewall_dict = plist[0]
items = firewall_dict['_items']
return items
except Exception:
return {}
def flatten_firewall_info(array):
'''Un-nest firewall info, return array with objects with relevant keys'''
firewall = {}
for obj in array:
for item in obj:
            if item == '_items':
                # Recurse into nested '_items' and merge their keys
                firewall.update(flatten_firewall_info(obj['_items']))
elif item == 'spfirewall_services':
for service in obj[item]:
if obj[item][service] == "spfirewall_allow_all":
obj[item][service] = 1
else:
obj[item][service] = 0
firewall['services'] = json.dumps(obj[item])
elif item == 'spfirewall_applications':
for application in obj[item]:
if obj[item][application] == "spfirewall_allow_all":
obj[item][application] = 1
else:
obj[item][application] = 0
firewall['applications'] = json.dumps(obj[item])
return firewall
def get_alf_preferences():
pl = FoundationPlist.readPlist("/Library/Preferences/com.apple.alf.plist")
firewall = {}
for item in pl:
if item == 'allowdownloadsignedenabled':
firewall['allowdownloadsignedenabled'] = to_bool(pl[item])
elif item == 'allowsignedenabled':
firewall['allowsignedenabled'] = to_bool(pl[item])
elif item == 'firewallunload':
firewall['firewallunload'] = to_bool(pl[item])
elif item == 'globalstate':
firewall['globalstate'] = to_bool(pl[item])
elif item == 'stealthenabled':
firewall['stealthenabled'] = to_bool(pl[item])
elif item == 'loggingenabled':
firewall['loggingenabled'] = to_bool(pl[item])
elif item == 'loggingoption':
firewall['loggingoption'] = pl[item]
elif item == 'version':
firewall['version'] = pl[item]
return firewall
def to_bool(s):
if s == True:
return 1
else:
return 0
def merge_two_dicts(x, y):
z = x.copy()
z.update(y)
return z
def main():
"""Main"""
# Skip manual check
if len(sys.argv) > 1:
if sys.argv[1] == 'manualcheck':
print 'Manual check: skipping'
exit(0)
# Create cache dir if it does not exist
cachedir = '%s/cache' % os.path.dirname(os.path.realpath(__file__))
if not os.path.exists(cachedir):
os.makedirs(cachedir)
# Set the encoding
# The "ugly hack" :P
reload(sys)
sys.setdefaultencoding('utf8')
# Get results
result = dict()
info = get_firewall_info()
result = merge_two_dicts(flatten_firewall_info(info), get_alf_preferences())
# Write firewall results to cache
output_plist = os.path.join(cachedir, 'firewall.plist')
FoundationPlist.writePlist(result, output_plist)
#print FoundationPlist.writePlistToString(result)
if __name__ == "__main__":
main()
| 31.307087 | 80 | 0.591549 | 436 | 3,976 | 5.272936 | 0.350917 | 0.030448 | 0.041757 | 0.042627 | 0.108743 | 0.086994 | 0 | 0 | 0 | 0 | 0 | 0.004993 | 0.294769 | 3,976 | 126 | 81 | 31.555556 | 0.814907 | 0.058602 | 0 | 0.076923 | 0 | 0 | 0.156761 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.087912 | null | null | 0.010989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
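merge_two_dicts in the record above is the classic copy-and-update merge; a small sketch of its behavior (the sample dicts are illustrative), alongside the dict-unpacking equivalent available on Python 3.5+:

```python
def merge_two_dicts(x, y):
    z = x.copy()
    z.update(y)  # keys from y win on conflict
    return z

a = {"globalstate": 1, "version": "1.0"}
b = {"stealthenabled": 0, "version": "1.6"}

merged = merge_two_dicts(a, b)
print(merged["version"])     # 1.6 -- y's value overrides x's
print(merged == {**a, **b})  # True: same result as dict unpacking
```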
ab143c1e766e4bf7477a807945495619e156d263 | 729 | py | Python | Examples/IMAP/FilteringMessagesFromIMAPMailbox.py | Muzammil-khan/Aspose.Email-Python-Dotnet | 04ca3a6f440339f3ddf316218f92d15d66f24e7e | [
"MIT"
] | 5 | 2019-01-28T05:17:12.000Z | 2020-04-14T14:31:34.000Z | Examples/IMAP/FilteringMessagesFromIMAPMailbox.py | Muzammil-khan/Aspose.Email-Python-Dotnet | 04ca3a6f440339f3ddf316218f92d15d66f24e7e | [
"MIT"
] | 1 | 2019-01-28T16:07:26.000Z | 2021-11-25T10:59:52.000Z | Examples/IMAP/FilteringMessagesFromIMAPMailbox.py | Muzammil-khan/Aspose.Email-Python-Dotnet | 04ca3a6f440339f3ddf316218f92d15d66f24e7e | [
"MIT"
] | 6 | 2018-07-16T14:57:34.000Z | 2020-08-30T05:59:52.000Z | import aspose.email
from aspose.email.clients.imap import ImapClient
from aspose.email.clients import SecurityOptions
from aspose.email.clients.imap import ImapQueryBuilder
import datetime as dt
def run():
dataDir = ""
#ExStart: FetchEmailMessageFromServer
client = ImapClient("imap.gmail.com", 993, "username", "password")
client.select_folder("Inbox")
builder = ImapQueryBuilder()
builder.subject.contains("Newsletter")
builder.internal_date.on(dt.datetime.now())
query = builder.get_query()
msgsColl = client.list_messages(query)
print("Total Messages fulfilling search criterion: " + str(len(msgsColl)))
#ExEnd: FetchEmailMessageFromServer
if __name__ == '__main__':
run()
| 31.695652 | 78 | 0.739369 | 81 | 729 | 6.506173 | 0.617284 | 0.083491 | 0.085389 | 0.125237 | 0.121442 | 0.121442 | 0 | 0 | 0 | 0 | 0 | 0.004847 | 0.150892 | 729 | 22 | 79 | 33.136364 | 0.846527 | 0.096022 | 0 | 0 | 0 | 0 | 0.147641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.058824 | 0.294118 | 0 | 0.352941 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ab149d0949672fc58bdb20c8bbee5cb7134e800f | 2,363 | py | Python | Python.FancyBear/settings.py | 010001111/Vx-Suites | 6b4b90a60512cce48aa7b87aec5e5ac1c4bb9a79 | [
"MIT"
] | 2 | 2021-02-04T06:47:45.000Z | 2021-07-28T10:02:10.000Z | Python.FancyBear/settings.py | 010001111/Vx-Suites | 6b4b90a60512cce48aa7b87aec5e5ac1c4bb9a79 | [
"MIT"
] | null | null | null | Python.FancyBear/settings.py | 010001111/Vx-Suites | 6b4b90a60512cce48aa7b87aec5e5ac1c4bb9a79 | [
"MIT"
] | null | null | null | # Server UID
SERVER_UID = 45158729
# Setup Logging system #########################################
#
import os
from FileConsoleLogger import FileConsoleLogger
ServerLogger = FileConsoleLogger( os.path.join(os.path.dirname(os.path.abspath(__file__)), "_w3server.log") )
W3Logger = FileConsoleLogger( os.path.join(os.path.dirname(os.path.abspath(__file__)), "_w3.log") )
#
# Setup Level 2 Protocol - P2Scheme #########################################
#
from P2Scheme import P2Scheme
P2_URL_TOKEN = '760e25f9eb3124'.decode('hex')
P2_SUBJECT_TOKEN = '\x55\xaa\x63\x68\x69\x6e\x61'
P2_DATA_TOKEN = '\x55\xaa\x63\x68\x69\x6e\x61'
# P2_DATA_TOKEN = 'd85a8c54fbe5e6'.decode('hex')
MARK = 'itwm='
B64_JUNK_LEN = 9
BIN_JUNK_LEN = 4
P2_Scheme = P2Scheme(_url_token=P2_URL_TOKEN, _data_token=P2_DATA_TOKEN, _mark=MARK, _subj_token=P2_SUBJECT_TOKEN,\
_b64junk_len=B64_JUNK_LEN, _binary_junk_len=BIN_JUNK_LEN)
#
# Setup Level 3 Protocol - P3Scheme #########################################
#
from P3Scheme import P3Scheme
#
P3_PRIVATE_TOKEN = 'a20e25f9aa3fe4'.decode('hex')
P3_SERVICE_TOKEN = '015a1354acf1b1'.decode('hex')
#
P3_Scheme = P3Scheme(private_token=P3_PRIVATE_TOKEN, service_token=P3_SERVICE_TOKEN)
#
# Setup HTTP checker
#
#from HTTPHeadersChecker import HTTPHeadersChecker
#
#HTTPChecker = HTTPHeadersChecker()
# Setup LocalStorage
#
from FSLocalStorage import FSLocalStorage
LocalStorage = FSLocalStorage()
############################################################
# Initialize Server instance #
#
#from W3Server import W3Server
#MAIN_HANDLER = W3Server(p2_scheme=P2_Scheme, p3_scheme=P3_Scheme, http_checker=HTTPChecker, local_storage=LocalStorage, logger=ServerLogger)
############################################################
# Mail Parameters
POP3_MAIL_IP = 'pop.gmail.com'
POP3_PORT = 995
POP3_ADDR = 'jassnovember30@gmail.com'
POP3_PASS = '30Jass11'
SMTP_MAIL_IP = 'smtp.gmail.com'
SMTP_PORT = 587
SMTP_TO_ADDR = 'userdf783@mailtransition.com'
SMTP_FROM_ADDR = 'ginabetz75@gmail.com'
SMTP_PASS = '75Gina75'
# C&C Parameters
#
XAS_IP = '104.152.187.66'
XAS_GATE = '/updates/'
############################################################
# Setup P3 communication
# wsgi2
#
LS_TIMEOUT = 1 # big loop timeout
FILES_PER_ITER = 5 # count of requests per iter
############################################################
| 28.46988 | 141 | 0.650444 | 275 | 2,363 | 5.290909 | 0.418182 | 0.024742 | 0.02268 | 0.037113 | 0.125773 | 0.125773 | 0.125773 | 0.125773 | 0.125773 | 0.125773 | 0 | 0.067989 | 0.103682 | 2,363 | 82 | 142 | 28.817073 | 0.61898 | 0.241219 | 0 | 0 | 0 | 0 | 0.19341 | 0.077364 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.060606 | 0.151515 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ab1a86e3a749c305907e0a449b620a088db1db5e | 4,070 | py | Python | var/spack/repos/builtin/packages/py-mdanalysis/package.py | LiamBindle/spack | e90d5ad6cfff2ba3de7b537d6511adccd9d5fcf1 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2,360 | 2017-11-06T08:47:01.000Z | 2022-03-31T14:45:33.000Z | var/spack/repos/builtin/packages/py-mdanalysis/package.py | LiamBindle/spack | e90d5ad6cfff2ba3de7b537d6511adccd9d5fcf1 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 13,838 | 2017-11-04T07:49:45.000Z | 2022-03-31T23:38:39.000Z | var/spack/repos/builtin/packages/py-mdanalysis/package.py | LiamBindle/spack | e90d5ad6cfff2ba3de7b537d6511adccd9d5fcf1 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1,793 | 2017-11-04T07:45:50.000Z | 2022-03-30T14:31:53.000Z | # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PyMdanalysis(PythonPackage):
"""MDAnalysis is a Python toolkit to analyze molecular dynamics
trajectories generated by a wide range of popular simulation
packages including DL_Poly, CHARMM, Amber, NAMD, LAMMPS, and
Gromacs. (See the lists of supported trajectory formats and
topology formats.)"""
homepage = "https://www.mdanalysis.org"
pypi = "MDAnalysis/MDAnalysis-0.19.2.tar.gz"
version('1.0.0', sha256='f45a024aca45e390ff1c45ca90beb2180b78881be377e2a1aa9cd6c109bcfa81')
version('0.20.1', sha256='d04b71b193b9716d2597ffb9938b93f43487fa535da1bb5c1f2baccf356d7df9')
version('0.19.2', sha256='c5395bbafa5efca2e1aee4715d26129844140c47cb8301da0293106cb969de7d')
version('0.19.1', sha256='ff1d694f8598c0833ec340de6a6adb3b5e62b92d0fa94ee6401718ba972db3cc')
version('0.19.0', sha256='248e3b37fc6150e31c609cc18a3927c32aee37b76d29cbfedf635e7e1aa982cf')
version('0.18.0', sha256='a08acea1755112411e7db55e3f282e164b47a59e15794b38744cce6c596f252a')
version('0.17.0', sha256='9bd61760334698cc7b8a57ad26456451e926e9c9e66722594ad8816561348cde')
version('0.16.2', sha256='407d9a9ff1ab8a5e47973714d06fabff220f8d08a28792dee93e88e70e995b0a')
version('0.16.1', sha256='3dc8f5d639ab3a0d152cbd7259ae9372ec8a9bac0f8cb7d3b80ce5adc1e3ee57')
version('0.16.0', sha256='c4824fa1fddd336daa39371436187ebb023366885fb250c2827ed7fce2546bd4')
version('0.15.0', sha256='9088786048b47339cba1f8a586977bbb3bb04ae1bcd0462b59e45bda37e25533')
variant('analysis', default=True,
description='Enable analysis packages: matplotlib, scipy, seaborn')
variant('amber', default=False,
description='Support AMBER netcdf format.')
depends_on('python@2.7:', type=('build', 'run'))
depends_on('py-setuptools', type='build')
depends_on('py-cython@0.16:', type='build')
depends_on('py-six@1.4.0:', type=('build', 'run'))
depends_on('py-networkx@1.0:', type=('build', 'run'))
depends_on('py-gsd@1.4.0:', when='@0.17.0:', type=('build', 'run'))
depends_on('py-mmtf-python@1.0.0:', when='@0.16.0:', type=('build', 'run'))
depends_on('py-mock', when='@0.18.0:', type=('build', 'run'))
depends_on('py-tqdm@4.43.0:', when='@1.0.0:', type=('build', 'run'))
depends_on('py-joblib', when='@0.16.0:0.20.1', type=('build', 'run'))
depends_on('py-joblib@0.12:', when='@1.0.0:', type=('build', 'run'))
depends_on('py-numpy@1.5.0:', when='@:0.15.0', type=('build', 'run'))
depends_on('py-numpy@1.10.4:', when='@0.16.0:0.19.2', type=('build', 'run'))
depends_on('py-numpy@1.13.3:', when='@0.20.1:', type=('build', 'run'))
depends_on('py-biopython@1.59:', when='@:0.17.0', type=('build', 'run'))
depends_on('py-biopython@1.71:', when='@0.18.0:', type=('build', 'run'))
depends_on('py-griddataformats@0.3.2:', when='@:0.16.2', type=('build', 'run'))
depends_on('py-griddataformats@0.4:', when='@0.17.0:', type=('build', 'run'))
depends_on('py-matplotlib', when='@:0.15.0+analysis', type=('build', 'run'))
depends_on('py-matplotlib@1.5.1:', when='@0.16.0:0.16.1+analysis', type=('build', 'run'))
depends_on('py-matplotlib@1.5.1:', when='@0.16.2:', type=('build', 'run'))
depends_on('py-scipy', when='@:0.16.1+analysis', type=('build', 'run'))
depends_on('py-scipy', when='@0.16.2:0.17.0', type=('build', 'run'))
depends_on('py-scipy@1.0.0:', when='@0.18.0:', type=('build', 'run'))
depends_on('py-scikit-learn', when='@0.16.0:+analysis', type=('build', 'run'))
depends_on('py-seaborn', when='+analysis', type=('build', 'run'))
depends_on('py-netcdf4@1.0:', when='+amber', type=('build', 'run'))
    depends_on('hdf5', when='+amber', type='run')
ab1d930ad268269a2d4b9569657fc14b57b495e4 | 690 | py | Python | lib/jbgp/jbgpneighbor.py | routedo/junos-pyez-example | b89df2d40ca0a233529e4a26b42dd605c00aae46 | [
"Apache-2.0"
] | null | null | null | lib/jbgp/jbgpneighbor.py | routedo/junos-pyez-example | b89df2d40ca0a233529e4a26b42dd605c00aae46 | [
"Apache-2.0"
] | null | null | null | lib/jbgp/jbgpneighbor.py | routedo/junos-pyez-example | b89df2d40ca0a233529e4a26b42dd605c00aae46 | [
"Apache-2.0"
] | 1 | 2020-06-17T12:17:18.000Z | 2020-06-17T12:17:18.000Z | """
Query BGP neighbor table on a Juniper network device.
"""
import sys
from jnpr.junos import Device
from jnpr.junos.factory import loadyaml
def juniper_bgp_state(dev, bgp_neighbor):
"""
This function queries the BGP neighbor table on a Juniper network device.
dev = Juniper device connection
bgp_neighbor = IP address of BGP neighbor
return = Returns state of BGP neighbor
"""
try:
globals().update(loadyaml('yaml/bgp_neighbor.yml'))
bgp_ni = bgp_neighbor_info(dev).get(neighbor_address=bgp_neighbor)
return bgp_ni
except Exception as err:
print(err)
dev.close()
sys.exit(1)
    return
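The error path above exits the whole process, which makes the function hard to unit test against a live router. The table lookup can be isolated behind an injected table factory; a minimal sketch, where `FakeTable` and the neighbor address are hypothetical stand-ins for the loadyaml-generated `bgp_neighbor_info` table and a real peer:

```python
import sys

def bgp_neighbor_state(table_factory, dev, neighbor):
    """Fetch a neighbor entry via an injected op-table factory; exit on error."""
    try:
        return table_factory(dev).get(neighbor_address=neighbor)
    except Exception as err:
        print(err)
        sys.exit(1)

class FakeTable:
    """Stand-in for the YAML-defined bgp_neighbor_info table."""
    def __init__(self, dev):
        self.dev = dev

    def get(self, neighbor_address):
        return {"peer-address": neighbor_address, "peer-state": "Established"}

entry = bgp_neighbor_state(FakeTable, dev=None, neighbor="192.0.2.1")
```

In production the factory would be the table class loaded from `yaml/bgp_neighbor.yml`; the fake lets the control flow be exercised offline.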
ab1fe51ebbcd4a1dc4363d8ff7260094c438deca | 2,170 | py | Python | lib/cherrypy/cherrypy/test/test_sessionauthenticate.py | MiCHiLU/google_appengine_sdk | 3da9f20d7e65e26c4938d2c4054bc4f39cbc5522 | [
"Apache-2.0"
] | 790 | 2015-01-03T02:13:39.000Z | 2020-05-10T19:53:57.000Z | AppServer/lib/cherrypy/cherrypy/test/test_sessionauthenticate.py | nlake44/appscale | 6944af660ca4cb772c9b6c2332ab28e5ef4d849f | [
"Apache-2.0"
] | 1,361 | 2015-01-08T23:09:40.000Z | 2020-04-14T00:03:04.000Z | AppServer/lib/cherrypy/cherrypy/test/test_sessionauthenticate.py | nlake44/appscale | 6944af660ca4cb772c9b6c2332ab28e5ef4d849f | [
"Apache-2.0"
] | 162 | 2015-01-01T00:21:16.000Z | 2022-02-23T02:36:04.000Z | import cherrypy
from cherrypy.test import helper
class SessionAuthenticateTest(helper.CPWebCase):
def setup_server():
def check(username, password):
# Dummy check_username_and_password function
if username != 'test' or password != 'password':
return 'Wrong login/password'
def augment_params():
# A simple tool to add some things to request.params
# This is to check to make sure that session_auth can handle request
# params (ticket #780)
cherrypy.request.params["test"] = "test"
cherrypy.tools.augment_params = cherrypy.Tool('before_handler',
augment_params, None, priority=30)
class Test:
_cp_config = {'tools.sessions.on': True,
'tools.session_auth.on': True,
'tools.session_auth.check_username_and_password': check,
'tools.augment_params.on': True,
}
def index(self, **kwargs):
return "Hi %s, you are logged in" % cherrypy.request.login
index.exposed = True
cherrypy.tree.mount(Test())
setup_server = staticmethod(setup_server)
def testSessionAuthenticate(self):
# request a page and check for login form
self.getPage('/')
self.assertInBody('<form method="post" action="do_login">')
# setup credentials
login_body = 'username=test&password=password&from_page=/'
# attempt a login
self.getPage('/do_login', method='POST', body=login_body)
self.assertStatus((302, 303))
# get the page now that we are logged in
self.getPage('/', self.cookies)
self.assertBody('Hi test, you are logged in')
# do a logout
self.getPage('/do_logout', self.cookies, method='POST')
self.assertStatus((302, 303))
# verify we are logged out
self.getPage('/', self.cookies)
self.assertInBody('<form method="post" action="do_login">')
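The `check` function above relies on the session_auth convention: returning a string signals failure with that message, while returning nothing (a falsy `None`) signals success. A standalone sketch of that contract, assuming the same test credentials:

```python
def check(username, password):
    # session_auth treats any returned string as the failure message;
    # a falsy return (None) means the credentials are accepted.
    if username != 'test' or password != 'password':
        return 'Wrong login/password'

ok = check('test', 'password')    # falls through, returns None
bad = check('test', 'hunter2')    # returns the error string
```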
ab224c0b7dd96b0783239d1ab27b2b04825a3e94 | 4,122 | py | Python | Python/libraries/recognizers-date-time/recognizers_date_time/date_time/italian/timeperiod_extractor_config.py | felaray/Recognizers-Text | f514fd61c8d472ed92565261162712409f655312 | [
"MIT"
] | null | null | null | Python/libraries/recognizers-date-time/recognizers_date_time/date_time/italian/timeperiod_extractor_config.py | felaray/Recognizers-Text | f514fd61c8d472ed92565261162712409f655312 | [
"MIT"
] | 6 | 2021-12-20T17:13:35.000Z | 2022-03-29T08:54:11.000Z | Python/libraries/recognizers-date-time/recognizers_date_time/date_time/italian/timeperiod_extractor_config.py | felaray/Recognizers-Text | f514fd61c8d472ed92565261162712409f655312 | [
"MIT"
] | null | null | null | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from typing import List, Pattern
from recognizers_text.utilities import RegExpUtility
from recognizers_text.extractor import Extractor
from recognizers_number.number.italian.extractors import ItalianIntegerExtractor
from ...resources.italian_date_time import ItalianDateTime
from ..extractors import DateTimeExtractor
from ..base_timeperiod import TimePeriodExtractorConfiguration, MatchedIndex
from ..base_time import BaseTimeExtractor
from ..base_timezone import BaseTimeZoneExtractor
from .time_extractor_config import ItalianTimeExtractorConfiguration
from .base_configs import ItalianDateTimeUtilityConfiguration
from .timezone_extractor_config import ItalianTimeZoneExtractorConfiguration
class ItalianTimePeriodExtractorConfiguration(TimePeriodExtractorConfiguration):
@property
def check_both_before_after(self) -> bool:
return self._check_both_before_after
@property
def simple_cases_regex(self) -> List[Pattern]:
return self._simple_cases_regex
@property
def till_regex(self) -> Pattern:
return self._till_regex
@property
def time_of_day_regex(self) -> Pattern:
return self._time_of_day_regex
@property
def general_ending_regex(self) -> Pattern:
return self._general_ending_regex
@property
def single_time_extractor(self) -> DateTimeExtractor:
return self._single_time_extractor
@property
def integer_extractor(self) -> Extractor:
return self._integer_extractor
@property
def token_before_date(self) -> str:
return self._token_before_date
@property
def pure_number_regex(self) -> List[Pattern]:
return self._pure_number_regex
@property
def time_zone_extractor(self) -> DateTimeExtractor:
return self._time_zone_extractor
def __init__(self):
super().__init__()
self._check_both_before_after = ItalianDateTime.CheckBothBeforeAfter
self._single_time_extractor = BaseTimeExtractor(
ItalianTimeExtractorConfiguration())
self._integer_extractor = ItalianIntegerExtractor()
self.utility_configuration = ItalianDateTimeUtilityConfiguration()
self._simple_cases_regex: List[Pattern] = [
RegExpUtility.get_safe_reg_exp(ItalianDateTime.PureNumFromTo),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.PureNumBetweenAnd),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.PmRegex),
RegExpUtility.get_safe_reg_exp(ItalianDateTime.AmRegex)
]
self._till_regex: Pattern = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.TillRegex)
self._time_of_day_regex: Pattern = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.TimeOfDayRegex)
self._general_ending_regex: Pattern = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.GeneralEndingRegex)
self.from_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.FromRegex2)
self.connector_and_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.ConnectorAndRegex)
self.before_regex = RegExpUtility.get_safe_reg_exp(
ItalianDateTime.BeforeRegex2)
self._token_before_date = ItalianDateTime.TokenBeforeDate
        self._pure_number_regex = [ItalianDateTime.PureNumFromTo, ItalianDateTime.PureNumBetweenAnd]
self._time_zone_extractor = BaseTimeZoneExtractor(
ItalianTimeZoneExtractorConfiguration())
def get_from_token_index(self, source: str) -> MatchedIndex:
match = self.from_regex.search(source)
if match:
return MatchedIndex(True, match.start())
return MatchedIndex(False, -1)
def get_between_token_index(self, source: str) -> MatchedIndex:
match = self.before_regex.search(source)
if match:
return MatchedIndex(True, match.start())
return MatchedIndex(False, -1)
def is_connector_token(self, source: str):
return self.connector_and_regex.match(source)
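`get_from_token_index` and `get_between_token_index` share one pattern: run a regex search and wrap the hit position in a `MatchedIndex`. A self-contained sketch of that helper — the Italian pattern here is an illustrative stand-in for `ItalianDateTime.FromRegex2`, not the library's actual regex:

```python
import re
from collections import namedtuple

# Mirrors the (matched, index) pair returned by the extractor configuration.
MatchedIndex = namedtuple("MatchedIndex", ["matched", "index"])

def get_token_index(pattern, source):
    match = re.search(pattern, source)
    if match:
        return MatchedIndex(True, match.start())
    return MatchedIndex(False, -1)

hit = get_token_index(r"\bdalle\b", "dalle 15 alle 17")   # "from 3pm to 5pm"
miss = get_token_index(r"\bdalle\b", "alle 17")
```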
ab253b2fa27d701106a981880d15472309de60c1 | 2,379 | py | Python | tests_oval_graph/test_arf_xml_parser/test_arf_xml_parser.py | Honny1/oval-graph | 96472a9d2b08c2afce620c54f229ce95ad019d1f | [
"Apache-2.0"
] | 21 | 2019-08-01T09:09:25.000Z | 2020-09-27T10:00:09.000Z | tests_oval_graph/test_arf_xml_parser/test_arf_xml_parser.py | Honny1/oval-graph | 96472a9d2b08c2afce620c54f229ce95ad019d1f | [
"Apache-2.0"
] | 129 | 2019-08-04T19:06:24.000Z | 2020-10-03T10:02:26.000Z | tests_oval_graph/test_arf_xml_parser/test_arf_xml_parser.py | Honny1/oval-graph | 96472a9d2b08c2afce620c54f229ce95ad019d1f | [
"Apache-2.0"
] | 11 | 2019-08-07T08:53:54.000Z | 2020-10-02T22:02:38.000Z | from pathlib import Path
import pytest
from oval_graph.arf_xml_parser.arf_xml_parser import ARFXMLParser
def get_arf_report_path(src="global_test_data/ssg-fedora-ds-arf.xml"):
return str(Path(__file__).parent.parent / src)
@pytest.mark.parametrize("rule_id, result", [
(
"xccdf_org.ssgproject.content_rule_accounts_passwords_pam_faillock_deny",
"false",
),
(
"xccdf_org.ssgproject.content_rule_sshd_disable_gssapi_auth",
"false",
),
(
"xccdf_org.ssgproject.content_rule_service_debug-shell_disabled",
"true",
),
(
"xccdf_org.ssgproject.content_rule_mount_option_dev_shm_noexec",
"false",
),
(
"xccdf_org.ssgproject.content_rule_audit_rules_unsuccessful_file_modification_creat",
"false",
),
(
"xccdf_org.ssgproject.content_rule_audit_rules_file_deletion_events_rmdir",
"false",
),
(
"xccdf_org.ssgproject.content_rule_require_singleuser_auth",
"true",
),
])
def test_parsing_and_evaluate_scan_rule(rule_id, result):
path = get_arf_report_path()
parser = ARFXMLParser(path)
oval_tree = parser.get_oval_tree(rule_id)
assert oval_tree.evaluate_tree() == result
def test_parsing_arf_report_without_system_data():
path = get_arf_report_path("global_test_data/arf_no_system_data.xml")
rule_id = "xccdf_com.example.www_rule_test-fail"
parser = ARFXMLParser(path)
oval_tree = parser.get_oval_tree(rule_id)
assert oval_tree.evaluate_tree() == "false"
@pytest.mark.parametrize("rule_id, pattern", [
("hello", "404 rule \"hello\" not found!"),
("xccdf_org.ssgproject.content_rule_ntpd_specify_remote_server", "notselected"),
("xccdf_org.ssgproject.content_rule_configure_bind_crypto_policy", "notchecked"),
("xccdf_org.ssgproject.content_rule_ensure_gpgcheck_local_packages", "notapplicable"),
])
def test_parsing_bad_rule(rule_id, pattern):
path = get_arf_report_path()
parser = ARFXMLParser(path)
with pytest.raises(Exception, match=pattern):
assert parser.get_oval_tree(rule_id)
def test_use_bad_report_file():
src = 'global_test_data/xccdf_org.ssgproject.content_profile_ospp-results-initial.xml'
path = get_arf_report_path(src)
with pytest.raises(Exception, match=r"arf\b|ARF\b"):
assert ARFXMLParser(path)
ab2a38bd32faf647f78849a772f13ad447eb6e18 | 2,144 | py | Python | chapter_13/mailtools/__init__.py | bimri/programming_python | ba52ccd18b9b4e6c5387bf4032f381ae816b5e77 | [
"MIT"
] | null | null | null | chapter_13/mailtools/__init__.py | bimri/programming_python | ba52ccd18b9b4e6c5387bf4032f381ae816b5e77 | [
"MIT"
] | null | null | null | chapter_13/mailtools/__init__.py | bimri/programming_python | ba52ccd18b9b4e6c5387bf4032f381ae816b5e77 | [
"MIT"
] | null | null | null | "The mailtools Utility Package"
'Initialization File'
"""
##################################################################################
mailtools package: interface to mail server transfers, used by pymail2, PyMailGUI,
and PyMailCGI; does loads, sends, parsing, composing, and deleting, with part
attachments, encodings (of both the email and Unicdode kind), etc.; the parser,
fetcher, and sender classes here are designed to be mixed-in to subclasses which
use their methods, or used as embedded or standalone objects;
this package also includes convenience subclasses for silent mode, and more;
loads all mail text if pop server doesn't do top; doesn't handle threads or UI
here, and allows askPassword to differ per subclass; progress callback funcs get
status; all calls raise exceptions on error--client must handle in GUI/other;
this changed from file to package: nested modules imported here for bw compat;
4E: need to use package-relative import syntax throughout, because in Py 3.X
package dir in no longer on module import search path if package is imported
elsewhere (from another directory which uses this package); also performs
Unicode decoding on mail text when fetched (see mailFetcher), as well as for
some text part payloads which might have been email-encoded (see mailParser);
TBD: in saveparts, should file be opened in text mode for text/ contypes?
TBD: in walkNamedParts, should we skip oddballs like message/delivery-status?
TBD: Unicode support has not been tested exhaustively: see Chapter 13 for more
on the Py3.1 email package and its limitations, and the policies used here;
##################################################################################
"""
# collect contents of all modules here, when package dir imported directly
from .mailFetcher import *
from .mailSender import * # 4E: package-relative
from .mailParser import *
# export nested modules here, when from mailtools import *
__all__ = 'mailFetcher', 'mailSender', 'mailParser'
# self-test code is in selftest.py to allow mailconfig's path
# to be set before running the nested module imports above
ab2b3845336cbc9c2cd653a367ec0d03b0cfffa6 | 223 | py | Python | server.py | SDelhey/websocket-chat | c7b83583007a723baee25acedbceddd55c12ffec | [
"MIT"
] | null | null | null | server.py | SDelhey/websocket-chat | c7b83583007a723baee25acedbceddd55c12ffec | [
"MIT"
] | null | null | null | server.py | SDelhey/websocket-chat | c7b83583007a723baee25acedbceddd55c12ffec | [
"MIT"
] | null | null | null | from flask import Flask, render_template
from flask_socketio import SocketIO, send, emit
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
if __name__ == '__main__':
    socketio.run(app)
ab2d830b247e5d1c87b1cc476939c72b7371cdbc | 10,997 | py | Python | bin/mem_monitor.py | Samahu/ros-system-monitor | 5376eba046ac38cfe8fe9ff8b385fa2637015eda | [
"BSD-3-Clause"
] | 68 | 2016-02-07T00:35:25.000Z | 2022-03-22T11:14:16.000Z | bin/mem_monitor.py | Samahu/ros-system-monitor | 5376eba046ac38cfe8fe9ff8b385fa2637015eda | [
"BSD-3-Clause"
] | 5 | 2016-04-12T14:29:51.000Z | 2021-08-04T12:55:59.000Z | bin/mem_monitor.py | Samahu/ros-system-monitor | 5376eba046ac38cfe8fe9ff8b385fa2637015eda | [
"BSD-3-Clause"
] | 62 | 2015-08-09T23:17:16.000Z | 2022-02-11T18:24:30.000Z | #!/usr/bin/env python
############################################################################
# Copyright (C) 2009, Willow Garage, Inc. #
# Copyright (C) 2013 by Ralf Kaestner #
# ralf.kaestner@gmail.com #
# Copyright (C) 2013 by Jerome Maye #
# jerome.maye@mavt.ethz.ch #
# #
# All rights reserved. #
# #
# Redistribution and use in source and binary forms, with or without #
# modification, are permitted provided that the following conditions #
# are met: #
# #
# 1. Redistributions of source code must retain the above copyright #
# notice, this list of conditions and the following disclaimer. #
# #
# 2. Redistributions in binary form must reproduce the above copyright #
# notice, this list of conditions and the following disclaimer in #
# the documentation and/or other materials provided with the #
# distribution. #
# #
# 3. The name of the copyright holders may be used to endorse or #
# promote products derived from this software without specific #
# prior written permission. #
# #
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS #
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT #
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS #
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE #
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, #
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, #
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; #
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER #
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT #
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN #
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE #
# POSSIBILITY OF SUCH DAMAGE. #
############################################################################
from __future__ import with_statement
import rospy
import traceback
import threading
from threading import Timer
import sys, os, time
from time import sleep
import subprocess
import string
import socket
from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus, KeyValue
mem_level_warn = 0.95
mem_level_error = 0.99
stat_dict = { 0: 'OK', 1: 'Warning', 2: 'Error' }
def update_status_stale(stat, last_update_time):
time_since_update = rospy.get_time() - last_update_time
stale_status = 'OK'
if time_since_update > 20 and time_since_update <= 35:
stale_status = 'Lagging'
if stat.level == DiagnosticStatus.OK:
stat.message = stale_status
elif stat.message.find(stale_status) < 0:
stat.message = ', '.join([stat.message, stale_status])
stat.level = max(stat.level, DiagnosticStatus.WARN)
if time_since_update > 35:
stale_status = 'Stale'
if stat.level == DiagnosticStatus.OK:
stat.message = stale_status
elif stat.message.find(stale_status) < 0:
stat.message = ', '.join([stat.message, stale_status])
stat.level = max(stat.level, DiagnosticStatus.ERROR)
stat.values.pop(0)
stat.values.pop(0)
stat.values.insert(0, KeyValue(key = 'Update Status', value = stale_status))
stat.values.insert(1, KeyValue(key = 'Time Since Update', value = str(time_since_update)))
class MemMonitor():
def __init__(self, hostname, diag_hostname):
self._diag_pub = rospy.Publisher('/diagnostics', DiagnosticArray, queue_size = 100)
self._mutex = threading.Lock()
self._mem_level_warn = rospy.get_param('~mem_level_warn', mem_level_warn)
self._mem_level_error = rospy.get_param('~mem_level_error', mem_level_error)
self._usage_timer = None
self._usage_stat = DiagnosticStatus()
self._usage_stat.name = 'Memory Usage (%s)' % diag_hostname
self._usage_stat.level = 1
self._usage_stat.hardware_id = hostname
self._usage_stat.message = 'No Data'
self._usage_stat.values = [ KeyValue(key = 'Update Status', value = 'No Data' ),
KeyValue(key = 'Time Since Last Update', value = 'N/A') ]
self._last_usage_time = 0
self._last_publish_time = 0
# Start checking everything
self.check_usage()
## Must have the lock to cancel everything
def cancel_timers(self):
if self._usage_timer:
self._usage_timer.cancel()
def check_memory(self):
values = []
level = DiagnosticStatus.OK
msg = ''
mem_dict = { 0: 'OK', 1: 'Low Memory', 2: 'Very Low Memory' }
try:
p = subprocess.Popen('free -tm',
stdout = subprocess.PIPE,
stderr = subprocess.PIPE, shell = True)
stdout, stderr = p.communicate()
retcode = p.returncode
if retcode != 0:
values.append(KeyValue(key = "\"free -tm\" Call Error", value = str(retcode)))
                return DiagnosticStatus.ERROR, '"free -tm" Call Error', values
rows = stdout.split('\n')
data = rows[1].split()
total_mem_physical = data[1]
used_mem_physical = data[2]
free_mem_physical = data[3]
data = rows[2].split()
total_mem_swap = data[1]
used_mem_swap = data[2]
free_mem_swap = data[3]
data = rows[3].split()
total_mem = data[1]
used_mem = data[2]
free_mem = data[3]
level = DiagnosticStatus.OK
mem_usage = float(used_mem_physical)/float(total_mem_physical)
if (mem_usage < self._mem_level_warn):
level = DiagnosticStatus.OK
elif (mem_usage < self._mem_level_error):
level = DiagnosticStatus.WARN
else:
level = DiagnosticStatus.ERROR
values.append(KeyValue(key = 'Memory Status', value = mem_dict[level]))
values.append(KeyValue(key = 'Total Memory (Physical)', value = total_mem_physical+"M"))
values.append(KeyValue(key = 'Used Memory (Physical)', value = used_mem_physical+"M"))
values.append(KeyValue(key = 'Free Memory (Physical)', value = free_mem_physical+"M"))
values.append(KeyValue(key = 'Total Memory (Swap)', value = total_mem_swap+"M"))
values.append(KeyValue(key = 'Used Memory (Swap)', value = used_mem_swap+"M"))
values.append(KeyValue(key = 'Free Memory (Swap)', value = free_mem_swap+"M"))
values.append(KeyValue(key = 'Total Memory', value = total_mem+"M"))
values.append(KeyValue(key = 'Used Memory', value = used_mem+"M"))
values.append(KeyValue(key = 'Free Memory', value = free_mem+"M"))
msg = mem_dict[level]
except Exception, e:
rospy.logerr(traceback.format_exc())
msg = 'Memory Usage Check Error'
values.append(KeyValue(key = msg, value = str(e)))
level = DiagnosticStatus.ERROR
return level, mem_dict[level], values
def check_usage(self):
if rospy.is_shutdown():
with self._mutex:
self.cancel_timers()
return
diag_level = 0
diag_vals = [ KeyValue(key = 'Update Status', value = 'OK' ),
KeyValue(key = 'Time Since Last Update', value = 0 )]
diag_msgs = []
# Check memory
mem_level, mem_msg, mem_vals = self.check_memory()
diag_vals.extend(mem_vals)
if mem_level > 0:
diag_msgs.append(mem_msg)
diag_level = max(diag_level, mem_level)
if diag_msgs and diag_level > 0:
usage_msg = ', '.join(set(diag_msgs))
else:
usage_msg = stat_dict[diag_level]
# Update status
with self._mutex:
self._last_usage_time = rospy.get_time()
self._usage_stat.level = diag_level
self._usage_stat.values = diag_vals
self._usage_stat.message = usage_msg
if not rospy.is_shutdown():
self._usage_timer = threading.Timer(5.0, self.check_usage)
self._usage_timer.start()
else:
self.cancel_timers()
def publish_stats(self):
with self._mutex:
# Update everything with last update times
update_status_stale(self._usage_stat, self._last_usage_time)
msg = DiagnosticArray()
msg.header.stamp = rospy.get_rostime()
msg.status.append(self._usage_stat)
if rospy.get_time() - self._last_publish_time > 0.5:
self._diag_pub.publish(msg)
self._last_publish_time = rospy.get_time()
if __name__ == '__main__':
hostname = socket.gethostname()
hostname = hostname.replace('-', '_')
import optparse
parser = optparse.OptionParser(usage="usage: mem_monitor.py [--diag-hostname=cX]")
parser.add_option("--diag-hostname", dest="diag_hostname",
help="Computer name in diagnostics output (ex: 'c1')",
metavar="DIAG_HOSTNAME",
action="store", default = hostname)
options, args = parser.parse_args(rospy.myargv())
try:
rospy.init_node('mem_monitor_%s' % hostname)
except rospy.exceptions.ROSInitException:
print >> sys.stderr, 'Memory monitor is unable to initialize node. Master may not be running.'
sys.exit(0)
mem_node = MemMonitor(hostname, options.diag_hostname)
rate = rospy.Rate(1.0)
try:
while not rospy.is_shutdown():
rate.sleep()
mem_node.publish_stats()
except KeyboardInterrupt:
pass
except Exception, e:
traceback.print_exc()
rospy.logerr(traceback.format_exc())
mem_node.cancel_timers()
sys.exit(0)
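The `free -tm` parsing inside `check_memory` can be factored into a pure helper, which makes the row/column indexing testable without spawning a subprocess. A sketch under the assumption that `free -tm` prints a header followed by `Mem:`, `Swap:`, and `Total:` rows; the sample output below is illustrative:

```python
def parse_free_tm(stdout):
    """Extract total/used/free (in MiB) from `free -tm` output."""
    rows = stdout.strip().split('\n')

    def fields(row):
        data = row.split()
        return {'total': int(data[1]), 'used': int(data[2]), 'free': int(data[3])}

    # rows[0] is the column header; rows 1..3 are Mem:, Swap:, Total:
    return {'physical': fields(rows[1]),
            'swap': fields(rows[2]),
            'total': fields(rows[3])}

SAMPLE = """\
              total        used        free      shared  buff/cache   available
Mem:          15896        4200       11696         100        2000       11400
Swap:          2047           0        2047
Total:        17943        4200       13743
"""

mem = parse_free_tm(SAMPLE)
usage = float(mem['physical']['used']) / mem['physical']['total']
```

The warn/error thresholds in the monitor would then compare `usage` against `_mem_level_warn` and `_mem_level_error`.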
ab313db7c6b7e6135aaa8212f15c08dfe29e2372 | 1,280 | py | Python | dataloader/frame_counter/frame_counter.py | aaron-zou/pretraining-twostream | 5aa2f4bafb731e61f8f671e2500a6dfa8436be57 | [
"MIT"
] | null | null | null | dataloader/frame_counter/frame_counter.py | aaron-zou/pretraining-twostream | 5aa2f4bafb731e61f8f671e2500a6dfa8436be57 | [
"MIT"
] | null | null | null | dataloader/frame_counter/frame_counter.py | aaron-zou/pretraining-twostream | 5aa2f4bafb731e61f8f671e2500a6dfa8436be57 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""Generate frame counts dict for a dataset.
Usage:
frame_counter.py [options]
Options:
-h, --help Print help message
--root=<str> Path to root of dataset (should contain video folders that contain images)
[default: /vision/vision_users/azou/data/hmdb51_flow/u/]
--output=<str> Output filename [default: hmdb_frame_count.pickle]
"""
from __future__ import print_function
from docopt import docopt
import os
import sys
import pickle
if __name__ == '__main__':
args = docopt(__doc__)
print(args)
# Final counts
counts = {}
    min_count = sys.maxsize
# Generate list of video folders
for root, dirs, files in os.walk(args['--root']):
# Skip the root directory
if len(dirs) != 0:
continue
# Process a directory and frame count into a dictionary entry
name = os.path.basename(os.path.normpath(root))
print('{}: {} frames'.format(name, len(files)))
counts[name] = len(files)
# Track minimum count
if len(files) < min_count:
min_count = len(files)
with open(args['--output'], 'wb') as ofile:
pickle.dump(counts, ofile)
print('Minimum frame count = {}'.format(min_count))
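The leaf-directory counting logic above can be factored into a function and exercised on a throwaway tree. A sketch (Python 3, since it uses `tempfile.TemporaryDirectory`; the clip and frame names are made up):

```python
import os
import tempfile

def count_frames(root):
    """Map each leaf (video/frame) directory name to its file count."""
    counts = {}
    for dirpath, dirs, files in os.walk(root):
        if dirs:          # skip non-leaf directories, as the script does
            continue
        name = os.path.basename(os.path.normpath(dirpath))
        counts[name] = len(files)
    return counts

with tempfile.TemporaryDirectory() as root:
    clip = os.path.join(root, "clip_a")
    os.makedirs(clip)
    for i in range(3):
        open(os.path.join(clip, "frame_%d.jpg" % i), "w").close()
    counts = count_frames(root)
```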
ab32101612714ab2b6b04c378a7a5646daa96906 | 155 | py | Python | Problem_30/main.py | jdalzatec/EulerProject | 2f2f4d9c009be7fd63bb229bb437ea75db77d891 | [
"MIT"
] | 1 | 2022-03-28T05:32:58.000Z | 2022-03-28T05:32:58.000Z | Problem_30/main.py | jdalzatec/EulerProject | 2f2f4d9c009be7fd63bb229bb437ea75db77d891 | [
"MIT"
] | null | null | null | Problem_30/main.py | jdalzatec/EulerProject | 2f2f4d9c009be7fd63bb229bb437ea75db77d891 | [
"MIT"
] | null | null | null | total = 0
for n in range(1000, 1000000):
suma = 0
for i in str(n):
suma += int(i)**5
if (n == suma):
total += n
print(total) | 14.090909 | 30 | 0.483871 | 26 | 155 | 2.884615 | 0.576923 | 0.106667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141414 | 0.36129 | 155 | 11 | 31 | 14.090909 | 0.616162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# Source: nl/predictor.py (jclosure/donkus, MIT)

from nltk.corpus import gutenberg
from nltk import ConditionalFreqDist
from random import choice

# Create the conditional frequency distribution.
cfd = ConditionalFreqDist()

## For each token, count the current word given the previous word.
prev_word = None
for word in gutenberg.words('austen-persuasion.txt'):
    cfd[prev_word][word] += 1
    prev_word = word

## Start predicting at a given word, say "therefore".
word = "therefore"
i = 1

## Find all words that can follow the given word and choose one at random.
while i < 20:
    print(word, end=' ')
    lwords = list(cfd[word])  # cfd[word] is a FreqDist; empty if word was never seen
    if not lwords:
        break
    word = choice(lwords)
    i += 1
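The script above needs nltk and a downloaded Gutenberg corpus; the same bigram-chain idea can be sketched with the standard library alone (the toy token list below is an assumption standing in for the real text):

```python
# Dependency-free sketch of a bigram chain: count follower frequencies
# per word, then walk the chain by choosing followers at random.
import random
from collections import defaultdict, Counter

tokens = "the cat sat on the mat and the cat slept".split()

cfd = defaultdict(Counter)
prev_word = None
for word in tokens:
    cfd[prev_word][word] += 1
    prev_word = word

word = "the"
for _ in range(5):
    followers = list(cfd[word])
    if not followers:          # dead end: no observed follower
        break
    word = random.choice(followers)
    print(word, end=' ')
print()
```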
# Source: tests/test_tempo_event.py (yokaze/crest-python, MIT)
#
# test_tempo_event.py
# crest-python
#
# Copyright (C) 2017 Rue Yokaze
# Distributed under the MIT License.
#
import crest_loader
import unittest
from crest.events.meta import TempoEvent


class TestTempoEvent(unittest.TestCase):
def test_ctor(self):
TempoEvent()
TempoEvent(120)
def test_message(self):
evt = TempoEvent(120)
self.assertEqual(evt.Message, [0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20])
def test_property(self):
evt = TempoEvent(120)
self.assertEqual(evt.Tempo, 120)
self.assertEqual(evt.MicroSeconds, 500000)
evt.Tempo = 60
self.assertEqual(evt.Tempo, 60)
self.assertEqual(evt.MicroSeconds, 1000000)
evt.MicroSeconds = 250000
self.assertEqual(evt.Tempo, 240)
self.assertEqual(evt.MicroSeconds, 250000)
if __name__ == '__main__':
unittest.main()
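The assertions in `test_property` encode the standard MIDI tempo relation: the Set Tempo meta event stores microseconds per quarter note, and 120 BPM corresponds to 500000 µs (the 0x07A120 payload in `test_message`). A small sketch of the conversion, independent of the crest library:

```python
# BPM and the stored microsecond value are related by 60,000,000 / x
# (integer division suffices for the values used in the tests above).
def tempo_to_microseconds(bpm):
    return 60_000_000 // bpm

def microseconds_to_tempo(us):
    return 60_000_000 // us

print(tempo_to_microseconds(120))     # 500000, i.e. 0x07A120
print(microseconds_to_tempo(250000))  # 240
```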
#!/usr/local/bin/python
# Source: Arbitrage_Future/Arbitrage_Future/test.py (ronaldzgithub/CryptoArbitrage, MIT)
# -*- coding:utf-8 -*-
import YunBi
import CNBTC
import json
import threading
import Queue
import time
import logging
import numpy
import message
import random
open_platform = [True,True,True,True]
numpy.set_printoptions(suppress=True)
# logging.basicConfig(level=logging.DEBUG,
# format="[%(asctime)20s] [%(levelname)8s] %(filename)10s:%(lineno)-5s --- %(message)s",
# datefmt="%Y-%m-%d %H:%M:%S",
# filename="log/%s.log"%time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())),
# filemode='w')
# console = logging.StreamHandler()
# console.setLevel(logging.INFO)
# formatter = logging.Formatter("[%(asctime)20s] [%(levelname)8s] %(filename)10s:%(lineno)-5s --- %(message)s", "%Y-%m-%d %H:%M:%S")
# console.setFormatter(formatter)
# logging.getLogger('').addHandler(console)
coin_status = [-1,-1,-1,-1]
money_status = [-1,-1,-1,-1]
history = open("log/historyPrice_%s.txt"%time.strftime('%Y_%m_%d_%H_%M_%S', time.localtime(time.time())),"a")
# output = open("journalist.txt",'a')
balance = open("log/balance%s.txt"%time.strftime('%Y_%m_%d %H_%M_%S', time.localtime(time.time())),'a')
ybQue1 = Queue.Queue()
ybQue2 = Queue.Queue()
hbQue1 = Queue.Queue()
hbQue2 = Queue.Queue()
okcQue1 = Queue.Queue()
okcQue2 = Queue.Queue()
cnbtcQue1 = Queue.Queue()
cnbtcQue2 = Queue.Queue()
ybTradeQue1 = Queue.Queue()
ybTradeQue2 = Queue.Queue()
cnbtcTradeQue1 = Queue.Queue()
cnbtcTradeQue2 = Queue.Queue()
hbTradeQue1 = Queue.Queue()
hbTradeQue2 = Queue.Queue()
okcTradeQue1 = Queue.Queue()
okcTradeQue2 = Queue.Queue()
ybAccountQue1 = Queue.Queue()
ybAccountQue2 = Queue.Queue()
cnbtcAccountQue1 = Queue.Queue()
cnbtcAccountQue2 = Queue.Queue()
hbAccountQue1 = Queue.Queue()
hbAccountQue2 = Queue.Queue()
okcAccountQue1 = Queue.Queue()
okcAccountQue2 = Queue.Queue()
alertQue = Queue.Queue()
total_trade_coin = 0
delay_time = 0.2
config = json.load(open("config.json","r"))
#####max coin # in each trade
maxTradeLimitation = float(config["MaxCoinTradeLimitation"])
tel_list = config["tel"]
# maxTradeLimitation_yb_buy_cnbtc_sell = float(config["MaxCoinTradeLimitation_yb_buy_cnbtc_sell"])
# maxTradeLimitation_yb_buy_hb_sell = float(config["MaxCoinTradeLimitation_yb_buy_hb_sell"])
# maxTradeLimitation_yb_sell_hb_buy = float(config["MaxCoinTradeLimitation_yb_sell_hb_buy"])
# maxTradeLimitation_hb_buy_cnbtc_sell = float(config["MaxCoinTradeLimitation_hb_buy_cnbtc_sell"])
# maxTradeLimitation_hb_sell_cnbtc_buy = float(config["MaxCoinTradeLimitation_hb_sell_cnbtc_buy"])
#####max coin # for each account
maxCoin = float(config["MaxCoinLimitation"])
#####if spread over this threshold, we trade
max_thres_limitation = float(config["max_thres_limitation"])
spread_threshold_yb_sell_cnbtc_buy = float(config["spread_threshold_yb_sell_cnbtc_buy"])
spread_threshold_yb_buy_cnbtc_sell = float(config["spread_threshold_yb_buy_cnbtc_sell"])
spread_threshold_yb_buy_hb_sell = float(config["spread_threshold_yb_buy_hb_sell"])
spread_threshold_yb_sell_hb_buy = float(config["spread_threshold_yb_sell_hb_buy"])
spread_threshold_hb_buy_cnbtc_sell = float(config["spread_threshold_hb_buy_cnbtc_sell"])
spread_threshold_hb_sell_cnbtc_buy = float(config["spread_threshold_hb_sell_cnbtc_buy"])
random_range = float(config["RandomRange"])
spread_threshold_yb_sell_okc_buy = float(config["spread_threshold_yb_sell_okc_buy"])
spread_threshold_yb_buy_okc_sell = float(config["spread_threshold_yb_buy_okc_sell"])
spread_threshold_okc_buy_hb_sell = float(config["spread_threshold_okc_buy_hb_sell"])
spread_threshold_okc_sell_hb_buy = float(config["spread_threshold_okc_sell_hb_buy"])
spread_threshold_okc_buy_cnbtc_sell = float(config["spread_threshold_okc_buy_cnbtc_sell"])
spread_threshold_okc_sell_cnbtc_buy = float(config["spread_threshold_okc_sell_cnbtc_buy"])
max_diff_thres = float(config["max_diff_thres"])
#######if coin # is lower than alert thres, it will increase the thres
alert_thres_coin = float(config["alert_thres_coin"])
alert_thres_money = float(config["alert_thres_money"])
thres_coin = float(config["thres_coin"])
thres_money = float(config["thres_money"])
#######max thres increase is slop*alert_thres
slope = float(config["alert_slope"])
# print max_diff_thres,alert_thres,slope
# spread_threshold = float(config["spread_threshold"])
# spread_threshold_minor = float(config["spread_threshold_minor"])
#####if we start a trade, we will accept all trade until spread reach lowest spread threshold, after that, we cancel all trade
lowest_spread_threshold = float(config["lowest_spread_threshold"])
trade_multiplier_ratio = float(config["TradeMultiplyRatio"])
# lowest_spread_threshold_minor = float(config["lowest_spread_threshold_minor"])
#####the trade price is max trade limitation*trade ratio behind the min/max price of ask/bid
trade_ratio = float(config["TradeAdvanceRatio"])
# trade_ratio_minor = float(config["TradeAdvanceRatio_minor"])
#####slippage
slippage = float(config["slippage"])
tmpThres = maxTradeLimitation*trade_ratio
# tmpThres_minor = maxTradeLimitation_minor*trade_ratio
offset_player = int(config["offset_player"])
# offset_player_minor = int(config["offset_player_minor"])
offset_coin = float(config["offset_coin"])
# offset_coin_minor = float(config["offset_coin_minor"])
########return 0 accumulate amount
########return 1 price
########return 2 list
def cnbtcThresCoin(thres,offset_coin,offset_player,list):
acc = 0
for i in range(offset_player,len(list)):
acc += list[i][1]
if acc > thres+offset_coin:
return (thres,list[i][0],list)
return (acc,list[-1][0],list)
def ybThresCoin(thres,offset_coin,offset_player,list):
acc = 0
for i in range(offset_player,len(list)):
acc += float(list[i][1])
if acc > thres+offset_coin:
return (thres,float(list[i][0]),list)
return (acc,float(list[-1][0]),list)
def hbThresCoin(thres,offset_coin,offset_player,list):
acc = 0
for i in range(offset_player,len(list)):
acc += float(list[i][1])
if acc > thres+offset_coin:
return (thres,float(list[i][0]),list)
return (acc,float(list[-1][0]),list)
def okcThresCoin(thres,offset_coin,offset_player,list):
acc = 0
for i in range(offset_player,len(list)):
acc += list[i][1]
if acc > thres+offset_coin:
return (thres,list[i][0],list)
return (acc,list[-1][0],list)
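The four `*ThresCoin` helpers above share one pattern: walk the book from the `offset_player`-th level, accumulate size, and return the price of the level where the cumulative size first exceeds `thres + offset_coin` (or the worst remaining level if the book runs out). A unified sketch, with a hypothetical bid ladder:

```python
# Minimal stand-in for the platform-specific *ThresCoin helpers.
def thres_coin(thres, offset_coin, offset_player, book):
    """book is a list of (price, amount) levels, best price first."""
    acc = 0.0
    for price, amount in book[offset_player:]:
        acc += float(amount)
        if acc > thres + offset_coin:
            return thres, float(price)
    # Book exhausted: report what was available and the worst price seen.
    return acc, float(book[-1][0])

# Hypothetical bid ladder: 1.2 coins at 2000, 0.8 at 1999, 3.0 at 1998.
bids = [(2000.0, 1.2), (1999.0, 0.8), (1998.0, 3.0)]
print(thres_coin(1.5, 0.0, 0, bids))  # (1.5, 1999.0)
```

`offset_player` skips the best levels (to trade behind the first few players) and `offset_coin` pads the required depth, both matching the config knobs of the same names.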
def ybRun():
while True:
yb = ybQue1.get()
if yb == None:
ybQue1.task_done()
break
else:
while True:
depth = yb.getDepth()
if depth:
break
depth["asks"].reverse()
ybQue2.put((ybThresCoin(tmpThres,offset_coin,offset_player,depth["bids"]),depth["timestamp"]))
ybQue2.put((ybThresCoin(tmpThres,offset_coin,offset_player,depth["asks"]),depth["timestamp"]))
ybQue1.task_done()
def okcRun():
while True:
okc = okcQue1.get()
if okc == None:
okcQue1.task_done()
break
else:
while True:
depth = okc.getDepth()
if depth:
break
depth["asks"].reverse()
okcQue2.put((okcThresCoin(tmpThres,offset_coin,offset_player,depth["bids"]),"-99999999"))
okcQue2.put((okcThresCoin(tmpThres,offset_coin,offset_player,depth["asks"]),"-99999999"))
okcQue1.task_done()
def hbRun():
while True:
hb = hbQue1.get()
if hb == None:
hbQue1.task_done()
break
else:
while True:
depth = hb.getDepth()
if depth and depth["status"] == "ok":
break
# depth["tick"]["asks"].reverse()
hbQue2.put((hbThresCoin(tmpThres,offset_coin,offset_player,depth["tick"]["bids"]),depth["ts"]/1000))
hbQue2.put((hbThresCoin(tmpThres,offset_coin,offset_player,depth["tick"]["asks"]),depth["ts"]/1000))
hbQue1.task_done()
def cnbtcRun():
while True:
cnbtc = cnbtcQue1.get()
if cnbtc == None:
cnbtcQue1.task_done()
break
else:
while True:
depth = cnbtc.getDepth()
if depth:
break
depth["asks"].reverse()
cnbtcQue2.put((cnbtcThresCoin(tmpThres,offset_coin,offset_player,depth["bids"]),depth["timestamp"]))
cnbtcQue2.put((cnbtcThresCoin(tmpThres,offset_coin,offset_player,depth["asks"]),depth["timestamp"]))
cnbtcQue1.task_done()
#######tradeque1[0]:obj
#######tradeque1[1]:buy or sell
#######tradeque1[2]:amount
#######tradeque1[3]:price
#######tradeque1[4]:limit_price
def ybTradeRun():
while True:
yb_tuple = ybTradeQue1.get()
money = 0
if yb_tuple == None:
ybTradeQue1.task_done()
break
yb = yb_tuple[0]
amount = yb_tuple[2]
remain = amount
price = yb_tuple[3]
if amount==0:
ybTradeQue2.put((0.0,0.0))
ybTradeQue1.task_done()
continue
sell = True
if yb_tuple[1] == "buy":
sell = False
times = 10
while True:
order = None
if sell:
order = yb.sell(volume = amount,price=price-slippage)
else:
order = yb.buy(volume = amount, price = price + slippage)
if order!= None:
if order.has_key("error"):
time.sleep(delay_time)
print "yb",order
continue
id = order["id"]
wait_times = 3
while wait_times>0:
wait_times-=1
time.sleep(1)
while True:
order = yb.getOrder(id)
if order!=None:
if order.has_key("error"):
time.sleep(delay_time)
print "yb",order
continue
break
print "yb",order
if order["state"] == "done":
break
if order["state"] == "done":
if sell:
print "yunbi remain sell %f"%0.0
money+=amount*(price-slippage)
ybTradeQue2.put((0.0,money))
break
else:
print "yunbi remain buy 0.0"
money-=amount*(price+slippage)
ybTradeQue2.put((0.0,money))
break
else:
# order["state"] == "wait":
while True:
order = yb.deleteOrder(id)
print "yb",order
if order!=None:
if order.has_key("error"):
print "yb,delete",order
time.sleep(delay_time)
continue
break
while True:
order = yb.getOrder(id)
print "yb",order
if order!=None:
if order.has_key("error"):
time.sleep(delay_time)
print "yb",order
continue
if order["state"] != "wait":
break
else:
time.sleep(delay_time)
# break
#todo judge whether has been deleted
if sell:
money+=float(order["executed_volume"])*(price-slippage)
remain = float(order["remaining_volume"])
print "yunbi remain sell %f"%float(order["remaining_volume"])
else:
money-=float(order["executed_volume"])*(price+slippage)
remain = float(order["remaining_volume"])
print "yunbi remain buy %f"%float(order["remaining_volume"])
if remain <=0:
ybTradeQue2.put((0.0,money))
break
print "get_price"
while True:
depth = yb.getDepth()
if depth:
depth["asks"].reverse()
break
if sell:
price_now = ybThresCoin(remain*trade_ratio,offset_coin,offset_player,depth["bids"])[1]
print "price_now yb",price_now,yb_tuple[4]
if price_now<yb_tuple[4]:
ybTradeQue2.put((remain,money))
break
else:
price_now = ybThresCoin(remain*trade_ratio,offset_coin,offset_player,depth["asks"])[1]
print "price_now yb",price_now
if price_now>yb_tuple[4]:
ybTradeQue2.put((remain,money))
break
price = price_now
amount = remain
times-=1
ybTradeQue1.task_done()
def okcTradeRun():
while True:
okc_tuple = okcTradeQue1.get()
money = 0
if okc_tuple == None:
okcTradeQue1.task_done()
break
okc = okc_tuple[0]
amount = okc_tuple[2]
remain = amount
price = okc_tuple[3]
if amount==0:
okcTradeQue2.put((0.0,0.0))
okcTradeQue1.task_done()
continue
sell = True
if okc_tuple[1] == "buy":
sell = False
times = 10
while True:
order = None
if sell:
order = okc.sell(volume = amount,price=price-slippage)
else:
order = okc.buy(volume = amount, price = price+slippage)
if order!= None:
if order["result"] != True:
print "okc",order
time.sleep(delay_time)
continue
id = order["order_id"]
wait_times = 3
while wait_times>0:
wait_times-=1
time.sleep(1)
while True:
order = okc.getOrder(id)
if order!=None:
if order["result"] != True:
time.sleep(delay_time)
print "okc",order
continue
break
print "okc",order
if order["orders"][0]["status"] == 2:
break
if order["orders"][0]["status"] == 2:
if sell:
print "okcoin remain sell %f"%0.0
money+=amount*(price-slippage)
okcTradeQue2.put((0.0,money))
break
else:
print "okcoin remain buy 0.0"
money-=amount*(price+slippage)
okcTradeQue2.put((0.0,money))
break
else:
# order["state"] == "wait":
while True:
order = okc.deleteOrder(id)
if order!=None:
if order["result"] != True:
time.sleep(delay_time)
print "okc",order
if order["error_code"]==10050:
break
continue
break
while True:
order = okc.getOrder(id)
if order!=None:
if order["result"] != True:
time.sleep(delay_time)
print "okc",order
continue
if order["orders"][0]["status"] == 2 or order["orders"][0]["status"]== -1:
break
else:
time.sleep(delay_time)
#todo judge whether has been deleted
if sell:
money+=float(order["orders"][0]["deal_amount"])*(price-slippage)
remain = float(order["orders"][0]["amount"]) - float(order["orders"][0]["deal_amount"])
print "okcoin remain sell %f"%remain
else:
money-=float(order["orders"][0]["deal_amount"])*(price+slippage)
remain = float(order["orders"][0]["amount"])-float(order["orders"][0]["deal_amount"])
print "okcoin remain buy %f"%remain
if remain<=0:
okcTradeQue2.put((0.0,money))
break
print "get_price"
while True:
depth = okc.getDepth()
if depth:
depth["asks"].reverse()
break
if sell:
price_now = okcThresCoin(remain*trade_ratio,offset_coin,offset_player,depth["bids"])[1]
print "price_now okc",price_now,okc_tuple[4]
if price_now<okc_tuple[4]:
okcTradeQue2.put((remain,money))
break
else:
price_now = okcThresCoin(remain*trade_ratio,offset_coin,offset_player,depth["asks"])[1]
print "price_now okc",price_now
if price_now>okc_tuple[4]:
okcTradeQue2.put((remain,money))
break
price = price_now
amount = remain
times-=1
okcTradeQue1.task_done()
def hbTradeRun():
while True:
hb_tuple = hbTradeQue1.get()
money = 0
if hb_tuple == None:
hbTradeQue1.task_done()
break
hb = hb_tuple[0]
amount = hb_tuple[2]
remain = amount
price = hb_tuple[3]
if amount==0:
hbTradeQue2.put((0.0,0.0))
hbTradeQue1.task_done()
continue
sell = True
if hb_tuple[1] == "buy":
sell = False
times = 10
while True:
order = None
if sell:
order = hb.sell(volume = amount,price=price-slippage)
#todo
if order!=None and order["status"] == "ok":
order = hb.place_order(order["data"])
else:
#todo
order = hb.buy(volume = amount, price = price + slippage)
if order!=None and order["status"] == "ok":
order = hb.place_order(order["data"])
if order!= None:
if order["status"]!="ok":
print "hb",order
time.sleep(delay_time)
continue
id = order["data"]
wait_times = 3
while wait_times>0:
wait_times-=1
time.sleep(1)
while True:
order = hb.getOrder(id)
if order!=None:
if order["status"]!="ok":
time.sleep(delay_time)
print "hb",order
continue
break
print "hb",order
if order["data"]["state"] == "filled":
break
#todo
if order["data"]["state"] == "filled":
if sell:
print "huobi remain sell %f"%0.0
money+=amount*(price-slippage)
hbTradeQue2.put((0.0,money))
break
else:
print "huobi remain buy 0.0"
money-=amount*(price+slippage)
hbTradeQue2.put((0.0,money))
break
else:
# order["state"] == "wait":
while True:
print id
order = hb.deleteOrder(id)
if order!=None:
if order["status"]!="ok":
if order['status'] == 'error' and order['err-code'] == 'order-orderstate-error':
break
print "hb",order
continue
break
while True:
order = hb.getOrder(id)
if order!=None:
if order["status"]!="ok":
time.sleep(delay_time)
print "hb",order
continue
print "hb",order
if order["data"]["state"] == "canceled" or order["data"]["state"] == "filled" or order["data"]["state"] == "partial-canceled" or order["data"]["state"] == "partial-filled":
break
else:
time.sleep(delay_time)
#todo judge whether has been deleted
if sell:
money+=float(order["data"]["field-amount"])*(price-slippage)
remain = float(order["data"]["amount"])-float(order["data"]["field-amount"])
print "huobi remain sell %f"%remain
else:
money-=float(order["data"]["field-amount"])*(price+slippage)
remain = float(order["data"]["amount"])-float(order["data"]["field-amount"])
print "huobi remain buy %f"%remain
if remain<=0:
hbTradeQue2.put((0.0,money))
break
print "get_price"
while True:
depth = hb.getDepth()
if depth:
break
if sell:
price_now = hbThresCoin(remain*trade_ratio,offset_coin,offset_player,depth['tick']["bids"])[1]
print "price_now hb",price_now,hb_tuple[4]
if price_now<hb_tuple[4]:
hbTradeQue2.put((remain,money))
break
else:
price_now = hbThresCoin(remain*trade_ratio,offset_coin,offset_player,depth['tick']["asks"])[1]
print "price_now hb",price_now
if price_now>hb_tuple[4]:
hbTradeQue2.put((remain,money))
break
price = price_now
amount = remain
times-=1
hbTradeQue1.task_done()
def cnbtcTradeRun():
while True:
cnbtc_tuple = cnbtcTradeQue1.get()
if cnbtc_tuple == None:
cnbtcTradeQue1.task_done()
break
# print cnbtc_tuple
        money = 0
cnbtc = cnbtc_tuple[0]
amount = cnbtc_tuple[2]
remain = amount
price = cnbtc_tuple[3]
if amount==0:
cnbtcTradeQue2.put((0.0,0.0))
cnbtcTradeQue1.task_done()
continue
buy = True
if cnbtc_tuple[1] == "sell":
buy = False
times = 10
while True:
if buy:
order = cnbtc.buy(volume = amount,price=price+slippage)
else:
order = cnbtc.sell(volume=amount,price=price-slippage)
if order!= None:
if order.has_key("code") and order["code"] != 1000:
time.sleep(delay_time)
print "cnbtc",order
continue
id = order["id"]
wait_times = 5
while wait_times>0:
wait_times-=1
time.sleep(1)
while True:
order = cnbtc.getOrder(id)
if order!=None:
break
print "cnbtc",order
####2 is done
####
if order["status"] == 2:
break
if order["status"] == 2:
if buy:
print "cnbtc remain buy ",0.0
money-=amount*(price+slippage)
cnbtcTradeQue2.put((0.0,money))
else:
print "cnbtc remain sell 0.0"
money+=amount*(price-slippage)
cnbtcTradeQue2.put((0.0,money))
break
elif order["status"] == 0 or order["status"] == 3:
while True:
order = cnbtc.deleteOrder(id)
if order!=None:
if order.has_key("code") and order["code"] != 1000:
print json.dumps(order,ensure_ascii=False)
if order["code"] == 3001:
break
time.sleep(delay_time)
continue
break
while True:
order = cnbtc.getOrder(id)
if order!=None:
# print order
if order.has_key("code") and order["code"] != 1000:
print "cnbtc",order
time.sleep(delay_time)
continue
#todo judge whether is deleted
if order["status"]==1 or order["status"] == 2:
break
else:
time.sleep(delay_time)
print "cnbtc",order
if buy:
money-=float(order["trade_amount"])*(price+slippage)
remain = float(order["total_amount"]) - float(order["trade_amount"])
print "cnbtc remain buy %f/%f"%(remain,float(order["total_amount"]))
else:
money+=float(order["trade_amount"])*(price-slippage)
remain = float(order["total_amount"]) - float(order["trade_amount"])
print "cnbtc remain sell %f/%f"%(remain,float(order["total_amount"]))
if remain<=0:
cnbtcTradeQue2.put((0.0,money))
break
else:
if buy:
money-=float(order["trade_amount"])*(price+slippage)
remain = float(order["total_amount"]) - float(order["trade_amount"])
print "cnbtc remain buy %f/%f"%(remain,float(order["total_amount"]))
else:
money+=float(order["trade_amount"])*(price-slippage)
remain = float(order["total_amount"]) - float(order["trade_amount"])
print "cnbtc remain sell %f/%f"%(remain,float(order["total_amount"]))
if remain<=0:
cnbtcTradeQue2.put((0.0,money))
break
print "get_depth"
            while True:
                depth = cnbtc.getDepth()
                if depth:
                    depth["asks"].reverse()
                    break
if buy:
price_now = cnbtcThresCoin(remain*trade_ratio,offset_coin,offset_player,depth["asks"])[1]
                print "price_now cnbtc",price_now
if price_now>cnbtc_tuple[4]:
cnbtcTradeQue2.put((remain,money))
break
else:
price_now = cnbtcThresCoin(remain*trade_ratio,offset_coin,offset_player,depth["bids"])[1]
                print "price_now cnbtc",price_now
if price_now<cnbtc_tuple[4]:
cnbtcTradeQue2.put((remain,money))
break
price = price_now
amount = remain
times-=1
cnbtcTradeQue1.task_done()
def ybAccountRun():
while True:
yb = ybAccountQue1.get()
yb_cny = 0
yb_eth = 0
while True:
yb_acc = yb.get_account()
if yb_acc!= None:
if yb_acc.has_key("error"):
time.sleep(delay_time)
print yb_acc
continue
break
for acc in yb_acc["accounts"]:
if acc["currency"] == "cny":
yb_cny=float(acc["balance"])
elif acc["currency"] == "eth":
yb_eth= float(acc["balance"])
ybAccountQue1.task_done()
ybAccountQue2.put((yb_cny,yb_eth))
def cnbtcAccountRun():
while True:
cnbtc = cnbtcAccountQue1.get()
cnbtc_cny = 0
cnbtc_eth = 0
while True:
cnbtc_acc = cnbtc.get_account()
if cnbtc_acc!= None:
if cnbtc_acc.has_key("code") and cnbtc_acc["code"] != 1000:
time.sleep(delay_time)
print cnbtc_acc
continue
break
cnbtc_eth=cnbtc_acc["result"]["balance"]["ETH"]["amount"]
cnbtc_cny+=cnbtc_acc["result"]["balance"]["CNY"]["amount"]
cnbtcAccountQue1.task_done()
cnbtcAccountQue2.put((cnbtc_cny,cnbtc_eth))
def okcAccountRun():
while True:
time.sleep(delay_time)
okc = okcAccountQue1.get()
okc_cny = 0
okc_eth = 0
while True:
okc_acc = okc.get_account()
if okc_acc!= None:
if okc_acc["result"]!=True:
time.sleep(delay_time)
print "okc",okc_acc
continue
break
okc_eth = float(okc_acc["info"]["funds"]["free"]["eth"])
okc_cny = float(okc_acc["info"]["funds"]["free"]["cny"])
# print okc_acc
okcAccountQue1.task_done()
okcAccountQue2.put((okc_cny,okc_eth))
def hbAccountRun():
while True:
hb = hbAccountQue1.get()
hb_cny = 0
hb_eth = 0
while True:
hb_acc = hb.get_account()
if hb_acc!= None:
if hb_acc["status"]!="ok":
print hb_acc
continue
break
for mon in hb_acc["data"]["list"]:
if mon["currency"]=="cny" and mon["type"] == "trade":
hb_cny = float(mon["balance"])
if mon["currency"] == "eth" and mon["type"] == "trade":
hb_eth = float(mon["balance"])
hbAccountQue1.task_done()
hbAccountQue2.put((hb_cny,hb_eth))
import sys
import numpy.matlib
def setThreshold(cny_list,eth_list,brokerage_fee,cash_fee,thres_list_now,thres_list_origin,number,price,tick_coin,name_list):
trade_multiplier = numpy.ones([number,number])
thres_list = thres_list_origin.copy()
sell_times = eth_list/tick_coin
buy_times = cny_list/price/tick_coin
trade_broker = numpy.add.outer(brokerage_fee,brokerage_fee)*price*1.1
trade_cash = numpy.add.outer(cash_fee,numpy.zeros(cash_fee.shape[0]))*price*1.05
length = cny_list.shape[0]
print "buy_times",buy_times
print "sell_times",sell_times
tmp = buy_times.copy()
tmp[tmp>thres_money] = thres_money
tmp = (-tmp+thres_money)*slope
tmp[tmp>max_thres_limitation] = max_thres_limitation
offset = numpy.matlib.repmat(tmp,length,1)
tmp = buy_times.copy()
tmp[tmp>thres_money] = thres_money
tmp = (-tmp+thres_money)*5/thres_money
tmp[tmp>1] = 1
max_diff_thres_tmp = max(0,max_diff_thres)
tmp_mul = numpy.matlib.repmat(tmp.reshape(length,1),1,length)
trade_multiplier+=tmp_mul*trade_multiplier_ratio
tmp = numpy.matlib.repmat(tmp.reshape(length,1),1,length)
# print 123
offset_cash = -numpy.multiply(tmp,numpy.add.outer(cash_fee,numpy.zeros(cash_fee.shape[0]))*price*1.05)
# print tmp
# tmp = numpy.matlib.repmat(tmp.reshape(length,1),1,length)
# print tmp
tmp = sell_times.copy()
tmp[tmp>thres_coin] = thres_coin
tmp = (-tmp+thres_coin)*slope
tmp[tmp>max_thres_limitation] = max_thres_limitation
offset += numpy.matlib.repmat(tmp.reshape(length,1),1,length)
tmp = sell_times.copy()
tmp[tmp>thres_coin] = thres_coin
tmp = (-tmp+thres_coin)*5/thres_coin
tmp[tmp>1] = 1
tmp_mul = numpy.matlib.repmat(tmp,length,1)
trade_multiplier+=tmp_mul*trade_multiplier_ratio
tmp = numpy.matlib.repmat(tmp,length,1)
# print 123
offset_cash -= numpy.multiply(tmp,numpy.add.outer(cash_fee,numpy.zeros(cash_fee.shape[0]))*price*1.05)
# print offset
# buy_times<100
alertQue.put((buy_times,sell_times,number))
# offset[offset<max_diff_thres_tmp] = max_diff_thres_tmp
offset[offset>max_thres_limitation] = max_thres_limitation
print offset
# print offset
# print trade_broker,trade_cash,offset_cash
thres_list = trade_broker+trade_cash+offset_cash+max_diff_thres_tmp+offset+thres_list_origin
# print thres_list
thres_list[:,buy_times<=8] = 999999
thres_list[sell_times<=8,:] = 999999
buy_tmp = (thres_money-buy_times.copy())*slope
buy_tmp[buy_tmp<0] = 0
buy_tmp[buy_tmp>max_diff_thres_tmp] = max_diff_thres_tmp
buy_tmp_n_n = numpy.matlib.repmat(buy_tmp.reshape(length, 1), 1, length)
sell_tmp = (thres_coin-sell_times.copy())*slope
sell_tmp[sell_tmp<0] = 0
sell_tmp[sell_tmp>max_diff_thres_tmp] = max_diff_thres_tmp
sell_tmp_n_n = numpy.matlib.repmat(sell_tmp,length,1)
tmp_n_n = numpy.maximum(sell_tmp_n_n,buy_tmp_n_n)
# print thres_list
# print tmp_n_n
thres_list -= tmp_n_n
# thres_list -= sell_tmp
numpy.fill_diagonal(thres_list,999999)
numpy.fill_diagonal(trade_multiplier,0)
trade_multiplier[trade_multiplier>2] = 2
# print trade_multiplier
# print thres_list
# thres_list = numpy.maximum.reduce([thres_list,(trade_broker+trade_cash)])
# print buy_times<=1
# print thres_list
# result = thres_list_origin.copy()
# result[:number,:number] = thres_list
# thres_list[2,0] = 0
# thres_list[2,1] = 0
# thres_list[1,2] = 0
# thres_list[0,2] = 0
# print thres_list
return thres_list,trade_multiplier
def alert():
while True:
alertTuple = alertQue.get()
buy_times = alertTuple[0]
sell_times = alertTuple[1]
number = alertTuple[2]
for i in range(number):
if open_platform[i]:
if buy_times[i] <= 8:
if money_status[i] == 0 or money_status[i] == 1:
for tel in tel_list:
                            res = message.send_sms("Alert: account %s has run out of money" % name_list[i], tel)
print res
money_status[i] = 2
print >> sys.stderr, "%s has no money!!!!!!!!!!!!!!!!!!!!!" % name_list[i]
elif buy_times[i] < alert_thres_money:
if money_status[i] == 0:
for tel in tel_list:
                            message.send_sms("Alert: account %s is low on money, only %f buys left" % (name_list[i], buy_times[i]), tel)
money_status[i] = 1
print >> sys.stderr, "%s is low money!!!!!!!!!!!!!!!!!!!!!!" % name_list[i]
else:
money_status[i] = 0
if sell_times[i] <= 8:
if coin_status[i] == 0 or coin_status[i] == 1:
for tel in tel_list:
                            message.send_sms("Alert: account %s has run out of coins" % name_list[i], tel)
coin_status[i] = 2
print >> sys.stderr, "%s has no coin!!!!!!!!!!!!!!!!!!!!!!" % name_list[i]
elif sell_times[i] < alert_thres_coin:
if coin_status[i] == 0:
for tel in tel_list:
                            message.send_sms("Alert: account %s is low on coins, only %f sells left" % (name_list[i], sell_times[i]), tel)
coin_status[i] = 1
print >> sys.stderr, "%s is low coin!!!!!!!!!!!!!!!!!!!!!!" % name_list[i]
else:
coin_status[i] = 0
alertQue.task_done()
import HuoBi
import OKCoin
open_okc = open_platform[3]
open_yb = open_platform[1]
open_cnbtc = open_platform[0]
open_hb = open_platform[2]
if open_yb:
yb = YunBi.Yunbi(config,"LiChen")
print yb.get_account()
else:
yb = None
# import gzip
# from StringIO import StringIO
#
# buf = StringIO(acc["name"])
# f = gzip.GzipFile(fileobj=buf)
# print f.read()
# sss = acc["name"].encode("raw_unicode_escape").decode()
# print ss
# logging.info("YB Account "+json.dumps(yb.get_account(),ensure_ascii=False))
if open_cnbtc:
cnbtc = CNBTC.CNBTC(config)
print("cnbtc Account "+str(cnbtc.get_account()))
else:
cnbtc = None
if open_hb:
hb = HuoBi.HuoBi(config)
print("HB Account "+str(hb.get_account()))
else:
hb = None
if open_okc:
okc = OKCoin.OKCoin(config)
print("OKCoin Account "+str(okc.get_account()))
okc_thread = threading.Thread(target=okcRun)
okc_thread.setDaemon(True)
okc_thread.start()
else:
okc = None
if open_yb:
yb_thread = threading.Thread(target=ybRun)
yb_thread.setDaemon(True)
yb_thread.start()
if open_cnbtc:
cnbtc_thread = threading.Thread(target=cnbtcRun)
cnbtc_thread.setDaemon(True)
cnbtc_thread.start()
if open_hb:
hb_thread = threading.Thread(target=hbRun)
hb_thread.setDaemon(True)
hb_thread.start()
if open_okc:
okc_trade_thread = threading.Thread(target=okcTradeRun)
okc_trade_thread.setDaemon(True)
okc_trade_thread.start()
if open_yb:
yb_trade_thread = threading.Thread(target=ybTradeRun)
yb_trade_thread.setDaemon(True)
yb_trade_thread.start()
if open_cnbtc:
cnbtc_trade_thread = threading.Thread(target = cnbtcTradeRun)
cnbtc_trade_thread.setDaemon(True)
cnbtc_trade_thread.start()
if open_hb:
hb_trade_thread = threading.Thread(target=hbTradeRun)
hb_trade_thread.setDaemon(True)
hb_trade_thread.start()
if open_okc:
okc_account_thread = threading.Thread(target=okcAccountRun)
okc_account_thread.setDaemon(True)
okc_account_thread.start()
if open_yb:
yb_account_thread = threading.Thread(target=ybAccountRun)
yb_account_thread.setDaemon(True)
yb_account_thread.start()
if open_cnbtc:
cnbtc_account_thread = threading.Thread(target = cnbtcAccountRun)
cnbtc_account_thread.setDaemon(True)
cnbtc_account_thread.start()
if open_hb:
hb_account_thread = threading.Thread(target=hbAccountRun)
hb_account_thread.setDaemon(True)
hb_account_thread.start()
alertThread = threading.Thread(target=alert)
alertThread.setDaemon(True)
alertThread.start()
total_coin = 0
total_money = 0
tick = 0
last_total_eth = 0
last_total_cny = 0
first_total_eth = 0
first_total_cny = 0
first = True
platform_number = 4
name_list = ["CNBTC","YunBi","HuoBi","OKCoin"]
obj_list = [cnbtc,yb,hb,okc]
que1_list = [cnbtcQue1,ybQue1,hbQue1,okcQue1]
que2_list = [cnbtcQue2,ybQue2,hbQue2,okcQue2]
trade_que1_list = [cnbtcTradeQue1,ybTradeQue1,hbTradeQue1,okcTradeQue1]
trade_que2_list = [cnbtcTradeQue2,ybTradeQue2,hbTradeQue2,okcTradeQue2]
thres_list = numpy.array([[999999,spread_threshold_yb_buy_cnbtc_sell,spread_threshold_hb_buy_cnbtc_sell,spread_threshold_okc_buy_cnbtc_sell],
[spread_threshold_yb_sell_cnbtc_buy,999999,spread_threshold_yb_sell_hb_buy,spread_threshold_yb_sell_okc_buy],
                          [spread_threshold_hb_sell_cnbtc_buy,spread_threshold_yb_buy_hb_sell,999999,spread_threshold_okc_buy_hb_sell],
[spread_threshold_okc_sell_cnbtc_buy,spread_threshold_yb_buy_okc_sell,spread_threshold_okc_sell_hb_buy,999999]])
thres_list_origin = thres_list.copy()
has_ts = [True,True,True,False]
platform_list = []
for i in range(platform_number):
platform_list.append(
{
"name":name_list[i],
"obj":obj_list[i],
"que1":que1_list[i],
"que2":que2_list[i],
"trade_que1":trade_que1_list[i],
"trade_que2":trade_que2_list[i],
"depth_buy":None,
"depth_sell":None,
"has_ts":has_ts[i]
}
)
brokerage_fee = numpy.asarray([0.0004,0.001,0.002,0.001])
cash_fee = numpy.asarray([0.001,0.001,0.002,0.002])
while True:
print 'tick',tick
for platform in platform_list:
if platform["obj"]!=None:
platform["que1"].put(platform["obj"])
if open_yb:
ybAccountQue1.put(yb)
if open_okc:
okcAccountQue1.put(okc)
if open_cnbtc:
cnbtcAccountQue1.put(cnbtc)
if open_hb:
hbAccountQue1.put(hb)
for platform in platform_list:
if platform["obj"]!=None:
platform["depth_sell"] = platform["que2"].get()
platform["depth_buy"] = platform["que2"].get()
###depth[0] is amount
###depth[1] is price
###depth[2] is list
max_diff = -1000
trade_info = dict()
average_price = 0
open_num = 0
for i in range(platform_number):
if platform_list[i]["obj"]!=None:
open_num+=1
average_price+=platform_list[i]["depth_buy"][0][1]+platform_list[i]["depth_sell"][0][1]
average_price /= open_num*2.0/1.01
print 'average_price %f'%average_price
brokerage_trade = numpy.add.outer(brokerage_fee,brokerage_fee)*average_price
cash_trade = numpy.add.outer(cash_fee,numpy.zeros(cash_fee.shape[0]))*average_price
tick+=1
if tick % 1 == 0:
total_cny = 0
total_eth = 0
yb_cny = 0
yb_eth = 0
cnbtc_cny = 0
cnbtc_eth = 0
hb_cny = 0
hb_eth = 0
okc_cny = 0
okc_eth = 0
if open_yb:
yb_cny,yb_eth = ybAccountQue2.get()
print "yb_balance:%f %f"%(yb_eth,yb_cny)
if open_okc:
okc_cny,okc_eth = okcAccountQue2.get()
print "okc_balance:%f %f"%(okc_eth,okc_cny)
if open_hb:
hb_cny,hb_eth = hbAccountQue2.get()
print "hb balance:%f %f"%(hb_eth,hb_cny)
if open_cnbtc:
cnbtc_cny,cnbtc_eth = cnbtcAccountQue2.get()
print "cnbtc balance:%f %f"%(cnbtc_eth,cnbtc_cny)
total_cny = yb_cny+hb_cny+cnbtc_cny+okc_cny
total_eth = yb_eth+hb_eth+cnbtc_eth+okc_eth
balance.write("%s %f %f %f %f %f %f %f %f %f %f\n"%(time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())),
cnbtc_eth,cnbtc_cny,yb_eth,yb_cny,hb_eth,hb_cny,okc_eth,okc_cny,total_eth,total_cny))
history.write("%s "%time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())))
for i in range(platform_number):
if platform_list[i]["obj"]!=None:
history.write("%f %f "%(platform_list[i]["depth_buy"][0][1],platform_list[i]["depth_sell"][0][1]))
else:
history.write('0 0 ')
history.write('\n')
cny_list = numpy.asarray([cnbtc_cny,yb_cny,hb_cny,okc_cny])
eth_list = numpy.asarray([cnbtc_eth,yb_eth,hb_eth,okc_eth])
last_total_eth = total_eth
last_total_cny = total_cny
if first:
first_total_cny = total_cny
first_total_eth = total_eth
first = False
# history.write("%s %f %f %f %f %f %f\n" % (time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())),
# yb_depth[0][1], cnbtc_depth[0][1], yb_depth[0][1] - cnbtc_depth[0][1],
# yb_depth_minor[0][1], cnbtc_depth_minor[0][1],
# cnbtc_depth_minor[0][1] - yb_depth_minor[0][1]))
balance.flush()
history.flush()
if tick%1 == 0:
thres_list,trade_multiplier = setThreshold(cny_list,eth_list,brokerage_fee,cash_fee,thres_list,thres_list_origin,platform_number,average_price,maxTradeLimitation,name_list)
# print thres_list
i1 = None
j1 = None
for i in range(platform_number):
for j in range(platform_number):
if i!=j and platform_list[i]["obj"]!=None and platform_list[j]["obj"]!=None:
# if platform_list[i]["has_ts"] and platform_list[j]["has_ts"]:
# print i,j,int(platform_list[i]["depth_sell"][1]),int(platform_list[j]["depth_buy"][1])
# if (int(platform_list[i]["depth_sell"][1])-int(platform_list[j]["depth_buy"][1]))>5:
# continue
# print platform_list[i],platform_list[j]
if platform_list[i]["depth_sell"][0][1] - platform_list[j]["depth_buy"][0][1]>thres_list[i,j] and platform_list[i]["depth_sell"][0][1] - platform_list[j]["depth_buy"][0][1]-thres_list[i,j]>max_diff:
max_diff = platform_list[i]["depth_sell"][0][1]-platform_list[j]["depth_buy"][0][1]-thres_list[i,j]
trade_info["sell_depth"] = platform_list[i]["depth_sell"]
trade_info["buy_depth"] = platform_list[j]["depth_buy"]
trade_info["sell_name"] = platform_list[i]["name"]
trade_info["buy_name"] = platform_list[j]["name"]
trade_info["sell_que1"] = platform_list[i]["trade_que1"]
trade_info["sell_que2"] = platform_list[i]["trade_que2"]
trade_info["buy_que1"] = platform_list[j]["trade_que1"]
trade_info["buy_que2"] = platform_list[j]["trade_que2"]
trade_info["sell_obj"] = platform_list[i]["obj"]
trade_info["buy_obj"]=platform_list[j]["obj"]
i1 = i
j1 = j
if max_diff>0:
print "max_diff %f"%max_diff
buy_depth = trade_info["buy_depth"]
sell_depth = trade_info["sell_depth"]
# print("BuySide:%s timestamp:%s amount:\t%f price:\t%f"%(trade_info["buy_name"],buy_depth[1],buy_depth[0][0],buy_depth[0][1],str(buy_depth[0][2])))
# print('SellSide:%s timestamp:%s amount:\t%f price:\t%f'%(trade_info["sell_name"],sell_depth[1],sell_depth[0][0],sell_depth[0][1],str(sell_depth[0][2])))
# print 'BuySide:%s timestamp:%s amount:\t%f price:\t%f asks:%s'%(trade_info["buy_name"],buy_depth[1],buy_depth[0][0],buy_depth[0][1],str(buy_depth[0][2]))
# print 'SellSide:%s timestamp:%s amount:\t%f price:\t%f bids:%s'%(trade_info["sell_name"],sell_depth[1],sell_depth[0][0],sell_depth[0][1],str(sell_depth[0][2]))
amount = int(min(buy_depth[0][0],sell_depth[0][0])*1.0/trade_ratio*trade_multiplier[i1,j1]*100)/100.0
amount +=int((random.random()-0.5)*2*(random_range+0.01)*100)/100.0
if amount<0:
amount = 0
amount_buy=amount
amount_sell=amount_buy
limit = (buy_depth[0][1]+sell_depth[0][1])*1.0/2.0
if total_coin>0.0001:
amount_buy = max(amount_buy-total_coin,0)
elif total_coin<-0.0001:
amount_sell = max(amount_sell+total_coin,0)
print "%s buy %f coins at %f and limit %f" %(trade_info["buy_name"],amount_buy,buy_depth[0][1],limit-lowest_spread_threshold/2.0)
trade_info["buy_que1"].put((trade_info["buy_obj"],"buy",amount_buy,buy_depth[0][1],limit-lowest_spread_threshold/2.0))
print "%s sell %f coins at %f and limit %f" %(trade_info["sell_name"],amount_sell,sell_depth[0][1],limit+lowest_spread_threshold/2.0)
trade_info["sell_que1"].put((trade_info["sell_obj"],"sell",amount_sell,sell_depth[0][1],limit+lowest_spread_threshold/2.0))
sell_remain = trade_info["sell_que2"].get()
buy_remain = trade_info["buy_que2"].get()
# output.write('%f, %f, %f, %f\n'%(sell_remain[0]-amount_sell,amount_buy-buy_remain[0],buy_remain[1],sell_remain[1]))
# output.flush()
total_coin+=sell_remain[0]-amount_sell-buy_remain[0]+amount_buy
total_money+=sell_remain[1]+buy_remain[1]
print "%s_remain:%f\t %s_remain:%f,total_remain:%f"%(trade_info["buy_name"],buy_remain[0],trade_info["sell_name"],sell_remain[0],maxCoin)
print "coin:%f,money:%f"%(total_coin,total_money)
maxCoin-=max(sell_remain[0],buy_remain[0])
# if maxCoin<0:
# hbQue1.put(None)
# cnbtcQue1.put(None)
# hbTradeQue1.put(None)
# cnbtcTradeQue1.put(None)
# break
else:
# average_price = 0
for i in range(platform_number):
for j in range(platform_number):
if i!=j and platform_list[i]["obj"]!=None and platform_list[j]["obj"]!=None:
print "no trade %s sell:%f %s buy:%f diff:%15f thres:%20f diff_brokerage:%20f"%(platform_list[i]["name"],platform_list[i]["depth_sell"][0][1],platform_list[j]["name"],platform_list[j]["depth_buy"][0][1],
platform_list[i]["depth_sell"][0][1]-platform_list[j]["depth_buy"][0][1],thres_list[i,j],platform_list[i]["depth_sell"][0][1]-platform_list[j]["depth_buy"][0][1]-thres_list[i,j])
# average_price+=platform_list[i]["depth_buy"][0][1]+platform_list[i]["depth_sell"][0][1]
# average_price/=2.0*platform_number
print average_price
# print "no trade yb sell:%f cnbtc buy:%f diff:%f"%(yb_depth_sell[0][1],cnbtc_depth_buy[0][1],yb_depth_sell[0][1]-cnbtc_depth_buy[0][1])
# print "no trade hb sell:%f cnbtc buy:%f diff:%f"%(hb_depth_sell[0][1],cnbtc_depth_buy[0][1],hb_depth_sell[0][1]-cnbtc_depth_buy[0][1])
# print "no trade yb buy:%f cnbtc sell:%f diff:%f"%(yb_depth_buy[0][1],cnbtc_depth_sell[0][1],cnbtc_depth_sell[0][1]-yb_depth_buy[0][1])
# print "no trade hb buy:%f cnbtc sell:%f diff:%f"%(hb_depth_buy[0][1],cnbtc_depth_sell[0][1],cnbtc_depth_sell[0][1]-hb_depth_buy[0][1])
# print "no trade yb buy:%f hb sell:%f diff:%f"%(yb_depth_buy[0][1],hb_depth_sell[0][1],hb_depth_sell[0][1]-yb_depth_buy[0][1])
# print "no trade hb buy:%f yb sell:%f diff:%f"%(hb_depth_buy[0][1],yb_depth_sell[0][1],yb_depth_sell[0][1]-hb_depth_buy[0][1])
print "balance %f %f diff: %f %f %f first:%f %f"%(total_eth,total_cny, total_eth - last_total_eth,total_cny - last_total_cny,(total_eth - last_total_eth)*2000.0,
total_eth - first_total_eth,total_cny - first_total_cny)
print '\n'
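The live loop above prices fees with `numpy.add.outer` (the `brokerage_trade` and `cash_trade` lines). A minimal sketch of what that call produces, reusing the script's `brokerage_fee` vector with a hypothetical average price:

```python
import numpy as np

# Per-platform brokerage fees, as in the script: CNBTC, YunBi, HuoBi, OKCoin
brokerage_fee = np.asarray([0.0004, 0.001, 0.002, 0.001])
average_price = 2000.0  # hypothetical ETH/CNY mid-price

# add.outer builds a 4x4 matrix whose (i, j) entry is fee[i] + fee[j]:
# the combined brokerage cost of selling on platform i and buying on
# platform j, scaled to CNY per coin by the average price.
brokerage_trade = np.add.outer(brokerage_fee, brokerage_fee) * average_price

print(brokerage_trade.shape)   # (4, 4)
print(brokerage_trade[0, 1])   # (0.0004 + 0.001) * 2000, i.e. about 2.8
```

The matrix is symmetric, so the cost of the CNBTC-sell/YunBi-buy pair equals the YunBi-sell/CNBTC-buy pair; the asymmetry in the strategy comes only from the threshold matrix.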
| 45.74726 | 245 | 0.561558 | 8,463 | 66,791 | 4.174406 | 0.043956 | 0.009567 | 0.023211 | 0.02273 | 0.686906 | 0.635247 | 0.585286 | 0.543761 | 0.516021 | 0.479025 | 0 | 0.029737 | 0.307212 | 66,791 | 1,459 | 246 | 45.778615 | 0.733748 | 0.297959 | 0 | 0.502896 | 0 | 0.001931 | 0.082669 | 0.01334 | 0 | 0 | 0 | 0.000685 | 0 | 0 | null | null | 0 | 0.013514 | null | null | 0.083012 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab3ea1f161bcea5311f9766c4b23a51c645e6437 | 1,174 | py | Python | startuptweet.py | cudmore/startupnotify | 76b61b295ae7049e597fa05457a6696e624c4955 | [
"MIT"
] | null | null | null | startuptweet.py | cudmore/startupnotify | 76b61b295ae7049e597fa05457a6696e624c4955 | [
"MIT"
] | null | null | null | startuptweet.py | cudmore/startupnotify | 76b61b295ae7049e597fa05457a6696e624c4955 | [
"MIT"
] | null | null | null |
#!/usr/bin/python3
"""
Author: Robert Cudmore
Date: 20181013
Purpose: Send a Tweet with IP and MAC address of a Raspberry Pi
Install:
pip3 install tweepy
Usage:
python3 startuptweet.py 'this is my tweet'
"""
import tweepy
import sys
import socket
import subprocess
from uuid import getnode as get_mac
from datetime import datetime
# Create variables for each key, secret, token
from my_config import hash_tag
from my_config import consumer_key
from my_config import consumer_secret
from my_config import access_token
from my_config import access_token_secret
message = ''
if len( sys.argv ) > 1:
message = sys.argv[1]
# Set up OAuth and integrate with API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
#
thetime = datetime.now().strftime('%Y%m%d %H:%M:%S')
ip = subprocess.check_output(['hostname', '--all-ip-addresses'])
ip = ip.decode('utf-8').strip()
hostname = socket.gethostname()
mac = get_mac()
mac = hex(mac)
tweet = thetime + ' ' + hostname + ' ' + ip + ' ' + mac + ' ' + message + ' ' + hash_tag
print('tweeting:', tweet)
api.update_status(status=tweet)
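Everything in the status string except the IP (which shells out to `hostname --all-ip-addresses`) can be assembled offline: `get_mac()` is `uuid.getnode`, which returns the MAC address as an integer that `hex()` renders. A small self-contained sketch, with a placeholder hashtag standing in for the one loaded from `my_config`:

```python
import socket
from datetime import datetime
from uuid import getnode as get_mac

def build_status(message, hash_tag='#example'):
    """Assemble the same fields startuptweet.py tweets, minus the IP."""
    thetime = datetime.now().strftime('%Y%m%d %H:%M:%S')
    hostname = socket.gethostname()
    mac = hex(get_mac())  # e.g. '0xb827eb...' on a Raspberry Pi
    return ' '.join([thetime, hostname, mac, message, hash_tag])

print(build_status('rebooted'))
```

Note that `uuid.getnode` can fall back to a random 48-bit number when no hardware address is readable, so the MAC field is best-effort identification.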
| 22.576923 | 88 | 0.736797 | 176 | 1,174 | 4.784091 | 0.488636 | 0.035629 | 0.071259 | 0.106888 | 0.157957 | 0.068884 | 0 | 0 | 0 | 0 | 0 | 0.014042 | 0.150767 | 1,174 | 51 | 89 | 23.019608 | 0.830491 | 0.251278 | 0 | 0 | 0 | 0 | 0.068027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.423077 | 0 | 0.423077 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ab4054d837b64d6cdc4bc55d34e29e751e8dc8d5 | 4,427 | py | Python | private/scripts/recheck-invalid-handles.py | bansal-shubham/stopstalk-deployment | 6392eace490311be103292fdaff9ae215e4db7e6 | [
"MIT"
] | null | null | null | private/scripts/recheck-invalid-handles.py | bansal-shubham/stopstalk-deployment | 6392eace490311be103292fdaff9ae215e4db7e6 | [
"MIT"
] | null | null | null | private/scripts/recheck-invalid-handles.py | bansal-shubham/stopstalk-deployment | 6392eace490311be103292fdaff9ae215e4db7e6 | [
"MIT"
] | null | null | null |
"""
Copyright (c) 2015-2019 Raj Patel(raj454raj@gmail.com), StopStalk
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
import requests, bs4
import sites
# Constants to be used in case of request failures
SERVER_FAILURE = "SERVER_FAILURE"
NOT_FOUND = "NOT_FOUND"
OTHER_FAILURE = "OTHER_FAILURE"
REQUEST_FAILURES = (SERVER_FAILURE, NOT_FOUND, OTHER_FAILURE)
def get_invalid_handle_method(site):
site_class = getattr(sites, site.lower())
invalid_handle_method = getattr(site_class.Profile, "is_invalid_handle")
return invalid_handle_method
if __name__ == "__main__":
# `db`, `current` and the tables below are supplied by the web2py
# environment this script is meant to be run under.
ihtable = db.invalid_handle
atable = db.auth_user
cftable = db.custom_friend
stable = db.submission
nrtable = db.next_retrieval
mapping = {}
handle_to_row = {}
for site in current.SITES:
mapping[site] = get_invalid_handle_method(site)
handle_to_row[site] = {}
impossiblehandle = "thisreallycantbeahandle308"
assert(all(map(lambda site: get_invalid_handle_method(site)(impossiblehandle), current.SITES.keys())))
def populate_handle_to_row(table):
for row in db(table).select():
for site in current.SITES:
site_handle = row[site.lower() + "_handle"]
if site_handle:
if handle_to_row[site].has_key(site_handle):
handle_to_row[site][site_handle].append(row)
else:
handle_to_row[site][site_handle] = [row]
populate_handle_to_row(atable)
populate_handle_to_row(cftable)
# for site in current.SITES:
# print site
# for site_handle in handle_to_row[site]:
# print "\t", site_handle
# for row in handle_to_row[site][site_handle]:
# print "\t\t", row.first_name, row.last_name, row.stopstalk_handle
update_dict = {"stopstalk_rating": 0,
"stopstalk_prev_rating": 0,
"per_day": 0.0,
"per_day_change": "0.0",
"authentic": False}
final_delete_query = False
cnt = 0
for row in db(ihtable).iterselect():
# If not an invalid handle anymore
if handle_to_row[row.site].has_key(row.handle) and mapping[row.site](row.handle) is False:
cnt += 1
print row.site, row.handle, "deleted"
for row_obj in handle_to_row[row.site][row.handle]:
print "\t", row_obj.stopstalk_handle, "updated"
update_dict[row.site.lower() + "_lr"] = current.INITIAL_DATE
row_obj.update_record(**update_dict)
if "user_id" in row_obj:
# Custom user
db(nrtable.custom_user_id == row_obj.id).update(**{row.site.lower() + "_delay": 0})
else:
db(nrtable.user_id == row_obj.id).update(**{row.site.lower() + "_delay": 0})
final_delete_query |= ((stable.site == row.site) & \
(stable.stopstalk_handle == row_obj.stopstalk_handle))
del update_dict[row.site.lower() + "_lr"]
row.delete_record()
if cnt >= 10:
if final_delete_query:
db(final_delete_query).delete()
cnt = 0
final_delete_query = False
if final_delete_query:
db(final_delete_query).delete()
| 41.373832 | 106 | 0.639259 | 576 | 4,427 | 4.696181 | 0.324653 | 0.041405 | 0.048799 | 0.033272 | 0.180037 | 0.126802 | 0.05915 | 0.05915 | 0.05915 | 0.028096 | 0 | 0.008696 | 0.272645 | 4,427 | 106 | 107 | 41.764151 | 0.831366 | 0.081319 | 0 | 0.1875 | 0 | 0 | 0.070812 | 0.016235 | 0 | 0 | 0 | 0 | 0.015625 | 0 | null | null | 0 | 0.03125 | null | null | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab42719d063612a8629ae16074131965d4bb9222 | 1,397 | py | Python | src/ice_g2p/dictionaries.py | cadia-lvl/ice-g2p | 5a6cc55f45282e8a656ea0742e2f373189c9a912 | [
"Apache-2.0"
] | null | null | null | src/ice_g2p/dictionaries.py | cadia-lvl/ice-g2p | 5a6cc55f45282e8a656ea0742e2f373189c9a912 | [
"Apache-2.0"
] | null | null | null | src/ice_g2p/dictionaries.py | cadia-lvl/ice-g2p | 5a6cc55f45282e8a656ea0742e2f373189c9a912 | [
"Apache-2.0"
] | null | null | null |
import os, sys
DICTIONARY_FILE = os.path.join(sys.prefix, 'dictionaries/ice_pron_dict_standard_clear.csv')
HEAD_FILE = os.path.join(sys.prefix, 'data/head_map.csv')
MODIFIER_FILE = os.path.join(sys.prefix, 'data/modifier_map.csv')
VOWELS_FILE = os.path.join(sys.prefix, 'data/vowels_sampa.txt')
CONS_CLUSTERS_FILE = os.path.join(sys.prefix, 'data/cons_clusters_sampa.txt')
def read_map(filename):
with open(filename) as f:
file_content = f.read().splitlines()
dict_map = {}
for line in file_content:
arr = line.split('\t')
if len(arr) > 1:
values = arr[1:]
else:
values = []
key = arr[0]
dict_map[key] = values
return dict_map
def read_dictionary(filename):
with open(filename) as f:
file_content = f.read().splitlines()
pronDict = {}
for line in file_content:
word, transcr = line.split('\t')
pronDict[word] = transcr
return pronDict
def read_list(filename):
with open(filename) as f:
file_content = f.read().splitlines()
return file_content
def get_head_map():
return read_map(HEAD_FILE)
def get_modifier_map():
return read_map(MODIFIER_FILE)
def get_dictionary():
return read_dictionary(DICTIONARY_FILE)
def get_vowels():
return read_list(VOWELS_FILE)
def get_cons_clusters():
return read_list(CONS_CLUSTERS_FILE)
| 24.086207 | 91 | 0.670007 | 198 | 1,397 | 4.494949 | 0.262626 | 0.074157 | 0.05618 | 0.078652 | 0.370787 | 0.325843 | 0.3 | 0.178652 | 0.178652 | 0.178652 | 0 | 0.002727 | 0.212598 | 1,397 | 57 | 92 | 24.508772 | 0.806364 | 0 | 0 | 0.195122 | 0 | 0 | 0.097351 | 0.082319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.195122 | false | 0 | 0.02439 | 0.121951 | 0.414634 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
ab42c6179a77692e03a58e9d6335af55ec3cb46d | 385 | py | Python | tests/test_annotations_notebook.py | jeromedockes/pylabelbuddy | 26be00db679e94117968387aa7010dab2739b517 | [
"BSD-3-Clause"
] | null | null | null | tests/test_annotations_notebook.py | jeromedockes/pylabelbuddy | 26be00db679e94117968387aa7010dab2739b517 | [
"BSD-3-Clause"
] | null | null | null | tests/test_annotations_notebook.py | jeromedockes/pylabelbuddy | 26be00db679e94117968387aa7010dab2739b517 | [
"BSD-3-Clause"
] | null | null | null |
from pylabelbuddy import _annotations_notebook
def test_annotations_notebook(root, annotations_mock, dataset_mock):
nb = _annotations_notebook.AnnotationsNotebook(
root, annotations_mock, dataset_mock
)
nb.change_database()
assert nb.notebook.index(nb.notebook.select()) == 2
nb.go_to_annotations()
assert nb.notebook.index(nb.notebook.select()) == 0
| 32.083333 | 68 | 0.750649 | 46 | 385 | 6 | 0.456522 | 0.144928 | 0.137681 | 0.188406 | 0.5 | 0.5 | 0.268116 | 0 | 0 | 0 | 0 | 0.006135 | 0.153247 | 385 | 11 | 69 | 35 | 0.840491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab4374fa18ea29af4960ad145950b9d2672ecb83 | 1,257 | py | Python | middleware/run.py | natedogg484/react-flask-authentication | 5000685d35471b03f72e0b07dfbdbf6d5fc296d2 | [
"MIT"
] | null | null | null | middleware/run.py | natedogg484/react-flask-authentication | 5000685d35471b03f72e0b07dfbdbf6d5fc296d2 | [
"MIT"
] | 4 | 2021-03-09T21:12:06.000Z | 2022-02-26T19:17:31.000Z | middleware/run.py | natedogg484/vue-authentication | ab087e238d98606ffb73167cb9a16648812ac3e5 | [
"MIT"
] | null | null | null | from flask import Flask
from flask_cors import CORS
from flask_restful import Api
from flask_sqlalchemy import SQLAlchemy
from flask_jwt_extended import JWTManager
app = Flask(__name__)
CORS(app)
api = Api(app)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECRET_KEY'] = 'some-secret-string'
app.config['JWT_SECRET_KEY'] = 'jwt-secret-string'
app.config['JWT_BLACKLIST_ENABLED'] = True
app.config['JWT_BLACKLIST_TOKEN_CHECKS'] = ['access', 'refresh']
db = SQLAlchemy(app)
jwt = JWTManager(app)
@app.before_first_request
def create_tables():
db.create_all()
import models, resources, views
api.add_resource(resources.UserRegistration, '/registration')
api.add_resource(resources.UserLogin, '/login')
api.add_resource(resources.UserLogoutAccess, '/logout/access')
api.add_resource(resources.UserLogoutRefresh, '/logout/refresh')
api.add_resource(resources.TokenRefresh, '/token/refresh')
api.add_resource(resources.AllUsers, '/users')
api.add_resource(resources.SecretResource, '/secret')
@jwt.token_in_blacklist_loader
def check_if_token_in_blacklist(decrypted_token):
jti = decrypted_token['jti']
return models.RevokedTokenModel.is_jti_blacklisted(jti) | 27.933333 | 64 | 0.791567 | 165 | 1,257 | 5.769697 | 0.387879 | 0.044118 | 0.102941 | 0.169118 | 0.113445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085123 | 1,257 | 45 | 65 | 27.933333 | 0.827826 | 0 | 0 | 0 | 0 | 0 | 0.211447 | 0.079491 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.193548 | 0 | 0.290323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
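The `token_in_blacklist_loader` callback above delegates to a database model. A framework-free sketch of the same revocation check, with an in-memory set standing in for the `RevokedTokenModel` table (the JTI values below are hypothetical):

```python
# In-memory stand-in for the RevokedTokenModel table used in the app above.
revoked_jtis = {"abc123", "def456"}  # hypothetical revoked JWT IDs


def check_if_token_in_blacklist(decrypted_token):
    # flask_jwt_extended passes the decoded token payload; 'jti' is the
    # unique token identifier checked against the revocation store.
    return decrypted_token["jti"] in revoked_jtis


print(check_if_token_in_blacklist({"jti": "abc123"}))  # True
print(check_if_token_in_blacklist({"jti": "zzz999"}))  # False
```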
ab4d5adf0a4cf40d756ef93b4de1fbf8fed57093 | 1,953 | py | Python | example_project/test_messages/bbcode_tags.py | bastiedotorg/django-precise-bbcode | 567a8a7f104fb7f2c9d59f304791e53d2d8f4dea | [
"BSD-3-Clause"
] | 30 | 2015-01-02T13:43:56.000Z | 2021-02-08T18:43:09.000Z | example_project/test_messages/bbcode_tags.py | bastiedotorg/django-precise-bbcode | 567a8a7f104fb7f2c9d59f304791e53d2d8f4dea | [
"BSD-3-Clause"
] | 31 | 2015-01-16T00:25:19.000Z | 2021-12-11T16:40:03.000Z | example_project/test_messages/bbcode_tags.py | bastiedotorg/django-precise-bbcode | 567a8a7f104fb7f2c9d59f304791e53d2d8f4dea | [
"BSD-3-Clause"
] | 13 | 2015-07-16T23:25:10.000Z | 2020-08-23T20:12:24.000Z | import re
from precise_bbcode.bbcode.tag import BBCodeTag
from precise_bbcode.tag_pool import tag_pool
color_re = re.compile(r'^([a-z]+|#[0-9abcdefABCDEF]{3,6})$')
class SubTag(BBCodeTag):
name = 'sub'
def render(self, value, option=None, parent=None):
return '<sub>%s</sub>' % value
class PreTag(BBCodeTag):
name = 'pre'
render_embedded = False
def render(self, value, option=None, parent=None):
return '<pre>%s</pre>' % value
class SizeTag(BBCodeTag):
name = 'size'
definition_string = '[size={RANGE=4,7}]{TEXT}[/size]'
format_string = '<span style="font-size:{RANGE=4,7}px;">{TEXT}</span>'
class FruitTag(BBCodeTag):
name = 'fruit'
definition_string = '[fruit]{CHOICE=tomato,orange,apple}[/fruit]'
format_string = '<h5>{CHOICE=tomato,orange,apple}</h5>'
class PhoneLinkTag(BBCodeTag):
name = 'phone'
definition_string = '[phone]{PHONENUMBER}[/phone]'
format_string = '<a href="tel:{PHONENUMBER}">{PHONENUMBER}</a>'
def render(self, value, option=None, parent=None):
href = 'tel:{}'.format(value)
        return '<a href="{0}">{1}</a>'.format(href, value)
class StartsWithATag(BBCodeTag):
name = 'startswitha'
definition_string = '[startswitha]{STARTSWITH=a}[/startswitha]'
format_string = '<span>{STARTSWITH=a}</span>'
class RoundedBBCodeTag(BBCodeTag):
name = 'rounded'
class Options:
strip = False
def render(self, value, option=None, parent=None):
if option and re.search(color_re, option) is not None:
return '<div class="rounded" style="border-color:{};">{}</div>'.format(option, value)
return '<div class="rounded">{}</div>'.format(value)
tag_pool.register_tag(SubTag)
tag_pool.register_tag(PreTag)
tag_pool.register_tag(SizeTag)
tag_pool.register_tag(FruitTag)
tag_pool.register_tag(PhoneLinkTag)
tag_pool.register_tag(StartsWithATag)
tag_pool.register_tag(RoundedBBCodeTag)
| 27.125 | 97 | 0.673835 | 250 | 1,953 | 5.148 | 0.296 | 0.048951 | 0.081585 | 0.097902 | 0.135198 | 0.135198 | 0.135198 | 0.135198 | 0.105672 | 0 | 0 | 0.00733 | 0.161802 | 1,953 | 71 | 98 | 27.507042 | 0.778864 | 0 | 0 | 0.085106 | 0 | 0 | 0.262161 | 0.197645 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0 | 0.06383 | 0.042553 | 0.765957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
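`RoundedBBCodeTag.render` above gates its inline style on `color_re`. The same pattern can be exercised in isolation — it accepts a lowercase CSS color name or a `#` followed by 3-6 hex digits:

```python
import re

# Identical pattern to color_re in the file above.
color_re = re.compile(r'^([a-z]+|#[0-9abcdefABCDEF]{3,6})$')

for candidate in ("red", "#fff", "#1A2B3C", "Not-A-Color"):
    # search with ^...$ anchors behaves like a full-string match here.
    print(candidate, bool(color_re.search(candidate)))
```

Only the last candidate fails: uppercase letters and hyphens fall outside the `[a-z]+` branch.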
ab59b426727f7713efb93b6855597da219efc0be | 1,695 | py | Python | examples/multimedia/mmimdb_MFM.py | kapikantzari/MultiBench | 44ab6ea028682040a0c04de68239ce5cdf15123f | [
"MIT"
] | 148 | 2021-03-06T06:54:13.000Z | 2022-03-29T19:27:21.000Z | examples/multimedia/mmimdb_MFM.py | kapikantzari/MultiBench | 44ab6ea028682040a0c04de68239ce5cdf15123f | [
"MIT"
] | 10 | 2021-07-19T22:57:49.000Z | 2022-02-04T03:12:29.000Z | examples/multimedia/mmimdb_MFM.py | kapikantzari/MultiBench | 44ab6ea028682040a0c04de68239ce5cdf15123f | [
"MIT"
] | 18 | 2021-07-22T07:17:27.000Z | 2022-03-27T16:11:40.000Z | import torch
import sys
import os
sys.path.append(os.getcwd())
from utils.helper_modules import Sequential2
from unimodals.common_models import Linear, MLP, MaxOut_MLP
from datasets.imdb.get_data import get_dataloader
from fusions.common_fusions import Concat
from objective_functions.objectives_for_supervised_learning import MFM_objective
from objective_functions.recon import sigmloss1d
from training_structures.Supervised_Learning import train, test
filename = "best_mfm.pt"
traindata, validdata, testdata = get_dataloader(
"../video/multimodal_imdb.hdf5", "../video/mmimdb", vgg=True, batch_size=128)
classes = 23
n_latent = 512
fuse = Sequential2(Concat(), MLP(2*n_latent, n_latent, n_latent//2)).cuda()
encoders = [MaxOut_MLP(512, 512, 300, n_latent, False).cuda(
), MaxOut_MLP(512, 1024, 4096, n_latent, False).cuda()]
head = Linear(n_latent//2, classes).cuda()
decoders = [MLP(n_latent, 600, 300).cuda(), MLP(n_latent, 2048, 4096).cuda()]
intermediates = [MLP(n_latent, n_latent//2, n_latent//2).cuda(),
MLP(n_latent, n_latent//2, n_latent//2).cuda()]
recon_loss = MFM_objective(2.0, [sigmloss1d, sigmloss1d], [
1.0, 1.0], criterion=torch.nn.BCEWithLogitsLoss())
train(encoders, fuse, head, traindata, validdata, 1000, decoders+intermediates, early_stop=True, task="multilabel",
objective_args_dict={"decoders": decoders, "intermediates": intermediates}, save=filename, optimtype=torch.optim.AdamW, lr=5e-3, weight_decay=0.01, objective=recon_loss)
print("Testing:")
model = torch.load(filename).cuda()
test(model, testdata, method_name="MFM", dataset="imdb",
criterion=torch.nn.BCEWithLogitsLoss(), task="multilabel")
| 42.375 | 175 | 0.746313 | 236 | 1,695 | 5.177966 | 0.427966 | 0.085925 | 0.03928 | 0.045827 | 0.061375 | 0.0491 | 0.0491 | 0.0491 | 0.0491 | 0.0491 | 0 | 0.046791 | 0.117404 | 1,695 | 39 | 176 | 43.461538 | 0.770053 | 0 | 0 | 0 | 0 | 0 | 0.065487 | 0.017109 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.322581 | 0 | 0.322581 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ab5a9e198509b5652d8bbadee3e63897c02a6e94 | 1,461 | py | Python | subeana/migrations/0001_initial.py | izumin2000/izuminapp | 3464cebe1d98c85c2cd95c6fac779ec1f42ef930 | [
"MIT"
] | null | null | null | subeana/migrations/0001_initial.py | izumin2000/izuminapp | 3464cebe1d98c85c2cd95c6fac779ec1f42ef930 | [
"MIT"
] | null | null | null | subeana/migrations/0001_initial.py | izumin2000/izuminapp | 3464cebe1d98c85c2cd95c6fac779ec1f42ef930 | [
"MIT"
] | null | null | null | # Generated by Django 4.0.2 on 2022-06-01 04:43
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(default='', max_length=50)),
('isexist', models.BooleanField(default=True)),
],
),
migrations.CreateModel(
name='Song',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(default='', max_length=50)),
('lyrics', models.CharField(default='', max_length=5000)),
('url', models.CharField(blank=True, default='', max_length=50, null=True)),
('isexist', models.BooleanField(default=True)),
('channel', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='song_channel', to='subeana.channel')),
('imitate', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='song_imitate', to='subeana.song')),
],
),
]
| 40.583333 | 169 | 0.600274 | 155 | 1,461 | 5.541935 | 0.380645 | 0.037253 | 0.074505 | 0.076834 | 0.590221 | 0.470314 | 0.393481 | 0.393481 | 0.393481 | 0.393481 | 0 | 0.022873 | 0.251882 | 1,461 | 35 | 170 | 41.742857 | 0.763038 | 0.030801 | 0 | 0.428571 | 1 | 0 | 0.082037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ab5ff68a9733a875c0aeb19f8b19c6f3ac7260b4 | 3,108 | py | Python | vendor/packages/logilab-astng/__pkginfo__.py | jgmize/kitsune | 8f23727a9c7fcdd05afc86886f0134fb08d9a2f0 | [
"BSD-3-Clause"
] | 2 | 2019-08-19T17:08:47.000Z | 2019-10-05T11:37:02.000Z | vendor/packages/logilab-astng/__pkginfo__.py | jgmize/kitsune | 8f23727a9c7fcdd05afc86886f0134fb08d9a2f0 | [
"BSD-3-Clause"
] | null | null | null | vendor/packages/logilab-astng/__pkginfo__.py | jgmize/kitsune | 8f23727a9c7fcdd05afc86886f0134fb08d9a2f0 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2003-2010 LOGILAB S.A. (Paris, FRANCE).
# http://www.logilab.fr/ -- mailto:contact@logilab.fr
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License along with
# this program; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# copyright 2003-2010 LOGILAB S.A. (Paris, FRANCE), all rights reserved.
# contact http://www.logilab.fr/ -- mailto:contact@logilab.fr
# copyright 2003-2010 Sylvain Thenault, all rights reserved.
# contact mailto:thenault@gmail.com
#
# This file is part of logilab-astng.
#
# logilab-astng is free software: you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 2.1 of the License, or (at your
# option) any later version.
#
# logilab-astng is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
# for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with logilab-astng. If not, see <http://www.gnu.org/licenses/>.
"""
logilab.astng packaging information
"""
distname = 'logilab-astng'
modname = 'astng'
subpackage_of = 'logilab'
numversion = (0, 20, 1)
version = '.'.join([str(num) for num in numversion])
install_requires = ['logilab-common >= 0.49.0']
pyversions = ["2.3", "2.4", "2.5", '2.6']
license = 'LGPL'
author = 'Logilab'
author_email = 'python-projects@lists.logilab.org'
mailinglist = "mailto://%s" % author_email
web = "http://www.logilab.org/project/%s" % distname
ftp = "ftp://ftp.logilab.org/pub/%s" % modname
short_desc = "rebuild a new abstract syntax tree from Python's ast"
long_desc = """The aim of this module is to provide a common base \
representation of python source code for projects such as pychecker, pyreverse,
pylint... Well, actually the development of this library is essentially
governed by pylint's needs.
It rebuilds the tree generated by the compiler.ast [1] module (python <= 2.4)
or by the builtin _ast module (python >= 2.5) by recursively walking down the
AST and building an extended ast (let's call it astng ;). The new node classes
have additional methods and attributes for different usages.
Furthermore, astng builds partial trees by inspecting living objects."""
from os.path import join
include_dirs = [join('test', 'regrtest_data'),
join('test', 'data'), join('test', 'data2')]
| 40.363636 | 87 | 0.740991 | 483 | 3,108 | 4.749482 | 0.403727 | 0.013078 | 0.031386 | 0.049695 | 0.442895 | 0.442895 | 0.442895 | 0.418483 | 0.385353 | 0.385353 | 0 | 0.024166 | 0.161197 | 3,108 | 76 | 88 | 40.894737 | 0.855773 | 0.557593 | 0 | 0 | 0 | 0.038462 | 0.647412 | 0.045761 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
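The packaging metadata above derives `version` from the `numversion` tuple; the same construction in isolation:

```python
# Join the numeric version tuple into the dotted version string,
# exactly as in the __pkginfo__ module above.
numversion = (0, 20, 1)
version = '.'.join(str(num) for num in numversion)
print(version)  # 0.20.1
```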
ab617d4c442405b9219d3fa02f66e3a525d82e42 | 4,339 | py | Python | bioinformatics/analysis/rnaseq/prepare/split_gtf_by_type.py | bioShaun/omsCabinet | 741179a06cbd5200662cd03bc2e0115f4ad06917 | [
"MIT"
] | null | null | null | bioinformatics/analysis/rnaseq/prepare/split_gtf_by_type.py | bioShaun/omsCabinet | 741179a06cbd5200662cd03bc2e0115f4ad06917 | [
"MIT"
] | null | null | null | bioinformatics/analysis/rnaseq/prepare/split_gtf_by_type.py | bioShaun/omsCabinet | 741179a06cbd5200662cd03bc2e0115f4ad06917 | [
"MIT"
] | null | null | null | import fire
import gtfparse
from pathlib import Path
GENCODE_CATEGORY_MAP = {
'IG_C_gene': 'protein_coding',
'IG_D_gene': 'protein_coding',
'IG_J_gene': 'protein_coding',
'IG_V_gene': 'protein_coding',
'IG_LV_gene': 'protein_coding',
'TR_C_gene': 'protein_coding',
'TR_J_gene': 'protein_coding',
'TR_V_gene': 'protein_coding',
'TR_D_gene': 'protein_coding',
'TEC': 'protein_coding',
'nonsense_mediated_decay': 'protein_coding',
'non_stop_decay': 'protein_coding',
'retained_intron': 'lncRNA',
'protein_coding': 'protein_coding',
'ambiguous_orf': 'lncRNA',
'Mt_rRNA': 'ncRNA',
'Mt_tRNA': 'ncRNA',
'miRNA': 'ncRNA',
'misc_RNA': 'ncRNA',
'rRNA': 'ncRNA',
'snRNA': 'ncRNA',
'snoRNA': 'ncRNA',
'ribozyme': 'ncRNA',
'sRNA': 'ncRNA',
'scaRNA': 'ncRNA',
'scRNA': 'ncRNA',
'non_coding': 'lncRNA',
'known_ncrna': 'ncRNA',
'3prime_overlapping_ncrna': 'lncRNA',
'3prime_overlapping_ncRNA': 'lncRNA',
'vaultRNA': 'ncRNA',
'processed_transcript': 'lncRNA',
'lincRNA': 'lncRNA',
'macro_lncRNA': 'lncRNA',
'sense_intronic': 'lncRNA',
'sense_overlapping': 'lncRNA',
'antisense': 'lncRNA',
'antisense_RNA': 'lncRNA',
'bidirectional_promoter_lncRNA': 'lncRNA',
'IG_pseudogene': 'pseudogene',
'IG_D_pseudogene': 'pseudogene',
'IG_C_pseudogene': 'pseudogene',
'IG_J_pseudogene': 'pseudogene',
'IG_V_pseudogene': 'pseudogene',
'TR_V_pseudogene': 'pseudogene',
'TR_J_pseudogene': 'pseudogene',
'Mt_tRNA_pseudogene': 'pseudogene',
'tRNA_pseudogene': 'pseudogene',
'snoRNA_pseudogene': 'pseudogene',
'snRNA_pseudogene': 'pseudogene',
'scRNA_pseudogene': 'pseudogene',
'rRNA_pseudogene': 'pseudogene',
'misc_RNA_pseudogene': 'pseudogene',
'miRNA_pseudogene': 'pseudogene',
'pseudogene': 'pseudogene',
'processed_pseudogene': 'pseudogene',
'polymorphic_pseudogene': 'pseudogene',
'retrotransposed': 'pseudogene',
'transcribed_processed_pseudogene': 'pseudogene',
'transcribed_unprocessed_pseudogene': 'pseudogene',
'transcribed_unitary_pseudogene': 'pseudogene',
'translated_processed_pseudogene': 'pseudogene',
'translated_unprocessed_pseudogene': 'pseudogene',
'unitary_pseudogene': 'pseudogene',
'unprocessed_pseudogene': 'pseudogene',
'novel_lncRNA': 'lncRNA',
'TUCP': 'TUCP',
'lncRNA': 'lncRNA'
}
def simplify_gene_type(gene_type):
if gene_type in GENCODE_CATEGORY_MAP:
sim_type = GENCODE_CATEGORY_MAP.get(gene_type)
if sim_type == 'lncRNA':
sim_type = f'annotated_{sim_type}'
elif sim_type == 'ncRNA':
sim_type = f'other_{sim_type}'
else:
pass
return sim_type
else:
raise ValueError(gene_type)
def dfline2gtfline(dfline):
basic_inf = dfline[:8]
basic_inf.fillna('.', inplace=True)
basic_inf.frame = '.'
basic_inf_list = [str(each) for each in basic_inf]
basic_inf_line = '\t'.join(basic_inf_list)
attr_inf = dfline[8:]
attr_inf_list = []
for key, val in attr_inf.items():
if val:
attr_inf_list.append(f'{key} "{val}";')
attr_inf_line = ' '.join(attr_inf_list)
return f'{basic_inf_line}\t{attr_inf_line}\n'
def split_gtf(gtf, outdir, novel=False):
gtf_df = gtfparse.read_gtf(gtf)
if 'gene_type' in gtf_df.columns:
gtf_df.loc[:, 'gene_biotype'] = gtf_df.gene_type
gtf_df.drop('gene_type', axis=1, inplace=True)
elif 'gene_biotype' in gtf_df.columns:
pass
else:
gtf_df.loc[:, 'gene_biotype'] = 'protein_coding'
type_label = 'gene_biotype'
if novel:
gtf_df.loc[
:, type_label] = gtf_df.loc[:, type_label].map(
GENCODE_CATEGORY_MAP)
else:
gtf_df.loc[
:, type_label] = gtf_df.loc[:, type_label].map(
simplify_gene_type)
outdir = Path(outdir)
outdir.mkdir(parents=True, exist_ok=True)
for gt, grp in gtf_df.groupby(type_label):
gt_file = outdir / f'{gt}.gtf'
with open(gt_file, 'w') as gt_inf:
for idx in grp.index:
outline = dfline2gtfline(grp.loc[idx])
gt_inf.write(outline)
if __name__ == '__main__':
fire.Fire(split_gtf)
| 30.77305 | 59 | 0.63563 | 506 | 4,339 | 5.086957 | 0.264822 | 0.20202 | 0.059441 | 0.029526 | 0.043512 | 0.028749 | 0.028749 | 0.028749 | 0.028749 | 0.028749 | 0 | 0.002061 | 0.217101 | 4,339 | 140 | 60 | 30.992857 | 0.755667 | 0 | 0 | 0.079365 | 0 | 0 | 0.403319 | 0.078129 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | false | 0.015873 | 0.02381 | 0 | 0.063492 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
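`simplify_gene_type` above collapses GENCODE biotypes into a few categories and then prefixes the lncRNA/ncRNA buckets. A condensed sketch with a tiny subset of the map (entries copied from the file above):

```python
# Small excerpt of GENCODE_CATEGORY_MAP from the record above.
GENCODE_CATEGORY_MAP = {
    'miRNA': 'ncRNA',
    'lincRNA': 'lncRNA',
    'protein_coding': 'protein_coding',
}


def simplify_gene_type(gene_type):
    if gene_type not in GENCODE_CATEGORY_MAP:
        raise ValueError(gene_type)
    sim_type = GENCODE_CATEGORY_MAP[gene_type]
    # lncRNA/ncRNA buckets are prefixed to distinguish annotated/other classes.
    if sim_type == 'lncRNA':
        sim_type = f'annotated_{sim_type}'
    elif sim_type == 'ncRNA':
        sim_type = f'other_{sim_type}'
    return sim_type


print(simplify_gene_type('miRNA'))     # other_ncRNA
print(simplify_gene_type('lincRNA'))   # annotated_lncRNA
```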
ab6209870d287fc20132452f64da2ca39e9ab140 | 1,890 | py | Python | cities_light/tests/test_import.py | jsandovalc/django-cities-light | a1c6af08938b7b01d4e12555bd4cb5040905603d | [
"MIT"
] | null | null | null | cities_light/tests/test_import.py | jsandovalc/django-cities-light | a1c6af08938b7b01d4e12555bd4cb5040905603d | [
"MIT"
] | null | null | null | cities_light/tests/test_import.py | jsandovalc/django-cities-light | a1c6af08938b7b01d4e12555bd4cb5040905603d | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
import glob
import os
from dbdiff.fixture import Fixture
from .base import TestImportBase, FixtureDir
from ..settings import DATA_DIR
class TestImport(TestImportBase):
"""Load test."""
def test_single_city(self):
"""Load single city."""
fixture_dir = FixtureDir('import')
self.import_data(
fixture_dir,
'angouleme_country',
'angouleme_region',
'angouleme_subregion',
'angouleme_city',
'angouleme_translations'
)
Fixture(fixture_dir.get_file_path('angouleme.json')).assertNoDiff()
def test_single_city_zip(self):
"""Load single city."""
filelist = glob.glob(os.path.join(DATA_DIR, "angouleme_*.txt"))
for f in filelist:
os.remove(f)
fixture_dir = FixtureDir('import_zip')
self.import_data(
fixture_dir,
'angouleme_country',
'angouleme_region',
'angouleme_subregion',
'angouleme_city',
'angouleme_translations',
file_type="zip"
)
Fixture(FixtureDir('import').get_file_path('angouleme.json')).assertNoDiff()
def test_city_wrong_timezone(self):
"""Load single city with wrong timezone."""
fixture_dir = FixtureDir('import')
self.import_data(
fixture_dir,
'angouleme_country',
'angouleme_region',
'angouleme_subregion',
'angouleme_city_wtz',
'angouleme_translations'
)
Fixture(fixture_dir.get_file_path('angouleme_wtz.json')).assertNoDiff()
from ..loading import get_cities_model
city_model = get_cities_model('City')
cities = city_model.objects.all()
for city in cities:
print(city.get_timezone_info().zone)
| 29.53125 | 84 | 0.607937 | 192 | 1,890 | 5.671875 | 0.28125 | 0.073462 | 0.038567 | 0.049587 | 0.471074 | 0.471074 | 0.471074 | 0.471074 | 0.410468 | 0.323232 | 0 | 0 | 0.291005 | 1,890 | 63 | 85 | 30 | 0.812687 | 0.044444 | 0 | 0.428571 | 0 | 0 | 0.204036 | 0.036996 | 0 | 0 | 0 | 0 | 0.061224 | 1 | 0.061224 | false | 0 | 0.306122 | 0 | 0.387755 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
db4545f1a4dfa83103a39912add856795ff6a347 | 813 | py | Python | core/tests/test_base_time_range_controller.py | One-Green/plant-keeper-master | 67101a4cc7070d26fd1685631a710ae9a60fc5e8 | [
"CC0-1.0"
] | 2 | 2022-02-04T17:52:38.000Z | 2022-02-04T17:52:40.000Z | core/tests/test_base_time_range_controller.py | shanisma/plant-keeper | 3ca92ae2d55544a301e1398496a08a45cca6d15b | [
"CC0-1.0"
] | 4 | 2021-06-16T20:01:50.000Z | 2022-03-09T20:17:53.000Z | core/tests/test_base_time_range_controller.py | shanisma/plant-keeper | 3ca92ae2d55544a301e1398496a08a45cca6d15b | [
"CC0-1.0"
] | 1 | 2021-06-27T10:45:36.000Z | 2021-06-27T10:45:36.000Z | import os
import sys
from datetime import time
import unittest
sys.path.append(
os.path.dirname(
os.path.dirname(os.path.join("..", "..", "..", os.path.dirname("__file__")))
)
)
from core.controller import BaseTimeRangeController
class TestTimeRangeController(unittest.TestCase):
def test_time_range(self):
start_at = time(10, 0, 0)
end_at = time(12, 0, 0)
time_range_controller = BaseTimeRangeController(start_at, end_at)
time_now = time(11, 0, 0)
time_range_controller.set_current_time(time_now)
self.assertTrue(time_range_controller.action)
time_now = time(12, 15, 0)
time_range_controller.set_current_time(time_now)
self.assertFalse(time_range_controller.action)
if __name__ == "__main__":
unittest.main()
| 26.225806 | 84 | 0.688807 | 105 | 813 | 4.990476 | 0.371429 | 0.103053 | 0.181298 | 0.114504 | 0.274809 | 0.171756 | 0.171756 | 0.171756 | 0.171756 | 0.171756 | 0 | 0.026114 | 0.199262 | 813 | 30 | 85 | 27.1 | 0.778802 | 0 | 0 | 0.086957 | 0 | 0 | 0.02706 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.043478 | false | 0 | 0.217391 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
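The test above exercises `BaseTimeRangeController` from `core.controller`, which is not shown in this record. A framework-free sketch that mirrors the behavior the assertions require — `action` is true only while the current time falls inside `[start_at, end_at]`:

```python
from datetime import time


class TimeRangeController:
    """Minimal stand-in for BaseTimeRangeController (assumed semantics)."""

    def __init__(self, start_at, end_at):
        self.start_at = start_at
        self.end_at = end_at
        self.action = False

    def set_current_time(self, now):
        # datetime.time objects compare chronologically.
        self.action = self.start_at <= now <= self.end_at


ctl = TimeRangeController(time(10, 0), time(12, 0))
ctl.set_current_time(time(11, 0))
print(ctl.action)  # True
ctl.set_current_time(time(12, 15))
print(ctl.action)  # False
```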
db45d8bc1a8d49e33721d418ba06b6f827c48c0b | 4,098 | py | Python | generator_code/mp3_generator.py | jurganson/spingen | f8421a26356d0cd1d94a0692846791eb45fce6f5 | [
"MIT"
] | null | null | null | generator_code/mp3_generator.py | jurganson/spingen | f8421a26356d0cd1d94a0692846791eb45fce6f5 | [
"MIT"
] | null | null | null | generator_code/mp3_generator.py | jurganson/spingen | f8421a26356d0cd1d94a0692846791eb45fce6f5 | [
"MIT"
] | null | null | null | from gtts import gTTS as ttos
from pydub import AudioSegment
import os
def generate_mp3 (segments, fade_ms, speech_gain, comment_fade_ms, language = "en", output_file_name = "generated_program_sound") :
def apply_comments (exercise_audio, segment) :
new_exercise_audio = exercise_audio
for comment in segment.comments :
comment_audio = comment["comment_audio"]
comment_time_ms = comment["second"]*1000 + comment["minute"]*60000
part_01 = new_exercise_audio[comment_time_ms:comment_time_ms+len(comment_audio)+comment_fade_ms*2]
part_02 = part_01.fade(to_gain=-speech_gain, start=0, end=comment_fade_ms)
part_02 = part_02.fade(to_gain= speech_gain, start=comment_fade_ms+len(comment_audio), end=len(part_02))
part_02 = part_02.overlay(comment_audio, position=comment_fade_ms)
new_exercise_audio = new_exercise_audio[:comment_time_ms] + part_02 + new_exercise_audio[comment_time_ms+len(part_02):]
return new_exercise_audio
def append_segment (current_audio, next_segment, future_segment) :
segment_audio = next_segment.song_audio
segment_audio_faded = segment_audio - speech_gain
segment_text_audio = next_segment.text_audio
part_01 = segment_audio_faded[:len(segment_text_audio)] # First part of next segment
part_01 = current_audio[-len(segment_text_audio):].append(part_01, crossfade=len(segment_text_audio)).overlay(segment_text_audio) #
part_02 = part_01 + segment_audio_faded[len(part_01):len(part_01)+fade_ms].fade(to_gain=speech_gain, start=0, end=fade_ms) # Faded up to exercise gain
part_03 = apply_comments(segment_audio[len(part_02):len(part_02)+next_segment.get_exercise_duration_ms()+fade_ms], next_segment) # Apply comments to exercise
part_03 = part_02 + part_03.fade(to_gain=-speech_gain, start=len(part_03)-fade_ms, end=len(part_03))
part_04 = current_audio[:-len(segment_text_audio)] + part_03
if not future_segment :
part_05 = part_04.fade_out(fade_ms)
ttos(text="Program finished", lang=language, slow=False).save("output.mp3")
finish_voice = AudioSegment.from_file("output.mp3")
print("Cleaning up output.mp3")
os.remove("output.mp3")
return part_05 + finish_voice
else :
part_05 = part_04 + segment_audio_faded[len(part_03):len(part_03)+len(future_segment.text_audio)]
return part_05
print("Generating MP3 for segment 1 of " + str(len(segments)))
intro_segment_audio = segments[0].song_audio
intro_segment_text_audio = segments[0].text_audio
intro_segment_audio_faded = intro_segment_audio - speech_gain
part_01 = intro_segment_audio_faded[:fade_ms].fade_in(fade_ms)
part_02 = part_01 + intro_segment_audio_faded[len(part_01):len(part_01)+len(intro_segment_text_audio)].overlay(intro_segment_text_audio)
part_03 = part_02 + intro_segment_audio_faded[len(part_02):len(part_02)+fade_ms].fade(to_gain=speech_gain, start=0, end=fade_ms)
part_04 = apply_comments(intro_segment_audio[len(part_03):len(part_03)+segments[0].get_exercise_duration_ms()+fade_ms], segments[0])
part_04 = part_03 + part_04.fade(to_gain=-speech_gain, start=len(part_04)-fade_ms, end=len(part_04))
part_05 = part_04 + intro_segment_audio_faded[len(part_04):len(part_04)+len(segments[1].text_audio)]
program_audio = part_05
for i in range(1, len(segments)) :
print("Generating MP3 for segment " + str(i+1) + " of " + str(len(segments)))
if i+1 >= len(segments) :
program_audio = append_segment(program_audio, segments[i], None)
else :
program_audio = append_segment(program_audio, segments[i], segments[i+1])
if not os.path.exists("./output") :
os.mkdir("./output")
print("Exporting final mp3 ...")
file_path = "./output/"+output_file_name+".mp3"
program_audio.export(file_path, format="mp3")
print("Done! Exported mp3 to "+ file_path)
| 54.64 | 175 | 0.708394 | 607 | 4,098 | 4.410214 | 0.158155 | 0.052297 | 0.065745 | 0.035861 | 0.368323 | 0.283526 | 0.125887 | 0.125887 | 0.05678 | 0.030631 | 0 | 0.044245 | 0.183748 | 4,098 | 74 | 176 | 55.378378 | 0.756054 | 0.019522 | 0 | 0.034483 | 1 | 0 | 0.064291 | 0.005731 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0 | 0.051724 | 0 | 0.155172 | 0.086207 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
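`apply_comments` above converts each comment's `(minute, second)` position into a millisecond offset before slicing the audio. The same arithmetic in isolation:

```python
def comment_position_ms(minute, second):
    # Same conversion as apply_comments above: pydub slices audio
    # segments by millisecond offsets.
    return second * 1000 + minute * 60000


print(comment_position_ms(1, 30))  # 90000
```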
db4b0c2266fee61af6dfa6c16082c9e18c028c39 | 4,345 | py | Python | selfdrive/locationd/calibrationd.py | matthewklinko/openpilot | b0563a59684d0901f99abbb58ac1fbd729ded1f9 | [
"MIT"
] | 3 | 2019-06-29T08:32:58.000Z | 2019-09-06T15:58:03.000Z | selfdrive/locationd/calibrationd.py | matthewklinko/openpilot | b0563a59684d0901f99abbb58ac1fbd729ded1f9 | [
"MIT"
] | 1 | 2019-09-22T06:44:10.000Z | 2019-09-22T06:44:10.000Z | selfdrive/locationd/calibrationd.py | matthewklinko/openpilot | b0563a59684d0901f99abbb58ac1fbd729ded1f9 | [
"MIT"
] | 2 | 2020-03-18T02:56:23.000Z | 2020-05-12T16:22:31.000Z | #!/usr/bin/env python
import os
import copy
import json
import numpy as np
import selfdrive.messaging as messaging
from selfdrive.locationd.calibration_helpers import Calibration
from selfdrive.swaglog import cloudlog
from common.params import Params
from common.transformations.model import model_height
from common.transformations.camera import view_frame_from_device_frame, get_view_frame_from_road_frame, \
eon_intrinsics, get_calib_from_vp, H, W
MPH_TO_MS = 0.44704
MIN_SPEED_FILTER = 15 * MPH_TO_MS
MAX_YAW_RATE_FILTER = np.radians(2) # per second
INPUTS_NEEDED = 300   # allow to update VP every so many frames
INPUTS_WANTED = 600   # We want a little bit more than we need for stability
WRITE_CYCLES = 400    # write every 400 cycles
VP_INIT = np.array([W/2., H/2.])

# These validity corners were chosen by looking at 1000
# and taking most extreme cases with some margin.
VP_VALIDITY_CORNERS = np.array([[W//2 - 150, 280], [W//2 + 150, 540]])
DEBUG = os.getenv("DEBUG") is not None


def is_calibration_valid(vp):
    return vp[0] > VP_VALIDITY_CORNERS[0, 0] and vp[0] < VP_VALIDITY_CORNERS[1, 0] and \
           vp[1] > VP_VALIDITY_CORNERS[0, 1] and vp[1] < VP_VALIDITY_CORNERS[1, 1]


class Calibrator(object):
    def __init__(self, param_put=False):
        self.param_put = param_put
        self.vp = copy.copy(VP_INIT)
        self.vps = []
        self.cal_status = Calibration.UNCALIBRATED
        self.write_counter = 0
        self.just_calibrated = False
        self.params = Params()
        calibration_params = self.params.get("CalibrationParams")
        if calibration_params:
            try:
                calibration_params = json.loads(calibration_params)
                self.vp = np.array(calibration_params["vanishing_point"])
                self.vps = np.tile(self.vp, (calibration_params['valid_points'], 1)).tolist()
                self.update_status()
            except Exception:
                cloudlog.exception("CalibrationParams file found but error encountered")

    def update_status(self):
        start_status = self.cal_status
        if len(self.vps) < INPUTS_NEEDED:
            self.cal_status = Calibration.UNCALIBRATED
        else:
            self.cal_status = Calibration.CALIBRATED if is_calibration_valid(self.vp) else Calibration.INVALID
        end_status = self.cal_status

        self.just_calibrated = False
        if start_status == Calibration.UNCALIBRATED and end_status == Calibration.CALIBRATED:
            self.just_calibrated = True

    def handle_cam_odom(self, log):
        trans, rot = log.trans, log.rot
        if np.linalg.norm(trans) > MIN_SPEED_FILTER and abs(rot[2]) < MAX_YAW_RATE_FILTER:
            new_vp = eon_intrinsics.dot(view_frame_from_device_frame.dot(trans))
            new_vp = new_vp[:2] / new_vp[2]
            self.vps.append(new_vp)
            self.vps = self.vps[-INPUTS_WANTED:]
            self.vp = np.mean(self.vps, axis=0)
            self.update_status()
            self.write_counter += 1
            if self.param_put and (self.write_counter % WRITE_CYCLES == 0 or self.just_calibrated):
                cal_params = {"vanishing_point": list(self.vp),
                              "valid_points": len(self.vps)}
                self.params.put("CalibrationParams", json.dumps(cal_params))
            return new_vp
        else:
            return None

    def send_data(self, pm):
        calib = get_calib_from_vp(self.vp)
        extrinsic_matrix = get_view_frame_from_road_frame(0, calib[1], calib[2], model_height)

        cal_send = messaging.new_message()
        cal_send.init('liveCalibration')
        cal_send.liveCalibration.calStatus = self.cal_status
        cal_send.liveCalibration.calPerc = min(len(self.vps) * 100 // INPUTS_NEEDED, 100)
        cal_send.liveCalibration.extrinsicMatrix = [float(x) for x in extrinsic_matrix.flatten()]
        cal_send.liveCalibration.rpyCalib = [float(x) for x in calib]
        pm.send('liveCalibration', cal_send)


def calibrationd_thread(sm=None, pm=None):
    if sm is None:
        sm = messaging.SubMaster(['cameraOdometry'])
    if pm is None:
        pm = messaging.PubMaster(['liveCalibration'])

    calibrator = Calibrator(param_put=True)

    # buffer with all the messages that still need to be input into the kalman
    while 1:
        sm.update()
        new_vp = calibrator.handle_cam_odom(sm['cameraOdometry'])
        if DEBUG and new_vp is not None:
            print 'got new vp', new_vp
        calibrator.send_data(pm)


def main(sm=None, pm=None):
    calibrationd_thread(sm, pm)


if __name__ == "__main__":
    main()
# File: TSFpy/debug/sample_fibonacci.py (repo: ooblog/TSF1KEV, license: MIT)
#! /usr/bin/env python
# -*- coding: UTF-8 -*-
from __future__ import division,print_function,absolute_import,unicode_literals
import sys
import os
os.chdir(sys.path[0])
sys.path.append('/mnt/sda2/github/TSF1KEV/TSFpy')
from TSF_io import *
#from TSF_Forth import *
from TSF_shuffle import *
from TSF_match import *
from TSF_calc import *
from TSF_time import *
TSF_Forth_init(TSF_io_argvs(),[TSF_shuffle_Initwords,TSF_match_Initwords,TSF_calc_Initwords,TSF_time_Initwords])
TSF_Forth_setTSF("TSF_Tab-Separated-Forth:",
                 "\t".join(["UTF-8","#TSF_encoding","200","#TSF_calcPR","N-Fibonacci:","#TSF_this","0","#TSF_fin."]),
                 TSF_style="T")
TSF_Forth_setTSF("N-Fibonacci:",
                 "\t".join(["TSF_argvs:","#TSF_cloneargvs","TSF_argvs:","#TSF_lenthe","[0]Z[Fibcount:0]~[TSF_argvs:0]","#TSF_calcDC","Fibcount:","0","#TSF_pokethe","Fibonacci:","#TSF_this"]),
                 TSF_style="T")
TSF_Forth_setTSF("Fibonacci:",
                 "\t".join(["[Fibcount:1]Z1~[Fibcount:1]","#TSF_calcDC","((2&(([0]+3)*[0]+2)^)/((2&(2*[0]+2)^)-(2&([0]+1)^)-1)\\1)#(2&([0]+1)^)","#TSF_calcDC","1","#TSF_echoN","[Fibcount:1]+1","#TSF_calcDC","Fibcount:","1","#TSF_pokethe","Fibjump:","[Fibcount:0]-([Fibcount:1]+1)o0~1","#TSF_calcDC","#TSF_peekthe","#TSF_this"]),
                 TSF_style="T")
TSF_Forth_setTSF("Fibcount:",
                 "\t".join(["20","-1"]),
                 TSF_style="T")
TSF_Forth_setTSF("Fibjump:",
                 "\t".join(["Fibonacci:","#exit"]),
                 TSF_style="T")
TSF_Forth_addfin(TSF_io_argvs())
TSF_Forth_argvsleftcut(TSF_io_argvs(),1)
TSF_Forth_run()
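For orientation, the number sequence this TSF program prints can be sketched in plain Python. The iterative version below is an illustration added here, not part of the original file; the count of 20 mirrors the value stored under `Fibcount:` above (whether the TSF calc expression starts the sequence at 0 or 1 depends on its closed-form trick, so treat the exact offset as an assumption).

```python
def fib_terms(n):
    # First n Fibonacci terms, computed iteratively.
    terms = []
    a, b = 0, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fib_terms(20))
# fib_terms(10) -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```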
# File: Tomboy2Evernote.py (repo: rguptan/Tomboy2Evernote, license: MIT)
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import re
import sys, getopt
import glob
import os


def process_files(inputdir, outputdir):
    os.chdir(inputdir)
    enex_notes = []
    output_filename = 'Tomboy2Evernote.enex'
    i = 0
    for file in glob.glob("*.note"):
        note_file_path = inputdir + '/' + file
        note_body = open(note_file_path, 'r').read()
        title = get_title(note_body)
        html_note_body = get_html_body(note_body)
        created_date = tomboy_to_enex_date(get_created_date(note_body))
        updated_date = tomboy_to_enex_date(get_updated_date(note_body))
        enex_notes.append(make_enex(title, html_note_body, created_date, updated_date))
        i += 1
    multi_enex_body = make_multi_enex(enex_notes)
    save_to_file(outputdir, output_filename, multi_enex_body)
    print "Exported notes count: " + `i`
    print "Evernote file location: " + outputdir + "/" + output_filename


def get_title(note_body):
    title_regex = re.compile("<title>(.+?)</title>")
    matches = title_regex.search(note_body)
    if matches:
        return matches.group(1)
    else:
        return "No Title"


def get_created_date(note_body):
    created_date_regex = re.compile("<create-date>(.+?)</create-date>")
    matches = created_date_regex.search(note_body)
    if matches:
        return matches.group(1)
    else:
        return "No Created Date"


def get_updated_date(note_body):
    updated_date_regex = re.compile("<last-change-date>(.+?)</last-change-date>")
    matches = updated_date_regex.search(note_body)
    if matches:
        return matches.group(1)
    else:
        return "No Updated Date"


def tomboy_to_enex_date(tomboy_date):
    return re.sub(r"^([0-9]{4})-([0-9]{2})-([0-9]{2})T([0-9]{2}):([0-9]{2}):([0-9]{2}).*", r"\1\2\3T\4\5\6Z",
                  tomboy_date)
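As an aside (this snippet is an added illustration, not part of the original script), the substitution above collapses a Tomboy ISO-8601 timestamp into Evernote's compact `YYYYMMDDTHHMMSSZ` form; the sample timestamp below is made up:

```python
import re

def tomboy_to_enex_date(tomboy_date):
    # Same substitution as in the script: keep the date/time digits, drop the rest.
    return re.sub(r"^([0-9]{4})-([0-9]{2})-([0-9]{2})T([0-9]{2}):([0-9]{2}):([0-9]{2}).*",
                  r"\1\2\3T\4\5\6Z", tomboy_date)

print(tomboy_to_enex_date("2015-04-12T15:34:31.110000+02:00"))  # 20150412T153431Z
```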
def get_html_body(note_body):
    new_line = '¬BR¬'
    xml_tag = r"<(\/?)[a-zA-Z0-9_\-:]+>"
    start_xml_tag = r"<[a-zA-Z0-9_\-:]+>"
    # make note body a one liner
    note_body = note_body.replace('\n', new_line)
    # get content
    note_body = re.sub(r".*<note-content.+?>(.+?)</note-content>.*", r"\1", note_body)
    # strip title until new_line or start_xml_tag
    note_body = re.sub(r"^(.+?)(" + start_xml_tag + "|" + new_line + ")", r"\2", note_body)
    # strip first two new lines, even if prefixed with an xml tag
    tag = re.match("^" + start_xml_tag, note_body)
    if tag != None:
        note_body = re.sub(r"^" + start_xml_tag, r"", note_body)
    note_body = re.sub(r"^(" + new_line + "){1,2}", r"", note_body)
    if tag != None:
        note_body = tag.group(0) + note_body
    # links
    note_body = re.sub(r"<link:internal>(.+?)</link:internal>", r"\1", note_body)
    note_body = re.sub(r"<link:broken>(.+?)</link:broken>", r"\1", note_body)
    p = re.compile(r"(<link:url>(.+?)</link:url>)")
    for m in p.finditer(note_body):
        if re.search(r"^([a-zA-Z0-9\._%+\-]+@(?:[a-zA-Z0-9\-]+\.)+[a-zA-Z]{2,10}|https?://.+)$", m.group(2)):
            note_body = note_body.replace(m.group(1), '<a href="' + m.group(2) + '">' + m.group(2) + "</a>")
        else:
            note_body = note_body.replace(m.group(1), m.group(2))
    # lists
    note_body = re.sub(r"<(\/?)list>", r"<\1ul>", note_body)
    note_body = re.sub(r'<list-item dir="ltr">', r"<li>", note_body)
    note_body = re.sub(r"<(\/?)list-item>", r"<\1li>", note_body)
    # highlight
    note_body = re.sub(r"<highlight>(.+?)</highlight>", r'<span style="background:yellow">\1</span>', note_body)
    # font size
    note_body = re.sub(r"<size:small>(.+?)</size:small>", r'<span style="font-size:small">\1</span>', note_body)
    note_body = re.sub(r"<size:large>(.+?)</size:large>", r'<span style="font-size:large">\1</span>', note_body)
    note_body = re.sub(r"<size:huge>(.+?)</size:huge>", r'<span style="font-size:xx-large">\1</span>', note_body)
    # text style
    note_body = re.sub(r"<(\/?)monospace>", r"<\1code>", note_body)
    note_body = re.sub(r"<(\/?)bold>", r"<\1b>", note_body)
    note_body = re.sub(r"<(\/?)italic>", r"<\1i>", note_body)
    note_body = re.sub(r"<(\/?)strikethrough>", r"<\1strike>", note_body)
    # indentation
    note_body = re.sub(r"\t", r" ", note_body)
    while re.search(new_line + " ", note_body) != None:
        note_body = re.sub("(" + new_line + " *) ", r"\1 ", note_body)
    # set new lines
    note_body = note_body.replace(new_line, '<br/>\n')
    return note_body


def make_enex(title, body, created_date, updated_date):
    return '''<note><title>''' + title + '''</title><content><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">
<en-note style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">
''' + body + '''
</en-note>]]></content><created>''' + created_date + '''</created><updated>''' + updated_date + '''</updated></note>'''


def make_multi_enex(multi_enex_body):
    return '''<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE en-export SYSTEM "http://xml.evernote.com/pub/evernote-export2.dtd">
<en-export export-date="20150412T153431Z" application="Evernote/Windows" version="5.x">
''' + ''.join(multi_enex_body) + '''</en-export>'''


def save_to_file(outputdir, filename, body):
    if not os.path.exists(outputdir):
        os.makedirs(outputdir)
    text_file = open(outputdir + '/' + filename, "w")
    text_file.write(body)
    text_file.close()


def get_help_line():
    print 'Usage: ', sys.argv[0], ' -i <inputdir> -o <outputdir>'


def get_input_params(argv):
    inputdir = ''
    outputdir = ''
    printhelpline = 0
    try:
        opts, args = getopt.getopt(argv, "hi:o:", ["idir=", "odir="])
    except getopt.GetoptError:
        exit_with_error()
    for opt, arg in opts:
        if opt == '-h':
            get_help_line()
            sys.exit()
        elif opt in ("-i", "--idir"):
            inputdir = arg
        elif opt in ("-o", "--odir"):
            outputdir = arg
    if inputdir == "":
        print "Error: Missing input folder"
        printhelpline = 1
    if outputdir == "":
        print "Error: Missing output folder"
        printhelpline = 1
    if printhelpline == 1:
        exit_with_error()
    return (inputdir, outputdir)


def exit_with_error():
    get_help_line()
    sys.exit(2)


def main(argv):
    inputdir, outputdir = get_input_params(argv)
    process_files(inputdir, outputdir)


if __name__ == "__main__":
    main(sys.argv[1:])
# File: engine/tree.py (repo: dougsc/gp, license: Apache-2.0)
import random
from pprint import pformat
from copy import deepcopy
from utils.logger import GP_Logger
from terminal_set import TerminalSet


class Tree:
    @classmethod
    def log(cls):
        return GP_Logger.logger(cls.__name__)

    def __init__(self):
        self.terminal_set = None
        self.function_set = None
        self.function_bias = None
        self.max_depth = None
        self.tree = None

    def clone(self, clone_tree):
        assert clone_tree.tree != None, 'trying to clone from an uninitialized tree'
        self.terminal_set = clone_tree.terminal_set
        self.function_set = clone_tree.function_set
        self.function_bias = clone_tree.function_bias
        self.max_depth = clone_tree.max_depth
        self.tree = deepcopy(clone_tree.tree)

    def mutate(self, clone_tree):
        self.clone(clone_tree)
        mutation_node = random.choice(self.get_node_list())
        self.log().debug('mutating at node %s - current depth: %d' % (mutation_node['node']['name'], mutation_node['depth']))
        self._create_new_node(mutation_node['depth'], mutation_node)
        self.log().debug('node mutated to %s' % (mutation_node['node']['name']))
        self._add_layer(mutation_node)

    def subtree_crossover(self, clone_tree, other_tree):
        self.clone(clone_tree)
        this_crossover_node = random.choice(self.get_node_list())
        other_crossover_node = random.choice(other_tree.get_node_list())
        self.log().debug('x-over node 1: %s (depth: %d), node 2: %s (depth: %d)' % (this_crossover_node['node']['name'],
                                                                                   this_crossover_node['depth'],
                                                                                   other_crossover_node['node']['name'],
                                                                                   other_crossover_node['depth']))
        this_crossover_node['node'] = deepcopy(other_crossover_node['node'])
        this_crossover_node['lower_nodes'] = deepcopy(other_crossover_node['lower_nodes'])
        self.recalculate_depth(this_crossover_node['lower_nodes'], this_crossover_node['depth'] + 1)

    def create(self, terminal_set=[], function_set=[], function_bias=1, max_depth=3):
        self.terminal_set = terminal_set
        self.function_set = function_set
        self.function_bias = function_bias
        self.max_depth = max_depth
        self.tree = {}
        self._create_new_node(1, self.tree)
        self._add_layer(current_node=self.tree)

    def _create_new_node(self, depth, node):
        node_set = []
        if depth == 1:
            node_set = self.function_set
        elif depth >= self.max_depth:
            node_set = self.terminal_set
        else:
            node_set = self.function_set * self.function_bias + self.terminal_set
        chosen_node = random.choice(node_set)
        if not chosen_node.has_key('name'):
            # this needs converting to a named node
            value = chosen_node['function'](*chosen_node['args'])
            chosen_node = TerminalSet.terminal_value(value)
        node['node'] = chosen_node
        node['lower_nodes'] = []
        node['depth'] = depth

    def _add_layer(self, current_node):
        new_node_count = current_node['node'].has_key('arity') and current_node['node']['arity'] or 0
        self.log().debug('adding %d nodes below %s - current depth = %d' % (new_node_count, current_node['node']['name'], current_node['depth']))
        for i in range(new_node_count):
            new_node = {}
            self._create_new_node(current_node['depth'] + 1, new_node)
            current_node['lower_nodes'].append(new_node)
        map(lambda x: self._add_layer(x), current_node['lower_nodes'])

    def dump(self):
        print 'Tree: \n%s' % pformat(self.tree)

    def _dump_structure(self, from_nodes, to_nodes):
        for from_node in from_nodes:
            new_node = {'name': from_node['node']['name'], 'lower_nodes': []}
            to_nodes.append(new_node)
            self._dump_structure(from_node['lower_nodes'], new_node['lower_nodes'])

    def dump_structure(self):
        structure = {'name': self.tree['node']['name'], 'lower_nodes': []}
        self._dump_structure(self.tree['lower_nodes'], structure['lower_nodes'])
        return structure

    def execute_node(self, node, function_lookup, args=None):
        assert node.has_key('value') or node.has_key('function'), 'node does not have a function or value'
        value = None
        if node.has_key('value'):
            value = node['value']
        else:
            if args == None:
                args = node['args']
            if isinstance(node['function'], str):
                value = function_lookup.get_func(node['function'])(*args)
            else:
                value = node['function'](*args)
        return value

    def get_lower_node_value(self, function_lookup, lower_node):
        if lower_node['node']['node_type'] == 'terminal':
            return self.execute_node(lower_node['node'], function_lookup)
        else:
            result_list = map(lambda x: self.get_lower_node_value(function_lookup, x), lower_node['lower_nodes'])
            return self.execute_node(lower_node['node'], function_lookup, result_list)

    def execute(self, function_lookup):
        result_list = map(lambda x: self.get_lower_node_value(function_lookup, x), self.tree['lower_nodes'])
        return self.execute_node(self.tree['node'], function_lookup, result_list)

    def iterate_tree(self, nodes, callback):
        for node in nodes:
            callback(node)
            self.iterate_tree(node['lower_nodes'], callback)

    def recalculate_depth(self, nodes, depth):
        for node in nodes:
            node['depth'] = depth
            self.recalculate_depth(node['lower_nodes'], depth + 1)

    def _get_node_list(self, nodes, node_list):
        for node in nodes:
            node_list.append(node)
            self._get_node_list(node['lower_nodes'], node_list)

    def get_node_list(self):
        node_list = []
        self._get_node_list(self.tree['lower_nodes'], node_list)
        return node_list

    def _simplify(self, node, function_lookup):
        if len(node['lower_nodes']) == 0:
            return
        # count the children that are terminal values (len() so we compare an int to arity)
        terminal_value_count = len(filter(lambda x: TerminalSet.is_terminal_value(x['node']), node['lower_nodes']))
        if node['node']['arity'] == terminal_value_count:
            value = self.execute_node(node, function_lookup, args=map(lambda x: x['node']['value'], node['lower_nodes']))
            self.log().debug('Replacing existing node: %s' % pformat(node['node']))
            node['lower_nodes'] = []
            node['node'] = TerminalSet.terminal_value(value)
            self.log().debug(' -- with node: %s' % pformat(node['node']))
            self.is_simplified = False
        else:
            map(lambda x: self._simplify(x, function_lookup), node['lower_nodes'])

    def simplify(self, function_lookup):
        self.is_simplified = False
        simplify_loop_count = 1
        while not self.is_simplified:
            self.log().debug('Simplification %d' % (simplify_loop_count))
            self.is_simplified = True
            self._simplify(self.tree, function_lookup)
            simplify_loop_count += 1
# File: lib/googlecloudsdk/third_party/apis/serviceuser/v1/serviceuser_v1_client.py
# (repo: kustodian/google-cloud-sdk, license: Apache-2.0)
"""Generated client library for serviceuser version v1."""
# NOTE: This file is autogenerated and should not be edited by hand.
from apitools.base.py import base_api
from googlecloudsdk.third_party.apis.serviceuser.v1 import serviceuser_v1_messages as messages


class ServiceuserV1(base_api.BaseApiClient):
    """Generated client library for service serviceuser version v1."""

    MESSAGES_MODULE = messages
    BASE_URL = u'https://serviceuser.googleapis.com/'

    _PACKAGE = u'serviceuser'
    _SCOPES = [u'https://www.googleapis.com/auth/cloud-platform', u'https://www.googleapis.com/auth/cloud-platform.read-only', u'https://www.googleapis.com/auth/service.management']
    _VERSION = u'v1'
    _CLIENT_ID = '1042881264118.apps.googleusercontent.com'
    _CLIENT_SECRET = 'x_Tw5K8nnjoRAqULM9PFAC2b'
    _USER_AGENT = 'x_Tw5K8nnjoRAqULM9PFAC2b'
    _CLIENT_CLASS_NAME = u'ServiceuserV1'
    _URL_VERSION = u'v1'
    _API_KEY = None

    def __init__(self, url='', credentials=None,
                 get_credentials=True, http=None, model=None,
                 log_request=False, log_response=False,
                 credentials_args=None, default_global_params=None,
                 additional_http_headers=None, response_encoding=None):
        """Create a new serviceuser handle."""
        url = url or self.BASE_URL
        super(ServiceuserV1, self).__init__(
            url, credentials=credentials,
            get_credentials=get_credentials, http=http, model=model,
            log_request=log_request, log_response=log_response,
            credentials_args=credentials_args,
            default_global_params=default_global_params,
            additional_http_headers=additional_http_headers,
            response_encoding=response_encoding)
        self.projects_services = self.ProjectsServicesService(self)
        self.projects = self.ProjectsService(self)
        self.services = self.ServicesService(self)

    class ProjectsServicesService(base_api.BaseApiService):
        """Service class for the projects_services resource."""

        _NAME = u'projects_services'

        def __init__(self, client):
            super(ServiceuserV1.ProjectsServicesService, self).__init__(client)
            self._upload_configs = {
            }

        def Disable(self, request, global_params=None):
            r"""Disable a service so it can no longer be used with a.
            project. This prevents unintended usage that may cause unexpected billing
            charges or security leaks.

            Operation<response: google.protobuf.Empty>

            Args:
              request: (ServiceuserProjectsServicesDisableRequest) input message
              global_params: (StandardQueryParameters, default: None) global arguments
            Returns:
              (Operation) The response message.
            """
            config = self.GetMethodConfig('Disable')
            return self._RunMethod(
                config, request, global_params=global_params)

        Disable.method_config = lambda: base_api.ApiMethodInfo(
            http_method=u'POST',
            method_id=u'serviceuser.projects.services.disable',
            ordered_params=[u'projectsId', u'servicesId'],
            path_params=[u'projectsId', u'servicesId'],
            query_params=[],
            relative_path=u'v1/projects/{projectsId}/services/{servicesId}:disable',
            request_field=u'disableServiceRequest',
            request_type_name=u'ServiceuserProjectsServicesDisableRequest',
            response_type_name=u'Operation',
            supports_download=False,
        )

        def Enable(self, request, global_params=None):
            r"""Enable a service so it can be used with a project.
            See [Cloud Auth Guide](https://cloud.google.com/docs/authentication) for
            more information.

            Operation<response: google.protobuf.Empty>

            Args:
              request: (ServiceuserProjectsServicesEnableRequest) input message
              global_params: (StandardQueryParameters, default: None) global arguments
            Returns:
              (Operation) The response message.
            """
            config = self.GetMethodConfig('Enable')
            return self._RunMethod(
                config, request, global_params=global_params)

        Enable.method_config = lambda: base_api.ApiMethodInfo(
            http_method=u'POST',
            method_id=u'serviceuser.projects.services.enable',
            ordered_params=[u'projectsId', u'servicesId'],
            path_params=[u'projectsId', u'servicesId'],
            query_params=[],
            relative_path=u'v1/projects/{projectsId}/services/{servicesId}:enable',
            request_field=u'enableServiceRequest',
            request_type_name=u'ServiceuserProjectsServicesEnableRequest',
            response_type_name=u'Operation',
            supports_download=False,
        )

        def List(self, request, global_params=None):
            r"""List enabled services for the specified consumer.

            Args:
              request: (ServiceuserProjectsServicesListRequest) input message
              global_params: (StandardQueryParameters, default: None) global arguments
            Returns:
              (ListEnabledServicesResponse) The response message.
            """
            config = self.GetMethodConfig('List')
            return self._RunMethod(
                config, request, global_params=global_params)

        List.method_config = lambda: base_api.ApiMethodInfo(
            http_method=u'GET',
            method_id=u'serviceuser.projects.services.list',
            ordered_params=[u'projectsId'],
            path_params=[u'projectsId'],
            query_params=[u'pageSize', u'pageToken'],
            relative_path=u'v1/projects/{projectsId}/services',
            request_field='',
            request_type_name=u'ServiceuserProjectsServicesListRequest',
            response_type_name=u'ListEnabledServicesResponse',
            supports_download=False,
        )

    class ProjectsService(base_api.BaseApiService):
        """Service class for the projects resource."""

        _NAME = u'projects'

        def __init__(self, client):
            super(ServiceuserV1.ProjectsService, self).__init__(client)
            self._upload_configs = {
            }

    class ServicesService(base_api.BaseApiService):
        """Service class for the services resource."""

        _NAME = u'services'

        def __init__(self, client):
            super(ServiceuserV1.ServicesService, self).__init__(client)
            self._upload_configs = {
            }

        def Search(self, request, global_params=None):
            r"""Search available services.

            When no filter is specified, returns all accessible services. For
            authenticated users, also returns all services the calling user has
            "servicemanagement.services.bind" permission for.

            Args:
              request: (ServiceuserServicesSearchRequest) input message
              global_params: (StandardQueryParameters, default: None) global arguments
            Returns:
              (SearchServicesResponse) The response message.
            """
            config = self.GetMethodConfig('Search')
            return self._RunMethod(
                config, request, global_params=global_params)

        Search.method_config = lambda: base_api.ApiMethodInfo(
            http_method=u'GET',
            method_id=u'serviceuser.services.search',
            ordered_params=[],
            path_params=[],
            query_params=[u'pageSize', u'pageToken'],
            relative_path=u'v1/services:search',
            request_field='',
            request_type_name=u'ServiceuserServicesSearchRequest',
            response_type_name=u'SearchServicesResponse',
            supports_download=False,
        )
# File: examples/rrbot_p2p_low_energy.py (repo: abcamiletto/urdf2optcontrol, license: MIT)
#!/usr/bin/env python3
from urdf2optcontrol import optimizer
from matplotlib import pyplot as plt
import pathlib

# URDF options
urdf_path = pathlib.Path(__file__).parent.joinpath('urdf', 'rrbot.urdf').absolute()
root = "link1"
end = "link3"

in_cond = [0] * 4


def my_cost_func(q, qd, qdd, ee_pos, u, t):
    return u.T @ u


def my_constraint1(q, qd, qdd, ee_pos, u, t):
    return [-30, -30], u, [30, 30]


def my_constraint2(q, qd, qdd, ee_pos, u, t):
    return [-4, -4], qd, [4, 4]


my_constraints = [my_constraint1, my_constraint2]


def my_final_constraint1(q, qd, qdd, ee_pos, u):
    return [3.14 / 2, 0], q, [3.14 / 2, 0]


def my_final_constraint2(q, qd, qdd, ee_pos, u):
    return [0, 0], qd, [0, 0]


my_final_constraints = [my_final_constraint1, my_final_constraint2]

time_horizon = 2.0
steps = 40

# Load the urdf and calculate the differential equations
optimizer.load_robot(urdf_path, root, end)

# Loading the problem conditions
optimizer.load_problem(
    my_cost_func,
    steps,
    in_cond,
    time_horizon=time_horizon,
    constraints=my_constraints,
    final_constraints=my_final_constraints,
    max_iter=500
)

# Solving the non linear problem
res = optimizer.solve()
print('u = ', res['u'][0])
print('q = ', res['q'][0])

# Print the results!
fig = optimizer.plot_result(show=True)
# File: SocketServer/apps/django-db-pool-master/dbpool/db/backends/postgresql_psycopg2/base.py
# (repo: fqc/SocketSample_Mina_Socket, license: MIT)
"""
Pooled PostgreSQL database backend for Django.
Requires psycopg 2: http://initd.org/projects/psycopg2
"""
from django import get_version as get_django_version
from django.db.backends.postgresql_psycopg2.base import \
DatabaseWrapper as OriginalDatabaseWrapper
from django.db.backends.signals import connection_created
from threading import Lock
import logging
import sys
try:
import psycopg2 as Database
import psycopg2.extensions
except ImportError, e:
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
logger = logging.getLogger(__name__)
class PooledConnection():
'''
Thin wrapper around a psycopg2 connection to handle connection pooling.
'''
def __init__(self, pool, test_query=None):
self._pool = pool
# If passed a test query we'll run it to ensure the connection is available
if test_query:
self._wrapped_connection = None
num_attempts = 0
while self._wrapped_connection is None:
num_attempts += 1;
c = pool.getconn()
try:
c.cursor().execute(test_query)
except Database.Error:
pool.putconn(c, close=True)
if num_attempts > self._pool.maxconn:
logger.error("Unable to check out connection from pool %s" % self._pool)
raise;
else:
logger.info("Closing dead connection from pool %s" % self._pool,
exc_info=sys.exc_info())
else:
if not c.autocommit:
c.rollback()
self._wrapped_connection = c
else:
self._wrapped_connection = pool.getconn()
logger.debug("Checked out connection %s from pool %s" % (self._wrapped_connection, self._pool))
def close(self):
'''
Override to return the connection to the pool rather than closing it.
'''
if self._wrapped_connection and self._pool:
logger.debug("Returning connection %s to pool %s" % (self._wrapped_connection, self._pool))
self._pool.putconn(self._wrapped_connection)
self._wrapped_connection = None
def __getattr__(self, attr):
'''
All other calls proxy through to the "real" connection
'''
return getattr(self._wrapped_connection, attr)
'''
This holds our connection pool instances (for each alias in settings.DATABASES that
uses our PooledDatabaseWrapper.)
'''
connection_pools = {}
connection_pools_lock = Lock()
pool_config_defaults = {
'MIN_CONNS': None,
'MAX_CONNS': 1,
'TEST_ON_BORROW': False,
'TEST_ON_BORROW_QUERY': 'SELECT 1'
}
def _set_up_pool_config(self):
'''
Helper to configure pool options during DatabaseWrapper initialization.
'''
self._max_conns = self.settings_dict['OPTIONS'].get('MAX_CONNS', pool_config_defaults['MAX_CONNS'])
self._min_conns = self.settings_dict['OPTIONS'].get('MIN_CONNS', self._max_conns)
self._test_on_borrow = self.settings_dict["OPTIONS"].get('TEST_ON_BORROW',
pool_config_defaults['TEST_ON_BORROW'])
if self._test_on_borrow:
self._test_on_borrow_query = self.settings_dict["OPTIONS"].get('TEST_ON_BORROW_QUERY',
pool_config_defaults['TEST_ON_BORROW_QUERY'])
else:
self._test_on_borrow_query = None
def _create_connection_pool(self, conn_params):
'''
Helper to initialize the connection pool.
'''
connection_pools_lock.acquire()
try:
# One more read to prevent a read/write race condition (We do this
# here to avoid the overhead of locking each time we get a connection.)
if (self.alias not in connection_pools or
connection_pools[self.alias]['settings'] != self.settings_dict):
logger.info("Creating connection pool for db alias %s" % self.alias)
logger.info(" using MIN_CONNS = %s, MAX_CONNS = %s, TEST_ON_BORROW = %s" % (self._min_conns,
self._max_conns,
self._test_on_borrow))
from psycopg2 import pool
connection_pools[self.alias] = {
'pool': pool.ThreadedConnectionPool(self._min_conns, self._max_conns, **conn_params),
'settings': dict(self.settings_dict),
}
finally:
connection_pools_lock.release()
'''
Simple Postgres pooled connection that uses psycopg2's built-in ThreadedConnectionPool
implementation. In Django, use this by specifying MAX_CONNS and (optionally) MIN_CONNS
in the OPTIONS dictionary for the given db entry in settings.DATABASES.
MAX_CONNS should be equal to the maximum number of threads your app server is configured
for. For example, if you are running Gunicorn or Apache/mod_wsgi (in a multiple *process*
configuration) MAX_CONNS should be set to 1, since you'll have a dedicated python
interpreter per process/worker. If you're running Apache/mod_wsgi in a multiple *thread*
configuration set MAX_CONNS to the number of threads you have configured for each process.
By default MIN_CONNS will be set to MAX_CONNS, which prevents connections from being closed.
If your load is spiky and you want to recycle connections, set MIN_CONNS to something lower
than MAX_CONNS. I suggest it should be no lower than your 95th percentile concurrency for
your app server.
If you wish to validate connections on each check out, specify TEST_ON_BORROW (set to True)
in the OPTIONS dictionary for the given db entry. You can also provide an optional
TEST_ON_BORROW_QUERY, which is "SELECT 1" by default.
'''
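For concreteness, the OPTIONS keys described above slot into an ordinary settings.DATABASES entry. A minimal sketch — the ENGINE path, database credentials, and numeric values here are illustrative placeholders, not part of the original module:

```python
# Hypothetical settings.py fragment; adjust ENGINE to wherever this
# pooled backend module actually lives in your project.
DATABASES = {
    'default': {
        'ENGINE': 'mypooling.postgresql_psycopg2',  # placeholder path
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
        'OPTIONS': {
            'MAX_CONNS': 4,          # one per app-server thread
            'MIN_CONNS': 2,          # recycle down to this under low load
            'TEST_ON_BORROW': True,  # validate each connection on checkout
            # 'TEST_ON_BORROW_QUERY': 'SELECT 1',  # the default query
        },
    },
}
```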
class DatabaseWrapper16(OriginalDatabaseWrapper):
'''
For Django 1.6.x
TODO: See https://github.com/django/django/commit/1893467784deb6cd8a493997e8bac933cc2e4af9
but more importantly https://github.com/django/django/commit/2ee21d9f0d9eaed0494f3b9cd4b5bc9beffffae5
This code may be no longer needed!
'''
set_up_pool_config = _set_up_pool_config
create_connection_pool = _create_connection_pool
def __init__(self, *args, **kwargs):
super(DatabaseWrapper16, self).__init__(*args, **kwargs)
self.set_up_pool_config()
def get_new_connection(self, conn_params):
# Is this the initial use of the global connection_pools dictionary for
# this python interpreter? Build a ThreadedConnectionPool instance and
# add it to the dictionary if so.
if self.alias not in connection_pools or connection_pools[self.alias]['settings'] != self.settings_dict:
for extra in pool_config_defaults.keys():
if extra in conn_params:
del conn_params[extra]
self.create_connection_pool(conn_params)
return PooledConnection(connection_pools[self.alias]['pool'], test_query=self._test_on_borrow_query)
class DatabaseWrapper14and15(OriginalDatabaseWrapper):
'''
For Django 1.4.x and 1.5.x
'''
set_up_pool_config = _set_up_pool_config
create_connection_pool = _create_connection_pool
def __init__(self, *args, **kwargs):
super(DatabaseWrapper14and15, self).__init__(*args, **kwargs)
self.set_up_pool_config()
def _cursor(self):
settings_dict = self.settings_dict
if self.connection is None or connection_pools[self.alias]['settings'] != settings_dict:
# Is this the initial use of the global connection_pools dictionary for
# this python interpreter? Build a ThreadedConnectionPool instance and
# add it to the dictionary if so.
if self.alias not in connection_pools or connection_pools[self.alias]['settings'] != settings_dict:
if not settings_dict['NAME']:
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured(
"settings.DATABASES is improperly configured. "
"Please supply the NAME value.")
conn_params = {
'database': settings_dict['NAME'],
}
conn_params.update(settings_dict['OPTIONS'])
                for extra in ['autocommit'] + list(pool_config_defaults.keys()):
if extra in conn_params:
del conn_params[extra]
if settings_dict['USER']:
conn_params['user'] = settings_dict['USER']
if settings_dict['PASSWORD']:
conn_params['password'] = force_str(settings_dict['PASSWORD'])
if settings_dict['HOST']:
conn_params['host'] = settings_dict['HOST']
if settings_dict['PORT']:
conn_params['port'] = settings_dict['PORT']
self.create_connection_pool(conn_params)
self.connection = PooledConnection(connection_pools[self.alias]['pool'],
test_query=self._test_on_borrow_query)
self.connection.set_client_encoding('UTF8')
tz = 'UTC' if settings.USE_TZ else settings_dict.get('TIME_ZONE')
if tz:
try:
get_parameter_status = self.connection.get_parameter_status
except AttributeError:
# psycopg2 < 2.0.12 doesn't have get_parameter_status
conn_tz = None
else:
conn_tz = get_parameter_status('TimeZone')
if conn_tz != tz:
# Set the time zone in autocommit mode (see #17062)
self.connection.set_isolation_level(
psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
self.connection.cursor().execute(
self.ops.set_time_zone_sql(), [tz])
self.connection.set_isolation_level(self.isolation_level)
self._get_pg_version()
connection_created.send(sender=self.__class__, connection=self)
cursor = self.connection.cursor()
cursor.tzinfo_factory = utc_tzinfo_factory if settings.USE_TZ else None
return CursorWrapper(cursor)
class DatabaseWrapper13(OriginalDatabaseWrapper):
'''
For Django 1.3.x
'''
set_up_pool_config = _set_up_pool_config
create_connection_pool = _create_connection_pool
def __init__(self, *args, **kwargs):
super(DatabaseWrapper13, self).__init__(*args, **kwargs)
self.set_up_pool_config()
def _cursor(self):
'''
Override _cursor to plug in our connection pool code. We'll return a wrapped Connection
which can handle returning itself to the pool when its .close() method is called.
'''
from django.db.backends.postgresql.version import get_version
new_connection = False
set_tz = False
settings_dict = self.settings_dict
if self.connection is None or connection_pools[self.alias]['settings'] != settings_dict:
new_connection = True
set_tz = settings_dict.get('TIME_ZONE')
# Is this the initial use of the global connection_pools dictionary for
# this python interpreter? Build a ThreadedConnectionPool instance and
# add it to the dictionary if so.
if self.alias not in connection_pools or connection_pools[self.alias]['settings'] != settings_dict:
if settings_dict['NAME'] == '':
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured("You need to specify NAME in your Django settings file.")
conn_params = {
'database': settings_dict['NAME'],
}
conn_params.update(settings_dict['OPTIONS'])
                for extra in ['autocommit'] + list(pool_config_defaults.keys()):
if extra in conn_params:
del conn_params[extra]
if settings_dict['USER']:
conn_params['user'] = settings_dict['USER']
if settings_dict['PASSWORD']:
conn_params['password'] = settings_dict['PASSWORD']
if settings_dict['HOST']:
conn_params['host'] = settings_dict['HOST']
if settings_dict['PORT']:
conn_params['port'] = settings_dict['PORT']
self.create_connection_pool(conn_params)
self.connection = PooledConnection(connection_pools[self.alias]['pool'],
test_query=self._test_on_borrow_query)
self.connection.set_client_encoding('UTF8')
self.connection.set_isolation_level(self.isolation_level)
# We'll continue to emulate the old signal frequency in case any code depends upon it
connection_created.send(sender=self.__class__, connection=self)
cursor = self.connection.cursor()
cursor.tzinfo_factory = None
if new_connection:
if set_tz:
cursor.execute("SET TIME ZONE %s", [settings_dict['TIME_ZONE']])
if not hasattr(self, '_version'):
self.__class__._version = get_version(cursor)
if self._version[0:2] < (8, 0):
# No savepoint support for earlier version of PostgreSQL.
self.features.uses_savepoints = False
if self.features.uses_autocommit:
if self._version[0:2] < (8, 2):
# FIXME: Needs extra code to do reliable model insert
# handling, so we forbid it for now.
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured("You cannot use autocommit=True with PostgreSQL prior to 8.2 at the moment.")
else:
# FIXME: Eventually we're enable this by default for
# versions that support it, but, right now, that's hard to
# do without breaking other things (#10509).
self.features.can_return_id_from_insert = True
return CursorWrapper(cursor)
'''
Choose a version of the DatabaseWrapper class to use based on the Django
version. This is a bit hacky, what's a more elegant way?
'''
django_version = get_django_version()
if django_version.startswith('1.3'):
from django.db.backends.postgresql_psycopg2.base import CursorWrapper
class DatabaseWrapper(DatabaseWrapper13):
pass
elif django_version.startswith('1.4') or django_version.startswith('1.5'):
from django.conf import settings
from django.db.backends.postgresql_psycopg2.base import utc_tzinfo_factory, \
CursorWrapper
# The force_str call around the password seems to be the only change from
# 1.4 to 1.5, so we'll use the same DatabaseWrapper class and make
# force_str a no-op.
try:
from django.utils.encoding import force_str
except ImportError:
force_str = lambda x: x
class DatabaseWrapper(DatabaseWrapper14and15):
pass
elif django_version.startswith('1.6'):
class DatabaseWrapper(DatabaseWrapper16):
pass
else:
raise ImportError("Unsupported Django version %s" % django_version)
# --- docker/cleanup_generators.py (repo: hashnfv/hashnfv-nfvbench, Apache-2.0) ---
# Copyright 2016 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
TREX_OPT = '/opt/trex'
TREX_UNUSED = [
'_t-rex-64-debug', '_t-rex-64-debug-o', 'bp-sim-64', 'bp-sim-64-debug',
't-rex-64-debug', 't-rex-64-debug-o', 'automation/__init__.py',
'automation/graph_template.html',
'automation/config', 'automation/h_avc.py', 'automation/phantom',
'automation/readme.txt', 'automation/regression', 'automation/report_template.html',
'automation/sshpass.exp', 'automation/trex_perf.py', 'wkhtmltopdf-amd64'
]
def remove_unused_libs(path, files):
"""
Remove files not used by traffic generator.
"""
for f in files:
f = os.path.join(path, f)
try:
if os.path.isdir(f):
shutil.rmtree(f)
else:
os.remove(f)
except OSError:
print "Skipped file:"
print f
continue
def get_dir_size(start_path='.'):
"""
Computes size of directory.
:return: size of directory with subdirectiories
"""
total_size = 0
for dirpath, dirnames, filenames in os.walk(start_path):
for f in filenames:
try:
fp = os.path.join(dirpath, f)
total_size += os.path.getsize(fp)
except OSError:
continue
return total_size
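A quick way to sanity-check the directory-size helper is to point it at a throwaway directory tree. The helper is duplicated below so the sketch is self-contained:

```python
import os
import tempfile

def get_dir_size(start_path='.'):
    """Sum the sizes of all files under start_path, including subdirectories."""
    total_size = 0
    for dirpath, dirnames, filenames in os.walk(start_path):
        for f in filenames:
            try:
                total_size += os.path.getsize(os.path.join(dirpath, f))
            except OSError:
                continue
    return total_size

with tempfile.TemporaryDirectory() as d:
    # 1000 bytes at the top level, 500 bytes one level down.
    with open(os.path.join(d, 'a.bin'), 'wb') as fh:
        fh.write(b'\x00' * 1000)
    os.makedirs(os.path.join(d, 'sub'))
    with open(os.path.join(d, 'sub', 'b.bin'), 'wb') as fh:
        fh.write(b'\x00' * 500)
    print(get_dir_size(d))  # → 1500
```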
if __name__ == "__main__":
versions = os.listdir(TREX_OPT)
for version in versions:
trex_path = os.path.join(TREX_OPT, version)
        print('Cleaning TRex', version)
try:
size_before = get_dir_size(trex_path)
remove_unused_libs(trex_path, TREX_UNUSED)
size_after = get_dir_size(trex_path)
            print('==== Saved Space ====')
            print(size_before - size_after)
        except OSError:
            import traceback
            traceback.print_exc()
            print('Cleanup was not finished.')
# --- storage/aug_buffer.py (repo: nsortur/equi_rl, MIT) ---
from storage.buffer import QLearningBuffer
from utils.torch_utils import ExpertTransition, augmentTransition
from utils.parameters import buffer_aug_type
class QLearningBufferAug(QLearningBuffer):
def __init__(self, size, aug_n=9):
super().__init__(size)
self.aug_n = aug_n
def add(self, transition: ExpertTransition):
super().add(transition)
for _ in range(self.aug_n):
super().add(augmentTransition(transition, buffer_aug_type))
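The add() override above stores the raw transition plus aug_n augmented copies, so each call grows the buffer by aug_n + 1 entries. A self-contained sketch of that pattern — ListBuffer and fake_augment are illustrative stand-ins for the repo's QLearningBuffer and augmentTransition:

```python
class ListBuffer:
    """Stand-in for QLearningBuffer: just appends to a list."""
    def __init__(self):
        self._storage = []

    def add(self, transition):
        self._storage.append(transition)

def fake_augment(transition):
    # Stand-in for augmentTransition: tag the copy as augmented.
    return (transition[0], "augmented")

class AugBuffer(ListBuffer):
    def __init__(self, aug_n=9):
        super().__init__()
        self.aug_n = aug_n

    def add(self, transition):
        super().add(transition)            # store the raw transition
        for _ in range(self.aug_n):        # plus aug_n augmented copies
            super().add(fake_augment(transition))

buf = AugBuffer(aug_n=9)
buf.add(("obs", "raw"))
print(len(buf._storage))  # → 10 (1 raw + 9 augmented)
```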
# --- __dm__.py (repo: AbhilashDatta/InstagramBot, MIT) ---
from selenium import webdriver
from time import sleep
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
def Dm(driver,user,message):
''' This function is used to direct message a single user/group '''
driver.get('https://www.instagram.com/direct/inbox/')
send_message_button = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="react-root"]/section/div/div[2]/div/div/div[2]/div/div[3]/div/button'))).click()
search_user = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[5]/div/div/div[2]/div[1]/div/div[2]/input')))
search_user.send_keys(user)
selector = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[5]/div/div/div[2]/div[2]/div/div/div[3]/button/span'))).click()
next_button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[5]/div/div/div[1]/div/div[2]/div/button/div'))).click()
try:
text = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="react-root"]/section/div/div[2]/div/div/div[2]/div[2]/div/div[2]/div/div/div[2]/textarea')))
text.send_keys(message)
send = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="react-root"]/section/div/div[2]/div/div/div[2]/div[2]/div/div[2]/div/div/div[3]/button'))).click()
driver.get('https://www.instagram.com/direct/inbox/')
except:
print('No message sent to '+user)
        driver.get('https://www.instagram.com/direct/inbox/')

# --- mashov.py (repo: Yotamefr/BeitBiram, MIT) ---
import requests
from datetime import datetime
import json
from extras import Day, Lesson
class PasswordError(Exception):
pass
class LoginFailed(Exception):
pass
class MashovAPI:
"""
MashovAPI
Originally made by Xiddoc. Project can be found here: https://github.com/Xiddoc/MashovAPI
Modifications were made by me, Yotamefr.
"""
def __init__(self, username, **kwargs):
"""
Parameters
------------
username -> Represents the username
------------
There are some weird stuff here. I might clean it in a while
Again, this code wasn't made by me, just modified by me
"""
self.url = "https://web.mashov.info/api/{}/"
self.session = requests.Session()
self.session.headers.update({'Accept': 'application/json, text/plain, */*',
'Referer': 'https://web.mashov.info/students/login',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
'Content-Type': 'application/json'})
self.username = username
self.auth_ID = 0
self.user_ID = self.auth_ID
self.uid = self.auth_ID
self.uID = self.auth_ID
self.guid = self.auth_ID
self.guID = self.auth_ID
self.school_site = ""
self.moodle_site = ""
self.school_name = ""
self.last_name = ""
self.first_name = ""
self.class_name = ""
self.last_pass = ""
self.last_login = ""
self.school_years = []
self.csrf_token = ""
self.user_children = {}
# Kwargs password
if "password" in kwargs:
self.password = kwargs["password"]
else:
self.password = False
# Kwargs schoolData
if "schoolData" in kwargs:
self.school_data = kwargs["schoolData"]
else:
self.school_data = False
# Kwargs schoolID
if "schoolID" in kwargs:
self.school_ID = kwargs["schoolID"]
elif not self.school_data:
self.school_data = self.get_schools()
self.school_ID = self.get_school_ID_by_name(kwargs["schoolName"])
self.current_year = datetime.now().year + 1
def login(self):
"""
Parameters
------------
------------
"""
if not self.password:
raise PasswordError("No password entered.")
self.login_data = {'semel': self.school_ID,
'username': self.username,
'password': self.password,
'year': self.current_year}
self.ret_data = self.send("login", "post", self.login_data)
self.ret_text = json.loads(self.ret_data.text)
if not self.ret_data.status_code == 200:
self.is_logged_in = False
raise LoginFailed()
self.is_logged_in = True
self.auth_ID = self.ret_text["credential"]["userId"]
self.user_ID = self.auth_ID
self.uid = self.auth_ID
self.uID = self.auth_ID
self.guid = self.auth_ID
self.guID = self.auth_ID
self.school_site = self.ret_text["accessToken"]["schoolOptions"]["schoolSite"]
self.moodle_site = self.ret_text["accessToken"]["schoolOptions"]["moodleSite"]
self.school_name = self.ret_text["accessToken"]["schoolOptions"]["schoolName"]
self.last_name = self.ret_text["accessToken"]["children"][0]["familyName"]
self.first_name = self.ret_text["accessToken"]["children"][0]["privateName"]
self.class_name = f'{self.ret_text["accessToken"]["children"][0]["classNum"]}{self.ret_text["accessToken"]["children"][0]["classCode"]}'
self.last_pass = self.ret_text["accessToken"]["lastPassSet"]
self.last_login = self.ret_text["accessToken"]["lastLogin"]
self.school_years = self.ret_text["accessToken"]["userSchoolYears"]
self.csrf_token = self.ret_data.cookies["Csrf-Token"]
self.session.headers.update({"x-csrf-token": self.csrf_token})
self.user_children = self.ret_text["accessToken"]["children"]
del self.username
del self.password
@property
def timetable(self):
return self.form_return(self.send(f"students/{self.user_ID}/timetable", "get"))
def update_school_data(self):
"""
Parameters
------------
------------
"""
self.school_data = self.form_return(self.send("schools", "get"))
def get_schools(self):
"""
Parameters
------------
------------
"""
self.update_school_data()
        return self.school_data
def get_school_ID_by_name(self, school):
"""
Parameters
------------
school -> Represents the school name
------------
"""
        if self.school_data:
            schoolData = self.school_data
        else:
            self.update_school_data()
            schoolData = self.school_data
for schools in schoolData:
if schools["name"].find(school) == 0:
return schools["semel"]
def clear_session(self):
"""
Parameters
------------
------------
"""
return self.form_return(self.send("clearSession", "get"))
def get_special_lessons(self):
"""
Parameters
------------
------------
"""
return self.get_private_lessons()
def get_private_lessons(self):
"""
Parameters
------------
------------
"""
return self.form_return(self.send("students/{}/specialHoursLessons".format(self.auth_ID), "get"))
def get_private_lesson_types(self):
"""
Parameters
------------
------------
"""
return self.form_return(self.send("lessonsTypes", "get"))
@property
def classes(self):
return self.groups
@property
def groups(self):
return self.form_return(self.send("students/{}/groups".format(self.auth_ID), "get"))
@property
def teachers(self):
recipents = self.recipents
teachers = []
for i in recipents:
if "הורים/" not in i["displayName"]:
teachers.append(i)
return teachers
@property
def recipents(self):
return self.form_return(self.send("mail/recipients", "get"))
def form_return(self, response):
"""
Parameters
------------
response -> Represents the response from the website
------------
"""
if response.status_code != 200:
return False
else:
try:
return json.loads(response.text)
except:
return response.text
def send(self, url, method="get", params={}, files={}):
"""
Parameters
------------
url -> Represents the url to go to
method -> Represents the method to use. Can be either `get` or `post`
params -> Represents the parameters to send to the website. Only use it on `post`
files -> Pretty much the same as for the params
------------
"""
return getattr(self.session, str(method).strip().lower())(self.url.format(url), data=json.dumps(params),
files=files)
def __str__(self):
return json.dumps({
"MashovAPI": {
"url": self.url,
"sessionH": dict(self.session.headers),
"sessionC": self.session.cookies.get_dict(),
"username": self.username,
"password": self.password,
"schoolData": self.school_data,
"schoolID": self.school_ID,
"currentYear": self.current_year,
"loginData": self.login_data,
"isLoggedIn": self.is_logged_in,
"authID": self.auth_ID,
"userID": self.user_ID,
"uid": self.uid,
"uID": self.uID,
"guid": self.guid,
"guID": self.guID,
"schoolSite": self.school_site,
"moodleSite": self.moodle_site,
"schoolName": self.school_name,
"lastName": self.last_name,
"firstName": self.first_name,
"className": self.class_name,
"lastPass": self.last_pass,
"lastLogin": self.last_login,
"schoolYears": self.school_years,
"csrfToken": self.csrf_token,
"userChildren": self.user_children
}})
def get_day(self, day_num: int):
"""
Parameters
------------
day -> Represents the day number
------------
"""
day = []
timetable = []
for i in self.timetable:
if i["timeTable"]["day"] == day_num:
timetable.append(i)
        # Sort the day's entries by lesson number.
        timetable.sort(key=lambda entry: entry["timeTable"]["lesson"])
for i in timetable:
if not "קפ'" in i["groupDetails"]["subjectName"]: # We don't need that. It's useless.
if len(day) > 0:
while i["timeTable"]["lesson"] > day[-1].number + 1:
day.append(Lesson(
lesson="",
lesson_number=day[-1].number + 1,
lesson_time="",
classroom="",
teacher="",
)
)
i["groupDetails"]["groupTeachers"][0]["teacherName"] = i["groupDetails"]["groupTeachers"][0]["teacherName"].replace("-", " ")
day.append(Lesson(
lesson=i["groupDetails"]["subjectName"],
lesson_number=i["timeTable"]["lesson"],
lesson_time="",
classroom=i["timeTable"]["roomNum"],
teacher=i["groupDetails"]["groupTeachers"][0]["teacherName"]
)
)
return Day(day_num, day)
def get_today(self):
"""
Parameters
------------
------------
"""
today = datetime.now().weekday()
today += 2
if today > 7:
today -= 7
return self.get_day(today)
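get_today() converts Python's Monday-based weekday() numbering (Mon=0 … Sun=6) into the Sunday-based day numbers the timetable uses (Sun=1 … Sat=7). The mapping in isolation:

```python
def python_weekday_to_mashov_day(weekday):
    """Map Python's weekday() (Mon=0 .. Sun=6) to a Sunday-based day (Sun=1 .. Sat=7)."""
    day = weekday + 2
    if day > 7:
        day -= 7
    return day

print(python_weekday_to_mashov_day(6))  # Sunday   → 1
print(python_weekday_to_mashov_day(0))  # Monday   → 2
print(python_weekday_to_mashov_day(5))  # Saturday → 7
```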
# --- clover.py (repo: imyz/25000, MIT) ---
#!/usr/bin/env python
from math import *
import sys
def rotate(x, y, degrees):
c = cos(pi * degrees / 180.0)
s = sin(pi * degrees / 180.0)
return x * c + y * s, y * c - x * s
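rotate() implements a clockwise rotation under this file's sign convention (positive degrees take +x toward −y). A standalone copy with a quick numeric check — the sample points are chosen purely for illustration:

```python
from math import cos, sin, pi, isclose

def rotate(x, y, degrees):
    c = cos(pi * degrees / 180.0)
    s = sin(pi * degrees / 180.0)
    return x * c + y * s, y * c - x * s

# Rotating (1, 0) by 90 degrees lands on (0, -1) in this convention.
x, y = rotate(1.0, 0.0, 90)
print(round(x, 9), round(y, 9))  # → 0.0 -1.0
```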
def move(verb, **kwargs):
    keys = sorted(kwargs)
    words = [verb.upper()]
    for key in keys:
        words.append('%s%g' % (key.upper(), kwargs[key]))
    print(' '.join(words))
def travel(**kwargs): move('G0', **kwargs)
def linear(**kwargs): move('G1', **kwargs)
def clockwise(**kwargs): move('G2', **kwargs)
def up(): travel(z=8)
def down(): linear(z=-2)
def jump(**kwargs):
up()
travel(**kwargs)
down()
frame_width = 200
frame_height = 75
drill = 1.6 # 1/16 inch radius.
extrusion = 15
motor_screw_grid = 31
motor_cutout_diameter = 22
motor_width = 42.2
motor_offset = 35 # Motor face to extrusion.
motor_side, motor_bend = rotate(0, motor_offset + extrusion, 30)
motor_side += extrusion/2
motor_side += extrusion/cos(pi/6)
mc = motor_cutout_diameter/2 + drill
#nema23 = 47.14 # Mounting screws center-to-center
clover = 6
thickness = 0.0478 * 25.4 # 18 gauge steel.
enable_perimeter = False
print('thickness', thickness, file=sys.stderr)
print('motor_bend', motor_bend, file=sys.stderr)
print('motor_side', motor_side, file=sys.stderr)
print('mc', mc, file=sys.stderr)
print('extrusion-to-extrusion', frame_width, file=sys.stderr)
print('edge-to-edge', frame_width + 2*extrusion, file=sys.stderr)
xa = motor_side - drill # Outside wings start
xb = motor_side + motor_bend + drill
xs1 = xa + extrusion/2 # Extrusion screws
xs2 = xb - extrusion/2
# xe = frame_width/2 # Extrusion corner
xt = motor_width/2
xms = motor_screw_grid/sqrt(2)
xgs = 19
ya = frame_height/2 + drill # Top without flange
yb = frame_height/2 + drill - extrusion
ys = frame_height/2 - extrusion/2 # Extrusion screws
yt = motor_width/2
yt2 = yt + 4
yms = xms
ygs = xgs
s2 = sqrt(2)
print('G17 ; Select XY plane for arcs')
print('G90 ; Absolute coordinates')
move('G92', x=0, y=0, z=0)
linear(x=0, y=0, z=0)
print('; Gasket screw holes')
for x in (-xgs, xgs):
for y in (-x, x):
jump(x=x, y=y)
# clockwise(i=1)
if enable_perimeter:
    print('; Horizontal extrusion screw holes')
for x in (xs1, xs2):
jump(x=x, y=ys)
for x in (xs2, xs1, -xs1, -xs2):
jump(x=x, y=-ys)
for x in (-xs2, -xs1):
jump(x=x, y=ys)
#print '; 22mm dia cutout for reference'
#jump(x=0, y=11)
#clockwise(j=-11)
#print '; NEMA17 square for reference'
#jump(x=0, y=yt*s2)
#linear(x=xt*s2, y=0)
#linear(x=0, y=-yt*s2)
#linear(x=-xt*s2, y=0)
#linear(x=0, y=yt*s2)
def clovercut(z):
up()
travel(x=-clover+1, y=yms-clover-1)
linear(z=z)
    print('; Motor cutout clover leaf')
linear(x=-clover, y=yms-clover)
clockwise(x=clover, i=clover, j=clover)
#clockwise(x=xms-clover, y=clover, r=mc)
linear(x=xms-clover, y=clover, r=mc)
clockwise(y=-clover, i=clover, j=-clover)
#clockwise(x=clover, y=-yms+clover, r=mc)
linear(x=clover, y=-yms+clover, r=mc)
clockwise(x=-clover, i=-clover, j=-clover)
#clockwise(x=-xms+clover, y=-clover, r=mc)
linear(x=-xms+clover, y=-clover, r=mc)
clockwise(y=clover, i=-clover, j=clover)
#clockwise(x=-clover, y=yms-clover, r=mc)
linear(x=-clover, y=yms-clover, r=mc)
linear(x=-clover+1, y=yms-clover+1)
for z in (-1, -2.5):
clovercut(z)
def perimeter(z):
up()
travel(x=xa, y=yb)
linear(z=z)
    print('; Right wing (outside horizontal extrusions)')
clockwise(x=xa+extrusion, y=ya, i=extrusion)
linear(x=xb)
linear(y=-ya)
linear(x=xa+extrusion)
clockwise(x=xa, y=-yb, j=extrusion)
    print('; Extrusion pass-through and motor mounting plate')
linear(x=xa-20)
clockwise(x=-xa+20, i=-xa+20, j=yb)
linear(x=-xa, y=-yb)
    print('; Left wing (outside horizontal extrusions)')
clockwise(x=-xa-extrusion, y=-ya, i=-extrusion)
linear(x=-xb)
linear(y=ya)
linear(x=-xa-extrusion)
clockwise(x=-xa, y=yb, j=-extrusion)
    print('; Extrusion pass-through and motor mounting plate')
linear(x=-xa+20)
clockwise(x=xa-20, i=xa-20, j=-yb)
linear(x=xa, y=yb)
if enable_perimeter:
for z in (-1, -2.5):
perimeter(z)
print('; All done')
up()
# --- scripts/mnist_inference.py (repo: asiakaczmar/noise2self, MIT) ---
import torch
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader
from scripts.utils import SyntheticNoiseDataset
from models.babyunet import BabyUnet
import numpy as np
CHECKPOINTS_PATH = '../checkpoints/'
mnist_test = MNIST('../inferred_data/MNIST', download=True,
transform=transforms.Compose([
transforms.ToTensor(),
]), train=False)
noisy_mnist_test = SyntheticNoiseDataset(mnist_test, 'test')
data_loader = DataLoader(noisy_mnist_test, batch_size=256, shuffle=True)
for x in range(0, 200, 10):
trained_model = BabyUnet()
    trained_model.load_state_dict(torch.load(CHECKPOINTS_PATH + 'model' + str(x)))
trained_model.eval()
for i, batch in enumerate(data_loader):
denoised = trained_model(batch)
break()
np.save(denoised.numpy(), '../inferred_data/model' + str(x) + '.npz')
| 31.551724 | 73 | 0.696175 | 108 | 915 | 5.722222 | 0.490741 | 0.058252 | 0.045307 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012245 | 0.196721 | 915 | 28 | 74 | 32.678571 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0.078775 | 0.04814 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.285714 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
db76b4e07eb1879ec4babded5e9e5a77166fce6b | 424 | py | Python | core/data/utils.py | ahmad-PH/auto_lcc | 55a6ac0e92994f4eed9951a27b7aa0d834f9d804 | [
"MIT"
] | 2 | 2022-01-01T22:09:05.000Z | 2022-01-01T23:00:43.000Z | core/data/utils.py | ahmad-PH/auto_lcc | 55a6ac0e92994f4eed9951a27b7aa0d834f9d804 | [
"MIT"
] | null | null | null | core/data/utils.py | ahmad-PH/auto_lcc | 55a6ac0e92994f4eed9951a27b7aa0d834f9d804 | [
"MIT"
] | null | null | null | import pickle
import pandas as pd
from typing import List, Tuple
def load_libofc_df(data_path):
def tuple_to_df(data: List[Tuple]) -> pd.DataFrame:
return pd.DataFrame(data, columns=["class", "title", "synopsis", "id"])
with open(data_path, 'rb') as f:
classes = pickle.load(f)
train = pickle.load(f)
test = pickle.load(f)
return classes, tuple_to_df(train), tuple_to_df(test)
| 28.266667 | 79 | 0.660377 | 64 | 424 | 4.21875 | 0.46875 | 0.077778 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209906 | 424 | 14 | 80 | 30.285714 | 0.80597 | 0 | 0 | 0 | 0 | 0 | 0.051887 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.090909 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
db7701392b667ccf9ad8bc520bcd09b9ef9711c5 | 608 | py | Python | apps/users/adminx.py | hhdMrLion/mxshop-api | 1472ad0d959439ea80c1f8d8bfd3629c15d3017d | [
"Apache-2.0"
] | null | null | null | apps/users/adminx.py | hhdMrLion/mxshop-api | 1472ad0d959439ea80c1f8d8bfd3629c15d3017d | [
"Apache-2.0"
] | null | null | null | apps/users/adminx.py | hhdMrLion/mxshop-api | 1472ad0d959439ea80c1f8d8bfd3629c15d3017d | [
"Apache-2.0"
] | null | null | null | import xadmin
from users.models import VerifyCode
from xadmin import views
class BaseSetting(object):
    # Enable theme switching
enable_themes = True
user_bootswatch = True
class GlobalSettings(object):
    # Global settings: admin site title and footer
site_title = '天天生鲜后台管理'
site_footer = 'https://www.qnmlgb.top/'
    # Collapse menus (accordion style)
menu_style = 'accordion'
class VerifyCodeAdmin(object):
list_display = ['code', 'mobile', 'add_time']
xadmin.site.register(VerifyCode, VerifyCodeAdmin)
xadmin.site.register(views.BaseAdminView, BaseSetting)
xadmin.site.register(views.CommAdminView, GlobalSettings)
| 22.518519 | 58 | 0.708882 | 65 | 608 | 6.523077 | 0.630769 | 0.070755 | 0.127358 | 0.108491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194079 | 608 | 26 | 59 | 23.384615 | 0.865306 | 0.042763 | 0 | 0 | 0 | 0 | 0.105072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
db7edea364132ddeeca859f58229a42b6ea2f0ae | 534 | py | Python | config/settings/local.py | vyshakTs/STORE_MANAGEMENT_SYSTEM | b6b82a02c0b512083c35a8656e191436552569a9 | [
"CC0-1.0"
] | null | null | null | config/settings/local.py | vyshakTs/STORE_MANAGEMENT_SYSTEM | b6b82a02c0b512083c35a8656e191436552569a9 | [
"CC0-1.0"
] | null | null | null | config/settings/local.py | vyshakTs/STORE_MANAGEMENT_SYSTEM | b6b82a02c0b512083c35a8656e191436552569a9 | [
"CC0-1.0"
] | null | null | null | from .base import *
DEBUG = True
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'SMS',
'USER': 'postgres',
'PASSWORD': 'password',
'HOST': 'localhost',
'PORT': '',
}
}
INSTALLED_APPS += [
'debug_toolbar.apps.DebugToolbarConfig',
'django_extensions',
]
ALLOWED_HOSTS += ['.herokuapp.com']
# Loads SECRET_KEY from .env file
# SECRET_KEY = get_env_variable('SECRET_KEY')
| 19.777778 | 64 | 0.617978 | 54 | 534 | 5.925926 | 0.777778 | 0.084375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220974 | 534 | 26 | 65 | 20.538462 | 0.769231 | 0.140449 | 0 | 0 | 0 | 0 | 0.45614 | 0.245614 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.055556 | 0.055556 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
db8615ff95bbb42756435769fd0cc3b6f45c202c | 503 | py | Python | day-2/part_b.py | yuetsin/AoC | a7c5aea245ee6e77312352907fc4d1ac8eac2d3a | [
"CC0-1.0"
] | null | null | null | day-2/part_b.py | yuetsin/AoC | a7c5aea245ee6e77312352907fc4d1ac8eac2d3a | [
"CC0-1.0"
] | null | null | null | day-2/part_b.py | yuetsin/AoC | a7c5aea245ee6e77312352907fc4d1ac8eac2d3a | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
import re
def get_input() -> list:
with open('./input', 'r') as f:
return [v for v in [v.strip() for v in f.readlines()] if v]
lines = get_input()
count = 0
for line in lines:
lower, upper, char, password = re.split(r'-|: | ', line)
lower, upper = int(lower) - 1, int(upper) - 1
try:
if (password[lower] == char) ^ (password[upper] == char):
count += 1
    except IndexError:
        # position is past the end of the password; treat as no match
        pass
print(count)
| 20.12 | 67 | 0.554672 | 74 | 503 | 3.743243 | 0.567568 | 0.057762 | 0.043321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013928 | 0.286282 | 503 | 24 | 68 | 20.958333 | 0.75766 | 0.097416 | 0 | 0 | 0 | 0 | 0.030973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.2 | 0.066667 | 0 | 0.2 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
db8707b6679e39765f15056eb4cf61c517a7c762 | 9,435 | py | Python | hcloud/servers/domain.py | usmannasir/hcloud-python | 2a90551fb1c4d9d8a6aea5d8b6601a7c1360494d | [
"MIT"
] | 1 | 2019-10-23T01:00:08.000Z | 2019-10-23T01:00:08.000Z | hcloud/servers/domain.py | usmannasir/hcloud-python | 2a90551fb1c4d9d8a6aea5d8b6601a7c1360494d | [
"MIT"
] | null | null | null | hcloud/servers/domain.py | usmannasir/hcloud-python | 2a90551fb1c4d9d8a6aea5d8b6601a7c1360494d | [
"MIT"
] | 1 | 2019-06-19T17:53:10.000Z | 2019-06-19T17:53:10.000Z | # -*- coding: utf-8 -*-
from hcloud.core.domain import BaseDomain
from hcloud.helpers.descriptors import ISODateTime
class Server(BaseDomain):
"""Server Domain
:param id: int
ID of the server
:param name: str
Name of the server (must be unique per project and a valid hostname as per RFC 1123)
:param status: str
Status of the server Choices: `running`, `initializing`, `starting`, `stopping`, `off`, `deleting`, `migrating`, `rebuilding`, `unknown`
:param created: datetime
Point in time when the server was created
:param public_net: :class:`PublicNetwork <hcloud.servers.domain.PublicNetwork>`
Public network information.
:param server_type: :class:`BoundServerType <hcloud.server_types.client.BoundServerType>`
:param datacenter: :class:`BoundDatacenter <hcloud.datacenters.client.BoundDatacenter>`
:param image: :class:`BoundImage <hcloud.images.client.BoundImage>`, None
:param iso: :class:`BoundIso <hcloud.isos.client.BoundIso>`, None
:param rescue_enabled: bool
True if rescue mode is enabled: Server will then boot into rescue system on next reboot.
:param locked: bool
True if server has been locked and is not available to user.
:param backup_window: str, None
Time window (UTC) in which the backup will run, or None if the backups are not enabled
:param outgoing_traffic: int, None
Outbound Traffic for the current billing period in bytes
:param ingoing_traffic: int, None
Inbound Traffic for the current billing period in bytes
:param included_traffic: int
Free Traffic for the current billing period in bytes
:param protection: dict
Protection configuration for the server
:param labels: dict
User-defined labels (key-value pairs)
:param volumes: List[:class:`BoundVolume <hcloud.volumes.client.BoundVolume>`]
Volumes assigned to this server.
"""
STATUS_RUNNING = "running"
"""Server Status running"""
STATUS_INIT = "initializing"
"""Server Status initializing"""
STATUS_STARTING = "starting"
"""Server Status starting"""
STATUS_STOPPING = "stopping"
"""Server Status stopping"""
STATUS_OFF = "off"
"""Server Status off"""
STATUS_DELETING = "deleting"
"""Server Status deleting"""
STATUS_MIGRATING = "migrating"
"""Server Status migrating"""
STATUS_REBUILDING = "rebuilding"
"""Server Status rebuilding"""
STATUS_UNKNOWN = "unknown"
"""Server Status unknown"""
__slots__ = (
"id",
"name",
"status",
"public_net",
"server_type",
"datacenter",
"image",
"iso",
"rescue_enabled",
"locked",
"backup_window",
"outgoing_traffic",
"ingoing_traffic",
"included_traffic",
"protection",
"labels",
"volumes",
)
created = ISODateTime()
supported_fields = ("created",)
def __init__(
self,
id,
name=None,
status=None,
created=None,
public_net=None,
server_type=None,
datacenter=None,
image=None,
iso=None,
rescue_enabled=None,
locked=None,
backup_window=None,
outgoing_traffic=None,
ingoing_traffic=None,
included_traffic=None,
protection=None,
labels=None,
volumes=None,
):
self.id = id
self.name = name
self.status = status
self.created = created
self.public_net = public_net
self.server_type = server_type
self.datacenter = datacenter
self.image = image
self.iso = iso
self.rescue_enabled = rescue_enabled
self.locked = locked
self.backup_window = backup_window
self.outgoing_traffic = outgoing_traffic
self.ingoing_traffic = ingoing_traffic
self.included_traffic = included_traffic
self.protection = protection
self.labels = labels
self.volumes = volumes
class CreateServerResponse(BaseDomain):
"""Create Server Response Domain
    :param server: :class:`BoundServer <hcloud.servers.client.BoundServer>`
           The created server
:param action: :class:`BoundAction <hcloud.actions.client.BoundAction>`
Shows the progress of the server creation
:param next_actions: List[:class:`BoundAction <hcloud.actions.client.BoundAction>`]
Additional actions like a `start_server` action after the server creation
:param root_password: str, None
The root password of the server if no SSH-Key was given on server creation
"""
__slots__ = (
"server",
"action",
"next_actions",
"root_password"
)
def __init__(
self,
server, # type: BoundServer
action, # type: BoundAction
next_actions, # type: List[Action]
root_password # type: str
):
self.server = server
self.action = action
self.next_actions = next_actions
self.root_password = root_password
class ResetPasswordResponse(BaseDomain):
"""Reset Password Response Domain
:param action: :class:`BoundAction <hcloud.actions.client.BoundAction>`
           Shows the progress of the server password reset action
:param root_password: str
The root password of the server
"""
__slots__ = (
"action",
"root_password"
)
def __init__(
self,
action, # type: BoundAction
root_password # type: str
):
self.action = action
self.root_password = root_password
class EnableRescueResponse(BaseDomain):
"""Enable Rescue Response Domain
:param action: :class:`BoundAction <hcloud.actions.client.BoundAction>`
Shows the progress of the server enable rescue action
:param root_password: str
The root password of the server in the rescue mode
"""
__slots__ = (
"action",
"root_password"
)
def __init__(
self,
action, # type: BoundAction
root_password # type: str
):
self.action = action
self.root_password = root_password
class RequestConsoleResponse(BaseDomain):
"""Request Console Response Domain
:param action: :class:`BoundAction <hcloud.actions.client.BoundAction>`
Shows the progress of the server request console action
:param wss_url: str
URL of websocket proxy to use. This includes a token which is valid for a limited time only.
:param password: str
VNC password to use for this connection. This password only works in combination with a wss_url with valid token.
"""
__slots__ = (
"action",
"wss_url",
"password"
)
def __init__(
self,
action, # type: BoundAction
wss_url, # type: str
password, # type: str
):
self.action = action
self.wss_url = wss_url
self.password = password
class PublicNetwork(BaseDomain):
"""Public Network Domain
:param ipv4: :class:`IPv4Address <hcloud.servers.domain.IPv4Address>`
:param ipv6: :class:`IPv6Network <hcloud.servers.domain.IPv6Network>`
:param floating_ips: List[:class:`BoundFloatingIP <hcloud.floating_ips.client.BoundFloatingIP>`]
"""
__slots__ = (
"ipv4",
"ipv6",
"floating_ips"
)
def __init__(self,
ipv4, # type: IPv4Address
ipv6, # type: IPv6Network
floating_ips, # type: List[BoundFloatingIP]
):
self.ipv4 = ipv4
self.ipv6 = ipv6
self.floating_ips = floating_ips
class IPv4Address(BaseDomain):
"""IPv4 Address Domain
:param ip: str
The IPv4 Address
:param blocked: bool
Determine if the IP is blocked
:param dns_ptr: str
DNS PTR for the ip
"""
__slots__ = (
"ip",
"blocked",
"dns_ptr"
)
def __init__(self,
ip, # type: str
blocked, # type: bool
dns_ptr, # type: str
):
self.ip = ip
self.blocked = blocked
self.dns_ptr = dns_ptr
class IPv6Network(BaseDomain):
"""IPv6 Network Domain
:param ip: str
The IPv6 Network as CIDR Notation
:param blocked: bool
Determine if the Network is blocked
:param dns_ptr: dict
DNS PTR Records for the Network as Dict
:param network: str
The network without the network mask
:param network_mask: str
The network mask
"""
__slots__ = (
"ip",
"blocked",
"dns_ptr",
"network",
"network_mask"
)
def __init__(self,
ip, # type: str
blocked, # type: bool
                 dns_ptr,  # type: dict
):
self.ip = ip
self.blocked = blocked
self.dns_ptr = dns_ptr
ip_parts = self.ip.split("/") # 2001:db8::/64 to 2001:db8:: and 64
self.network = ip_parts[0]
self.network_mask = ip_parts[1]
| 30.337621 | 147 | 0.598728 | 1,008 | 9,435 | 5.446429 | 0.196429 | 0.039344 | 0.020036 | 0.026412 | 0.267942 | 0.232605 | 0.20255 | 0.188889 | 0.188889 | 0.164299 | 0 | 0.006655 | 0.315209 | 9,435 | 310 | 148 | 30.435484 | 0.843058 | 0.449497 | 0 | 0.375 | 0 | 0 | 0.088171 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.073864 | 0.011364 | 0 | 0.210227 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
db8a89f5042414f5dbf4f47067a5e2131c5f76b8 | 1,881 | py | Python | dlk/core/schedulers/__init__.py | cstsunfu/dlkit | 69e0efd372fa5c0ae5313124d0ba1ef55b535196 | [
"Apache-2.0"
] | null | null | null | dlk/core/schedulers/__init__.py | cstsunfu/dlkit | 69e0efd372fa5c0ae5313124d0ba1ef55b535196 | [
"Apache-2.0"
] | null | null | null | dlk/core/schedulers/__init__.py | cstsunfu/dlkit | 69e0efd372fa5c0ae5313124d0ba1ef55b535196 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 cstsunfu. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""schedulers"""
import importlib
import os
from dlk.utils.register import Register
from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR
import math
scheduler_config_register = Register("Schedule config register.")
scheduler_register = Register("Schedule register.")
class BaseScheduler(object):
"""interface for Schedule"""
    def get_scheduler(self) -> LambdaLR:
"""return the initialized scheduler
Returns:
Schedule
"""
raise NotImplementedError
def __call__(self):
"""the same as self.get_scheduler()
"""
return self.get_scheduler()
def import_schedulers(schedulers_dir, namespace):
for file in os.listdir(schedulers_dir):
path = os.path.join(schedulers_dir, file)
if (
not file.startswith("_")
and not file.startswith(".")
and (file.endswith(".py") or os.path.isdir(path))
):
scheduler_name = file[: file.find(".py")] if file.endswith(".py") else file
importlib.import_module(namespace + "." + scheduler_name)
# automatically import any Python files in the schedulers directory
schedulers_dir = os.path.dirname(__file__)
import_schedulers(schedulers_dir, "dlk.core.schedulers")
| 30.836066 | 87 | 0.701223 | 238 | 1,881 | 5.432773 | 0.487395 | 0.046404 | 0.020108 | 0.024749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005355 | 0.205742 | 1,881 | 60 | 88 | 31.35 | 0.860107 | 0.407762 | 0 | 0 | 0 | 0 | 0.070209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.36 | 0 | 0.56 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
db95ca16068801b73d6de76c353700c64c6cc5f8 | 3,558 | py | Python | lctools/shortcuts.py | novel/lc-tools | 1b9032357e2e87aebd76d87664077caa5747c220 | [
"Apache-2.0"
] | 5 | 2015-03-24T11:04:18.000Z | 2021-07-11T00:06:44.000Z | lctools/shortcuts.py | novel/lc-tools | 1b9032357e2e87aebd76d87664077caa5747c220 | [
"Apache-2.0"
] | null | null | null | lctools/shortcuts.py | novel/lc-tools | 1b9032357e2e87aebd76d87664077caa5747c220 | [
"Apache-2.0"
] | null | null | null | import getopt
import sys
from libcloud.compute.types import NodeState
from lc import get_lc
from printer import Printer
def lister_main(what, resource=None,
extension=False, supports_location=False, **kwargs):
"""Shortcut for main() routine for lister
tools, e.g. lc-SOMETHING-list
@param what: what we are listing, e.g. 'nodes'
@param extension: is it an extension of core libcloud functionality?
@param kwargs: additional arguments for the call
@type what: C{string}
@param supports_location: tells that objects we
listing could be filtered by location
@type supports_location: C{bool}
"""
list_method = "%slist_%s" % ({True: 'ex_', False: ''}[extension], what)
profile = "default"
format = location = None
options = "f:p:"
if supports_location:
options += "l:"
try:
opts, args = getopt.getopt(sys.argv[1:], options)
except getopt.GetoptError, err:
sys.stderr.write("%s\n" % str(err))
sys.exit(1)
for o, a in opts:
if o == "-f":
format = a
if o == "-p":
profile = a
if o == "-l":
location = a
try:
conn = get_lc(profile, resource=resource)
list_kwargs = kwargs
if supports_location and location is not None:
nodelocation = filter(lambda loc: str(loc.id) == location,
conn.list_locations())[0]
list_kwargs["location"] = nodelocation
for node in getattr(conn, list_method)(**list_kwargs):
Printer.do(node, format)
except Exception, err:
sys.stderr.write("Error: %s\n" % str(err))
def save_image_main():
"""Shortcut for main() routine for provider
specific image save tools.
"""
def usage(progname):
sys.stdout.write("%s -i <node_id> -n <image_name> [-p <profile]\n\n" % progname)
profile = 'default'
name = node_id = None
try:
opts, args = getopt.getopt(sys.argv[1:], "i:n:p:")
except getopt.GetoptError, err:
sys.stderr.write("%s\n" % str(err))
sys.exit(1)
for o, a in opts:
if o == "-i":
node_id = a
if o == "-n":
name = a
if o == "-p":
profile = a
if node_id is None or name is None:
usage(sys.argv[0])
sys.exit(1)
conn = get_lc(profile)
node = get_node_or_fail(conn, node_id, print_error_and_exit,
("Error: cannot find node with id '%s'." % node_id,))
Printer.do(conn.ex_save_image(node, name))
def get_node_or_fail(conn, node_id, coroutine=None, cargs=(), ckwargs={}):
"""Shortcut to get a single node by its id. In case when
such node could not be found, coroutine could be called
to handle such case. Typically coroutine will output an
error message and exit from application.
@param conn: libcloud connection handle
@param node_id: id of the node to search for
@param coroutine: a callable object to handle case
when node cannot be found
@param cargs: positional arguments for coroutine
@param kwargs: keyword arguments for coroutine
@return: node object if found, None otherwise"""
try:
node = [node for node in conn.list_nodes()
if str(node.id) == str(node_id)][0]
return node
except IndexError:
if callable(coroutine):
coroutine(*cargs, **ckwargs)
return None
def print_error_and_exit(message):
sys.stderr.write("%s\n" % message)
sys.exit(1)
| 28.693548 | 88 | 0.606239 | 487 | 3,558 | 4.338809 | 0.289528 | 0.028396 | 0.026503 | 0.024136 | 0.162802 | 0.131566 | 0.131566 | 0.095599 | 0.066257 | 0.066257 | 0 | 0.00352 | 0.281338 | 3,558 | 123 | 89 | 28.926829 | 0.822839 | 0 | 0 | 0.25 | 0 | 0 | 0.067014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.069444 | null | null | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
db991c0b9d90667e802fd9ff394fd81d65368331 | 624 | py | Python | ex38.py | YunMeMeThaw/python_exercises | 151d5d3695d578059611ac09c94b3677442197d7 | [
"MIT"
] | null | null | null | ex38.py | YunMeMeThaw/python_exercises | 151d5d3695d578059611ac09c94b3677442197d7 | [
"MIT"
] | null | null | null | ex38.py | YunMeMeThaw/python_exercises | 151d5d3695d578059611ac09c94b3677442197d7 | [
"MIT"
] | null | null | null | ten_things = "Apples Oranges cows Telephone Light Sugar"
print ("Wait there are not 10 things in that list. Let's fix")
stuff = ten_things.split(' ')
more_stuff = {"Day", "Night", "Song", "Firebee",
"Corn", "Banana", "Girl", "Boy"}
while len(stuff) !=10:
next_one = more_stuff.pop()
print("Adding: ", next_one)
stuff.append(next_one)
print (f"There are {len(stuff)} items now.")
print ("There we go : ", stuff)
print ("Let's do some things with stuff.")
print (stuff[1])
print (stuff[-1]) # whoa! cool!
print (stuff.pop())
print (' '.join(stuff)) # what? cool !
print ('#'.join(stuff[3:5])) #super stealler!
| 27.130435 | 62 | 0.647436 | 97 | 624 | 4.092784 | 0.57732 | 0.052897 | 0.065491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0.160256 | 624 | 22 | 63 | 28.363636 | 0.742366 | 0.0625 | 0 | 0 | 0 | 0 | 0.378657 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.588235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
db99d0c184b26e85aa45a341b38434f288a19023 | 700 | py | Python | var/spack/repos/builtin/packages/diffmark/package.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 11 | 2015-10-04T02:17:46.000Z | 2018-02-07T18:23:00.000Z | var/spack/repos/builtin/packages/diffmark/package.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 22 | 2017-08-01T22:45:10.000Z | 2022-03-10T07:46:31.000Z | var/spack/repos/builtin/packages/diffmark/package.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 4 | 2016-06-10T17:57:39.000Z | 2018-09-11T04:59:38.000Z | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Diffmark(AutotoolsPackage):
"""Diffmark is a DSL for transforming one string to another."""
homepage = "https://github.com/vbar/diffmark"
git = "https://github.com/vbar/diffmark.git"
version('master', branch='master')
depends_on('autoconf', type='build')
depends_on('automake', type='build')
depends_on('libtool', type='build')
depends_on('m4', type='build')
depends_on('pkgconfig', type='build')
depends_on('libxml2')
| 30.434783 | 73 | 0.688571 | 90 | 700 | 5.288889 | 0.666667 | 0.113445 | 0.168067 | 0.189076 | 0.121849 | 0.121849 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0.171429 | 700 | 22 | 74 | 31.818182 | 0.8 | 0.352857 | 0 | 0 | 0 | 0 | 0.328829 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
db9d8c67bcfd3a7c9d253f50f4a6bf8badfcdb9c | 592 | py | Python | betterbib/__init__.py | tbabej/betterbib | 80a3c9040232d9988f9a1e4c40724b40b9b9ed85 | [
"MIT"
] | null | null | null | betterbib/__init__.py | tbabej/betterbib | 80a3c9040232d9988f9a1e4c40724b40b9b9ed85 | [
"MIT"
] | null | null | null | betterbib/__init__.py | tbabej/betterbib | 80a3c9040232d9988f9a1e4c40724b40b9b9ed85 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
from __future__ import print_function
from betterbib.__about__ import (
__version__,
__author__,
__author_email__,
__website__,
)
from betterbib.tools import (
create_dict,
decode,
pybtex_to_dict,
pybtex_to_bibtex_string,
write,
update,
JournalNameUpdater,
translate_month
)
from betterbib.crossref import Crossref
from betterbib.dblp import Dblp
try:
import pipdate
except ImportError:
pass
else:
if pipdate.needs_checking(__name__):
print(pipdate.check(__name__, __version__), end='')
| 18.5 | 59 | 0.701014 | 64 | 592 | 5.765625 | 0.640625 | 0.140921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002169 | 0.221284 | 592 | 31 | 60 | 19.096774 | 0.798265 | 0.035473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.038462 | 0.269231 | 0 | 0.269231 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dbad2da50018b20b9e8cf4be1668cfeef2d4c6cb | 729 | py | Python | tests/test_dump.py | flaeppe/astunparse | 754ec7d113fa273625ccc7b6c5d65aa7700ab8a9 | [
"PSF-2.0"
] | 189 | 2016-03-15T06:48:48.000Z | 2022-03-12T09:34:10.000Z | tests/test_dump.py | flaeppe/astunparse | 754ec7d113fa273625ccc7b6c5d65aa7700ab8a9 | [
"PSF-2.0"
] | 50 | 2015-09-14T16:22:00.000Z | 2022-02-24T05:36:57.000Z | tests/test_dump.py | flaeppe/astunparse | 754ec7d113fa273625ccc7b6c5d65aa7700ab8a9 | [
"PSF-2.0"
] | 52 | 2015-04-29T10:52:33.000Z | 2022-03-03T19:59:54.000Z | import ast
import re
import sys
if sys.version_info < (2, 7):
import unittest2 as unittest
else:
import unittest
import astunparse
from tests.common import AstunparseCommonTestCase
class DumpTestCase(AstunparseCommonTestCase, unittest.TestCase):
def assertASTEqual(self, dump1, dump2):
# undo the pretty-printing
dump1 = re.sub(r"(?<=[\(\[])\n\s+", "", dump1)
dump1 = re.sub(r"\n\s+", " ", dump1)
self.assertEqual(dump1, dump2)
def check_roundtrip(self, code1, filename="internal", mode="exec"):
ast_ = compile(str(code1), filename, mode, ast.PyCF_ONLY_AST)
dump1 = astunparse.dump(ast_)
dump2 = ast.dump(ast_)
self.assertASTEqual(dump1, dump2)
| 29.16 | 71 | 0.663923 | 89 | 729 | 5.359551 | 0.52809 | 0.062893 | 0.041929 | 0.046122 | 0.075472 | 0.075472 | 0.075472 | 0 | 0 | 0 | 0 | 0.029412 | 0.207133 | 729 | 24 | 72 | 30.375 | 0.795848 | 0.032922 | 0 | 0 | 0 | 0 | 0.048364 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 1 | 0.105263 | false | 0 | 0.368421 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |