hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
80e115033a86c707eb93a0ae3031b719ba2a3293 | 3,755 | py | Python | day-16/part_2.py | leotappe/aoc-2021 | 6132e01bd9b4c6ee6a8d95e213f08463102596c2 | [
"MIT"
] | null | null | null | day-16/part_2.py | leotappe/aoc-2021 | 6132e01bd9b4c6ee6a8d95e213f08463102596c2 | [
"MIT"
] | null | null | null | day-16/part_2.py | leotappe/aoc-2021 | 6132e01bd9b4c6ee6a8d95e213f08463102596c2 | [
"MIT"
] | null | null | null | """
Advent of Code 2021 | Day 16 | Part 2
"""
import sys
import math
class Packet:
def __init__(self, version, type_id):
self.version = version
self.type_id = type_id
class Literal(Packet):
def __init__(self, version, type_id, value):
super().__init__(version, type_id)
self.value = value
def sum_version_numbers(self):
return self.version
def eval(self):
return self.value
def __str__(self):
return f'L-{self.version}-{self.type_id}({self.value})'
class Operator(Packet):
def __init__(self, version, type_id, length_type_id):
super().__init__(version, type_id)
self.length_type_id = length_type_id
self.subpackets = []
def sum_version_numbers(self):
return self.version + sum(packet.sum_version_numbers() for packet in self.subpackets)
def eval(self):
if self.type_id == 0:
return sum(p.eval() for p in self.subpackets)
if self.type_id == 1:
return math.prod(p.eval() for p in self.subpackets)
if self.type_id == 2:
return min(p.eval() for p in self.subpackets)
if self.type_id == 3:
return max(p.eval() for p in self.subpackets)
if self.type_id == 5:
return int(self.subpackets[0].eval() > self.subpackets[1].eval())
if self.type_id == 6:
return int(self.subpackets[0].eval() < self.subpackets[1].eval())
if self.type_id == 7:
return int(self.subpackets[0].eval() == self.subpackets[1].eval())
def __str__(self):
return f'O-{self.version}-{self.type_id}({",".join(str(packet) for packet in self.subpackets)})'
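# BITS packet layout assumed by the helper functions below (matches how they slice the bit string):
#   bits[start : start + 3]     -> packet version
#   bits[start + 3 : start + 6] -> packet type ID (4 = literal value, anything else = operator)
#   bits[start + 6]             -> length type ID (operator packets only)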
def get_version(bits, start_index):
return int(bits[start_index:start_index + 3], base=2)
def get_type_id(bits, start_index):
return int(bits[start_index + 3:start_index + 6], base=2)
def get_literal_value(bits, start_index):
groups = []
for i in range(start_index + 6, len(bits), 5):
groups.append(bits[i + 1:i + 5])
if bits[i] == '0':
break
return int(''.join(groups), base=2), i + 5
def get_length_type_id(bits, start_index):
return int(bits[start_index + 6])
def get_total_length_of_subpackets_in_bits(bits, start_index):
return int(bits[start_index + 7:start_index + 7 + 15], base=2)
def get_number_of_subpackets(bits, start_index):
return int(bits[start_index + 7:start_index + 7 + 11], base=2)
def parse(bits, start_index):
version = get_version(bits, start_index)
type_id = get_type_id(bits, start_index)
if type_id == 4:
value, index = get_literal_value(bits, start_index)
return Literal(version, type_id, value), index
else:
packet = Operator(version, type_id, get_length_type_id(bits, start_index))
if packet.length_type_id == 0:
bit_length_of_subpackets = get_total_length_of_subpackets_in_bits(bits, start_index)
index = start_index + 6 + 1 + 15
while index < start_index + 6 + 1 + 15 + bit_length_of_subpackets:
subpacket, index = parse(bits, index)
packet.subpackets.append(subpacket)
else:
num_subpackets = get_number_of_subpackets(bits, start_index)
index = start_index + 6 + 1 + 11
for _ in range(num_subpackets):
subpacket, index = parse(bits, index)
packet.subpackets.append(subpacket)
return packet, index
def main():
with open(sys.argv[1]) as f:
bits = f.readline().strip()
bits = ''.join(f'{int(c, base=16):04b}' for c in bits)
packet, _ = parse(bits, 0)
print(packet.eval())
if __name__ == '__main__':
main()
| 30.282258 | 104 | 0.622636 | 536 | 3,755 | 4.100746 | 0.151119 | 0.076433 | 0.11465 | 0.038217 | 0.640127 | 0.576433 | 0.477707 | 0.400364 | 0.324386 | 0.324386 | 0 | 0.022556 | 0.256192 | 3,755 | 123 | 105 | 30.528455 | 0.764411 | 0.009854 | 0 | 0.162791 | 0 | 0.011628 | 0.043396 | 0.026415 | 0 | 0 | 0 | 0 | 0 | 1 | 0.197674 | false | 0 | 0.023256 | 0.116279 | 0.488372 | 0.011628 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
80e8e4f12d8b86345865f28ec633cf5984a0885b | 2,142 | py | Python | pyspark/test/bigdl/test_engine_env.py | twicoder/BigDL | f065db372e1c682fa4a7903e287bba21d5f46750 | [
"Apache-2.0"
] | 55 | 2018-01-12T01:43:29.000Z | 2021-03-09T02:35:56.000Z | pyspark/test/bigdl/test_engine_env.py | jason-hzw/BigDL | ef4f4137965147e2bc59e41f40c4acbb50eeda97 | [
"Apache-2.0"
] | 4 | 2018-01-15T07:34:41.000Z | 2018-01-16T05:46:12.000Z | pyspark/test/bigdl/test_engine_env.py | jason-hzw/BigDL | ef4f4137965147e2bc59e41f40c4acbb50eeda97 | [
"Apache-2.0"
] | 22 | 2018-01-15T14:18:15.000Z | 2019-12-16T18:51:33.000Z | #
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import pytest
import os
from bigdl.util.common import *
class TestEngineEnv():
def setup_method(self, method):
""" setup any state tied to the execution of the given method in a
class. setup_method is invoked for every test method of a class.
"""
pass
def teardown_method(self, method):
""" teardown any state that was previously setup with a setup_method
call.
"""
pass
def test___prepare_bigdl_env(self):
        # BigDL automatically executes the 'prepare_env()' function, which
        # includes '__prepare_bigdl_env()'. To check that no duplicate
        # "adding jar path" messages are logged, simply call 'prepare_env()' again
        # and verify that the log is correct and that the environment variables do not change.
from bigdl.util.engine import prepare_env
bigdl_jars_env_1 = os.environ.get("BIGDL_JARS", None)
spark_class_path_1 = os.environ.get("SPARK_CLASSPATH", None)
sys_path_1 = sys.path
prepare_env()
# there should be no duplicate messages about adding jar path to
# the environment var "BIGDL_JARS"
# environment variables should remain the same
bigdl_jars_env_2 = os.environ.get("BIGDL_JARS", None)
spark_class_path_2 = os.environ.get("SPARK_CLASSPATH", None)
sys_path_2 = sys.path
assert bigdl_jars_env_1 == bigdl_jars_env_2
assert spark_class_path_1 == spark_class_path_2
assert sys_path_1 == sys_path_2
if __name__ == '__main__':
pytest.main()
| 36.305085 | 85 | 0.694211 | 311 | 2,142 | 4.581994 | 0.427653 | 0.044211 | 0.033684 | 0.022456 | 0.122807 | 0.106667 | 0.106667 | 0.106667 | 0.054737 | 0 | 0 | 0.012217 | 0.235761 | 2,142 | 58 | 86 | 36.931034 | 0.858277 | 0.542951 | 0 | 0.090909 | 0 | 0 | 0.063666 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.136364 | false | 0.090909 | 0.181818 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
80ee0d6b8414bcb69cd0dca69b5279de3f08e3fc | 3,846 | py | Python | ex1/owais/imu_exercise.py | balintmaci/drone_intro_exercises | 1d8b839fecd6b0c5e33210b9a88fd741a71034cc | [
"Unlicense"
] | null | null | null | ex1/owais/imu_exercise.py | balintmaci/drone_intro_exercises | 1d8b839fecd6b0c5e33210b9a88fd741a71034cc | [
"Unlicense"
] | null | null | null | ex1/owais/imu_exercise.py | balintmaci/drone_intro_exercises | 1d8b839fecd6b0c5e33210b9a88fd741a71034cc | [
"Unlicense"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
# IMU exercise
# Copyright (c) 2015-2020 Kjeld Jensen kjen@mmmi.sdu.dk kj@kjen.dk
##### Insert initialize code below ###################
## Uncomment the file to read ##
fileName = 'imu_razor_data_static.txt'
#fileName = 'imu_razor_data_pitch_55deg.txt'
#fileName = 'imu_razor_data_roll_65deg.txt'
#fileName = 'imu_razor_data_yaw_90deg.txt'
## IMU type
#imuType = 'vectornav_vn100'
imuType = 'sparkfun_razor'
## Variables for plotting ##
showPlot = True
plotData = []
## Initialize your variables here ##
myValue = 0.0
######################################################
# import libraries
from math import pi, sqrt, atan2
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter, freqz #For Low pass filter
#####Filter function##################
def butter_lowpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = butter(order, normal_cutoff, btype='low', analog=False)
return b, a
def butter_lowpass_filter(data, cutoff, fs, order=5):
b, a = butter_lowpass(cutoff, fs, order=order)
y = lfilter(b, a, data)
return y
######################################
###### Filter Parameters #############
order = 6
fs = 30.0 # sample rate, Hz
cutoff = 3.667
######################################
# open the imu data file
f = open (fileName, "r")
# initialize variables
count = 0
# looping through file
for line in f:
count += 1
# split the line into CSV formatted data
line = line.replace ('*',',') # make the checkum another csv value
csv = line.split(',')
# keep track of the timestamps
ts_recv = float(csv[0])
if count == 1:
ts_now = ts_recv # only the first time
ts_prev = ts_now
ts_now = ts_recv
if imuType == 'sparkfun_razor':
# import data from a SparkFun Razor IMU (SDU firmware)
acc_x = int(csv[2]) / 1000.0 * 4 * 9.82;
acc_y = int(csv[3]) / 1000.0 * 4 * 9.82;
acc_z = int(csv[4]) / 1000.0 * 4 * 9.82;
gyro_x = int(csv[5]) * 1/14.375 * pi/180.0;
gyro_y = int(csv[6]) * 1/14.375 * pi/180.0;
gyro_z = int(csv[7]) * 1/14.375 * pi/180.0;
elif imuType == 'vectornav_vn100':
# import data from a VectorNav VN-100 configured to output $VNQMR
acc_x = float(csv[9])
acc_y = float(csv[10])
acc_z = float(csv[11])
gyro_x = float(csv[12])
gyro_y = float(csv[13])
gyro_z = float(csv[14])
##### Insert loop code below #########################
# Variables available
# ----------------------------------------------------
# count Current number of updates
# ts_prev Time stamp at the previous update
# ts_now Time stamp at this update
# acc_x Acceleration measured along the x axis
# acc_y Acceleration measured along the y axis
# acc_z Acceleration measured along the z axis
# gyro_x Angular velocity measured about the x axis
# gyro_y Angular velocity measured about the y axis
# gyro_z Angular velocity measured about the z axis
## Insert your code here ##
#3.2.1
#myValue=atan2((acc_y),sqrt((pow(acc_x,2))+(pow(acc_z,2))))
#3.2.2
#myValue=atan2((-acc_x),sqrt(acc_z))
#3.2.3
#myValue=atan2((acc_y),sqrt((pow(acc_x,2))+(pow(acc_z,2))))
#myValue=atan2((-acc_x),sqrt(acc_z))
#3.2.4
#myValue=atan2((acc_y),sqrt((pow(acc_x,2))+(pow(acc_z,2))))
#3.3.1
#myValue= myValue + gyro_z*(ts_now-ts_prev)
#3.3.2
myValue= myValue + gyro_z*(ts_now-ts_prev)- (0.00045 * pi/180)
#3.3.3
#myValue = pitch # relevant for the first exercise, then change this.
# in order to show a plot use this function to append your value to a list:
plotData.append (myValue*180.0/pi)
#plotData2 = butter_lowpass_filter(plotData, cutoff, fs, order)
######################################################
# closing the file
f.close()
# show the plot
if showPlot == True:
plt.plot(plotData)
plt.savefig('imu_exercise_plot.png')
plt.show()
| 23.888199 | 76 | 0.619345 | 597 | 3,846 | 3.864322 | 0.309883 | 0.013871 | 0.015171 | 0.034677 | 0.2228 | 0.126138 | 0.110533 | 0.096662 | 0.070655 | 0.070655 | 0 | 0.051306 | 0.173947 | 3,846 | 160 | 77 | 24.0375 | 0.67485 | 0.483099 | 0 | 0.038462 | 0 | 0 | 0.058608 | 0.028083 | 0 | 0 | 0 | 0.00625 | 0 | 1 | 0.038462 | false | 0.057692 | 0.057692 | 0 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
80ef3cf60e112d79c054dc1061b480da77a25354 | 1,148 | py | Python | theroot/users_bundle/models/address.py | Deviad/Adhesive | a7eb5140c4e5de783aca24ea935b3bf00a44f3e1 | [
"MIT"
] | null | null | null | theroot/users_bundle/models/address.py | Deviad/Adhesive | a7eb5140c4e5de783aca24ea935b3bf00a44f3e1 | [
"MIT"
] | null | null | null | theroot/users_bundle/models/address.py | Deviad/Adhesive | a7eb5140c4e5de783aca24ea935b3bf00a44f3e1 | [
"MIT"
] | null | null | null | from theroot.db import *
from theroot.users_bundle.models.user_info import address_user_table
class Address(db.Model):
__tablename__ = 'addresses'
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
address_line = db.Column(db.String(255), unique=False, nullable=False)
zip = db.Column(db.String(255), unique=False, nullable=True)
country = db.Column(db.String(255), unique=False, nullable=False)
geohash = db.Column(db.String(255), unique=False, nullable=False)
user_info = db.relationship("UserInfo", secondary=address_user_table, back_populates="addresses")
def __init__(self, address, country, geohash, the_zip=None):
self.address_line = address
self.country = country
self.geohash = geohash
self.zip = the_zip
def __repr__(self):
return "<User (id='%r', address_line='%r', country='%r', geohash='%r', zip='%r', user_info='%r')>" \
% (self.id, self.address_line, self.country, self.geohash, self.zip, self.user_info)
def as_dict(self):
return {c.name: getattr(self, c.name) for c in self.__table__.columns} | 44.153846 | 108 | 0.675958 | 157 | 1,148 | 4.726115 | 0.33121 | 0.053908 | 0.067385 | 0.086253 | 0.225067 | 0.225067 | 0.225067 | 0.225067 | 0.173854 | 0 | 0 | 0.012834 | 0.18554 | 1,148 | 26 | 109 | 44.153846 | 0.780749 | 0 | 0 | 0 | 0 | 0.05 | 0.100087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.1 | 0.1 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
80f1fcacc7763df5460142961dd40a1c5c44d6a2 | 160 | py | Python | settings.py | ErikRichardS/mnist-ann | 3fb34a25ec41177d34445d2ccda6cf42b7d4175e | [
"MIT"
] | null | null | null | settings.py | ErikRichardS/mnist-ann | 3fb34a25ec41177d34445d2ccda6cf42b7d4175e | [
"MIT"
] | null | null | null | settings.py | ErikRichardS/mnist-ann | 3fb34a25ec41177d34445d2ccda6cf42b7d4175e | [
"MIT"
] | null | null | null | NR_CLASSES = 10
hyperparameters = {
"number-epochs" : 30,
"batch-size" : 100,
"learning-rate" : 0.005,
"weight-decay" : 1e-9,
"learning-decay" : 1e-3
}
| 14.545455 | 25 | 0.61875 | 22 | 160 | 4.454545 | 0.863636 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 0.1875 | 160 | 11 | 26 | 14.545455 | 0.638462 | 0 | 0 | 0 | 0 | 0 | 0.385093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
80f51317fba911ed237d54c4c7c39490d353795f | 903 | py | Python | 03 Prime Number .py | yoursamlan/FunWithNumbers | 15e139a7c7d56f0553ef63446ab08f68c8262631 | [
"MIT"
] | null | null | null | 03 Prime Number .py | yoursamlan/FunWithNumbers | 15e139a7c7d56f0553ef63446ab08f68c8262631 | [
"MIT"
] | null | null | null | 03 Prime Number .py | yoursamlan/FunWithNumbers | 15e139a7c7d56f0553ef63446ab08f68c8262631 | [
"MIT"
] | null | null | null | # A prime number is a positive integer greater than one, that has no positive integer factors except one and itself.
# Since we have already dealt with the number of factors of a number, I'm going to use that idea to find prime numbers.
# A prime number has exactly two factors: 1 and itself.
# So the number of factors of a prime number is always 2. We will use this logic to test for primality.
# After that, we will find all prime numbers up to a certain limit.
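# For example, 6 has four factors (1, 2, 3, 6), so it is not prime; 7 has exactly two factors (1 and 7), so it is prime.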
def factors(n):
flist = []
for i in range(1,n+1):
if n%i == 0:
flist.append(i)
return flist
def numfact(num):
fno = []
for i in range(1,num+1):
fno.append(len(factors(i)))
return fno
def is_prime(m):
q = len(factors(m))
if q == 2:
return True
else:
return False
limit = int(input("Enter the limit: "))
for q in range(limit):
if is_prime(q):
print(q)
| 28.21875 | 123 | 0.635659 | 153 | 903 | 3.738562 | 0.45098 | 0.076923 | 0.041958 | 0.048951 | 0.104895 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012232 | 0.275748 | 903 | 31 | 124 | 29.129032 | 0.862385 | 0 | 0 | 0 | 0 | 0 | 0.036717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
80f9553aa2281baf010bcd189b89d3921013e8ac | 1,903 | py | Python | upload2db/upload2db.py | rhoerbe/eu23enerwatch | 2749f0d3314580fa9df3251a2151817ec8c38d9c | [
"MIT"
] | null | null | null | upload2db/upload2db.py | rhoerbe/eu23enerwatch | 2749f0d3314580fa9df3251a2151817ec8c38d9c | [
"MIT"
] | null | null | null | upload2db/upload2db.py | rhoerbe/eu23enerwatch | 2749f0d3314580fa9df3251a2151817ec8c38d9c | [
"MIT"
] | null | null | null | """
Upload samples into database
"""
import os
import sys
import psycopg2
from pathlib import Path
import constants
def main():
password = os.environ['PG_PASSWD']
conn = psycopg2.connect(host="dc.idn.local", dbname="eu23enerwatch", user="eu23enerwatch", password=password)
logdir = Path(sys.argv[1])
for fpath in logdir.rglob('*'):
if fpath.is_file() and not fpath.name.startswith('done_'):
with open(fpath) as fd:
row_values: dict = read_sample(fd)
write_db(conn.cursor(), fpath.name, row_values)
rename_inputfile(fpath)
conn.commit()
conn.close()
def read_sample(fd) -> dict:
row_values = {}
for line in fd.readlines():
s_id, value = line.split()
s_name = constants.sensor_id[s_id]
s_location = constants.sensor_loc[s_name]
row_values[s_location] = round(int(value)/1000, 1)
return row_values
def write_db(cursor, sampletime: str, row_values: dict):
sampletime_edited = sampletime.replace('_', ':')
sql = f"""
INSERT INTO samples (
sampletime,
Kellerabluft,
Ofenvorlauf,
EGabluft,
Boiler,
Puffer,
OGabluft,
FBHvorlauf,
FBHruecklauf
)
VALUES (
'{sampletime_edited}',
{row_values.get('Kellerabluft', -99)},
{row_values.get('Ofenvorlauf', -99)},
{row_values.get('EGabluft', -99)},
{row_values.get('Boiler', -99)},
{row_values.get('Puffer', -99)},
{row_values.get('OGabluft', -99)},
{row_values.get('FBHvorlauf', -99)},
{row_values.get('FBHruecklauf', -99)}
)
"""
    try:
        # guard with a savepoint so a duplicate key does not abort the whole
        # transaction and break every later INSERT before the final commit
        cursor.execute("SAVEPOINT before_insert")
        cursor.execute(sql)
    except psycopg2.errors.UniqueViolation:
        cursor.execute("ROLLBACK TO SAVEPOINT before_insert")
def rename_inputfile(fpath: Path):
newpath = Path(fpath.parent, 'done_' + str(fpath.name))
fpath.rename(newpath)
main() | 26.068493 | 113 | 0.601682 | 221 | 1,903 | 5.022624 | 0.434389 | 0.113514 | 0.086486 | 0.088288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020626 | 0.261167 | 1,903 | 73 | 114 | 26.068493 | 0.768848 | 0.014714 | 0 | 0 | 0 | 0 | 0.359743 | 0.132227 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0.050847 | 0.084746 | 0 | 0.169492 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0388b46edfbba0396db6a8d52b25d94afdf26576 | 2,045 | py | Python | block.py | IgorReshetnyak/Statistics | f2f876a679389a7ecc4f24f23ca3f8aabd6a2604 | [
"MIT"
] | null | null | null | block.py | IgorReshetnyak/Statistics | f2f876a679389a7ecc4f24f23ca3f8aabd6a2604 | [
"MIT"
] | null | null | null | block.py | IgorReshetnyak/Statistics | f2f876a679389a7ecc4f24f23ca3f8aabd6a2604 | [
"MIT"
] | null | null | null |
"""Blocking analysis
Prints the running average, applies the blocking (bunching) scheme,
and computes the running error.
Takes the filename of the data to analyze as a command-line argument.
Igor Reshetnyak 2017
"""
import math,sys,pickle,os.path,pylab,time
if len(sys.argv)<2 :
print 'No file to analyze'
exit()
datafile=sys.argv[1]
file_name=datafile+''
if os.path.isfile(file_name)==False:
print 'file does not exist'
exit()
input=open(file_name,'r')
#samples=pickle.load(input)
samples=[]
for line in input:
data=line.split()
samples.append(float(data[1]))
input.close()
N=len(samples)
print N
#The first algorithm
def AvandError(sample,N):
Av=sum(sample)/float(N)
Error=math.sqrt(sum([(x-Av)**2 for x in sample]))/float(N)
return Av,Error
Av,Error=AvandError(samples,N)
#The bunching algorithm
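# Note: each bunching step averages consecutive pairs of samples; the resulting error
# estimate should plateau once the block length exceeds the correlation time of the data.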
def makebunch(sample):
new_list=[]
while len(sample)>1:
x=sample.pop(0)
y=sample.pop(0)
new_list.append((x+y)/2.)
return new_list
sample1=samples[:]
Avs2=[]
Errors2=[]
step=0
sample1=makebunch(sample1)
while len(sample1)>4:
print step
step+=1
N2=len(sample1)
Av2,Error2=AvandError(sample1,N2)
Avs2.append(Av2)
Errors2.append(Error2)
sample1=makebunch(sample1)
pylab.plot(range(1,step+1),Errors2,'ro')
pylab.axhline(y=Error,color='b')
pylab.axis([1,step,0,2*max(Errors2)])
pylab.xlabel('Bunching step')
pylab.ylabel('Error')
pylab.savefig(datafile+'Error2.png')
pylab.clf()
#Real time evaluation
Avs3=[samples[0]]
Errors3=[samples[0]**2]
for i in range(1,N):
Avs3.append(samples[i]+Avs3[i-1])
Errors3.append(samples[i]**2+Errors3[i-1])
Avs3=[Avs3[i]/float(i+1) for i in range(N)]
Errors3=[math.sqrt((Errors3[i]/float(i+1)-Avs3[i]**2)/float(i+1)) for i in range(N)]
pylab.plot(range(N),Errors3,'r')
pylab.axhline(y=Error,color='b')
pylab.axis([1,N-1,0,2*max(Errors3)])
pylab.xlabel('Step')
pylab.ylabel('Error')
pylab.savefig(datafile+'Error3.png')
pylab.clf()
pylab.plot(range(N),Avs3,'r')
pylab.axhline(y=Av,color='b')
#pylab.axis([1,N-1,1.1*min(Avs3),1.1*max(Avs3)])
pylab.xlabel('Step')
pylab.ylabel('Average')
pylab.savefig(datafile+'Average3.png')
pylab.clf()
| 18.590909 | 84 | 0.711002 | 352 | 2,045 | 4.113636 | 0.284091 | 0.006906 | 0.029006 | 0.031077 | 0.167818 | 0.142265 | 0.142265 | 0.073204 | 0.046961 | 0 | 0 | 0.043243 | 0.095355 | 2,045 | 109 | 85 | 18.761468 | 0.739459 | 0.065526 | 0 | 0.188406 | 0 | 0 | 0.065304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.014493 | null | null | 0.057971 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
039115099b0acd0dd43e432d85e08424f1e0930e | 1,493 | py | Python | dom/metadata.py | Starwort/domgen | 598f57c2d365cdef353ed1b373a274715c896867 | [
"MIT"
] | null | null | null | dom/metadata.py | Starwort/domgen | 598f57c2d365cdef353ed1b373a274715c896867 | [
"MIT"
] | null | null | null | dom/metadata.py | Starwort/domgen | 598f57c2d365cdef353ed1b373a274715c896867 | [
"MIT"
] | null | null | null | import functools
from .base_classes import Container, Void
class BaseURL(Void):
"""The HTML `<base>` element specifies the base URL to use for *all*
relative URLs in a document. There can be only one `<base>` element in a
document.
"""
__slots__ = ()
tag = "base"
Base = BaseURL
class ExternalResourceLink(Void):
"""The HTML External Resource Link element (`<link>`) specifies
relationships between the current document and an external resource.
This element is most commonly used to link to stylesheets, but is
also used to establish site icons (both "favicon" style icons and
icons for the home screen and apps on mobile devices) among other
things.
"""
__slots__ = ()
tag = "link"
Link = ExternalResourceLink
ExternalStyleSheet = functools.partial(ExternalResourceLink, rel="stylesheet")
class Meta(Void):
"""The HTML `<meta>` element represents metadata that cannot be
represented by other HTML meta-related elements, like `<base>`,
`<link>`, `<script>`, `<style>` or `<title>`.
"""
__slots__ = ()
tag = "meta"
class Style(Container):
"""The HTML `<style>` element contains style information for a
document, or part of a document.
"""
__slots__ = ()
tag = "style"
class Title(Container):
"""The HTML Title element (`<title>`) defines the document's title
that is shown in a browser's title bar or a page's tab.
"""
__slots__ = ()
tag = "title"
| 23.698413 | 78 | 0.663094 | 190 | 1,493 | 5.1 | 0.473684 | 0.03612 | 0.034056 | 0.035088 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231078 | 1,493 | 62 | 79 | 24.080645 | 0.844077 | 0.58138 | 0 | 0.25 | 0 | 0 | 0.060377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.85 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
03940eed9922e68c7391552d1a5597a7b61786aa | 429 | py | Python | src/waldur_slurm/migrations/0006_allocationusage_deposit_usage.py | geant-multicloud/MCMS-mastermind | 81333180f5e56a0bc88d7dad448505448e01f24e | [
"MIT"
] | 26 | 2017-10-18T13:49:58.000Z | 2021-09-19T04:44:09.000Z | src/waldur_slurm/migrations/0006_allocationusage_deposit_usage.py | geant-multicloud/MCMS-mastermind | 81333180f5e56a0bc88d7dad448505448e01f24e | [
"MIT"
] | 14 | 2018-12-10T14:14:51.000Z | 2021-06-07T10:33:39.000Z | src/waldur_slurm/migrations/0006_allocationusage_deposit_usage.py | geant-multicloud/MCMS-mastermind | 81333180f5e56a0bc88d7dad448505448e01f24e | [
"MIT"
] | 32 | 2017-09-24T03:10:45.000Z | 2021-10-16T16:41:09.000Z | # Generated by Django 1.11.7 on 2018-03-05 22:40
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('waldur_slurm', '0005_add_deposit'),
]
operations = [
migrations.AddField(
model_name='allocationusage',
name='deposit_usage',
field=models.DecimalField(decimal_places=2, default=0, max_digits=8),
),
]
| 23.833333 | 81 | 0.62704 | 48 | 429 | 5.458333 | 0.854167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072555 | 0.261072 | 429 | 17 | 82 | 25.235294 | 0.753943 | 0.107226 | 0 | 0 | 1 | 0 | 0.146982 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03972df064d30776ab61ace4cb50e0a249055d77 | 6,032 | py | Python | tests/tests_basic.py | mehrdad-shokri/fluxcapacitor | b4e646e3048a317b33f5a84741b7962fd69cdc61 | [
"MIT"
] | 648 | 2015-01-08T18:05:33.000Z | 2022-03-30T04:35:08.000Z | tests/tests_basic.py | mehrdad-shokri/fluxcapacitor | b4e646e3048a317b33f5a84741b7962fd69cdc61 | [
"MIT"
] | 4 | 2017-08-24T18:05:01.000Z | 2018-01-19T09:06:06.000Z | tests/tests_basic.py | mehrdad-shokri/fluxcapacitor | b4e646e3048a317b33f5a84741b7962fd69cdc61 | [
"MIT"
] | 21 | 2015-05-12T13:03:15.000Z | 2022-03-12T15:12:26.000Z | import os
import tests
from tests import at_most, compile, savefile
import subprocess
node_present = True
erlang_present = True
if os.system("node -v >/dev/null 2>/dev/null") != 0:
print " [!] ignoring nodejs tests"
node_present = False
if (os.system("erl -version >/dev/null 2>/dev/null") != 0 or
os.system("which escript >/dev/null 2>/dev/null") != 0):
print " [!] ignoring erlang tests"
erlang_present = False
sleep_sort_script='''\
#!/bin/bash
echo "Unsorted: $*"
function f() {
sleep "$1"
echo -n "$1 "
}
while [ -n "$1" ]; do
f "$1" &
shift
done
wait
echo
'''
class SingleProcess(tests.TestCase):
@at_most(seconds=2)
def test_bash_sleep(self):
self.system("sleep 10")
@at_most(seconds=2)
def test_bash_bash_sleep(self):
self.system("bash -c 'sleep 120;'")
@at_most(seconds=2)
def test_python2_sleep(self):
self.system('python2 -c "import time; time.sleep(10)"')
@at_most(seconds=2)
def test_python2_select(self):
self.system('python2 -c "import select; select.select([],[],[], 10)"')
@at_most(seconds=2)
def test_python2_poll(self):
self.system('python2 -c "import select; select.poll().poll(10000)"')
@at_most(seconds=2)
def test_python2_epoll(self):
self.system('python2 -c "import select; select.epoll().poll(10000)"')
@at_most(seconds=2)
def test_node_epoll(self):
if node_present:
self.system('node -e "setTimeout(function(){},10000);"')
def test_bad_command(self):
self.system('command_that_doesnt exist',
returncode=127, ignore_stderr=True)
def test_return_status(self):
self.system('python2 -c "import sys; sys.exit(188)"', returncode=188)
self.system('python2 -c "import sys; sys.exit(-1)"', returncode=255)
@at_most(seconds=2)
@compile(code='''
#include <unistd.h>
int main() {
sleep(10);
return(0);
}''')
def test_c_sleep(self, compiled=None):
self.system(compiled)
@at_most(seconds=2)
@compile(code='''
#include <time.h>
int main() {
struct timespec ts = {1, 0};
nanosleep(&ts, NULL);
return(0);
}''')
def test_c_nanosleep(self, compiled=None):
self.system(compiled)
@at_most(seconds=5)
@savefile(suffix="erl", text='''\
#!/usr/bin/env escript
%%! -smp disable +A1 +K true -noinput
-export([main/1]).
main(_) ->
timer:sleep(10*1000),
halt(0).
''')
def test_erlang_sleep(self, filename=None):
if erlang_present:
self.system("escript %s" % (filename,))
@at_most(seconds=5)
@savefile(suffix="erl", text='''\
#!/usr/bin/env escript
%%! -smp enable +A30 +K true -noinput
-export([main/1]).
main(_) ->
timer:sleep(10*1000),
halt(0).
''')
def test_erlang_sleep_smp(self, filename=None):
if erlang_present:
self.system("escript %s" % (filename,))
@at_most(seconds=5)
@savefile(suffix="erl", text='''\
#!/usr/bin/env escript
%%! -smp enable +A30 +K false -noinput
-export([main/1]).
main(_) ->
timer:sleep(10*1000),
halt(0).
''')
def test_erlang_sleep_smp_no_epoll(self, filename=None):
if erlang_present:
self.system("escript %s" % (filename,))
@at_most(seconds=5)
@savefile(suffix="erl", text='''\
#!/usr/bin/env escript
%%! -smp disable +A1 +K true -noinput
-export([main/1]).
main(_) ->
self() ! msg,
proc(10),
receive
_ -> ok
end.
proc(0) ->
receive
_ -> halt(0)
end;
proc(N) ->
Pid = spawn(fun () -> proc(N-1) end),
receive
_ -> timer:sleep(1000),
Pid ! msg
end.
''')
def test_erlang_process_staircase(self, filename=None):
if erlang_present:
self.system("escript %s" % (filename,))
@at_most(seconds=2)
def test_perl_sleep(self):
self.system("perl -e 'sleep 10'")
@at_most(seconds=5)
@savefile(suffix="sh", text=sleep_sort_script)
def test_sleep_sort(self, filename=None):
self.system("bash %s 1 12 1231 123213 13212 > /dev/null" % (filename,))
@at_most(seconds=5)
@savefile(suffix="sh", text=sleep_sort_script)
    def test_sleep_sort_repeats(self, filename=None):
self.system("bash %s 5 3 6 3 6 3 1 4 7 > /dev/null" % (filename,))
@at_most(seconds=10)
def test_parallel_sleeps(self):
for i in range(10):
stdout = self.system(' -- '.join(['bash -c "date +%s"',
'bash -c "sleep 60; date +%s"',
'bash -c "sleep 120; date +%s"']),
capture_stdout=True)
a, b, c = [int(l) for l in stdout.split()]
assert 55 < (b - a) < 65, str(b-a)
assert 55 < (c - b) < 65, str(c-b)
assert 110 < (c - a) < 130, str(c-a)
@at_most(seconds=3)
def test_file_descriptor_leak(self):
out = subprocess.check_output("ls /proc/self/fd", shell=True)
normal_fds = len(out.split('\n'))
stdout = self.system(' -- '.join(['sleep 1',
'sleep 60',
'sleep 120',
'bash -c "sleep 180; ls /proc/self/fd"']),
capture_stdout=True)
after_fork_fds = len(stdout.split('\n'))
assert normal_fds == after_fork_fds
@at_most(seconds=4)
def test_2546_wraparound(self):
if os.uname()[4] == "x86_64":
stdout = self.system("bash -c 'for i in `seq 1 55`; do sleep 315360000; done; date +%Y'",
capture_stdout=True)
assert int(stdout) > 2500
if __name__ == '__main__':
import unittest
unittest.main()
| 27.418182 | 101 | 0.54443 | 771 | 6,032 | 4.11284 | 0.228275 | 0.069379 | 0.077893 | 0.04415 | 0.540839 | 0.506465 | 0.477767 | 0.438978 | 0.31315 | 0.287606 | 0 | 0.048925 | 0.298574 | 6,032 | 219 | 102 | 27.543379 | 0.700544 | 0 | 0 | 0.41573 | 0 | 0.005618 | 0.362401 | 0.028515 | 0 | 0 | 0 | 0 | 0.02809 | 0 | null | null | 0 | 0.061798 | null | null | 0.011236 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
039b50ec23666881fdd70d72494a3f55144b2adf | 444 | py | Python | tests/test_root.py | oclyke-dev/blue-heron | 05d59b66ff1cb10a40e0fb01ee65f778a7c157a8 | [
"MIT"
] | null | null | null | tests/test_root.py | oclyke-dev/blue-heron | 05d59b66ff1cb10a40e0fb01ee65f778a7c157a8 | [
"MIT"
] | null | null | null | tests/test_root.py | oclyke-dev/blue-heron | 05d59b66ff1cb10a40e0fb01ee65f778a7c157a8 | [
"MIT"
] | null | null | null |
import blue_heron
import pytest
from pathlib import Path
from lxml import etree as ET
from blue_heron import Root, Drawing
@pytest.fixture(scope='module')
def test_board():
with open(Path(__file__).parent/'data/ArtemisDevKit.brd', 'r') as f:
root = ET.parse(f).getroot()
yield root
def test_get_drawing(test_board):
root = Root(test_board)
drawing = root.drawing
assert type(drawing) == type(blue_heron.drawing.Drawing(None))
| 23.368421 | 70 | 0.747748 | 68 | 444 | 4.705882 | 0.514706 | 0.084375 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13964 | 444 | 18 | 71 | 24.666667 | 0.837696 | 0 | 0 | 0 | 0 | 0 | 0.065463 | 0.049661 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.142857 | false | 0 | 0.357143 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
03a03bedee87cbb3ff98b4472ee9a56d99d7f812 | 7,070 | py | Python | myconnectome/rsfmri/mk_connectome_figures.py | poldrack/myconnectome | 201f414b3165894d6fe0be0677c8a58f6d161948 | [
"MIT"
] | 28 | 2015-04-02T16:43:14.000Z | 2020-06-17T20:04:26.000Z | myconnectome/rsfmri/mk_connectome_figures.py | poldrack/myconnectome | 201f414b3165894d6fe0be0677c8a58f6d161948 | [
"MIT"
] | 11 | 2015-05-19T02:57:22.000Z | 2017-03-17T17:36:16.000Z | myconnectome/rsfmri/mk_connectome_figures.py | poldrack/myconnectome | 201f414b3165894d6fe0be0677c8a58f6d161948 | [
"MIT"
] | 10 | 2015-05-21T17:01:26.000Z | 2020-11-11T04:28:08.000Z | # -*- coding: utf-8 -*-
"""
Make images for the connectivity adjacency matrix.
Also compute within/between-hemisphere statistics.
Created on Sun Jun 21 09:19:06 2015
@author: poldrack
"""
import os
import numpy
import nilearn.plotting
import scipy.stats
from myconnectome.utils.get_parcel_coords import get_parcel_coords
import matplotlib.pyplot as plt
def get_mean_connection_distance(input):
from scipy.spatial.distance import euclidean
adj=input.copy()
adj[numpy.tril_indices(adj.shape[0])]=0
coords=get_parcel_coords()
dist=[]
hits=numpy.where(adj>0)
for h in range(hits[0].shape[0]):
dist.append(euclidean(coords[hits[0][h]],coords[hits[1][h]]))
return numpy.mean(dist)
def r_to_z(r):
# fisher transform
z=0.5*numpy.log((1.0+r)/(1.0-r))
z[numpy.where(numpy.isinf(z))]=0
z[numpy.where(numpy.isnan(z))]=0
return z
def z_to_r(z):
# inverse transform
return (numpy.exp(2.0*z) - 1)/(numpy.exp(2.0*z) + 1)
basedir=os.environ['MYCONNECTOME_DIR']
def mk_connectome_figures(use_abs_corr=False,thresh=0.0025):
dtidata=numpy.loadtxt(os.path.join(basedir,'diffusion/tracksumm_distcorr.txt'),skiprows=1)
dtidata=dtidata[:,1:]
dtidata=dtidata+dtidata.T
dtibin=dtidata>0
rsfmridata=numpy.load(os.path.join(basedir,'rsfmri/corrdata.npy'))
rsfmridata=r_to_z(rsfmridata)
meancorr_z=numpy.mean(rsfmridata,0)
meancorr=z_to_r(meancorr_z)
if use_abs_corr:
meancorr=numpy.abs(meancorr)
meancorr[numpy.isnan(meancorr)]=0
adjsize=630
utr=numpy.triu_indices(adjsize,1)
meandti=dtidata[utr]
task_connectome=numpy.loadtxt(os.path.join(basedir,'taskfmri/task_connectome.txt'))
taskdata=task_connectome[utr]
l2data=numpy.load(os.path.join(basedir,'rsfmri/l2_utr_data.npy'))
l2mean=z_to_r(numpy.mean(r_to_z(l2data),0))
l1data=numpy.load(os.path.join(basedir,'rsfmri/quic_utr_data_0.1.npy'))
l1mean=z_to_r(numpy.mean(r_to_z(l1data),0))
rsthresh=meancorr > scipy.stats.scoreatpercentile(meancorr,100-100*thresh)
dtithresh=meandti > scipy.stats.scoreatpercentile(meandti,100-100*thresh)
taskthresh=taskdata > scipy.stats.scoreatpercentile(taskdata,100-100*thresh)
l2thresh=l2mean > scipy.stats.scoreatpercentile(l2mean,100-100*thresh)
l1thresh=l1mean > scipy.stats.scoreatpercentile(l1mean,100-100*thresh)
rsadj=numpy.zeros((adjsize,adjsize))
l2adj=numpy.zeros((adjsize,adjsize))
l1adj=numpy.zeros((adjsize,adjsize))
dtiadj=numpy.zeros((adjsize,adjsize))
taskadj=numpy.zeros((adjsize,adjsize))
rsadj[utr]=rsthresh
l2adj[utr]=l2thresh
l1adj[utr]=l1thresh
dtiadj[utr]=dtithresh
taskadj[utr]=taskthresh
rsadj=rsadj+rsadj.T
l2adj=l2adj+l2adj.T
l1adj=l1adj+l1adj.T
dtiadj=dtiadj+dtiadj.T
taskadj=taskadj+taskadj.T
coords=get_parcel_coords()
hemis=numpy.zeros((630,630))
# get inter/intrahemispheric marker - 1=intra, -1=inter
for i in range(630):
for j in range(i+1,630):
if numpy.sign(coords[i,0])==numpy.sign(coords[j,0]):
hemis[i,j]=1
else:
hemis[i,j]=-1
hemisutr=hemis[utr]
inter=numpy.where(hemisutr==-1)
intra=numpy.where(hemisutr==1)
densities=[0.001,0.005,0.01,0.025,0.05,0.075,0.1]
hemisdata=numpy.zeros((len(densities),5))
for d in range(len(densities)):
rsthresh=meancorr > scipy.stats.scoreatpercentile(meancorr,100-100*densities[d])
hemisdata[d,0]=numpy.sum(rsthresh[inter])/float(numpy.sum(rsthresh))
dtithresh=meandti > scipy.stats.scoreatpercentile(meandti,100-100*densities[d])
hemisdata[d,1]=numpy.sum(dtithresh[inter])/float(numpy.sum(dtithresh))
taskthresh=taskdata > scipy.stats.scoreatpercentile(taskdata,100-100*densities[d])
hemisdata[d,2]=numpy.sum(taskthresh[inter])/float(numpy.sum(taskthresh))
l2thresh=l2mean > scipy.stats.scoreatpercentile(l2mean,100-100*densities[d])
hemisdata[d,3]=numpy.sum(l2thresh[inter])/float(numpy.sum(l2thresh))
l1thresh=l1mean > scipy.stats.scoreatpercentile(l1mean,100-100*densities[d])
hemisdata[d,4]=numpy.sum(l1thresh[inter])/float(numpy.sum(l1thresh))
print hemisdata
plt.plot(hemisdata,linewidth=2)
plt.legend(['Full correlation','DTI','Task','L1 partial','L2 partial'],loc=5)
plt.xticks(range(len(densities)),densities*100)
plt.xlabel('Density (proportion of possible connections)',fontsize=14)
plt.ylabel('Proportion of connections that are interhemispheric',fontsize=14)
plt.savefig(os.path.join(basedir,'rsfmri/interhemispheric_connection_plot.pdf'))
print 'mean connection distances (%0.04f density)'%thresh
print 'fullcorr:',get_mean_connection_distance(rsadj)
print 'l1 pcorr:',get_mean_connection_distance(l1adj)
print 'l2 pcorr:',get_mean_connection_distance(l2adj)
print 'task corr:',get_mean_connection_distance(taskadj)
print 'dti:',get_mean_connection_distance(dtiadj)
dti_sum=numpy.sum(dtiadj,0)
tmp=dtiadj[dti_sum>0,:]
dtiadj_reduced=tmp[:,dti_sum>0]
#dtiadj_reduced=dtiadj_reduced+dtiadj_reduced.T
nilearn.plotting.plot_connectome(dtiadj_reduced,coords[dti_sum>0,:],node_size=2,
output_file=os.path.join(basedir,'diffusion/dti_connectome_thresh%f.pdf'%thresh))
rs_sum=numpy.sum(rsadj,0)
rsadj_match=rsadj*0.01 + rsadj*dtibin*0.8 # add one to matches to change edge color
tmp=rsadj_match[rs_sum>0,:]
rsadj_reduced=tmp[:,rs_sum>0]
#rsadj_reduced=rsadj_reduced+rsadj_reduced.T
nilearn.plotting.plot_connectome(rsadj_reduced,coords[rs_sum>0,:],node_size=2,
edge_vmin=0,edge_vmax=1,edge_cmap='seismic',edge_kwargs={'linewidth':1},
output_file=os.path.join(basedir,'rsfmri/rsfmri_corr_connectome_thresh%f.pdf'%thresh))
l2_sum=numpy.sum(l2adj,0)
l2adj_match=l2adj*0.01 + l2adj*dtibin*0.8 # add one to matches to change edge color
tmp=l2adj_match[l2_sum>0,:]
l2adj_reduced=tmp[:,l2_sum>0]
#l2adj_reduced=l2adj_reduced+l2adj_reduced.T
nilearn.plotting.plot_connectome(l2adj_reduced,coords[l2_sum>0,:],node_size=2,
edge_vmin=0,edge_vmax=1,edge_cmap='seismic',edge_kwargs={'linewidth':1},
output_file=os.path.join(basedir,'rsfmri/rsfmri_l2_connectome_thresh%f.pdf'%thresh))
task_sum=numpy.sum(taskadj,0)
taskadj_match=taskadj*0.01 + taskadj*dtibin*0.8 # add one to matches to change edge color
tmp=taskadj_match[task_sum>0,:]
taskadj_reduced=tmp[:,task_sum>0]
#taskadj_reduced=taskadj_reduced+taskadj_reduced.T
nilearn.plotting.plot_connectome(taskadj_reduced,coords[task_sum>0,:],node_size=2,
edge_vmin=0,edge_vmax=1,edge_cmap='seismic',edge_kwargs={'linewidth':1},
output_file=os.path.join(basedir,'taskfmri/task_connectome_thresh%f.pdf'%thresh))
if __name__ == "__main__":
mk_connectome_figures()
| 40.170455 | 123 | 0.698727 | 1,016 | 7,070 | 4.714567 | 0.204724 | 0.023382 | 0.020877 | 0.035491 | 0.393111 | 0.317119 | 0.246138 | 0.213361 | 0.090188 | 0.090188 | 0 | 0.046445 | 0.162518 | 7,070 | 175 | 124 | 40.4 | 0.76254 | 0.058133 | 0 | 0.038462 | 0 | 0 | 0.095509 | 0.047524 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.053846 | null | null | 0.053846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03aa122f7d46f999001e9b311a609665d78ad637 | 527 | py | Python | snoop/data/management/commands/migratecollections.py | liquidinvestigations/hoover-snoop2 | 28e328401609f53fb56abaa4817619085aa3fbee | [
"MIT"
] | null | null | null | snoop/data/management/commands/migratecollections.py | liquidinvestigations/hoover-snoop2 | 28e328401609f53fb56abaa4817619085aa3fbee | [
"MIT"
] | 168 | 2019-11-07T12:38:07.000Z | 2021-04-19T09:53:51.000Z | snoop/data/management/commands/migratecollections.py | liquidinvestigations/hoover-snoop2 | 28e328401609f53fb56abaa4817619085aa3fbee | [
"MIT"
] | null | null | null | """Creates and migrates databases and indexes.
"""
from django.core.management.base import BaseCommand
from ... import collections
from ...logs import logging_for_management_command
class Command(BaseCommand):
help = "Create and migrate the collection databases"
def handle(self, *args, **options):
logging_for_management_command(options['verbosity'])
collections.create_databases()
collections.migrate_databases()
collections.create_es_indexes()
collections.create_roots()
| 27.736842 | 60 | 0.736243 | 57 | 527 | 6.614035 | 0.54386 | 0.135279 | 0.106101 | 0.143236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172676 | 527 | 18 | 61 | 29.277778 | 0.864679 | 0.081594 | 0 | 0 | 0 | 0 | 0.109015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
03bbae28c2cd4fb58b49c704d23f872cee7681d3 | 3,505 | py | Python | t.py | cmsirbu/gencfg | 5f201208ca55bdd2ddd67974129465d95ebf4af4 | [
"MIT"
] | 5 | 2016-03-09T19:50:54.000Z | 2018-10-12T03:05:23.000Z | t.py | cmsirbu/gencfg | 5f201208ca55bdd2ddd67974129465d95ebf4af4 | [
"MIT"
] | null | null | null | t.py | cmsirbu/gencfg | 5f201208ca55bdd2ddd67974129465d95ebf4af4 | [
"MIT"
] | 2 | 2019-06-28T10:34:52.000Z | 2019-09-16T23:56:49.000Z | #!/usr/bin/env python
"""A script that helps generate router configuration from templates.
"""
import os
import sys
import argparse
import csv
import jinja2
from jinja2 import meta
def get_template_var_list(config_template):
j2_env = jinja2.Environment(loader=jinja2.FileSystemLoader(searchpath='.'))
j2_template_source = j2_env.loader.get_source(j2_env, config_template)[0]
j2_parsed_content = j2_env.parse(j2_template_source)
return(meta.find_undeclared_variables(j2_parsed_content))
def generate_csv_header(config_template):
template_vars = sorted(list(get_template_var_list(config_template)))
pre, _ = os.path.splitext(config_template)
with open(pre + ".csv", "w") as csv_file:
csv_writer = csv.writer(csv_file)
csv_writer.writerow(template_vars)
print("Header variables saved to " + pre + ".csv")
def generate_config(config_template, config_data, config_outdir):
# init jinja2 environment
j2_env = jinja2.Environment(loader=jinja2.FileSystemLoader(searchpath='.'))
j2_template = j2_env.get_template(config_template)
# read csv data
totalrows = 0
with open(config_data) as csv_file:
# initialize reader object and protect against non-uniform csv files
# missing values will be empty strings
csv_reader = csv.DictReader(csv_file, restval="WARNING_VALUE_MISSING")
# check if all the template vars are found in the csv
if not all(x in csv_reader.fieldnames for x in get_template_var_list(config_template)):
sys.exit('Not all variables in {} are found in {}'.format(config_template, config_data))
# create config output dir
out_directory = os.path.join(os.path.dirname(config_template), config_outdir)
if not os.path.exists(out_directory):
os.makedirs(out_directory)
for row in csv_reader:
# render template for each row from the csv file and write it to disk
j2_rendered_template = j2_template.render(row)
out_filename = os.path.join(out_directory, "cfg-" + str(csv_reader.line_num-1))
with open(out_filename, mode="w") as out_file:
out_file.write(j2_rendered_template)
totalrows += 1
print("Generated {} files in {}/".format(totalrows, out_directory))
def main(arguments):
parser = argparse.ArgumentParser(description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('operation', help="gencfg, csvheader")
parser.add_argument('-t', '--template', help="config template file (jinja2)")
parser.add_argument('-d', '--data', help="config data file (csv)")
parser.add_argument('-o', '--outdir', help="output directory (default=configs)", default="configs")
args = parser.parse_args(arguments)
if args.operation == "gencfg":
if args.template and args.data:
generate_config(args.template, args.data, args.outdir)
else:
sys.exit("Template (-t) and data (-d) files must be specified.")
elif args.operation == "csvheader":
if args.template:
generate_csv_header(args.template)
else:
sys.exit("Template (-t) file must be specified.")
else:
sys.exit("Invalid operation. Use gencfg to apply data to a template or " +
"csvheader to extract variables from a template.")
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
| 38.097826 | 103 | 0.681598 | 457 | 3,505 | 5.015317 | 0.326039 | 0.06719 | 0.029668 | 0.02356 | 0.120419 | 0.102967 | 0.061082 | 0.061082 | 0.061082 | 0.061082 | 0 | 0.009807 | 0.214551 | 3,505 | 91 | 104 | 38.516484 | 0.822739 | 0.106419 | 0 | 0.084746 | 1 | 0 | 0.158756 | 0.006735 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.101695 | 0 | 0.169492 | 0.033898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03bce3dec0a0cfe68389b401fddaa0824f69003d | 366 | py | Python | Python/bench_2_1.py | nifty-swift/Nifty-benchmarks | 025128d6276a5dec0c89d1e464131c4e4dc22292 | [
"Apache-2.0"
] | 1 | 2018-03-28T05:51:21.000Z | 2018-03-28T05:51:21.000Z | Python/bench_2_1.py | nifty-swift/Nifty-benchmarks | 025128d6276a5dec0c89d1e464131c4e4dc22292 | [
"Apache-2.0"
] | null | null | null | Python/bench_2_1.py | nifty-swift/Nifty-benchmarks | 025128d6276a5dec0c89d1e464131c4e4dc22292 | [
"Apache-2.0"
] | null | null | null | import numpy as np
from time import time
def bench_2_1():
trials = 100
elements = 1000000
times = []
for i in range(trials):
start = time()
M = np.random.randint(1,999, size=elements)
t = time()-start
times.append(t)
print 'Python - Benchmark 2.1: Average time = {} milliseconds'.format(np.mean(times)*1000) | 19.263158 | 94 | 0.601093 | 51 | 366 | 4.27451 | 0.686275 | 0.018349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.278689 | 366 | 19 | 94 | 19.263158 | 0.742424 | 0 | 0 | 0 | 0 | 0 | 0.147139 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03bebfed119097ef096738e58538d61c95362c67 | 31,667 | py | Python | fake_spectra/rate_network.py | xiaohanzai/fake_spectra | 170b42ac7732eb4f299617a1049cd3eabecfa3a7 | [
"MIT"
] | null | null | null | fake_spectra/rate_network.py | xiaohanzai/fake_spectra | 170b42ac7732eb4f299617a1049cd3eabecfa3a7 | [
"MIT"
] | null | null | null | fake_spectra/rate_network.py | xiaohanzai/fake_spectra | 170b42ac7732eb4f299617a1049cd3eabecfa3a7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""A rate network for neutral hydrogen following
Katz, Weinberg & Hernquist 1996, eq. 28-32."""
import os.path
import math
import numpy as np
import scipy.interpolate as interp
import scipy.optimize
class RateNetwork(object):
"""A rate network for neutral hydrogen following
Katz, Weinberg & Hernquist 1996, astro-ph/9509107, eq. 28-32.
Most internal methods are CamelCapitalized and follow a convention that
they are named like the process and then the ion they refer to.
eg:
CollisionalExciteHe0 is the neutral Helium collisional excitation rate.
RecombHp is the recombination rate for ionized hydrogen.
Externally useful methods (the API) are named like get_*.
These are:
get_temp() - gets the temperature from the density and internal energy.
get_cooling_rate() - gets the total cooling rate from density and internal energy.
get_neutral_fraction() - gets the neutral fraction from the rate network given density and internal energy.
Two useful helper functions:
get_equilib_ne() - gets the equilibrium electron density.
get_ne_by_nh() - gets the above, divided by the hydrogen density (Gadget reports this as ElectronAbundance).
Constructor arguments:
redshift - the redshift at which to evaluate the cooling. Affects the photoionization rate,
the Inverse Compton cooling and the self shielding threshold.
photo_factor - Factor by which to multiply the UVB amplitude.
f_bar - Baryon fraction. Omega_b / Omega_cdm.
converge - Tolerance to which the rate network should be converged.
selfshield - Flag to enable self-shielding following Rahmati 2013
cool - which cooling rate coefficient table to use.
Supported are: KWH (original Gadget rates)
Nyx (rates used in Nyx (Lukic 2015))
Sherwood (rates used in Sherwood simulations (Bolton 2017))
Default is Sherwood
recomb - which recombination rate table to use.
Supported are: C92 (Cen 1992, the Gadget default)
V96 (Verner & Ferland 1996, more accurate rates).
B06 (Badnell 2006 rates, current cloudy defaults. Very similar to V96).
collisional - Flag to enable collisional ionizations.
treecool_file - File to read a UV background from. Matches format used by Gadget.
"""
def __init__(self,redshift, photo_factor = 1., f_bar = 0.17, converge = 1e-7, selfshield=True, cool="Sherwood", recomb="V96", collisional=True, treecool_file="data/TREECOOL_ep_2018p"):
if recomb == "V96":
self.recomb = RecombRatesVerner96()
elif recomb == "B06":
self.recomb = RecombRatesBadnell()
else:
self.recomb = RecombRatesCen92()
self.photo = PhotoRates(treecool_file=treecool_file)
self.photo_factor = photo_factor
self.f_bar = f_bar
if cool == "KWH":
self.cool = CoolingRatesKWH92()
elif cool == "Sherwood":
self.cool = CoolingRatesSherwood()
elif cool == "Nyx":
self.cool = CoolingRatesNyx()
else:
raise ValueError("Not supported")
#Extra helium reionization photoheating model
self.hub = 0.7
self.he_thresh = 10
self.he_amp = 1
self.he_exp = 0
self.he_model_on = False
#proton mass in g
self.protonmass = 1.67262178e-24
self.redshift = redshift
self.converge = converge
self.selfshield = selfshield
self.collisional = collisional
zz = [0, 1, 2, 3, 4, 5, 6, 7, 8]
#Tables for the self-shielding correction. Note these are not well-measured for z > 5!
gray_opac = [2.59e-18,2.37e-18,2.27e-18, 2.15e-18, 2.02e-18, 1.94e-18, 1.82e-18, 1.71e-18, 1.60e-18]
self.Gray_ss = interp.InterpolatedUnivariateSpline(zz, gray_opac)
def get_temp(self, density, ienergy, helium=0.24):
"""Get the equilibrium temperature at given internal energy.
density is gas density in protons/cm^3
Internal energy is in J/kg == 10^-10 ergs/g.
helium is a mass fraction"""
ne = self.get_equilib_ne(density, ienergy, helium)
nh = density * (1-helium)
return self._get_temp(ne/nh, ienergy, helium)
def get_cooling_rate(self, density, ienergy, helium=0.24, photoheating=False):
"""Get the total cooling rate for a temperature and density. Negative means heating."""
ne = self.get_equilib_ne(density, ienergy, helium)
nh = density * (1-helium)
temp = self._get_temp(ne/nh, ienergy, helium)
nH0 = self._nH0(nh, temp, ne)
nHe0 = self._nHe0(nh, temp, ne)
nHp = self._nHp(nh, temp, ne)
nHep = self._nHep(nh, temp, ne)
nHepp = self._nHepp(nh, temp, ne)
#This is the collisional excitation and ionisation rate.
LambdaCollis = ne * (self.cool.CollisionalH0(temp) * nH0 +
self.cool.CollisionalHe0(temp) * nHe0 +
self.cool.CollisionalHeP(temp) * nHep)
LambdaRecomb = ne * (self.cool.RecombHp(temp) * nHp +
self.cool.RecombHeP(temp) * nHep +
self.cool.RecombHePP(temp) * nHepp)
LambdaFF = ne * (self.cool.FreeFree(temp, 1)*(nHp + nHep) + self.cool.FreeFree(temp, 2)*nHepp)
LambdaCmptn = ne * self.cool.InverseCompton(temp, self.redshift)
Lambda = LambdaCollis + LambdaRecomb + LambdaFF + LambdaCmptn
Heating = 0
if photoheating:
Heating = nH0 * self.photo.epsH0(self.redshift)
Heating += nHe0 * self.photo.epsHe0(self.redshift)
Heating += nHep * self.photo.epsHep(self.redshift)
Heating *= self.photo_factor
if self.he_model_on:
Heating *= self._he_reion_factor(density)
return Lambda - Heating
def get_equilib_ne(self, density, ienergy,helium=0.24):
"""Solve the system of equations for photo-ionisation equilibrium,
starting with ne = nH and continuing until convergence.
density is gas density in protons/cm^3
Internal energy is in J/kg == 10^-10 ergs/g.
helium is a mass fraction.
"""
#Get hydrogen number density
nh = density * (1-helium)
rooted = lambda ne: self._ne(nh, self._get_temp(ne/nh, ienergy, helium=helium), ne, helium=helium)
ne = scipy.optimize.fixed_point(rooted, nh,xtol=self.converge)
assert np.all(np.abs(rooted(ne) - ne) < self.converge)
return ne
def get_ne_by_nh(self, density, ienergy, helium=0.24):
"""Same as above, but get electrons per proton."""
return self.get_equilib_ne(density, ienergy, helium)/(density*(1-helium))
def get_neutral_fraction(self, density, ienergy, helium=0.24):
"""Get the neutral hydrogen fraction at a given temperature and density.
density is gas density in protons/cm^3
Internal energy is in J/kg == 10^-10 ergs/g.
helium is a mass fraction.
"""
ne = self.get_equilib_ne(density, ienergy, helium=helium)
nh = density * (1-helium)
temp = self._get_temp(ne/nh, ienergy, helium)
return self._nH0(nh, temp, ne) / nh
def _nH0(self, nh, temp, ne):
"""The neutral hydrogen number density. Eq. 33 of KWH."""
alphaHp = self.recomb.alphaHp(temp)
GammaeH0 = self.collisional * self.recomb.GammaeH0(temp)
photorate = self.photo.gH0(self.redshift)/ne*self.photo_factor*self._self_shield_corr(nh, temp)
return nh * alphaHp/ (alphaHp + GammaeH0 + photorate)
def _nHp(self, nh, temp, ne):
"""The ionised hydrogen number density. Eq. 34 of KWH."""
return nh - self._nH0(nh, temp, ne)
def _nHep(self, nh, temp, ne):
"""The ionised helium number density, divided by the helium number fraction. Eq. 35 of KWH."""
alphaHep = self.recomb.alphaHep(temp) + self.recomb.alphad(temp)
alphaHepp = self.recomb.alphaHepp(temp)
photofac = self.photo_factor*self._self_shield_corr(nh, temp)
GammaHe0 = self.collisional * self.recomb.GammaeHe0(temp) + self.photo.gHe0(self.redshift)/ne*photofac
GammaHep = self.collisional * self.recomb.GammaeHep(temp) + self.photo.gHep(self.redshift)/ne*photofac
return nh / (1 + alphaHep / GammaHe0 + GammaHep/alphaHepp)
def _nHe0(self, nh, temp, ne):
"""The neutral helium number density, divided by the helium number fraction. Eq. 36 of KWH."""
alphaHep = self.recomb.alphaHep(temp) + self.recomb.alphad(temp)
photofac = self.photo_factor*self._self_shield_corr(nh, temp)
GammaHe0 = self.collisional * self.recomb.GammaeHe0(temp) + self.photo.gHe0(self.redshift)/ne*photofac
return self._nHep(nh, temp, ne) * alphaHep / GammaHe0
def _nHepp(self, nh, temp, ne):
"""The doubly ionised helium number density, divided by the helium number fraction. Eq. 37 of KWH."""
photofac = self.photo_factor*self._self_shield_corr(nh, temp)
GammaHep = self.collisional * self.recomb.GammaeHep(temp) + self.photo.gHep(self.redshift)/ne*photofac
alphaHepp = self.recomb.alphaHepp(temp)
return self._nHep(nh, temp, ne) * GammaHep / alphaHepp
def _ne(self, nh, temp, ne, helium=0.24):
"""The electron number density. Eq. 38 of KWH."""
yy = helium / 4 / (1 - helium)
return self._nHp(nh, temp, ne) + yy * self._nHep(nh, temp, ne) + 2* yy * self._nHepp(nh, temp, ne)
def _self_shield_corr(self, nh, temp):
"""Photoionisation rate as a function of density from Rahmati 2012, eq. 14.
Calculates Gamma_{Phot} / Gamma_{UVB}.
Inputs: hydrogen density n_H, temperature
The coefficients are their best-fit from appendix A."""
if not self.selfshield:
return np.ones_like(nh)
nSSh = 1.003*self._self_shield_dens(self.redshift, temp)
return 0.98*(1+(nh/nSSh)**1.64)**-2.28+0.02*(1+nh/nSSh)**-0.84
def _self_shield_dens(self,redshift, temp):
"""Calculate the critical self-shielding density. Rahmati 202 eq. 13.
gray_opac is a parameter of the UVB used.
gray_opac is in cm^2 (2.49e-18 is HM01 at z=3)
temp is particle temperature in K
f_bar is the baryon fraction. 0.17 is roughly 0.045/0.265
Returns density in atoms/cm^3"""
T4 = temp/1e4
G12 = self.photo.gH0(redshift)/1e-12
return 6.73e-3 * (self.Gray_ss(redshift) / 2.49e-18)**(-2./3)*(T4)**0.17*(G12)**(2./3)*(self.f_bar/0.17)**(-1./3)
def _he_reion_factor(self, density):
"""Compute a density dependent correction factor to the heating rate which can model the effect of helium reionization.
Argument: Gas density in protons/cm^3."""
#Newton's constant (cgs units)
gravity = 6.672e-8
#100 km/s/Mpc in h/sec
hubble = 3.2407789e-18
omegab = 0.0483
atime = 1/(1+self.redshift)
rhoc = 3 * (self.hub* hubble)**2 /(8* math.pi * gravity)
overden = self.protonmass * density /(omegab * rhoc * atime**(-3))
if overden >= self.he_thresh:
overden = self.he_thresh
return self.he_amp * overden**self.he_exp
def _get_temp(self, nebynh, ienergy, helium=0.24):
"""Compute temperature (in K) from internal energy and electron density.
Uses: internal energy
electron abundance per H atom (ne/nH)
hydrogen mass fraction (0.76)
Internal energy is in J/kg, internal gadget units, == 10^-10 ergs/g.
Factor to convert U (J/kg) to T (K) : U = N k T / (γ - 1)
T = U (γ-1) μ m_P / k_B
where k_B is the Boltzmann constant
γ is 5/3, the adiabatic index of a perfect monatomic gas
m_P is the proton mass
μ = 1 / (mean no. molecules per unit atomic weight)
= 1 / (X + Y /4 + E)
where E = Ne * X, and Y = (1-X).
Can neglect metals as they are heavy.
Leading contribution is from electrons, which is already included
[+ Z / (12->16)] from metal species
[+ Z/16*4 ] for OIV from electrons."""
#convert U (J/kg) to T (K) : U = N k T / (γ - 1)
#T = U (γ-1) μ m_P / k_B
#where k_B is the Boltzmann constant
#γ is 5/3, the adiabatic index of a perfect monatomic gas
#m_P is the proton mass
#μ is 1 / (mean no. molecules per unit atomic weight) calculated in loop.
#Internal energy units are 10^-10 erg/g
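#Worked example (approximate, added for orientation; not from the original source):
#for helium=0.24 and a fully ionised gas, ne/nH ~ 1.16 and mu ~ 0.59, so an internal
#energy of 1000 J/kg corresponds to T ~ 4.7e4 K.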
hy_mass = 1 - helium
muienergy = 4 / (hy_mass * (3 + 4*nebynh) + 1)*ienergy*1e10
#Boltzmann constant (cgs)
boltzmann=1.38066e-16
gamma=5./3
#So for T in K, boltzmann in erg/K, internal energy has units of erg/g
temp = (gamma-1) * self.protonmass / boltzmann * muienergy
return temp
class RecombRatesCen92(object):
"""Recombination rates and collisional ionization rates, as a function of temperature.
This is taken from KWH 06, astro-ph/9509107, Table 2, based on Cen 1992.
Illustris uses these rates."""
def alphaHp(self,temp):
"""Recombination rate for H+, ionized hydrogen, in cm^3/s.
Temp in K."""
return 8.4e-11 / np.sqrt(temp) / np.power(temp/1000, 0.2) / (1+ np.power(temp/1e6, 0.7))
def alphaHep(self,temp):
"""Recombination rate for He+, ionized helium, in cm^3/s.
Temp in K."""
return 1.5e-10 / np.power(temp,0.6353)
def alphad(self, temp):
"""Recombination rate for dielectronic recombination, in cm^3/s.
Temp in K."""
return 1.9e-3 / np.power(temp,1.5) * np.exp(-4.7e5/temp)*(1+0.3*np.exp(-9.4e4/temp))
def alphaHepp(self, temp):
"""Recombination rate for doubly ionized helium, in cm^3/s.
Temp in K."""
return 4 * self.alphaHp(temp)
def GammaeH0(self,temp):
"""Collisional ionization rate for H0 in cm^3/s. Temp in K"""
return 5.85e-11 * np.sqrt(temp) * np.exp(-157809.1/temp) / (1+ np.sqrt(temp/1e5))
def GammaeHe0(self,temp):
"""Collisional ionization rate for H0 in cm^3/s. Temp in K"""
return 2.38e-11 * np.sqrt(temp) * np.exp(-285335.4/temp) / (1+ np.sqrt(temp/1e5))
def GammaeHep(self,temp):
"""Collisional ionization rate for H0 in cm^3/s. Temp in K"""
return 5.68e-12 * np.sqrt(temp) * np.exp(-631515.0/temp) / (1+ np.sqrt(temp/1e5))
class RecombRatesVerner96(object):
"""Recombination rates and collisional ionization rates, as a function of temperature.
Recombination rates are the fit from Verner & Ferland 1996 (astro-ph/9509083).
Collisional rates are the fit from Voronov 1997 (http://www.sciencedirect.com/science/article/pii/S0092640X97907324).
In a very photoionised medium this changes the neutral hydrogen abundance by approximately 10% compared to Cen 1992.
These rates are those used by Nyx.
"""
def _Verner96Fit(self, temp, aa, bb, temp0, temp1):
"""Formula used as a fitting function in Verner & Ferland 1996 (astro-ph/9509083)."""
sqrttt0 = np.sqrt(temp/temp0)
sqrttt1 = np.sqrt(temp/temp1)
return aa / ( sqrttt0 * (1 + sqrttt0)**(1-bb)*(1+sqrttt1)**(1+bb) )
def alphaHp(self,temp):
"""Recombination rate for H+, ionized hydrogen, in cm^3/s.
The V&F 96 fitting formula is accurate to < 1% in the worst case.
Temp in K."""
#See line 1 of V&F96 table 1.
return self._Verner96Fit(temp, aa=7.982e-11, bb=0.748, temp0=3.148, temp1=7.036e+05)
def alphaHep(self,temp):
"""Recombination rate for He+, ionized helium, in cm^3/s.
Accurate to ~2% for T < 10^6 and 5% for T< 10^10.
Temp in K."""
#VF96 give two rates. The first is more accurate for T < 10^6, the second is valid up to T = 10^10.
#We use the most accurate allowed. See lines 2 and 3 of Table 1 of VF96.
lowTfit = self._Verner96Fit(temp, aa=3.294e-11, bb=0.6910, temp0=1.554e+01, temp1=3.676e+07)
highTfit = self._Verner96Fit(temp, aa=9.356e-10, bb=0.7892, temp0=4.266e-02, temp1=4.677e+06)
#Note that at 10^6K the two fits differ by ~10%. This may lead one to disbelieve the quoted accuracies!
#We thus switch over at a slightly lower temperature.
#The two fits cross at T ~ 3e5K.
swtmp = 7e5
deltat = 1e5
upper = swtmp + deltat
lower = swtmp - deltat
#In order to avoid a sharp feature, we linearly interpolate between the two fits around the 7e5 K switchover.
interpfit = (lowTfit * (upper - temp) + highTfit * (temp - lower))/(2*deltat)
return (temp <= lower)*lowTfit + (temp >= upper)*highTfit + (upper > temp)*(temp > lower)*interpfit
def alphad(self, temp):
"""Recombination rate for dielectronic recombination, in cm^3/s.
This is the value from Aldrovandi & Pequignot 73, as used in Nyx, Sherwood and Cen 1992.
It is corrected from the value in Aldrovandi & Pequignot 1973 by Burgess & Tworkowski 1976 (fig1)
by a factor of 0.65. The exponent is also made slightly more accurate.
Temp in K."""
return 1.23e-3 / np.power(temp,1.5) * np.exp(-4.72e5/temp)*(1+0.3*np.exp(-9.4e4/temp))
def alphaHepp(self, temp):
"""Recombination rate for doubly ionized helium, in cm^3/s. Accurate to 2%.
Temp in K."""
#See line 4 of V&F96 table 1.
return self._Verner96Fit(temp, aa=1.891e-10, bb=0.7524, temp0=9.370, temp1=2.774e6)
def _Voronov96Fit(self, temp, dE, PP, AA, XX, KK):
"""Fitting function for collisional rates. Eq. 1 of Voronov 1997. Accurate to 10%,
but data is only accurate to 50%."""
bolevk = 8.61734e-5 # Boltzmann constant in units of eV/K
UU = dE / (bolevk * temp)
return AA * (1 + PP * np.sqrt(UU))/(XX+UU) * UU**KK * np.exp(-UU)
def GammaeH0(self,temp):
"""Collisional ionization rate for H0 in cm^3/s. Temp in K. Voronov 97, Table 1."""
return self._Voronov96Fit(temp, 13.6, 0, 0.291e-07, 0.232, 0.39)
def GammaeHe0(self,temp):
"""Collisional ionization rate for He0 in cm^3/s. Temp in K. Voronov 97, Table 1."""
return self._Voronov96Fit(temp, 24.6, 0, 0.175e-07, 0.180, 0.35)
def GammaeHep(self,temp):
"""Collisional ionization rate for HeI in cm^3/s. Temp in K. Voronov 97, Table 1."""
return self._Voronov96Fit(temp, 54.4, 1, 0.205e-08, 0.265, 0.25)
class RecombRatesBadnell(RecombRatesVerner96):
"""Recombination rates and collisional ionization rates, as a function of temperature.
Recombination rates are the fit from Badnell's website: http://amdpp.phys.strath.ac.uk/tamoc/RR/#partial.
"""
def _RecombRateFit_lowcharge_ion(self, temp, aa, bb, cc, temp0, temp1, temp2):
"""Formula used as a fitting function in Verner & Ferland 1996 (astro-ph/9509083)/ See http://amdpp.phys.strath.ac.uk/tamoc/RR/#partial."""
sqrttt0 = np.sqrt(temp/temp0)
sqrttt1 = np.sqrt(temp/temp1)
BB = bb + cc*np.exp(-temp2/temp)
return aa / ( sqrttt0 * (1 + sqrttt0)**(1-BB)*(1+sqrttt1)**(1+BB) )
def alphaHp(self,temp):
"""Recombination rate for H+, ionized hydrogen, in cm^3/s.
Temp in K."""
#See line 1 of V&F96 table 1.
return self._Verner96Fit(temp, aa=8.318e-11, bb=0.7472, temp0=2.965, temp1=7.001e5)
def alphaHep(self,temp):
"""Recombination rate for H+, ionized hydrogen, in cm^3/s.
Temp in K."""
#See line 1 of V&F96 table 1.
return self._Verner96Fit(temp, aa=1.818E-10, bb=0.7492, temp0=10.17, temp1=2.786e6)
def alphaHepp(self, temp):
"""Recombination rate for doubly ionized helium, in cm^3/s.
Temp in K."""
#See line 4 of V&F96 table 1.
return self._RecombRateFit_lowcharge_ion(temp, aa=5.235E-11, bb=0.6988, cc=0.0829, temp0=7.301, temp1=4.475e6, temp2 = 1.682e5)
class PhotoRates(object):
"""The photoionization rates for a given species.
Eq. 29 of KWH 96. This is loaded from a TREECOOL table."""
def __init__(self, treecool_file="data/TREECOOL_ep_2018p"):
#Format of the treecool table:
# log_10(1+z), Gamma_HI, Gamma_HeI, Gamma_HeII, Qdot_HI, Qdot_HeI, Qdot_HeII,
# where 'Gamma' is the photoionization rate and 'Qdot' is the photoheating rate.
# The Gamma's are in units of s^-1, and the Qdot's are in units of erg s^-1.
try:
data = np.loadtxt(treecool_file)
except OSError:
treefile = os.path.join(os.path.dirname(os.path.realpath(__file__)), treecool_file)
data = np.loadtxt(treefile)
redshifts = data[:,0]
photo_rates = data[:,1:4]
photo_heat = data[:,4:7]
assert np.shape(redshifts)[0] == np.shape(photo_rates)[0]
self.Gamma_HI = interp.InterpolatedUnivariateSpline(redshifts, photo_rates[:,0])
self.Gamma_HeI = interp.InterpolatedUnivariateSpline(redshifts, photo_rates[:,1])
self.Gamma_HeII = interp.InterpolatedUnivariateSpline(redshifts, photo_rates[:,2])
self.Eps_HI = interp.InterpolatedUnivariateSpline(redshifts, photo_heat[:,0])
self.Eps_HeI = interp.InterpolatedUnivariateSpline(redshifts, photo_heat[:,1])
self.Eps_HeII = interp.InterpolatedUnivariateSpline(redshifts, photo_heat[:,2])
def gHe0(self,redshift):
"""Get photo rate for neutral Helium"""
log1z = np.log10(1+redshift)
return self.Gamma_HeI(log1z)
def gHep(self,redshift):
"""Get photo rate for singly ionized Helium"""
log1z = np.log10(1+redshift)
return self.Gamma_HeII(log1z)
def gH0(self,redshift):
"""Get photo rate for neutral Hydrogen"""
log1z = np.log10(1+redshift)
return self.Gamma_HI(log1z)
def epsHe0(self,redshift):
"""Get photo heating rate for neutral Helium"""
log1z = np.log10(1+redshift)
return self.Eps_HeI(log1z)
def epsHep(self,redshift):
"""Get photo heating rate for singly ionized Helium"""
log1z = np.log10(1+redshift)
return self.Eps_HeII(log1z)
def epsH0(self,redshift):
"""Get photo heating rate for neutral Hydrogen"""
log1z = np.log10(1+redshift)
return self.Eps_HI(log1z)
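#Illustrative usage (added for clarity; numerical values depend on the TREECOOL table supplied):
#   photo = PhotoRates()          # reads the default TREECOOL_ep_2018p table
#   gamma_HI = photo.gH0(3.0)     # HI photoionisation rate at z=3, in s^-1
#   eps_HI = photo.epsH0(3.0)     # HI photoheating rate at z=3, in erg/s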
class CoolingRatesKWH92(object):
"""The cooling rates from KWH92, in erg s^-1 cm^-3 (cgs).
All rates are divided by the abundance of the ions involved in the interaction.
So we are computing the cooling rate divided by n_e n_X. Temperatures in K.
None of these rates are original to KWH92, but are taken from Cen 1992,
and originally from older references. The hydrogen rates in particular are probably inaccurate.
Cen 1992 modified (arbitrarily) the excitation and ionisation rates for high temperatures.
There is no collisional excitation rate for He0 - not sure why.
References:
Black 1981, from Lotz 1967, Seaton 1959, Burgess & Seaton 1960.
Recombination rates are from Spitzer 1978.
Free-free: Spitzer 1978.
Collisional excitation and ionisation cooling rates are merged.
"""
def __init__(self, tcmb=2.7255, t5_corr=1e5, recomb=None):
self.tcmb = tcmb
if recomb is None:
self.recomb = RecombRatesCen92()
else:
self.recomb = recomb
self.t5_corr = t5_corr
#1 eV in ergs
self.eVinergs = 1.60218e-12
#boltzmann constant in erg/K
self.kB = 1.38064852e-16
def _t5(self, temp):
"""Commonly used Cen 1992 correction factor for large temperatures.
This is implemented so that the cooling rates have the right
asymptotic behaviour. However, Cen erroneously imposes this correction at T=1e5,
which is too small: the Black 1981 rates these are based on should be good
until 5e5 at least, where the correction factor has a 10% effect already.
More modern tables thus impose it at T=5e7, which is still arbitrary but should be harmless.
"""
return 1+(temp/self.t5_corr)**0.5
def CollisionalExciteH0(self, temp):
"""Collisional excitation cooling rate for n_H0 and n_e. Gadget calls this BetaH0."""
return 7.5e-19 * np.exp(-118348.0/temp) /self._t5(temp)
def CollisionalExciteHeP(self, temp):
"""Collisional excitation cooling rate for n_He+ and n_e. Gadget calls this BetaHep."""
return 5.54e-17 * temp**(-0.397)*np.exp(-473638./temp)/self._t5(temp)
def CollisionalExciteHe0(self, temp):
"""This is listed in Cen 92 but neglected in KWH 97, presumably because it is very small."""
#return 0
return 9.1e-27 * temp**(-0.1687) * np.exp(-473638/temp) / self._t5(temp)
def CollisionalIonizeH0(self, temp):
"""Collisional ionisation cooling rate for n_H0 and n_e. Gadget calls this GammaeH0."""
#Ionisation potential of H0
return 13.5984 * self.eVinergs * self.recomb.GammaeH0(temp)
def CollisionalIonizeHe0(self, temp):
"""Collisional ionisation cooling rate for n_H0 and n_e. Gadget calls this GammaeHe0."""
return 24.5874 * self.eVinergs * self.recomb.GammaeHe0(temp)
def CollisionalIonizeHeP(self, temp):
"""Collisional ionisation cooling rate for n_H0 and n_e. Gadget calls this GammaeHep."""
return 54.417760 * self.eVinergs * self.recomb.GammaeHep(temp)
def CollisionalH0(self, temp):
"""Total collisional cooling for H0"""
return self.CollisionalExciteH0(temp) + self.CollisionalIonizeH0(temp)
def CollisionalHe0(self, temp):
"""Total collisional cooling for H0"""
return self.CollisionalExciteHe0(temp) + self.CollisionalIonizeHe0(temp)
def CollisionalHeP(self, temp):
"""Total collisional cooling for H0"""
return self.CollisionalExciteHeP(temp) + self.CollisionalIonizeHeP(temp)
def RecombHp(self, temp):
"""Recombination cooling rate for H+ and e. Gadget calls this AlphaHp."""
return 0.75 * self.kB * temp * self.recomb.alphaHp(temp)
def RecombHeP(self, temp):
"""Recombination cooling rate for He+ and e. Gadget calls this AlphaHep."""
#I'm not sure why they use 0.75 kT as the free energy of an electron.
#I would guess this is explained in Spitzer 1978.
return 0.75 * self.kB * temp * self.recomb.alphaHep(temp)+ self._RecombDielect(temp)
def RecombHePP(self, temp):
"""Recombination cooling rate for He++ and e. Gadget calls this AlphaHepp."""
return 0.75 * self.kB * temp * self.recomb.alphaHepp(temp)
def _RecombDielect(self, temp):
"""Dielectric recombination rate for He+ and e. Gadget calls this Alphad."""
#What is this magic number?
return 6.526e-11*self.recomb.alphad(temp)
def FreeFree(self, temp, zz):
"""Free-free cooling rate for electrons scattering on ions without being captured.
Factors here are n_e and total ionized species:
(FreeFree(zz=1)*(n_H+ + n_He+) + FreeFree(zz=2)*n_He++)"""
return 1.426e-27*np.sqrt(temp)*zz**2*self._gff(temp,zz)
def _gff(self, temp, zz):
"""Formula for the Gaunt factor. KWH takes this from Spitzer 1978."""
_ = zz
return 1.1+0.34*np.exp(-(5.5 - np.log10(temp))**2/3.)
def InverseCompton(self, temp, redshift):
"""Cooling rate for inverse Compton from the microwave background.
Multiply this only by n_e. Note the CMB temperature is hardcoded in KWH92 to 2.7."""
tcmb_red = self.tcmb * (1+redshift)
#Thompson cross-section in cm^2
sigmat = 6.6524e-25
#Radiation density constant, 4 sigma_stefan-boltzmann / c in erg cm^-3 K^-4
rad_dens = 7.5657e-15
#Electron mass in g
me = 9.10938e-28
#Speed of light in cm/s
cc = 2.99792e10
return 4 * sigmat * rad_dens / (me*cc) * tcmb_red**4 * self.kB * (temp - tcmb_red)
class CoolingRatesSherwood(CoolingRatesKWH92):
"""The cooling rates used in the Sherwood simulation, Bolton et al 2017, in erg s^-1 cm^-3 (cgs).
Differences from KWH92 are updated recombination and collisional ionization rates, and the use of a
larger temperature correction factor than Cen 92.
"""
def __init__(self, tcmb=2.7255, recomb=None):
CoolingRatesKWH92.__init__(self, tcmb=tcmb, t5_corr=5e7, recomb=RecombRatesVerner96())
class CoolingRatesNyx(CoolingRatesKWH92):
"""The cooling rates used in the Nyx paper Lukic 2014, 1406.6361, in erg s^-1 cm^-3 (cgs).
All rates are divided by the abundance of the ions involved in the interaction.
So we are computing the cooling rate divided by n_e n_X. Temperatures in K.
Major differences from KWH are the use of the Scholz & Walters 1991
hydrogen collisional cooling rates, a less aggressive high temperature correction for helium, and
Shapiro & Kang 1987 for free free.
Older Black 1981 recombination cooling rates are used!
They use the recombination rates from Verner & Ferland 96, but do not change the cooling rates to match.
Ditto the ionization rates from Voronov 1997: they should also use these rates for collisional ionisation,
although this is harder because Scholz & Walters do not break their rates into ionization and excitation.
References:
Scholz & Walters 1991 (0.45% accuracy)
Black 1981 (recombination and helium)
Shapiro & Kang 1987
"""
def __init__(self, tcmb=2.7255, recomb=None):
CoolingRatesKWH92.__init__(self, tcmb=tcmb, t5_corr=5e7, recomb=recomb)
def CollisionalH0(self, temp):
"""Collisional cooling rate for n_H0 and n_e. Gadget calls this BetaH0 + GammaeH0.
Formula from Eq. 23, Table 4 of Scholz & Walters, claimed good to 0.45 %.
Note though that they have two datasets which differ by a factor of two.
Differs from Cen 92 by a factor of two."""
#Technically only good for T > 2000.
y = np.log(temp)
#Constant is 0.75/k_B in Rydberg
Ryd = 2.1798741e-11
tot = -0.75/self.kB*Ryd/temp
coeffslowT = [213.7913, 113.9492, 25.06062, 2.762755, 0.1515352, 3.290382e-3]
coeffshighT = [271.25446, 98.019455, 14.00728, 0.9780842, 3.356289e-2, 4.553323e-4]
for j in range(6):
tot += ((temp < 1e5)*coeffslowT[j]+(temp >=1e5)*coeffshighT[j])*(-y)**j
return 1e-20 * np.exp(tot)
def RecombHp(self, temp):
"""Recombination cooling rate for H+ and e. Gadget calls this AlphaHp.
Differs by O(10%) until 3x10^6."""
return 2.851e-27 * np.sqrt(temp) * (5.914 - 0.5 * np.log(temp) + 0.01184 * temp**(1./3))
def RecombHePP(self, temp):
"""Recombination cooling rate for H+ and e. Gadget calls this AlphaHepp.
Differs from Cen 92 by 10% until ~10^7"""
return 1.140e-26 * np.sqrt(temp) * (6.607 - 0.5 * np.log(temp) + 7.459e-3 * temp**(1./3))
def _gff(self, temp, zz):
"""Formula for the Gaunt factor from Shapiro & Kang 1987. ZZ is 1 for H+ and He+ and 2 for He++.
This is almost identical to the KWH rate but not continuous."""
#This is not continuous. Check the original reference.
little = (temp/zz**2 <= 3.2e5)
lt = np.log10(temp/zz**2)
return little * (0.79464 + 0.1243*lt) + np.logical_not(little) * ( 2.13164 - 0.1240 * lt)
| 49.94795 | 188 | 0.639688 | 4,574 | 31,667 | 4.370354 | 0.182991 | 0.016408 | 0.007204 | 0.005103 | 0.381091 | 0.336718 | 0.310105 | 0.295048 | 0.265333 | 0.248124 | 0 | 0.068572 | 0.252124 | 31,667 | 633 | 189 | 50.026856 | 0.775493 | 0.443585 | 0 | 0.204013 | 0 | 0 | 0.005396 | 0.002698 | 0 | 0 | 0 | 0 | 0.006689 | 1 | 0.22408 | false | 0 | 0.016722 | 0 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03c6a4780081ba46e0720aa30e18a9c4fde3152f | 1,596 | py | Python | escola/tests/selenium_test_case.py | vini84200/medusa2 | 37cf33d05be8b0195b10845061ca893ba5e814dd | [
"MIT"
] | 1 | 2019-03-15T18:04:24.000Z | 2019-03-15T18:04:24.000Z | escola/tests/selenium_test_case.py | vini84200/medusa2 | 37cf33d05be8b0195b10845061ca893ba5e814dd | [
"MIT"
] | 22 | 2019-03-17T21:53:50.000Z | 2021-03-31T19:12:19.000Z | escola/tests/selenium_test_case.py | vini84200/medusa2 | 37cf33d05be8b0195b10845061ca893ba5e814dd | [
"MIT"
] | 1 | 2018-11-25T03:05:23.000Z | 2018-11-25T03:05:23.000Z | # Developed by Vinicius José Fritzen
# Last Modified 13/04/19 16:04.
# Copyright (c) 2019 Vinicius José Fritzen and Albert Angel Lanzarini
import pytest
from decouple import config
from django.contrib.auth.models import User
from django.test import LiveServerTestCase, TestCase
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.wait import WebDriverWait
# @pytest.mark.selenium
class SeleniumTestCase(LiveServerTestCase):
"""
A base test case for Selenium, providing helper methods for generating
clients and logging in profiles.
"""
def setUp(self):
options = Options()
if config('MOZ_HEADLESS', default=0, cast=int) == 1:
options.add_argument('-headless')
self.browser = CustomWebDriver(firefox_options=options)
def tearDown(self):
self.browser.quit()
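# Illustrative usage (added; not part of the original module). A concrete test would
# subclass SeleniumTestCase; the URL path and CSS selector below are hypothetical:
# class HomePageTest(SeleniumTestCase):
#     def test_header_is_visible(self):
#         self.browser.get(self.live_server_url + '/')
#         header = self.browser.wait_for_css('h1')
#         self.assertTrue(header.is_displayed())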
class CustomWebDriver(webdriver.Firefox):
"""Our own WebDriver with some helpers added"""
def find_css(self, css_selector):
"""Shortcut to find elements by CSS. Returns either a list or singleton"""
elems = self.find_elements_by_css_selector(css_selector)
found = len(elems)
if found == 1:
return elems[0]
elif not elems:
raise NoSuchElementException(css_selector)
return elems
def wait_for_css(self, css_selector, timeout=7):
""" Shortcut for WebDriverWait"""
return WebDriverWait(self, timeout).until(lambda driver : driver.find_css(css_selector)) | 33.957447 | 96 | 0.714286 | 194 | 1,596 | 5.793814 | 0.510309 | 0.058719 | 0.033808 | 0.032028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014996 | 0.20614 | 1,596 | 47 | 96 | 33.957447 | 0.872139 | 0.251253 | 0 | 0 | 0 | 0 | 0.01815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.296296 | 0 | 0.62963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
03cc40e680dd0a778266264e42ce9370062476e1 | 1,036 | py | Python | tornado/4_celery_async_sleep.py | dongweiming/speakerdeck | 497352767a6ec57629f28d5c85f70bef38fc1914 | [
"Apache-2.0"
] | 6 | 2015-03-02T06:01:28.000Z | 2016-06-03T09:55:34.000Z | tornado/4_celery_async_sleep.py | dongweiming/speakerdeck | 497352767a6ec57629f28d5c85f70bef38fc1914 | [
"Apache-2.0"
] | null | null | null | tornado/4_celery_async_sleep.py | dongweiming/speakerdeck | 497352767a6ec57629f28d5c85f70bef38fc1914 | [
"Apache-2.0"
] | 5 | 2015-02-01T13:48:58.000Z | 2018-11-27T02:10:59.000Z | #!/bin/env python
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.gen
import tornado.httpclient
import tcelery
import sleep_task as tasks
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
tcelery.setup_nonblocking_producer()
class SleepHandler(tornado.web.RequestHandler):
@tornado.web.asynchronous
@tornado.gen.coroutine
def get(self):
yield tornado.gen.Task(tasks.sleep.apply_async, args=[5])
self.write("when i sleep 5s")
self.finish()
class JustNowHandler(tornado.web.RequestHandler):
def get(self):
self.write("i hope just now see you")
if __name__ == "__main__":
tornado.options.parse_command_line()
app = tornado.web.Application(handlers=[
(r"/sleep", SleepHandler), (r"/justnow", JustNowHandler)])
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
| 28 | 70 | 0.728764 | 133 | 1,036 | 5.556391 | 0.533835 | 0.105548 | 0.054127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006865 | 0.156371 | 1,036 | 36 | 71 | 28.777778 | 0.838673 | 0.015444 | 0 | 0.071429 | 0 | 0 | 0.083415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.321429 | 0 | 0.464286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
03d62215eb44f2521a4a5180463ae9e3411c086d | 1,401 | py | Python | custom_components/gpodder/config_flow.py | hsolberg/gpodder | 6b3af212f8067c7084f638bf40c9a25fe6fc252d | [
"MIT"
] | 13 | 2019-03-21T10:44:58.000Z | 2021-04-17T09:19:53.000Z | custom_components/gpodder/config_flow.py | hsolberg/gpodder | 6b3af212f8067c7084f638bf40c9a25fe6fc252d | [
"MIT"
] | 18 | 2019-03-24T20:41:21.000Z | 2021-12-10T01:42:57.000Z | custom_components/gpodder/config_flow.py | hsolberg/gpodder | 6b3af212f8067c7084f638bf40c9a25fe6fc252d | [
"MIT"
] | 8 | 2019-03-24T06:19:24.000Z | 2021-06-03T11:08:23.000Z | """Adds config flow for gPodder."""
from homeassistant import config_entries
import voluptuous as vol
from custom_components.gpodder.const import (
CONF_NAME,
CONF_PASSWORD,
CONF_USERNAME,
CONF_DEVICE,
DEFAULT_NAME,
DOMAIN,
)
class GpodderFlowHandler(config_entries.ConfigFlow, domain=DOMAIN):
"""Config flow for gPodder."""
VERSION = 1
CONNECTION_CLASS = config_entries.CONN_CLASS_CLOUD_POLL
def __init__(self):
"""Initialize."""
self._errors = {}
async def async_step_user(self, user_input=None):
"""Handle a flow initialized by the user."""
self._errors = {}
if user_input is not None:
return self.async_create_entry(
title=user_input[CONF_DEVICE], data=user_input
)
return await self._show_config_form(user_input)
async def _show_config_form(self, user_input):
"""Show the configuration form to edit location data."""
return self.async_show_form(
step_id="user",
data_schema=vol.Schema(
{
vol.Required(CONF_USERNAME): str,
vol.Required(CONF_PASSWORD): str,
vol.Required(CONF_DEVICE): str,
vol.Required(CONF_NAME, default=DEFAULT_NAME): str,
}
),
errors=self._errors,
)
| 28.591837 | 71 | 0.605282 | 157 | 1,401 | 5.10828 | 0.414013 | 0.067332 | 0.074813 | 0.067332 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001028 | 0.305496 | 1,401 | 48 | 72 | 29.1875 | 0.823227 | 0.047109 | 0 | 0.057143 | 0 | 0 | 0.003281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0.057143 | 0.085714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
03d659e23330734a212d3f3f3cc9b22edbb8b9c6 | 1,397 | py | Python | mirari/INV/migrations/0003_auto_20190609_1903.py | gcastellan0s/mirariapp | 24a9db06d10f96c894d817ef7ccfeec2a25788b7 | [
"MIT"
] | null | null | null | mirari/INV/migrations/0003_auto_20190609_1903.py | gcastellan0s/mirariapp | 24a9db06d10f96c894d817ef7ccfeec2a25788b7 | [
"MIT"
] | 18 | 2019-12-27T19:58:20.000Z | 2022-02-27T08:17:49.000Z | mirari/INV/migrations/0003_auto_20190609_1903.py | gcastellan0s/mirariapp | 24a9db06d10f96c894d817ef7ccfeec2a25788b7 | [
"MIT"
] | null | null | null | # Generated by Django 2.0.5 on 2019-06-10 00:03
from django.db import migrations, models
import localflavor.mx.models
class Migration(migrations.Migration):
dependencies = [
('INV', '0002_auto_20190608_2204'),
]
operations = [
migrations.AlterField(
model_name='fiscalmx',
name='contactEmail',
field=models.EmailField(default='email@email.com', help_text='Correo donde llegarán las notificaciones sobre facturación', max_length=100, verbose_name='Email contacto'),
preserve_default=False,
),
migrations.AlterField(
model_name='fiscalmx',
name='persona',
field=models.CharField(choices=[('FISICA', 'FISICA'), ('MORAL', 'MORAL')], default='Física', max_length=100, verbose_name='Tipo de persona'),
),
migrations.AlterField(
model_name='fiscalmx',
name='razon_social',
field=models.CharField(default='Razon Social', help_text='Razón social de persona Física o Moral', max_length=255, verbose_name='Razón social'),
preserve_default=False,
),
migrations.AlterField(
model_name='fiscalmx',
name='rfc',
field=localflavor.mx.models.MXRFCField(default='SUL010720JN8', max_length=13, verbose_name='RFC'),
preserve_default=False,
),
]
| 36.763158 | 182 | 0.625626 | 147 | 1,397 | 5.802721 | 0.482993 | 0.093787 | 0.117233 | 0.135991 | 0.293083 | 0.239156 | 0.143025 | 0.143025 | 0.143025 | 0 | 0 | 0.04698 | 0.2534 | 1,397 | 37 | 183 | 37.756757 | 0.770853 | 0.032212 | 0 | 0.483871 | 1 | 0 | 0.221481 | 0.017037 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.064516 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03d7f39caab28a9c35a6797a159ddc4799ba288c | 618 | py | Python | python/sort/BubbleSort.py | smdsbz/homework | 6cac5cc006543bc0787ef4219e72f314ee04083e | [
"MIT"
] | 5 | 2017-05-21T15:36:27.000Z | 2018-01-01T09:47:26.000Z | python/sort/BubbleSort.py | smdsbz/homework | 6cac5cc006543bc0787ef4219e72f314ee04083e | [
"MIT"
] | null | null | null | python/sort/BubbleSort.py | smdsbz/homework | 6cac5cc006543bc0787ef4219e72f314ee04083e | [
"MIT"
] | null | null | null | #!/usr/bin/python3
'''
BubbleSort.py
by Xiaoguang Zhu
'''
array = []
print("Enter at least two numbers to start bubble-sorting.")
print("(You can end inputing anytime by entering nonnumeric)")
# get numbers
while True:
try:
array.append(float(input(">> ")))
except ValueError: # exit inputting
break
print("\nThe array you've entered was:"); print(array)
print("\nNow sorting...")
# sorting
for x in range(len(array)-1, 0, -1):
for y in range(x):
if array[y] > array[y+1]:
array[y], array[y+1] = array[y+1], array[y]
print(array)
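# Optional refinement (not in the original script): bubble sort can stop early once a
# full pass makes no swaps, e.g.
#   for x in range(len(array)-1, 0, -1):
#       swapped = False
#       for y in range(x):
#           if array[y] > array[y+1]:
#               array[y], array[y+1] = array[y+1], array[y]
#               swapped = True
#       if not swapped:
#           break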
# output
print("\nAll done! Now the moment of truth!")
print(array)
| 19.3125 | 62 | 0.665049 | 98 | 618 | 4.193878 | 0.602041 | 0.087591 | 0.051095 | 0.087591 | 0.094891 | 0.077859 | 0.077859 | 0 | 0 | 0 | 0 | 0.01354 | 0.16343 | 618 | 31 | 63 | 19.935484 | 0.781431 | 0.144013 | 0 | 0.117647 | 0 | 0 | 0.367505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.411765 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
03eb9e9a2678ce2856ad1cc39eac15c2a16bbcc9 | 431 | py | Python | Section_7/word_count_repo/src/word_count.py | PacktPublishing/Software-Engineering-with-Python-3.x | 056e4c89e4f8d7fc4a4095ee0671d6944a86630e | [
"MIT"
] | 1 | 2020-02-02T13:55:29.000Z | 2020-02-02T13:55:29.000Z | Section_7/word_count_repo/src/word_count.py | PacktPublishing/Software-Engineering-with-Python-3.x | 056e4c89e4f8d7fc4a4095ee0671d6944a86630e | [
"MIT"
] | null | null | null | Section_7/word_count_repo/src/word_count.py | PacktPublishing/Software-Engineering-with-Python-3.x | 056e4c89e4f8d7fc4a4095ee0671d6944a86630e | [
"MIT"
] | 2 | 2020-02-09T12:41:40.000Z | 2020-09-21T02:16:06.000Z | from project_utils import dict_to_file, get_word_count
if __name__ == "__main__":
inp_filename = 'sample.txt'
out_filename = 'count.csv'
print("Reading file ", inp_filename)
word_dict = get_word_count(inp_filename)
print("Output from get_word_count is")
print(word_dict)
print("Writing to file named", out_filename)
dict_to_file(word_dict, out_filename)
print("Done processing!")
| 19.590909 | 54 | 0.703016 | 60 | 431 | 4.583333 | 0.45 | 0.065455 | 0.130909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201856 | 431 | 21 | 55 | 20.52381 | 0.799419 | 0 | 0 | 0 | 0 | 0 | 0.24594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
03ef1f344c45295f3dabd049b11b142929115048 | 1,671 | py | Python | Chapter05/airflow/dags/classification_pipeline_dag.py | arifmudi/Machine-Learning-Engineering-with-Python | 05c3fb9ae9fb9124a13812f59f8e681d66832d3b | [
"MIT"
] | 67 | 2021-01-31T19:43:15.000Z | 2022-03-27T08:03:56.000Z | Chapter05/airflow/dags/classification_pipeline_dag.py | arifmudi/Machine-Learning-Engineering-with-Python | 05c3fb9ae9fb9124a13812f59f8e681d66832d3b | [
"MIT"
] | null | null | null | Chapter05/airflow/dags/classification_pipeline_dag.py | arifmudi/Machine-Learning-Engineering-with-Python | 05c3fb9ae9fb9124a13812f59f8e681d66832d3b | [
"MIT"
] | 35 | 2021-02-08T14:34:46.000Z | 2022-03-18T16:06:09.000Z | from datetime import timedelta
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.utils.dates import days_ago
default_args = {
'owner': 'Andrew McMahon',
'depends_on_past': False,
'start_date': days_ago(2),
'email': ['example@example.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=2),
# 'queue': 'bash_queue',
# 'pool': 'backfill',
# 'priority_weight': 10,
# 'end_date': datetime(2016, 1, 1),
# 'wait_for_downstream': False,
# 'dag': dag,
# 'sla': timedelta(hours=2),
# 'execution_timeout': timedelta(seconds=300),
# 'on_failure_callback': some_function,
# 'on_success_callback': some_other_function,
# 'on_retry_callback': another_function,
# 'sla_miss_callback': yet_another_function,
# 'trigger_rule': 'all_success'
}
#instantiate DAG
dag = DAG(
'classification_pipeline',
default_args=default_args,
description='Basic pipeline for classifying the Wine Dataset',
schedule_interval=timedelta(days=1), # run daily? check
)
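# Illustrative local check (added; the exact command depends on the installed Airflow version):
#   airflow test classification_pipeline get_data 2021-01-01        # Airflow 1.x
#   airflow tasks test classification_pipeline get_data 2021-01-01  # Airflow 2.x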
get_data = BashOperator(
task_id='get_data',
bash_command='python3 /usr/local/airflow/scripts/get_data.py',
dag=dag,
)
train_model= BashOperator(
task_id='train_model',
depends_on_past=False,
bash_command='python3 /usr/local/airflow/scripts/train_model.py',
retries=3,
dag=dag,
)
# Persist to MLFlow
persist_model = BashOperator(
task_id='persist_model',
depends_on_past=False,
bash_command='python ……./persist_model.py',
retries=3,
dag=dag,
)
get_data >> train_model >> persist_model
| 26.109375 | 69 | 0.691801 | 211 | 1,671 | 5.222749 | 0.450237 | 0.032668 | 0.03539 | 0.049002 | 0.162432 | 0.162432 | 0.124319 | 0 | 0 | 0 | 0 | 0.014535 | 0.176541 | 1,671 | 63 | 70 | 26.52381 | 0.781977 | 0.273489 | 0 | 0.175 | 0 | 0 | 0.222222 | 0.085213 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
03f5f0261dae2f37b5e7db37db0f4a97a9efea20 | 3,860 | py | Python | examples/kcs.py | WeiZhixiong/ksc-sdk-python | a93237ce376e107eaae644678ef6b99819a9f8eb | [
"Apache-2.0"
] | 53 | 2016-09-21T15:52:14.000Z | 2021-12-23T09:23:00.000Z | examples/kcs.py | WeiZhixiong/ksc-sdk-python | a93237ce376e107eaae644678ef6b99819a9f8eb | [
"Apache-2.0"
] | 27 | 2016-09-21T15:24:43.000Z | 2021-11-18T08:38:38.000Z | examples/kcs.py | WeiZhixiong/ksc-sdk-python | a93237ce376e107eaae644678ef6b99819a9f8eb | [
"Apache-2.0"
] | 68 | 2016-09-06T10:33:09.000Z | 2021-11-16T07:13:03.000Z | # -*- encoding:utf-8 -*-
from kscore.session import get_session
if __name__ == "__main__":
s = get_session()
# Specify the service name and the region
kcsClient = s.create_client("kcs", "cn-shanghai-3", use_ssl=False)
# Calling the DescribeCacheReadonlyNode API requires the kcsv2 service name
#kcsv2Client = s.create_client("kcsv2", "cn-shanghai-3", use_ssl=False)
# Create a cache cluster
#print(kcsClient.create_cache_cluster(**{'Name': 'pjl_sdk_test0921', 'Capacity': 1, 'NetType': 2, 'VpcId': '3c12ccdf-9b8f-4d9b-8aa6-a523897e97a1', 'VnetId': '293c16a5-c757-405c-a693-3b2a3adead50'}))
# List cache clusters
#print(kcsClient.describe_cache_clusters(**{'Offset': 0, 'Limit': 5, 'OrderBy': 'created,desc'}))
# Describe a single cache cluster
#print(kcsClient.describe_cache_cluster(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb'}))
# Rename a cache cluster
#print(kcsClient.rename_cache_cluster(**{'Name': 'pjl_test_sdk', 'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb'}))
# Flush a cache cluster
#print(kcsClient.flush_cache_cluster(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb'}))
# Resize a cache cluster
#print(kcsClient.resize_cache_cluster(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb', 'Capacity': 2}))
# Delete a cache cluster
#print(kcsClient.delete_cache_cluster(CacheId='b80ef266-dd52-47b2-9377-6a4a73626c19'))
# Describe cache cluster parameters
#print(kcsClient.describe_cache_parameters(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb'}))
# Set cache cluster parameters
#print(kcsClient.set_cache_parameters(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb', 'Parameters.ParameterName.1': 'maxmemory-policy', 'Parameters.ParameterValue.1': 'allkeys-lru', 'ResetAllParameters': 'true'}))
# Describe cache security rules
#print(kcsClient.describe_cache_security_rules(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb'}))
# Set cache security rules
#print(kcsClient.set_cache_security_rules(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb', 'SecurityRules.Cidr.1': '192.168.18.17/21'}))
# Delete a cache security rule
#print(kcsClient.delete_cache_security_rule(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb', 'SecurityRuleId': 105}))
# Describe the read-only nodes of an instance
#print(kcsv2Client.describe_cache_readonly_node(**{'CacheId': '01988fc0-6041-49d2-b6b5-e2385e5d5edb'}))
# Describe availability zones
#print(kcsClient.describe_availability_zones(**{'Engine': 'redis', 'Mode': 1}))
# Describe regions
#print(kcsClient.describe_regions(**{'Engine': 'redis', 'Mode': 1}))
# Create a security group
# print(kcsClient.create_security_group(**{'AvailableZone': 'az', 'Name': 'testPythonSdk', 'Description': 'testPythonSdk'}))
# Clone a security group
# print(kcsClient.clone_security_group(**{'AvailableZone': 'az', 'Name': 'testPythonSdkClone', 'Description': 'testPythonSdkClone', 'SrcSecurityGroupId': 'srcSecurityGroupId'}))
# Delete a security group
# print(kcsClient.delete_security_group(**{'AvailableZone': 'az', 'SecurityGroupId.1': 'securityGroupId'}))
# Modify a security group
# print(kcsClient.modify_security_group(**{'AvailableZone': 'az', 'Name': 'testPythonSdk777', 'Description': 'testPythonSdk777', 'SecurityGroupId': 'securityGroupId'}))
# List security groups
# print(kcsClient.describe_security_groups(**{'AvailableZone': 'az'}))
# Describe a single security group
# print(kcsClient.describe_security_group(**{'AvailableZone': 'az', 'SecurityGroupId': 'securityGroupId'}))
# Bind a security group to instances
# print(kcsClient.allocate_security_group(**{'AvailableZone': 'az', 'CacheId.1': 'cacheId', 'SecurityGroupId.1': 'securityGroupId'}))
# Unbind a security group from instances
# print(kcsClient.deallocate_security_group(**{'AvailableZone': 'az', 'CacheId.1': 'cacheId', 'SecurityGroupId': 'securityGroupId'}))
# Create a security group rule
# print(kcsClient.create_security_group_rule(**{'AvailableZone': 'az', 'SecurityGroupId': 'securityGroupId', 'Cidrs.1': "172.10.12.0/16"}))
# Delete a security group rule
# print(kcsClient.delete_security_group_rule(**{'AvailableZone': 'az', 'SecurityGroupId': 'securityGroupId', 'SecurityGroupRuleId.1': 'securityGroupRuleId'})) | 45.411765 | 223 | 0.70285 | 389 | 3,860 | 6.786632 | 0.380463 | 0.127273 | 0.07197 | 0.087121 | 0.393182 | 0.293939 | 0.232955 | 0.185985 | 0.043182 | 0 | 0 | 0.096453 | 0.116321 | 3,860 | 85 | 224 | 45.411765 | 0.677514 | 0.868912 | 0 | 0 | 0 | 0 | 0.053097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ff011eb3b30a5dcec0975d8afd7f72454da4922d | 4,378 | py | Python | projects/migrations/0001_initial.py | louisenje/project-rate | b11e209bebdf59983d967864a049538b2807acd2 | [
"MIT"
] | null | null | null | projects/migrations/0001_initial.py | louisenje/project-rate | b11e209bebdf59983d967864a049538b2807acd2 | [
"MIT"
] | null | null | null | projects/migrations/0001_initial.py | louisenje/project-rate | b11e209bebdf59983d967864a049538b2807acd2 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.3 on 2021-06-02 07:26
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='NewsLetterRecipients',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=30)),
('email', models.EmailField(max_length=254)),
],
),
migrations.CreateModel(
name='Profile',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('profile_pic', models.ImageField(blank=True, default='media/profile/male.png', upload_to='profile/')),
('bio', models.TextField(blank=True, default='*No Bio*')),
('phone_no', models.IntegerField(blank=True, null=True)),
('gender', models.CharField(blank=True, max_length=10)),
('pub_date', models.DateTimeField(auto_now_add=True)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='webapps',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100)),
('main_picture', models.ImageField(default='webapps/internet.png', upload_to='webapps/')),
('screenshot1', models.ImageField(blank=True, default='webapps/internet.png', upload_to='webapps/')),
('screenshot2', models.ImageField(blank=True, default='webapps/internet.png', upload_to='webapps/')),
('screenshot3', models.ImageField(blank=True, default='webapps/internet.png', upload_to='webapps/')),
('link', models.CharField(max_length=200)),
('description', models.TextField()),
('pub_date', models.DateTimeField(auto_now_add=True)),
('profile', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='projects.profile')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-pub_date'],
},
),
migrations.CreateModel(
name='ratings',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('rate_by_design', models.IntegerField(choices=[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9), (10, 10)], default=0)),
('rate_by_usability', models.IntegerField(choices=[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9), (10, 10)], default=0)),
('rate_by_content', models.IntegerField(choices=[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9), (10, 10)], default=0)),
('rate_by_creativity', models.IntegerField(choices=[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9), (10, 10)], default=0)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
('webapp', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='projects.webapps')),
],
),
migrations.CreateModel(
name='comment',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('comment', models.CharField(blank=True, max_length=80)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
('webapp', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='projects.webapps')),
],
),
]
| 56.128205 | 171 | 0.574006 | 497 | 4,378 | 4.93159 | 0.221328 | 0.029376 | 0.045696 | 0.071807 | 0.655243 | 0.642187 | 0.615259 | 0.598939 | 0.566299 | 0.566299 | 0 | 0.040524 | 0.250343 | 4,378 | 77 | 172 | 56.857143 | 0.706277 | 0.010279 | 0 | 0.428571 | 1 | 0 | 0.117063 | 0.00508 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.042857 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ff098027575fba502952fc4d4e830126b3044b28 | 3,086 | py | Python | src/access.py | zimingd/packer-rstudio | 935360c55c292b06969fe157da95b74351e4d3c1 | [
"Apache-2.0"
] | null | null | null | src/access.py | zimingd/packer-rstudio | 935360c55c292b06969fe157da95b74351e4d3c1 | [
"Apache-2.0"
] | 7 | 2020-04-14T17:00:48.000Z | 2022-03-03T00:39:21.000Z | src/access.py | zimingd/packer-rstudio | 935360c55c292b06969fe157da95b74351e4d3c1 | [
"Apache-2.0"
] | 4 | 2020-05-20T16:47:22.000Z | 2021-05-25T15:16:00.000Z | #!/usr/bin/env python3
import jwt
import requests
import base64
import json
import boto3
import time
import functools
import os
from mod_python import apache
region = json.loads(requests.get('http://169.254.169.254/latest/dynamic/instance-identity/document').text)['region']
ssm_parameter_name_env_var = 'SYNAPSE_TOKEN_AWS_SSM_PARAMETER_NAME'
kms_alias_env_var = 'KMS_KEY_ALIAS'
def headerparserhandler(req):
jwt_str = req.headers_in['x-amzn-oidc-data'] #proxy.conf ensures this header exists
try:
payload = jwt_payload(jwt_str)
if payload['userid'] == approved_user() and payload['exp'] > time.time():
store_to_ssm(req.headers_in['x-amzn-oidc-accesstoken'])
return apache.OK
else:
return apache.HTTP_UNAUTHORIZED #the userid claim does not match the approved user, or the token has expired
except Exception:
# if the JWT payload is invalid
return apache.HTTP_UNAUTHORIZED
def approved_user():
instance_id = requests.get('http://169.254.169.254/latest/meta-data/instance-id').text
ec2 = boto3.resource('ec2',region)
vm = ec2.Instance(instance_id)
#TODO handle exception on multiple tags in this list
for tags in vm.tags:
if tags["Key"] == 'Protected/AccessApprovedCaller':
approved_caller = tags["Value"]
return approved_caller.split(':')[1] #return userid portion of tag
# taking advantage of lru cache to avoid re-putting the same access token to
# SSM Parameter Store.
# According to functools source code, arguments (i.e. the access token) are hashed,
# not stored as-is in memory
@functools.lru_cache(maxsize=1)
def store_to_ssm(access_token):
parameter_name = os.environ.get(ssm_parameter_name_env_var)
kms_key_alias = os.environ.get(kms_alias_env_var)
if not (parameter_name):
# just exit early if the parameter name to store in SSM is not found
return
ssm_client = boto3.client('ssm', region)
kms_client = boto3.client('kms', region)
key_id = kms_client.describe_key(KeyId=kms_key_alias)['KeyMetadata']['KeyId']
ssm_client.put_parameter(
Name=parameter_name,
Type='SecureString',
Value=access_token,
KeyId=key_id,
Overwrite=True
)
def jwt_payload(encoded_jwt):
# The x-amzn-oidc-data header is a base64-encoded JWT signed by the ALB
# validating the signature of the JWT means the payload is authentic
# per http://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
# Step 1: Get the key id from JWT headers (the kid field)
#encoded_jwt = headers.dict['x-amzn-oidc-data']
jwt_headers = encoded_jwt.split('.')[0]
decoded_jwt_headers = base64.b64decode(jwt_headers).decode("utf-8")
decoded_json = json.loads(decoded_jwt_headers)
kid = decoded_json['kid']
# Step 2: Get the public key from regional endpoint
pub_key = get_aws_elb_public_key(region, kid)
# Step 3: Get the payload
return jwt.decode(encoded_jwt, pub_key, algorithms=['ES256'])
@functools.lru_cache()
def get_aws_elb_public_key(region, key_id):
url = f'https://public-keys.auth.elb.{region}.amazonaws.com/{key_id}'
return requests.get(url).text
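# Illustrative Apache wiring (added; the actual proxy.conf referenced above is not shown here):
#   PythonHeaderParserHandler access
# mod_python then calls headerparserhandler(req) above for each request, returning apache.OK
# only when the ALB-issued JWT names the approved user and has not yet expired.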
| 33.182796 | 116 | 0.745949 | 473 | 3,086 | 4.691332 | 0.382664 | 0.046868 | 0.021631 | 0.016224 | 0.102749 | 0.070302 | 0.029743 | 0.029743 | 0 | 0 | 0 | 0.018961 | 0.145496 | 3,086 | 92 | 117 | 33.543478 | 0.822526 | 0.292288 | 0 | 0.034483 | 0 | 0 | 0.169898 | 0.04109 | 0 | 0 | 0 | 0.01087 | 0 | 1 | 0.086207 | false | 0 | 0.155172 | 0 | 0.362069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ff0c20d8ce541a65a62a8c8b06da2596d5632606 | 16,877 | py | Python | perses/tests/test_atom_mapping.py | schallerdavid/perses | 58bd6e626e027879e136f56e175683893e016f8c | [
"MIT"
] | 99 | 2016-01-19T18:10:37.000Z | 2022-03-26T02:43:08.000Z | perses/tests/test_atom_mapping.py | schallerdavid/perses | 58bd6e626e027879e136f56e175683893e016f8c | [
"MIT"
] | 878 | 2015-09-18T19:25:30.000Z | 2022-03-31T02:33:04.000Z | perses/tests/test_atom_mapping.py | schallerdavid/perses | 58bd6e626e027879e136f56e175683893e016f8c | [
"MIT"
] | 30 | 2015-09-21T15:26:35.000Z | 2022-01-10T20:07:24.000Z | import os
import pytest
import unittest
from perses.rjmc.atom_mapping import AtomMapper, AtomMapping, InvalidMappingException
from openff.toolkit.topology import Molecule
################################################################################
# LOGGER
################################################################################
import logging
logging.basicConfig(level = logging.NOTSET)
_logger = logging.getLogger("atom_mapping")
_logger.setLevel(logging.INFO)
################################################################################
# AtomMapping
################################################################################
class TestAtomMapping(unittest.TestCase):
"""Test AtomMapping object."""
def setUp(self):
"""Create useful common objects for testing."""
self.old_mol = Molecule.from_smiles('[C:0]([H:1])([H:2])([H:3])[C:4]([H:5])([H:6])([H:7])') # ethane
self.new_mol = Molecule.from_smiles('[C:0]([H:1])([H:2])([H:3])[C:4]([H:5])([H:6])[O:7][H:8]') # ethanol
self.old_to_new_atom_map = { 0:0, 4:4 }
self.new_to_old_atom_map = dict(map(reversed, self.old_to_new_atom_map.items()))
def test_create(self):
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
assert atom_mapping.old_to_new_atom_map == self.old_to_new_atom_map
assert atom_mapping.new_to_old_atom_map == self.new_to_old_atom_map
assert atom_mapping.n_mapped_atoms == 2
atom_mapping = AtomMapping(self.old_mol, self.old_mol, new_to_old_atom_map=self.new_to_old_atom_map)
assert atom_mapping.old_to_new_atom_map == self.old_to_new_atom_map
assert atom_mapping.new_to_old_atom_map == self.new_to_old_atom_map
assert atom_mapping.n_mapped_atoms == 2
def test_validation_fail(self):
# Empty mapping
with pytest.raises(InvalidMappingException) as excinfo:
atom_mapping = AtomMapping(self.old_mol, self.new_mol, { })
# Non-integers
with pytest.raises(InvalidMappingException) as excinfo:
atom_mapping = AtomMapping(self.old_mol, self.new_mol, { 0:0, 4:4, 5:'a' })
# Invalid atom indices
with pytest.raises(InvalidMappingException) as excinfo:
atom_mapping = AtomMapping(self.old_mol, self.new_mol, { 0:0, 4:4, 9:9 })
# Duplicated atom indices
with pytest.raises(InvalidMappingException) as excinfo:
atom_mapping = AtomMapping(self.old_mol, self.new_mol, { 0:0, 4:4, 3:4 })
def test_set_and_get_mapping(self):
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
# Set old-to-new map
atom_mapping.old_to_new_atom_map = self.old_to_new_atom_map
assert atom_mapping.old_to_new_atom_map == self.old_to_new_atom_map
assert atom_mapping.new_to_old_atom_map == self.new_to_old_atom_map
# Set new-to-old map
atom_mapping.new_to_old_atom_map = self.new_to_old_atom_map
assert atom_mapping.old_to_new_atom_map == self.old_to_new_atom_map
assert atom_mapping.new_to_old_atom_map == self.new_to_old_atom_map
def test_repr(self):
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
repr(atom_mapping)
def test_str(self):
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
str(atom_mapping)
def test_render_image(self):
import tempfile
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
for suffix in ['.pdf', '.png', '.svg']:
with tempfile.NamedTemporaryFile(suffix=suffix) as tmpfile:
atom_mapping.render_image(tmpfile.name)
def test_ring_breaking_detection(self):
# Test simple ethane -> ethanol transformation
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
assert atom_mapping.creates_or_breaks_rings() == False
# Define benzene -> naphthalene transformation
old_mol = Molecule.from_smiles('[c:0]1[c:1][c:2][c:3][c:4][c:5]1') # benzene
new_mol = Molecule.from_smiles('[c:0]12[c:1][c:2][c:3][c:4][c:5]2[c:6][c:7][c:8][c:9]1') # naphthalene
old_to_new_atom_map = { 0:0, 1:1, 2:2, 3:3, 4:4, 5:5 }
new_to_old_atom_map = dict(map(reversed, self.old_to_new_atom_map.items()))
atom_mapping = AtomMapping(old_mol, new_mol, old_to_new_atom_map=old_to_new_atom_map)
assert atom_mapping.creates_or_breaks_rings() == True
def test_unmap_partially_mapped_cycles(self):
# Test simple ethane -> ethanol transformation
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
n_mapped_atoms_old = atom_mapping.n_mapped_atoms
atom_mapping.unmap_partially_mapped_cycles()
assert atom_mapping.n_mapped_atoms == n_mapped_atoms_old
# Test methyl-cyclohexane -> methyl-cyclopentane, demapping the ring transformation
old_mol = Molecule.from_smiles('[C:0][C:1]1[C:2][C:3][C:4][C:5][C:6]1') # methyl-cyclohexane
new_mol = Molecule.from_smiles('[C:0][C:1]1[C:2][C:3][C:4][C:5]1') # methyl-cyclopentane
old_to_new_atom_map = { 0:0, 1:1, 2:2, 3:3, 5:4, 6:5 }
new_to_old_atom_map = dict(map(reversed, old_to_new_atom_map.items()))
atom_mapping = AtomMapping(old_mol, new_mol, old_to_new_atom_map=old_to_new_atom_map)
assert atom_mapping.creates_or_breaks_rings() == True
atom_mapping.unmap_partially_mapped_cycles()
assert atom_mapping.old_to_new_atom_map == {0:0} # only methyl group should remain mapped
def test_preserve_chirality(self):
# Test simple ethane -> ethanol transformation
atom_mapping = AtomMapping(self.old_mol, self.old_mol, old_to_new_atom_map=self.old_to_new_atom_map)
n_mapped_atoms_old = atom_mapping.n_mapped_atoms
atom_mapping.preserve_chirality()
assert atom_mapping.n_mapped_atoms == n_mapped_atoms_old
# Test resolution of incorrect stereochemistry
old_mol = Molecule.from_smiles('[C@H:0]([Cl:1])([Br:2])([F:3])')
new_mol = Molecule.from_smiles('[C@@H:0]([Cl:1])([Br:2])([F:3])')
atom_mapping = AtomMapping(old_mol, new_mol, old_to_new_atom_map={0:0, 1:1, 2:2, 3:3})
atom_mapping.preserve_chirality()
assert atom_mapping.old_to_new_atom_map == {0:0, 1:1, 2:2, 3:3} # TODO: Check this
################################################################################
# AtomMapper
################################################################################
class TestAtomMapper(unittest.TestCase):
def setUp(self):
self.molecules = dict()
for dataset_name in ['CDK2', 'p38', 'Tyk2', 'Thrombin', 'PTP1B', 'MCL1', 'Jnk1', 'Bace']:
# Read molecules
from pkg_resources import resource_filename
dataset_path = 'data/schrodinger-jacs-datasets/%s_ligands.sdf' % dataset_name
sdf_filename = resource_filename('perses', dataset_path)
self.molecules[dataset_name] = Molecule.from_file(sdf_filename, allow_undefined_stereo=True)
def test_molecular_atom_mapping(self):
"""Test the creation of atom maps between pairs of molecules from the JACS benchmark set.
"""
for use_positions in [True, False]:
for allow_ring_breaking in [True, False]:
# Create and configure an AtomMapper
atom_mapper = AtomMapper(use_positions=use_positions, allow_ring_breaking=allow_ring_breaking)
# Test mappings for JACS dataset ligands
# TODO: Uncomment other test datasets
for dataset_name in ['CDK2', 'p38', 'Tyk2', 'Thrombin', 'PTP1B', 'MCL1', 'Jnk1', 'Bace']:
molecules = self.molecules[dataset_name]
# Build atom map for some transformations.
#from itertools import combinations
#for old_index, new_index in combinations(range(len(molecules)), 2): # exhaustive test is too slow
old_index = 0
for new_index in range(1, len(molecules), 3): # skip every few molecules to keep test times down
try:
atom_mapping = atom_mapper.get_best_mapping(molecules[old_index], molecules[new_index])
# TODO: Perform quality checks
# Render mapping for visual inspection
#filename = f'mapping-{dataset_name}-use_positions={use_positions}-allow_ring_breaking={allow_ring_breaking}-{old_index}-to-{new_index}.png'
#atom_mapping.render_image(filename)
except Exception as e:
e.args += (f'Exception encountered for {dataset_name} use_positions={use_positions} allow_ring_breaking={allow_ring_breaking}: {old_index} {molecules[old_index]}-> {new_index} {molecules[new_index]}', )
raise e
def test_map_strategy(self):
"""
Test the creation of atom maps between pairs of molecules from the JACS benchmark set.
"""
# Create and configure an AtomMapper
from openeye import oechem
atom_expr = oechem.OEExprOpts_IntType
bond_expr = oechem.OEExprOpts_RingMember
atom_mapper = AtomMapper(atom_expr=atom_expr, bond_expr=bond_expr)
# Test mappings for JACS dataset ligands
for dataset_name in ['Jnk1']:
molecules = self.molecules[dataset_name]
# Jnk1 ligands 0 and 2 have meta substituents that face opposite each other in the active site.
# When ignoring position information, the mapper should align these groups, and put them both in the core.
# When using position information, the mapper should see that the orientations differ and choose
# to unmap them (i.e. leave both these groups out of the core) so as to get the geometry right at the expense of
# mapping fewer atoms
# Ignore positional information when scoring mappings
atom_mapper.use_positions = False
atom_mapping = atom_mapper.get_best_mapping(molecules[0], molecules[2])
#assert len(atom_mapping.new_to_old_atom_map) == 36, f'Expected meta groups methyl C to map onto ethyl O\n{atom_mapping}' # TODO
# Use positional information to score mappings
atom_mapper.use_positions = True
atom_mapping = atom_mapper.get_best_mapping(molecules[0], molecules[2])
#assert len(atom_mapping.new_to_old_atom_map) == 35, f'Expected meta groups methyl C to NOT map onto ethyl O as they are distal in cartesian space\n{atom_mapping}' # TODO
def test_generate_atom_mapping_from_positions(self):
"""
Test the generation of atom mappings from positions on JACS set compounds
"""
# Create and configure an AtomMapper
atom_mapper = AtomMapper()
# Exclude datasets that contain displaced ligands:
# 'p38', 'PTP1B', 'MCL1',
for dataset_name in ['CDK2', 'Tyk2', 'Thrombin', 'Jnk1', 'Bace']:
molecules = self.molecules[dataset_name]
reference_molecule = molecules[0]
for index, target_molecule in enumerate(molecules):
# Explicitly construct mapping from positional information alone
try:
atom_mapping = atom_mapper.generate_atom_mapping_from_positions(reference_molecule, target_molecule)
except InvalidMappingException as e:
e.args = e.args + (f'dataset: {dataset_name}: molecule 0 -> {index}',)
raise e
def test_atom_mappings_moonshot(self):
"""
Test the generation of atom mappings on COVID Moonshot compounds
"""
# Create and configure an AtomMapper
atom_mapper = AtomMapper()
# Load molecules with positions
from pkg_resources import resource_filename
dataset_path = 'data/covid-moonshot/sprint-10-2021-07-26-x10959-dimer-neutral.sdf.gz'
sdf_filename = resource_filename('perses', dataset_path)
molecules = Molecule.from_file(sdf_filename)
# Take a subset
nskip = 20
molecules = molecules[::nskip]
# Test geometry-derived mappings
reference_molecule = molecules[0]
for index, molecule in enumerate(molecules):
# Ignore positional information when scoring mappings
atom_mapper.use_positions = False
atom_mapping = atom_mapper.get_best_mapping(molecules[0], molecules[2])
#assert len(atom_mapping.new_to_old_atom_map) == 36, f'Expected meta groups methyl C to map onto ethyl O\n{atom_mapping}' # TODO
# Use positional information to score mappings
atom_mapper.use_positions = True
atom_mapping = atom_mapper.get_best_mapping(molecules[0], molecules[2])
#assert len(atom_mapping.new_to_old_atom_map) == 35, f'Expected meta groups methyl C to NOT map onto ethyl O as they are distal in cartesian space\n{atom_mapping}' # TODO
# Explicitly construct mapping from positional information alone
atom_mapping = atom_mapper.generate_atom_mapping_from_positions(reference_molecule, molecule)
def test_simple_heterocycle_mapping(self):
"""
Test the ability to map conjugated heterocycles (that preserves all rings). Will assert that the number of ring members in both molecules is the same.
"""
# TODO: generalize this to test for ring breakage and closure.
iupac_pairs = [
('benzene', 'pyridine')
]
# Create and configure an AtomMapper
atom_mapper = AtomMapper(allow_ring_breaking=False)
for old_iupac, new_iupac in iupac_pairs:
old_mol = Molecule.from_iupac(old_iupac)
new_mol = Molecule.from_iupac(new_iupac)
atom_mapping = atom_mapper.get_best_mapping(old_mol, new_mol)
assert len(atom_mapping.old_to_new_atom_map) > 0
def test_mapping_strength_levels(self):
"""Test the mapping strength defaults work as expected"""
# SMILES pairs to test mappings
tests = [
('c1ccccc1', 'C1CCCCC1', {'default': 0, 'weak' : 6, 'strong' : 0}), # benzene -> cyclohexane
('CNC1CCCC1', 'CNC1CCCCC1', {'default': 6, 'weak' : 6, 'strong' : 6}), # https://github.com/choderalab/perses/issues/805#issue-913932127
('c1ccccc1CNC2CCC2', 'c1ccccc1CNCC2CCC2', {'default': 13, 'weak' : 13, 'strong' : 11}), # https://github.com/choderalab/perses/issues/805#issue-913932127
('Cc1ccccc1','c1ccc(cc1)N', {'default': 12, 'weak' : 12, 'strong' : 11}),
('CC(c1ccccc1)','O=C(c1ccccc1)', {'default': 13, 'weak' : 14, 'strong' : 11}),
('Oc1ccccc1','Sc1ccccc1', {'default': 12, 'weak' : 12, 'strong' : 11}),
]
DEBUG_MODE = True # If True, don't fail, but print results of tests for calibration
for mol1_smiles, mol2_smiles, expected_results in tests:
for map_strength, expected_n_mapped_atoms in expected_results.items():
# Create OpenFF Molecule objects
mol1 = Molecule.from_smiles(mol1_smiles)
mol2 = Molecule.from_smiles(mol2_smiles)
# Initialize the atom mapper with the requested mapping strength
atom_mapper = AtomMapper(map_strength=map_strength, allow_ring_breaking=False)
# Create the atom mapping
atom_mapping = atom_mapper.get_best_mapping(mol1, mol2)
if DEBUG_MODE:
if atom_mapping is not None:
_logger.info(f'{mol1_smiles} -> {mol2_smiles} using map strength {map_strength} : {atom_mapping.n_mapped_atoms} atoms mapped : {atom_mapping.old_to_new_atom_map}')
atom_mapping.render_image(f'test_mapping_strength_levels:{mol1_smiles}:{mol2_smiles}:{map_strength}.png')
else:
_logger.info(f'{mol1_smiles} -> {mol2_smiles} using map strength {map_strength} : {atom_mapping}')
else:
# Check that expected number of mapped atoms are provided
n_mapped_atoms = 0
if atom_mapping is not None:
n_mapped_atoms = atom_mapping.n_mapped_atoms
assert n_mapped_atoms==expected_n_mapped_atoms, "Number of mapped atoms does not match hand-calibrated expectation"
| 54.092949 | 230 | 0.642591 | 2,228 | 16,877 | 4.586176 | 0.160682 | 0.082893 | 0.032883 | 0.04815 | 0.605696 | 0.573791 | 0.550303 | 0.497749 | 0.475436 | 0.432864 | 0 | 0.022662 | 0.236535 | 16,877 | 311 | 231 | 54.266881 | 0.770353 | 0.229839 | 0 | 0.372222 | 0 | 0.05 | 0.115094 | 0.056047 | 0 | 0 | 0 | 0.016077 | 0.105556 | 1 | 0.094444 | false | 0 | 0.055556 | 0 | 0.161111 | 0.005556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ff12374036cabc8d3ecc65f6a2e6dd1c7c2493d3 | 5,171 | py | Python | tex2ebook.py | rzoller/tex2ebook | 57859343e2e4fd31a5701ee834019a5e7b9e8128 | [
"Apache-2.0"
] | 13 | 2015-01-03T13:07:07.000Z | 2017-01-03T16:06:28.000Z | tex2ebook.py | rkaravia/tex2ebook | 57859343e2e4fd31a5701ee834019a5e7b9e8128 | [
"Apache-2.0"
] | 1 | 2020-11-05T13:31:02.000Z | 2020-11-05T13:31:03.000Z | tex2ebook.py | rzoller/tex2ebook | 57859343e2e4fd31a5701ee834019a5e7b9e8128 | [
"Apache-2.0"
] | 6 | 2015-03-30T05:13:25.000Z | 2019-05-16T14:05:03.000Z | # run with --help to see available options
import os, sys, tempfile, shutil, re
from optparse import OptionParser
log_dir = os.path.abspath('_log')
def get_working_dir(texfile, log):
if log:
# create a subdirectory in _log
if not os.path.exists(log_dir):
os.makedirs(log_dir)
subdir = os.path.join(log_dir, '%s-files' % os.path.splitext(os.path.basename(texfile))[0])
working_dir = os.path.join(log_dir, subdir)
if os.path.exists(working_dir):
shutil.rmtree(working_dir)
os.mkdir(working_dir)
return working_dir
else:
# create a temporary directory in system tmp
return tempfile.mkdtemp()
# convert all files listed in indexfile
def batch(indexfile, log, ebook_ext):
print "--- Using batch file %s" % indexfile
indexroot = os.path.abspath(os.path.dirname(indexfile))
for texfilerel in open(indexfile):
texfile = os.path.join(indexroot, texfilerel.strip())
convert(texfile, log, ebook_ext)
# convert a single file
def convert(texfile, log, ebook_ext, dest=None):
print "--- Converting file %s" % texfile
basename = os.path.basename(texfile)
title = os.path.splitext(basename)[0]
working_dir = get_working_dir(texfile, log)
print "--- Working dir is %s" % working_dir
os.chdir(os.path.join('./', os.path.dirname(texfile)))
html = os.path.join(working_dir, '%s.html' % title)
log_hevea = os.path.join(working_dir, 'hevea.log')
hevea = 'hevea %s -o %s >> %s' % (basename, html, log_hevea)
print "--- Invoking hevea..."
print hevea
os.system(hevea)
os.system('bibhva %s >> %s' % (os.path.join(working_dir, title), log_hevea))
os.system(hevea)
os.system(hevea)
imagen = 'imagen -pdf %s >> %s' % (os.path.join(working_dir, title), log_hevea)
print "--- Invoking imagen..."
print imagen
os.system(imagen)
if dest == None:
dest = '%s.%s' % (title, ebook_ext)
# add extension specific options
ext_options = ''
if ebook_ext == 'epub':
ext_options = '--no-default-epub-cover'
log_ebook = os.path.join(working_dir, 'ebook-convert.log')
ebookconvert = 'ebook-convert %s %s %s --page-breaks-before / --toc-threshold 0 --level1-toc //h:h2 --level2-toc //h:h3 --level3-toc //h:h4 >> %s' % (html, dest, ext_options, log_ebook)
print "--- Invoking ebook-convert..."
print ebookconvert
os.system(ebookconvert)
print "--- Result written to %s" % dest
# convert equations to images
# added 25.04.2013 ML
# info from http://webcache.googleusercontent.com/search?q=cache:V3iGRJDdHDIJ:comments.gmane.org/gmane.comp.tex.hevea/192+&cd=3&hl=en&ct=clnk&client=firefox-a
# function adapted from http://stackoverflow.com/questions/39086/search-and-replace-a-line-in-a-file-in-python$
# http://en.wikibooks.org/wiki/LaTeX/Mathematics
def equ_to_images(texfile):
print "--- Converting equations to images for file %s" % texfile
(head, tail) = os.path.split(texfile)
(root, ext) = os.path.splitext(tail)
new_root = '%s_eq_to_images' % root
new_texfile = os.path.join(head, new_root + ext)
new_file = open(new_texfile, 'w')
old_file = open(texfile)
# define new environment
new_file.write('\\newenvironment{equ_to_image}{\\begin{toimage}\\(}{\\)\\end{toimage}\\imageflush}')
for line in old_file:
new_line = line
# replace all possible equation start and end tags by new environment tags (only $ and $$ are not replaced)
new_line = new_line.replace('\\(', '\\begin{equ_to_image}')
new_line = new_line.replace('\\begin{math}', '\\begin{equ_to_image}')
new_line = new_line.replace('\\[', '\\begin{equ_to_image}')
new_line = new_line.replace('\\begin{displaymath}', '\\begin{equ_to_image}')
new_line = new_line.replace('\\begin{equation}', '\\begin{equ_to_image}')
new_line = new_line.replace('\\)', '\\end{equ_to_image}')
new_line = new_line.replace('\\end{math}', '\\end{equ_to_image}')
new_line = new_line.replace('\\]', '\\end{equ_to_image}')
new_line = new_line.replace('\\end{displaymath}', '\\end{equ_to_image}')
new_line = new_line.replace('\\end{equation}', '\\end{equ_to_image}')
new_file.write(new_line)
#close temp file
new_file.close()
old_file.close()
return new_texfile
usage = "usage: %prog [options] file"
parser = OptionParser(usage=usage)
parser.add_option("-l", "--log", action="store_true", dest="log", default=False, help="keep the intermediate files")
parser.add_option("-b", "--batch", action="store_true", dest="batch", default=False, help="process several files in batch mode")
parser.add_option("-k", "--kindle", action="store_true", dest="kindle", default=False, help="convert to MOBI rather than EPUB (default)")
parser.add_option("-i", "--equ_to_images", action="store_true", dest="images", default=False, help="convert equations to images")
parser.add_option("-o", "--output", dest="outfile", help="output filename")
(options, params) = parser.parse_args()
if options.kindle:
ext = 'mobi'
else:
ext = 'epub'
if len(params) == 0:
print "No file specified!"
else:
if options.batch:
batch(params[-1], options.log, ext)
else:
texfile = params[-1]
if options.images:
texfile = equ_to_images(texfile)
if options.outfile == None:
convert(texfile, options.log, ext)
else:
convert(texfile, options.log, ext, os.path.abspath(options.outfile)) | 39.776923 | 186 | 0.704699 | 780 | 5,171 | 4.529487 | 0.264103 | 0.03906 | 0.031135 | 0.039626 | 0.208322 | 0.135296 | 0.121993 | 0.121993 | 0.121993 | 0.120577 | 0 | 0.006631 | 0.125121 | 5,171 | 130 | 187 | 39.776923 | 0.774094 | 0.136531 | 0 | 0.078431 | 0 | 0.009804 | 0.268315 | 0.047191 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.019608 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2063c95543ab6e4ef6c980fc98b25c5894306406 | 9,816 | py | Python | tests/test_pipeline.py | phvu/cebes-python | 41e0a687feeac437eadcab1a4d1f0a041986bd4e | [
"Apache-2.0"
] | null | null | null | tests/test_pipeline.py | phvu/cebes-python | 41e0a687feeac437eadcab1a4d1f0a041986bd4e | [
"Apache-2.0"
] | null | null | null | tests/test_pipeline.py | phvu/cebes-python | 41e0a687feeac437eadcab1a4d1f0a041986bd4e | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 The Cebes Authors. All Rights Reserved.
#
# Licensed under the Apache License, version 2.0 (the "License").
# You may not use this work except in compliance with the License,
# which is available at www.apache.org/licenses/LICENSE-2.0
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied, as more fully set forth in the License.
#
# See the NOTICE file distributed with this work for information regarding copyright ownership.
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import unittest
import six
from pycebes.core import pipeline_api as pl
from pycebes.core.dataframe import Dataframe
from pycebes.core.exceptions import ServerException
from pycebes.core.pipeline import Pipeline, Model
from tests import test_base
class TestPipeline(test_base.TestBase):
def test_stage_general(self):
df = self.cylinder_bands
with Pipeline() as ppl:
s = pl.drop(df, ['hardener', 'customer'])
name = s.get_name()
self.assertIsNotNone(name)
with self.assertRaises(ValueError):
pl.drop(df, ['customer'], name=name)
self.assertIsInstance(ppl.stages, dict)
self.assertIsInstance(repr(ppl), six.string_types)
def test_drop(self):
df = self.cylinder_bands
with Pipeline() as ppl:
d = pl.drop(df, ['hardener', 'customer'], name='drop_stage')
df2 = ppl.run(d.output_df)
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2.columns) + 2, len(df.columns))
self.assertTrue('hardener' not in df2.columns)
self.assertTrue('customer' not in df2.columns)
# magic methods
self.assertTrue(d in ppl)
self.assertTrue('drop_stage' in ppl)
self.assertEqual(d, ppl['drop_stage'])
# cannot add more stages into the pipeline
with self.assertRaises(ValueError) as ex:
with ppl:
pl.drop(df, ['customer'])
self.assertIn('Cannot add more stage into this Pipeline', '{}'.format(ex.exception))
def test_placeholder(self):
with Pipeline() as ppl:
data = pl.placeholder(pl.PlaceholderTypes.DATAFRAME)
d = pl.drop(df=data, col_names=['hardener', 'customer'])
with self.assertRaises(ServerException) as ex:
ppl.run(d.output_df)
self.assertTrue('Input slot inputVal is undefined' in '{}'.format(ex.exception))
df = self.cylinder_bands
df2 = ppl.run(d.output_df, feeds={data: df})
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2.columns) + 2, len(df.columns))
self.assertTrue('hardener' not in df2.columns)
self.assertTrue('customer' not in df2.columns)
def test_value_placeholder(self):
with Pipeline() as ppl:
data = pl.placeholder(pl.PlaceholderTypes.DATAFRAME)
cols = pl.placeholder(pl.PlaceholderTypes.VALUE, value_type='array')
d = pl.drop(df=data, col_names=cols)
with self.assertRaises(ServerException) as ex:
ppl.run(d.output_df)
self.assertTrue('Input slot inputVal is undefined' in '{}'.format(ex.exception))
df = self.cylinder_bands
df2 = ppl.run(d.output_df, feeds={data: df, cols: ['hardener', 'customer']})
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2.columns) + 2, len(df.columns))
self.assertTrue('hardener' not in df2.columns)
self.assertTrue('customer' not in df2.columns)
def test_linear_regression_with_vector_assembler(self):
df = self.cylinder_bands
self.assertGreater(len(df), 10)
df = df.dropna(columns=['viscosity', 'proof_cut', 'caliper'])
self.assertGreater(len(df), 10)
with Pipeline() as ppl:
assembler = pl.vector_assembler(df, ['viscosity', 'proof_cut'], 'features')
s = pl.linear_regression(assembler.output_df, features_col='features',
label_col='caliper', prediction_col='caliper_predict', reg_param=0.001)
r = ppl.run([s.output_df, s.model, assembler.output_df])
self.assertEqual(len(r), 3)
df1 = r[0]
self.assertIsInstance(df1, Dataframe)
self.assertEqual(len(df1), len(df))
self.assertEqual(len(df1.columns), len(df.columns) + 2)
self.assertTrue('features' in df1.columns)
self.assertTrue('caliper_predict' in df1.columns)
m = r[1]
self.assertIsInstance(m, Model)
self.assertEqual(m.inputs['reg_param'], 0.001)
self.assertIsInstance(m.metadata, dict)
df2 = r[2]
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2), len(df))
self.assertEqual(len(df2.columns), len(df.columns) + 1)
self.assertTrue('features' in df2.columns)
def test_linear_regression_with_vector_assembler_with_placeholder(self):
# define the pipeline
with Pipeline() as ppl:
inp = pl.placeholder(pl.PlaceholderTypes.DATAFRAME)
assembler = pl.vector_assembler(inp, ['viscosity', 'proof_cut'], 'features')
lr = pl.linear_regression(assembler.output_df, features_col='features',
label_col='caliper', prediction_col='caliper_predict', reg_param=0.001)
# fail because placeholder is not filled
with self.assertRaises(ServerException) as ex:
ppl.run([lr.output_df, lr.model, assembler.output_df])
self.assertTrue('Input slot inputVal is undefined' in '{}'.format(ex.exception))
# run again with feeds into the placeholder
df = self.cylinder_bands.dropna(columns=['viscosity', 'proof_cut', 'caliper'])
self.assertGreater(len(df), 10)
r = ppl.run([lr.output_df, lr.model, assembler.output_df], feeds={inp: df})
self.assertEqual(len(r), 3)
df1 = r[0]
self.assertIsInstance(df1, Dataframe)
self.assertEqual(len(df1), len(df))
self.assertEqual(len(df1.columns), len(df.columns) + 2)
self.assertTrue('features' in df1.columns)
self.assertTrue('caliper_predict' in df1.columns)
pandas_df = df1.take(5)
self.assertEqual(len(pandas_df), 5)
m = r[1]
self.assertIsInstance(m, Model)
self.assertEqual(m.inputs['reg_param'], 0.001)
self.assertIsInstance(m.metadata, dict)
df2 = r[2]
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2), len(df))
self.assertEqual(len(df2.columns), len(df.columns) + 1)
self.assertTrue('features' in df2.columns)
# Run again with a different input dataframe, model ID shouldn't change
new_df = df.where(df.viscosity > 40)
r2 = ppl.run([lr.output_df, lr.model, assembler.output_df], feeds={inp: new_df})
self.assertEqual(r2[1].id, r[1].id)
def test_linear_regression_with_vector_assembler_with_placeholders(self):
# define the pipeline
with Pipeline() as ppl:
inp_df = pl.placeholder(pl.PlaceholderTypes.DATAFRAME)
inp_col = pl.placeholder(pl.PlaceholderTypes.VALUE)
assembler = pl.vector_assembler(inp_df, [''], inp_col)
s = pl.linear_regression(assembler.output_df, features_col='features',
label_col='caliper', prediction_col='caliper_predict', reg_param=0.001)
df = self.cylinder_bands.dropna(columns=['viscosity', 'proof_cut', 'caliper'])
self.assertGreater(len(df), 10)
r = ppl.run([s.output_df, s.model, assembler.output_df],
feeds={inp_df: df, inp_col: 'features', assembler.input_cols: ['viscosity', 'proof_cut']})
self.assertEqual(len(r), 3)
df1 = r[0]
self.assertIsInstance(df1, Dataframe)
self.assertEqual(len(df1), len(df))
self.assertEqual(len(df1.columns), len(df.columns) + 2)
self.assertTrue('features' in df1.columns)
self.assertTrue('caliper_predict' in df1.columns)
m = r[1]
self.assertIsInstance(m, Model)
self.assertEqual(m.inputs['reg_param'], 0.001)
self.assertIsInstance(m.metadata, dict)
df2 = r[2]
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2), len(df))
self.assertEqual(len(df2.columns), len(df.columns) + 1)
self.assertTrue('features' in df2.columns)
# assemble some other columns
df = self.cylinder_bands.dropna(columns=['viscosity', 'proof_cut', 'ink_temperature', 'caliper'])
self.assertGreater(len(df), 10)
r = ppl.run([s.output_df, s.model, assembler.output_df],
feeds={inp_df: df, inp_col: 'new_features',
assembler.input_cols: ['viscosity', 'proof_cut', 'ink_temperature'],
s.features_col: 'new_features'})
self.assertEqual(len(r), 3)
df1 = r[0]
self.assertIsInstance(df1, Dataframe)
self.assertEqual(len(df1), len(df))
self.assertEqual(len(df1.columns), len(df.columns) + 2)
self.assertTrue('new_features' in df1.columns)
self.assertTrue('caliper_predict' in df1.columns)
m = r[1]
self.assertIsInstance(m, Model)
self.assertEqual(m.inputs['reg_param'], 0.001)
self.assertIsInstance(m.metadata, dict)
df2 = r[2]
self.assertIsInstance(df2, Dataframe)
self.assertEqual(len(df2), len(df))
self.assertEqual(len(df2.columns), len(df.columns) + 1)
self.assertTrue('new_features' in df2.columns)
if __name__ == '__main__':
unittest.main()
| 40.9 | 110 | 0.640587 | 1,233 | 9,816 | 4.982157 | 0.154907 | 0.073254 | 0.070324 | 0.048348 | 0.724727 | 0.675728 | 0.664008 | 0.643171 | 0.628683 | 0.594335 | 0 | 0.018858 | 0.238284 | 9,816 | 239 | 111 | 41.07113 | 0.802728 | 0.078647 | 0 | 0.632184 | 0 | 0 | 0.091191 | 0 | 0 | 0 | 0 | 0 | 0.494253 | 1 | 0.04023 | false | 0 | 0.057471 | 0 | 0.103448 | 0.005747 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
206bea68e024108a3072a57ecb2075b2c8f91020 | 1,300 | py | Python | yarll/scripts/list_exps.py | hknozturk/yarll | c5293e6455e3debe6e4d4d21f713937a24a654f3 | [
"MIT"
] | 62 | 2016-11-05T19:27:11.000Z | 2018-09-20T13:29:39.000Z | yarll/scripts/list_exps.py | hknozturk/yarll | c5293e6455e3debe6e4d4d21f713937a24a654f3 | [
"MIT"
] | 4 | 2020-07-09T16:46:19.000Z | 2022-01-26T07:18:06.000Z | yarll/scripts/list_exps.py | hknozturk/yarll | c5293e6455e3debe6e4d4d21f713937a24a654f3 | [
"MIT"
] | 18 | 2016-11-24T14:17:15.000Z | 2018-07-04T16:33:00.000Z | import os
import json
import argparse
from pathlib import Path
import pandas as pd
import dateutil.parser
parser = argparse.ArgumentParser()
parser.add_argument("directory", type=Path help="Path to the directory.")
def main():
args = parser.parse_args()
dirs = sorted([d for d in os.listdir(args.directory) if os.path.isdir(args.directory / d)], key=lambda x: int(x[3:]))
header = ["RUN", "DESCR", "START", "BRANCH", "COMMITMSG"]
data = []
for d in dirs:
config_path = args.directory / d / "config.json"
if os.path.exists(config_path):
with open(config_path) as f:
config = json.load(f)
else:
config = {}
run_data = [
d,
config.get("description", ""),
dateutil.parser.parse(config["start_time"]).strftime("%d/%m/%y %H:%M") if "start_time" in config else "",
]
run_data += [config["git"]["head"], config["git"]["message"]] if "git" in config else [""] * 2
data.append(run_data)
df = pd.DataFrame(data, columns=header)
df.set_index("RUN", inplace=True)
with pd.option_context('display.max_rows', None, 'display.max_columns', None, "display.width", None, "display.max_colwidth", 100):
print(df)
if __name__ == '__main__':
main()
| 33.333333 | 134 | 0.606154 | 174 | 1,300 | 4.396552 | 0.465517 | 0.05098 | 0.015686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00504 | 0.236923 | 1,300 | 38 | 135 | 34.210526 | 0.766129 | 0 | 0 | 0 | 0 | 0 | 0.164615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.181818 | null | null | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
206ef09a2bc28f7ba5a8ff01cc6d9883d5038da6 | 18,167 | py | Python | src/main/python/scrumtools/github.py | TU-Berlin-DIMA/scrum-tools | f17b39f815d01b7a6f1e2b3cd46d7e99e3cf3118 | [
"Apache-2.0"
] | 1 | 2015-05-23T05:19:32.000Z | 2015-05-23T05:19:32.000Z | src/main/python/scrumtools/github.py | TU-Berlin-DIMA/scrum-tools | f17b39f815d01b7a6f1e2b3cd46d7e99e3cf3118 | [
"Apache-2.0"
] | null | null | null | src/main/python/scrumtools/github.py | TU-Berlin-DIMA/scrum-tools | f17b39f815d01b7a6f1e2b3cd46d7e99e3cf3118 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2010-2014 DIMA Research Group, TU Berlin
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Created on Apr 13, 2014
"""
from __future__ import absolute_import
import os
import sys
import socket
# noinspection PyPackageRequirements
from github3 import login, models
from scrumtools import data, error
from termcolor import cprint, colored
from cement.core import controller
from requests.exceptions import ConnectionError
try:
prompt = raw_input
except NameError:
prompt = input
class GitHubController(controller.CementBaseController):
class Meta:
label = 'github'
interface = controller.IController
stacked_on = 'base'
stacked_type = 'nested'
description = "A set of batch management tools for GitHub."
config_section = 'github'
config_defaults = dict(
auth_id=None,
auth_token=None,
organization='example.org',
team_admins='example.admins',
team_admins_group=-1,
team_users='example.users',
team_pattern='example.g%02d',
repo_admins='example',
repo_users='example',
repo_pattern='example.g%02d',
)
arguments = [
(['-U', '--users-file'],
dict(action='store', metavar='FILE', dest='users_file',
help='a CSV file listing all users')),
(['-O', '--organization'],
dict(action='store', metavar='NAME', dest='organization',
help='the organization managing the GitHub repositories')),
]
@controller.expose(hide=True)
def default(self):
self.app.args.parse_args(['--help'])
@controller.expose(help="Authorizes scrum-tools with a GitHub account.")
def authorize(self):
self.app.log.debug('Authorizing a GitHub user.')
(username, password) = self.__class__.prompt_login()
try:
gh = login(username, password, two_factor_callback=self.__class__.prompt_two_factor_login)
au = gh.authorize(username,
password,
scopes=['repo', 'delete_repo', 'admin:org'],
note='Scrum-tools on %s' % socket.gethostname())
cprint(os.linesep.join(["Please copy these lines into the [github] section of your scrum-tools config:",
" auth_id = %s " % au.id,
" auth_token = %s " % au.token]), 'green')
except (models.GitHubError, ConnectionError) as e:
raise RuntimeError(e.msg)
@controller.expose(help="Validate the provided GitHub account names.")
def validate_users(self):
self.app.log.debug('Validating GitHub account names.')
# validate required config parameters
if not self.app.config.get('github', 'auth_token') or not self.app.config.get('github', 'auth_id'):
raise error.ConfigError("Missing config parameter 'github.auth_id' and/or 'github.auth_token'! "
"Please run 'scrum-tools github authorize' first! ")
key_username = self.app.config.get('core', 'users_schema_key_username')
key_github = self.app.config.get('core', 'users_schema_key_github')
user_repository = data.UserRepository(self.app.config)
gh = login(token=self.app.config.get('github', 'auth_token'))
for u in user_repository.users():
if not u[key_github]:
cprint("Skipping empty GitHub account for user '%s'." % u[key_username], 'yellow', file=sys.stdout)
continue
print colored("Validating GitHub account '%s' for user '%s'..." % (u[key_github], u[key_username]), 'green'),
try:
if gh.user(u[key_github]):
print colored('OK', 'green', attrs=['bold'])
else:
raise RuntimeError("Github user '%s' not found" % u[key_github])
except RuntimeError:
print colored('Not OK', 'red', attrs=['bold'])
@controller.expose(help="Creates GitHub repositories.")
def create_repos(self):
self.app.log.debug('Creating GitHub repositories.')
# validate required config parameters
if not self.app.config.get('github', 'auth_token') or not self.app.config.get('github', 'auth_id'):
raise error.ConfigError("Missing config parameter 'github.auth_id' and/or 'github.auth_token'! "
"Please run 'scrum-tools github authorize' first! ")
# organization
organization = self.app.config.get('github', 'organization')
# teams setup
team_admins = self.app.config.get('github', 'team_admins')
team_users = self.app.config.get('github', 'team_users')
team_pattern = self.app.config.get('github', 'team_pattern')
# repos setup
repo_admins = self.app.config.get('github', 'repo_admins')
repo_users = self.app.config.get('github', 'repo_users')
repo_pattern = self.app.config.get('github', 'repo_pattern')
# get the users
user_repository = data.UserRepository(self.app.config)
# create github session
gh = login(token=self.app.config.get('github', 'auth_token'))
# get the organization
org = gh.organization(organization)
if not org:
raise RuntimeError("Organization '%s' not found" % organization)
# get all organization repos
teams = dict((t.name, t) for t in org.iter_teams())
repos = dict((r.name, r) for r in org.iter_repos())
# create group repos
for group in user_repository.groups():
repo_group = repo_pattern % int(group)
team_group = team_pattern % int(group)
repo_teams = [v for (k, v) in teams.iteritems() if k in [team_group, team_admins]]
self.__class__.__create_repo(org, repo_group, repo_teams, repos)
# create admins repo
repo_teams = [v for (k, v) in teams.iteritems() if k in [team_admins]]
self.__class__.__create_repo(org, repo_admins, repo_teams, repos)
# create users repo
repo_teams = [v for (k, v) in teams.iteritems() if k in [team_admins, team_users]]
self.__class__.__create_repo(org, repo_users, repo_teams, repos)
@controller.expose(help="Deletes GitHub repositories.")
def delete_repos(self):
self.app.log.debug('Deleting GitHub repositories.')
if not self.__class__.prompt_confirm(colored('This cannot be undone! Proceed? (yes/no): ', 'red')):
cprint("Aborting delete command.", 'yellow', file=sys.stdout)
return
# validate required config parameters
if not self.app.config.get('github', 'auth_token') or not self.app.config.get('github', 'auth_id'):
raise error.ConfigError("Missing config parameter 'github.auth_id' and/or 'github.auth_token'! "
"Please run 'scrum-tools github authorize' first! ")
# organization
organization = self.app.config.get('github', 'organization')
# repos setup
repo_admins = self.app.config.get('github', 'repo_admins')
repo_users = self.app.config.get('github', 'repo_users')
repo_pattern = self.app.config.get('github', 'repo_pattern')
user_repository = data.UserRepository(self.app.config)
gh = login(token=self.app.config.get('github', 'auth_token'))
# get the organization
org = gh.organization(organization)
if not org:
raise RuntimeError("Organization '%s' not found" % organization)
# get all organization repos
repos = dict((t.name, t) for t in org.iter_repos())
# delete group repos
for group in user_repository.groups():
repo_name = repo_pattern % int(group)
self.__class__.__delete_repo(repo_name, repos)
# delete admins repo
self.__class__.__delete_repo(repo_admins, repos)
# delete users repo
self.__class__.__delete_repo(repo_users, repos)
@controller.expose(help="Creates GitHub teams.")
def create_teams(self):
self.app.log.debug('Creating GitHub teams.')
# validate required config parameters
if not self.app.config.get('github', 'auth_token') or not self.app.config.get('github', 'auth_id'):
raise error.ConfigError("Missing config parameter 'github.auth_id' and/or 'github.auth_token'! "
"Please run 'scrum-tools github authorize' first! ")
# schema keys
key_group = self.app.config.get('core', 'users_schema_key_group')
key_github = self.app.config.get('core', 'users_schema_key_github')
# organization
organization = self.app.config.get('github', 'organization')
# teams setup
team_admins = self.app.config.get('github', 'team_admins')
team_admins_group = self.app.config.get('github', 'team_admins_group')
team_users = self.app.config.get('github', 'team_users')
team_pattern = self.app.config.get('github', 'team_pattern')
# repos setup
repo_admins = self.app.config.get('github', 'repo_admins')
repo_users = self.app.config.get('github', 'repo_users')
repo_pattern = self.app.config.get('github', 'repo_pattern')
# get the users
user_repository = data.UserRepository(self.app.config)
# create github session
gh = login(token=self.app.config.get('github', 'auth_token'))
# get the organization
org = gh.organization(organization)
if not org:
raise RuntimeError("Organization '%s' not found" % organization)
# get all organization teams
teams = dict((t.name, t) for t in org.iter_teams())
# create group teams
for group in user_repository.groups():
team_name = team_pattern % int(group)
repo_names = ['%s/%s' % (organization, repo_pattern % int(group))]
self.__class__.__create_team(org, team_name, repo_names, 'push', teams)
# update group teams members
for group in user_repository.groups():
team = teams[team_pattern % int(group)]
members_act = set(m.login for m in team.iter_members())
members_exp = set(u[key_github] for u in user_repository.users(lambda x: x[key_group] == group))
self.__class__.__update_team_members(team, members_act, members_exp)
# create admins team
repo_names = ['%s/%s' % (organization, repo_admins)] + \
['%s/%s' % (organization, repo_users)] + \
['%s/%s' % (organization, repo_pattern % int(group)) for group in user_repository.groups()]
self.__class__.__create_team(org, team_admins, repo_names, 'admin', teams)
# update admins team members
team = teams[team_admins]
members_act = set(m.login for m in team.iter_members())
members_exp = set(u[key_github] for u in user_repository.users(lambda x: x[key_group] == team_admins_group))
self.__class__.__update_team_members(team, members_act, members_exp)
# create users team
repo_names = ['%s/%s' % (organization, repo_users)]
self.__class__.__create_team(org, team_users, repo_names, 'pull', teams)
# update users team members
team = teams[team_users]
members_act = set(m.login for m in team.iter_members())
members_exp = set(u[key_github] for u in user_repository.users())
self.__class__.__update_team_members(team, members_act, members_exp)
@controller.expose(help="Deletes GitHub teams.")
def delete_teams(self):
if not self.__class__.prompt_confirm(colored('This cannot be undone! Proceed? (yes/no): ', 'red')):
cprint("Aborting delete command.", 'yellow', file=sys.stdout)
return
self.app.log.debug('Deleting GitHub teams.')
# validate required config parameters
if not self.app.config.get('github', 'auth_token') or not self.app.config.get('github', 'auth_id'):
raise error.ConfigError("Missing config parameter 'github.auth_id' and/or 'github.auth_token'! "
"Please run 'scrum-tools github authorize' first! ")
# organization
organization = self.app.config.get('github', 'organization')
# teams setup
team_admins = self.app.config.get('github', 'team_admins')
team_users = self.app.config.get('github', 'team_users')
team_pattern = self.app.config.get('github', 'team_pattern')
user_repository = data.UserRepository(self.app.config)
gh = login(token=self.app.config.get('github', 'auth_token'))
# get the organization
org = gh.organization(organization)
if not org:
raise RuntimeError("Organization '%s' not found" % organization)
# get all organization teams
teams = dict((t.name, t) for t in org.iter_teams())
# delete group teams
for group in user_repository.groups():
team_name = team_pattern % int(group)
self.__class__.__delete_team(team_name, teams)
# delete admins team
self.__class__.__delete_team(team_admins, teams)
# delete users team
self.__class__.__delete_team(team_users, teams)
@staticmethod
def __create_repo(org, repo_name, teams, repos):
if not repo_name in repos:
print colored("Creating repository '%s'..." % repo_name, 'green'),
repo = org.create_repo(name=repo_name, private=True, has_wiki=False)
if repo:
repos[repo_name] = repo
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
else:
print colored("Skipping repository '%s' (already exists)." % repo_name, 'yellow')
for team in teams:
print colored("Adding repo '%s/%s' to team '%s'..." % (org.login, repo_name, team.name), 'green'),
if team.add_repo('%s/%s' % (org.login, repo_name)):
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
@staticmethod
def __delete_repo(repo_name, repos):
if repo_name in repos:
print colored("Deleting repository '%s'..." % repo_name, 'green'),
if repos[repo_name].delete():
del repos[repo_name]
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
else:
print colored("Skipping repository '%s' (does not exist)." % repo_name, 'yellow')
@staticmethod
def __create_team(org, team_name, repo_names, permission, teams):
if not team_name in teams:
print colored("Creating team '%s'..." % team_name, 'green'),
team = org.create_team(name=team_name, repo_names=repo_names, permission=permission)
if team:
teams[team_name] = team
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
else:
print colored("Skipping team '%s' (already exists)." % team_name, 'yellow')
@staticmethod
def __delete_team(team_name, teams):
if team_name in teams:
print colored("Deleting team '%s'..." % team_name, 'green'),
if teams[team_name].delete():
del teams[team_name]
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
else:
print colored("Skipping team '%s' (does not exist)." % team_name, 'yellow')
@staticmethod
def __update_team_members(team, members_act, members_exp):
print colored("Updating team members for team '%s'." % team.name, 'green')
# add missing team members
for u in members_exp - members_act:
print colored("Adding '%s' to team '%s'..." % (u, team.name), 'green'),
if team.invite(u):
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
# remove unexpected team members
for u in members_act - members_exp:
print colored("Removing '%s' from team '%s'..." % (u, team.name), 'green'),
if team.remove_member(u):
print colored('OK', 'green', attrs=['bold'])
else:
print colored('Not OK', 'red', attrs=['bold'])
@staticmethod
def prompt_login():
import getpass
u = prompt("GitHub username [%s]: " % getpass.getuser())
if not u:
u = getpass.getuser()
password_prompt = lambda: (getpass.getpass("GitHub password: "), getpass.getpass('GitHub password (again): '))
p1, p2 = password_prompt()
while p1 != p2:
print('Passwords do not match. Try again')
p1, p2 = password_prompt()
return u, p1
@staticmethod
def prompt_two_factor_login():
code = ''
while not code:
code = prompt('Enter 2FA code: ')
return code
@staticmethod
def prompt_confirm(question='Do you really want to do this (yes/no)?', answer_true='yes'):
return prompt(question) == answer_true | 42.150812 | 121 | 0.608686 | 2,195 | 18,167 | 4.853759 | 0.13303 | 0.03548 | 0.057349 | 0.063075 | 0.643702 | 0.583724 | 0.530505 | 0.495307 | 0.474564 | 0.464145 | 0 | 0.002415 | 0.270766 | 18,167 | 431 | 122 | 42.150812 | 0.801781 | 0.053614 | 0 | 0.426117 | 0 | 0 | 0.202672 | 0.005623 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.034364 | 0.034364 | null | null | 0.120275 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
207541c4f4a46b92967908249bf629d6ac8f4fb1 | 6,335 | py | Python | data/external/repositories/115375/hail-seizure-master/train.py | Keesiu/meta-kaggle | 87de739aba2399fd31072ee81b391f9b7a63f540 | [
"MIT"
] | null | null | null | data/external/repositories/115375/hail-seizure-master/train.py | Keesiu/meta-kaggle | 87de739aba2399fd31072ee81b391f9b7a63f540 | [
"MIT"
] | 1 | 2015-12-10T16:46:02.000Z | 2018-05-21T23:01:55.000Z | data/external/repositories/115375/hail-seizure-master/train.py | Keesiu/meta-kaggle | 87de739aba2399fd31072ee81b391f9b7a63f540 | [
"MIT"
] | 1 | 2019-12-04T08:23:33.000Z | 2019-12-04T08:23:33.000Z | #!/usr/bin/env python3
import python.utils as utils
import os
import joblib
import pickle
import pdb
def main(settingsfname, verbose=False, store_models=True,
store_features=False, save_training_detailed=False,
load_pickled=False, parallel=0):
settings = utils.get_settings(settingsfname)
utils.print_verbose('=== Settings file ===', flag=verbose)
utils.print_verbose(settingsfname, flag=verbose)
utils.print_verbose('=== Settings loaded ===', flag=verbose)
utils.print_verbose(settings, flag=verbose)
utils.print_verbose('=======================', flag=verbose)
subjects = settings['SUBJECTS']
data = utils.get_data(settings, verbose=verbose)
metadata = utils.get_metadata()
features_that_parsed = [feature for feature in
settings['FEATURES'] if feature in list(data.keys())]
settings['FEATURES'] = features_that_parsed
if not settings['FEATURES']:
raise EnvironmentError('No features could be loaded')
utils.print_verbose("=====Feature HDF5s parsed=====", flag=verbose)
model_pipe = utils.build_model_pipe(settings)
utils.print_verbose("=== Model Used ===\n"
"{0}\n==================".format(model_pipe),
flag=verbose)
# dictionary to store results
subject_predictions = {}
# dictionary to store features in
transformed_features = {}
# if we're loading pickled features then load them
if load_pickled:
if isinstance(load_pickled, str):
with open(load_pickled, "rb") as fh:
Xtra = pickle.load(fh)
else:
with open(settingsfname.split(".")[0]
+ "_feature_dump.pickle", "rb") as fh:
Xtra = pickle.load(fh)
else:
Xtra = None
# dictionary for final scores
auc_scores = {}
if not parallel:
for subject in subjects:
utils.print_verbose(
"=====Training {0} Model=====".format(str(subject)),
flag=verbose)
if 'RFE' in settings:
transformed_features, auc = utils.train_RFE(settings,
data,
metadata,
subject,
model_pipe,
transformed_features,
store_models,
store_features,
load_pickled,
settingsfname,
verbose,
extra_data=Xtra)
subject_predictions = None
elif 'CUSTOM' in settings:
results, auc = utils.train_custom_model(settings,
data,
metadata,
subject,
model_pipe,
store_models,
load_pickled,
verbose,
extra_data=Xtra)
subject_predictions[subject] = results
else:
results, auc = utils.train_model(settings,
data,
metadata,
subject,
model_pipe,
store_models,
load_pickled,
verbose,
extra_data=Xtra)
subject_predictions[subject] = results
auc_scores.update({subject: auc})
if parallel:
if 'RFE' in settings:
raise NotImplementedError('Parallel RFE is not implemented')
else:
output = joblib.Parallel(n_jobs=parallel)(
joblib.delayed(utils.train_model)(settings,
data,
metadata,
subject,
model_pipe,
store_models,
load_pickled,
verbose,
extra_data=Xtra,
parallel=parallel)
for subject in subjects)
results = [x[0] for x in output]
aucs = [x[1] for x in output]
for result in results:
subject_predictions.update(result)
for auc in aucs:
auc_scores.update(auc)
if save_training_detailed:
with open(save_training_detailed, "wb") as fh:
pickle.dump(subject_predictions[subject], fh)
combined_auc = utils.combined_auc_score(settings,
auc_scores,
subj_pred=subject_predictions)
print(
"predicted AUC score over all subjects: {0:.2f}".format(combined_auc))
auc_scores.update({'all': combined_auc})
utils.output_auc_scores(auc_scores, settings)
return auc_scores
if __name__ == '__main__':
# get and parse CLI options
parser = utils.get_parser()
args = parser.parse_args()
main(args.settings,
verbose=args.verbose,
save_training_detailed=args.pickle_detailed,
parallel=int(args.parallel))
| 38.865031 | 81 | 0.424467 | 485 | 6,335 | 5.350515 | 0.241237 | 0.033911 | 0.052408 | 0.03237 | 0.247784 | 0.204624 | 0.148362 | 0.148362 | 0.128324 | 0.128324 | 0 | 0.003175 | 0.502762 | 6,335 | 162 | 82 | 39.104938 | 0.820635 | 0.029045 | 0 | 0.352459 | 0 | 0 | 0.057933 | 0.007486 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008197 | false | 0 | 0.040984 | 0 | 0.057377 | 0.07377 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20778423e20ac6661734493d2303bc03ce7d5df0 | 982 | py | Python | tests/scraper/models.py | teolemon/django-dynamic-scraper | 2a46df8828fa8dcf4f74315abe99cc37b214b2e8 | [
"BSD-3-Clause"
] | null | null | null | tests/scraper/models.py | teolemon/django-dynamic-scraper | 2a46df8828fa8dcf4f74315abe99cc37b214b2e8 | [
"BSD-3-Clause"
] | null | null | null | tests/scraper/models.py | teolemon/django-dynamic-scraper | 2a46df8828fa8dcf4f74315abe99cc37b214b2e8 | [
"BSD-3-Clause"
] | null | null | null | from django.db import models
from dynamic_scraper.models import Scraper, SchedulerRuntime
from scrapy.contrib.djangoitem import DjangoItem
class EventWebsite(models.Model):
name = models.CharField(max_length=200)
scraper = models.ForeignKey(Scraper, blank=True, null=True, on_delete=models.SET_NULL)
url = models.URLField()
scraper_runtime = models.ForeignKey(SchedulerRuntime, blank=True, null=True, on_delete=models.SET_NULL)
def __unicode__(self):
return self.name + " (" + str(self.id) + ")"
class Event(models.Model):
title = models.CharField(max_length=200)
event_website = models.ForeignKey(EventWebsite)
description = models.TextField(blank=True)
url = models.URLField()
checker_runtime = models.ForeignKey(SchedulerRuntime, blank=True, null=True, on_delete=models.SET_NULL)
def __unicode__(self):
return self.title + " (" + str(self.id) + ")"
class EventItem(DjangoItem):
django_model = Event | 35.071429 | 107 | 0.726069 | 119 | 982 | 5.815126 | 0.352941 | 0.092486 | 0.056358 | 0.073699 | 0.424855 | 0.346821 | 0.346821 | 0.346821 | 0.346821 | 0.291908 | 0 | 0.007299 | 0.162933 | 982 | 28 | 108 | 35.071429 | 0.83455 | 0 | 0 | 0.2 | 0 | 0 | 0.006104 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.15 | 0.1 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
207a528acd1c6078894046fa653d5ad571c45a65 | 1,038 | py | Python | doctable/textmodels/parsetreedoc.py | devincornell/sqlitedocuments | 16923bb3b91af5104140e49045efdc612afbc310 | [
"MIT"
] | 1 | 2019-06-19T20:27:55.000Z | 2019-06-19T20:27:55.000Z | doctable/textmodels/parsetreedoc.py | devincornell/sqlitedocuments | 16923bb3b91af5104140e49045efdc612afbc310 | [
"MIT"
] | 21 | 2019-04-12T01:08:20.000Z | 2020-11-09T18:28:41.000Z | doctable/textmodels/parsetreedoc.py | devincornell/sqlitedocuments | 16923bb3b91af5104140e49045efdc612afbc310 | [
"MIT"
] | null | null | null |
from typing import Any
from .basedoc import BaseDoc
from .parsetree import ParseTree
class ParseTreeDoc(list):
''' Represents a document composed of sequence of parsetrees.
'''
@property
def tokens(self):
return (t for pt in self for t in pt)
def as_dict(self):
''' Convert document into a list of dict-formatted parsetrees.
'''
return [pt.as_dict() for pt in self]
@classmethod
def from_dict(cls, tree_data: list, *args, **kwargs):
''' Create new ParseTreeDoc from a dictionary tree created by as_dict().
Args:
tree_data: list of dict trees created from cls.as_dict()
'''
# root is reference to entire tree
return cls(ParseTree.from_dict(ptd, *args, **kwargs) for ptd in tree_data)
@classmethod
def from_spacy(cls, doc: Any, *args, **kwargs):
''' Create a new ParseTreeDoc from a spacy Doc object.
'''
return cls(ParseTree.from_spacy(sent, *args, **kwargs) for sent in doc.sents)
| 28.833333 | 85 | 0.633911 | 141 | 1,038 | 4.588652 | 0.375887 | 0.037094 | 0.021638 | 0.034003 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.270713 | 1,038 | 35 | 86 | 29.657143 | 0.85469 | 0.346821 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.2 | 0.066667 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
207de260265c6907b91ff91673f6e28b96b0eef2 | 959 | py | Python | rsatools/rp.py | SteelShredder/rsa-tools | 17a3441a7f00b68674a48477eee7b31449eebf6f | [
"MIT"
] | null | null | null | rsatools/rp.py | SteelShredder/rsa-tools | 17a3441a7f00b68674a48477eee7b31449eebf6f | [
"MIT"
] | null | null | null | rsatools/rp.py | SteelShredder/rsa-tools | 17a3441a7f00b68674a48477eee7b31449eebf6f | [
"MIT"
] | null | null | null | from .decrypt import decrypt as d
from .encrypt import encrypt as e
from .generatekeys import genkeys as g
# generate an RSA key pair for public exponent e and the given bit length, writing n, e and d to rsakeys/
def pg(e, bit):
dp = open("rsakeys/d", "w+")
ep = open("rsakeys/e", "w+")
np = open("rsakeys/n", "w+")
a, b, c = g(e,bit)
np.write(str(a))
ep.write(str(b))
dp.write(str(c))
dp.close()
ep.close()
np.close()
# encrypt the message in rsainput/ei with the stored public key (e, n) and write the result to rsaoutput/eo
def pe():
mo = open("rsaoutput/eo", "w+")
mp = open("rsainput/ei", "r")
ep = open("rsakeys/e", "r")
np = open("rsakeys/n", "r")
ev=ep.read()
nv=np.read()
mv=mp.read()
mo.write(str(e(int(mv), int(ev), int(nv))))
mp.close()
mo.close()
ep.close()
np.close()
# decrypt the ciphertext in rsainput/di with the stored private key (d, n) and write the result to rsaoutput/do
def pd():
mo = open("rsaoutput/do", "w+")
mp = open("rsainput/di", "r")
dp = open("rsakeys/d", "r")
np = open("rsakeys/n", "r")
dv=dp.read()
nv=np.read()
mv=mp.read()
mo.write(str(d(int(mv), int(dv), int(nv))))
mp.close()
mo.close()
dp.close()
np.close()
| 23.390244 | 47 | 0.519291 | 159 | 959 | 3.132075 | 0.27044 | 0.154618 | 0.078313 | 0.084337 | 0.339357 | 0.339357 | 0.120482 | 0.120482 | 0.120482 | 0.120482 | 0 | 0 | 0.24609 | 959 | 40 | 48 | 23.975 | 0.688797 | 0 | 0 | 0.425 | 1 | 0 | 0.130344 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.075 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20800899334a7f9045e3040d9fc79c07a6e2cb14 | 784 | py | Python | api/db/db.py | bcgov/data-stream | 2d8fbf3843ee765ee102f306993fdbc742aca5d8 | [
"Apache-2.0"
] | 1 | 2019-02-10T08:27:22.000Z | 2019-02-10T08:27:22.000Z | api/db/db.py | bcgov/data-stream | 2d8fbf3843ee765ee102f306993fdbc742aca5d8 | [
"Apache-2.0"
] | 18 | 2019-02-09T01:02:09.000Z | 2022-03-30T23:04:24.000Z | api/db/db.py | bcgov/data-stream | 2d8fbf3843ee765ee102f306993fdbc742aca5d8 | [
"Apache-2.0"
] | 2 | 2019-02-09T06:36:54.000Z | 2019-02-12T09:52:58.000Z | from mongoengine import connect
from config import Config
from db.models.subscriptions import Subscriptions
class Db:
Subscriptions = None
def __init__(self, createClient=True):
config = Config()
self.db = {}
self.Subscriptions = Subscriptions
self.createClient = createClient
self.initConnection(config)
def initConnection(self, config):
connect(
db=config.data['database']['dbName'],
host=config.data['database']['host'],
port=config.data['database']['port'],
username=config.data['database']['username'],
password=config.data['database']['password'],
authentication_source=config.data['database']['dbName'],
connect=self.createClient)
| 34.086957 | 68 | 0.633929 | 75 | 784 | 6.56 | 0.333333 | 0.121951 | 0.219512 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242347 | 784 | 22 | 69 | 35.636364 | 0.828283 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.05 | 0.15 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
208ff921c6f3b56395e223269bba951dc85c3d8d | 28,128 | py | Python | _build/jupyter_execute/curriculum-notebooks/Languages/FrenchVerbCodingConjugation/french-verb-conjugation.py | BryceHaley/curriculum-jbook | d1246799ddfe62b0cf5c389394a18c2904383437 | [
"CC-BY-4.0"
] | 1 | 2022-03-18T18:19:40.000Z | 2022-03-18T18:19:40.000Z | _build/jupyter_execute/curriculum-notebooks/Languages/FrenchVerbCodingConjugation/french-verb-conjugation.py | callysto/curriculum-jbook | ffb685901e266b0ae91d1250bf63e05a87c456d9 | [
"CC-BY-4.0"
] | null | null | null | _build/jupyter_execute/curriculum-notebooks/Languages/FrenchVerbCodingConjugation/french-verb-conjugation.py | callysto/curriculum-jbook | ffb685901e266b0ae91d1250bf63e05a87c456d9 | [
"CC-BY-4.0"
] | null | null | null | 
<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Languages/FrenchVerbCodingConjugation/French-Verb-Conjugation.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<p> Code is hidden for ease of viewing. Click the Show/Hide button to see. </p>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
import numpy as np
#import matplotlib.pyplot as plt
from IPython.display import display, Math, Latex, HTML, clear_output, Markdown, Javascript
import ipywidgets as widgets
from ipywidgets import interact, FloatSlider, IntSlider, interactive, Layout
from traitlets import traitlets
#module to conjugate
#import mlconjug
#from functools import partial
#import pickle
import plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
# French Verb Conjugation
----
## Introduction
In this Jupyter Notebook by Callysto you will learn about French verb conjugation. Mastering the basics of verb conjugation is essential to reading and writing in French. There are some basic rules (and exceptions) that we will address.
Because much of conjugation is algorithmic, one can write computer code to do the task for us. If you are interested in the programming aspects, please see the related notebook [French-Verb-Coding](CC-186-French-Verb-Coding.ipynb).
#### Necessary background
- Some basic knowledge of French
- Elementary Python syntax
#### Outline of this notebook
We will cover several important topics:
- a review of personal pronouns in French
- two important verbs, Être and Avoir
- the regular verbs, with endings "-er", "-ir" and "-re"
- exceptions to the regular verbs
#### Allons-y!
## Personal pronouns
Conjugation is the process of forcing the verb in a sentence to "agree" with the subject of that sentence. Typically, the subject of a sentence is a pronoun, so to start conjugating verbs, we can review the personal pronouns in French.
Below is a table showing the subject pronouns in French. These will be used to separate the different cases of verb conjugation.
#table for personal pronouns using plotly
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
english = ['I','you','she, he, one','we','you (plural or formal)','they']
person = ['First','Second','Third','First (plural)','Second (plural)','Third (plural)']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Person','French','English'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [person,french,english],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(
width=750,
height=450
)
# margin=go.layout.Margin(
# l=0,
# r=0,
# b=0,
# t=0,
# pad=0
# )
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
Our verb conjugation rules will be based on these personal pronouns, so it is good to get familiar with their translations. French distinguishes between these pronouns based on their person, whether they are masculine or feminine, and whether they are singular or plural.
## Two Important Verbs
Let's jump right to conjugating the two (arguably) most important verbs: To Be and To Have.
## 1. Être (to be)
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
etre_conjug = ['suis','es','est','sommes','êtes','sont']
trace0 = go.Table(
columnorder = [1,2],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,etre_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(
width=500,
height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
To use these in a sentence, you could write something like:
- Je suis un garçon.
- Elle est une fille.
- Nous sommes tous des humains.
Notice how in each sentence, the form of the verb changes to match the subject pronoun.
"Être" is an irregular verb: it does not obey a fixed pattern, if you will, for conjugating verbs in the present tense. There are many examples of such exceptions, which we will explore further. But first, the next most important verb:
## 2. Avoir (to have)
french = ["j'",'tu','elle, il, on','nous','vous','elles, ils']
avoir_conjug = ['ai','as','a','avons','avez','ont']
trace0 = go.Table(
columnorder = [1,2],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,avoir_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
Notice that for the first person singular we have *j'* instead of *je*; this is because the verb starts with a vowel. This rule is similar to using "a" and "an" in English.
## The Regular Verbs
There are three types of regular verbs, which are identified by their endings. They are:
- the "-er" verbs, such as "parler" (to speak)
- the "-ir" verbs, such as "finir" (to finish)
- the "-re" verbs, such as "vendre" (to sell)
Each of these three types has its own pattern for conjugation, which is shared by all other regular verbs of the same type. Let's have a look at these.
## 1. The "-er" Regular Verbs
There is a general rubric for conjugating verbs that end in **er** in the present tense.
We will illustrate this with the verb "parler" (to speak). The stem of the verb parler is "parl-". We conjugate it by adding on the endings "e", "es", "e", "ons", "ez", "ent" for the corresponding pronouns, as follows:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
stem = ['parl-','parl-','parl-','parl-','parl-','parl-']
ending = ['e','es','e','ons','ez','ent']
parler_conjug = ['parle','parles','parle','parlons','parlez','parlent']
trace0 = go.Table(
columnorder = [1,2],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,parler_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
This can be taken as the general rule for conjugating **er** verbs in the present tense. All you need to do is find the stem of the verb, which was parl- in this case and then apply these endings to figure out how to conjugate the verb for every personal pronoun.
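Because this rule is purely mechanical, it is easy to express in code. Below is a minimal sketch of our own (the helper name conjugate_er is not from any library) that applies the rule to a regular "er" verb by stripping the infinitive ending and attaching the six endings; it assumes the verb really is regular.
#minimal sketch: conjugate a regular "er" verb (our own helper, regular verbs only)
def conjugate_er(verb):
    stem = verb[:-2]  #drop the "-er" infinitive ending to get the stem
    er_endings = ['e','es','e','ons','ez','ent']
    return [stem + ending for ending in er_endings]
#conjugate_er('parler') gives ['parle','parles','parle','parlons','parlez','parlent']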
For instance, try this yourself with the verb "chanter" (to sing). The stem is "chant-", so what are the corresponding six conjugations, as in the table above?
This pattern works for most "er" verbs, and there are hundreds of them. Some common ones are:
- aimer (to like/love)
- arriver (to arrive, to happen)
- brosser (to brush)
- chanter (to sing)
- chercher (to look for)
- danser (to dance)
- demander (to ask for)
- détester (to hate)
- donner (to give)
- écouter (to listen to)
- étudier (to study)
- gagner (to win, to earn)
- habiter (to live)
- jouer (to play)
- manquer (to miss)
- marcher (to walk, to function)
- parler (to talk, to speak)
- penser (to think)
- regarder (to watch, to look at)
- travailler (to work)
- trouver (to find)
- visiter (to visit a place)
There are also many exceptions for the **er** verbs, which we will discuss below.
## 2. The "-ir" Regular Verbs
There is a general rubric for conjugating verbs that end in **ir** in the present tense.
We will illustrate this with the verb "finir" (to finish). The stem of the verb finir is "fin-". We conjugate it by adding on the endings "is", "is", "it", "issons", "issez", "issent" for the corresponding pronouns, as follows:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
finir_stem = ['fin-','fin-','fin-','fin-','fin-','fin-']
ir_ending = ['is','is','it','issons','issez','issent']
finir_conjug = ['finis','finis','finit','finissons','finissez','finissent']
trace0 = go.Table(
columnorder = [1,2],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,finir_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
This can be taken as the general rule for conjugating **ir** verbs in the present tense. All you need to do is find the *stem* of the verb, which was fin- in this case and then apply these endings to figure out how to conjugate the verb for every personal pronoun.
For instance, try this yourself with the verb "grandir" (to grow). The stem is "grand-", so what are the corresponding six conjugations, as in the table above?
This pattern works for most "ir" verbs, and there are hundreds of them. Some common ones are:
- applaudir (to applaud)
- bâtir (to build)
- choisir (to choose)
- désobéir (to disobey)
- finir (to finish)
- grandir (to grow up)
- grossir (to gain weight)
- guérir (to heal, to get well)
- maigrir (to lose weight)
- obéir (to obey)
- punir (to punish)
- réfléchir (to think, to reflect)
- remplir (to fill)
- réussir (to succeed)
- vieillir (to grow old)
Again, though, there will be exceptions...
## 3. The "-re" Regular Verbs
There is a general rubric for conjugating verbs that end in **re** in the present tense.
We will illustrate this with the verb "vendre" (to sell). The stem of the verb vendre is "vend-". We conjugate it by adding on the endings "s", "s", nothing, "ons", "ez", "ent" for the corresponding pronouns, as follows:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
vendre_stem = ['vend-','vend-','vend-','vend-','vend-','vend-']
re_ending = ['s','s','','ons','ez','ent']
vendre_conjug = ['vends','vends','vend','vendons','vendez','vendent']
trace0 = go.Table(
columnorder = [1,2],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,vendre_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
This can be taken as the general rule for conjugating **re** verbs in the present tense. All you need to do is find the *stem* of the verb, which was vend- in this case and then apply these endings to figure out how to conjugate the verb for every personal pronoun.
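All three regular groups follow the same recipe: find the stem, then attach a fixed list of endings. As a sketch of our own (again only for regular verbs, ignoring the exceptions discussed below), the three groups can be captured with a single lookup table.
#sketch covering the three regular groups (regular verbs only, no exceptions handled)
regular_endings = {
    'er': ['e','es','e','ons','ez','ent'],
    'ir': ['is','is','it','issons','issez','issent'],
    're': ['s','s','','ons','ez','ent']
}
def conjugate_regular(verb):
    group = verb[-2:]  #'er', 'ir' or 're'
    stem = verb[:-2]   #drop the infinitive ending
    return [stem + ending for ending in regular_endings[group]]
#conjugate_regular('finir') gives ['finis','finis','finit','finissons','finissez','finissent']
#conjugate_regular('vendre') gives ['vends','vends','vend','vendons','vendez','vendent']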
For instance, try this yourself with the verb "attendre" (to wait). The stem is "attend-", so what are the corresponding six conjugations, as in the table above?
This pattern works for most "re" verbs, and there are many of them. Some common ones are:
- attendre (to wait)
- défendre (to defend)
- descendre (to descend)
- entendre (to hear)
- étendre (to stretch)
- fondre (to melt)
- pendre (to hang, or suspend)
- perdre (to lose)
- prétendre (to claim)
- rendre (to give back, or return)
- répondre (to answer)
- vendre (to sell)
Again, though, there will be exceptions...
## 1. Exceptions to the regular er verbs
French is filled with exceptions, which makes it a somewhat difficult language to master, as one essentially has to commit the exceptions to memory. An exception for a verb means that it is not (or only partially) conjugated using the endings given above. Most exceptions arise as an alteration of the stem of the verb.
Thankfully there are not many exceptions for the **er** verbs. Here are three notable ones:
## 1a. The "-oyer" and "-uyer" exceptions:
For verbs like "envoyer" (to send) or "ennuyer" (to annoy) the stem changes the "y" to an "i" for all pronouns except nous and vous:
french = ["j'",'tu','elle, il, on','nous','vous','elles, ils']
envoyer_conjug = ['envoie', 'envoies','envoie','envoyons','envoyez','envoient']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,envoyer_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
## 1b. The "e_er" or "é_er" exceptions:
Verbs like "acheter" (to buy) or "préférer" (to prefer) also follow an exception rule. The accent aigu becomes an accent grave, that is, é becomes è, except in the nous and vous cases, where it does not change. Note this means the pronunciation of the letter changes as well.
preferer_conjug = ['préfère','préfères','préfère','préférons','préférez','préfèrent']
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,preferer_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
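To see how such an exception complicates the mechanical rule, here is a small sketch of our own (the helper name conjugate_accent_er is hypothetical, and it assumes the stem contains an "é") for "é_er" verbs like préférer: the last "é" of the stem becomes "è" in every form except nous and vous.
#sketch for the "é_er" exception: the last é of the stem becomes è, except for nous and vous
def conjugate_accent_er(verb):
    stem = verb[:-2]        #e.g. 'préfér' for 'préférer'
    last = stem.rfind('é')  #position of the last 'é' in the stem
    altered = stem[:last] + 'è' + stem[last+1:]
    endings = ['e','es','e','ons','ez','ent']
    forms = []
    for i, ending in enumerate(endings):
        #nous (index 3) and vous (index 4) keep the unaltered stem
        forms.append((stem if i in (3, 4) else altered) + ending)
    return forms
#conjugate_accent_er('préférer') gives ['préfère','préfères','préfère','préférons','préférez','préfèrent']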
## 1c. The "-eler" and "-eter" exceptions:
For verbs like "appeler" (to call) or "rejeter" (to reject) the letters "l"
or "t" get doubled. Again, this does not hold for the nous and vous cases.
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
appeler_conjug = ['appelle','appelles','appelle','appelons','appelez','appellent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,appeler_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
It's important to be aware of these exceptions, as you will then be able to identify patterns in verbs of these forms and in the exceptions themselves, such as how the change does not apply for nous and vous. Knowledge of the exceptions is crucial to mastering the language!
## 2. Exceptions to the regular ir verbs
Unfortunately, with the **ir** verbs, there are many, many exceptions. Three important ones are as follows:
## 2a. Verbs like partir (to leave):
For "partir" (to leave), the key is to drop the "t" from the stem in the singular cases and add the endings "s", "s", "t". For the plural cases, you keep the "t". The conjugations go like this:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
partir_conjug = ['pars','pars','part','partons','partez','partent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,partir_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
Other irregular ir verbs like partir include
- dormir (to sleep)
- mentir (to lie)
- partir (to leave)
- sentir (to feel)
- servir (to serve)
- sortir (to go out)
## 2b. Verbs that end in -llir, -frir, or -vrir
Curiously, these verbs conjugate like an "er" verb. Just take the stem and add the endings "e", "es", "e", "ons", "ez", "ent". For instance, here is the conjugation for ouvrir (to open):
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
ouvrir_conjug = ['ouvre','ouvres','ouvre','ouvrons','ouvrez','ouvrent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,ouvrir_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
Other ir verbs that follow this pattern include:
- couvrir (to cover)
- cueillir (to pick)
- offrir (to offer)
- ouvrir (to open)
- souffrir (to suffer)
## 2c. Verbs that end in -enir
These verbs all follow a similar pattern. The stem changes in the singular cases and in the third person plural, and the endings are just like the first irregular ir case (like partir). Here is the conjugation for tenir (to hold):
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
tenir_conjug = ['tiens','tiens','tient','tenons','tenez','tiennent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,tenir_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
Other verbs in this irregular category include:
- appartenir (to belong)
- contenir (to contain)
- convenir (to suit)
- devenir (to become)
- maintenir (to maintain)
- obtenir (to obtain)
- parvenir (to reach, or achieve)
- prévenir (to warn, or prevent)
- retenir (to retain)
- revenir (to come back)
- soutenir (to support)
- (se) souvenir (to remember)
- tenir (to hold)
- venir (to come)
## 2d. Other very irregular ir verbs
There are a dozen or so irregular ir verbs that don't fit any pattern. These include many that end in oir, as well as others like acquérir, asseoir, avoir, courir, devoir, falloir, mourir, pleuvoir, pouvoir, recevoir, savoir, servir, valoir, voir. You just have to learn these conjugations individually.
## 3. Exceptions to the re verbs
As with the other two regular classes, the **re** verbs also have several exceptions. In all cases, the changes involve adding or dropping a consonant in the stem, and possibly adjusting the endings. A quick summary is to say that the unusual changes have to do with making the spelling match the pronunciation of the verb forms. In some sense, it is easier to learn what the verbs sound like, and then spell them to match.
There are four basic exceptions, as follows:
## 3a. The verb prendre (to take) and its relatives
Here, you just drop the "d" from the stem in the plural form, and add an extra "n" in the last case:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
prendre_conjug = ['prends','prends','prend','prenons','prenez','prennent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,prendre_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
## 3b. The verbs battre (to fight) and mettre (to put)
Here, you just drop one "t" from the stem in the singular form:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
battre_conjug = ['bats','bats','bat','battons','battez','battent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,battre_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
## 3c. The verb rompre (to break) and its relatives
This one is such a tiny exception: an extra t in the third person singular:
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
rompre_conjug = ['romps','romps','rompt','rompons','rompez','rompent']
trace0 = go.Table(
columnorder = [1,2,3],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Conjugation'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,rompre_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
## 3d. Finally, Verbs Ending in –aindre, –eindre, and –oindre
In this case, the "-dre" is dropped to form the stem, and in the plural cases the letter "g" is inserted. Again, this is to get the pronunciation to match the spelling.
french = ['je','tu','elle, il, on','nous','vous','elles, ils']
craindre_conjug = ['crains','crains','craint','craignons','craignez','craignent']
joindre_conjug = ['joins','joins','joint','joignons','joignez','joignent']
peintre_conjug = ['peins','peins','peint','peignons','peignez','peignent']
trace0 = go.Table(
columnorder = [1,2,3,4,5],
columnwidth = [10,10],
header = dict(
values = ['Pronoun','Craindre','Joindre','Peintre'],
line = dict(color = 'rgb(0,0,0)'),
fill = dict(color = 'rgb(0,35,48)'),
align = ['center','center','center','center'],
font = dict(color = 'white', size = 16),
height = 40
),
cells = dict(
values = [french,craindre_conjug,joindre_conjug,peintre_conjug],
line = dict(color = 'black'),
fill = dict(color = 'rgb(95,102,161)'),
align = ['center', 'center','center','center'],
font = dict(color = 'white', size = 14),
height = 30
)
)
layout = dict(width=500, height=450)
data = [trace0]
fig = dict(data = data, layout = layout)
iplot(fig)
## Coding Examples
---
How could one write code to see if someone conjugated a verb correctly? If you are interested in the programming aspects, please see the related notebook [French-Verb-Coding](CC-186-French-Verb-Coding.ipynb).
#perhaps show how this work for a different verb and subject.
#manipulate this code for 'ir' verbs or try to write your own code to handle the exceptions above.
#remember to use the list user_answer for the user_inputs and don't forget to enter some inputs yourself ;)
# user_answer = [je.value,tu.value,elle.value,nous.value,vous.value,elles.value]
# french = ['je','tu','elle/il/on','nous','vous','elles/ils']
# endings = ['e','es','e','ons','ez','ent']
# for i in range(0,len(endings)):
# n = len(endings[i])
# #feel free to change what happens if they get it right or wrong.
# if user_answer[i] != '': #So that it doesn't print if nothing has been entered
# if user_answer[i][-n:] != endings[i]:
# print('The conjugation for "'+french[i]+'" is incorrect')
# if user_answer[i][-n:] == endings[i]:
# print('The conjugation for "'+french[i]+'" is correct!')
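As a self-contained illustration of the same idea, here is a version of the ending check that runs as-is; the hard-coded user_answer list below is only a sample standing in for the widget inputs mentioned in the comments above.
#runnable sketch of the ending check, with a sample answer list instead of widget inputs
user_answer = ['parle','parles','parle','parlons','parlez','parlent']
french = ['je','tu','elle/il/on','nous','vous','elles/ils']
endings = ['e','es','e','ons','ez','ent']
for pronoun, answer, ending in zip(french, user_answer, endings):
    if answer == '':
        continue  #skip anything that has not been entered
    if answer.endswith(ending):
        print('The conjugation for "' + pronoun + '" is correct!')
    else:
        print('The conjugation for "' + pronoun + '" is incorrect')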
---
## Conclusion
In this Jupyter Notebook by Callysto you learned the basics of French verb conjugation in the present tense. In a related notebook, we see how we can use the structure of the French verb conjugation rules to compose a program that checks whether a user input the correct answers to conjugate a verb in the present tense. This is somewhat of a hallmark of coding: taking the structure of the problem at hand and expressing it in the form of generalizable and applicable written code. Breaking down problems in this fashion is essential to computational thinking.
Je te remercie d'avoir essayé les exercices donnés.
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md) | 33.091765 | 559 | 0.655041 | 4,166 | 28,128 | 4.409265 | 0.203072 | 0.047036 | 0.031357 | 0.022647 | 0.486744 | 0.474767 | 0.469378 | 0.456911 | 0.445261 | 0.445261 | 0 | 0.029379 | 0.190451 | 28,128 | 850 | 560 | 33.091765 | 0.777129 | 0 | 0 | 0.583471 | 0 | 0.029752 | 0.146734 | 0.000921 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.023141 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
209013f0c2f9bc0d8238ed1fa7a13fadafec827e | 2,948 | py | Python | item_engine/textbase/generate_tests.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | item_engine/textbase/generate_tests.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | item_engine/textbase/generate_tests.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | import os
from typing import List, Iterator
from item_engine.textbase import make_characters
__all__ = ["generate_tests"]
def generate_tests(pckg: str,
inputs: List[str],
__test__: str = '__test__',
spec: str = 'spec',
__test_preview__: str = '__test_preview__',
remove_preview: bool = True
):
PATH_LIST = pckg.split('.')
try:
__import__(name=spec, fromlist=PATH_LIST).engine.build(allow_overwrite=True)
except ImportError:
raise Exception("[TEST GENERATION] : engine build failure !")
try:
engine_module = __import__(name="engine", fromlist=PATH_LIST)
parse = engine_module.parse
build = engine_module.build
except ImportError:
raise Exception("[TEST GENERATION] : generated code failure !")
def get(text: str):
*results, eof = list(parse(make_characters(text, eof=True)))
return [build(result) for result in results if result.at == 0 and result.to == eof.to]
def indent(s: str) -> str:
return '\n'.join(' ' + line for line in s.split('\n'))
def indent_result(result: Iterator):
return "[\n" + indent(",\n".join(map(repr, result))) + "\n]"
tests = indent("\n".join(f"test({text!r}, {indent_result(list(get(text)))!s})" for text in inputs))
content = f"""# THIS MODULE HAVE BEEN GENERATED AUTOMATICALLY, DO NOT MODIFY MANUALLY
from typing import List
from item_engine.textbase import *
PATH_LIST = {PATH_LIST!r}
__all__ = ['run']
try:
__import__(name={spec!r}, fromlist=PATH_LIST).engine.build(allow_overwrite=True)
except ImportError:
raise Exception("[TEST GENERATION] : engine build failure !")
try:
from {'.'.join(PATH_LIST)}.engine import parse
from {'.'.join(PATH_LIST)}.engine.materials import *
except ImportError:
raise Exception("[TEST GENERATION] : generated code failure !")
def get(text: str):
*results, eof = list(parse(make_characters(text, eof=True)))
return [build(result) for result in results if result.at == 0 and result.to == eof.to]
def test(text: str, expected: List[Element]):
result = get(text)
assert expected == result, f"\\ntext = {{text!r}}\\nexpected = {{expected!r}}\\nresult = {{result!r}}"
def run():
{tests}
if __name__ == '__main__':
run()
"""
try:
with open(__test_preview__ + '.py', mode='w', encoding='utf-8') as file:
file.write(content)
try:
__import__(name=__test_preview__, fromlist=PATH_LIST).run()
except Exception as e:
raise Exception("[TEST GENERATION] : preview error", e)
with open(__test__ + '.py', mode='w', encoding='utf-8') as file:
file.write(content)
finally:
if remove_preview:
if os.path.exists(__test_preview__ + '.py'):
os.remove(__test_preview__ + '.py')
| 29.777778 | 108 | 0.620421 | 366 | 2,948 | 4.726776 | 0.265027 | 0.041619 | 0.052023 | 0.080925 | 0.449711 | 0.391908 | 0.391908 | 0.391908 | 0.391908 | 0.391908 | 0 | 0.001789 | 0.24152 | 2,948 | 98 | 109 | 30.081633 | 0.771914 | 0 | 0 | 0.323529 | 1 | 0.029412 | 0.419607 | 0.086839 | 0 | 0 | 0 | 0 | 0.014706 | 1 | 0.058824 | false | 0 | 0.220588 | 0.029412 | 0.338235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2098c33dfd3aab904ba4db79b07d869e376e18df | 3,630 | py | Python | day_1/day1.py | secworks/advent_of_code_2017 | 20ea821710c388429809ca69102a164542d5d798 | [
"BSD-2-Clause"
] | null | null | null | day_1/day1.py | secworks/advent_of_code_2017 | 20ea821710c388429809ca69102a164542d5d798 | [
"BSD-2-Clause"
] | null | null | null | day_1/day1.py | secworks/advent_of_code_2017 | 20ea821710c388429809ca69102a164542d5d798 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
#=======================================================================
#
# day_1.py
# --------
# Solution for Advent of code 2017, day 1.
# http://adventofcode.com/2017/day/1
#
# Status: Done.
#
# Joachim Strömbergson 2017
#
#=======================================================================
import sys
VERBOSE = 0
#-------------------------------------------------------------------
# get_input()
#-------------------------------------------------------------------
def get_input():
with open('my_input.txt','r') as f:
test_string = f.read()
return test_string.strip()
#-------------------------------------------------------------------
# parse()
#-------------------------------------------------------------------
def parse(s):
first = s[0]
acc = 0
# Scan through the string and add if pairs match.
for i in range(len(s) - 1):
if s[i] == s[i + 1]:
acc += int(s[i])
# Handle the end case.
if s[0] == s[-1]:
acc += int(s[0])
return acc
#-------------------------------------------------------------------
# parse_two()
#-------------------------------------------------------------------
def parse_two(string):
length = len(string)
ctr = 0;
acc = 0;
i = 0
j = int(length / 2)
while ctr < length:
if VERBOSE:
print("ctr: %d, acc: %d, i: %d, idata: %s, j: %d, jdata: %s" %\
(ctr, acc, i, string[i], j, string[j]))
if string[i] == string[j]:
acc = acc + int(string[i])
i = (i + 1) % length
j = (j + 1) % length
ctr += 1
return acc
#-------------------------------------------------------------------
# part_one()
#-------------------------------------------------------------------
def part_one(string):
print("Result part one: ", parse(string))
print("")
#-------------------------------------------------------------------
# part_two()
#-------------------------------------------------------------------
def part_two(string):
print("Result part two: ", parse_two(string))
print("")
#-------------------------------------------------------------------
# test_one()
#-------------------------------------------------------------------
def test_one():
print("Teststrings part one:")
print(parse("1122"), "Should be 3")
print(parse("1111"), "Should be 4")
print(parse("1234"), "Should be 0")
print(parse("91212129"), "Should be 9")
print("")
#-------------------------------------------------------------------
# test_one()
#-------------------------------------------------------------------
def test_two():
print("Teststrings part two:")
print(parse_two("1212"), "Should be 6")
print(parse_two("1221"), "Should be 0")
print(parse_two("123425"), "Should be 4")
print(parse_two("123123"), "Should be 12")
print(parse_two("12131415"), "Should be 4")
print("")
#-------------------------------------------------------------------
# main()
#-------------------------------------------------------------------
def main():
my_string = get_input()
part_one(my_string)
part_two(my_string)
test_one()
test_two()
#-------------------------------------------------------------------
#-------------------------------------------------------------------
if __name__=="__main__":
# Run the main function.
sys.exit(main())
#=======================================================================
# EOF day_1.py
#=======================================================================
| 26.691176 | 75 | 0.316253 | 310 | 3,630 | 3.577419 | 0.316129 | 0.081154 | 0.058611 | 0.037872 | 0.102795 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030003 | 0.164463 | 3,630 | 135 | 76 | 26.888889 | 0.335641 | 0.511019 | 0 | 0.1 | 0 | 0.016667 | 0.171776 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.016667 | 0 | 0.2 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
209bc0cc626bc5c803a48b3126392e386031528c | 285 | py | Python | sokoapp/contests/admin.py | Mercy-Nekesa/sokoapp | 6c7bc4c1278b7223226124a49fc33c5b8b6b617a | [
"MIT"
] | 1 | 2019-04-01T05:52:37.000Z | 2019-04-01T05:52:37.000Z | sokoapp/contests/admin.py | Mercy-Nekesa/sokoapp | 6c7bc4c1278b7223226124a49fc33c5b8b6b617a | [
"MIT"
] | 1 | 2015-03-11T16:18:12.000Z | 2015-03-11T16:18:12.000Z | sokoapp/contests/admin.py | Mercy-Nekesa/sokoapp | 6c7bc4c1278b7223226124a49fc33c5b8b6b617a | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import TimePeriod
class TimePeriodAdminBase(object):
list_display = ('name', 'period_start', 'period_end',)
class TimePeriodAdmin(TimePeriodAdminBase, admin.ModelAdmin):
pass
admin.site.register(TimePeriod, TimePeriodAdmin)
| 19 | 61 | 0.778947 | 30 | 285 | 7.3 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126316 | 285 | 14 | 62 | 20.357143 | 0.879518 | 0 | 0 | 0 | 0 | 0 | 0.091228 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.142857 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
20a454e64794fb3ad382db3340f6045cd10b054e | 6,655 | py | Python | HCI/Gesture Recognizer/Python/DTW.py | k-sae/HCI-Lab | ec40136907eaa9ea983efa7b4761e2bb1f5918a5 | [
"Apache-2.0"
] | 3 | 2017-07-03T12:57:36.000Z | 2017-12-11T19:49:14.000Z | HCI/Gesture Recognizer/Python/DTW.py | kareem2048/HCI-Lab | ec40136907eaa9ea983efa7b4761e2bb1f5918a5 | [
"Apache-2.0"
] | null | null | null | HCI/Gesture Recognizer/Python/DTW.py | kareem2048/HCI-Lab | ec40136907eaa9ea983efa7b4761e2bb1f5918a5 | [
"Apache-2.0"
] | 1 | 2018-07-27T11:00:08.000Z | 2018-07-27T11:00:08.000Z | from math import sin, cos, atan2, sqrt
import json
NumPoints = 64
SquareSize = 250.0
AngleRange = 45.0
AnglePrecision = 2.0
Phi = 0.5 * (-1.0 + sqrt(5.0))
class Template:
def __init__(self, name, points):
self.points = points
self.name = name
self.points = resample(self.points, NumPoints)
self.points = rotate_to_zero(self.points)
self.points = scale_to_square(self.points, SquareSize)
self.points = translate_to_origin(self.points)
class Result:
def __init__(self, name, score):
self.Name = name
self.Score = score
class DTWRecognizer:
def __init__(self):
self.templates = []
with open('Templates.json') as data_file:
data = json.load(data_file)
for i in range(len(data['Templates'])) :
templete = data['Templates'][i]['Name']
points = []
for m in range(len(data['Templates'][i]['Points'])) :
point = [data['Templates'][i]['Points'][m]['x'],data['Templates'][i]['Points'][m]['y']]
points.append(point)
self.templates.append(Template(templete,points))
#
'''
self.templates.append(Template("Zig-Zag",[[387, 192],[388, 192],[388, 190],[388, 189],[388, 188],[388, 187],[388, 186],[388, 185],[388, 184],[389, 182],[389, 180],[390, 180],[390, 178],[391, 176],[392, 172],[393, 170],[395, 168],[396, 167],[398, 164],[399, 162],[400, 161],[401, 159],[403, 157],[403, 156],[404, 155],[405, 155],[406, 154],[407, 154],[408, 155],[408, 156],[410, 157],[412, 159],[414, 164],[418, 169],[421, 176],[424, 182],[426, 188],[428, 192],[430, 197],[430, 201],[432, 203],[433, 205], [433, 206],[434, 208],[434, 209],[435, 209],[435, 210],[435, 211],[436, 211],[437, 210],[438, 207],[441, 202],[445, 193],[449, 184],[454, 177],[457, 168], [459, 162],[460, 158],[461, 154],[462, 151],[463, 149],[463, 148],[464, 146],[464, 145], [464, 144],[464, 146],[464, 147],[464, 148],[464, 150],[464, 152],[464, 153],[464, 155],[464, 156],[465, 157],[465, 159], [466, 161],[466, 164],[467, 168],[467, 170],[468, 172],[469, 174],[469, 176],[469, 178],[470, 180],[470, 181],[471, 182],[471, 183],[471, 185],[471, 186],[472, 187],[473, 188],[473, 189],[473, 191],[474, 191],[474, 192],[475, 192],[475, 192],[475, 193],[476, 193],[479, 192],[479, 192],[480, 192],[480, 191],[480, 190],[480, 189],[480, 187],[481, 184],[482, 181],[484, 179],[485, 176],[486, 173],[488, 169],[489, 168],[490, 166],[490, 165],[491, 163],[491, 162],[492, 161]]))
self.templates.append(Template("Line" ,[[430, 158],[435, 156],[440, 156],[447, 154],[456, 153],[465, 151],[476, 150],[484, 149],[495, 148],[503, 148],[510, 148],[517, 148],[520, 148],[525, 148],[530, 148],[533, 148],[535, 149],[538, 149],[539, 149],[540, 149],[541, 149],[542, 149]]))
'''
def Recognize(self, points):
points = resample(points, NumPoints)
points = rotate_to_zero(points)
points = scale_to_square(points, SquareSize)
points = translate_to_origin(points)
b = float("inf")
t = None
for i, temp in enumerate(self.templates):
Tpoints = temp.points
d = distance_at_best_angle(points, Tpoints, -AngleRange, AngleRange, AnglePrecision)
if d < b:
b = d
t = temp
score = 1 - (b / (0.5 * sqrt(SquareSize * SquareSize * 2)))
if t:
return Result(t.name, score)
else:
return Result('Unrecognized', 0.0)
def average(xs): return sum(xs) / len(xs)
def resample(points, n):
I = pathlength(points) / float(n-1)
D = 0
newPoints = [points[0]]
i = 1
while i<len(points):
p_i = points[i]
d = distance(points[i-1], p_i)
if (D + d) >= I:
qx = points[i-1][0] + ((I-D) / d) * (p_i[0] - points[i-1][0])
qy = points[i-1][1] + ((I-D) / d) * (p_i[1] - points[i-1][1])
newPoints.append([qx,qy])
points.insert(i, [qx,qy])
D = 0
else: D = D + d
i+=1
return newPoints
def pathlength(points):
d = 0
for i,p_i in enumerate(points[:len(points)-1]):
d += distance(p_i, points[i+1])
return d
def distance(p1, p2): return float(sqrt((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2))
def centroid(points): return float(average([float(i[0]) for i in points])), float(average([float(i[1]) for i in points]))
def rotate_to_zero(points):
cx, cy = centroid(points)
theta = atan2(cy - points[0][1], cx - points[0][0])
newPoints = rotate_by(points, -theta)
return newPoints
def rotate_by(points, theta):
cx, cy = centroid(points)
newpoints = []
cos_p, sin_p = cos(theta), sin(theta)
for p in points:
qx = (p[0] - cx) * cos_p - (p[1] - cy) * sin_p + cx
qy = (p[0] - cx) * sin_p + (p[1] - cy) * cos_p + cy
newpoints.append([qx,qy])
return newpoints
def bounding_box(points):
minx, maxx = min((p[0] for p in points)), max((p[0] for p in points))
miny, maxy = min((p[1] for p in points)), max((p[1] for p in points))
return minx, miny, maxx-minx, maxy - miny
def scale_to_square(points, size):
min_x, min_y, w, h = bounding_box(points)
newPoints = []
for p in points:
qx = p[0] * (float(size) / w )
qy = p[1] * (float(size) / h )
newPoints.append([qx,qy])
return newPoints
def translate_to_origin(points):
cx, cy = centroid(points)
newpoints = []
for p in points:
qx, qy = p[0] - cx , p[1] - cy
newpoints.append([qx,qy])
return newpoints
def distance_at_best_angle(points, T, ta, tb, td):
x1 = Phi * ta + (1 - Phi) * tb
f1 = distance_at_angle(points, T, x1)
x2 = (1 - Phi) * ta + Phi * tb
f2 = distance_at_angle(points, T, x2)
while abs(tb - ta) > td:
if f1 < f2:
tb,x2,f2 = x2, x1, f1
x1 = Phi * ta + (1 - Phi) * tb
f1 = distance_at_angle(points, T, x1)
else:
ta,x1,f1 = x1, x2, f2
x2 = (1 - Phi) * ta + Phi * tb
f2 = distance_at_angle(points, T, x2)
return min(f1, f2)
def distance_at_angle(points, T, theta):
newpoints = rotate_by(points, theta)
d = pathdistance(newpoints, T)
return d
def pathdistance(a,b):
d = 0
for ai, bi in zip(a,b):
d += distance(ai, bi)
return d / len(a)
| 37.8125 | 1,355 | 0.528475 | 983 | 6,655 | 3.509664 | 0.268566 | 0.028986 | 0.012174 | 0.024348 | 0.204058 | 0.118551 | 0.10058 | 0.067826 | 0.045217 | 0.045217 | 0 | 0.19651 | 0.276634 | 6,655 | 175 | 1,356 | 38.028571 | 0.52015 | 0 | 0 | 0.251908 | 0 | 0 | 0.020362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129771 | false | 0 | 0.015267 | 0.022901 | 0.259542 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20aac86b0cfe5ca3694d0752f807ee2ec5a74264 | 275 | py | Python | tests/test_ui_systemtray.py | scottwernervt/clipmanager | 34e9f45f7d9a3cef423d9d54df5d220aed5fd821 | [
"BSD-4-Clause"
] | 12 | 2016-02-11T04:14:35.000Z | 2021-12-16T08:13:05.000Z | tests/test_ui_systemtray.py | scottwernervt/clipmanager | 34e9f45f7d9a3cef423d9d54df5d220aed5fd821 | [
"BSD-4-Clause"
] | 11 | 2018-02-01T21:20:08.000Z | 2018-07-20T16:02:01.000Z | tests/test_ui_systemtray.py | scottwernervt/clipmanager | 34e9f45f7d9a3cef423d9d54df5d220aed5fd821 | [
"BSD-4-Clause"
] | null | null | null | import pytest
from clipmanager.ui.systemtray import SystemTrayIcon
@pytest.fixture()
def systemtray():
tray = SystemTrayIcon()
tray.show()
return tray
class TestSystemTrayIcon:
def test_is_visible(self, systemtray):
assert systemtray.isVisible()
| 17.1875 | 52 | 0.730909 | 29 | 275 | 6.862069 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185455 | 275 | 15 | 53 | 18.333333 | 0.888393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20b8115dffa3217a7ae85b3889aa549226d9b0c6 | 5,064 | py | Python | tests/test_sorted_feed.py | andyet/thoonk.py | 4535ad05975a6410fe3448ace28d591ba1452f02 | [
"MIT"
] | 63 | 2015-01-13T09:08:19.000Z | 2021-10-05T16:52:52.000Z | tests/test_sorted_feed.py | andyet/thoonk.py | 4535ad05975a6410fe3448ace28d591ba1452f02 | [
"MIT"
] | 2 | 2015-03-19T22:32:01.000Z | 2015-03-23T19:21:24.000Z | tests/test_sorted_feed.py | andyet/thoonk.py | 4535ad05975a6410fe3448ace28d591ba1452f02 | [
"MIT"
] | 12 | 2015-03-16T16:40:38.000Z | 2022-03-28T10:56:24.000Z | import thoonk
from thoonk.feeds import SortedFeed
import unittest
from ConfigParser import ConfigParser
class TestLeaf(unittest.TestCase):
def setUp(self):
conf = ConfigParser()
conf.read('test.cfg')
if conf.sections() == ['Test']:
self.ps = thoonk.Thoonk(host=conf.get('Test', 'host'),
port=conf.getint('Test', 'port'),
db=conf.getint('Test', 'db'))
self.ps.redis.flushdb()
else:
print 'No test configuration found in test.cfg'
exit()
def test_10_basic_sorted_feed(self):
"""Test basic sorted feed publish and retrieve."""
l = self.ps.sorted_feed("sortedfeed")
self.assertEqual(l.__class__, SortedFeed)
l.publish("hi")
l.publish("bye")
l.publish("thanks")
l.publish("you're welcome")
r = l.get_ids()
v = l.get_items()
items = {'1': 'hi',
'2': 'bye',
'3': 'thanks',
'4': "you're welcome"}
self.assertEqual(r, ['1', '2', '3', '4'], "Sorted feed results did not match publish: %s." % r)
self.assertEqual(v, items, "Sorted feed items don't match: %s" % v)
def test_20_sorted_feed_before(self):
"""Test addding an item before another item"""
l = self.ps.sorted_feed("sortedfeed")
l.publish("hi")
l.publish("bye")
l.publish_before('2', 'foo')
r = l.get_ids()
self.assertEqual(r, ['1', '3', '2'], "Sorted feed results did not match: %s." % r)
def test_30_sorted_feed_after(self):
"""Test adding an item after another item"""
l = self.ps.sorted_feed("sortedfeed")
l.publish("hi")
l.publish("bye")
l.publish_after('1', 'foo')
r = l.get_ids()
self.assertEqual(r, ['1', '3', '2'], "Sorted feed results did not match: %s." % r)
def test_40_sorted_feed_prepend(self):
"""Test addding an item to the front of the sorted feed"""
l = self.ps.sorted_feed("sortedfeed")
l.publish("hi")
l.publish("bye")
l.prepend('bar')
r = l.get_ids()
self.assertEqual(r, ['3', '1', '2'],
"Sorted feed results don't match: %s" % r)
def test_50_sorted_feed_edit(self):
"""Test editing an item in a sorted feed"""
l = self.ps.sorted_feed("sortedfeed")
l.publish("hi")
l.publish("bye")
l.edit('1', 'bar')
r = l.get_ids()
v = l.get_item('1')
vs = l.get_items()
items = {'1': 'bar',
'2': 'bye'}
self.assertEqual(r, ['1', '2'],
"Sorted feed results don't match: %s" % r)
self.assertEqual(v, 'bar', "Items don't match: %s" % v)
self.assertEqual(vs, items, "Sorted feed items don't match: %s" % vs)
def test_60_sorted_feed_retract(self):
"""Test retracting an item from a sorted feed"""
l = self.ps.sorted_feed("sortedfeed")
l.publish("hi")
l.publish("bye")
l.publish("thanks")
l.publish("you're welcome")
l.retract('3')
r = l.get_ids()
self.assertEqual(r, ['1', '2', '4'],
"Sorted feed results don't match: %s" % r)
def test_70_sorted_feed_move_first(self):
"""Test moving items around in the feed."""
l = self.ps.sorted_feed('sortedfeed')
l.publish("hi")
l.publish("bye")
l.publish("thanks")
l.publish("you're welcome")
l.move_first('4')
r = l.get_ids()
self.assertEqual(r, ['4', '1', '2', '3'],
"Sorted feed results don't match: %s" % r)
def test_71_sorted_feed_move_last(self):
"""Test moving items around in the feed."""
l = self.ps.sorted_feed('sortedfeed')
l.publish("hi")
l.publish("bye")
l.publish("thanks")
l.publish("you're welcome")
l.move_last('2')
r = l.get_ids()
self.assertEqual(r, ['1', '3', '4', '2'],
"Sorted feed results don't match: %s" % r)
def test_72_sorted_feed_move_before(self):
"""Test moving items around in the feed."""
l = self.ps.sorted_feed('sortedfeed')
l.publish("hi")
l.publish("bye")
l.publish("thanks")
l.publish("you're welcome")
l.move_before('1', '2')
r = l.get_ids()
self.assertEqual(r, ['2', '1', '3', '4'],
"Sorted feed results don't match: %s" % r)
def test_73_sorted_feed_move_after(self):
"""Test moving items around in the feed."""
l = self.ps.sorted_feed('sortedfeed')
l.publish("hi")
l.publish("bye")
l.publish("thanks")
l.publish("you're welcome")
l.move_after('1', '4')
r = l.get_ids()
self.assertEqual(r, ['1', '4', '2', '3'],
"Sorted feed results don't match: %s" % r)
suite = unittest.TestLoader().loadTestsFromTestCase(TestLeaf)
| 34.924138 | 103 | 0.529226 | 677 | 5,064 | 3.844904 | 0.149188 | 0.138302 | 0.026892 | 0.049942 | 0.663849 | 0.60776 | 0.5801 | 0.560891 | 0.498655 | 0.488667 | 0 | 0.020245 | 0.307464 | 5,064 | 144 | 104 | 35.166667 | 0.721985 | 0 | 0 | 0.516949 | 0 | 0 | 0.194221 | 0 | 0 | 0 | 0 | 0 | 0.118644 | 0 | null | null | 0 | 0.033898 | null | null | 0.008475 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20bd0ca933a883e637a3a9cfd0a0a77678655e62 | 586 | py | Python | freetile/helper/xcb.py | rbn42/freetile | 16a5c95650d1887b372f373c2126f96d991f3366 | [
"MIT"
] | 10 | 2017-09-09T23:24:13.000Z | 2020-04-08T17:07:59.000Z | freetile/helper/xcb.py | rbn42/kd_tree_tile | 16a5c95650d1887b372f373c2126f96d991f3366 | [
"MIT"
] | 1 | 2017-08-26T08:09:03.000Z | 2017-08-26T08:09:34.000Z | freetile/helper/xcb.py | rbn42/freetile | 16a5c95650d1887b372f373c2126f96d991f3366 | [
"MIT"
] | null | null | null | import os
import xcffib
from xcffib.testing import XvfbTest
from xcffib.xproto import Atom, ConfigWindow, EventMask, GetPropertyType
conn = xcffib.connect(os.environ['DISPLAY'])
xproto = xcffib.xproto.xprotoExtension(conn)
def arrange(layout, windowids):
for lay, winid in zip(layout, windowids):
xproto.ConfigureWindow(winid, ConfigWindow.X | ConfigWindow.Y | ConfigWindow.Width | ConfigWindow.Height, lay)
conn.flush()
def move(winid, x, y, sync=True):
xproto.ConfigureWindow(winid, ConfigWindow.X | ConfigWindow.Y, [x, y])
if sync:
conn.flush()
| 27.904762 | 118 | 0.732082 | 73 | 586 | 5.876712 | 0.479452 | 0.04662 | 0.121212 | 0.177156 | 0.242424 | 0.242424 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0.156997 | 586 | 20 | 119 | 29.3 | 0.868421 | 0 | 0 | 0.142857 | 0 | 0 | 0.011945 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20c0cfeb06b96a4465675ae0f8ac3dafd13c7aa2 | 537 | py | Python | oro_plugins/migrations/0002_galleryitem_gallery.py | mikeh74/orocus_djangocms | 81946daf17770e27bbe5c56b4caa0529bf3170bc | [
"MIT"
] | null | null | null | oro_plugins/migrations/0002_galleryitem_gallery.py | mikeh74/orocus_djangocms | 81946daf17770e27bbe5c56b4caa0529bf3170bc | [
"MIT"
] | null | null | null | oro_plugins/migrations/0002_galleryitem_gallery.py | mikeh74/orocus_djangocms | 81946daf17770e27bbe5c56b4caa0529bf3170bc | [
"MIT"
] | null | null | null | # Generated by Django 3.0.6 on 2020-06-05 20:15
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('oro_plugins', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='galleryitem',
name='gallery',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to='oro_plugins.Gallery', verbose_name=''),
preserve_default=False,
),
]
| 25.571429 | 135 | 0.640596 | 61 | 537 | 5.52459 | 0.672131 | 0.071217 | 0.083086 | 0.130564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04902 | 0.240223 | 537 | 20 | 136 | 26.85 | 0.776961 | 0.083799 | 0 | 0 | 1 | 0 | 0.122449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20c121357230b7a557b1c97b8dff117f08572b36 | 464 | py | Python | Buy/migrations/0003_singlecardpurchase_initial_sell_price.py | eliilek/TCGProject | ca1f4e89a8b93ec1073526953d1ca3fab21902b0 | [
"MIT"
] | null | null | null | Buy/migrations/0003_singlecardpurchase_initial_sell_price.py | eliilek/TCGProject | ca1f4e89a8b93ec1073526953d1ca3fab21902b0 | [
"MIT"
] | 3 | 2020-02-11T21:16:54.000Z | 2021-06-10T17:30:26.000Z | Buy/migrations/0003_singlecardpurchase_initial_sell_price.py | eliilek/TCGProject | ca1f4e89a8b93ec1073526953d1ca3fab21902b0 | [
"MIT"
] | null | null | null | # Generated by Django 2.0.3 on 2018-04-18 19:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('Buy', '0002_standardset'),
]
operations = [
migrations.AddField(
model_name='singlecardpurchase',
name='initial_sell_price',
field=models.DecimalField(decimal_places=2, default=0, max_digits=6),
preserve_default=False,
),
]
| 23.2 | 81 | 0.62069 | 50 | 464 | 5.62 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065089 | 0.271552 | 464 | 19 | 82 | 24.421053 | 0.766272 | 0.096983 | 0 | 0 | 1 | 0 | 0.131894 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20c77eabbc6690e3c5bb68e42dc26beb132dc547 | 1,549 | py | Python | tensordata/utils/conda/_conda.py | Hourout/tensordata | cbef6742ee0d3bfc4b886358fc01618bb5b63603 | [
"Apache-2.0"
] | 13 | 2019-01-08T10:22:39.000Z | 2020-06-17T10:02:47.000Z | tensordata/utils/conda/_conda.py | Hourout/tensordata | cbef6742ee0d3bfc4b886358fc01618bb5b63603 | [
"Apache-2.0"
] | null | null | null | tensordata/utils/conda/_conda.py | Hourout/tensordata | cbef6742ee0d3bfc4b886358fc01618bb5b63603 | [
"Apache-2.0"
] | 1 | 2020-06-17T10:02:49.000Z | 2020-06-17T10:02:49.000Z | import subprocess
__all__ = ['view_env', 'create_env', 'remove_env']
def view_env():
"""Get virtual environment info."""
cmd = f"conda info -e"
s = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()[0]
s = s.decode('utf-8').strip().split('\n')[2:]
s = [i.split(' ') for i in s]
return {i[0]:i[-1] for i in s}
def create_env(name, version):
"""Create virtual environment.
Args:
name: virtual environment.
version: python version.
Return:
log info.
"""
cmd = 'conda update -n base -c defaults conda'
subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()[0]
s = view_env()
if name in s:
return 'Virtual environment already exists.'
cmd = f"conda create -n {name} python={version} -y"
subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()[0]
s = view_env()
if name in s:
return 'Virtual environment successfully created.'
return 'Virtual environment failed created.'
def remove_env(name):
"""Remove virtual environment.
Args:
name: virtual environment.
Return:
log info.
"""
s = view_env()
if name not in s:
return 'Virtual environment not exists.'
cmd = f'conda remove -n {name} --all'
subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()[0]
s = view_env()
if name not in s:
return 'Virtual environment successfully removed.'
return 'Virtual environment failed removed.' | 30.98 | 82 | 0.631375 | 203 | 1,549 | 4.748768 | 0.26601 | 0.205394 | 0.149378 | 0.099585 | 0.538382 | 0.538382 | 0.422199 | 0.422199 | 0.422199 | 0.422199 | 0 | 0.006774 | 0.237573 | 1,549 | 50 | 83 | 30.98 | 0.809483 | 0.151065 | 0 | 0.366667 | 0 | 0 | 0.300963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.033333 | 0 | 0.366667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20cf19c220c9e244648b1599487f047cd24ebb59 | 4,623 | py | Python | blog/models.py | florimondmanca/personal-api | 6300f965d3f51d1bf5f10cf1eb15d673bd627631 | [
"MIT"
] | 4 | 2018-08-17T08:06:06.000Z | 2020-02-20T15:15:56.000Z | blog/models.py | florimondmanca/personal-api | 6300f965d3f51d1bf5f10cf1eb15d673bd627631 | [
"MIT"
] | 2 | 2018-10-08T15:59:58.000Z | 2018-10-20T16:50:13.000Z | blog/models.py | florimondmanca/personal-api | 6300f965d3f51d1bf5f10cf1eb15d673bd627631 | [
"MIT"
] | 1 | 2019-09-14T23:15:10.000Z | 2019-09-14T23:15:10.000Z | """Blog models."""
from typing import Union
from django.contrib.postgres.fields import ArrayField
from django.db import models
from django.utils import timezone
from django.utils.text import Truncator, slugify
from markdownx.models import MarkdownxField
from .dbfunctions import Unnest
from .signals import post_published
from .utils import markdown_unformatted
class PostManager(models.Manager):
"""Custom object manager for blog posts."""
def published(self) -> models.QuerySet:
"""Return published blog posts only."""
return self.get_queryset().filter(published__isnull=False)
class Post(models.Model):
"""Represents a blog post."""
objects = PostManager()
SLUG_MAX_LENGTH = 80
title = models.CharField(max_length=300)
slug = models.SlugField(max_length=SLUG_MAX_LENGTH, unique=True)
description = models.TextField(
default="", blank=True, help_text="Used for social cards and RSS."
)
content = MarkdownxField(blank=True, default="")
image_url = models.URLField(blank=True, null=True)
image_caption = models.TextField(null=True, blank=True)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
published = models.DateTimeField(blank=True, null=True)
class Meta: # noqa
ordering = ("-published",)
# NOTE: Django uses B-Tree indexes, enough for small datasets.
indexes = [
# `created` is used for ordering, which can be sped up by an index.
models.Index(fields=["created"]),
# `published` is filtered on a lot (to retrieve drafts)
# and does not change very often.
models.Index(fields=(["published"])),
]
def save(self, *args, **kwargs):
"""Set slug when creating a post."""
if not self.pk and not self.slug:
self.slug = slugify(self.title)[: self.SLUG_MAX_LENGTH]
return super().save(*args, **kwargs)
def __str__(self) -> str:
"""Represent by its title."""
return str(self.title)
def publish(self, request=None):
"""Publish a blog post by setting its published date."""
self.published = timezone.now()
self.save()
post_published.send(sender=Post, instance=self, request=request)
@property
def is_draft(self) -> bool:
"""Return whether the post is a draft."""
return self.published is None
@property
def preview(self) -> str:
"""Return an unformatted preview of the post contents."""
return Truncator(markdown_unformatted(self.content)).chars(200)
def _find_published(self, order_by, **kwargs):
"""Filter and get the first published item in the queryset, or None."""
if not self.published:
return None
qs = Post.objects.published().order_by(order_by).filter(**kwargs)
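# `qs and qs[0] or None` returns the first matching post, or None when the queryset is empty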
return qs and qs[0] or None
@property
def previous(self) -> Union["Post", None]:
"""Return the previous published post.
If the post is not published or there is no previous published post,
returns None.
"""
return self._find_published("-published", published__lt=self.published)
@property
def next(self) -> Union["Post", None]:
"""Return the next published post.
If the post is not published or there is no next published post,
returns None.
"""
return self._find_published("published", published__gt=self.published)
def get_absolute_url(self) -> str:
"""Return the absolute URL path of a blog post."""
return f"/{self.slug}/"
@classmethod
def list_absolute_url(cls) -> str:
"""Return the absolute URL path for the list of posts."""
return "/"
class TagManager(models.Manager):
"""Custom manager for tag objects."""
def with_post_counts(self, published_only: bool = False):
"""Add a `.post_count` attribute on each tag."""
if published_only:
published_filter = models.Q(posts__published__isnull=False)
else:
published_filter = None
count_aggregate = models.Count("posts", filter=published_filter)
return self.get_queryset().annotate(post_count=count_aggregate)
class Tag(models.Model):
"""Represents a group of posts related to similar content."""
objects = TagManager()
name = models.CharField(max_length=20)
posts = models.ManyToManyField(to=Post, related_name="tags")
def __str__(self) -> str:
"""Represent the tag by its name."""
return str(self.name)
| 33.021429 | 79 | 0.653472 | 579 | 4,623 | 5.105354 | 0.307427 | 0.018268 | 0.009134 | 0.014208 | 0.126522 | 0.111637 | 0.075778 | 0.075778 | 0.075778 | 0.075778 | 0 | 0.003115 | 0.23621 | 4,623 | 139 | 80 | 33.258993 | 0.834041 | 0.233182 | 0 | 0.076923 | 0 | 0 | 0.031176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.115385 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
20d20589a9943742a40657aa7ecca6a92ff061ab | 10,688 | py | Python | peregrinearb/utils/single_exchange.py | kecheon/peregrine | 3d308ff3134bc00900421b248f9f93d7ad31ddb6 | [
"MIT"
] | 954 | 2018-02-19T23:20:08.000Z | 2022-03-28T16:37:43.000Z | peregrinearb/utils/single_exchange.py | edouardkombo/peregrine | a3346e937d417acd91468884ee1fc14586cf317d | [
"MIT"
] | 55 | 2018-02-17T00:12:03.000Z | 2021-11-09T03:57:34.000Z | peregrinearb/utils/single_exchange.py | edouardkombo/peregrine | a3346e937d417acd91468884ee1fc14586cf317d | [
"MIT"
] | 307 | 2018-02-24T06:00:13.000Z | 2022-03-30T01:28:32.000Z | import asyncio
import math
import networkx as nx
import ccxt.async_support as ccxt
import datetime
import logging
from .logging_utils import FormatForLogAdapter
__all__ = [
'FeesNotAvailable',
'create_exchange_graph',
'load_exchange_graph',
]
adapter = FormatForLogAdapter(logging.getLogger('peregrinearb.utils.single_exchange'))
class FeesNotAvailable(Exception):
pass
def create_exchange_graph(exchange: ccxt.Exchange):
"""
Returns a simple graph representing exchange. Each edge represents a market.
exchange.load_markets() must have been called. Will throw a ccxt error if it has not.
"""
graph = nx.Graph()
for market_name in exchange.symbols:
try:
base_currency, quote_currency = market_name.split('/')
# if ccxt returns a market in incorrect format (e.g FX_BTC_JPY on BitFlyer)
except ValueError:
continue
graph.add_edge(base_currency, quote_currency, market_name=market_name)
return graph
async def load_exchange_graph(exchange, name=True, fees=True, suppress=None, depth=False, tickers=None) -> nx.DiGraph:
"""
Returns a networkx DiGraph populated with the current ask and bid prices for each market in graph (represented by
edges). If depth, also adds an attribute 'depth' to each edge which represents the current volume of orders
available at the price represented by the 'weight' attribute of each edge.
"""
if suppress is None:
suppress = ['markets']
if name:
exchange = getattr(ccxt, exchange)()
if tickers is None:
adapter.info('Fetching tickers')
tickers = await exchange.fetch_tickers()
adapter.info('Fetched tickers')
market_count = len(tickers)
adapter.info('Loading exchange graph', marketCount=market_count)
adapter.debug('Initializing empty graph with exchange_name and timestamp attributes')
graph = nx.DiGraph()
# todo: get exchange's server time?
graph.graph['exchange_name'] = exchange.id
graph.graph['datetime'] = datetime.datetime.now(tz=datetime.timezone.utc)
adapter.debug('Initialized empty graph with exchange_name and timestamp attributes')
async def add_edges():
tasks = [_add_weighted_edge_to_graph(exchange, market_name, graph, log=True, fees=fees,
suppress=suppress, ticker=ticker, depth=depth, )
for market_name, ticker in tickers.items()]
await asyncio.wait(tasks)
if fees:
for i in range(20):
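# retry up to 20 times, since load_markets may be rate limited or the exchange temporarily unavailable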
try:
adapter.info('Loading fees', iteration=i)
# must load markets to get fees
await exchange.load_markets()
except (ccxt.DDoSProtection, ccxt.RequestTimeout) as e:
if i == 19:
adapter.warning('Rate limited on final iteration, raising error', iteration=i)
raise e
adapter.warning('Rate limited when loading markets', iteration=i)
await asyncio.sleep(0.1)
except ccxt.ExchangeNotAvailable as e:
if i == 19:
adapter.warning('Cannot load markets due to ExchangeNotAvailable error, '
'graph will not be loaded.', iteration=i)
raise e
adapter.warning('Received ExchangeNotAvailable error when loading markets', iteration=i)
else:
break
adapter.info('Loaded fees', iteration=i, marketCount=market_count)
currency_count = len(exchange.currencies)
adapter.info('Adding data to graph', marketCount=market_count, currencyCount=currency_count)
await add_edges()
adapter.info('Added data to graph', marketCount=market_count, currencyCount=currency_count)
else:
adapter.info('Adding data to graph', marketCount=market_count)
await add_edges()
adapter.info('Added data to graph', marketCount=market_count)
adapter.debug('Closing connection')
await exchange.close()
adapter.debug('Closed connection')
adapter.info('Loaded exchange graph')
return graph
async def _add_weighted_edge_to_graph(exchange: ccxt.Exchange, market_name: str, graph: nx.DiGraph, log=True,
fees=False, suppress=None, ticker=None, depth=False, ):
"""
todo: add global variable to bid_volume/ ask_volume to see if all tickers (for a given exchange) have value == None
Adds to the given DiGraph a pair of weighted edges for market_name, using its current ask and bid prices.
:param exchange: A ccxt Exchange object
:param market_name: A string representing a cryptocurrency market formatted like so:
'{base_currency}/{quote_currency}'
:param graph: A Networkx DiGraph upon which the market's edges will be added
:param log: If the edge weights given to the graph should be the negative logarithm of the ask and bid prices. This
is necessary to calculate arbitrage opportunities.
:param fees: If fees should be taken into account for prices.
:param suppress: A list or set which tells which types of warnings to not throw. Accepted elements are 'markets'.
:param ticker: A dictionary representing a market as returned by ccxt's Exchange's fetch_ticker method
:param depth: If True, also adds an attribute 'depth' to each edge which represents the current volume of orders
available at the price represented by the 'weight' attribute of each edge.
"""
adapter.debug('Adding edge to graph', market=market_name)
if ticker is None:
try:
adapter.info('Fetching ticker', market=market_name)
ticker = await exchange.fetch_ticker(market_name)
adapter.info('Fetched ticker', market=market_name)
# any error is solely because of fetch_ticker
except:
if 'markets' not in suppress:
adapter.warning('Market is unavailable at this time. It will not be included in the graph.',
market=market_name)
return
if fees:
if 'taker' in exchange.markets[market_name]:
# we always take the taker side because arbitrage depends on filling orders
# sell_fee_dict = exchange.calculate_fee(market_name, 'limit', 'sell', 0, 0, 'taker')
# buy_fee_dict = exchange.calculate_fee(market_name, 'limit', 'buy', 0, 0, 'taker')
fee = exchange.markets[market_name]['taker']
else:
if 'fees' not in suppress:
adapter.warning("The fees for {} have not yet been implemented into ccxt's uniform API."
.format(exchange))
raise FeesNotAvailable('Fees are not available for {} on {}'.format(market_name, exchange.id))
else:
fee = 0.002
else:
fee = 0
fee_scalar = 1 - fee
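# multiplying a conversion rate by fee_scalar applies the taker fee to that edge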
try:
bid_rate = ticker['bid']
ask_rate = ticker['ask']
if depth:
bid_volume = ticker['bidVolume']
ask_volume = ticker['askVolume']
if bid_volume is None:
adapter.warning('Market is unavailable because its bid volume was given as None. '
'It will not be included in the graph.', market=market_name)
return
if ask_volume is None:
adapter.warning('Market is unavailable because its ask volume was given as None. '
'It will not be included in the graph.', market=market_name)
return
# ask and bid == None if this market is non existent.
except TypeError:
adapter.warning('Market is unavailable at this time. It will not be included in the graph.',
market=market_name)
return
# Exchanges give asks and bids as either 0 or None when they do not exist.
# todo: should we account for exchanges upon which an ask exists but a bid does not (and vice versa)? Would this
# cause bugs?
if ask_rate == 0 or bid_rate == 0 or ask_rate is None or bid_rate is None:
adapter.warning('Market is unavailable at this time. It will not be included in the graph.',
market=market_name)
return
try:
base_currency, quote_currency = market_name.split('/')
# if ccxt returns a market in incorrect format (e.g FX_BTC_JPY on BitFlyer)
except ValueError:
if 'markets' not in suppress:
adapter.warning('Market is unavailable at this time due to incorrect formatting. '
'It will not be included in the graph.', market=market_name)
return
if log:
if depth:
graph.add_edge(base_currency, quote_currency, weight=-math.log(fee_scalar * bid_rate),
depth=-math.log(bid_volume), market_name=market_name, trade_type='SELL',
fee=fee, volume=bid_volume, no_fee_rate=bid_rate)
graph.add_edge(quote_currency, base_currency, weight=-math.log(fee_scalar * 1 / ask_rate),
depth=-math.log(ask_volume * ask_rate), market_name=market_name, trade_type='BUY',
fee=fee, volume=ask_volume, no_fee_rate=ask_rate)
else:
graph.add_edge(base_currency, quote_currency, weight=-math.log(fee_scalar * bid_rate),
market_name=market_name, trade_type='SELL', fee=fee, no_fee_rate=bid_rate)
graph.add_edge(quote_currency, base_currency, weight=-math.log(fee_scalar * 1 / ask_rate),
market_name=market_name, trade_type='BUY', fee=fee, no_fee_rate=ask_rate)
else:
if depth:
graph.add_edge(base_currency, quote_currency, weight=fee_scalar * bid_rate, depth=bid_volume,
market_name=market_name, trade_type='SELL', fee=fee, volume=bid_volume, no_fee_rate=bid_rate)
graph.add_edge(quote_currency, base_currency, weight=fee_scalar * 1 / ask_rate, depth=ask_volume,
market_name=market_name, trade_type='BUY', fee=fee, volume=ask_volume, no_fee_rate=ask_rate)
else:
graph.add_edge(base_currency, quote_currency, weight=fee_scalar * bid_rate,
market_name=market_name, trade_type='SELL', fee=fee, no_fee_rate=bid_rate)
graph.add_edge(quote_currency, base_currency, weight=fee_scalar * 1 / ask_rate,
market_name=market_name, trade_type='BUY', fee=fee, no_fee_rate=ask_rate)
adapter.debug('Added edge to graph', market=market_name)
| 47.292035 | 120 | 0.644835 | 1,374 | 10,688 | 4.869723 | 0.187773 | 0.061276 | 0.023913 | 0.026902 | 0.494993 | 0.481094 | 0.44388 | 0.429981 | 0.403079 | 0.381856 | 0 | 0.00323 | 0.275917 | 10,688 | 225 | 121 | 47.502222 | 0.861352 | 0.084955 | 0 | 0.348101 | 0 | 0 | 0.17547 | 0.006633 | 0 | 0 | 0 | 0.013333 | 0 | 1 | 0.006329 | false | 0.006329 | 0.044304 | 0 | 0.107595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20d70585f5c3c218441a7284e243ac4d8d8a6342 | 1,158 | py | Python | ch02/q2-1.py | iamnicoj/ctci | f71f995cb3d3257d3d58f1f167fcab8eaf84d457 | [
"MIT"
] | null | null | null | ch02/q2-1.py | iamnicoj/ctci | f71f995cb3d3257d3d58f1f167fcab8eaf84d457 | [
"MIT"
] | 3 | 2021-03-19T14:35:27.000Z | 2021-03-20T16:12:34.000Z | ch02/q2-1.py | iamnicoj/ctci | f71f995cb3d3257d3d58f1f167fcab8eaf84d457 | [
"MIT"
] | null | null | null | from linked_list import linked_list
#O(N^2)
def remove_dups(linked_list):
# I could use a sorting structure such as a balanced binary search tree,
# but here I just work in place on the same list
if linked_list.count < 1: return True
anchor = linked_list.head
while anchor is not None:
cursor = anchor
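# compare anchor's item against every node after it, unlinking any duplicates in place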
while cursor is not None and cursor.next is not None:
if anchor.item == cursor.next.item:
cursor.next = cursor.next.next
linked_list.count -= 1
cursor = cursor.next
anchor = anchor.next
####################################
myll = linked_list()
remove_dups(myll)
myll.print_list()
#####
myll = linked_list()
myll.add(2)
remove_dups(myll)
myll.print_list()
#####
myll = linked_list()
myll.add(2)
myll.add(2) #
remove_dups(myll)
myll.print_list()
#####
myll = linked_list()
myll.add(2)
myll.add(4)
myll.add(0)
myll.add(10)
remove_dups(myll)
myll.print_list()
####
myll = linked_list()
myll.add(4)
myll.add(2)
myll.add(0)
myll.add(4)
myll.add(10)
myll.add(0)
myll.add(10)
remove_dups(myll)
myll.print_list() | 17.029412 | 70 | 0.625216 | 176 | 1,158 | 3.982955 | 0.289773 | 0.1398 | 0.099857 | 0.128388 | 0.433666 | 0.386591 | 0.386591 | 0.386591 | 0.386591 | 0.386591 | 0 | 0.021158 | 0.224525 | 1,158 | 68 | 71 | 17.029412 | 0.759465 | 0.103627 | 0 | 0.707317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.02439 | 0 | 0.04878 | 0.121951 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20db30683d6b543731d432357e62120eaca639e6 | 6,272 | py | Python | routes/customer.py | trqanees94/braintree_fast | 8897aa51821ac4b868076a27fe65a74692d33647 | [
"MIT"
] | null | null | null | routes/customer.py | trqanees94/braintree_fast | 8897aa51821ac4b868076a27fe65a74692d33647 | [
"MIT"
] | null | null | null | routes/customer.py | trqanees94/braintree_fast | 8897aa51821ac4b868076a27fe65a74692d33647 | [
"MIT"
] | null | null | null | #bson
from bson import ObjectId
# clients
from clients.mongodb import MongoDB
# braintree/ stripes gateway
from gateway import generate_client_token, transact, find_transaction, find_customer, customer, update_customer ,find_all_customers, \
stripe_customer, update_stripe_customer
# flask
from flask import jsonify, Response
def create(customer_request):
'''
Input:customer_request (request.form)
'''
# customer() uses braintree gateway to create a customer
braintree_data = customer({
"first_name": customer_request.form['first_name'],
"last_name": customer_request.form['last_name'],
"custom_fields": {
"spending_limit": customer_request.form['spending_limit'] if customer_request.form['spending_limit'] else 0
}
})
# stripe_customer() uses stripe gateway to create a customer
stripe_data = stripe_customer(
"{} {}".format(customer_request.form['first_name'], customer_request.form['last_name']),
customer_request.form['spending_limit'] if customer_request.form['spending_limit'] else 0
)
if not braintree_data.is_success:
errors_list = [[x.code, x.message] for x in braintree_data.errors.deep_errors]
error_dict = {
"error_message": errors_list[0][1],
"error_code": errors_list[0][0]
}
else:
error_dict = {}
# customer_pair makes up the Fast Customer record
customer_pair = {
"braintree":{
"customer_id": braintree_data.customer.id,
"customer_first_name": braintree_data.customer.first_name,
"customer_last_name": braintree_data.customer.last_name,
"customer_spending_limit": braintree_data.customer.custom_fields["spending_limit"]
},
"stripe":{
"customer_id": stripe_data.id,
"customer_first_name": stripe_data.name,
"customer_last_name": stripe_data.name,
"customer_spending_limit": stripe_data.metadata.spending_limit
}
}
# open database connection
with MongoDB() as mongo_client:
# add the customer to the customers collection
customer_object = mongo_client.customers.insert_one(customer_pair)
customer_response = {
"fast_customer_id": None if error_dict else str(customer_object["_id"]),
"braintree_id": {} if error_dict else braintree_data.customer.id,
"stripe_id": {} if error_dict else stripe_data.id,
"error": error_dict,
"success": bool(not error_dict)
}
return customer_response
def update(customer_request):
'''
Input: customer_request -(request.args)
'''
# fast_customer_id is sent from the update html page
fast_customer_id = customer_request.args["customer_id"]
updated_first_name=customer_request.args["first_name"]
updated_last_name=customer_request.args["last_name"]
updated_spending_limit=customer_request.args["spending_limit"]
with MongoDB() as mongo_client:
# customer_object contains the braintree and stripe customer pair
customer_object = mongo_client.customers.find_by_id(fast_customer_id)
braintree_id = customer_object['braintree']['customer_id']
stripe_id = customer_object['stripe']['customer_id']
#update_params creates the payload that has updated customer data
update_params = {
"first_name": updated_first_name,
"last_name": updated_last_name,
"custom_fields": {
"spending_limit": updated_spending_limit
}
}
# update_customer() uses the braintree gateway to update customer
braintree_data = update_customer(braintree_id, update_params)
# update_stripe_customer() uses the stripe gateway to update customer
stripe_data = update_stripe_customer(
stripe_id,
"{} {}".format(customer_request.args['first_name'], customer_request.args['last_name']),
customer_request.args['spending_limit']
)
# New customer data must be updated in the MongoDB customers collection
with MongoDB() as mongo_client:
mongo_client.customers.collection.update_one(
{"_id": ObjectId(fast_customer_id)},
{"$set": {
"braintree":{
"customer_id":braintree_id,
"customer_first_name": updated_first_name,
"customer_last_name": updated_last_name,
"customer_spending_limit": updated_spending_limit
},
"stripe":{
"customer_id":stripe_id,
"customer_first_name": "{} {}".format(updated_first_name,updated_last_name),
"customer_last_name": "{} {}".format(updated_first_name,updated_last_name),
"customer_spending_limit": updated_spending_limit
}
}}
)
if not braintree_data.is_success:
errors_list = [[x.code, x.message] for x in braintree_data.errors.deep_errors]
error_dict = {
"error_message": errors_list[0][1],
"error_code": errors_list[0][0]
}
else:
error_dict = {}
customer_response = {
"fast_customer_id": None if error_dict else str(customer_object["_id"]),
"braintree_id": {} if error_dict else braintree_data.customer.id,
"stripe_id": {} if error_dict else stripe_data.id,
"error": error_dict,
"success": bool(not error_dict)
}
return customer_response
def retrieve(mongoid=None):
if mongoid:
with MongoDB() as mongo_client:
# pull single customer from mongodb customers collection
customer_object_list = [mongo_client.customers.find_by_id(mongoid)]
else:
with MongoDB() as mongo_client:
# pull all customers from mongodb customers collection
customer_object_list = mongo_client.customers.find()
return customer_object_list
| 38.243902 | 134 | 0.629464 | 688 | 6,272 | 5.396802 | 0.146802 | 0.076757 | 0.040937 | 0.024239 | 0.629141 | 0.464315 | 0.327767 | 0.327767 | 0.327767 | 0.308645 | 0 | 0.002217 | 0.280931 | 6,272 | 163 | 135 | 38.478528 | 0.821064 | 0.134566 | 0 | 0.371681 | 1 | 0 | 0.149656 | 0.017104 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026549 | false | 0 | 0.035398 | 0 | 0.088496 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
20e755a64ee836a08b5fdee412b040a3939be9af | 652 | py | Python | src/minerva_db/sql/models/group.py | labsyspharm/minerva-db | 49c205fc5d9bcc513b4eb21b6493c928ea711fce | [
"MIT"
] | null | null | null | src/minerva_db/sql/models/group.py | labsyspharm/minerva-db | 49c205fc5d9bcc513b4eb21b6493c928ea711fce | [
"MIT"
] | 2 | 2018-06-06T13:29:23.000Z | 2018-07-25T00:36:38.000Z | src/minerva_db/sql/models/group.py | sorgerlab/minerva-db | 49c205fc5d9bcc513b4eb21b6493c928ea711fce | [
"MIT"
] | 1 | 2020-03-06T23:53:42.000Z | 2020-03-06T23:53:42.000Z | from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.orm import relationship
# from sqlalchemy.ext.associationproxy import association_proxy
from .subject import Subject
class Group(Subject):
__mapper_args__ = {
'polymorphic_identity': 'group',
}
uuid = Column(String(36), ForeignKey(Subject.uuid), primary_key=True)
name = Column('name', String(64), unique=True, nullable=False)
users = relationship('User', viewonly=True, secondary='t_membership')
memberships = relationship('Membership', back_populates='group')
def __init__(self, uuid, name):
self.uuid = uuid
self.name = name
| 29.636364 | 73 | 0.714724 | 74 | 652 | 6.108108 | 0.554054 | 0.09292 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007449 | 0.17638 | 652 | 21 | 74 | 31.047619 | 0.834264 | 0.093558 | 0 | 0 | 0 | 0 | 0.101868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
20ea9a3199f5ede3e9ec9371bbc365bff86fc2fb | 2,550 | py | Python | integration/python/instantiate_docker.py | DomainDrivenArchitecture/dda-smeagol-crate | d2f4a76dde2df416f905a691d2e7b0b80fd282b9 | [
"Apache-2.0"
] | 2 | 2019-01-02T08:59:47.000Z | 2021-08-05T09:13:46.000Z | integration/python/instantiate_docker.py | DomainDrivenArchitecture/dda-smeagol-crate | d2f4a76dde2df416f905a691d2e7b0b80fd282b9 | [
"Apache-2.0"
] | null | null | null | integration/python/instantiate_docker.py | DomainDrivenArchitecture/dda-smeagol-crate | d2f4a76dde2df416f905a691d2e7b0b80fd282b9 | [
"Apache-2.0"
] | 1 | 2018-11-27T11:17:03.000Z | 2018-11-27T11:17:03.000Z | import docker
import sys
import os
import argparse
# Please perform the following steps in order to use this script
# 1) Install python 3 and pip3: sudo apt install python3-pip python3
# 2) Install the docker sdk with pip: pip3 install docker
parser = argparse.ArgumentParser()
parser.add_argument("jar", help="relative or absolute path to the dda-serverspec-crate uberjar.")
parser.add_argument("config", help="relative or absolute path to the config file in edn format.")
# TODO: Review jem 2018.11.08: relevant only for debug? If yes, then remove!
parser.add_argument("-c", "--cmd", help="alternative command to execute in the docker container.\
Default is to run the given uberjar with the given config.")
parser.add_argument("-i", "--image", help="image for the docker container. Default image is openjdk:8 (where netstat tests do not work since net-tools is not installed).")
args = parser.parse_args()
docker_logs = os.getcwd() + '/docker-logs/'
if not os.path.exists(docker_logs):
os.makedirs(docker_logs)
edn_file = os.path.abspath(args.config)
jar_file = os.path.abspath(args.jar)
# TODO: Review jem 2018.11.08: Put defaults to the argparse section
execute_command = 'java -jar /app/uberjar.jar /app/config.edn'
if args.cmd:
execute_command = args.cmd
# TODO: Review jem 2018.11.08: Put defaults to the argparse section
image = 'openjdk:8'
if args.image:
image = args.image
# TODO: Review jem 2018.11.08: we curl the serverspec outside - isn't it a bad idea to do the curl inside of this test script?
debug_map = {'edn_file':edn_file, 'jar_file':jar_file, 'docker_logs':docker_logs}
client = docker.APIClient()
# docker run --volume $(pwd)/example-serverspec.edn:/app/config.edn --volume $(pwd)/target/dda-serverspec-crate-1.1.4-SNAPSHOT-standalone.jar:/app/uberjar.jar --volume $(pwd)/docker_logs/:/logs/ -it openjdk:8 /bin/bash
container = client.create_container(
image=image,
command=execute_command,
volumes=['/app/config.edn', '/app/uberjar.jar', '/logs'],
host_config=client.create_host_config(binds={
edn_file: {
'bind': '/app/config.edn',
'mode': 'ro',
},
jar_file: {
'bind': '/app/uberjar.jar',
'mode': 'ro',
},
docker_logs: {
'bind': '/logs/',
'mode': 'rw',
}
})
)
response = client.start(container=container)
for log in client.logs(container, stream=True, stdout=True, stderr=True):
print(log)
sys.exit(client.wait(container)['StatusCode'])
| 37.5 | 218 | 0.688627 | 376 | 2,550 | 4.595745 | 0.369681 | 0.046296 | 0.039352 | 0.039352 | 0.144676 | 0.12037 | 0.096065 | 0.060185 | 0.060185 | 0.060185 | 0 | 0.021511 | 0.179608 | 2,550 | 67 | 219 | 38.059701 | 0.804493 | 0.287451 | 0 | 0.042553 | 0 | 0.021277 | 0.263274 | 0 | 0 | 0 | 0 | 0.014925 | 0 | 1 | 0 | false | 0 | 0.085106 | 0 | 0.085106 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45465c3ae216f2ba2ed2de99dd3ca07f116d36fd | 1,982 | py | Python | docs/util.py | simplebutneeded/python-measurement | b14f954fc74424ed3b1c9780e089cd91c19be45c | [
"MIT"
] | null | null | null | docs/util.py | simplebutneeded/python-measurement | b14f954fc74424ed3b1c9780e089cd91c19be45c | [
"MIT"
] | null | null | null | docs/util.py | simplebutneeded/python-measurement | b14f954fc74424ed3b1c9780e089cd91c19be45c | [
"MIT"
] | 2 | 2016-09-15T18:36:33.000Z | 2019-07-17T17:39:23.000Z | from __future__ import print_function
from measurement.base import MeasureBase, BidimensionalMeasure
from measurement.utils import get_all_measures
for measure in get_all_measures():
classname = measure.__name__
print(classname)
print('-' * len(classname))
print()
if issubclass(measure, MeasureBase):
units = measure.get_units()
aliases = measure.get_aliases()
print(
'* *Acceptable as Arguments or Attributes*: %s' % (
', '.join(sorted(['``%s``' % unit for unit in units]))
)
)
print(
'* *Acceptable as Arguments*: %s' % (
', '.join(sorted(['``%s``' % alias for alias in aliases]))
)
)
elif issubclass(measure, BidimensionalMeasure):
print(".. note::")
print(" This is a bi-dimensional measurement; bi-dimensional")
print(" measures are created by finding an appropriate unit in the")
print(" measure's primary measurement class, and an appropriate")
print(" in the measure's reference class, and using them as a")
print(" double-underscore-separated keyword argument (or, if")
print(" converting to another unit, as an attribute).")
print()
print(" For example, to create an object representing 24 miles-per")
print(" hour::")
print()
print(" >>> from measurement.measure import Speed")
print(" >>> my_speed = Speed(mile__hour=24)")
print(" >>> print my_speed")
print(" 24.0 mi/hr")
print(" >>> print my_speed.km__hr")
print(" 38.624256")
print()
print(
"* *Primary Measurement*: %s" % (
measure.PRIMARY_DIMENSION.__name__
)
)
print(
"* *Reference Measurement*: %s" % (
measure.REFERENCE_DIMENSION.__name__
)
)
print()
| 36.703704 | 78 | 0.55449 | 200 | 1,982 | 5.335 | 0.4 | 0.04686 | 0.033739 | 0.048735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011244 | 0.326942 | 1,982 | 53 | 79 | 37.396226 | 0.788606 | 0 | 0 | 0.176471 | 0 | 0 | 0.370838 | 0.013623 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.078431 | 0 | 0.078431 | 0.529412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
454f908f7e41a06c5a4c9549749b050cd2e1d924 | 8,027 | py | Python | ipython/attachments/Weave/iterators_example.py | cassiasamp/scipy-cookbook | 67c120be33302554edfd7fe7962f3e2773109021 | [
"BSD-3-Clause"
] | 408 | 2016-05-26T04:17:59.000Z | 2022-03-18T09:19:59.000Z | ipython/attachments/Weave/iterators_example.py | cassiasamp/scipy-cookbook | 67c120be33302554edfd7fe7962f3e2773109021 | [
"BSD-3-Clause"
] | 25 | 2016-08-28T22:20:53.000Z | 2021-11-08T16:37:00.000Z | ipython/attachments/Weave/iterators_example.py | cassiasamp/scipy-cookbook | 67c120be33302554edfd7fe7962f3e2773109021 | [
"BSD-3-Clause"
] | 185 | 2016-06-05T03:27:49.000Z | 2022-01-28T21:14:02.000Z | #!/usr/bin/env python
import sys
import numpy as npy
import pylab as P
from scipy.weave import inline, converters, blitz
from scipy.testing import measure
# Blitz conversion is terrific, but sometimes you don't have fixed array sizes
# in your problem. Fortunately numpy iterators still make writing inline
# weave code very, very simple.
def multi_iter_example():
# This is a very simple example of multi dimensional iterators, and
# their power to "broadcast" arrays of compatible shapes. It shows that
# the very same code that is entirely ignorant of dimensionality can
# achieve completely different computations based on the rules of
# broadcasting.
# it is important to know that the weave array conversion of "a"
# gives you access in C++ to:
# py_a -- PyObject *
# a_array -- PyArrayObject *
# a -- py_array->data
a = npy.ones((4,4), npy.float64)
# for the sake of driving home the "dynamic code" approach...
dtype2ctype = {
npy.dtype(npy.float64): 'double',
npy.dtype(npy.float32): 'float',
npy.dtype(npy.int32): 'int',
npy.dtype(npy.int16): 'short',
}
dt = dtype2ctype.get(a.dtype)
# this code does a = a*b inplace, broadcasting b to fit the shape of a
code = \
"""
%s *p1, *p2;
PyObject *itr;
itr = PyArray_MultiIterNew(2, a_array, b_array);
while(PyArray_MultiIter_NOTDONE(itr)) {
p1 = (%s *) PyArray_MultiIter_DATA(itr, 0);
p2 = (%s *) PyArray_MultiIter_DATA(itr, 1);
*p1 = (*p1) * (*p2);
PyArray_MultiIter_NEXT(itr);
}
""" % (dt, dt, dt)
b = npy.arange(4, dtype=a.dtype)
print '\n A B '
print a, b
# this reshaping is redundant, it would be the default broadcast
b.shape = (1,4)
inline(code, ['a', 'b'])
print "\ninline version of a*b[None,:],"
print a
a = npy.ones((4,4), npy.float64)
b = npy.arange(4, dtype=a.dtype)
b.shape = (4,1)
inline(code, ['a', 'b'])
print "\ninline version of a*b[:,None],"
print a
def data_casting_test():
# In my MR application, raw data is stored as a file with one or more
# (block-hdr, block-data) pairs. Block data is one or more
# rows of Npt complex samples in big-endian integer pairs (real, imag).
#
# At the block level, I encounter three different raw data layouts--
# 1) one plane, or slice: Y rows by 2*Npt samples
# 2) one volume: Z slices * Y rows by 2*Npt samples
# 3) one row sliced across the z-axis: Z slices by 2*Npt samples
#
# The task is to tease out one volume at a time from any given layout,
# and cast the integer precision data into a complex64 array.
# Given that contiguity is not guaranteed, and the number of dimensions
# can vary, Numpy iterators are useful to provide a single code that can
# carry out the conversion.
#
# Other solutions include:
# 1) working entirely with the string data from file.read() with string
# manipulations (simulated below).
# 2) letting numpy handle automatic byteorder/dtype conversion
nsl, nline, npt = (20,64,64)
hdr_dt = npy.dtype('>V28')
# example 1: a block is one slice of complex samples in short integer pairs
blk_dt1 = npy.dtype(('>i2', nline*npt*2))
dat_dt = npy.dtype({'names': ['hdr', 'data'], 'formats': [hdr_dt, blk_dt1]})
# create an empty volume-- nsl contiguous blocks
vol = npy.empty((nsl,), dat_dt)
t = time_casting(vol[:]['data'])
P.plot(100*t/t.max(), 'b--', label='vol=20 contiguous blocks')
P.plot(100*t/t.max(), 'bo')
# example 2: a block is one entire volume
blk_dt2 = npy.dtype(('>i2', nsl*nline*npt*2))
dat_dt = npy.dtype({'names': ['hdr', 'data'], 'formats': [hdr_dt, blk_dt2]})
# create an empty volume-- 1 block
vol = npy.empty((1,), dat_dt)
t = time_casting(vol[0]['data'])
P.plot(100*t/t.max(), 'g--', label='vol=1 contiguous block')
P.plot(100*t/t.max(), 'go')
# example 3: a block slices across the z dimension, long integer precision
# ALSO--a given volume is sliced discontiguously
blk_dt3 = npy.dtype(('>i4', nsl*npt*2))
dat_dt = npy.dtype({'names': ['hdr', 'data'], 'formats': [hdr_dt, blk_dt3]})
# a real data set has volumes interleaved, so create two volumes here
vols = npy.empty((2*nline,), dat_dt)
# and work on casting the first volume
t = time_casting(vols[0::2]['data'])
P.plot(100*t/t.max(), 'r--', label='vol=64 discontiguous blocks')
P.plot(100*t/t.max(), 'ro')
P.xticks([0,1,2], ('strings', 'numpy auto', 'inline'))
P.gca().set_xlim((-0.25, 2.25))
P.gca().set_ylim((0, 110))
P.gca().set_ylabel(r"% of slowest time")
P.legend(loc=8)
P.title('Casting raw file data to an MR volume')
P.show()
def time_casting(int_data):
nblk = 1 if len(int_data.shape) < 2 else int_data.shape[0]
bias = (npy.random.rand(nblk) + \
1j*npy.random.rand(nblk)).astype(npy.complex64)
dstr = int_data.tostring()
dt = npy.int16 if int_data.dtype.itemsize == 2 else npy.int32
fshape = list(int_data.shape)
fshape[-1] = fshape[-1]/2
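# interleaved (real, imag) integer pairs collapse into single complex samples, halving the last axis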
float_data = npy.empty(fshape, npy.complex64)
# method 1: string conversion
float_data.shape = (npy.product(fshape),)
tstr = measure("float_data[:] = complex_fromstring(dstr, dt)", times=25)
float_data.shape = fshape
print "to-/from- string: ", tstr, "shape=",float_data.shape
# method 2: numpy dtype magic
sl = [None, slice(None)] if len(fshape)<2 else [slice(None)]*len(fshape)
# need to loop since int_data need not be contiguous
tnpy = measure("""
for fline, iline, b in zip(float_data[sl], int_data[sl], bias):
cast_to_complex_npy(fline, iline, bias=b)""", times=25)
print"numpy automagic: ", tnpy
# method 3: plain inline brute force!
twv = measure("cast_to_complex(float_data, int_data, bias=bias)",
times=25)
print"inline casting: ", twv
return npy.array([tstr, tnpy, twv], npy.float64)
def complex_fromstring(data, numtype):
if sys.byteorder == "little":
return npy.fromstring(
npy.fromstring(data,numtype).byteswap().astype(npy.float32).tostring(),
npy.complex64)
else:
return npy.fromstring(
npy.fromstring(data,numtype).astype(npy.float32).tostring(),
npy.complex64)
def cast_to_complex(cplx_float, cplx_integer, bias=None):
if cplx_integer.dtype.itemsize == 4:
replacements = tuple(["l", "long", "SWAPLONG", "l"]*2)
else:
replacements = tuple(["s", "short", "SWAPSHORT", "s"]*2)
if sys.byteorder == "big":
replacements[-2] = replacements[-6] = "NOP"
cast_code = """
#define SWAPSHORT(x) ((short) ((x >> 8) | (x << 8)) )
#define SWAPLONG(x) ((long) ((x >> 24) | (x << 24) | ((x & 0x00ff0000) >> 8) | ((x & 0x0000ff00) << 8)) )
#define NOP(x) x
unsigned short *s;
unsigned long *l;
float repart, impart;
PyObject *itr;
itr = PyArray_IterNew(py_cplx_integer);
while(PyArray_ITER_NOTDONE(itr)) {
// get real part
%s = (unsigned %s *) PyArray_ITER_DATA(itr);
repart = %s(*%s);
PyArray_ITER_NEXT(itr);
// get imag part
%s = (unsigned %s *) PyArray_ITER_DATA(itr);
impart = %s(*%s);
PyArray_ITER_NEXT(itr);
*(cplx_float++) = std::complex<float>(repart, impart);
}
""" % replacements
inline(cast_code, ['cplx_float', 'cplx_integer'])
if bias is not None:
if len(cplx_float.shape) > 1:
bsl = [slice(None)]*(len(cplx_float.shape)-1) + [None]
else:
bsl = slice(None)
npy.subtract(cplx_float, bias[bsl], cplx_float)
def cast_to_complex_npy(cplx_float, cplx_integer, bias=None):
cplx_float.real[:] = cplx_integer[0::2]
cplx_float.imag[:] = cplx_integer[1::2]
if bias is not None:
npy.subtract(cplx_float, bias, cplx_float)
if __name__=="__main__":
data_casting_test()
multi_iter_example()
| 38.22381 | 109 | 0.628005 | 1,195 | 8,027 | 4.123013 | 0.276151 | 0.02192 | 0.009742 | 0.01096 | 0.198498 | 0.176172 | 0.112645 | 0.060077 | 0.047087 | 0.047087 | 0 | 0.02889 | 0.228105 | 8,027 | 209 | 110 | 38.406699 | 0.766301 | 0.286782 | 0 | 0.160305 | 0 | 0.015267 | 0.258649 | 0.048289 | 0 | 0 | 0.0037 | 0 | 0 | 0 | null | null | 0 | 0.038168 | null | null | 0.068702 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4555051cbfcd1d15b946d4d5c08a4cc547ba96e4 | 1,755 | py | Python | measure.py | mtenenholtz/vehicle_footprint | d4500fcc127ed2c0400537b661a964b1d7fcef51 | [
"MIT"
] | null | null | null | measure.py | mtenenholtz/vehicle_footprint | d4500fcc127ed2c0400537b661a964b1d7fcef51 | [
"MIT"
] | null | null | null | measure.py | mtenenholtz/vehicle_footprint | d4500fcc127ed2c0400537b661a964b1d7fcef51 | [
"MIT"
] | null | null | null | class PixelMeasurer:
def __init__(self, coordinate_store, is_one_calib_block, correction_factor):
self.coordinate_store = coordinate_store
self.is_one_calib_block = is_one_calib_block
self.correction_factor = correction_factor
def get_distance(self, calibration_length):
distance_per_pixel = calibration_length / self.pixel_distance_calibration()
if not self.is_one_calib_block:
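# with a second (side) calibration block, the side/middle pixel-distance ratio scales the result (presumably to compensate for camera perspective)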
calibration_difference = float(self.pixel_distance_calibration_side()) / \
float(self.pixel_distance_calibration())
distance_correction = 1 - self.correction_factor*(1 - calibration_difference)
return self.pixel_distance_between_wheels() * distance_per_pixel * distance_correction
else:
return self.pixel_distance_between_wheels() * distance_per_pixel
def get_left_wheel_midpoint(self):
points = self.coordinate_store.get_left_wheel_points()
return int(abs(points[0][0] + points[1][0]) / 2)
def get_right_wheel_midpoint(self):
points = self.coordinate_store.get_right_wheel_points(is_one_calib_block=self.is_one_calib_block)
return int(abs(points[0][0] + points[1][0]) / 2)
def pixel_distance_between_wheels(self):
return abs(self.get_right_wheel_midpoint() - self.get_left_wheel_midpoint())
def pixel_distance_calibration(self):
calibration_points = self.coordinate_store.get_middle_calib_points()
return abs(calibration_points[0][0] - calibration_points[1][0])
def pixel_distance_calibration_side(self):
calibration_points = self.coordinate_store.get_side_calib_points()
return abs(calibration_points[0][0] - calibration_points[1][0])
| 50.142857 | 105 | 0.723077 | 221 | 1,755 | 5.303167 | 0.176471 | 0.099829 | 0.09727 | 0.076792 | 0.554608 | 0.392491 | 0.392491 | 0.319113 | 0.242321 | 0.153584 | 0 | 0.014094 | 0.191453 | 1,755 | 34 | 106 | 51.617647 | 0.811839 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.035714 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45559cb2010dc718a647583ca7727ca07bebe64f | 2,494 | py | Python | feedback-Python/models.py | gforghieri/code-demo | 7e4faf9d1411d073af360c78387db3f58ea45cf1 | [
"MIT"
] | null | null | null | feedback-Python/models.py | gforghieri/code-demo | 7e4faf9d1411d073af360c78387db3f58ea45cf1 | [
"MIT"
] | null | null | null | feedback-Python/models.py | gforghieri/code-demo | 7e4faf9d1411d073af360c78387db3f58ea45cf1 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.translation import ugettext_lazy as _
class Feature(models.Model):
name = models.CharField('Feature Name', max_length=50, blank=True, unique=True)
description = models.CharField('Feature Description', max_length=150, blank=True)
info_link = models.CharField('Feature Demo Link', max_length=100, blank=True)
class Meta:
verbose_name = _('feature')
verbose_name_plural = _('features')
ordering = ['id']
class Version(models.Model):
tag = models.CharField('Tag', max_length=50, unique=True)
class Meta:
verbose_name = _('tag')
verbose_name_plural = _('tags')
class Release(models.Model):
version = models.ForeignKey(Version, on_delete=models.CASCADE)
features = models.ManyToManyField(Feature, blank=True)
class Meta:
verbose_name = _('release')
verbose_name_plural = _('releases')
class FeedbackResult(models.Model):
user_email = models.EmailField('Email', blank=False, null=False)
service = models.ForeignKey('organizations.service', null=True, on_delete=models.SET_NULL)
feature = models.ForeignKey(Feature, on_delete=models.CASCADE)
feedback = models.CharField('Feature Feedback', max_length=512, blank=True, null=True)
liked = models.NullBooleanField('Feature Liked')
skipped = models.NullBooleanField('Feature Skipped')
class Meta:
verbose_name = _('feedback-result')
verbose_name_plural = _('feedback-results')
class FeedbackActivity(models.Model):
user_email = models.EmailField('Email', blank=False)
declined = models.NullBooleanField('Declined', null=True, blank=True)
release = models.ForeignKey(Release, null=True, blank=True, on_delete=models.CASCADE)
service = models.ForeignKey('organizations.service', null=True, blank=True, on_delete=models.CASCADE)
has_given_feedback = models.NullBooleanField('Given Feedback', blank=True)
hours_used_release = models.FloatField(null=True, blank=True)
class Meta:
verbose_name = _('feedback-activity')
verbose_name_plural = _('feedback-activities')
class UserSession(models.Model):
user_email = models.EmailField('Email', blank=False)
session_start = models.DateTimeField(null=True)
session_end = models.DateTimeField(null=True)
tag = models.CharField(null=True, max_length=30)
class Meta:
verbose_name = _('user-session')
verbose_name_plural = _('user-sessions')
| 36.676471 | 105 | 0.716119 | 292 | 2,494 | 5.931507 | 0.256849 | 0.076212 | 0.055427 | 0.069284 | 0.271363 | 0.236721 | 0.18649 | 0.132217 | 0.088337 | 0 | 0 | 0.007208 | 0.165597 | 2,494 | 67 | 106 | 37.223881 | 0.825084 | 0 | 0 | 0.163265 | 0 | 0 | 0.122294 | 0.01684 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040816 | 0 | 0.734694 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
455976b188c8965e37899317a939f416893472f4 | 727 | py | Python | jupyterhub_client/tests/test_async.py | minrk/jupyterhub-client | 99b856e81925690d5546fd8457a6e333fb709513 | [
"BSD-2-Clause"
] | 2 | 2017-04-06T20:28:31.000Z | 2020-03-20T06:51:21.000Z | jupyterhub_client/tests/test_async.py | minrk/jupyterhub-client | 99b856e81925690d5546fd8457a6e333fb709513 | [
"BSD-2-Clause"
] | 2 | 2017-04-10T09:30:20.000Z | 2018-11-22T16:30:03.000Z | jupyterhub_client/tests/test_async.py | minrk/jupyterhub-client | 99b856e81925690d5546fd8457a6e333fb709513 | [
"BSD-2-Clause"
] | 8 | 2017-04-06T20:28:38.000Z | 2021-04-01T13:39:13.000Z | from pytest import fixture
from tornado import gen
from tornado.ioloop import IOLoop
from .conftest import TOKEN
from ..async import AsyncJupyterHubClient
@fixture
def client(app, io_loop):
# include io_loop to avoid clear_instance calls resetting the current loop
return AsyncJupyterHubClient(TOKEN, url=app.hub.server.url + 'api')
def _run(_test, timeout=10):
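# run the coroutine on the current IOLoop, failing if it exceeds the timeout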
loop = IOLoop.current()
deadline = loop.time() + timeout
loop.run_sync(
lambda : gen.with_timeout(deadline, _test())
)
def test_list_users(app, io_loop, client):
@gen.coroutine
def _test():
users = yield client.list_users()
assert sorted(u['name'] for u in users) == ['admin', 'user']
_run(_test)
| 24.233333 | 78 | 0.696011 | 98 | 727 | 5.010204 | 0.520408 | 0.03666 | 0.03666 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003442 | 0.200825 | 727 | 29 | 79 | 25.068966 | 0.841652 | 0.099037 | 0 | 0 | 0 | 0 | 0.024615 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0 | null | null | 0 | 0.25 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
455a510ca3f33288b1e82aa99f9b204ab68b6807 | 3,972 | py | Python | get_player.py | jason-sa/baseball_lin_regression | 936535693f00b28d17b2b901144dcba8bce45ab9 | [
"MIT"
] | null | null | null | get_player.py | jason-sa/baseball_lin_regression | 936535693f00b28d17b2b901144dcba8bce45ab9 | [
"MIT"
] | null | null | null | get_player.py | jason-sa/baseball_lin_regression | 936535693f00b28d17b2b901144dcba8bce45ab9 | [
"MIT"
] | null | null | null | '''
Script to loop through all baseball-reference.com pages and store the HTML in data frames
'''
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import TimeoutException
import requests
import time
import os
from selenium.webdriver.common.by import By
import pickle
import re
def get_rookie_player_pages_html(rookie_player_pages, driver, stop=None):
index = 0
html_df = rookie_player_pages[rookie_player_pages.html.isnull()]
for l in html_df.link.values:
start = time.time()
driver.get(l) #Could add try and except to write to csv if error, so can restart from last write.
end = time.time()
print((end-start), l)
rookie_player_pages.loc[rookie_player_pages.link == l, 'html'] = driver.page_source
if index != 0 and index % 100 == 0:
print('Rows completed', rookie_player_pages[~rookie_player_pages.html.isnull()].shape[0])
rookie_player_pages.to_csv('data/rookie_player_pages.csv')
if index == stop:
break
index += 1
rookie_player_pages.to_csv('data/rookie_player_pages.csv')
return rookie_player_pages
def build_rookie_pages(start, end, driver):
rookie_pages = pd.DataFrame(columns=['year','link','html'])
rookie_player_pages = pd.DataFrame(columns=['year','name','link','html'])
#attempt to load from csv
try:
rookie_pages = pd.read_csv('data/rookie_pages.csv', index_col=0)
except FileNotFoundError:
pass
print(rookie_pages.shape)
try:
rookie_player_pages = pd.read_csv('data/rookie_player_pages.csv', index_col=0)
except FileNotFoundError:
pass
print(rookie_player_pages.shape)
for i in range(start, end+1):
links_list = []
names_list = []
#if year == i, then move onto link loop
if not (rookie_pages.year == i).any():
url = 'https://www.baseball-reference.com/leagues/MLB/'+str(i)+'-rookies.shtml'
start = time.time()
driver.get(url)
end = time.time()
print(end-start, i)
rookie_pages.loc[i] = [i, url, driver.page_source]
# scrape the rookie batters (includes pitchers if PA)
batting = driver.find_element_by_id('misc_batting') ## HTML tables
links = batting.find_elements_by_xpath('.//tbody/tr/td/a') ## player pages
# add these to the DF to save
links_list = [a.get_attribute('href') for a in links if re.search(r'players/.', a.get_attribute('href'))]
names_list = [a.text for a in links if re.search(r'players/.', a.get_attribute('href'))]
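# keep only anchors that point at individual player pages (hrefs containing "players/")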
if len(links_list) != 0: # add new data
year_l = [i] * len(links_list)
new_df = pd.DataFrame({'year': year_l, 'name': names_list, 'link': links_list})
rookie_player_pages = rookie_player_pages.append(new_df, sort=True)
rookie_pages.to_csv('data/rookie_pages.csv')
rookie_player_pages.to_csv('data/rookie_player_pages.csv')
return rookie_pages, rookie_player_pages
chromedriver = "chromedriver" # path to the chromedriver executable
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
while True:
try:
rookie_pages, rookie_player_pages = build_rookie_pages(1985, 2017, driver)
except TimeoutException:
pass
else:
break
tries = 0
while tries <= 2:
try:
rookie_player_pages = pd.read_csv('data/rookie_player_pages.csv',index_col=0)
print('Try:', tries)
print(rookie_player_pages.shape)
rookie_player_pages = get_rookie_player_pages_html(rookie_player_pages, driver, stop=6000)
except TimeoutException:
tries += 1
pass
else:
break
driver.close() | 34.241379 | 117 | 0.659366 | 542 | 3,972 | 4.625461 | 0.276753 | 0.131631 | 0.196649 | 0.043877 | 0.393299 | 0.305943 | 0.263662 | 0.263662 | 0.22856 | 0.22856 | 0 | 0.009552 | 0.23565 | 3,972 | 116 | 118 | 34.241379 | 0.816206 | 0.097432 | 0 | 0.344828 | 0 | 0 | 0.111547 | 0.057455 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022989 | false | 0.045977 | 0.149425 | 0 | 0.195402 | 0.08046 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45639632859da23d700860306c51ed0f2069eeb3 | 7,995 | py | Python | pandalog/cmd.py | bitpanda-labs/pandalog | 3c2df6bcc8afe20de6a1e9ef8faa560cd6ee03d1 | [
"BSD-3-Clause"
] | null | null | null | pandalog/cmd.py | bitpanda-labs/pandalog | 3c2df6bcc8afe20de6a1e9ef8faa560cd6ee03d1 | [
"BSD-3-Clause"
] | null | null | null | pandalog/cmd.py | bitpanda-labs/pandalog | 3c2df6bcc8afe20de6a1e9ef8faa560cd6ee03d1 | [
"BSD-3-Clause"
] | null | null | null | """Pandalog collection of entrypoint scripts
Scripts:
- pandalog
- pandalog-auth
To install the collection, use either one of the following options:
1) Build the Dockerfile and run the application
2) Run "python3 setup.py install"
The scripts are installed as executable binaries under /usr/local/bin.
It is always preferable to isolate the application execution in a
virtual environment or a container.
"""
import click
import logging
from pandalog.client import GraylogAPIClient
@click.group()
@click.version_option()
def auth_entrypoint():
"""Pandalog Auth - Retrieve STS tokens for Pandalog user
\b
Example usage:
\b
$ pandalog-auth get-sts-token -h $HOST -u $USER
$ pandalog-auth get-sts-token -h $HOST -u $USER -p ${PASS}
"""
pass
@auth_entrypoint.command()
@click.option("-h", "--host",
required=True,
envvar="GRAYLOG_HOST",
type=str,
help="graylog host")
@click.option("-u", "--user",
required=True,
envvar="GRAYLOG_USER",
type=str,
help="graylog user")
@click.option("-p", "--password",
prompt=True,
hide_input=True,
type=str,
envvar="GRAYLOG_PASS",
help="graylog password")
def get_sts_token(host: str,
user: str,
password: str):
"""get/issue a temporary session token"""
# initialize graylog API client
client = GraylogAPIClient(host)
# retrieve, dump STS token
print(client.get_sts_token(user, password))
@click.group()
@click.version_option()
def entrypoint():
"""Pandalog - Bitpanda Graylog Python Wrapper
\b
Example Usage:
\b
$ GRAYLOG_HOST=logs.staging.bitpanda
$ GRAYLOG_TOKEN=$(pandalog-auth get-sts-token -u ${USER} -p ${PASS})
$ pandalog get-teams
$ pandalog get-streams
$ pandalog to-stream --all "All Pandas,developer"
$ pandalog from-stream --streams "API,ledger" "staging-developer"
"""
pass
@entrypoint.command()
@click.option("-h", "--host",
required=True,
envvar="GRAYLOG_HOST",
type=str,
help="graylog host")
@click.option("-t", "--token",
required=True,
envvar="GRAYLOG_TOKEN",
type=str,
help="graylog API token")
def get_teams(host: str,
token: str):
"""list teams"""
# initialize graylog API client
client = GraylogAPIClient(host)
# retrieve list of all teams
teams = client.get_teams(token)
# print stdout header
print("ID\t\t\t\t\tNAME")
# sort teams in alphanumerical order
ordered = sorted(teams, key=lambda x: x["name"])
# loop over sorted list
for team in ordered:
# print team ID and name
print("{}\t\t{}".format(team.get("id"),
team.get("name")))
@entrypoint.command()
@click.option("-h", "--host",
required=True,
envvar="GRAYLOG_HOST",
type=str,
help="graylog host")
@click.option("-t", "--token",
required=True,
envvar="GRAYLOG_TOKEN",
type=str,
help="graylog API token")
def get_streams(host: str,
token: str):
"""list streams"""
# initialize graylog API client
client = GraylogAPIClient(host)
# retrieve list of all streams
streams = client.get_streams(token)
# print stdout header
print("ID\t\t\t\t\tTITLE")
# sort streams in alphanumerical order
ordered = sorted(streams, key=lambda x: x["title"])
# loop over sorted list
for stream in ordered:
# print stream ID and title
print("{}\t\t{}".format(stream.get("id"),
stream.get("title")))
@entrypoint.command()
@click.option("-h", "--host",
required=True,
envvar="GRAYLOG_HOST",
type=str,
help="graylog host")
@click.option("-t", "--token",
required=True,
envvar="GRAYLOG_TOKEN",
type=str,
help="graylog API token")
@click.option("-a", "--all",
is_flag=True,
type=bool,
help="all streams")
@click.option("-s", "--stream-names",
required=False,
type=str,
help="comma-separated list of streams")
@click.argument("team-names", nargs=-1)
def to_stream(host: str,
token: str,
all: bool,
stream_names: str,
team_names: list):
"""share stream(s) with team(s)"""
# initialize graylog API client
client = GraylogAPIClient(host)
# initialize empty list of teams
teams = []
# loop over team names
for team in team_names:
# retrieve team based on the name and append to the list
teams.append(client.get_team(team, token))
# initialize empty list of streams
streams = []
# if --all flag is set
if all:
# gather list of all streams
streams = client.get_streams(token)
# else (i.e., if --stream-names is defined)
else:
# if streams were provided
if stream_names is not None:
split_streams = [s.strip() for s in stream_names.split(",")]
# for each specified stream
for stream in split_streams:
# append stream to list
streams.append(client.get_stream(stream, token))
# if --all flag is not set and no stream was provided
else:
# print error and exit
raise SystemExit("please provide streams or set the --all flag")
# loop over streams
for stream in streams:
# add view permissions to specified teams to current stream
client.to_stream(stream.get("id"), "view", teams, token)
@entrypoint.command()
@click.option("-h", "--host",
required=True,
envvar="GRAYLOG_HOST",
type=str,
help="graylog host")
@click.option("-t", "--token",
required=True,
envvar="GRAYLOG_TOKEN",
type=str,
help="graylog API token")
@click.option("-a", "--all",
is_flag=True,
type=bool,
help="all streams")
@click.option("-s", "--stream-names",
required=False,
type=str,
help="comma-separated list of streams")
@click.argument("team-names", nargs=-1)
def from_stream(host: str,
token: str,
all: bool,
stream_names: str,
team_names: list):
"""unshare stream(s) with team(s)"""
# initialize graylog API client
client = GraylogAPIClient(host)
# initialize empty list of teams
teams = []
# loop over team names
for team in team_names:
# retrieve team based on the name and append to the list
teams.append(client.get_team(team, token))
# initialize empty list of streams
streams = []
# if --all flag is set
if all:
# gather list of all streams
streams = client.get_streams(token)
# else (i.e., if --stream-names is defined)
else:
# if streams were provided
if stream_names is not None:
split_streams = [s.strip() for s in stream_names.split(",")]
# for each specified stream
for stream in split_streams:
# append stream to list
streams.append(client.get_stream(stream, token))
# if --all flag is not set and no stream was provided
else:
# print error and exit
raise SystemExit("please provide streams or set the --all flag")
# loop over streams
for stream in streams:
# remove view permissions from specified teams from current stream
client.from_stream(stream.get("id"), "view", teams, token)
| 30.515267 | 76 | 0.572358 | 949 | 7,995 | 4.767123 | 0.174921 | 0.036472 | 0.029178 | 0.055261 | 0.697171 | 0.659372 | 0.645668 | 0.631963 | 0.6187 | 0.59748 | 0 | 0.000912 | 0.313946 | 7,995 | 261 | 77 | 30.632184 | 0.823883 | 0.298061 | 0 | 0.774194 | 0 | 0 | 0.130291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045161 | false | 0.045161 | 0.019355 | 0 | 0.064516 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45650bfff480eaae33caf9f4aae1e7c37cec59e0 | 507 | py | Python | osf_oauth2_adapter/apps.py | enrobyn/lookit-api | 621fbb8b25100a21fd94721d39003b5d4f651dc5 | [
"MIT"
] | null | null | null | osf_oauth2_adapter/apps.py | enrobyn/lookit-api | 621fbb8b25100a21fd94721d39003b5d4f651dc5 | [
"MIT"
] | null | null | null | osf_oauth2_adapter/apps.py | enrobyn/lookit-api | 621fbb8b25100a21fd94721d39003b5d4f651dc5 | [
"MIT"
] | null | null | null | import os
from django.apps import AppConfig
class OsfOauth2AdapterConfig(AppConfig):
name = 'osf_oauth2_adapter'
# staging by default so people don't have to run OSF to use this.
osf_api_url = os.environ.get('OSF_API_URL', 'https://staging-api.osf.io').rstrip('/') + '/'
osf_accounts_url = os.environ.get('OSF_ACCOUNTS_URL', 'https://staging-accounts.osf.io').rstrip('/') + '/'
default_scopes = ['osf.users.email_read', 'osf.users.profile_read', ]
humans_group_name = 'OSF_USERS'
| 39 | 110 | 0.704142 | 73 | 507 | 4.671233 | 0.534247 | 0.070381 | 0.052786 | 0.087977 | 0.105572 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004598 | 0.142012 | 507 | 12 | 111 | 42.25 | 0.77931 | 0.12426 | 0 | 0 | 0 | 0 | 0.355204 | 0.049774 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
457699cdcae4bafe41114c01a0f125a7668be5fb | 5,823 | py | Python | tests/processors/test_processors.py | foxis/EasyVision | ffb2ce1a93fdace39c6bcc13c8ae518cec76919e | [
"MIT",
"BSD-3-Clause"
] | 7 | 2018-12-27T07:45:31.000Z | 2021-06-17T03:49:15.000Z | tests/processors/test_processors.py | itohio/EasyVision | de6a4bb9160cd08278ae9c5738497132a4cd3202 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | tests/processors/test_processors.py | itohio/EasyVision | de6a4bb9160cd08278ae9c5738497132a4cd3202 | [
"MIT",
"BSD-3-Clause"
] | 3 | 2019-08-21T03:36:56.000Z | 2021-10-08T16:12:53.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import pytest
from pytest import raises, approx
from EasyVision.vision.base import *
from EasyVision.processors.base import *
from tests.common import VisionSubclass, ProcessorA, ProcessorB
@pytest.mark.main
def test_abstract():
with raises(TypeError):
ProcessorBase()
@pytest.mark.main
def test_implementation():
source = VisionSubclass()
pr = ProcessorA(source)
assert(pr.source is source)
@pytest.mark.main
def test_capture():
vision = VisionSubclass(0)
with ProcessorA(vision) as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is processor)
assert(img.images[0].image == "AN IMAGE")
@pytest.mark.main
def test_capture_disabled():
vision = VisionSubclass(0)
with ProcessorA(vision, enabled=False) as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is vision)
assert(img.images[0].image == "an image")
@pytest.mark.main
def test_capture_append():
vision = VisionSubclass(0)
with ProcessorA(vision, append=True) as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is vision)
assert(img.images[0].image == "an image")
assert(img.images[1].source is processor)
assert(img.images[1].image == "AN IMAGE")
@pytest.mark.main
def test_capture_mask_images():
vision = VisionSubclass(0, num_images=2, processor_mask="10")
with ProcessorA(vision) as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is processor)
assert(img.images[0].image == "AN IMAGE")
assert(img.images[1].source is vision)
assert(img.images[1].image == "an image1")
@pytest.mark.main
def test_capture_mask_processor():
vision = VisionSubclass(0, num_images=2)
with ProcessorA(vision, processor_mask="01") as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is vision)
assert(img.images[0].image == "an image")
assert(img.images[1].source is processor)
assert(img.images[1].image == "AN IMAGE1")
@pytest.mark.main
def test_capture_mask_processor_override():
vision = VisionSubclass(0, num_images=2, processor_mask="10")
with ProcessorA(vision, processor_mask="01") as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is vision)
assert(img.images[0].image == "an image")
assert(img.images[1].source is processor)
assert(img.images[1].image == "AN IMAGE1")
@pytest.mark.main
def test_capture_mask_processor_override_append():
vision = VisionSubclass(0, num_images=2, processor_mask="10")
with ProcessorA(vision, append=True, processor_mask="01") as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is vision)
assert(img.images[0].image == "an image")
assert(img.images[1].source is vision)
assert(img.images[1].image == "an image1")
assert(img.images[2].source is processor)
assert(img.images[2].image == "AN IMAGE1")
@pytest.mark.main
def test_capture_incorrect():
vision = VisionSubclass(0)
processor = ProcessorA(vision)
with raises(AssertionError):
processor.capture()
@pytest.mark.main
def test_capture_stacked_incorrect():
vision = VisionSubclass("Test")
processorA = ProcessorA(vision)
processorB = ProcessorB(processorA)
assert(processorB.name == "ProcessorB <- ProcessorA <- Test")
with raises(AssertionError):
processorB.capture()
@pytest.mark.main
def test_capture_stacked():
vision = VisionSubclass("Test")
processorA = ProcessorA(vision)
processorB = ProcessorB(processorA)
assert(processorB.name == "ProcessorB <- ProcessorA <- Test")
with processorB as processor:
img = processor.capture()
assert(isinstance(img, Frame))
assert(img.images[0].source is processorB)
assert(img.images[0].image == "An Image")
assert(processorB.get_source('VisionSubclass') is vision)
assert(processorB.get_source('ProcessorA') is processorA)
assert(processorB.get_source('ProcessorB') is processorB)
assert(processorB.get_source('Test no') is None)
@pytest.mark.main
def test_method_resolution():
vision = VisionSubclass("Test")
processorA = ProcessorA(vision)
processorB = ProcessorB(processorA)
assert(processorB.name == "ProcessorB <- ProcessorA <- Test")
assert(not vision.camera_called)
assert(processorB.camera_())
assert(processorB._camera_called)
assert(vision._camera_called)
@pytest.mark.main
def test_processor_properties():
vision = VisionSubclass("Test")
processorA = ProcessorA(vision)
processorB = ProcessorB(processorA)
with processorB as s:
assert(s.autoexposure is None)
assert(s.autofocus is None)
assert(s.autowhitebalance is None)
assert(s.autogain is None)
assert(s.exposure is None)
assert(s.focus is None)
assert(s.whitebalance is None)
s.autoexposure = 1
s.autofocus = 2
s.autowhitebalance = 3
s.autogain = 4
s.exposure = 5
s.focus = 6
s.whitebalance = 7
assert(vision.autoexposure == 1)
assert(vision.autofocus == 2)
assert(vision.autowhitebalance == 3)
assert(vision.autogain == 4)
assert(vision.exposure == 5)
assert(vision.focus == 6)
assert(vision.whitebalance == 7)
| 29.558376 | 75 | 0.666667 | 699 | 5,823 | 5.477825 | 0.118741 | 0.065814 | 0.109689 | 0.066858 | 0.678506 | 0.651345 | 0.600679 | 0.599634 | 0.568817 | 0.528598 | 0 | 0.01569 | 0.211918 | 5,823 | 196 | 76 | 29.709184 | 0.818697 | 0.007213 | 0 | 0.547297 | 0 | 0 | 0.048797 | 0 | 0 | 0 | 0 | 0 | 0.432432 | 1 | 0.094595 | false | 0 | 0.033784 | 0 | 0.128378 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
457a7a556af7a4b160da5ec709717168732be108 | 559 | py | Python | python/udp_socket_emit.py | draconicfae/godot_daydream_controller | 81b6b78583264c9e5d383fe7691a6836deaea7a7 | [
"MIT"
] | null | null | null | python/udp_socket_emit.py | draconicfae/godot_daydream_controller | 81b6b78583264c9e5d383fe7691a6836deaea7a7 | [
"MIT"
] | null | null | null | python/udp_socket_emit.py | draconicfae/godot_daydream_controller | 81b6b78583264c9e5d383fe7691a6836deaea7a7 | [
"MIT"
] | null | null | null | import socket
import json
class udp_emit:
def __init__(self, host, port):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sock.connect((host, port))
def emit(self, datadict):
try:
self.sock.sendall(json.dumps(datadict).encode())
except:
pass
#uncomment if you want to be notified
#print("cannot send data over udp socket, the destination is either not listening yet or is refusing to connect. Check to see if it's running yet.")
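# Minimal usage sketch -- the host, port, and payload below are assumptions; the
# receiving side (e.g. the Godot listener) must be bound to the same UDP port.
if __name__ == "__main__":
    emitter = udp_emit("127.0.0.1", 4242)
    emitter.emit({"orientation": [0.0, 0.0, 0.0, 1.0], "touchpad": [0.5, 0.5]})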
| 34.9375 | 161 | 0.617174 | 75 | 559 | 4.506667 | 0.653333 | 0.071006 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.298748 | 559 | 15 | 162 | 37.266667 | 0.862245 | 0.329159 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.090909 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
457b9730f9e55cce329d265159d8b2efc07ce3bc | 133 | py | Python | recuEuclid.py | azyxb/info | d34555abe55895751272e0ad129c7fb79f9613b0 | [
"MIT"
] | 2 | 2019-12-14T10:54:38.000Z | 2020-03-30T22:57:11.000Z | recuEuclid.py | azyxb/info | d34555abe55895751272e0ad129c7fb79f9613b0 | [
"MIT"
] | null | null | null | recuEuclid.py | azyxb/info | d34555abe55895751272e0ad129c7fb79f9613b0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
def gcd(a, b):
    if a == 0:
return b
return gcd(b%a, a)
a = 12000
b = 8642
print(gcd(a, b))
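# Expected output: 2 -- the recursion unwinds as
# gcd(12000, 8642) -> gcd(8642, 12000) -> gcd(3358, 8642) -> ... -> gcd(2, 6) -> gcd(0, 2) = 2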
| 12.090909 | 23 | 0.511278 | 26 | 133 | 2.615385 | 0.538462 | 0.117647 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120879 | 0.315789 | 133 | 10 | 24 | 13.3 | 0.626374 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.428571 | 0.142857 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
458360f058f7cad1d028a83c610e0b3f7525ff76 | 1,246 | py | Python | easycron/easycron/plist.py | skeptycal/.dotfiles | ef6d4e47f77a12024587ed24a0c3048fe5b60ed1 | [
"MIT"
] | 5 | 2019-10-03T21:25:42.000Z | 2022-03-30T16:14:20.000Z | easycron/easycron/plist.py | skeptycal/.dotfiles | ef6d4e47f77a12024587ed24a0c3048fe5b60ed1 | [
"MIT"
] | 6 | 2019-07-11T00:23:08.000Z | 2020-12-15T06:21:19.000Z | easycron/easycron/plist.py | skeptycal/.dotfiles | ef6d4e47f77a12024587ed24a0c3048fe5b60ed1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import datetime
import plistlib
import tempfile
import time
from os import PathLike
from typing import Dict, Union
pl = dict(
aString="Doodah",
aList=["A", "B", 12, 32.1, [1, 2, 3]],
aFloat=0.1,
anInt=728,
aDict=dict(
anotherString="<hello & hi there!>",
aThirdString="M\xe4ssig, Ma\xdf",
aTrueValue=True,
aFalseValue=False,
),
someData=b"<binary gunk>",
someMoreData=b"<lots of binary gunk>" * 10,
aDate=datetime.datetime.fromtimestamp(time.mktime(time.gmtime())),
)
test_file_name: PathLike = 'some.random.plist'
def write_plist(fileName: PathLike) -> bool:
try:
with open(fileName, 'wb') as fp:
plistlib.dump(pl, fp)
return False
except:
return True
def read_plist(fileName: PathLike) -> Union[Dict, None]:
try:
with open(fileName, 'rb') as fp:
return plistlib.load(fp)
except:
return None
if __name__ == '__main__':
write_plist(fileName=test_file_name)
data = read_plist(fileName=test_file_name)
    # with tempfile.NamedTemporaryFile() as output_file:
    #     plistlib.dump(pl, output_file)
    #     output_file.seek(0)
    #     print(output_file.read())
| 23.074074 | 70 | 0.635634 | 159 | 1,246 | 4.842767 | 0.540881 | 0.067532 | 0.046753 | 0.049351 | 0.064935 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018908 | 0.235955 | 1,246 | 53 | 71 | 23.509434 | 0.789916 | 0.126003 | 0 | 0.102564 | 0 | 0 | 0.098708 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.153846 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
458445a4da8ce1297713dbab63280e457ebba263 | 414 | py | Python | symposion/schedule/migrations/0003_remove_presentation_additional_speakers.py | pyohio/symposion | f8ec9c7e7daab4658061867d1294c1c126dd2919 | [
"BSD-3-Clause"
] | null | null | null | symposion/schedule/migrations/0003_remove_presentation_additional_speakers.py | pyohio/symposion | f8ec9c7e7daab4658061867d1294c1c126dd2919 | [
"BSD-3-Clause"
] | 5 | 2015-07-16T19:46:00.000Z | 2018-03-11T05:58:48.000Z | symposion/schedule/migrations/0003_remove_presentation_additional_speakers.py | pyohio/symposion | f8ec9c7e7daab4658061867d1294c1c126dd2919 | [
"BSD-3-Clause"
] | 1 | 2017-01-27T21:18:26.000Z | 2017-01-27T21:18:26.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.9 on 2018-06-23 06:06
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('symposion_schedule', '0002_slot_name'),
]
operations = [
migrations.RemoveField(
model_name='presentation',
name='additional_speakers',
),
]
| 20.7 | 49 | 0.63285 | 44 | 414 | 5.727273 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0.256039 | 414 | 19 | 50 | 21.789474 | 0.75 | 0.164251 | 0 | 0 | 1 | 0 | 0.183673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
458576ab2c7f8650e836abcb0b08f9c1ccc55b11 | 425 | py | Python | setup.py | 3kwa/datoms | 108733769a940b799482270198aa500873aee01c | [
"Unlicense"
] | 4 | 2017-10-27T19:11:23.000Z | 2021-02-27T08:18:39.000Z | setup.py | 3kwa/datoms | 108733769a940b799482270198aa500873aee01c | [
"Unlicense"
] | null | null | null | setup.py | 3kwa/datoms | 108733769a940b799482270198aa500873aee01c | [
"Unlicense"
] | 1 | 2016-09-02T12:29:45.000Z | 2016-09-02T12:29:45.000Z | from setuptools import setup
setup(
name = 'datoms',
version = '0.1.0',
description = 'A simplistic, Datomic inspired, SQLite backed, REST influenced, schemaless auditable facts storage.',
py_modules = ['datoms'],
license = 'unlicense',
author = 'Eugene Van den Bulke',
author_email = 'eugene.vandenbulke@gmail.com',
url = 'https://github.com/3kwa/datoms',
install_requires = ['sql'],
)
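# Standard setuptools workflow (sketch): from the project root run
#   pip install .          # or `pip install -e .` for a development install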
| 28.333333 | 120 | 0.663529 | 49 | 425 | 5.693878 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0.2 | 425 | 14 | 121 | 30.357143 | 0.808824 | 0 | 0 | 0 | 0 | 0 | 0.484706 | 0.065882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45857d1509e13e72b89d409aa8c1b7c1595621d3 | 2,728 | py | Python | cocoa_folder/scripts/bot_bot_chat.py | s-akanksha/DialoGraph_ICLR21 | d5bbc10b2623c9f84d21a99a5e54e7dcfdfb1bcc | [
"Apache-2.0"
] | 12 | 2021-03-17T05:15:33.000Z | 2022-01-19T06:09:21.000Z | cocoa_folder/scripts/bot_bot_chat.py | s-akanksha/DialoGraph_ICLR21 | d5bbc10b2623c9f84d21a99a5e54e7dcfdfb1bcc | [
"Apache-2.0"
] | 2 | 2021-05-25T07:28:46.000Z | 2022-02-11T01:54:43.000Z | cocoa_folder/scripts/bot_bot_chat.py | s-akanksha/DialoGraph_ICLR21 | d5bbc10b2623c9f84d21a99a5e54e7dcfdfb1bcc | [
"Apache-2.0"
] | 4 | 2021-10-11T03:39:38.000Z | 2022-02-01T23:58:50.000Z | '''
Takes two agent implementations and generates the dialogues.
'''
import argparse
import random
import json
import numpy as np
from cocoa.core.util import read_json
from cocoa.core.schema import Schema
from cocoa.core.scenario_db import ScenarioDB, add_scenario_arguments
from cocoa.core.dataset import add_dataset_arguments
from core.scenario import Scenario
from core.controller import Controller
from systems import add_system_arguments, get_system
def generate_examples(agents, agent_names, scenarios, num_examples, max_turns):
examples = []
for i in range(num_examples):
scenario = scenarios[i % len(scenarios)]
# Each agent needs to play both buyer and seller
for j in (0, 1):
new_agents = [agents[j], agents[1-j]]
new_agent_names = [agent_names[j], agent_names[1-j]]
sessions = [new_agents[0].new_session(0, scenario.kbs[0]),
new_agents[1].new_session(1, scenario.kbs[1])]
controller = Controller(scenario, sessions, session_names=new_agent_names)
ex = controller.simulate(max_turns, verbose=args.verbose)
examples.append(ex)
return examples
if __name__ == '__main__':
parser = argparse.ArgumentParser(conflict_handler='resolve')
parser.add_argument('--agent', nargs=3, metavar=('type', 'checkpoint', 'name'), action='append', help='Agent parameters')
parser.add_argument('--max-turns', default=20, type=int, help='Maximum number of turns')
parser.add_argument('--num-examples', type=int)
parser.add_argument('--examples-path')
parser.add_argument('-v', '--verbose', default=False, action='store_true', help='whether or not to have verbose prints')
add_scenario_arguments(parser)
add_system_arguments(parser)
args = parser.parse_args()
schema = Schema(args.schema_path)
scenario_db = ScenarioDB.from_dict(schema, read_json(args.scenarios_path), Scenario)
agents = {}
for agent_params in args.agent:
agent_type, model_path, agent_name = agent_params
agents[agent_name] = get_system(agent_type, args, schema, model_path=model_path)
scenarios = scenario_db.scenarios_list
examples = []
for base_agent_name in ('sl-words',):
base_agent = agents[base_agent_name]
for agent_name, agent in agents.iteritems():
if agent_name != base_agent_name:
                # use distinct names so the `agents` registry is not clobbered mid-iteration
                pair_agents = [base_agent, agent]
                pair_agent_names = [base_agent_name, agent_name]
                examples.extend(generate_examples(pair_agents, pair_agent_names, scenarios, args.num_examples, args.max_turns))
with open(args.examples_path, 'w') as out:
print >>out, json.dumps([e.to_dict() for e in examples])
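# Hypothetical invocation (the schema/scenario flag names come from the cocoa
# add_*_arguments helpers; <type> and <checkpoint> are placeholders for values
# accepted by systems.get_system). The hard-coded base agent must be named 'sl-words':
#   python bot_bot_chat.py --schema-path schema.json --scenarios-path scenarios.json \
#       --num-examples 10 --examples-path bot_chats.json \
#       --agent <type> <checkpoint1> sl-words --agent <type> <checkpoint2> rl-words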
| 41.333333 | 125 | 0.69868 | 363 | 2,728 | 5.019284 | 0.316804 | 0.044457 | 0.046652 | 0.029638 | 0.045005 | 0.045005 | 0 | 0 | 0 | 0 | 0 | 0.005912 | 0.193915 | 2,728 | 65 | 126 | 41.969231 | 0.822647 | 0.039589 | 0 | 0.039216 | 1 | 0 | 0.073535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.215686 | 0 | 0.254902 | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
458a562e22dcc2a1d067069337d7d494ddd4941f | 3,277 | py | Python | Views/Affichage/transitionView.py | yvesjordan06/automata-brains | 1c34dd9315fcee7ce1807a2b94a0ec48421d03b1 | [
"MIT"
] | 3 | 2020-01-31T15:54:48.000Z | 2020-02-01T10:01:35.000Z | Views/Affichage/transitionView.py | Tcomputer5/automata-brains | 6c2a7714d1fcb16763084a33a2f0f1364d4f8eb8 | [
"MIT"
] | null | null | null | Views/Affichage/transitionView.py | Tcomputer5/automata-brains | 6c2a7714d1fcb16763084a33a2f0f1364d4f8eb8 | [
"MIT"
] | 2 | 2020-02-01T09:59:51.000Z | 2020-02-01T10:02:12.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'UI/transitionView.ui'
#
# Created by: PyQt5 UI code generator 5.14.1
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtGui import QIcon
from Models.Automate import Automate
class Ui_Form(object):
def __init__(self, automate:Automate):
self.automate = automate
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(369, 279)
self.horizontalLayout = QtWidgets.QHBoxLayout(Form)
self.horizontalLayout.setObjectName("horizontalLayout")
self.tableWidget = QtWidgets.QTableWidget()
self.tableWidget.setObjectName("tableWidget")
self.horizontalLayout.addWidget(self.tableWidget)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
# Initial Set State
self.action_set_state()
self.automate.automate_modifier.connect(self.action_set_state)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
#self.groupBox.setTitle(_translate("Form", "Transition"))
__sortingEnabled = self.tableWidget.isSortingEnabled()
self.tableWidget.setSortingEnabled(False)
self.tableWidget.setSortingEnabled(__sortingEnabled)
def action_set_state(self):
self.tableWidget.clear()
etat_list = list(self.automate.etats)
alphabet = [symbole for symbole in self.automate.alphabet.list]
self.tableWidget.setColumnCount(len(self.automate.alphabet.list))
self.tableWidget.setRowCount(len(self.automate.etats))
row = 0
for etat in etat_list :
label = str(etat)
item = QtWidgets.QTableWidgetItem(label)
initial = False
final = False
if etat == self.automate.etat_initial:
initial = True
if etat in self.automate.etats_finaux:
final = True
if initial:
item = QtWidgets.QTableWidgetItem(QIcon("icons/initial.png"), label)
if final:
item = QtWidgets.QTableWidgetItem(QIcon("icons/final.png"), label)
if initial and final:
item = QtWidgets.QTableWidgetItem(QIcon("icons/icon.png"), label)
self.tableWidget.setVerticalHeaderItem(row, item)
row += 1
self.tableWidget.setHorizontalHeaderLabels(alphabet)
for t in self.automate.transitions:
print(t.est_epsilon())
if (t.est_epsilon()):
continue
alphabet_index = alphabet.index(t.symbole)
etat_index = etat_list.index(t.depart)
valeur_actu = self.tableWidget.item(etat_index,alphabet_index)
valeur_text = ' , ' + valeur_actu.text() if valeur_actu else ''
self.tableWidget.setItem(etat_index, alphabet_index, QtWidgets.QTableWidgetItem(f"{t.arrive}{valeur_text}"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
| 32.77 | 120 | 0.64968 | 348 | 3,277 | 5.982759 | 0.344828 | 0.09366 | 0.055716 | 0.048991 | 0.098463 | 0.079731 | 0 | 0 | 0 | 0 | 0 | 0.006523 | 0.25145 | 3,277 | 99 | 121 | 33.10101 | 0.842234 | 0.080867 | 0 | 0 | 1 | 0 | 0.039627 | 0.007659 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.060606 | 0 | 0.136364 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
458b63ce9f8114bff2c526efb8cbbe3355958823 | 380 | py | Python | gellifinsta/migrations/0004_rename_local_fname_gellifinsta_file_path.py | vallka/djellifique | fb84fba6be413f9d38276d89ae84aeaff761218f | [
"MIT"
] | null | null | null | gellifinsta/migrations/0004_rename_local_fname_gellifinsta_file_path.py | vallka/djellifique | fb84fba6be413f9d38276d89ae84aeaff761218f | [
"MIT"
] | null | null | null | gellifinsta/migrations/0004_rename_local_fname_gellifinsta_file_path.py | vallka/djellifique | fb84fba6be413f9d38276d89ae84aeaff761218f | [
"MIT"
] | null | null | null | # Generated by Django 3.2.4 on 2021-06-23 14:52
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('gellifinsta', '0003_auto_20210622_1928'),
]
operations = [
migrations.RenameField(
model_name='gellifinsta',
old_name='local_fname',
new_name='file_path',
),
]
| 20 | 51 | 0.602632 | 41 | 380 | 5.390244 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114815 | 0.289474 | 380 | 18 | 52 | 21.111111 | 0.703704 | 0.118421 | 0 | 0 | 1 | 0 | 0.195195 | 0.069069 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
458fd8bb6dadc2e31622c1207043cbd32ef38957 | 2,039 | py | Python | utils/visualMap.py | TwelveYC/network-vi | d7a36c21a09eb86276e316193405267e3a9cc78d | [
"MIT"
] | 14 | 2020-05-13T10:04:02.000Z | 2020-12-27T05:42:05.000Z | utils/visualMap.py | TwelveYC/network-vi | d7a36c21a09eb86276e316193405267e3a9cc78d | [
"MIT"
] | null | null | null | utils/visualMap.py | TwelveYC/network-vi | d7a36c21a09eb86276e316193405267e3a9cc78d | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
class MapColorControl():
def __init__(self, colour_scheme, map_normalization,data):
self.colors = plt.get_cmap(colour_scheme)(range(256))[:,:3]
self.data = data
if self.data.min() <= 0:
self.data = self.data + abs(self.data.min()) + 1
if map_normalization == "Linear":
self.normNorm = colors.Normalize(vmin=self.data.min(),vmax=self.data.max())
elif map_normalization == "Logarithmic":
self.normNorm = colors.LogNorm(vmin=self.data.min(),vmax=self.data.max())
elif map_normalization == "Power-law":
self.normNorm = colors.PowerNorm(gamma=2,vmin=self.data.min(),vmax=self.data.max())
def get_map_data(self):
datum = np.round(self.normNorm(self.data) * 255)
return self.map(datum)
def map(self,infos):
datum = []
for index in infos:
datum.append(colors.rgb2hex(self.colors[int(index)]))
return datum
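# Usage sketch for MapColorControl (hypothetical data): values are normalised with the
# chosen scheme, then mapped onto a 256-entry matplotlib colormap and returned as hex strings.
#   ctrl = MapColorControl("viridis", "Logarithmic", np.array([1.0, 10.0, 100.0]))
#   hex_colours = ctrl.get_map_data()   # e.g. roughly ['#440154', '#21918c', '#fde725'] for viridis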
class MapControl():
def __init__(self,max_value,min_value,map_normalization,data):
self.data = data
if self.data.min() <=0:
self.data = self.data + abs(self.data.min()) +1
if map_normalization == "Linear":
self.normNorm = colors.Normalize(vmin=self.data.min(),vmax=self.data.max())
elif map_normalization == "Logarithmic":
self.normNorm = colors.LogNorm(vmin=self.data.min(),vmax=self.data.max())
elif map_normalization == "Power-law":
self.normNorm = colors.PowerNorm(gamma=2,vmin=self.data.min(),vmax=self.data.max())
self.maxValue = max_value
self.minValue = min_value
def get_map_data(self,is_round):
if is_round:
datum = np.round(self.normNorm(self.data) * (self.maxValue-self.minValue) + self.minValue,5)
else:
datum = np.round(self.normNorm(self.data) * (self.maxValue - self.minValue) + self.minValue)
return list(datum) | 41.612245 | 104 | 0.62972 | 267 | 2,039 | 4.700375 | 0.228464 | 0.159363 | 0.087649 | 0.071713 | 0.662948 | 0.635857 | 0.635857 | 0.610359 | 0.610359 | 0.610359 | 0 | 0.009554 | 0.230015 | 2,039 | 49 | 105 | 41.612245 | 0.789809 | 0 | 0 | 0.428571 | 0 | 0 | 0.02549 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119048 | false | 0 | 0.071429 | 0 | 0.309524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
459906e9c61347117a3af63208abe4d1417e1da0 | 541 | py | Python | environments/migrations/0003_auto_20201228_1616.py | Teosidonio/Data_Solution | 94a898b7939b3bdb0fe92df97aa833c1fc7394a3 | [
"MIT"
] | null | null | null | environments/migrations/0003_auto_20201228_1616.py | Teosidonio/Data_Solution | 94a898b7939b3bdb0fe92df97aa833c1fc7394a3 | [
"MIT"
] | null | null | null | environments/migrations/0003_auto_20201228_1616.py | Teosidonio/Data_Solution | 94a898b7939b3bdb0fe92df97aa833c1fc7394a3 | [
"MIT"
] | null | null | null | # Generated by Django 3.1 on 2020-12-28 14:16
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('environments', '0002_auto_20201228_1548'),
]
operations = [
migrations.AlterModelOptions(
name='environments',
options={'ordering': ['date_created', 'name', 'env_stage', 'platform']},
),
migrations.RenameField(
model_name='environments',
old_name='stage',
new_name='env_stage',
),
]
| 23.521739 | 84 | 0.582255 | 51 | 541 | 6 | 0.72549 | 0.104575 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078329 | 0.292052 | 541 | 22 | 85 | 24.590909 | 0.720627 | 0.079482 | 0 | 0.125 | 1 | 0 | 0.229839 | 0.046371 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
459a1e6b7e0fdf087cd3f288de2eb4f069687538 | 722 | py | Python | code_all/day17/homework/exercise02.py | testcg/python | 4db4bd5d0e44af807d2df80cf8c8980b40cc03c4 | [
"MIT"
] | null | null | null | code_all/day17/homework/exercise02.py | testcg/python | 4db4bd5d0e44af807d2df80cf8c8980b40cc03c4 | [
"MIT"
] | null | null | null | code_all/day17/homework/exercise02.py | testcg/python | 4db4bd5d0e44af807d2df80cf8c8980b40cc03c4 | [
"MIT"
] | null | null | null | """
迭代器 --> yield
"""
class CommodityController:
def __init__(self):
self.__commoditys = []
def add_commodity(self, cmd):
self.__commoditys.append(cmd)
def __iter__(self):
index = 0
yield self.__commoditys[index]
index += 1
yield self.__commoditys[index]
index += 1
yield self.__commoditys[index]
controller = CommodityController()
controller.add_commodity("屠龙刀")
controller.add_commodity("倚天剑")
controller.add_commodity("芭比娃娃")
for item in controller:
print(item)
# iterator = controller.__iter__()
# while True:
# try:
# item = iterator.__next__()
# print(item)
# except StopIteration:
# break
| 19 | 38 | 0.621884 | 73 | 722 | 5.739726 | 0.438356 | 0.167064 | 0.136038 | 0.171838 | 0.200477 | 0.200477 | 0.200477 | 0.200477 | 0.200477 | 0.200477 | 0 | 0.005639 | 0.263158 | 722 | 37 | 39 | 19.513514 | 0.781955 | 0.225762 | 0 | 0.277778 | 0 | 0 | 0.018484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.222222 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45a1215c7fb379ec9d3e7cd1d0925e710f781704 | 1,117 | py | Python | source/server/annotation/migrations/0002_auto_20200622_0656.py | shizacat/shanno | e370dc1bf8a884d8ee5538b702b39275751e5f5d | [
"MIT"
] | 1 | 2020-08-27T12:48:47.000Z | 2020-08-27T12:48:47.000Z | source/server/annotation/migrations/0002_auto_20200622_0656.py | shizacat/shanno | e370dc1bf8a884d8ee5538b702b39275751e5f5d | [
"MIT"
] | 5 | 2021-03-30T12:56:24.000Z | 2021-06-27T17:42:28.000Z | source/server/annotation/migrations/0002_auto_20200622_0656.py | shizacat/shanno | e370dc1bf8a884d8ee5538b702b39275751e5f5d | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-06-22 06:56
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('annotation', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='projects',
name='type',
field=models.CharField(choices=[('text_label', 'Text Labeling'), ('document_classificaton', 'Document classification')], max_length=50),
),
migrations.CreateModel(
name='DCDocLabel',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('document', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='dl_doc', to='annotation.Documents')),
('label', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='dl_label', to='annotation.TlLabels')),
],
options={
'unique_together': {('document', 'label')},
},
),
]
| 36.032258 | 148 | 0.605192 | 112 | 1,117 | 5.901786 | 0.589286 | 0.048412 | 0.06354 | 0.099849 | 0.199697 | 0.199697 | 0.199697 | 0.199697 | 0.199697 | 0.199697 | 0 | 0.02515 | 0.252462 | 1,117 | 30 | 149 | 37.233333 | 0.766467 | 0.040286 | 0 | 0.083333 | 1 | 0 | 0.196262 | 0.020561 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45a398c3801e0cae9986d9b12a66be8c29607231 | 439 | py | Python | setup.py | GiorgioBalestrieri/renewables-ninja-client | 5c068f07e0e8e972f8802fd0b3ed28466bbf8c23 | [
"MIT"
] | 1 | 2020-05-27T14:15:00.000Z | 2020-05-27T14:15:00.000Z | setup.py | GiorgioBalestrieri/renewables-ninja-client | 5c068f07e0e8e972f8802fd0b3ed28466bbf8c23 | [
"MIT"
] | 1 | 2020-05-27T14:27:29.000Z | 2020-05-27T14:27:29.000Z | setup.py | GiorgioBalestrieri/renewables-ninja-client | 5c068f07e0e8e972f8802fd0b3ed28466bbf8c23 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
name = "renewables_ninja_client",
version = "0.1.0",
description = ("Client for Renewables Ninja API."),
author = ["Giorgio Balestrieri"],
packages = find_packages(exclude=[
"docs", "tests", "examples",
"sandbox", "scripts"]),
install_requires=[
"pandas",
"numpy",
"requests",
'typing;python_version<"3.7"'],
) | 27.4375 | 55 | 0.587699 | 43 | 439 | 5.860465 | 0.790698 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.259681 | 439 | 16 | 56 | 27.4375 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0.354545 | 0.113636 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45a5d652cf5968d8ff355947ff4f952cf9d7e299 | 1,408 | py | Python | urls.py | enisimsar/watchtower-news | 222d2e52e76ef32ebb78eb325f4c32b64c0ba1a6 | [
"MIT"
] | 2 | 2019-02-21T18:29:09.000Z | 2021-01-27T14:52:46.000Z | urls.py | enisimsar/watchtower-news | 222d2e52e76ef32ebb78eb325f4c32b64c0ba1a6 | [
"MIT"
] | 3 | 2018-11-22T08:34:04.000Z | 2021-06-01T22:47:19.000Z | urls.py | enisimsar/watchtower-news | 222d2e52e76ef32ebb78eb325f4c32b64c0ba1a6 | [
"MIT"
] | 1 | 2019-06-13T10:45:46.000Z | 2019-06-13T10:45:46.000Z | """
Endpoints
"""
from handlers.auth import UserHandler, AuthHandler
from handlers.base import StaticHandler
from handlers.invitations import InvitationHandler, InvitationsHandler, InvitationPostHandler
from handlers.logs import LogHandler, LogsHandler
from handlers.swagger import SwaggerHandler
from handlers.topics import TopicHandler, TopicsHandler, TopicPostHandler
from handlers.news import NewsHandler, SingleNewsHandler
from handlers.tweets import TweetsHandler, TweetHandler
from settings import app_settings
__author__ = 'Enis Simsar'
url_patterns = [
# ----- API ENDPOINTS ----- #
# AUTH
# (r"/api/auth", AuthHandler),
# (r"/api/user", UserHandler),
# TOPIC
(r"/api/topic", TopicPostHandler),
(r"/api/topic/(.*)$", TopicHandler),
(r"/api/topics", TopicsHandler),
# NEWS
(r"/api/single_news/(.*)$", SingleNewsHandler),
(r"/api/news", NewsHandler),
# TWEETS
# (r"/api/tweet/(.*)$", TweetHandler),
# (r"/api/tweets", TweetsHandler),
# INVITATIONS
# (r"/api/invitation", InvitationPostHandler),
# (r"/api/invitation/(.*)$", InvitationHandler),
# (r"/api/invitations", InvitationsHandler),
# LOGS
# (r'/api/logs', LogsHandler),
# (r'/api/log/(.*)$', LogHandler),
# ----- UI ENDPOINTS ----- #
(r'/', SwaggerHandler),
(r"/static/(.*)", StaticHandler, {'path': app_settings['template_path']}),
]
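# These patterns are typically handed to the Tornado application together with the
# imported settings (sketch; the actual wiring lives in the app's entry point):
#   application = tornado.web.Application(url_patterns, **app_settings)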
| 26.566038 | 93 | 0.666903 | 136 | 1,408 | 6.838235 | 0.345588 | 0.060215 | 0.019355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164063 | 1,408 | 52 | 94 | 27.076923 | 0.790144 | 0.303267 | 0 | 0 | 0 | 0 | 0.114256 | 0.023061 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.473684 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
45a961e0e1f604b60a477e34ad903c8057c5ec23 | 877 | py | Python | scripts/ipu/callbacks.py | BastienArcelin/IPU-GPU | dde946686478ce77a06821a1517b5b8206ab8de9 | [
"BSD-3-Clause"
] | null | null | null | scripts/ipu/callbacks.py | BastienArcelin/IPU-GPU | dde946686478ce77a06821a1517b5b8206ab8de9 | [
"BSD-3-Clause"
] | null | null | null | scripts/ipu/callbacks.py | BastienArcelin/IPU-GPU | dde946686478ce77a06821a1517b5b8206ab8de9 | [
"BSD-3-Clause"
] | null | null | null | import sys, os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import Callback, ReduceLROnPlateau, TerminateOnNaN, ModelCheckpoint
import tensorflow.keras.backend as K
import tensorflow as tf
import time
###### Callbacks
# Create a callback to compute time spent between 10th and 110th epoch
class time_callback(Callback):
def __init__(self):
'''
Compute time spent between 10th and 110th epoch
'''
self.epoch = 1
        self.t1 = 0
        self.t2 = 0
    def on_epoch_end(self, epoch, logs=None):
if (self.epoch == 10):
self.t1 =time.time()
print('t1: '+str(self.t1))
elif (self.epoch == 110):
self.t2 = time.time()
print('t2: '+str(self.t2))
print('for 100 epochs from 10 to 110: '+str(self.t2 - self.t1))
self.epoch +=1 | 31.321429 | 99 | 0.616876 | 119 | 877 | 4.487395 | 0.420168 | 0.08427 | 0.059925 | 0.086142 | 0.149813 | 0.149813 | 0.149813 | 0.149813 | 0 | 0 | 0 | 0.059937 | 0.277081 | 877 | 28 | 100 | 31.321429 | 0.782334 | 0.144812 | 0 | 0 | 0 | 0 | 0.054092 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.333333 | 0 | 0.47619 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
45a9f4fd71c58a688f030ddcf7a969454a4a8652 | 1,400 | py | Python | ag/tests/test_metrics.py | justyre/jus | 1339c010ac4499c253061d2cce5e638ec06062bd | [
"MIT"
] | null | null | null | ag/tests/test_metrics.py | justyre/jus | 1339c010ac4499c253061d2cce5e638ec06062bd | [
"MIT"
] | null | null | null | ag/tests/test_metrics.py | justyre/jus | 1339c010ac4499c253061d2cce5e638ec06062bd | [
"MIT"
] | null | null | null | """Unit tests for the metrics module."""
import pytest
from forest import metrics
def test_counter():
"""Test counter."""
counter = metrics.Counter()
counter.increase()
assert counter.count == 1
counter.increase(10)
assert counter.count == 11
counter.decrease()
assert counter.count == 10
counter.decrease(11)
assert counter.count == -1
def test_histogram():
"""Test histogram."""
histogram = metrics.Histogram()
for value in range(0, 10):
histogram.update(value=value)
result = histogram.report()
assert result["min"] == 0
assert result["max"] == 9
assert result["medium"] == pytest.approx(4.5)
assert result["mean"] == pytest.approx(4.5)
assert result["stdDev"] == pytest.approx(2.8, rel=0.1) # relative tolerance of 0.1
assert result["percentile"]["75"] == pytest.approx(6.75)
assert result["percentile"]["95"] == pytest.approx(8.5, 0.1)
assert result["percentile"]["99"] == pytest.approx(9, 0.1)
def test_registry():
"""Test metrics registry."""
counter = metrics.Counter()
histogram = metrics.Histogram()
registry = metrics.MetricRegistry()
registry.register("counter", counter)
registry.register("histogram", histogram)
assert registry.get_metric("counter") is counter
assert registry.get_metric("histogram") is histogram
| 28.571429 | 87 | 0.642143 | 168 | 1,400 | 5.321429 | 0.321429 | 0.107383 | 0.080537 | 0.042506 | 0.111857 | 0.058166 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.209286 | 1,400 | 49 | 88 | 28.571429 | 0.770551 | 0.081429 | 0 | 0.121212 | 0 | 0 | 0.07109 | 0 | 0 | 0 | 0 | 0 | 0.424242 | 1 | 0.090909 | false | 0 | 0.060606 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45acd3a5f2cd6ccaa0b7e6db6bf65eb041445b32 | 285 | py | Python | Day18/turtle_dashed_line.py | CodePuzzler/100-Days-Of-Code-Python | 4f6da9dabc73f747266ce0e66057d10754ecc54e | [
"MIT"
] | null | null | null | Day18/turtle_dashed_line.py | CodePuzzler/100-Days-Of-Code-Python | 4f6da9dabc73f747266ce0e66057d10754ecc54e | [
"MIT"
] | null | null | null | Day18/turtle_dashed_line.py | CodePuzzler/100-Days-Of-Code-Python | 4f6da9dabc73f747266ce0e66057d10754ecc54e | [
"MIT"
] | null | null | null | # Day18 of my 100DaysOfCode Challenge
# Draw a dashed line using Turtle Graphics
from turtle import Turtle, Screen
groot = Turtle()
for _ in range(15):
groot.forward(10)
groot.penup()
groot.forward(10)
groot.pendown()
my_screen = Screen()
my_screen.exitonclick()
| 15.833333 | 42 | 0.708772 | 39 | 285 | 5.102564 | 0.641026 | 0.120603 | 0.140704 | 0.190955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048035 | 0.196491 | 285 | 17 | 43 | 16.764706 | 0.820961 | 0.266667 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45b197a9d2ff1bd1c087f9b6df99427829783c26 | 11,544 | py | Python | sdk/python/pulumi_oci/dns/get_resolver_endpoint.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/dns/get_resolver_endpoint.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/dns/get_resolver_endpoint.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'GetResolverEndpointResult',
'AwaitableGetResolverEndpointResult',
'get_resolver_endpoint',
]
@pulumi.output_type
class GetResolverEndpointResult:
"""
A collection of values returned by getResolverEndpoint.
"""
def __init__(__self__, compartment_id=None, endpoint_type=None, forwarding_address=None, id=None, is_forwarding=None, is_listening=None, listening_address=None, name=None, nsg_ids=None, resolver_endpoint_name=None, resolver_id=None, scope=None, self=None, state=None, subnet_id=None, time_created=None, time_updated=None):
if compartment_id and not isinstance(compartment_id, str):
raise TypeError("Expected argument 'compartment_id' to be a str")
pulumi.set(__self__, "compartment_id", compartment_id)
if endpoint_type and not isinstance(endpoint_type, str):
raise TypeError("Expected argument 'endpoint_type' to be a str")
pulumi.set(__self__, "endpoint_type", endpoint_type)
if forwarding_address and not isinstance(forwarding_address, str):
raise TypeError("Expected argument 'forwarding_address' to be a str")
pulumi.set(__self__, "forwarding_address", forwarding_address)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if is_forwarding and not isinstance(is_forwarding, bool):
raise TypeError("Expected argument 'is_forwarding' to be a bool")
pulumi.set(__self__, "is_forwarding", is_forwarding)
if is_listening and not isinstance(is_listening, bool):
raise TypeError("Expected argument 'is_listening' to be a bool")
pulumi.set(__self__, "is_listening", is_listening)
if listening_address and not isinstance(listening_address, str):
raise TypeError("Expected argument 'listening_address' to be a str")
pulumi.set(__self__, "listening_address", listening_address)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if nsg_ids and not isinstance(nsg_ids, list):
raise TypeError("Expected argument 'nsg_ids' to be a list")
pulumi.set(__self__, "nsg_ids", nsg_ids)
if resolver_endpoint_name and not isinstance(resolver_endpoint_name, str):
raise TypeError("Expected argument 'resolver_endpoint_name' to be a str")
pulumi.set(__self__, "resolver_endpoint_name", resolver_endpoint_name)
if resolver_id and not isinstance(resolver_id, str):
raise TypeError("Expected argument 'resolver_id' to be a str")
pulumi.set(__self__, "resolver_id", resolver_id)
if scope and not isinstance(scope, str):
raise TypeError("Expected argument 'scope' to be a str")
pulumi.set(__self__, "scope", scope)
if self and not isinstance(self, str):
raise TypeError("Expected argument 'self' to be a str")
pulumi.set(__self__, "self", self)
if state and not isinstance(state, str):
raise TypeError("Expected argument 'state' to be a str")
pulumi.set(__self__, "state", state)
if subnet_id and not isinstance(subnet_id, str):
raise TypeError("Expected argument 'subnet_id' to be a str")
pulumi.set(__self__, "subnet_id", subnet_id)
if time_created and not isinstance(time_created, str):
raise TypeError("Expected argument 'time_created' to be a str")
pulumi.set(__self__, "time_created", time_created)
if time_updated and not isinstance(time_updated, str):
raise TypeError("Expected argument 'time_updated' to be a str")
pulumi.set(__self__, "time_updated", time_updated)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> str:
"""
The OCID of the owning compartment. This will match the resolver that the resolver endpoint is under and will be updated if the resolver's compartment is changed.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="endpointType")
def endpoint_type(self) -> str:
"""
The type of resolver endpoint. VNIC is currently the only supported type.
"""
return pulumi.get(self, "endpoint_type")
@property
@pulumi.getter(name="forwardingAddress")
def forwarding_address(self) -> str:
"""
An IP address from which forwarded queries may be sent. For VNIC endpoints, this IP address must be part of the subnet and will be assigned by the system if unspecified when isForwarding is true.
"""
return pulumi.get(self, "forwarding_address")
@property
@pulumi.getter
def id(self) -> str:
return pulumi.get(self, "id")
@property
@pulumi.getter(name="isForwarding")
def is_forwarding(self) -> bool:
"""
A Boolean flag indicating whether or not the resolver endpoint is for forwarding.
"""
return pulumi.get(self, "is_forwarding")
@property
@pulumi.getter(name="isListening")
def is_listening(self) -> bool:
"""
A Boolean flag indicating whether or not the resolver endpoint is for listening.
"""
return pulumi.get(self, "is_listening")
@property
@pulumi.getter(name="listeningAddress")
def listening_address(self) -> str:
"""
An IP address to listen to queries on. For VNIC endpoints this IP address must be part of the subnet and will be assigned by the system if unspecified when isListening is true.
"""
return pulumi.get(self, "listening_address")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the resolver endpoint. Must be unique, case-insensitive, within the resolver.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="nsgIds")
def nsg_ids(self) -> Sequence[str]:
"""
An array of network security group OCIDs for the resolver endpoint. These must be part of the VCN that the resolver endpoint is a part of.
"""
return pulumi.get(self, "nsg_ids")
@property
@pulumi.getter(name="resolverEndpointName")
def resolver_endpoint_name(self) -> str:
return pulumi.get(self, "resolver_endpoint_name")
@property
@pulumi.getter(name="resolverId")
def resolver_id(self) -> str:
return pulumi.get(self, "resolver_id")
@property
@pulumi.getter
def scope(self) -> str:
return pulumi.get(self, "scope")
@property
@pulumi.getter
def self(self) -> str:
"""
The canonical absolute URL of the resource.
"""
return pulumi.get(self, "self")
@property
@pulumi.getter
def state(self) -> str:
"""
The current state of the resource.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> str:
"""
The OCID of a subnet. Must be part of the VCN that the resolver is attached to.
"""
return pulumi.get(self, "subnet_id")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> str:
"""
The date and time the resource was created in "YYYY-MM-ddThh:mm:ssZ" format with a Z offset, as defined by RFC 3339.
"""
return pulumi.get(self, "time_created")
@property
@pulumi.getter(name="timeUpdated")
def time_updated(self) -> str:
"""
The date and time the resource was last updated in "YYYY-MM-ddThh:mm:ssZ" format with a Z offset, as defined by RFC 3339.
"""
return pulumi.get(self, "time_updated")
class AwaitableGetResolverEndpointResult(GetResolverEndpointResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetResolverEndpointResult(
compartment_id=self.compartment_id,
endpoint_type=self.endpoint_type,
forwarding_address=self.forwarding_address,
id=self.id,
is_forwarding=self.is_forwarding,
is_listening=self.is_listening,
listening_address=self.listening_address,
name=self.name,
nsg_ids=self.nsg_ids,
resolver_endpoint_name=self.resolver_endpoint_name,
resolver_id=self.resolver_id,
scope=self.scope,
self=self.self,
state=self.state,
subnet_id=self.subnet_id,
time_created=self.time_created,
time_updated=self.time_updated)
def get_resolver_endpoint(resolver_endpoint_name: Optional[str] = None,
resolver_id: Optional[str] = None,
scope: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetResolverEndpointResult:
"""
This data source provides details about a specific Resolver Endpoint resource in Oracle Cloud Infrastructure DNS service.
Gets information about a specific resolver endpoint. Note that attempting to get a resolver endpoint
in the DELETED lifecycle state will result in a `404` response to be consistent with other operations of the
API. Requires a `PRIVATE` scope query parameter.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_resolver_endpoint = oci.dns.get_resolver_endpoint(resolver_endpoint_name=oci_dns_resolver_endpoint["test_resolver_endpoint"]["name"],
resolver_id=oci_dns_resolver["test_resolver"]["id"],
scope="PRIVATE")
```
:param str resolver_endpoint_name: The name of the target resolver endpoint.
:param str resolver_id: The OCID of the target resolver.
:param str scope: Value must be `PRIVATE` when listing private name resolver endpoints.
"""
__args__ = dict()
__args__['resolverEndpointName'] = resolver_endpoint_name
__args__['resolverId'] = resolver_id
__args__['scope'] = scope
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('oci:dns/getResolverEndpoint:getResolverEndpoint', __args__, opts=opts, typ=GetResolverEndpointResult).value
return AwaitableGetResolverEndpointResult(
compartment_id=__ret__.compartment_id,
endpoint_type=__ret__.endpoint_type,
forwarding_address=__ret__.forwarding_address,
id=__ret__.id,
is_forwarding=__ret__.is_forwarding,
is_listening=__ret__.is_listening,
listening_address=__ret__.listening_address,
name=__ret__.name,
nsg_ids=__ret__.nsg_ids,
resolver_endpoint_name=__ret__.resolver_endpoint_name,
resolver_id=__ret__.resolver_id,
scope=__ret__.scope,
self=__ret__.self,
state=__ret__.state,
subnet_id=__ret__.subnet_id,
time_created=__ret__.time_created,
time_updated=__ret__.time_updated)
| 41.228571 | 326 | 0.669179 | 1,422 | 11,544 | 5.175809 | 0.154712 | 0.071739 | 0.046196 | 0.069293 | 0.338995 | 0.253533 | 0.158967 | 0.125543 | 0.083152 | 0.064674 | 0 | 0.001368 | 0.240385 | 11,544 | 279 | 327 | 41.376344 | 0.837952 | 0.22228 | 0 | 0.118919 | 1 | 0 | 0.162533 | 0.022703 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.027027 | 0.021622 | 0.248649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45b8989508eca27d897d3e4bed1d5e5680833bf5 | 295 | py | Python | app/api/auth/api_v1/scheme.py | renovate-tests/pol | dca9aa4ce34273575d69a140dc3bb1d2ac14ecbf | [
"MIT"
] | 5 | 2019-05-11T05:14:44.000Z | 2019-09-07T10:22:53.000Z | app/api/auth/api_v1/scheme.py | renovate-tests/pol | dca9aa4ce34273575d69a140dc3bb1d2ac14ecbf | [
"MIT"
] | 161 | 2019-09-09T07:30:25.000Z | 2022-03-14T19:52:43.000Z | app/api/auth/api_v1/scheme.py | renovate-tests/pol | dca9aa4ce34273575d69a140dc3bb1d2ac14ecbf | [
"MIT"
] | 3 | 2019-09-07T13:15:05.000Z | 2020-05-06T04:30:46.000Z | from fastapi.security.api_key import APIKeyCookie, APIKeyHeader
API_KEY_NAME = "api_key"
cookie_scheme = APIKeyCookie(name="bgm-tv-auto-tracker", auto_error=False)
API_KEY_HEADER = APIKeyHeader(name="api-key", auto_error=False)
API_KEY_COOKIES = APIKeyCookie(name="api-key", auto_error=False)
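# Usage sketch (hypothetical route; assumes a FastAPI `app` instance exists). Because
# the schemes use auto_error=False, a missing key arrives as None instead of a 403:
#   from fastapi import Security
#   @app.get("/whoami")
#   async def whoami(api_key: str = Security(API_KEY_HEADER)):
#       return {"api_key_present": api_key is not None}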
| 36.875 | 74 | 0.80678 | 44 | 295 | 5.136364 | 0.431818 | 0.185841 | 0.132743 | 0.150442 | 0.327434 | 0.212389 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074576 | 295 | 7 | 75 | 42.142857 | 0.827839 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45bf0a95bab68706143f320e56db3c875bc7a37a | 510 | py | Python | tests/r/test_labour.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 199 | 2017-07-24T01:34:27.000Z | 2022-01-29T00:50:55.000Z | tests/r/test_labour.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 46 | 2017-09-05T19:27:20.000Z | 2019-01-07T09:47:26.000Z | tests/r/test_labour.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 45 | 2017-07-26T00:10:44.000Z | 2022-03-16T20:44:59.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import shutil
import sys
import tempfile
from observations.r.labour import labour
def test_labour():
"""Test module labour.py by downloading
  labour.csv and checking that the
  extracted data has 569 rows and 4 columns
"""
test_path = tempfile.mkdtemp()
x_train, metadata = labour(test_path)
try:
assert x_train.shape == (569, 4)
except:
shutil.rmtree(test_path)
    raise
| 21.25 | 44 | 0.75098 | 72 | 510 | 5.041667 | 0.569444 | 0.082645 | 0.132231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019185 | 0.182353 | 510 | 23 | 45 | 22.173913 | 0.851319 | 0.215686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.466667 | 0 | 0.533333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
45c2c705433f6454e6ff7fae79f52f092ad52824 | 2,503 | py | Python | tests/test_lists.py | al3xandru/html2md | fe9c49c7a263f4236a057763d9ad68d237e8cf15 | [
"RSA-MD"
] | 8 | 2015-02-14T04:30:16.000Z | 2019-07-10T05:06:49.000Z | tests/test_lists.py | al3xandru/html2md | fe9c49c7a263f4236a057763d9ad68d237e8cf15 | [
"RSA-MD"
] | 2 | 2015-02-20T12:16:54.000Z | 2017-09-21T10:02:53.000Z | tests/test_lists.py | al3xandru/html2md | fe9c49c7a263f4236a057763d9ad68d237e8cf15 | [
"RSA-MD"
] | 4 | 2016-02-06T04:28:16.000Z | 2021-04-22T00:12:39.000Z | import unittest
from context import html2md
from assertions import assertEq
__author__ = 'alex'
class SpecialListsTest(unittest.TestCase):
def test_text_and_paragraph(self):
in_html = '''<ul>
<li>item 1</li>
<li>item 2
<p>item 2 paragraph</p>
<p>item 2 item 2</p>
</li>
<li>item 3</li>
</ul>'''
out_md = '''* item 1
* item 2
item 2 paragraph
item 2 item 2
* item 3'''
assertEq(out_md, html2md.html2md(in_html))
def test_paragraph_mixed(self):
in_html = '''<ul>
<li>item 1</li>
<li>item 2</li>
<li><p>item 3</p>
<p>item 3 paragraph 2</p></li>
<li>item 4</li>
<li>item 5</li>
</ul>'''
out_md = '''* item 1
* item 2
* item 3
item 3 paragraph 2
* item 4
* item 5'''
assertEq(out_md, html2md.html2md(in_html))
def test_blockquote(self):
in_html = '''
<ul>
<li><blockquote>
<p>item 1</p>
</blockquote></li>
<li><blockquote>
<p>item 2 paragraph 1</p>
<p>item 2 paragraph 2</p>
</blockquote></li>
<li><p>item 3</p></li>
</ul>
'''
out_md = '''* > item 1
* > item 2 paragraph 1
> item 2 paragraph 2
* item 3'''
assertEq(out_md, html2md.html2md(in_html))
def test_blockquote_complex(self):
in_html = '''<ul>
<li>item 1</li>
<li><p>item 2</p>
<blockquote>
<p>item 2 paragraph 1</p>
<p>item 2 paragraph 2</p>
</blockquote></li>
<li><p>item 3</p>
<blockquote>
<p>item 3 blockquote</p>
</blockquote></li>
</ul>'''
out_md = '''* item 1
* item 2
> item 2 paragraph 1
> item 2 paragraph 2
* item 3
> item 3 blockquote'''
assertEq(out_md, html2md.html2md(in_html))
def test_cheatsheet(self):
in_html = '''
<ul>
<li><p>A list item.</p>
<p>With multiple paragraphs.</p>
<blockquote>
<p>And a blockquote</p>
</blockquote></li>
<li><p>Another List item with
a hard wrapped 2nd line.</p>
<pre><code>
project/
__init__.py
example1.py
test/
__init__.py
test_example1.py
</code></pre></li>
</ul>'''
out_md = '''* A list item.
With multiple paragraphs.
> And a blockquote
* Another List item with
a hard wrapped 2nd line.
project/
__init__.py
example1.py
test/
__init__.py
test_example1.py'''
assertEq(out_md, html2md.html2md(in_html))
def suite():
return unittest.TestLoader().loadTestsFromTestCase(SpecialListsTest)
if __name__ == '__main__':
unittest.main() | 17.143836 | 72 | 0.581302 | 367 | 2,503 | 3.803815 | 0.158038 | 0.071633 | 0.100287 | 0.04298 | 0.606017 | 0.552292 | 0.544413 | 0.544413 | 0.508596 | 0.39255 | 0 | 0.038172 | 0.256892 | 2,503 | 146 | 73 | 17.143836 | 0.712366 | 0 | 0 | 0.522936 | 0 | 0 | 0.621006 | 0 | 0 | 0 | 0 | 0 | 0.055046 | 1 | 0.055046 | false | 0 | 0.027523 | 0.009174 | 0.100917 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45d13bddb4847d731177692bebac88cbbf7f4d4b | 1,344 | py | Python | scripts/collapse_subtypes.py | edawson/rkmh | ea3d2e6791e8202ec0e487e648c0182f1766728b | [
"MIT"
] | 43 | 2016-06-29T15:55:36.000Z | 2022-03-07T03:18:45.000Z | scripts/collapse_subtypes.py | edawson/rkmh | ea3d2e6791e8202ec0e487e648c0182f1766728b | [
"MIT"
] | 12 | 2016-06-29T12:37:01.000Z | 2021-07-06T18:58:00.000Z | scripts/collapse_subtypes.py | edawson/rkmh | ea3d2e6791e8202ec0e487e648c0182f1766728b | [
"MIT"
] | 8 | 2016-09-01T17:10:53.000Z | 2021-02-26T10:55:31.000Z | import sys
from collections import Counter
## 5 |strains A1:23146 C:377 B1:546 unclassified:211701 A3:133 A2:212 A4:2230 B2:1052 D2:551 D3:3685 D1:30293 |sketch sketchSize=1000 kmer=16
if __name__ == "__main__":
    for line in sys.stdin:
        x_d = Counter()
        tokens = line.split("|")
        features = tokens[1].split(" ")
        for i in features:
            if i.startswith("A"):
                x_d["A"] += int(i.strip().split(":")[1])
            elif i.startswith("B"):
                x_d["B"] += int(i.strip().split(":")[1])
            elif i.startswith("C"):
                x_d["C"] += int(i.strip().split(":")[1])
            elif i.startswith("D"):
                x_d["D"] += int(i.strip().split(":")[1])
            elif i.startswith("u"):
                x_d["U"] = int(i.strip().split(":")[1])
        total = sum([x_d[x] for x in x_d])
        #feat_l = [str(x + ":" + str( float(x_d[x]) / float(total) )) for x in x_d if x is not "U"]
        feat_l = [str(x + ":" + str( float(x_d[x]) / float(total) )) for x in x_d]
        #feat_l = [str(x + ":" + str((x_d[x]) )) for x in x_d]
        x_feat = "|vir " + " ".join(feat_l)
        #xtra_namespace = "|" + tokens[2]
        #print " ".join([ tokens[0].strip(), x_feat, xtra_namespace] ).strip()
        print " ".join([ tokens[0].strip(), x_feat] ).strip()
| 44.8 | 141 | 0.497024 | 205 | 1,344 | 3.107317 | 0.341463 | 0.043956 | 0.070644 | 0.10989 | 0.466248 | 0.4427 | 0.4427 | 0.361068 | 0.150706 | 0.150706 | 0 | 0.071429 | 0.291667 | 1,344 | 29 | 142 | 46.344828 | 0.597689 | 0.284226 | 0 | 0 | 0 | 0 | 0.034555 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45d64512716035b616ac8ec0c6bb1bdd8e298d41 | 974 | py | Python | cbmcfs3_runner/scenarios/static_demand.py | xapple/cbm_runner | ec532819e0a086077475bfd479836a378f187f6f | [
"MIT"
] | 2 | 2019-07-11T23:49:22.000Z | 2019-10-31T19:11:45.000Z | cbmcfs3_runner/scenarios/static_demand.py | xapple/cbm_runner | ec532819e0a086077475bfd479836a378f187f6f | [
"MIT"
] | null | null | null | cbmcfs3_runner/scenarios/static_demand.py | xapple/cbm_runner | ec532819e0a086077475bfd479836a378f187f6f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Written by Lucas Sinclair and Paul Rougieux.
JRC biomass Project.
Unit D1 Bioeconomy.
"""
# Built-in modules #
# First party modules #
from plumbing.cache import property_cached
# Internal modules #
from cbmcfs3_runner.scenarios.base_scen import Scenario
from cbmcfs3_runner.core.runner import Runner
###############################################################################
class StaticDemand(Scenario):
"""
This scenario represents a demand that is pre-calculated and is not a
function of the maximum wood supply (no interaction yet with the GFTM model).
"""
short_name = 'static_demand'
@property_cached
def runners(self):
"""A dictionary of country codes as keys with a list of runners as values."""
# Create all runners #
result = {c.iso2_code: [Runner(self, c, 0)] for c in self.continent}
# Don't modify these runners #
return result
| 27.055556 | 85 | 0.63655 | 124 | 974 | 4.935484 | 0.717742 | 0.035948 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008997 | 0.201232 | 974 | 35 | 86 | 27.828571 | 0.777635 | 0.466119 | 0 | 0 | 0 | 0 | 0.032746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
45d7be5aecc095fc8c811a04d03b02510ef8c196 | 1,310 | py | Python | personalization/shared/utils.py | alshedivat/federated | 100f0e0940282818c42c39156407ae419f26de50 | [
"Apache-2.0"
] | null | null | null | personalization/shared/utils.py | alshedivat/federated | 100f0e0940282818c42c39156407ae419f26de50 | [
"Apache-2.0"
] | null | null | null | personalization/shared/utils.py | alshedivat/federated | 100f0e0940282818c42c39156407ae419f26de50 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021, Maruan Al-Shedivat.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library for loading and preprocessing EMNIST training and testing data."""
import contextlib
@contextlib.contextmanager
def result_type_is_sequence_hack(tf_computation):
# Monkey patch the result type of the dataset computation to avoid TypeError
# being raised inside `tff.simultation.iterative_process_compositions`.
# TODO: propose to relax the assumption about the type signature of the
# dataset computation being SequenceType in TFF.
try:
# Monkey-patch tf_computation's result type.
tf_computation.type_signature.result.is_sequence = lambda: True
yield
finally:
# Monkey-unpatch tf_computation's result type.
tf_computation.type_signature.result.is_sequence = lambda: False
| 40.9375 | 78 | 0.772519 | 186 | 1,310 | 5.360215 | 0.586022 | 0.060181 | 0.026078 | 0.032096 | 0.144433 | 0.144433 | 0.144433 | 0.144433 | 0.144433 | 0.144433 | 0 | 0.007306 | 0.164122 | 1,310 | 31 | 79 | 42.258065 | 0.903196 | 0.753435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45d82914df1d0506310b57788fd50743db64c8a9 | 2,635 | py | Python | vars/Staging_security_port_scanning.py | rlennon/Doodle | 60b1645fd327192848c4daaccb572fa456974526 | [
"MIT"
] | 5 | 2019-02-25T20:10:18.000Z | 2019-04-24T20:21:04.000Z | vars/Staging_security_port_scanning.py | rlennon/Doodle | 60b1645fd327192848c4daaccb572fa456974526 | [
"MIT"
] | 28 | 2019-02-26T13:50:52.000Z | 2019-04-24T20:10:29.000Z | vars/Staging_security_port_scanning.py | rlennon/Doodle | 60b1645fd327192848c4daaccb572fa456974526 | [
"MIT"
] | 6 | 2019-02-28T20:54:09.000Z | 2019-04-06T22:18:50.000Z | import sys, os, socket
class Ssh_Util:
    def port_scan(self, remote_host_ip):
        def print_box(print_line):
            print("-" * 78)
            print(print_line)
            print("-" * 78)
        # Validate the IP of the remote host
        # remote_host_ip = "172.28.25.122"
        # Using the range function to specify ports (here it will scans all ports between 1 and 1024)
        try:
            from_port = 1
            to_port = 200
            print("Scanning Port range - {} to {}.".format(from_port, to_port))
            for port in range(from_port, to_port):
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                result = sock.connect_ex((remote_host_ip, port))
                ssh = False
                http = False
                other_ports = False
                port_list = []
                if result == 0:
                    if port == 22:
                        print("\n\t\t\tPort {} - open for SSH!\n".format(port))
                        ssh = True
                        sock.close()
                    elif port == 80:
                        print("\n\t\t\tPort {} - open for HTTP!\n".format(port))
                        http = True
                        sock.close()
                    else:
                        print("\n\t\t\tPort {} - open!".format(port))
                        other_ports = True
                        port_list = [port]
                        sock.close()
                        print("\n\t\t\tThe connection to {} over Port {} has now been closed".format(remote_host_ip, port))
            # Printing the information to screen
            print_box("Scanning Completed for {}".format(remote_host_ip))
            print("\t\t\tSummary")
            if ssh:
                print("\tPort 22, Is open for SSH!")
            if http:
                print("\tPort 80, Is open for HTTP!")
            if other_ports:
                for item in port_list:
                    print("\tPort {} is Open.".format(item))
            if not other_ports:
                print("\tNo other Ports are available!")
            print("-" * 78)
        except socket.error:
            print("Couldn't connect to server")
            sys.exit()


def main():
    # Initialize the ssh object
    ssh_obj = Ssh_Util()
    print(" ")
    print(" ")
    print(" ")
    print("Scanning the Staging Web Server")
    ssh_obj.port_scan("172.28.25.129")
    print(" ")
    print(" ")
    print(" ")
    print("Scanning the Staging API Server")
    ssh_obj.port_scan("172.28.25.128")
main() | 36.09589 | 123 | 0.470209 | 300 | 2,635 | 4.003333 | 0.34 | 0.058285 | 0.049958 | 0.026644 | 0.155704 | 0.155704 | 0.141549 | 0.044963 | 0 | 0 | 0 | 0.035317 | 0.419734 | 2,635 | 73 | 124 | 36.09589 | 0.750164 | 0.08425 | 0 | 0.196721 | 0 | 0 | 0.185631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04918 | false | 0 | 0.016393 | 0 | 0.081967 | 0.409836 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
45dbe199ff3f79ba88ba30ee2d67fa4703fdbf6b | 3,556 | py | Python | app/settings/dev.py | Pixsel1/movie-warehouse | 038c061aa565365ff45dc10bc2c4ab58fdf11f01 | [
"MIT"
] | null | null | null | app/settings/dev.py | Pixsel1/movie-warehouse | 038c061aa565365ff45dc10bc2c4ab58fdf11f01 | [
"MIT"
] | 5 | 2021-03-19T01:58:15.000Z | 2021-09-22T18:52:59.000Z | app/settings/dev.py | Pixsel1/movie-warehouse | 038c061aa565365ff45dc10bc2c4ab58fdf11f01 | [
"MIT"
] | null | null | null | import logging
import os
import sentry_sdk # NOQA
from sentry_sdk.integrations.django import DjangoIntegration # NOQA
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.debug("loading settings dev.py")
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "XXX")
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
SENTRY_DSN = os.environ.get("SENTRY_DSN")
# Application definition
INSTALLED_APPS = [
# Django
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
# Third party
"rest_framework",
"django_extensions",
"django_filters",
"drf_yasg",
# Local
"moviewarehouse.movies",
]
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "moviewarehouse.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
]
},
}
]
WSGI_APPLICATION = "moviewarehouse.wsgi.application"
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
"default": {
"HOST": os.getenv("POSTGRES_HOST"),
"NAME": os.getenv("POSTGRES_DB"),
"USER": os.getenv("POSTGRES_USER"),
"PASSWORD": os.getenv("POSTGRES_PASSWORD"),
"ENGINE": "django.db.backends.postgresql_psycopg2",
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"
},
{"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
{"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
{"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = "/static/"
MEDIA_URL = "/media/"
STATIC_ROOT = os.path.join(BASE_DIR, "static")
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
# OMDB API
OMDB_API_KEY = os.getenv("OMDB_API_KEY")
OMDB_API_URL = "https://www.omdbapi.com/?apikey={apikey}&t={title}&type=movie&r=json"
# REST FRAMEWORK
REST_FRAMEWORK = {
"DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination",
"PAGE_SIZE": 100,
}
| 26.340741 | 90 | 0.702193 | 393 | 3,556 | 6.19084 | 0.417303 | 0.080148 | 0.048911 | 0.041102 | 0.158241 | 0.138101 | 0.05672 | 0.032881 | 0.032881 | 0 | 0 | 0.006036 | 0.161417 | 3,556 | 134 | 91 | 26.537313 | 0.809859 | 0.172947 | 0 | 0 | 0 | 0.012048 | 0.520041 | 0.37136 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.072289 | 0.048193 | 0 | 0.048193 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
45dff5ae66853bb0ebe532decb37848df59bad29 | 1,368 | py | Python | Examples/mouselight_api.py | maithamn/BrainRender | 9359ccc5b278f58ee3124bcf75b9ebefe0378bbc | [
"MIT"
] | null | null | null | Examples/mouselight_api.py | maithamn/BrainRender | 9359ccc5b278f58ee3124bcf75b9ebefe0378bbc | [
"MIT"
] | null | null | null | Examples/mouselight_api.py | maithamn/BrainRender | 9359ccc5b278f58ee3124bcf75b9ebefe0378bbc | [
"MIT"
] | null | null | null | """
This tutorial shows how to download and render neurons from the MouseLight project
using the MouseLightAPI class.
You can also download data manually from the neuronbrowser website and render them by
passing the downloaded files to `scene.add_neurons`.
"""
import brainrender
brainrender.USE_MORPHOLOGY_CACHE = True
from brainrender.scene import Scene
from brainrender.Utils.MouseLightAPI.mouselight_api import MouseLightAPI
from brainrender.Utils.MouseLightAPI.mouselight_info import mouselight_api_info, mouselight_fetch_neurons_metadata
# Fetch metadata for neurons with some in the secondary motor cortex
neurons_metadata = mouselight_fetch_neurons_metadata(filterby='soma', filter_regions=['MOs'])
# Then we can download the files and save them as a .json file
ml_api = MouseLightAPI()
neurons_files = ml_api.download_neurons(neurons_metadata[:2]) # just saving the first couple neurons to speed things up
# Show neurons and ZI in the same scene:
scene = Scene()
scene.add_neurons(neurons_files, soma_color='orangered', dendrites_color='orangered',
axon_color='darkseagreen', neurite_radius=8) # add_neurons takes a lot of arguments to specify how the neurons should look
# make sure to check the source code to see all available optionsq
scene.add_brain_regions(['MOs'], alpha=0.15)
scene.render(camera='coronal') | 45.6 | 138 | 0.79386 | 195 | 1,368 | 5.420513 | 0.523077 | 0.056764 | 0.028382 | 0.062441 | 0.081362 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004263 | 0.142544 | 1,368 | 30 | 139 | 45.6 | 0.896846 | 0.452485 | 0 | 0 | 0 | 0 | 0.065187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
45e1b1ddf278cb31a93dc4443d544327d2019794 | 1,692 | py | Python | runtest/__init__.py | thautwarm/gkdtex | e6e7404b0be503a684aee89f2437770ef7b4b04a | [
"MIT"
] | 3 | 2020-12-04T08:48:12.000Z | 2020-12-07T17:54:33.000Z | runtest/__init__.py | thautwarm/gkdtex | e6e7404b0be503a684aee89f2437770ef7b4b04a | [
"MIT"
] | 5 | 2020-11-30T03:46:10.000Z | 2020-12-03T08:07:27.000Z | runtest/__init__.py | thautwarm/gkdtex | e6e7404b0be503a684aee89f2437770ef7b4b04a | [
"MIT"
] | null | null | null | from gkdtex.wrap import parse
from gkdtex.interpreter import Interpreter, CBVFunction
from gkdtex.developer_utilities import *
import sys
src = r"""
\newcommand{\GKDCreateId}{\input{|"gkdmgr --op uuid --rt A"}}
\makeatletter
\newcommand*\GKDNewTemp[2]{
\@ifundefined{GKDTemp#1}{
\expandafter\newcommand\csname GKDTemp#1\endcsname{#2}
}{
\expandafter\renewcommand\csname GKDTemp#1\endcsname{#2}
}
}
\makeatother
\GKDNewTemp{ConstID}{\GKDCreateId}
\newcommand{\GKDSet}[2]{\input{|"gkdmgr --op set --rt \GKDTempConstID #1 #2"}}
\newcommand{\GKDGet}[1]{\input{|"gkdmgr --op get --rt \GKDTempConstID #1"}}
\newcommand{\GKDPush}[2]{\input{|"gkdmgr --op push --rt \GKDTempConstID #1 #2"}}
\newcommand{\GKDPop}[1]{\input{|"gkdmgr --op pop --rt \GKDTempConstID #1"}}
\newcommand{\GKDPyCall}[2]{\input{|"gkdmgr --op call --rt \GKDTempConstID #1 #2"}}
\makeatletter
\newenvironment{GKDBNF}[1]
{\VerbatimEnvironment
\GKDNewTemp{A}{#1}
\input{|"gkdmgr --op createDirFor --rt any ./gkdbnf/#1.bnf"}
\VerbatimOut{./gkdbnf/#1.bnf}
}%
{%
\endVerbatimOut%
\toks0{\immediate\write18}%
\begin{bnf*}
\input{|"gkdmgr --op bnf --rt any ./gkdbnf/\GKDTempA.bnf"}%
\end{bnf*}
}
\verb{a}
\makeatother
"""
body = parse(r"""$ #\1^{ #\1#1 } $""")
interpreter = Interpreter()
interpreter.filename = "a.tex"
interpreter.src = src
interpreter.globals['mk'] = CBVFunction([""], [None], dict(d=0), body)
def verb(a: Group, *, self: Interpreter, tex_print):
tex_print('<<')
tex_print(get_raw_from_span_params(self.src, a.offs))
tex_print('>>')
interpreter.globals['verb'] = verb
interpreter.interp(sys.stdout.write, parse(src, "a.tex"))
| 25.636364 | 82 | 0.663121 | 207 | 1,692 | 5.376812 | 0.376812 | 0.079066 | 0.093441 | 0.037736 | 0.093441 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020464 | 0.13357 | 1,692 | 65 | 83 | 26.030769 | 0.738745 | 0 | 0 | 0.081633 | 0 | 0.061224 | 0.670804 | 0.332151 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.081633 | 0 | 0.102041 | 0.081633 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
afd925dfee06bf46bcafe0d3b3e160d1c36eddbe | 3,940 | py | Python | tests/providers/test_credit_card.py | pablofm/faker | f09ad1128da99ec15510aad79b2bc27f79e3165d | [
"MIT"
] | 2 | 2020-02-12T20:12:50.000Z | 2020-02-12T22:02:53.000Z | tests/providers/test_credit_card.py | pablofm/faker | f09ad1128da99ec15510aad79b2bc27f79e3165d | [
"MIT"
] | null | null | null | tests/providers/test_credit_card.py | pablofm/faker | f09ad1128da99ec15510aad79b2bc27f79e3165d | [
"MIT"
] | 1 | 2020-07-12T12:50:15.000Z | 2020-07-12T12:50:15.000Z | import re
import unittest
from faker import Faker
from faker.providers.bank.ru_RU import Provider as RuBank
class TestCreditCardProvider(unittest.TestCase):
    def setUp(self):
        self.fake = Faker(locale='en_US')
        Faker.seed(0)
        self.provider = self.fake.provider('faker.providers.credit_card')
        self.mastercard_pattern = r'^(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}$'
        self.visa_pattern = r'^4[0-9]{12}([0-9]{3}){0,2}$'
        self.discover_pattern = r'^6(?:011|5[0-9]{2})[0-9]{12}$'
        self.diners_club_pattern = r'^3(?:0[0-5]|[68][0-9])[0-9]{11}$'
        self.jcb_pattern = r'^(?:2131|1800|35\d{3})\d{11}$'

    def test_mastercard(self):
        for prefix in self.provider.prefix_mastercard:
            number = self.provider._generate_number(prefix, 16)
            assert re.match(self.mastercard_pattern, number)

    def test_visa13(self):
        for prefix in self.provider.prefix_visa:
            number = self.provider._generate_number(prefix, 13)
            assert re.match(self.visa_pattern, number)

    def test_visa16(self):
        for prefix in self.provider.prefix_visa:
            number = self.provider._generate_number(prefix, 16)
            assert re.match(self.visa_pattern, number)

    def test_visa19(self):
        for prefix in self.provider.prefix_visa:
            number = self.provider._generate_number(prefix, 19)
            assert re.match(self.visa_pattern, number)

    def test_discover(self):
        for prefix in self.provider.prefix_discover:
            number = self.provider._generate_number(prefix, 16)
            assert re.match(self.discover_pattern, number)

    def test_diners_club(self):
        for prefix in self.provider.prefix_diners:
            number = self.provider._generate_number(prefix, 14)
            assert re.match(self.diners_club_pattern, number)

    def test_jcb16(self):
        for prefix in self.provider.prefix_jcb16:
            number = self.provider._generate_number(prefix, 16)
            assert re.match(self.jcb_pattern, number)

    def test_jcb15(self):
        for prefix in self.provider.prefix_jcb15:
            number = self.provider._generate_number(prefix, 15)
            assert re.match(self.jcb_pattern, number)


class TestRuRu(unittest.TestCase):
    """ Tests credit card in the ru_RU locale """

    def setUp(self):
        self.fake = Faker('ru_RU')
        Faker.seed(0)
        self.visa_pattern = r'^4[0-9]{15}$'
        self.mastercard_pattern = r'^(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}$'
        self.mir_pattern = r'^220[0-4][0-9]{12}$'
        self.maestro_pattern = r'^50|5[6-9]|6[0-9][0-9]{14}$'
        self.amex_pattern = r'^3[4|7][0-9]{13}$'
        self.unionpay_pattern = r'^62|81[0-9]{14}$'

    def test_visa(self):
        number = self.fake.credit_card_number('visa')
        assert re.match(self.visa_pattern, number)

    def test_mastercard(self):
        number = self.fake.credit_card_number('mastercard')
        assert re.match(self.mastercard_pattern, number)

    def test_mir(self):
        number = self.fake.credit_card_number('mir')
        assert re.match(self.mir_pattern, number)

    def test_maestro(self):
        number = self.fake.credit_card_number('maestro')
        assert re.match(self.maestro_pattern, number)

    def test_amex(self):
        number = self.fake.credit_card_number('amex')
        assert re.match(self.amex_pattern, number)

    def test_unionpay(self):
        number = self.fake.credit_card_number('unionpay')
        assert re.match(self.unionpay_pattern, number)

    def test_owner(self):
        card_data = self.fake.credit_card_full().split('\n')
        assert re.match('[A-Za-z]+', card_data[1])

    def test_issuer(self):
        card_data = self.fake.credit_card_full().split('\n')
        assert card_data[4] in RuBank.banks
| 37.52381 | 120 | 0.636802 | 585 | 3,940 | 4.129915 | 0.158974 | 0.018212 | 0.080712 | 0.09851 | 0.593957 | 0.593957 | 0.541805 | 0.354719 | 0.354719 | 0.262003 | 0 | 0.063533 | 0.217005 | 3,940 | 104 | 121 | 37.884615 | 0.719611 | 0.009391 | 0 | 0.3125 | 0 | 0.05 | 0.1181 | 0.086521 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.225 | false | 0 | 0.05 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
afdbbfd6b9fd80257fe07451c26105c2209f7e5f | 1,803 | py | Python | sheet_names_xlrd.py | patkujawa-wf/excel-file-reverse-engineering | 6a1780a62d6fda34964659607e2b299e62066671 | [
"MIT"
] | 2 | 2021-04-17T06:07:02.000Z | 2021-04-17T06:30:14.000Z | sheet_names_xlrd.py | patkujawa-wf/excel-file-reverse-engineering | 6a1780a62d6fda34964659607e2b299e62066671 | [
"MIT"
] | null | null | null | sheet_names_xlrd.py | patkujawa-wf/excel-file-reverse-engineering | 6a1780a62d6fda34964659607e2b299e62066671 | [
"MIT"
] | null | null | null | # coding=utf-8
"""
> time \ls -1 **/*.xlsx | python sheet_names_xlrd.py
> for fname in **/*.xlsx; do time echo $fname | python sheet_names_xlrd.py; done
❯ time echo 'xlsx/SOX Controls Testing Template.xlsx' | python sheet_names_xlrd.py
[u'Interim Testing', u'Year End Testing']
echo 'xlsx/SOX Controls Testing Template.xlsx' 0.00s user 0.00s system 31% cpu 0.003 total
python sheet_names_xlrd.py 62.72s user 0.87s system 99% cpu 1:03.79 total
INFO:sheet_names_api:Slow (took longer than 0.5 seconds) files:
{
"0.5802590847015381": "xlsm/Outline Check.xlsm",
"0.7123560905456543": "xlsx/SOX Failure Listing Status.xlsx",
"65.0460250377655": "xlsx/SOX Controls Testing Template.xlsx",
"0.87471604347229": "xlsx/-hp8gt.xlsx",
"1.2309041023254395": "xlsx/SOX Testing Status.xlsx",
"1.5334298610687256": "xlsm/Compare XML.xlsm"
}
"""
from sheet_names_api import main
def _xlrd(filepath):
    import xlrd
    # https://secure.simplistix.co.uk/svn/xlrd/trunk/xlrd/doc/xlrd.html?p=4966
    # with xlrd.open_workbook(filepath, on_demand=True, ragged_rows=True) as wb:
    # sheet_names = wb.sheet_names()
    # return sheet_names
    # How about with memory mapping? Nope, blows up on both xls and xlsx
    import contextlib
    import mmap
    import os
    length = 2**10 * 4
    # length = 0 # whole file
    with open(filepath, 'rb') as f:
        # mmap throws if length is larger than file size
        length = min(os.path.getsize(filepath), length)
        with contextlib.closing(mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ)) as m,\
                xlrd.open_workbook(on_demand=True, file_contents=m) as wb:
            sheet_names = wb.sheet_names()
            return sheet_names


if __name__ == '__main__':
    main(_xlrd, ['xls/SMITH 2014 TRIP-Master List.xls'])
| 37.5625 | 94 | 0.691625 | 274 | 1,803 | 4.427007 | 0.489051 | 0.098928 | 0.052762 | 0.065952 | 0.237428 | 0.201154 | 0.161583 | 0.06925 | 0.06925 | 0.06925 | 0 | 0.096796 | 0.186356 | 1,803 | 47 | 95 | 38.361702 | 0.72938 | 0.654465 | 0 | 0 | 0 | 0 | 0.074135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
afe3566098dc4e4a8a080871d6a7f8b01b9ea714 | 9,922 | py | Python | server_py_files/data/filestream.py | bopopescu/timing_system_software | 11dbb8143dc883507c886a4136cf1de0e3534602 | [
"MIT"
] | 1 | 2019-02-03T14:55:48.000Z | 2019-02-03T14:55:48.000Z | server_py_files/data/filestream.py | bopopescu/timing_system_software | 11dbb8143dc883507c886a4136cf1de0e3534602 | [
"MIT"
] | null | null | null | server_py_files/data/filestream.py | bopopescu/timing_system_software | 11dbb8143dc883507c886a4136cf1de0e3534602 | [
"MIT"
] | 1 | 2020-07-23T17:25:41.000Z | 2020-07-23T17:25:41.000Z | # -*- coding: utf-8 -*-
"""
Created on Sat Apr 05 21:26:33 2014
@author: Nate
"""
import time, datetime, uuid, io, os
import xstatus_ready
import file_locations
import XTSM_Server_Objects
import pdb
import msgpack
import cStringIO
import zlib
import zipfile
import pprint
DEFAULT_CHUNKSIZE=100*1000*1000
class FileStream_old(xstatus_ready.xstatus_ready, XTSM_Server_Objects.XTSM_Server_Object):
"""
A custom file stream object for data bombs and XTSM stacks; wraps io module to create
an infinite output stream to a series of files of approximately one
'chunksize' length. As data is written in, this stream will automatically
close files that exceed the chunksize and open another. the write method
will return the name data was written into - no chunk of data passed in
a single call to write will be segmented into multiple files
"""
def __init__(self, params={}):
print "class FileStream, func __init__()"
today = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d')
self.compression_strength = 9 # 1-9, 1 fastest, least compression. 6 default
self.compressobj = zlib.compressobj(self.compression_strength)
defaultparams = { 'timecreated':time.time(),
'chunksize': DEFAULT_CHUNKSIZE,
'byteswritten' : 0}
try:
location_root = file_locations.file_locations[params['file_root_selector']][uuid.getnode()]
defaultparams.update({'location_root':location_root+'/'+today+'/'})
except KeyError:
print "error"
raise self.UnknownDestinationError
for key in params.keys():
defaultparams.update({key:params[key]})
for key in defaultparams.keys():
setattr(self, key, defaultparams[key])
self.location = self.location_root + str(uuid.uuid1()) + '.msgp'
try:
#self.zip_file = zipfile.ZipFile(self.location, mode='a', compression=zipfile.ZIP_DEFLATED)
self.stream = io.open(self.location, 'ab')
except IOError: #Folder doesn't exist, then we make the day's folder.
os.makedirs(self.location_root)
#self.zip_file = zipfile.ZipFile()
self.stream = io.open(self.location, 'ab')
#self.write(msgpack.packb('}'))
self.filehistory = [self.location]
print self.location
class UnknownDestinationError(Exception):
pass
def output_log(self):
"""
outputs a log of recently written files
"""
logstream=io.open(self.location_root+'DBFS_LOG.txt','a')
time_format = '%Y-%m-%d %H:%M:%S'
time1 = datetime.datetime.fromtimestamp(time.time()).strftime(time_format)
time2 = datetime.datetime.fromtimestamp(time.time()).strftime(time_format)
timeheader= time1 + " through "+ time2
msg = "\nThis is a log of file writes from the DataBomb module:\n"
msg = msg + "This module has written the files below from the time period\n"
msg = msg + timeheader + '\n\n'.join(self.filehistory)
logstream.write(unicode(msg))
pprint.pprint(dir(self), logstream)
logstream.close()
def write(self,bytestream, keep_stream_open=False):
"""
writes bytes to the io stream - if the total bytes written by this
and previous calls since last chunk started exceeds chunksize,
opens a new file for the next chunk after writing the current request
returns the file location of the chunk written.
"""
self.byteswritten += len(bytestream)
cBlock = self.compressobj.compress(bytestream)
self.stream.write(cBlock)
if (self.byteswritten > self.chunksize) and (not keep_stream_open):
self.__flush__()
self.stream.write(bytestream)
self.byteswritten += len(bytestream)
return self.location
def open_file(self):
fileName = 'c:/wamp/www/raw_buffers/DBFS/2014-10-13/6ea6bf2e-52fe-11e4-b225-0010187736b5.msgp'
import zlib
import cStringIO
import zipfile
zf = zipfile.ZipFile(fileName, 'r')
print zf.namelist()
for info in zf.infolist():
print info.filename
print '\tComment:\t', info.comment
print '\tModified:\t', datetime.datetime(*info.date_time)
print '\tSystem:\t\t', info.create_system, '(0 = Windows, 3 = Unix)'
print '\tZIP version:\t', info.create_version
print '\tCompressed:\t', info.compress_size, 'bytes'
print '\tUncompressed:\t', info.file_size, 'bytes'
print
#info = zf.getinfo(filename)
#data = zf.read(filename)
f = open(fileName,'rb')
c = zlib.decompressobj()
cBlock = c.decompress(f.read())
print cBlock
output = cStringIO.StringIO(cBlock)
unpacker = msgpack.Unpacker(output,use_list=False)# If data was msgpacked
print unpacker.next()
print cBlock
def chunkon(self):
"""
this method creates a file for the next chunk of data
"""
#self.stream.write(msgpack.packb('{'))
self.stream.close()
self.location = self.location_root + str(uuid.uuid1()) + '.msgp'
self.stream = io.open(self.location,'ab')
self.compressobj = zlib.compressobj(self.compression_strength)
#self.stream.write(msgpack.packb('}'))
self.filehistory.append(self.location)
self.byteswritten = 0
def __flush__(self):
cBlock = self.compressobj.flush()
self.stream.write(cBlock)
self.stream.flush()
self.chunkon()
self.output_log()
##############################################################################
class Filestream(xstatus_ready.xstatus_ready, XTSM_Server_Objects.XTSM_Server_Object):
"""
CP
A custom file stream object for data bombs and XTSM stacks; wraps zipfile
module. the write method
will return the full path to the data. no chunk of data passed in
a single call to write will be segmented into multiple files.
"""
def __init__(self, params={}):
print "class FS, func __init__()"
self.init_time = time.time()
self.today = datetime.datetime.fromtimestamp(self.init_time).strftime('%Y-%m-%d')
self.defaultparams = {'zip archive created':self.init_time}
try:
self.location_root = file_locations.file_locations[params['file_root_selector']][uuid.getnode()]
self.defaultparams.update({'location_root':self.location_root+'\\'+self.today+'\\'})
except KeyError:
print "error"
pdb.set_trace()
raise self.UnknownDestinationError
for key in params.keys():
self.defaultparams.update({key:params[key]})
for key in self.defaultparams.keys():
setattr(self, key, self.defaultparams[key])
self.logstream = io.open(self.location_root + 'filestream_log.txt', 'a')
self.logstream.write(unicode('This is a log of file writes\n'))
self.root_zip_name = str(uuid.uuid1()) + '.zip'
print self.location_root
class UnknownDestinationError(Exception):
pass
def output_log(self):
"""
outputs a log of recently written files
"""
self.logstream = io.open(self.location_root + 'filestream_log.txt', 'a')
time_format = '%Y-%m-%d %H:%M:%S'
time1 = datetime.datetime.fromtimestamp(self.init_time).strftime(time_format)
timeheader= time1
msg = "This module has written,\n"
msg = msg + self.zip_file_name + '\\' + self.fileName + '\n'
msg = msg + "at time, " + timeheader + '\n'
self.logstream.write(unicode(msg))
#pprint.pprint(unicode(dir(self)), logstream)
self.logstream.close()
def _write_file(self, msg, comments='', prefix='', extension='.dat', is_backup=False):
"""
writes a file to the zip archive.
"""
if is_backup:
self.zip_file_name = self.location_root + 'Backup_' + self.root_zip_name
else:
self.zip_file_name = self.location_root + self.root_zip_name
try:
self.zip_file = zipfile.ZipFile(self.zip_file_name,
mode='a',
compression=zipfile.ZIP_DEFLATED,
allowZip64=True)
except IOError: #Folder doesn't exist, then we make the day's folder.
os.makedirs(self.location_root)
self.zip_file = zipfile.ZipFile(self.zip_file_name,
mode='a',
compression=zipfile.ZIP_DEFLATED,
allowZip64=True)
self.fileName = str(prefix) + str(uuid.uuid1()) + str(extension)
info = zipfile.ZipInfo(self.fileName, date_time=time.localtime(time.time()))
info.compress_type = zipfile.ZIP_DEFLATED
info.comment = comments + str(self.defaultparams)
self.zip_file.writestr(info, msg)
self.zip_file.close()
self.output_log()
return self.zip_file_name + "/" + self.fileName
def write_file(self, msg, comments='', prefix='', extension='.dat', is_backup=False):
"""
writes a file to the zip archive.
"""
self._write_file(msg, comments=comments, prefix='Backup_'+prefix, extension=extension, is_backup=True)
self.check_todays_files()
return self._write_file(msg, comments=comments, prefix=prefix, extension=extension, is_backup=False)
def __flush__(self):
pass
| 42.042373 | 110 | 0.606632 | 1,164 | 9,922 | 5.042955 | 0.229381 | 0.044974 | 0.032709 | 0.018399 | 0.51414 | 0.461158 | 0.38586 | 0.322147 | 0.267121 | 0.252129 | 0 | 0.010745 | 0.277767 | 9,922 | 235 | 111 | 42.221277 | 0.808401 | 0.051804 | 0 | 0.372671 | 0 | 0.006211 | 0.093647 | 0.010251 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.018634 | 0.080745 | null | null | 0.124224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
afe3c60ac04c3ecfe19a0cf5fecce4f3eaf59c99 | 559 | py | Python | backend/portfolify/wsgi.py | JermyTan/Portfolify | 40d86862747699a69e8b1fe55fa5fe0fec0b9776 | [
"MIT"
] | 3 | 2021-01-16T16:03:49.000Z | 2022-03-03T15:11:14.000Z | backend/portfolify/wsgi.py | JermyTan/portfolify | 40d86862747699a69e8b1fe55fa5fe0fec0b9776 | [
"MIT"
] | null | null | null | backend/portfolify/wsgi.py | JermyTan/portfolify | 40d86862747699a69e8b1fe55fa5fe0fec0b9776 | [
"MIT"
] | null | null | null | """
WSGI config for portfolify project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/howto/deployment/wsgi/
"""
import os
import sys
from django.core.wsgi import get_wsgi_application
# only for dev/test
from dotenv import load_dotenv
TESTING = "test" in sys.argv
load_dotenv(".env.backend.test" if TESTING else ".env.backend.local")
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'portfolify.settings')
application = get_wsgi_application()
| 24.304348 | 78 | 0.779964 | 83 | 559 | 5.156627 | 0.650602 | 0.046729 | 0.084112 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00404 | 0.11449 | 559 | 22 | 79 | 25.409091 | 0.860606 | 0.420394 | 0 | 0 | 0 | 0 | 0.253165 | 0.06962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
afe4611c351cf1f11c0820aaf3d3d5d4db377839 | 649 | py | Python | leetcode/climbingStairs.py | montukv/Coding-problem-solutions | 973009c00038cc57500d965871376a60f8c4e0d1 | [
"MIT"
] | null | null | null | leetcode/climbingStairs.py | montukv/Coding-problem-solutions | 973009c00038cc57500d965871376a60f8c4e0d1 | [
"MIT"
] | null | null | null | leetcode/climbingStairs.py | montukv/Coding-problem-solutions | 973009c00038cc57500d965871376a60f8c4e0d1 | [
"MIT"
] | null | null | null | '''70. Climbing Stairs
Easy
3866
127
Add to List
Share
You are climbing a stair case. It takes n steps to reach to the top.
Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?
Note: Given n will be a positive integer.
Example 1:
Input: 2
Output: 2
Explanation: There are two ways to climb to the top.
1. 1 step + 1 step
2. 2 steps
Example 2:
Input: 3
Output: 3
Explanation: There are three ways to climb to the top.
1. 1 step + 1 step + 1 step
2. 1 step + 2 steps
3. 2 steps + 1 step'''
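# Number of distinct ways follows the Fibonacci recurrence: ways(n) = ways(n-1) + ways(n-2).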
n = int(input())
a,b = 1,2
for i in range(n-1):
    a,b = b,a+b
print(a) | 17.078947 | 97 | 0.647149 | 131 | 649 | 3.206107 | 0.427481 | 0.083333 | 0.07619 | 0.092857 | 0.159524 | 0.142857 | 0.142857 | 0.142857 | 0.142857 | 0.142857 | 0 | 0.07431 | 0.274268 | 649 | 38 | 98 | 17.078947 | 0.81741 | 0.822804 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
afe80106cb57e7805a34937dd45e9e7caf4d8c02 | 433 | py | Python | server/run_server.py | juandisay/twisted-docker | c3b317e70100801cbd8f883598484f6b77f56f73 | [
"MIT"
] | 5 | 2016-01-30T21:09:21.000Z | 2021-08-31T15:17:05.000Z | server/run_server.py | juandisay/twisted-docker | c3b317e70100801cbd8f883598484f6b77f56f73 | [
"MIT"
] | null | null | null | server/run_server.py | juandisay/twisted-docker | c3b317e70100801cbd8f883598484f6b77f56f73 | [
"MIT"
] | 4 | 2019-02-25T10:58:45.000Z | 2020-04-21T23:56:32.000Z | from twisted.application import service, internet
from server import HTTPEchoFactory
import os
# default port in case of the env var not was properly set.
ECHO_SERVER_PORT = 8000
proxy_port = int(os.environ.get('ECHO_SERVER_PORT', ECHO_SERVER_PORT))
application = service.Application('TwistedDockerized')
factory = HTTPEchoFactory()
server = internet.TCPServer(proxy_port, factory)
server.setServiceParent(application) | 33.307692 | 71 | 0.796767 | 55 | 433 | 6.127273 | 0.563636 | 0.089021 | 0.124629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01061 | 0.12933 | 433 | 13 | 72 | 33.307692 | 0.883289 | 0.13164 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
afeb7607481ba24537f5730e63e67c9233d3bd33 | 420 | py | Python | hello.py | gwenzek/func_argparser | 80afe8eb46c3fa85d0679c13eaad0f2b519d9b62 | [
"BSD-3-Clause"
] | 9 | 2019-12-22T09:06:47.000Z | 2022-03-04T10:38:39.000Z | hello.py | gwenzek/func_argparser | 80afe8eb46c3fa85d0679c13eaad0f2b519d9b62 | [
"BSD-3-Clause"
] | 2 | 2020-03-06T19:11:29.000Z | 2020-05-03T12:33:09.000Z | hello.py | gwenzek/func_argparser | 80afe8eb46c3fa85d0679c13eaad0f2b519d9b62 | [
"BSD-3-Clause"
] | 1 | 2020-05-02T15:45:05.000Z | 2020-05-02T15:45:05.000Z | """Say hello or goodbye to the user."""
import func_argparse
def hello(user: str, times: int = None):
"""Say hello.
Arguments:
user: name of the user
"""
print(f"Hello {user}" * (1 if times is None else times))
def bye(user: str, see_you: float = 1.0):
"""Say goodbye."""
print(f"Goodbye {user}, see you in {see_you:.1f} days")
if __name__ == "__main__":
func_argparse.main()
| 19.090909 | 60 | 0.604762 | 64 | 420 | 3.78125 | 0.515625 | 0.07438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.238095 | 420 | 21 | 61 | 20 | 0.74375 | 0.228571 | 0 | 0 | 0 | 0 | 0.220339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.428571 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
afee20d2b8c7e85de3a27f20ac4aaa7c1fa22ef1 | 2,242 | py | Python | sample.py | uguratar/pyzico | b779d590b99392df60db7c5e2df832708df9b6a2 | [
"MIT"
] | 6 | 2015-05-03T10:48:54.000Z | 2018-03-06T12:36:02.000Z | sample.py | uguratar/pyzico | b779d590b99392df60db7c5e2df832708df9b6a2 | [
"MIT"
] | 1 | 2021-06-01T22:06:45.000Z | 2021-06-01T22:06:45.000Z | sample.py | uguratar/pyzico | b779d590b99392df60db7c5e2df832708df9b6a2 | [
"MIT"
] | null | null | null | # coding=utf-8
from iyzico import Iyzico
from iyzico_objects import IyzicoCard, IyzicoCustomer, \
IyzicoCardToken, IyzicoHTTPException, IyzicoValueException
if __name__ == '__main__':
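    # Build a test card and customer, then attempt a six-installment debit through the Iyzico client.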
    my_card = IyzicoCard("4242424242424242", "10", "2015", "000",
                         "Python Test")
    my_customer = IyzicoCustomer("First Name", "Last Name",
                                 "email@email")
    payment = Iyzico()
    try:
        result = payment.debit_with_installment(6.6612132, my_card,
                "Installment "
                "Iyzico python library test",
                "TRY", my_customer, True, 6)
        if result.success:
            print result.transaction_state
            print result.transaction_id
            print result.reference_id
            print result.request_id
            print result.card_token
            my_token = IyzicoCardToken(result.card_token)
        else:
            print result.error_code
            print result.error_message
    except (IyzicoHTTPException, IyzicoValueException) as ex:
        print ex
'''result = payment.debit_with_token(1, my_token,
"Python debit with "
"card token",
"TRY")'''
'''result = payment.register_card(my_card)
result = payment.delete_card(my_token)'''
'''result2 = payment.pre_authorize(1, my_card,
"Iyzico python library test",
"TRY")
print result2.success
result3 = payment.capture(1, result2.transaction_id,
"Iyzico python library test",
"TRY")
print result3.success
result4 = payment.reversal(1, result.transaction_id,
"Iyzico python library test",
"TRY")
print result4.success
result5 = payment.refund(1, result3.transaction_id,
"Iyzico python library test",
"TRY")
print result5.success'''
| 32.028571 | 73 | 0.501784 | 190 | 2,242 | 5.731579 | 0.336842 | 0.070707 | 0.087236 | 0.105601 | 0.173554 | 0.149679 | 0.121212 | 0.121212 | 0 | 0 | 0 | 0.03858 | 0.421945 | 2,242 | 69 | 74 | 32.492754 | 0.801698 | 0.005352 | 0 | 0 | 0 | 0 | 0.099567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.307692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
aff207ea4a0a28746ce77ebdc560cc119cfda66c | 1,719 | py | Python | apps/blog/resources.py | ride90/eve_features | 6cff35f8c4711ae030d6157565e4f0e77a92ab98 | [
"MIT"
] | 1 | 2021-10-03T05:30:46.000Z | 2021-10-03T05:30:46.000Z | apps/blog/resources.py | ride90/eve_features | 6cff35f8c4711ae030d6157565e4f0e77a92ab98 | [
"MIT"
] | null | null | null | apps/blog/resources.py | ride90/eve_features | 6cff35f8c4711ae030d6157565e4f0e77a92ab98 | [
"MIT"
] | null | null | null | RESOURCES = {
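# Eve-style resource domain: each 'posts' document references a 'categories' document and a list of 'tags' via objectid data relations.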
'posts': {
'schema': {
'title': {
'type': 'string',
'minlength': 3,
'maxlength': 30,
'required': True,
'unique': False
},
'body': {
'type': 'string',
'required': True,
'unique': True
},
'published': {
'type': 'boolean',
'default': False
},
'category': {
'type': 'objectid',
'data_relation': {
'resource': 'categories',
'field': '_id',
'embeddable': True
},
'required': True
},
'tags': {
'type': 'list',
'default': [],
'schema': {
'type': 'objectid',
'data_relation': {
'resource': 'tags',
'field': '_id',
'embeddable': True
}
}
}
},
},
'categories': {
'schema': {
'name': {
'type': 'string',
'minlength': 2,
'maxlength': 10,
'required': True,
'unique': True
}
},
'item_title': 'category',
},
'tags': {
'schema': {
'name': {
'type': 'string',
'minlength': 2,
'maxlength': 10,
'required': True,
'unique': True
}
}
}
}
| 25.279412 | 45 | 0.275742 | 83 | 1,719 | 5.650602 | 0.39759 | 0.127932 | 0.153518 | 0.140725 | 0.405117 | 0.268657 | 0.268657 | 0.268657 | 0.268657 | 0.268657 | 0 | 0.012605 | 0.584642 | 1,719 | 67 | 46 | 25.656716 | 0.644258 | 0 | 0 | 0.469697 | 0 | 0 | 0.236184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |