# File: zaqar-8.0.0/zaqar/tests/unit/transport/websocket/base.py (Apache-2.0)
# Copyright (c) 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
from oslo_serialization import jsonutils
from zaqar import bootstrap
from zaqar.conf import default
from zaqar.conf import drivers_transport_websocket
from zaqar.conf import transport
from zaqar import tests as testing

class TestBase(testing.TestBase):

    config_file = None

    def setUp(self):
        super(TestBase, self).setUp()
        if not self.config_file:
            self.skipTest("No config specified")
        self.conf.register_opts(default.ALL_OPTS)
        self.conf.register_opts(transport.ALL_OPTS,
                                group=transport.GROUP_NAME)
        self.transport_cfg = self.conf[transport.GROUP_NAME]
        self.conf.register_opts(drivers_transport_websocket.ALL_OPTS,
                                group=drivers_transport_websocket.GROUP_NAME)
        self.ws_cfg = self.conf[drivers_transport_websocket.GROUP_NAME]
        self.conf.unreliable = True
        self.conf.admin_mode = True
        self.boot = bootstrap.Bootstrap(self.conf)
        self.addCleanup(self.boot.storage.close)
        self.addCleanup(self.boot.control.close)
        self.transport = self.boot.transport
        self.api = self.boot.api

    def tearDown(self):
        if self.conf.pooling:
            self.boot.control.pools_controller.drop_all()
            self.boot.control.catalogue_controller.drop_all()
        super(TestBase, self).tearDown()

class TestBaseFaulty(TestBase):
    """This test ensures we aren't letting any exceptions go unhandled."""


class V1Base(TestBase):
    """Base class for V1 API Tests.

    Should contain methods specific to V1 of the API
    """
    pass


class V1BaseFaulty(TestBaseFaulty):
    """Base class for V1 API Faulty Tests.

    Should contain methods specific to V1 exception testing
    """
    pass


class V1_1Base(TestBase):
    """Base class for V1.1 API Tests.

    Should contain methods specific to V1.1 of the API
    """

    def _empty_message_list(self, body):
        self.assertEqual([], jsonutils.loads(body[0])['messages'])


class V1_1BaseFaulty(TestBaseFaulty):
    """Base class for V1.1 API Faulty Tests.

    Should contain methods specific to V1.1 exception testing
    """
    pass


class V2Base(V1_1Base):
    """Base class for V2 API Tests.

    Should contain methods specific to V2 of the API
    """


class V2BaseFaulty(V1_1BaseFaulty):
    """Base class for V2 API Faulty Tests.

    Should contain methods specific to V2 exception testing
    """

# File: backend/Backendapi/douban/serializers.py (MIT)
from rest_framework.response import Response
from rest_framework.serializers import (
    SerializerMethodField,
    ModelSerializer,
    ValidationError,
    DateTimeField,
    CharField,
    IntegerField,
    )
from .models import Comment,Reading,Review
from rest_framework import serializers


class DoubanCommentSerializer(ModelSerializer):
    class Meta:
        model = Comment
        fields =[
            'id',
            'isbn13',
            'author',
            'time',
            'star',
            'vote',
            'content',
        ]


class DoubanReadingSerializer(ModelSerializer):
    class Meta:
        model = Reading
        fields = [
            'id',
            'isbn13',
            'note',
            'content'
        ]


class DoubanReviewSerialzier(ModelSerializer):
    class Meta:
        model = Review
        fields = [
            'id',
            'isbn13',
            'author',
            'title',
            'content',
        ]

#
# class DoubanReviewLinkSerializer(ModelSerializer):
#     class Meta:
#         model = Review_Link
#         field = [
#             'id',
#             'isbn13',
#             'title',
#             'link',
#         ]

# File: HCm-opt/HCm_v4.2/HCm_v4.2.py (MIT)
# Filename: HII-CHCm_v 4.2.py
import string
import numpy as np
import sys
#sys.stderr = open('errorlog.txt', 'w')
#Function for interpolation of grids
def interpolate(grid,z,zmin,zmax,n):
    ncol = 9
    vec = []
    for col in range(ncol):
        inter = 0
        no_inter = 0
        for row in range(0,len(grid)):
            if grid[row,z] < zmin or grid[row,z] > zmax: continue
            if z == 2: x = 0; y = 1
            if z == 1: x = 0; y = 2
            if z == 0: x = 1; y = 2
            if row == (len(grid)-1):
                vec.append(grid[row,col])
                no_inter = no_inter + 1
            elif grid[row,x] < grid[row+1,x] or grid[row,y] < grid[row+1,y] :
                vec.append(grid[row,col])
                no_inter = no_inter + 1
            else:
                inter = inter + 1
                for index in range(0,n):
                    i = grid[row,col]+(index)*(grid[row+1,col]-grid[row,col])/n
                    vec.append(i)
    out = np.transpose(np.reshape(vec,(-1,n*inter+no_inter)))
    return out
print (' ---------------------------------------------------------------------')
print ('This is HII-CHI-mistry v. 4.2')
print (' See Perez-Montero, E. (2014) for details')
print (' Insert the name of your input text file with the following columns:')
print (' 3727 [OII], 3868 [NeIII], 4363 [OIII], 5007 [OIII], 6584 [NII], 6725 [SII]')
print ('with their corresponding errors in adjacent columns')
print ('with 0 for missing information.')
print ('---------------------------------------------------------------------')
# Input file reading
if len(sys.argv) == 1:
    if int(sys.version[0]) < 3:
        input00 = raw_input('Insert input file name:')
    else:
        input00 = input('Insert input file name:')
else:
    input00 = str(sys.argv[1])

try:
    input0 = np.loadtxt(input00)
    if (input0.ndim == 1 and input0.shape[0] != 12) or (input0.ndim > 1 and input0.shape[1] != 12):
        print ('The input file does not have 12 columns. Please check')
        sys.exit()
    print ('The input file is:'+input00)
except:
    print ('Input file error: It does not exist or has wrong format')
    sys.exit()
print ('')
output = []
# Iterations for Montecarlo error derivation
if len(sys.argv) < 3:
    n = 25
else:
    n = int(sys.argv[2])
print ('The number of iterations for MonteCarlo simulation is: ',n)
print ('')
# Reading of models grids. These can be changed
print ('')
question = True
while question:
    print('-------------------------------------------------')
    print ('(1) POPSTAR with Chabrier IMF, age = 1 Myr')
    print ('(2) AGN, double component, a(OX) = -0.8, a(UV) = -1.0')
    print ('(3) AGN, double component, a(OX) = -1.2, a(UV) = -1.0')
    print('-------------------------------------------------')
    if int(sys.version[0]) < 3:
        sed = raw_input('Choose SED of the models:')
    else:
        sed = input('Choose SED of the models:')
    if sed == '1' or sed == '2' or sed == '3' : question = False

print ('')

question = True
while question:
    if int(sys.version[0]) < 3:
        inter = raw_input('Choose models [0] No interpolated [1] Interpolated: ')
    else:
        inter = input('Choose models [0] No interpolated [1] Interpolated: ')
    if inter == '0' or inter == '1': question = False
print ('')
sed = int(sed)
inter = int(inter)
if inter == 0 and sed==1:
    sed_type = 'POPSTAR, age = 1 Myr, Chabrier IMF. No interpolation'
    grid1 = np.loadtxt('C17_cha_1Myr_v4.0.dat')
    grid2 = np.loadtxt('C17_cha_1Myr_logU_adapted_emp_v4.0.dat')
    grid3 = np.loadtxt('C17_cha_1Myr_logU-NO_adapted_emp_v4.0.dat')
    print ('No interpolation for the POPSTAR models is going to be used.')
    print ('The grid has a resolution of 0.1dex for O/H and 0.125dex for N/O')
    print ('')
    res_NO = 0.125
elif inter == 1 and sed==1:
    sed_type = 'POPSTAR, age = 1 Myr, Chabrier IMF. interpolation'
    grid1 = np.loadtxt('C17_cha_1Myr_v4.0.dat')
    grid2 = np.loadtxt('C17_cha_1Myr_logU_adapted_emp_v4.0.dat')
    grid3 = np.loadtxt('C17_cha_1Myr_logU-NO_adapted_emp_v4.0.dat')
    print ('Interpolation for the POPSTAR models is going to be used.')
    print ('The grid has a resolution of 0.01dex for O/H and 0.0125dex for N/O')
    print ('')
    res_NO = 0.125
elif inter == 0 and sed==2:
    sed_type = 'Double composite AGN, a(OX) = -0.8. No interpolation'
    grid1 = np.loadtxt('C17_agn_v4.0.dat')
    grid2 = np.loadtxt('C17_agn_v4.0.dat')
    grid3 = np.loadtxt('C17_agn_NO_adapted_emp_v4.0.dat')
    print ('No interpolation for the AGN a(ox) = -0.8 models is going to be used.')
    print ('The grid has a resolution of 0.1dex for O/H and 0.125dex for N/O')
    print ('')
    res_NO = 0.125
elif inter == 1 and sed==2:
    sed_type = 'Double composite AGN, a(OX) = -0.8. Interpolation'
    grid1 = np.loadtxt('C17_agn_v4.0.dat')
    grid2 = np.loadtxt('C17_agn_v4.0.dat')
    grid3 = np.loadtxt('C17_agn_NO_adapted_emp_v4.0.dat')
    print ('Interpolation for the AGN a(ox) = -0.8 models is going to be used.')
    print ('The grid has a resolution of 0.01dex for O/H and 0.0125 dex for N/O')
    print ('')
    res_NO = 0.125
elif inter == 0 and sed==3:
    sed_type = 'Double composite AGN, a(OX) = -1.2. No interpolation'
    grid1 = np.loadtxt('C17_agn_a12_v4.0.dat')
    grid2 = np.loadtxt('C17_agn_a12_v4.0.dat')
    grid3 = np.loadtxt('C17_agn_a12_NO_adapted_emp_v4.0.dat')
    print ('No interpolation for the AGN a(ox) = -1.2 models is going to be used.')
    print ('The grid has a resolution of 0.1dex for O/H and 0.125dex for N/O')
    print ('')
    res_NO = 0.125
elif inter == 1 and sed==3:
    sed_type = 'Double composite AGN, a(OX) = -1.2. Interpolation'
    grid1 = np.loadtxt('C17_agn_a12_v4.0.dat')
    grid2 = np.loadtxt('C17_agn_a12_v4.0.dat')
    grid3 = np.loadtxt('C17_agn_a12_NO_adapted_emp_v4.0.dat')
    print ('Interpolation for the AGN a(ox) = -1.2 models is going to be used.')
    print ('The grid has a resolution of 0.01 dex for O/H and 0.0125 dex for N/O')
    print ('')
    res_NO = 0.125
# Input file reading
if input0.shape == (12,):
    input1 = [0,0,0,0,0,0,0,0,0,0, 0, 0,input0[0],input0[1],input0[2],input0[3],input0[4],input0[5],input0[6],input0[7],input0[8],input0[9],input0[10],input0[11]]
    input = np.reshape(input1,(2,12))
else:
    input = input0
print ('Reading grids ....')
print ('')
print ('')
print ('----------------------------------------------------------------')
print ('(%) Grid 12+log(O/H) log(N/O) log(U)')
print ('-----------------------------------------------------------------')
# Beginning of loop of calculation
count = 0
for tab in input:
    count = count + 1
    OH_mc = []
    NO_mc = []
    logU_mc = []
    OHe_mc = []
    NOe_mc = []
    logUe_mc = []
    output.append(tab[0])
    output.append(tab[1])
    output.append(tab[2])
    output.append(tab[3])
    output.append(tab[4])
    output.append(tab[5])
    output.append(tab[6])
    output.append(tab[7])
    output.append(tab[8])
    output.append(tab[9])
    output.append(tab[10])
    output.append(tab[11])
    # Selection of grid
    if tab[4] > 0 and tab[6] > 0:
        grid = grid1
        grid_type = 1
        output.append(1)
    elif tab[8] > 0 and (tab[0] > 0 or tab[10] > 0):
        grid = grid2
        grid_type = 2
        output.append(2)
    else:
        grid = grid3
        grid_type = 3
        output.append(3)
    # Calculation of N/O
    if tab[8] == 0 or (tab[0] == 0 and tab[10] == 0):
        NOff = -10
        eNOff = 0
    else:
        for monte in range(0,n,1):
            NO_p = 0
            den_NO = 0
            NO_e = 0
            den_NO_e = 0
            tol_max = 1e2
            if tab[0] == 0:
                OII_3727_obs = 0
            else:
                OII_3727_obs = np.random.normal(tab[0],tab[1]+1e-5)
                if OII_3727_obs <= 0: OII_3727_obs = 0
            if tab[4] == 0:
                OIII_4363_obs = 0
            else:
                OIII_4363_obs = np.random.normal(tab[4],tab[5]+1e-5)
                if OIII_4363_obs <= 0: OIII_4363_obs = 0
            if tab[6] == 0:
                OIII_5007_obs = 0
            else:
                OIII_5007_obs = np.random.normal(tab[6],tab[7]+1e-5)
                if OIII_5007_obs <= 0: OIII_5007_obs = 0
            if OIII_4363_obs == 0 or OIII_5007_obs == 0:
                ROIII_obs = 0
            else:
                ROIII_obs = OIII_5007_obs / OIII_4363_obs
            if tab[8] == 0:
                NII_6584_obs = 0
            else:
                NII_6584_obs = np.random.normal(tab[8],tab[9]+1e-3)
                if NII_6584_obs <= 0: NII_6584_obs = 0
            if tab[10] == 0:
                SII_6725_obs = 0
            else:
                SII_6725_obs = np.random.normal(tab[10],tab[11]+1e-3)
                if SII_6725_obs <= 0: SII_6725_obs = 0
            if NII_6584_obs == 0 or OII_3727_obs == 0:
                N2O2_obs = -10
            else:
                N2O2_obs = np.log10(NII_6584_obs / OII_3727_obs)
            if NII_6584_obs == 0 or SII_6725_obs == 0:
                N2S2_obs = -10
            else:
                N2S2_obs = np.log10(NII_6584_obs / SII_6725_obs)
            CHI_ROIII = 0
            CHI_N2O2 = 0
            CHI_N2S2 = 0
            CHI_NO = 0
            for index in grid:
                if ROIII_obs == 0:
                    CHI_ROIII = 0
                elif index[5] == 0:
                    CHI_ROIII = tol_max
                else:
                    CHI_ROIII = (index[6]/index[5]- ROIII_obs)**2/(index[6]/index[5])
                if N2O2_obs == -10:
                    CHI_N2O2 = 0
                elif index[3] == 0 or index[7] == 0:
                    CHI_N2O2 = tol_max
                else:
                    CHI_N2O2 =(np.log10(index[7]/index[3]) - N2O2_obs)**2/(abs(np.log10(index[7]/index[3])+1e-5))
                if N2S2_obs == -10:
                    CHI_N2S2 = 0
                elif index[7] == 0 or index[8] == 0:
                    CHI_N2S2 = tol_max
                else:
                    CHI_N2S2 =(np.log10(index[7]/index[8]) - N2S2_obs)**2/(abs(np.log10(index[7]/index[8])+1e-5))
                CHI_NO = (CHI_ROIII**2 + CHI_N2O2**2 + CHI_N2S2**2)**0.5
                NO_p = index[1] / (CHI_NO) + NO_p
                den_NO = 1 / (CHI_NO) + den_NO
            NO = NO_p / den_NO
            # Calculation of N/O error
            CHI_ROIII = 0
            CHI_N2O2 = 0
            CHI_N2S2 = 0
            CHI_NO = 0
            for index in grid:
                if ROIII_obs == 0:
                    CHI_ROIII = 0
                elif index[5] == 0:
                    CHI_ROIII = tol_max
                else:
                    CHI_ROIII = (index[6]/index[5]- ROIII_obs)**2/(index[6]/index[5])
                if N2O2_obs == -10:
                    CHI_N2O2 = 0
                elif index[3] == 0 or index[7] == 0:
                    CHI_N2O2 = tol_max
                else:
                    CHI_N2O2 =(np.log10(index[7]/index[3]) - N2O2_obs)**2/(abs(np.log10(index[7]/index[3])+1e-5))
                if N2S2_obs == -10:
                    CHI_N2S2 = 0
                elif index[7] == 0 or index[8] == 0:
                    CHI_N2S2 = tol_max
                else:
                    CHI_N2S2 =(np.log10(index[7]/index[8]) - N2S2_obs)**2/(abs(np.log10(index[7]/index[8])+1e-5))
                CHI_NO = (CHI_ROIII**2 + CHI_N2O2**2 + CHI_N2S2**2)**0.5
                NO_e = (index[1] - NO)**2 / (CHI_NO) + NO_e
                den_NO_e = 1 / (CHI_NO) + den_NO_e
            eNO = NO_e / den_NO_e
            #Iterations for the interpolation mode
            if inter == 0 or NO == -10:
                NOf = NO
            elif inter == 1:
                igrid = grid[np.lexsort((grid[:,0],grid[:,2]))]
                igrid = interpolate(igrid,1,NO-eNO-0.125,NO+eNO,10)
                CHI_ROIII = 0
                CHI_N2O2 = 0
                CHI_N2S2 = 0
                CHI_NO = 0
                NO_p = 0
                den_NO = 0
                for index in igrid:
                    if ROIII_obs == 0:
                        CHI_ROIII = 0
                    elif index[5] == 0:
                        CHI_ROIII = tol_max
                    else:
                        CHI_ROIII = (index[6]/index[5]- ROIII_obs)**2/(index[6]/index[5])
                    if OIII_5007_obs == 0:
                        CHI_OIII = 0
                    elif index[6] == 0:
                        CHI_OIII = tol_max
                    else:
                        CHI_OIII = (index[6] - OIII_5007_obs)**2/index[6]
                    if OII_3727_obs == 0:
                        CHI_OII = 0
                    elif index[3] == 0:
                        CHI_OII = tol_max
                    else:
                        CHI_OII = (index[3] - OII_3727_obs)**2/index[3]
                    if N2O2_obs == -10:
                        CHI_N2O2 = 0
                    elif index[3] == 0 or index[7] == 0:
                        CHI_N2O2 = tol_max
                    else:
                        CHI_N2O2 =(np.log10(index[7]/index[3]) - N2O2_obs)**2/(abs(np.log10(index[7]/index[3])+1e-5))
                    if N2S2_obs == -10:
                        CHI_N2S2 = 0
                    elif index[7] == 0 or index[8] == 0:
                        CHI_N2S2 = tol_max
                    else:
                        CHI_N2S2 =(np.log10(index[7]/index[8]) - N2S2_obs)**2/(abs(np.log10(index[7]/index[8])+1e-5))
                    CHI_NO = (CHI_ROIII**2 + CHI_N2O2**2 + CHI_N2S2**2)**0.5
                    if CHI_NO == 0:
                        NO_p = NO_p
                        den_NO = den_NO
                    else:
                        NO_p = index[1] / CHI_NO + NO_p
                        den_NO = 1 / CHI_NO + den_NO
                NOf = NO_p / den_NO
            NO_mc.append(NOf)
            NOe_mc.append(eNO)
        NOff = np.mean(NO_mc)
        if NOff > -10: NOff = np.mean(NO_mc[NO_mc > -10])
        eNOff = (np.std(NO_mc)**2+np.mean(NOe_mc)**2)**0.5
        if eNOff > 0: eNOff = (np.std(NO_mc[NO_mc > -10])**2+np.mean(NOe_mc[NO_mc > -10])**2)**0.5
    # Creation of a constrained grid on N/O
    if NOff == -10:
        grid_c = grid
    else:
        grid_mac = []
        for index in grid:
            if np.abs(index[1] - NOff) > np.abs(eNOff+res_NO):
                continue
            else:
                grid_mac.append(index[0])
                grid_mac.append(index[1])
                grid_mac.append(index[2])
                grid_mac.append(index[3])
                grid_mac.append(index[4])
                grid_mac.append(index[5])
                grid_mac.append(index[6])
                grid_mac.append(index[7])
                grid_mac.append(index[8])
        # Integer division so the reshape length is an int under Python 3
        grid_c = np.reshape(grid_mac,(len(grid_mac)//9,9))
    # Calculation of O/H and logU
    for monte in range(0,n,1):
        OH_p = 0
        logU_p = 0
        den_OH = 0
        OH_e = 0
        logU_e = 0
        den_OH_e = 0
        tol_max = 1e2
        if tab[0] == 0:
            OII_3727_obs = 0
        else:
            OII_3727_obs = np.random.normal(tab[0],tab[1]+1e-5)
            if OII_3727_obs <= 0: OII_3727_obs = 0
        if tab[2] == 0:
            NeIII_3868_obs = 0
        else:
            NeIII_3868_obs = np.random.normal(tab[2],tab[3]+1e-5)
            if NeIII_3868_obs <= 0: NeIII_3868_obs = 0
        if tab[4] == 0:
            OIII_4363_obs = 0
        else:
            OIII_4363_obs = np.random.normal(tab[4],tab[5]+1e-5)
            if OIII_4363_obs <= 0: OIII_4363_obs = 0
        if tab[6] == 0:
            OIII_5007_obs = 0
        else:
            OIII_5007_obs = np.random.normal(tab[6],tab[7]+1e-5)
            if OIII_5007_obs <= 0: OIII_5007_obs = 0
        if OIII_4363_obs == 0 or OIII_5007_obs == 0:
            ROIII_obs = 0
        else:
            ROIII_obs = OIII_5007_obs / OIII_4363_obs
        if tab[8] == 0:
            NII_6584_obs = 0
        else:
            NII_6584_obs = np.random.normal(tab[8],tab[9]+1e-3)
            if NII_6584_obs <= 0: NII_6584_obs = 0
        if tab[10] == 0:
            SII_6725_obs = 0
        else:
            SII_6725_obs = np.random.normal(tab[10],tab[11]+1e-3)
            if SII_6725_obs <= 0: SII_6725_obs = 0
        if OII_3727_obs == 0 or OIII_5007_obs== 0:
            O2O3_obs = 0
            R23_obs = -10
        else:
            R23_obs = np.log10(OII_3727_obs + OIII_5007_obs )
            O2O3_obs = (OII_3727_obs / OIII_5007_obs )
        if OII_3727_obs == 0 or NeIII_3868_obs== 0:
            O2Ne3_obs = 0
            R2Ne3_obs = -10
        else:
            O2Ne3_obs = (OII_3727_obs / NeIII_3868_obs )
            R2Ne3_obs = np.log10(OII_3727_obs + NeIII_3868_obs )
        if OIII_5007_obs == 0 or NII_6584_obs == 0:
            O3N2_obs = -10
        else:
            O3N2_obs = np.log10( OIII_5007_obs / NII_6584_obs )
        if OIII_5007_obs == 0 or SII_6725_obs == 0:
            O3S2_obs = -10
        else:
            O3S2_obs = np.log10( OIII_5007_obs / SII_6725_obs )
        if R23_obs == -10 and NII_6584_obs == 0 and ROIII_obs == 0 and R2Ne3_obs == -10 and O3S2_obs == -10:
            OH = 0
            logU = 0
        else:
            CHI_ROIII = 0
            CHI_NII = 0
            CHI_OIII = 0
            CHI_OII = 0
            CHI_O2O3 = 0
            CHI_R23 = 0
            CHI_O2Ne3 = 0
            CHI_R2Ne3 = 0
            CHI_O3N2 = 0
            CHI_O3S2 = 0
            CHI_OH = 0
            for index in grid_c:
                if ROIII_obs == 0:
                    CHI_ROIII = 0
                elif index[5] == 0:
                    CHI_ROIII = tol_max
                else:
                    CHI_ROIII = (index[6]/index[5]- ROIII_obs)**2/(index[6]/index[5])
                if OIII_5007_obs == 0:
                    CHI_OIII = 0
                elif index[6] == 0:
                    CHI_OIII = tol_max
                else:
                    CHI_OIII = (index[6] - OIII_5007_obs)**2/index[6]
                if OII_3727_obs == 0:
                    CHI_OII = 0
                elif index[3] == 0:
                    CHI_OII = tol_max
                else:
                    CHI_OII = (index[3] - OII_3727_obs)**2/index[3]
                if NII_6584_obs == 0:
                    CHI_NII = 0
                elif index[7] == 0:
                    CHI_NII = tol_max
                else:
                    CHI_NII = (index[7] - NII_6584_obs)**2/index[7]
                if OII_3727_obs == 0 or OIII_5007_obs == 0:
                    CHI_O2O3 = 0
                    CHI_R23 = 0
                elif index[3] == 0 or index[6] == 0:
                    CHI_O2O3 = tol_max
                    CHI_R23 = tol_max
                else:
                    CHI_O2O3 = (index[3]/index[6] - O2O3_obs)**2/(index[3]/index[6])
                    CHI_R23 = (np.log10(index[3]+index[6])-R23_obs)**2/ (np.abs(np.log10(index[3]+index[6]+1e-5)))
                if OII_3727_obs == 0 or NeIII_3868_obs == 0:
                    CHI_O2Ne3 = 0
                    CHI_R2Ne3 = 0
                elif index[3] == 0 or index[4] == 0:
                    CHI_O2Ne3 = tol_max
                    CHI_R2Ne3 = tol_max
                else:
                    CHI_O2Ne3 = (index[3]/index[4] - O2Ne3_obs)**2/(index[3]/index[4])
                    CHI_R2Ne3 = (np.log10(index[3]+index[4])-R2Ne3_obs)**2/ (np.abs(np.log10(index[3]+index[4]+1e-5)))
                if OIII_5007_obs == 0 or NII_6584_obs == 0:
                    CHI_O3N2 = 0
                elif index[6] == 0 or index[7] == 0:
                    CHI_O3N2 = tol_max
                else:
                    CHI_O3N2 = (np.log10(index[6]/index[7]) - O3N2_obs)**2/(np.abs(np.log10(index[6]/index[7]+1e-5)))
                if OIII_5007_obs == 0 or SII_6725_obs == 0:
                    CHI_O3S2 = 0
                elif index[6] == 0 or index[8] == 0:
                    CHI_O3S2 = tol_max
                else:
                    CHI_O3S2 = (np.log10(index[6]/index[8]) - O3S2_obs)**2/(np.abs(np.log10(index[6]/index[8]+1e-5)))
                if ROIII_obs > 0:
                    CHI_OH = (CHI_ROIII**2 + CHI_NII**2 + CHI_OII**2 + CHI_OIII**2 )**0.5
                elif ROIII_obs == 0 and NII_6584_obs > 0:
                    CHI_OH = (CHI_NII**2 + CHI_O2O3**2 + CHI_R23**2 + CHI_O3N2**2 + CHI_O3S2**2 )**0.5
                elif ROIII_obs == 0 and NII_6584_obs == 0 and OIII_5007_obs > 0:
                    CHI_OH = (CHI_O2O3**2 + CHI_R23**2 + CHI_O3S2**2)**0.5
                elif ROIII_obs == 0 and OIII_5007_obs == 0:
                    CHI_OH = (CHI_O2Ne3**2 + CHI_R2Ne3**2 )**0.5
                if CHI_OH == 0:
                    OH_p = OH_p
                    logU_p = logU_p
                    den_OH = den_OH
                else:
                    OH_p = index[0] / (CHI_OH) + OH_p
                    logU_p = index[2] / (CHI_OH) + logU_p
                    den_OH = 1 / (CHI_OH) + den_OH
            OH = OH_p / den_OH
            logU = logU_p / den_OH
        #Calculation of error of O/H and logU
        if R23_obs == -10 and NII_6584_obs == 0 and ROIII_obs == 0 and R2Ne3_obs == -10 and O3S2_obs == -10:
            eOH = 0
            elogU = 0
        else:
            CHI_ROIII = 0
            CHI_NII = 0
            CHI_OIII = 0
            CHI_OII = 0
            CHI_O2O3 = 0
            CHI_R23 = 0
            CHI_O2Ne3 = 0
            CHI_R2Ne3 = 0
            CHI_O3N2 = 0
            CHI_O3S2 = 0
            CHI_OH = 0
            for index in grid_c:
                if ROIII_obs == 0:
                    CHI_ROIII = 0
                elif index[5] == 0:
                    CHI_ROIII = tol_max
                else:
                    CHI_ROIII = (index[6]/index[5]- ROIII_obs)**2/(index[6]/index[5])
                if OIII_5007_obs == 0:
                    CHI_OIII = 0
                elif index[6] == 0:
                    CHI_OIII = tol_max
                else:
                    CHI_OIII = (index[6] - OIII_5007_obs)**2/index[6]
                if OII_3727_obs == 0:
                    CHI_OII = 0
                elif index[3] == 0:
                    CHI_OII = tol_max
                else:
                    CHI_OII = (index[3] - OII_3727_obs)**2/index[3]
                if NII_6584_obs == 0:
                    CHI_NII = 0
                elif index[7] == 0:
                    CHI_NII = tol_max
                else:
                    CHI_NII = (index[7] - NII_6584_obs)**2/index[7]
                if OII_3727_obs == 0 or OIII_5007_obs == 0:
                    CHI_O2O3 = 0
                    CHI_R23 = 0
                elif index[3] == 0 or index[6] == 0:
                    CHI_O2O3 = tol_max
                    CHI_R23 = tol_max
                else:
                    CHI_O2O3 = (index[3]/index[6] - O2O3_obs)**2/(index[3]/index[6])
                    CHI_R23 = (np.log10(index[3]+index[6])-R23_obs)**2/ (np.abs(np.log10(index[3]+index[6]+1e-5)))
                if OII_3727_obs == 0 or NeIII_3868_obs == 0:
                    CHI_O2Ne3 = 0
                    CHI_R2Ne3 = 0
                elif index[3] == 0 or index[4] == 0:
                    CHI_O2Ne3 = tol_max
                    CHI_R2Ne3 = tol_max
                else:
                    CHI_O2Ne3 = (index[3]/index[4] - O2Ne3_obs)**2/(index[3]/index[4])
                    CHI_R2Ne3 = (np.log10(index[3]+index[4])-R2Ne3_obs)**2/ (np.abs(np.log10(index[3]+index[4]+1e-5)))
                if OIII_5007_obs == 0 or NII_6584_obs == 0:
                    CHI_O3N2 = 0
                elif index[6] == 0 or index[7] == 0:
                    CHI_O3N2 = tol_max
                else:
                    CHI_O3N2 = (np.log10(index[6]/index[7]) - O3N2_obs)**2/(np.abs(np.log10(index[6]/index[7]+1e-5)))
                if OIII_5007_obs == 0 or SII_6725_obs == 0:
                    CHI_O3S2 = 0
                elif index[6] == 0 or index[8] == 0:
                    CHI_O3S2 = tol_max
                else:
                    CHI_O3S2 = (np.log10(index[6]/index[8]) - O3S2_obs)**2/(np.abs(np.log10(index[6]/index[8]+1e-5)))
                if ROIII_obs > 0:
                    CHI_OH = (CHI_ROIII**2 + CHI_NII**2 + CHI_OII**2 + CHI_OIII**2 )**0.5
                elif ROIII_obs == 0 and NII_6584_obs > 0:
                    CHI_OH = (CHI_NII**2 + CHI_O2O3**2 + CHI_R23**2 + CHI_O3N2**2 + CHI_O3S2**2)**0.5
                elif ROIII_obs == 0 and NII_6584_obs == 0 and OIII_5007_obs > 0:
                    CHI_OH = (CHI_O2O3**2 + CHI_R23**2 + CHI_O3S2**2)**0.5
                else:
                    CHI_OH = (CHI_O2Ne3**2 + CHI_R2Ne3**2 )**0.5
                if CHI_OH == 0:
                    OH_e = OH_e
                    logU_e = logU_e
                    den_OH_e = den_OH_e
                else:
                    OH_e = (index[0] - OH)**2 / (CHI_OH) + OH_e
                    logU_e = (index[2] - logU)**2 / (CHI_OH) + logU_e
                    den_OH_e = 1 / (CHI_OH) + den_OH_e
            eOH = OH_e / den_OH_e
            elogU = logU_e / den_OH_e
        # Iterations for interpolated models
        if inter == 0 or OH == 0:
            OHf = OH
            logUf = logU
        elif inter == 1:
            igrid = interpolate(grid_c,2,logU-elogU-0.25,logU+elogU,10)
            igrid = igrid[np.lexsort((igrid[:,1],igrid[:,2]))]
            igrid = interpolate(igrid,0,OH-eOH-0.1,OH+eOH,10)
            igrid = igrid[np.lexsort((igrid[:,0],igrid[:,2]))]
            CHI_ROIII = 0
            CHI_OIII = 0
            CHI_OII = 0
            CHI_NII = 0
            CHI_O2O3 = 0
            CHI_R23 = 0
            CHI_O3N2 = 0
            CHI_O2Ne3 = 0
            CHI_R2Ne3 = 0
            CHI_O3S2 = 0
            CHI_OH = 0
            OH_p = 0
            logU_p = 0
            den_OH = 0
            for index in igrid:
                if ROIII_obs == 0:
                    CHI_ROIII = 0
                elif index[5] == 0:
                    CHI_ROIII = tol_max
                else:
                    CHI_ROIII = (index[6]/index[5]- ROIII_obs)**2/(index[6]/index[5])
                if OIII_5007_obs == 0:
                    CHI_OIII = 0
                elif index[6] == 0:
                    CHI_OIII = tol_max
                else:
                    CHI_OIII = (index[6] - OIII_5007_obs)**2/index[6]
                if OII_3727_obs == 0:
                    CHI_OII = 0
                elif index[3] == 0:
                    CHI_OII = tol_max
                else:
                    CHI_OII = (index[3] - OII_3727_obs)**2/index[3]
                if NII_6584_obs == 0:
                    CHI_NII = 0
                elif index[7] == 0:
                    CHI_NII = tol_max
                else:
                    CHI_NII = (index[7] - NII_6584_obs)**2/index[7]
                if OII_3727_obs == 0 or OIII_5007_obs == 0:
                    CHI_O2O3 = 0
                    CHI_R23 = 0
                elif index[3] == 0 or index[6] == 0:
                    CHI_O2O3 = tol_max
                    CHI_R23 = tol_max
                else:
                    CHI_O2O3 = (index[3]/index[6] - O2O3_obs)**2/(index[3]/index[6])
                    CHI_R23 = (np.log10(index[3]+index[6])-R23_obs)**2/(np.abs(np.log10(index[3]+index[6]+1e-5)))
                if OII_3727_obs == 0 or NeIII_3868_obs == 0:
                    CHI_O2Ne3 = 0
                    CHI_R2Ne3 = 0
                elif index[3] == 0 or index[4] == 0:
                    CHI_O2Ne3 = tol_max
                    CHI_R2Ne3 = tol_max
                else:
                    CHI_O2Ne3 = (index[3]/index[4] - O2Ne3_obs)**2/(index[3]/index[4])
                    CHI_R2Ne3 = (np.log10(index[3]+index[4])-R2Ne3_obs)**2/(np.abs(np.log10(index[3]+index[4]+1e-5)))
                if OIII_5007_obs == 0 or NII_6584_obs == 0:
                    CHI_O3N2 = 0
                elif index[6] == 0 or index[7] == 0:
                    CHI_O3N2 = tol_max
                else:
                    CHI_O3N2 = (np.log10(index[6]/index[7]) - O3N2_obs)**2/(np.abs(np.log10(index[6]/index[7]+1e-5)))
                if OIII_5007_obs == 0 or SII_6725_obs == 0:
                    CHI_O3S2 = 0
                elif index[6] == 0 or index[8] == 0:
                    CHI_O3S2 = tol_max
                else:
                    CHI_O3S2 = (np.log10(index[6]/index[8]) - O3S2_obs)**2/(np.abs(np.log10(index[6]/index[8]+1e-5)))
                if ROIII_obs > 0:
                    CHI_OH = (CHI_ROIII**2 + CHI_NII**2 + CHI_OII**2 + CHI_OIII**2)**0.5
                elif NII_6584_obs > 0 and OII_3727_obs > 0:
                    CHI_OH = (CHI_NII**2 + CHI_O2O3**2 + CHI_R23**2 + CHI_O2Ne3**2 + CHI_R2Ne3**2)**0.5
                elif NII_6584_obs > 0 and OII_3727_obs == 0:
                    CHI_OH = (CHI_NII**2 + CHI_O3N2**2 + CHI_O3S2**2)**0.5
                elif NII_6584_obs == 0:
                    CHI_OH = (CHI_O2O3**2 + CHI_R23**2 + CHI_O2Ne3**2 + CHI_R2Ne3**2 + CHI_O3S2**2 )**0.5
                OH_p = index[0] / CHI_OH**2 + OH_p
                logU_p = index[2] / CHI_OH**2 + logU_p
                den_OH = 1 / CHI_OH**2 + den_OH
            if OH == 0:
                OHf = OH
                logUf = logU
            else:
                OHf = OH_p / den_OH
                logUf = logU_p / den_OH
        OH_mc.append(OHf)
        logU_mc.append(logUf)
        OHe_mc.append(eOH)
        logUe_mc.append(elogU)
OHff = np.mean(OH_mc)
if OHff > 0: OHff = np.mean(OH_mc[OH_mc > 0])
eOHff = (np.std(OH_mc)**2+np.mean(OHe_mc)**2)**0.5
if eOHff > 0: eOHff = (np.std(OH_mc[OH_mc > 0])**2+np.mean(OHe_mc[OH_mc > 0])**2)**0.5
logUff = np.mean(logU_mc)
if logUff < 0: logUff = np.mean(logU_mc[logU_mc < 0])
elogUff = (np.std(logU_mc)**2+np.std(logUe_mc)**2)**0.5
if logUff < 0: elogUff = (np.std(logU_mc[logU_mc < 0])**2+np.mean(logUe_mc[logU_mc < 0])**2)**0.5
logU_mc.append(elogUff)
output.append(OHff)
output.append(eOHff)
output.append(NOff)
output.append(eNOff)
output.append(logUff)
output.append(elogUff)
if input0.shape >= (12,) and count == 1: continue
print (round(100*(count)/float(len(input)),1),'%',grid_type,'', round(OHff,3), round(eOHff,3),'',round(NOff,3), round(eNOff,3), '',round(logUff,3), round(elogUff,3))
out = np.reshape(output,(len(input),19))
if input0.shape == (12,): out = np.delete(out,obj=0,axis=0)
lineas_header = [' HII-CHI-mistry v.4.2 output file', 'Input file:'+input00,'Iterations for MonteCarlo: '+str(n),'Used models: '+sed_type,'','O2Hb eO2Hb Ne3Hb eNeHb O3aHb eO3aHb O3nHb eO3nHb N2Hb eN2Hb S2Hb eS2Hb i O/H eO/H N/O eN/O logU elogU']
header = '\n'.join(lineas_header)
np.savetxt(input00+'_hcm-output.dat',out,fmt=' '.join(['%.4f']*12+['%i']+['%.3f']*6),header=header)
print ('________________________________')
print ('Results are stored in ' + input00 + '_hcm-output.dat')
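The abundance estimate above is an inverse-chi-square weighted average: `OH_p` accumulates the sum of value/χ² over grid points and `den_OH` accumulates the sum of 1/χ², so `OHf = OH_p / den_OH`. A minimal pure-Python sketch of the same weighting, with made-up numbers:

```python
def chi2_weighted_mean(values, chis):
    """Weight each grid value by 1 / chi**2, mirroring OH_p / den_OH above."""
    num = sum(v / c**2 for v, c in zip(values, chis))
    den = sum(1.0 / c**2 for c in chis)
    return num / den

# A grid point with a small chi^2 dominates the average:
estimate = chi2_weighted_mean([8.1, 8.5, 8.9], [0.1, 1.0, 1.0])
print(round(estimate, 3))  # 8.112
```

The values and χ² figures here are hypothetical; the real code sums over every point of the photoionization model grid.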
# ---- file: data-structures/lists.py (ermus19/python-examples, MIT) ----
a = ['a', 'b', 'c', 'd']
print("This is a list", a)
print("It is", len(a), "elements long.")
print("Let's check if element 'd' is in the list:", 'd' in a)
print("This should be the maximum value of the list", max(a))
print("This should be the minimum value of the list", min(a))
print("This is a list, item by item: ", end=' ')
for item in a:
    print(item, end=' ')
print("\r\nThis is the first element of the list: ", a[0])
a.remove('b')
print("This is the list after removing the 'b' ", a)
del a[0]
print("This is the list after removing the first element", a)
# ---- file: source/60-Verifica_palíndromo.py (FelixLuciano/DesSoft-2020.2, MIT) ----
# Palindrome check
# Write a function that takes a string and returns True if it is a palindrome (it reads the same
# backwards as forwards), or False otherwise. For example, the string 'roma é amor' is a palindrome.
# Use slicing.
# Challenge 1: this function can be written in just 2 lines of code.
# Challenge 2: solve it again without using slicing.
# Your function must be named 'eh_palindromo'.
def eh_palindromo(text):
    rev_text = text[::-1]
    check = text == rev_text
    return check
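Challenge 2 above asks for a version without slicing; one sketch is a two-pointer comparison (the function name `eh_palindromo_sem_fatiamento` is made up here):

```python
def eh_palindromo_sem_fatiamento(text):
    # Compare characters from both ends, moving inward
    i, j = 0, len(text) - 1
    while i < j:
        if text[i] != text[j]:
            return False
        i += 1
        j -= 1
    return True

print(eh_palindromo_sem_fatiamento('roma é amor'))  # True
print(eh_palindromo_sem_fatiamento('abc'))          # False
```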
# ---- file: flask_csp/test_csp.py (twaldear/flask-csp, MIT) ----
import unittest
import tempfile

from flask import Flask
from flask_csp.csp import csp_default, create_csp_header, csp_header


class CspTestFunctions(unittest.TestCase):
    """ test base functions """

    def setUp(self):
        tmp = tempfile.mkstemp()
        self.dh = csp_default()
        self.dh.default_file = tmp[1]

    def test_create_csp_header(self):
        """ test dict -> csp header """
        self.assertEqual(create_csp_header({'foo': 'bar', 'lorem': 'ipsum'}),
                         'foo bar; lorem ipsum')

    def test_default_empty_exception(self):
        """ test empty default file """
        with self.assertRaises(Exception, msg="Expecting TypeError"):
            self.dh.read()

    def test_default_read_write(self):
        """ test read/write to default """
        self.dh.update()  # test empty file
        t = self.dh.read()
        self.assertEqual(t['default-src'], "'self'")
        self.dh.update({'default-src': "'none'", 'script-src': "'self'"})  # test update
        t = self.dh.read()
        self.assertEqual(t['default-src'], "'none'")
        self.assertEqual(t['script-src'], "'self'")

    def test_included_json_file(self):
        """ make sure included json file is readable / writeable """
        h = csp_default()
        ret = h.read()
        assert "default-src" in ret
        h.update({'default-src': "'self'"})
        ret = h.read()
        self.assertEqual(ret['default-src'], "'self'")


class CspTestDefaultDecorator(unittest.TestCase):
    """ test decorator with no values passed """

    def setUp(self):
        self.app = Flask(__name__)

        @self.app.route('/')
        @csp_header()
        def index():
            return "test"

    def test_csp_header(self):
        with self.app.test_client() as c:
            result = c.get('/')
            assert "default-src 'self'" in result.headers.get('Content-Security-Policy')


class CspTestCustomDecoratorUpdate(unittest.TestCase):
    """ test decorator with custom values passed by dict """

    def setUp(self):
        self.app = Flask(__name__)

        @self.app.route('/')
        @csp_header({'default-src': "'none'", 'script-src': "'self'"})
        def index():
            return "test"

    def test_csp_header(self):
        with self.app.test_client() as c:
            result = c.get('/')
            assert "default-src 'none'" in result.headers.get('Content-Security-Policy')
            assert "script-src 'self'" in result.headers.get('Content-Security-Policy')


class CspTestCustomDecoratorRemove(unittest.TestCase):
    """ test removing policy through custom decorator values """

    def setUp(self):
        self.app = Flask(__name__)

        @self.app.route('/')
        @csp_header({'default-src': ''})
        def index():
            return "hi"

    def test_csp_header(self):
        with self.app.test_client() as c:
            result = c.get('/')
            assert "default-src" not in result.headers.get('Content-Security-Policy')


class CspTestReadOnly(unittest.TestCase):
    """ test read only """

    def setUp(self):
        self.app = Flask(__name__)

        @self.app.route('/')
        @csp_header({'report-only': True})
        def index():
            return "hi"

    def test_csp_header(self):
        with self.app.test_client() as c:
            result = c.get('/')
            assert "default-src" in result.headers.get('Content-Security-Policy-Report-Only')
            assert "report-only" not in result.headers.get('Content-Security-Policy-Report-Only')


if __name__ == '__main__':
    unittest.main()
# ---- file: ude/communication/grpc_auth.py (aws-deepracer/ude, Apache-2.0) ----
#################################################################################
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance with the License. #
# You may obtain a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#################################################################################
"""A class for GRPC custom authentication with key."""
from typing import Any
import grpc
class GrpcAuth(grpc.AuthMetadataPlugin):
    """
    GRPC custom authentication with authentication key.
    """

    def __init__(self, key: str) -> None:
        """
        Initialize GRPC custom authentication.

        Args:
            key (str): authentication key.
        """
        self._key = key

    def __call__(self, context: Any, callback: Any) -> None:
        """
        Callback.

        Args:
            context (Any): callback context.
            callback (Any): callback function pointer.
        """
        callback((('rpc-auth-header', self._key),), None)
# ---- file: neutron_lbaas/agent/agent_api.py (kayrus/neutron-lbaas, Apache-2.0) ----
# Copyright 2013 New Dream Network, LLC (DreamHost)
# Copyright 2015 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.common import rpc as n_rpc
import oslo_messaging
class LbaasAgentApi(object):
    """Agent side of the Agent to Plugin RPC API."""

    # history
    #   1.0 Initial version

    def __init__(self, topic, context, host):
        self.context = context
        self.host = host
        target = oslo_messaging.Target(topic=topic, version='1.0')
        self.client = n_rpc.get_client(target)

    def get_ready_devices(self):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'get_ready_devices', host=self.host)

    def get_loadbalancer(self, loadbalancer_id):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'get_loadbalancer',
                          loadbalancer_id=loadbalancer_id)

    def loadbalancer_deployed(self, loadbalancer_id):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'loadbalancer_deployed',
                          loadbalancer_id=loadbalancer_id)

    def update_status(self, obj_type, obj_id, provisioning_status=None,
                      operating_status=None):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'update_status', obj_type=obj_type,
                          obj_id=obj_id,
                          provisioning_status=provisioning_status,
                          operating_status=operating_status)

    def loadbalancer_destroyed(self, loadbalancer_id):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'loadbalancer_destroyed',
                          loadbalancer_id=loadbalancer_id)

    def plug_vip_port(self, port_id):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'plug_vip_port', port_id=port_id,
                          host=self.host)

    def unplug_vip_port(self, port_id):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'unplug_vip_port', port_id=port_id,
                          host=self.host)

    def update_loadbalancer_stats(self, loadbalancer_id, stats):
        cctxt = self.client.prepare()
        return cctxt.call(self.context, 'update_loadbalancer_stats',
                          loadbalancer_id=loadbalancer_id, stats=stats)
# ---- file: backend/src/gloader/xml/sax/drivers2/drv_sgmlop_html.py (anrl/gini4, MIT) ----
"""
SAX2 driver for parsing HTML with the sgmlop parser.
$Id: drv_sgmlop_html.py,v 1.3 2002/05/10 14:50:06 akuchling Exp $
"""
version = "0.1"
from drv_sgmlop import *
from xml.dom.html import HTML_CHARACTER_ENTITIES, HTML_FORBIDDEN_END, HTML_OPT_END, HTML_DTD
from string import strip, upper
class SaxHtmlParser(SaxParser):

    def __init__(self, bufsize=65536, encoding='iso-8859-1', verbose=0):
        SaxParser.__init__(self, bufsize, encoding)
        self.verbose = verbose

    def finish_starttag(self, tag, attrs):
        """uses the HTML DTD to automatically generate events
        for missing tags"""
        # guess omitted close tags
        while self.stack and \
              upper(self.stack[-1]) in HTML_OPT_END and \
              tag not in HTML_DTD.get(self.stack[-1], []):
            self.unknown_endtag(self.stack[-1])
            del self.stack[-1]
        if self.stack and tag not in HTML_DTD.get(self.stack[-1], []) and self.verbose:
            print 'Warning : trying to add %s as a child of %s' % \
                  (tag, self.stack[-1])
        self.unknown_starttag(tag, attrs)
        if upper(tag) in HTML_FORBIDDEN_END:
            # close immediately tags for which we won't get an end
            self.unknown_endtag(tag)
            return 0
        else:
            self.stack.append(tag)
            return 1

    def finish_endtag(self, tag):
        if tag in HTML_FORBIDDEN_END:
            # do nothing: we've already closed it
            return
        if tag in self.stack:
            while self.stack and self.stack[-1] != tag:
                self.unknown_endtag(self.stack[-1])
                del self.stack[-1]
            self.unknown_endtag(tag)
            del self.stack[-1]
        elif self.verbose:
            print "Warning: I don't see where tag %s was opened" % tag

    def handle_data(self, data):
        if self.stack:
            if '#PCDATA' not in HTML_DTD.get(self.stack[-1], []) and not strip(data):
                # this is probably ignorable whitespace
                self._cont_handler.ignorableWhitespace(data)
            else:
                self._cont_handler.characters(to_xml_string(data, self._encoding))

    def close(self):
        SGMLParser.close(self)
        self.stack.reverse()
        for tag in self.stack:
            self.unknown_endtag(tag)
        self.stack = []
        self._cont_handler.endDocument()


def create_parser():
    return SaxHtmlParser()
# ---- file: xscale/signal/tests/test_generator.py (xy6g13/xscale, Apache-2.0) ----
# Python 2/3 compatibility
from __future__ import absolute_import, division, print_function

import xscale.signal.generator as xgen
import numpy as np
import pytest


def test_ar():
    xgen.ar(0.3, 100, c=0.1)


def test_rednoise():
    xgen.rednoise(0.3, 100, c=0.1)
    with pytest.raises(TypeError, message="Expecting TypeError"):
        xgen.rednoise((0.3, 0.24), 100)


def test_trend():
    x = np.arange(100)
    xgen.trend(x, 1.2, 3.4)


def test_example_xt():
    xgen.example_xt()


@pytest.mark.parametrize("boundaries", [False, True])
def test_example_xyt(boundaries):
    xgen.example_xyt(boundaries=boundaries)
# ---- file: .kodi/addons/plugin.video.1channel/waldo/indexes/1Channel_index.py (C6SUMMER/allinclusive-kodi-pi, Apache-2.0) ----
import os
import re
import sys
import urllib2
import HTMLParser
import xbmcgui
import xbmcplugin
from t0mm0.common.addon import Addon
from t0mm0.common.addon import Addon as Addon2

addon = Addon('plugin.video.waldo', sys.argv)
_1CH = Addon2('plugin.video.1channel', sys.argv)

#BASE_Address = 'www.primewire.ag'
BASE_Address = _1CH.get_setting('domain').replace('http://', '')
if (_1CH.get_setting("enableDomain") == 'true') and (len(_1CH.get_setting("customDomain")) > 10):
    BASE_Address = _1CH.get_setting("customDomain").replace('http://', '')
if not BASE_Address.startswith('http'):
    BASE_URL = 'http://' + BASE_Address

display_name = 'PrimeWire'  #'1Channel'
#Label that will be displayed to the user representing this index

tag = 'PrimeWire'  #'1Channel'
#MUST be implemented. Unique 3 or 4 character string that will be used to
#identify this index

required_addons = []
#MUST be implemented. A list of strings indicating which addons are required to
#be installed for this index to be used.
#For example: required_addons = ['script.module.beautifulsoup', 'plugin.video.youtube']
#Currently, xbmc does not provide a way to require a specific version of an addon


def get_settings_xml():
    """
    Must be defined. This method should return XML which describes any Waldo
    specific settings you would like for your plugin. You should make sure that
    the ``id`` starts with your tag followed by an underscore.

    For example:
        xml = '<setting id="ExI_priority" '
        xml += 'type="number" label="Priority" default="100"/>\\n'
        return xml

    The settings category will be your plugin's :attr:`display_name`.

    Returns:
        A string containing XML which would be valid in
        ``resources/settings.xml`` or boolean False if none are required
    """
    return False


def get_browsing_options():  #MUST be defined
    """
    Returns a list of dicts. Each dict represents a different method of browsing
    this index. The following keys MUST be provided:
        'name': Label to display to the user to represent this browsing method
        'function': A function (defined in this index) which will be executed when
            the user selects this browsing method. This function should describe
            and add the list items to the directory, and assume flow control from
            this point on.

    Once the user indicates the content they would like to search the providers
    for (usually via selecting a list item), plugin.video.waldo should be called
    with the following parameters (again usually via listitem):
        mode = 'GetAllResults'
        type = either 'movie', 'tvshow', 'season', or 'episode'
        title = The title string to look for
        year = The release year of the desired movie, or premiere date of the
            desired tv show.
        imdb = The imdb id of the movie or tvshow to find sources for
        tvdb = The tvdb id of the movie or tvshow to find sources for
        season = The season number for which to return results.
            If season is supplied, but not episode, all results for that season
            should be returned
        episode: The episode number for which to return results
    """
    option_1 = {'name': 'Tv Shows', 'function': 'BrowseListMenu', 'kwargs': {'section': 'tv'}}
    option_2 = {'name': 'Movies', 'function': 'BrowseListMenu', 'kwargs': {'section': 'movies'}}
    return [option_1, option_2]


def callback(params):
    """
    MUST be implemented. This method will be called when the user selects a
    listitem you created. It will be passed a dict of parameters you passed to
    the listitem's url.

    For example, the following listitem url:
        plugin://plugin.video.waldo/?mode=main&section=tv&api_key=1234
    Will call this function with:
        {'mode':'main', 'section':'tv', 'api_key':'1234'}
    """
    try: addon.log('%s was called with the following parameters: %s' % (params.get('receiver', ''), params))
    except: pass
    sort_by = params.get('sort', None)
    section = params.get('section')
    if sort_by: GetFilteredResults(section, sort=sort_by)


def BrowseListMenu(section):  #This must match the 'function' key of an option from get_browsing_options
    addon.add_directory({'section': section, 'sort': 'featured'}, {'title': 'Featured'}, img=art('featured.png'),
                        fanart=art('fanart.png'))
    addon.add_directory({'section': section, 'sort': 'views'}, {'title': 'Most Popular'}, img=art('most_popular.png'),
                        fanart=art('fanart.png'))
    addon.add_directory({'section': section, 'sort': 'ratings'}, {'title': 'Highly rated'}, img=art('highly_rated.png'),
                        fanart=art('fanart.png'))
    addon.add_directory({'section': section, 'sort': 'release'}, {'title': 'Date released'},
                        img=art('date_released.png'), fanart=art('fanart.png'))
    addon.add_directory({'section': section, 'sort': 'date'}, {'title': 'Date added'}, img=art('date_added.png'),
                        fanart=art('fanart.png'))
    addon.end_of_directory()


def art(filename):
    adn = Addon('plugin.video.1channel', sys.argv)
    THEME_LIST = ['mikey1234', 'Glossy_Black', 'PrimeWire']
    THEME = THEME_LIST[int(adn.get_setting('theme'))]
    THEME_PATH = os.path.join(adn.get_path(), 'art', 'themes', THEME)
    img = os.path.join(THEME_PATH, filename)
    return img


def GetFilteredResults(section=None, genre=None, letter=None, sort='alphabet', page=None):  #3000
    try: addon.log('Filtered results for Section: %s Genre: %s Letter: %s Sort: %s Page: %s' % (section, genre, letter, sort, page))
    except: pass
    pageurl = BASE_URL + '/?'
    if section == 'tv': pageurl += 'tv'
    if genre: pageurl += '&genre=' + genre
    if letter: pageurl += '&letter=' + letter
    if sort: pageurl += '&sort=' + sort
    if page: pageurl += '&page=%s' % page
    if page:
        page = int(page) + 1
    else:
        page = 2
    html = GetURL(pageurl)
    r = re.search('number_movies_result">([0-9,]+)', html)
    if r:
        total = int(r.group(1).replace(',', ''))
    else:
        total = 0
    total_pages = total / 24
    total = min(total, 24)
    r = 'class="index_item.+?href="(.+?)" title="Watch (.+?)"?\(?([0-9]{4})?\)?"?>.+?src="(.+?)"'
    regex = re.finditer(r, html, re.DOTALL)
    resurls = []
    for s in regex:
        resurl, title, year, thumb = s.groups()
        if resurl not in resurls:
            resurls.append(resurl)
            li_title = '%s (%s)' % (title, year)
            li = xbmcgui.ListItem(li_title, iconImage=thumb, thumbnailImage=thumb)
            if section == 'tv':
                section = 'tvshow'
            else:
                section = 'movie'
            queries = {'waldo_mode': 'GetAllResults', 'title': title, 'vid_type': section}
            li_url = addon.build_plugin_url(queries)
            xbmcplugin.addDirectoryItem(int(sys.argv[1]), li_url, li,
                                        isFolder=True, totalItems=total)
    if html.find('> >> <') > -1:
        label = 'Skip to Page...'
        command = addon.build_plugin_url(
            {'mode': 'PageSelect', 'pages': total_pages, 'section': section, 'genre': genre, 'letter': letter,
             'sort': sort})
        command = 'RunPlugin(%s)' % command
        cm = [(label, command)]
        meta = {'title': 'Next Page >>'}
        addon.add_directory(
            {'mode': 'CallModule', 'receiver': 'PrimeWire', 'ind_path': os.path.dirname(__file__), 'section': section,
             'genre': genre, 'letter': letter, 'sort': sort, 'page': page},
            meta, cm, True, art('nextpage.png'), art('fanart.png'), is_folder=True)
    addon.end_of_directory()


def GetURL(url, params=None, referrer=BASE_URL):
    try: addon.log('Fetching URL: %s' % url)
    except: pass
    USER_AGENT = 'User-Agent:Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.56'
    if params:
        req = urllib2.Request(url, params)
    else:
        req = urllib2.Request(url)
    req.add_header('User-Agent', USER_AGENT)
    req.add_header('Host', BASE_Address)  #'www.primewire.ag'
    req.add_header('Referer', referrer)
    try:
        response = urllib2.urlopen(req, timeout=10)
        body = response.read()
        body = unicode(body, 'iso-8859-1')
        h = HTMLParser.HTMLParser()
        body = h.unescape(body)
    except Exception, e:
        try: addon.log('Failed to connect to %s: %s' % (url, e))
        except: pass
        return ''
    return body.encode('utf-8')
# ---- file: src/using_tips/using_tips_2.py (HuangHuaBingZiGe/GitHub-Demo, Apache-2.0) ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
50 topics
9 chapters

1. Course introduction
2. Topics on data structures
3. Topics on iterators and generators
4. Topics on string handling
5. Topics on file I/O operations
6. Topics on data encoding and processing
7. Topics on classes and objects
8. Topics on multithreading and multiprocessing
9. Topics on decorators
"""
"""
Chapter 1  Course introduction
1-1 Course introduction
1-2 Guide to the WebIDE online coding tool

Chapter 2  Advanced training on data structures and algorithms
2-1 How to filter data in a list, dict, or set by condition
2-2 How to name the elements of a tuple to improve program readability
2-3 How to count the frequency of elements in a sequence
2-4 How to sort the items of a dict by the size of their values
2-5 How to quickly find the common keys of multiple dicts
2-6 How to keep a dict ordered
2-7 How to implement a user history feature (at most n entries)

Chapter 3  Training on object iteration and reverse iteration
3-1 How to implement iterable objects and iterator objects (1)
3-2 How to implement iterable objects and iterator objects (2)
3-3 How to implement an iterable object with a generator function
3-4 How to iterate in reverse, and how to implement reverse iteration
3-5 How to slice an iterator
3-6 How to iterate over multiple iterables in a single for statement

Chapter 4  Training on string-handling techniques
4-1 How to split a string that contains several different separators
4-2 How to check whether string a starts or ends with string b
4-3 How to adjust the format of text within a string
4-4 How to join many small strings into one large string
4-5 How to left-, right-, and center-align a string
4-6 How to strip unwanted characters from a string

Chapter 5  Training on efficient file I/O handling
5-1 How to read and write text files
5-2 How to handle binary files
5-3 How to set a file's buffering
5-4 How to map a file into memory
5-5 How to access a file's status
5-6 How to use temporary files

Chapter 6  Efficient parsing and building of csv, json, xml, and excel
6-1 How to read and write csv data
6-2 How to read and write json data
6-3 How to parse a simple xml document
6-4 How to build an xml document
6-5 How to read and write excel files

Chapter 7  Advanced training on classes and objects
7-1 How to subclass an immutable built-in type and modify its instantiation behavior
7-2 How to save memory when creating many instances
7-3 How to make an object support context management
7-4 How to create manageable object attributes
7-5 How to make a class support comparison operations
7-6 How to type-check instance attributes with descriptors
7-7 How to manage memory in cyclic data structures
7-8 How to call a method through its name string

Chapter 8  Advanced training on core multithreading techniques
8-1 How to use multiple threads
8-2 How to communicate between threads
8-3 How to send event notifications between threads
8-4 How to use thread-local data
8-5 How to use a thread pool
8-6 How to use multiple processes

Chapter 9  Advanced training on decorator techniques
9-1 How to use a function decorator
9-2 How to preserve the metadata of a decorated function
9-3 How to define a decorator with arguments
9-4 How to implement a function decorator with modifiable attributes
9-5 How to define decorators inside a class
"""
"""
6-1 How to read and write csv data

Actual case:
http://table.finance.yahoo.com/table.csv?s=000001.sz
From Yahoo we can fetch a Chinese stock-market (Shenzhen) data set, stored in csv format:
Date,Open,High,Low,Close,Volume,Adj Close
2016-06-30,8.69,8.74,8.66,8.70,36220400,8.70
2016-06-29,8.63,8.69,8.62,8.69,36961100,8.69
2016-06-28,8.58,8.64,8.56,8.63,33651900,8.63
For the Ping An Bank stock, save the 2016 records whose trading volume exceeds 50000000 to another csv file.

Solution:
Use the csv module from the standard library; its reader and writer objects handle csv file reading and writing
"""

'''
urllib.request.urlretrieve("http://table.finance.yahoo.com/table.csv?s=000001.sz", 'pingan.csv')
cat pingan.csv | less
'''

"""
# Opening in binary mode
# This is wrong: a csv file is not actually a binary file
rf = open(file_name, 'rb')
reader = csv.reader(rf)
print(reader)
for row in reader:
    print(row)
"""
'''
file = 'test.csv'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
            '\\' + 'docs' + '\\' + 'csv' + '\\' + file

file_copy = 'pingan_copy.csv'
file_name_copy = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
                 '\\' + 'docs' + '\\' + 'csv' + '\\' + file_copy

with open(file_name, "rt", encoding="utf-8") as csvfile:
    reader = csv.reader(csvfile)
    rows = [row for row in reader]
    print(rows)

wf = open(file_name_copy, 'w')
writer = csv.writer(wf)
writer.writerow(['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close'])
writer.writerow(['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close'])
wf.flush()

print("-----the best approach-----")
print("csv.reader's next differs between python2 and python3")

file_copy_2 = 'pingan2.csv'
file_name_copy2 = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
                  '\\' + 'docs' + '\\' + 'csv' + '\\' + file_copy_2

with open(file_name, 'r') as rf:
    reader = csv.reader(rf)
    with open(file_name_copy2, 'w') as wf:
        writer = csv.writer(wf)
        headers = next(reader)
        writer.writerow(headers)
        for row in reader:
            #if row[0] < '2016-01-01':
            #    break
            if int(row[5]) > 36961100:
                writer.writerow(row)
print("end")
'''
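The script above depends on a downloaded Yahoo file; a self-contained sketch of the same reader/writer pattern, filtering rows by volume into a second file (file names and the threshold are made up here):

```python
import csv
import os
import tempfile

rows = [
    ['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close'],
    ['2016-06-30', '8.69', '8.74', '8.66', '8.70', '36220400', '8.70'],
    ['2016-06-29', '8.63', '8.69', '8.62', '8.69', '36961100', '8.69'],
]

tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'pingan.csv')
dst = os.path.join(tmpdir, 'filtered.csv')

with open(src, 'w', newline='') as f:
    csv.writer(f).writerows(rows)

# Copy the header, then keep only rows whose Volume exceeds the threshold
with open(src, 'r', newline='') as rf, open(dst, 'w', newline='') as wf:
    reader = csv.reader(rf)
    writer = csv.writer(wf)
    writer.writerow(next(reader))  # header
    for row in reader:
        if int(row[5]) > 36500000:
            writer.writerow(row)

with open(dst, newline='') as f:
    kept = list(csv.reader(f))
print(len(kept))  # header + one matching row -> 2
```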
'''
6-2 How to read and write JSON data
Example:
Web applications commonly exchange data in JSON (JavaScript Object Notation). For instance, when using Baidu's speech-recognition service, we POST local audio data to the Baidu server, and the server responds with a JSON string:
{"corpus_no":"6303355448008565863","err_msg":"success.","err_no":0,"result":["你好 ,"],"sn":"418359718861467614305"}
How do we read and write JSON data in Python?
Solution:
Use the json module from the standard library; its loads and dumps functions handle JSON reading and writing.
'''
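A short, self-contained sketch of the loads/dumps round trip on a Baidu-style response (the payload below is a trimmed, made-up sample):

```python
import json

# A response in the same shape as the one shown above
raw = '{"err_no": 0, "err_msg": "success.", "result": ["hello"]}'

res = json.loads(raw)       # JSON text -> Python dict
text = res['result'][0]     # first recognition result

# Re-serialize compactly: drop the spaces after ',' and ':' and sort the keys
compact = json.dumps(res, separators=(',', ':'), sort_keys=True)
```

`json.dump`/`json.load` do the same job against file objects instead of strings.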
'''
#coding:utf-8
import requests
import json
# record audio
from record import Record
record = Record(channels=1)
audioData = record.record(2)
# obtain an access token
from secret import API_KEY,SECRET_KEY
authUrl = "https://openapi.baidu.com/oauth/2.0/token?grant_type=client_credentials&client_id=" + API_KEY + "&client_secret=" + SECRET_KEY
response = requests.get(authUrl)
res = json.loads(response.content)
token = res['access_token']
# speech recognition
cuid = 'xxxxxxxxxxx'
srvUrl = 'http://vop.baidu.com/server_api' + '?cuid=' + cuid + '&token=' + token
httpHeader = {
    'Content-Type': 'audio/wav; rate = 8000',
}
response = requests.post(srvUrl,headers=httpHeader,data=audioData)
res = json.loads(response.content)
text = res['result'][0]
print(u'\nRecognition result:')
print(text)
'''
'''
# dumps converts a Python object to a JSON string
l = [1,2,'abc',{'name': 'Bob','age':13}]
print(json.dumps(l))
d = {'b':None,'a':5,'c':'abc'}
print(json.dumps(d))
# drop the spaces after commas and colons to compact the output
print(json.dumps(l,separators=[',', ':']))
# sort the keys of the output dictionary
print(json.dumps(d,sort_keys=True))
# convert a JSON string back to a Python object
l2 = json.loads('[1,2,"abc",{"name": "Bob","age":13}]')
print(type(l2))
d2 = json.loads('{"b":null,"a":5,"c":"abc"}')
print(type(d2))
'''
'''
file = 'demo.json'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'json' + '\\' + file
l = [1,2,'abc',{'name': 'Bob','age':13}]
# write JSON to a file; dump and load are the file-based counterparts of dumps and loads
with open(file_name, 'w') as f:
    json.dump(l, f)
'''
'''
6-3 How to parse a simple XML document
Example:
XML is a very widely used markup language that provides a uniform way to describe an application's structured data:
<?xml version="1.0"?>
<data>
    <country name="Liechtenstein">
        <rank updated="yes">2</rank>
        <year>2008</year>
        <gdppc>141100</gdppc>
        <neighbor name="Austria" direction="E"/>
        <neighbor name="Switzerland" direction="W"/>
    </country>
</data>
How do we parse an XML document in Python?
Solution:
Use xml.etree.ElementTree from the standard library; its parse function parses XML documents.
from xml.etree.ElementTree import parse
import os
file = 'demo.xml'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'xml' + '\\' + file
f = open(file_name)
et = parse(f)
print(et)
root = et.getroot()
print(root)
print(root.tag)
print(root.attrib)
print(root.text)
print(root.text.strip())
print(root.getchildren())
for child in root:
    print(child.get('name'))
print(root.find('country'))
print(root.findall('country'))
print(root.iterfind('country'))
for e in root.iterfind('country'):
    print(e.get('name'))
print(root.findall('rank')) # findall only searches direct children, so this finds nothing
print(root.iter())
print(list(root.iter()))
print(list(root.iter('rank')))
print(root.findall('country/*')) # * matches the grandchild level
print(root.findall('rank')) # searches direct children only
print(root.findall('.//rank')) # .// searches all levels of the tree
print(root.findall('.//rank/..')) # .. selects the parent of each matched rank
print(root.findall('country[@name]')) # country elements that have a name attribute
print(root.findall('country[@name="Singapore"]')) # attribute equal to a specific value
print(root.findall('country[rank]')) # country elements that contain a rank child
print(root.findall('country[rank="5"]'))
print(root.findall('country[1]')) # the country at position 1 (1-based)
print(root.findall('country[2]'))
print(root.findall('country[last()]')) # the last country element
print(root.findall('country[last()-1]')) # the second-to-last country element
'''
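The path expressions above can be exercised without a file on disk: `fromstring` parses a string into the same Element tree (a small sketch using a trimmed copy of the sample document):

```python
from xml.etree.ElementTree import fromstring

# Trimmed copy of the sample document from above, parsed from a string
DOC = """<data>
    <country name="Liechtenstein">
        <rank updated="yes">2</rank>
        <year>2008</year>
        <neighbor name="Austria" direction="E"/>
    </country>
</data>"""

root = fromstring(DOC)
names = [c.get('name') for c in root.iterfind('country')]  # direct children only
ranks = [e.text for e in root.iter('rank')]                # iter() descends all levels
parent = root.find('.//rank/..')                           # '..' steps back to the parent
```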
'''
6-4 How to build an XML document
Example:
Sometimes we need to convert data from other formats into XML.
For example, we want to convert the Ping An stock CSV file into a corresponding XML file:
test.csv
Date,Open,High,Low,Close,Volume,Adj Close
2016/6/1,8.69,8.74,8.66,8.7,36220400,8.7
pingan.xml
<Data>
    <Row>
        <Date>2016-07-05</Date>
        <Open>8.80</Open>
        <High>8.83</High>
        <Low>8.77</Low>
        <Close>8.81</Close>
        <Volume>42203700</Volume>
        <AdjClose>8.81</AdjClose>
    </Row>
</Data>
Solution:
Use xml.etree.ElementTree from the standard library: build an ElementTree and write it to a file with its write method.
from xml.etree.ElementTree import Element,ElementTree
e = Element('Data') # create an element with tag name 'Data'
print(e.tag)
print(e.set('name','abc')) # set an attribute on the Data element
from xml.etree.ElementTree import tostring
print(tostring(e))
e.text='123'
print(tostring(e))
e2 = Element('Row') # create a child element
e3 = Element('Open')
e3.text='8.80'
e2.append(e3)
print(tostring(e2))
e.text = None
e.append(e2)
print(tostring(e))
import os
file = 'demo1.xml'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'xml' + '\\' + file
et = ElementTree(e)
et.write(file_name)
'''
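The Element/append pattern above can be written more compactly with `SubElement`, which creates and attaches a child in one step. A minimal sketch mapping one CSV row into the `<Data>/<Row>` layout (two columns only, values taken from the sample):

```python
from xml.etree.ElementTree import Element, SubElement, tostring

# One CSV row as a tag -> text mapping
row = {'Date': '2016-07-05', 'Open': '8.80'}

data = Element('Data')
erow = SubElement(data, 'Row')      # SubElement creates the child and appends it
for tag, text in row.items():
    child = SubElement(erow, tag)
    child.text = text

xml_bytes = tostring(data)          # serialize the whole tree to bytes
```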
'''
import csv
from xml.etree.cElementTree import Element,ElementTree
import os
file = 'pingan.csv'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'csv' + '\\' + file
file1 = 'pingan.xml'
file_name1 = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'xml' + '\\' + file1
def xml_pretty(e, level=0):
    if len(e) > 0:
        e.text = '\n' + '\t' * (level + 1)
        for child in e:
            xml_pretty(child, level + 1)
        child.tail = child.tail[:-1]
    e.tail = '\n' + '\t' * level
def csvToXml(fname):
    with open(fname, 'r') as f:
        reader = csv.reader(f)
        headers = next(reader)
        root = Element('Data')
        for row in reader:
            eRow = Element('Row')
            root.append(eRow)
            for tag, text in zip(headers, row):
                e = Element(tag)
                e.text = text
                eRow.append(e)
    xml_pretty(root)
    return ElementTree(root)
et = csvToXml(file_name)
et.write(file_name1)
'''
'''
6-5 How to read and write Excel files
Example:
Microsoft Excel is among the most heavily used office software; its data formats are xls and xlsx, a very common spreadsheet format. The grades of a primary-school class are recorded in an Excel file:
Name        Chinese  Math  English
Li Lei      95       99    96
Han Mei     98       100   93
Zhang Feng  94       95    95
Use Python to read and write the Excel file: add a "Total" column and compute each student's total score.
Solution:
Install via pip: $ pip install xlrd xlwt
Use the third-party libraries xlrd and xlwt, which handle Excel reading and writing respectively.
'''
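Independent of the xlrd/xlwt I/O, the core of the task is the per-row total. A sketch on plain lists shaped like what `sheet.row_values(row, 1)` returns (the grades are the sample table's; the Excel plumbing is omitted):

```python
# Each entry: (name, scores without the name column, as row_values(row, 1) would give them)
rows = [
    ('Li Lei',     [95, 99, 96]),
    ('Han Mei',    [98, 100, 93]),
    ('Zhang Feng', [94, 95, 95]),
]

# The new 'Total' column: one sum per student
totals = {name: sum(scores) for name, scores in rows}
```

With xlrd/xlwt, each `sum` result would then be written into the extra column via `sheet.write(row, ncols, total)`.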
'''
import xlrd
import os
file = 'sum_point.xlsx'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'excel' + '\\' + file
book = xlrd.open_workbook(file_name)
print(book.sheets())
sheet = book.sheet_by_index(0)
print(sheet.nrows)
print(sheet.ncols)
cell = sheet.cell(0,0)
print(cell)
# cell.ctype is an enum value (xlrd.XL_CELL_*)
print(type(cell.value))
print(cell.value)
cell2 = sheet.cell(1,1)
print(cell2)
print(type(cell2))
print(cell2.ctype)
print(sheet.row(1))
print(sheet.row_values(1))
print(sheet.row_values(1,1)) # the second argument (1) skips the first cell and starts from index 1
# sheet.put_cell adds a single cell to the sheet
import xlwt
wbook = xlwt.Workbook()
wsheet = wbook.add_sheet('sheet1')
# wsheet.write
# wbook.save('output.xlsx')
'''
'''
# Writing fails, there is a problem here!!!
import os
import xlrd
import xlwt
file = 'sum_point.xlsx'
file_name = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'excel' + '\\' + file
file1 = 'sum_point_copy.xlsx'
file_name1 = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + \
'\\' + 'docs' + '\\' + 'excel' + '\\' + file1
rbook = xlrd.open_workbook(file_name)
rsheet = rbook.sheet_by_index(0)
nc = rsheet.ncols
rsheet.put_cell(0, nc, xlrd.XL_CELL_TEXT, u'总分', None) # add the 'Total' header text: row 0, column nc, text type
for row in range(1, rsheet.nrows): # start from row 1; the values skip column 0 (the name)
    t = sum(rsheet.row_values(row, 1))
    rsheet.put_cell(row, nc, xlrd.XL_CELL_NUMBER, t, None)
wbook = xlwt.Workbook()
wsheet = wbook.add_sheet(rsheet.name)
style = xlwt.easyxf('align:vertical center,horizontal center')
for r in range(rsheet.nrows):
    for c in range(rsheet.ncols):
        wsheet.write(r, c, rsheet.cell_value(r, c), style)
wbook.save(u'output.xlsx')
'''
| 22.460938 | 137 | 0.646174 | 1,591 | 11,500 | 4.59648 | 0.289755 | 0.0361 | 0.058663 | 0.067688 | 0.23971 | 0.183509 | 0.172159 | 0.159579 | 0.150964 | 0.129906 | 0 | 0.052114 | 0.177391 | 11,500 | 511 | 138 | 22.504892 | 0.72093 | 0.013391 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6c55165299d16eb8a28fa1e0cd5b93ce500fd6c2 | 3,541 | py | Python | demo/showcase/colorpicker.py | ceccopierangiolieugenio/py-ttk | 117d61844bb7344bbe22a7797b7e3763d5fe4de5 | [
"MIT"
] | null | null | null | demo/showcase/colorpicker.py | ceccopierangiolieugenio/py-ttk | 117d61844bb7344bbe22a7797b7e3763d5fe4de5 | [
"MIT"
] | null | null | null | demo/showcase/colorpicker.py | ceccopierangiolieugenio/py-ttk | 117d61844bb7344bbe22a7797b7e3763d5fe4de5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2021 Eugenio Parodi <ceccopierangiolieugenio AT googlemail DOT com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import sys, os, argparse
sys.path.append(os.path.join(sys.path[0],'../..'))
import TermTk as ttk
def demoColorPicker(root=None):
    frame = ttk.TTkFrame(parent=root, border=False)
    winCP = ttk.TTkWindow(parent=frame, pos=(0,0), size=(30,16), title="Test Color Pickers", border=True)
    ttk.TTkColorButtonPicker(parent=winCP, pos=( 0,0), size=(8,3), border=True, color=ttk.TTkColor.bg('#88ffff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=( 0,3), size=(8,3), border=True, color=ttk.TTkColor.bg('#ff88ff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=( 0,6), size=(8,3), border=True, color=ttk.TTkColor.bg('#ffff88') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=( 0,9), size=(8,3), border=True, color=ttk.TTkColor.bg('#8888ff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(10,0), size=(8,3), border=True, color=ttk.TTkColor.fg('#00ffff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(10,3), size=(8,3), border=True, color=ttk.TTkColor.fg('#ff00ff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(10,6), size=(8,3), border=True, color=ttk.TTkColor.fg('#ffff00') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(10,9), size=(8,3), border=True, color=ttk.TTkColor.fg('#0000ff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(20,0), size=(8,3), border=True, color=ttk.TTkColor.bg('#ffffff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(20,3), size=(8,3), border=True, color=ttk.TTkColor.bg('#ffffff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(20,6), size=(8,3), border=True, color=ttk.TTkColor.bg('#ffffff') )
    ttk.TTkColorButtonPicker(parent=winCP, pos=(20,9), size=(8,3), border=True, color=ttk.TTkColor.bg('#ffffff') )
    # win2_1 = ttk.TTkColorDialogPicker(parent=frame, pos=(3,3), size=(110,40), title="Test Color Picker", border=True)
    return frame
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', help='Full Screen', action='store_true')
    args = parser.parse_args()
    ttk.TTkLog.use_default_file_logging()
    root = ttk.TTk()
    if args.f:
        root.setLayout(ttk.TTkGridLayout())
        winColor1 = root
    else:
        winColor1 = ttk.TTkWindow(parent=root, pos=(0,0), size=(120,50), title="Test Color Picker", border=True, layout=ttk.TTkGridLayout())
    demoColorPicker(winColor1)
    root.mainloop()
if __name__ == "__main__":
main() | 49.180556 | 141 | 0.713923 | 514 | 3,541 | 4.889105 | 0.365759 | 0.05969 | 0.13848 | 0.162356 | 0.385197 | 0.385197 | 0.223239 | 0.223239 | 0.223239 | 0.167529 | 0 | 0.035117 | 0.139509 | 3,541 | 72 | 142 | 49.180556 | 0.789629 | 0.35329 | 0 | 0 | 0 | 0 | 0.068342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6c584193d545e03672cc71800d4a751b1d6b8a57 | 14,037 | py | Python | service/views.py | antorof/django-simple | 786e93b5084c17b364bac6bceb7dddcce1c789d2 | [
"MIT"
] | null | null | null | service/views.py | antorof/django-simple | 786e93b5084c17b364bac6bceb7dddcce1c789d2 | [
"MIT"
] | null | null | null | service/views.py | antorof/django-simple | 786e93b5084c17b364bac6bceb7dddcce1c789d2 | [
"MIT"
] | null | null | null | # -*- encoding: utf-8 -*-
from django import forms
from django.shortcuts import render, redirect
from django.http import HttpResponseRedirect
from django.http import HttpResponse
from django.core.validators import validate_slug, RegexValidator
from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.models import User
from lxml import etree
import requests
from pymongo import MongoClient
from django.http import JsonResponse
from unidecode import unidecode
class loginForm(forms.Form):
    username = forms.CharField(
        label='',
        max_length=10,
        required=True,
        widget=forms.TextInput(attrs={'class': 'form-control', 'placeholder': 'Nombre de usuario'}))
    password = forms.CharField(
        label='',
        required=True,
        widget=forms.PasswordInput(attrs={'class': 'form-control', 'placeholder': 'Contraseña'}),)

    def clean(self):
        cleaned_data = super(loginForm, self).clean()
class registroForm(forms.Form):
    username = forms.CharField(
        label='',
        max_length=10,
        required=True,
        widget=forms.TextInput(attrs={'class': 'form-control', 'placeholder': 'Nombre de usuario'}))
    password1 = forms.CharField(
        label='',
        required=True,
        widget=forms.PasswordInput(attrs={'class': 'form-control', 'placeholder': 'Contraseña'}),)
    password2 = forms.CharField(
        label='',
        required=True,
        widget=forms.PasswordInput(attrs={'class': 'form-control', 'placeholder': 'Repita su contraseña'}),)
    email = forms.EmailField(
        label='',
        required=False,
        widget=forms.EmailInput(attrs={'class': 'form-control', 'placeholder': 'Correo electrónico'}))
    credit = forms.CharField(
        label='',
        required=False,
        max_length=16,
        validators=[RegexValidator(r'^[0-9]{16}$', 'Son necesarios 16 dígitos', 'numero_invalido'), ],
        widget=forms.TextInput(attrs={'class': 'form-control', 'placeholder': 'Tarjeta de crédito'}))
    anio_expiracion = forms.CharField(
        label='',
        required=False,
        max_length=4,
        validators=[RegexValidator(r'^[0-9]{4}$', 'Son necesarios 4 dígitos', 'anio_invalido'), ],
        widget=forms.NumberInput(attrs={'class': 'form-control', 'placeholder': 'Año de expiración', 'min': '2000', 'max': '2100'}))
    mes_credito = forms.CharField(
        label='',
        required=False,
        max_length=2,
        validators=[RegexValidator(r'^[0-9]{1,2}$', 'Introduzca el número del mes', 'mes_invalido'), ],
        widget=forms.NumberInput(attrs={'class': 'form-control', 'placeholder': 'Mes de expiración', 'min': '1', 'max': '2100'}))
    def faltanCampos(self):
        cleaned_data = super(registroForm, self).clean()
        un = cleaned_data.get("username")
        pw = cleaned_data.get("password1")
        pw2 = cleaned_data.get("password2")
        return un is None or pw is None or pw2 is None

    def contraseniasDistintas(self):
        cleaned_data = super(registroForm, self).clean()
        pw = cleaned_data.get("password1")
        pw2 = cleaned_data.get("password2")
        if pw != pw2:
            return True

    def clean(self):
        cleaned_data = super(registroForm, self).clean()
        pw = cleaned_data.get("password1")
        pw2 = cleaned_data.get("password2")
        if pw != pw2:
            raise forms.ValidationError("")  # empty message so no text is shown on the client
def index(request):
    return redirect('inicio')


def inicio(request):
    'Renders the main page'
    # If the user has an open session, let them continue
    if request.user.is_authenticated():
        return render(request, 'bienvenida.html')
    else:
        context = {
            # 'username': None,
            'form': loginForm(),
            'mensaje': 'Inicie sesión para continuar.',
        }
        return render(request, 'login.html', context)


def cerrarSesion(request):
    'Closes the user session, if one is open'
    logout(request)
    return redirect('login')
def iniciarSesion(request):
    'Logs a user in, or returns the login page'
    # Coming from a POST
    if request.method == 'POST':
        form = loginForm(request.POST)
        # If the form is valid, check the credentials
        if form.is_valid():
            user = authenticate(username=form.cleaned_data['username'], password=form.cleaned_data['password'])
            if user is not None:
                if user.is_active:
                    login(request, user)
                    return redirect('inicio')
                else:
                    context = {
                        'mensaje': 'Usuario no activo',
                        'form': form,
                    }
                    return render(request, 'login.html', context)
            else:
                context = {
                    'mensaje': 'Usuario o contraseña incorrectos',
                    'form': form,
                }
                return render(request, 'login.html', context)
        else:
            # Invalid form: show the login page again (the original fell through and returned None here)
            context = {
                'mensaje': 'Revise los campos a rellenar.',
                'form': form,
            }
            return render(request, 'login.html', context)
    # First call (GET)
    else:
        # If the user already has an open session, redirect
        if request.user.is_authenticated():
            return redirect('inicio')
        # Otherwise, show the login form
        else:
            form = loginForm()
            context = {
                'mensaje': '',
                'form': form,
            }
            return render(request, 'login.html', context)
def registro(request):
    'Registers a user, or returns the registration page'
    if request.method == 'POST':
        form = registroForm(request.POST)
        if form.is_valid():
            try:
                User.objects.create_user(username=form.cleaned_data['username'],
                                         email=form.cleaned_data['email'],
                                         password=form.cleaned_data['password1'])
            except Exception as error:
                print(error)
                context = {
                    'username': None,
                    'form': form,
                    'mensaje': 'Ese usuario ya existe.',
                }
                return render(request, 'registro.html', context)
            context = {
                'username': form.cleaned_data['username'],
                'form': loginForm(),
                'mensaje': 'Usuario creado con éxito. Inicie sesión.',
            }
            return render(request, 'login.html', context)
        else:
            if form.faltanCampos():
                context = {
                    'username': None,
                    'form': form,
                    'mensaje': 'Revise los campos a rellenar.',
                }
            elif form.contraseniasDistintas():
                context = {
                    'username': None,
                    'form': form,
                    'mensaje': 'Las contraseñas introducidas son distintas.',
                }
            else:
                context = {
                    'username': None,
                    'form': form,
                    'mensaje': 'Error desconocido.',
                }
            return render(request, 'registro.html', context)
    else:
        username = 'default'
        form = registroForm()
        context = {
            'username': username,
            'form': form,
        }
        return render(request, 'registro.html', context)
def geoETSIIT(request):
    GEOCODE_BASE_URL = 'http://maps.google.com/maps/api/geocode/xml'
    # URL_ETSIIT = '?address=Periodista Daniel Saucedo Aranda 18014 GRANADA Spain'
    URL_ETSIIT = '?address=ETSIIT GRANADA Spain'
    result = ""
    tree = etree.parse(GEOCODE_BASE_URL + URL_ETSIIT)
    result += "<ul>"
    items = tree.xpath('//address_component')
    for i in items:
        lname = i.xpath('long_name')
        type = i.xpath('type')
        # Only one type and one long_name appear per component, hence the '[0]'
        if type[0].text == 'locality':
            print(">" + lname[0].text)
            result += "<li>Localidad: <strong>" + lname[0].text + "</strong></li>"
        elif type[0].text == 'administrative_area_level_4':
            print(">" + lname[0].text)
            result += "<li>Municipio: <strong>" + lname[0].text + "</strong></li>"
        elif type[0].text == 'administrative_area_level_3':
            print(">" + lname[0].text)
            result += "<li>Comarca: <strong>" + lname[0].text + "</strong></li>"
        elif type[0].text == 'administrative_area_level_2':
            print(">" + lname[0].text)
            result += "<li>Provincia: <strong>" + lname[0].text + "</strong></li>"
        elif type[0].text == 'administrative_area_level_1':
            print(">" + lname[0].text)
            result += "<li>Comunidad: <strong>" + lname[0].text + "</strong></li>"
    result += "</ul>"
    context = {
        'url': GEOCODE_BASE_URL + URL_ETSIIT,
        'form': result,
    }
    return render(request, 'geo-etsiit.html', context)
def elpais(request):
    # BASE_URL = 'http://ep00.epimg.net/rss/elpais/portada.xml'
    BASE_URL = 'http://ep00.epimg.net/rss/tecnologia/portada.xml'
    NOMBRE_URL = 'RSS de Tecnología'
    result = ""
    tree = etree.parse(BASE_URL)
    # result += "<ul>"
    images = tree.xpath('//enclosure/@url')
    for i in images:
        # print(">" + i)
        result += "<div class='col-xs-6 col-sm-4 col-md-3'>"
        result += '<a href="' + i + '" target="_blank">'
        result += '<img class="img-responsive" src="' + i + '" alt="">'
        result += "</a>"
        result += "</div>"
    # result += "</ul>"
    context = {
        'nombre_url': NOMBRE_URL,
        'url': BASE_URL,
        'form': result,
    }
    return render(request, 'elpais.html', context)
def crawler(request):
    client = MongoClient()
    db = client.db_ssbw
    noticias_tb = db.noticias
    NOMBRE = "Servicio de búsqueda"
    result = ""
    if request.method == 'POST':
        categoria = request.POST.get("keyword", "")
        if categoria.replace(" ", "") != "":
            noticias = noticias_tb.find({"categorias_clean": {"$regex": unidecode(categoria), "$options": "i"}})
            # print("post:" + categoria)
            # print(unidecode(categoria))
            # print("count:" + str(noticias.count()))
            if noticias.count() != 0:
                result += "<p class='text-mute'>" + str(noticias.count()) + " resultados encontrados.</p>"
                for i in noticias:
                    title = i["titulo"]
                    link = i["link"]
                    categorias = i["categorias"]
                    categorias_clean = i["categorias_clean"]
                    result += "<div class='col-xs-6 col-sm-4 col-md-3'><div class='panel panel-default'><div class='panel-body'>"
                    result += '<h4>' + title + '</h4>'
                    result += '<p><a href="' + link + '" target="_blank">Enlace</a></p>'
                    for k in range(len(categorias)):
                        if str.lower(unidecode(categoria)) == categorias_clean[k]:
                            result += "<span class='label label-success'>" + categorias[k] + "</span><br/>"
                        elif str.lower(unidecode(categoria)) in categorias_clean[k]:
                            result += "<span class='label label-primary'>" + categorias[k] + "</span><br/>"
                        else:
                            result += "<span class='label label-gray'>" + categorias[k] + "</span><br/>"
                    result += "</div></div></div>"
            else:
                result += "<p class='text-warning'>No se han encontrado resultados.</p>"
        else:
            result += "<p class='text-danger'>Debe introducir un término para la búsqueda.</p>"
        context = {
            'nombre': NOMBRE,
            'url': "",
            'contenido': result,
            'cabecera': 'Resultados de la búsqueda',
            'keyword': categoria,
            'POST': True
        }
        return render(request, 'crawler.html', context)
    else:
        URL_ELPAIS = 'http://servicios.elpais.com/rss/'
        BASE_URL = 'http://ep00.epimg.net/rss/tecnologia/portada.xml'
        result += "<p class='text-muted'>Escriba una categoría en el cuadro de búsqueda para realizar una consulta.</p>"
        context = {
            'nombre': NOMBRE,
            'url': BASE_URL,
            'contenido': result,
            'cabecera': 'Bienvenido al servicio de búsqueda de noticias',
            'POST': False
        }
        return render(request, 'crawler.html', context)
def updatebd(request):
    nuevasNoticias = 0
    client = MongoClient()
    db = client.db_ssbw
    noticias_tb = db.noticias
    URL_ELPAIS = 'http://servicios.elpais.com/rss/'
    BASE_URL = 'http://ep00.epimg.net/rss/tecnologia/portada.xml'
    tree = etree.parse(BASE_URL)
    items = tree.xpath('//item')
    for i in items:
        title = i.xpath('title')[0].text
        link = i.xpath('link')[0].text
        categorias = []
        categorias_clean = []
        for j in i.xpath('category'):
            categorias.append(j.text)
            categorias_clean.append(str.lower(unidecode(j.text)))
        unItem = {"titulo": title, "link": link, "categorias": categorias, "categorias_clean": categorias_clean}
        if noticias_tb.find(unItem).count() == 0:
            nuevasNoticias += 1
            noticias_tb.insert(unItem)
    return JsonResponse({'numItems': str(noticias_tb.count()), 'nuevosItems': str(nuevasNoticias)})
| 37.036939 | 131 | 0.539004 | 1,435 | 14,037 | 5.209059 | 0.239024 | 0.025017 | 0.033043 | 0.025284 | 0.440134 | 0.366823 | 0.282943 | 0.246288 | 0.203077 | 0.186488 | 0 | 0.010327 | 0.32393 | 14,037 | 379 | 132 | 37.036939 | 0.777345 | 0.0488 | 0 | 0.431894 | 0 | 0.009967 | 0.239163 | 0.017474 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.046512 | 0.039867 | null | null | 0.019934 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6c58b942bcebd4a683ecfeb9a7a984410ec44775 | 436 | py | Python | generate.py | g3y/password | 75333c6a204995148b8e69b49116bb0d0ef74fff | [
"MIT"
] | null | null | null | generate.py | g3y/password | 75333c6a204995148b8e69b49116bb0d0ef74fff | [
"MIT"
] | null | null | null | generate.py | g3y/password | 75333c6a204995148b8e69b49116bb0d0ef74fff | [
"MIT"
] | null | null | null | digits = '0123456789'
chars = 'abcdefghijklmn' + \
        'opqrstuvwxyz'
up = chars.upper()
special = '_!$%&?ù'
alphabet = digits + chars + up + special  # avoid shadowing the built-in all()

from random import choice

password = ''.join(
    choice(alphabet) for i in range(10)
)

f = open('ascii.txt', 'r')
file_contents = f.read()
print("\x1b[1;32m ")
print(file_contents)
f.close()
print("\033[0;31m")
print(password)
print("\033[0;37;40m")
| 18.956522 | 40 | 0.582569 | 57 | 436 | 4.403509 | 0.684211 | 0.095618 | 0.103586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090361 | 0.238532 | 436 | 22 | 41 | 19.818182 | 0.665663 | 0 | 0 | 0 | 0 | 0 | 0.256039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.055556 | 0 | 0.055556 | 0.277778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
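`random.choice` is fine for throwaway strings, but for real passwords the standard-library `secrets` module draws from the OS cryptographic RNG. A minimal sketch of the same idea (character set simplified to ASCII; not part of the original script):

```python
import secrets
import string

# Same shape as the generator above, but cryptographically secure
alphabet = string.ascii_letters + string.digits + '_!$%&?'
password = ''.join(secrets.choice(alphabet) for _ in range(10))
```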
6c649ba69e1815d00b4b141bdc92997beb8e2f34 | 9,987 | py | Python | experiments/SUNRGBD_few_shot.py | rkwitt/AGA | 21c2344225c24da6991002244e3fe730306b5c25 | [
"Apache-2.0"
] | 11 | 2017-08-25T21:44:34.000Z | 2022-03-10T14:24:43.000Z | experiments/SUNRGBD_few_shot.py | rkwitt/AGA | 21c2344225c24da6991002244e3fe730306b5c25 | [
"Apache-2.0"
] | 2 | 2017-11-03T13:40:02.000Z | 2019-06-17T00:30:59.000Z | experiments/SUNRGBD_few_shot.py | rkwitt/AGA | 21c2344225c24da6991002244e3fe730306b5c25 | [
"Apache-2.0"
] | 4 | 2018-07-07T04:15:02.000Z | 2020-02-11T05:02:23.000Z | """Few-shot object recognition experiments with AGA.
Author(s): rkwitt, mdixit, 2017
"""
import sys
sys.path.append("../")
sys.path.append("liblinear-2.11/python")
import liblinear
import liblinearutil
from misc.tools import build_file_list, balanced_sampling
import scipy
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import Normalizer, MinMaxScaler, StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from termcolor import colored, cprint
from scipy.io import loadmat, savemat
from scipy import stats
import numpy as np
import argparse
import glob
import pickle
import time
import sys
import os
class ResultStatistics:
    def __init__(self):
        self._container = {}

    def add_result(self, tag, val):
        if not tag in self._container:
            self._container[tag] = []
        self._container[tag].append(val)

    def print_current(self, keys=None):
        out_str = ''
        if keys is None:
            for key in self._container:
                out_str += '[{}] {}: {:.2f} | '.format(
                    str(len(self._container[key])).zfill(5),
                    key,
                    self._container[key][-1])
        else:
            for key in keys:
                out_str += '[{}] {}: {:.2f} | '.format(
                    str(len(self._container[key])).zfill(5),
                    key,
                    self._container[key][-1])
        cprint(out_str, 'blue')

    def print_summary(self, keys=None):
        out_str = ''
        if keys is None:
            for key in self._container:
                avg = np.array([self._container[key]]).mean()
                out_str += '{}: {:.2f} | '.format(key, avg)
        else:
            for key in keys:
                avg = np.array([self._container[key]]).mean()
                out_str += '{}: {:.2f} | '.format(key, avg)
        cprint(out_str, 'red')
def setup_parser():
    parser = argparse.ArgumentParser(description='One-shot object recognition experiments')
    parser.add_argument(
        "--verbose",
        action="store_true",
        default=False,
        dest="verbose",
        help="enables verbose output")
    parser.add_argument(
        "--omit_original",
        action="store_true",
        default=False,
        dest="omit_original",
        help="omit original features from the synthesized training set")
    parser.add_argument(
        "--img_list",
        metavar='',
        help="list with image names (no extension)")
    parser.add_argument(
        "--shots",
        type=int,
        default=1,
        help="nr. of few-shot samples (default: 1)")
    parser.add_argument(
        "--dim",
        type=int,
        default=4096,
        help="dimensionality of features (default: 4096)")
    parser.add_argument(
        "--runs",
        type=int,
        default=10,
        help="number of evaluation runs (default: 10)")
    parser.add_argument(
        "--data_postfix",
        metavar='',
        help="postfix of data files (with extension)")
    parser.add_argument(
        "--img_base",
        metavar='',
        help="base directory for image data")
    return parser
def collect_data(img_data_files):
    data = {}
    # iterate over all data files
    for data_file in img_data_files:
        print(data_file)
        with open(data_file, 'r') as fid:
            tmp = pickle.load(fid)
        # iterate over all available detections for that image
        for det_idx in tmp:
            obj_idx = tmp[det_idx]['obj_idx']                 # Object ID
            obj_syn = tmp[det_idx]['CNN_activation_syn']      # AGA-syn. feature(s)
            obj_org = tmp[det_idx]['CNN_activation_org']      # Original feature
            if not obj_idx in data:
                data[obj_idx] = []
            # store AGA-syn. + original features as a list of tuples per
            # object class.
            data[obj_idx].append((obj_syn, obj_org))
    return data
def select_few_shot(data, args):
    # **DEBUG** - for comparison to MATLAB version
    debug_indices = [44, 110, 47, 65, 83, 23, 117, 97, 632, 128]
    debug_indices = [u - 1 for u in debug_indices]

    data_trn_few = np.array([]).reshape(0, args.dim)
    data_trn_syn = np.array([]).reshape(0, args.dim)
    data_tst_org = np.array([]).reshape(0, args.dim)

    data_trn_syn_lab = []  # AGA-synthesized training labels
    data_trn_few_lab = []  # Few-shot labels
    data_tst_org_lab = []  # Testing labels

    for m, obj_id in enumerate(data):
        all_indices = np.arange(len(data[obj_id]))
        while True:
            valid = True
            few_indices = np.random.choice(
                len(data[obj_id]),
                size=args.shots,
                replace=False)
            for fidx in few_indices:
                tmp_syn_data, tmp_org_data = data[obj_id][fidx]
                if tmp_syn_data is None:
                    valid = False
            if valid:
                break
        prev_few_size = data_trn_few.shape[0]
        prev_syn_size = data_trn_syn.shape[0]
        for fidx in few_indices:
            tmp_syn_data, tmp_org_data = data[obj_id][fidx]
            data_trn_few = np.vstack((data_trn_few, tmp_org_data))
            data_trn_syn = np.vstack((data_trn_syn, tmp_syn_data))
            if not args.omit_original:
                data_trn_syn = np.vstack((data_trn_syn, tmp_org_data))
        org_diff = data_trn_few.shape[0] - prev_few_size
        syn_diff = data_trn_syn.shape[0] - prev_syn_size
        data_trn_few_lab += [obj_id for k in np.arange(org_diff)]
        data_trn_syn_lab += [obj_id for k in np.arange(syn_diff)]
        assert data_trn_syn.shape[0] == len(data_trn_syn_lab)
        assert data_trn_few.shape[0] == len(data_trn_few_lab)
        tst_indices = np.setdiff1d(all_indices, few_indices)
        data_tst_tmp = np.zeros((len(tst_indices), args.dim))
        for n, tidx in enumerate(tst_indices):
            _, tmp_org_data = data[obj_id][tidx]
            data_tst_tmp[n, :] = tmp_org_data
        data_tst_org = np.vstack((data_tst_org, data_tst_tmp))
        data_tst_org_lab += [obj_id for k in np.arange(len(tst_indices))]

    # some sanity assertions
    assert args.shots * len(np.unique(data_trn_few_lab)) == len(data_trn_few_lab)
    assert np.array_equal(np.unique(data_trn_few_lab), np.unique(data_trn_syn_lab)) == True
    assert np.array_equal(np.unique(data_trn_few_lab), np.unique(data_tst_org_lab)) == True

    ret_data = {
        "data_trn_syn": data_trn_syn,
        "data_trn_few": data_trn_few,
        "data_tst_org": data_tst_org,
        "data_trn_syn_lab": data_trn_syn_lab,
        "data_trn_few_lab": data_trn_few_lab,
        "data_tst_org_lab": data_tst_org_lab}
    return ret_data
def eval_SVM(X, y, Xhat, yhat):
    # create classification problem
    problem = liblinearutil.problem(y, X)
    # set SVM parameters
    svm_param = liblinearutil.parameter('-s 3 -c 10 -q -B 1')
    # train SVM
    model = liblinearutil.train(problem, svm_param)
    # predict and evaluate
    p_label, p_acc, p_val = liblinearutil.predict(yhat, Xhat, model, '-q')
    # compute accuracy
    acc, mse, scc = liblinearutil.evaluations(yhat, p_label)
    return acc
def eval_NN1(X, y, Xhat, yhat):
    # create 1-NN classifier
    neigh = KNeighborsClassifier(n_neighbors=1)
    # train :)
    neigh.fit(X, y)
    # compute accuracy
    acc = accuracy_score(yhat, neigh.predict(Xhat))
    return acc * 100.0


def eval(Xtrn, Xtrn_lab, Xtst, Xtst_lab):
    # guarantee valid activations
    Xtrn = Xtrn.clip(min=0)
    Xtst = Xtst.clip(min=0)
    # L1 normalization
    normalizer = Normalizer(norm='l1', copy=True)
    norm_Xtrn = normalizer.fit_transform(Xtrn)
    norm_Xtst = normalizer.transform(Xtst)
    # clean up numerical noise
    norm_Xtrn[np.abs(norm_Xtrn) < 1e-9] = 0
    norm_Xtst[np.abs(norm_Xtst) < 1e-9] = 0
    # create sparse matrices (many entries are 0 anyway)
    norm_Xtrn_sparse = scipy.sparse.csr_matrix(norm_Xtrn)
    norm_Xtst_sparse = scipy.sparse.csr_matrix(norm_Xtst)
    svm_acc = eval_SVM(
        norm_Xtrn_sparse,
        Xtrn_lab,
        norm_Xtst_sparse,
        Xtst_lab)
    nn1_acc = eval_NN1(
        norm_Xtrn_sparse,
        Xtrn_lab,
        norm_Xtst_sparse,
        Xtst_lab)
    return {'SVM': svm_acc, '1NN': nn1_acc}
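# The preprocessing above (clip negatives, L1-normalize, zero out tiny values) can be
# sketched without sklearn. `l1_normalize` below is a hypothetical pure-Python stand-in
# for the Normalizer pipeline, not part of this project's API:

```python
def l1_normalize(row, eps=1e-9):
    # clip negative activations to zero, as eval() does with clip(min=0)
    clipped = [max(x, 0.0) for x in row]
    total = sum(clipped)
    if total == 0:
        return clipped
    # scale so entries sum to 1, then zero out numerical noise below eps
    return [0.0 if abs(x / total) < eps else x / total for x in clipped]

print(l1_normalize([3.0, -1.0, 1.0]))  # [0.75, 0.0, 0.25]
```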


def main(argv=None):
    if argv is None:
        argv = sys.argv
    np.random.seed(seed=1234)
    args = setup_parser().parse_args()
    img_data_files = build_file_list(
        args.img_list,
        args.img_base,
        args.data_postfix)
    data_source = collect_data(img_data_files)
    if args.verbose:
        for obj_id in data_source:
            cprint('Object: {} - {} detections'.format(
                obj_id, len(data_source[obj_id])), 'blue')
    stats = ResultStatistics()
    for run_id in np.arange(args.runs):
        data = select_few_shot(data_source, args)
        tmp_result_one = eval(
            data['data_trn_few'],
            data['data_trn_few_lab'],
            data['data_tst_org'],
            data['data_tst_org_lab'])
        stats.add_result('SVM (w/o AGA)', tmp_result_one['SVM'])
        stats.add_result('1NN (w/o AGA)', tmp_result_one['1NN'])
        tmp_result_syn = eval(
            data['data_trn_syn'],
            data['data_trn_syn_lab'],
            data['data_tst_org'],
            data['data_tst_org_lab'])
        stats.add_result('SVM (w AGA)', tmp_result_syn['SVM'])
        stats.add_result('1NN (w AGA)', tmp_result_syn['1NN'])
        stats.print_current([
            'SVM (w/o AGA)',
            'SVM (w AGA)',
            '1NN (w/o AGA)',
            '1NN (w AGA)'])
    stats.print_summary([
        'SVM (w/o AGA)',
        'SVM (w AGA)',
        '1NN (w/o AGA)',
        '1NN (w AGA)'])


if __name__ == "__main__":
    main()
# expressmanage/products/admin.py (MIT)
from django.contrib import admin
from .models import Product, ContainerType, RateSlab


class RateSlabInline(admin.TabularInline):
    model = RateSlab
    extra = 3


class ContainerTypeAdmin(admin.ModelAdmin):
    inlines = [RateSlabInline]


# Register your models here.
admin.site.register(Product)
admin.site.register(ContainerType, ContainerTypeAdmin)
# zeus/vcs/asserts.py (Apache-2.0)
def assert_revision(revision, author=None, message=None):
"""Asserts values of the given fields in the provided revision.
:param revision: The revision to validate
:param author: that must be present in the ``revision``
:param message: message substring that must be present in ``revision``
"""
if author:
assert author == revision.author
if message:
assert message in revision.message
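# A minimal usage sketch: assert_revision only needs an object with `author` and
# `message` attributes, so a SimpleNamespace stub works. The stub values below are
# invented for illustration and are not zeus's real revision model:

```python
from types import SimpleNamespace

def assert_revision(revision, author=None, message=None):
    if author:
        assert author == revision.author
    if message:
        assert message in revision.message

rev = SimpleNamespace(author='jane', message='vcs: handle empty repositories')
assert_revision(rev, author='jane', message='empty')  # passes silently
try:
    assert_revision(rev, author='bob')                # mismatched author raises
except AssertionError:
    print('author mismatch detected')
```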
# templates/led-button-input/led-button-input.py (MIT)
# modules
import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BCM)
sleepTime = .1

GPIO.setup(4, GPIO.OUT)
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        # mirror the button state (GPIO 17) on the LED pin (GPIO 4)
        GPIO.output(4, GPIO.input(17))
        sleep(sleepTime)
finally:
    GPIO.output(4, False)
    GPIO.cleanup()
# tests/test_dim.py (MIT)
import numpy as np
from numpy.testing import assert_array_equal

from seai_deap import dim


def test_calculate_building_volume() -> None:
    expected_output = np.array(4)
    output = dim.calculate_building_volume(
        ground_floor_area=np.array(1),
        first_floor_area=np.array(1),
        second_floor_area=np.array(1),
        third_floor_area=np.array(1),
        ground_floor_height=np.array(1),
        first_floor_height=np.array(1),
        second_floor_height=np.array(1),
        third_floor_height=np.array(1),
    )
    assert_array_equal(output, expected_output)


def test_calculate_total_floor_area() -> None:
    expected_output = np.array(4)
    output = dim.calculate_total_floor_area(
        ground_floor_area=np.array(1),
        first_floor_area=np.array(1),
        second_floor_area=np.array(1),
        third_floor_area=np.array(1),
    )
    assert_array_equal(output, expected_output)
# app/entities/new_schemas.py (MIT)
from collections import namedtuple
import graphene
import datetime
import json

from .new_models import Agent, Community, Collection


def _json_object_hook(d):
    return namedtuple('X', d.keys())(*d.values())


def json2obj(data):
    return json.loads(data, object_hook=_json_object_hook)
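# `_json_object_hook` and `json2obj` are pure stdlib, so their behavior is easy to
# demonstrate in isolation: every JSON object becomes a namedtuple, giving attribute
# access instead of dict indexing. The payload below is invented for illustration:

```python
import json
from collections import namedtuple

def _json_object_hook(d):
    return namedtuple('X', d.keys())(*d.values())

def json2obj(data):
    return json.loads(data, object_hook=_json_object_hook)

agent = json2obj('{"name": "ada", "knows": ["bob", "eve"]}')
print(agent.name)      # attribute access instead of agent["name"]
print(agent.knows[1])
```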


class AgentSchema(graphene.ObjectType):
    name = graphene.String(required=True)
    dateTimeAdded = graphene.DateTime()
    knows = graphene.List(graphene.String)
    belongs = graphene.List(graphene.String)
    tags = graphene.List(graphene.String)
    email = graphene.String(required=False)
    loves = graphene.String(required=False)
    hates = graphene.String(required=False)

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.name = kwargs.pop('name')
        self.agent = Agent(name=self.name)

    def resolve_knows(self, info):
        _agent = Agent(name=self.name).fetch()
        return _agent.knows

    def resolve_belongs(self, info):
        _agent = Agent(name=self.name).fetch()
        return _agent.belongs


class CreateAgent(graphene.Mutation):
    class Arguments:
        name = graphene.String(required=True)
        dateTimeAdded = graphene.DateTime()
        knows = graphene.List(graphene.String)
        belongs = graphene.List(graphene.String)
        tags = graphene.List(graphene.String)
        email = graphene.String(required=False)
        loves = graphene.String(required=False)
        hates = graphene.String(required=False)

    success = graphene.Boolean()
    agent = graphene.Field(lambda: AgentSchema)

    def mutate(self, info, **kwargs):
        agent = Agent(**kwargs)
        agent.save()
        agent._link_connections()
        agent._link_communities()
        return CreateAgent(agent=agent, success=True)


class CommunitySchema(graphene.ObjectType):
    name = graphene.String()
    description = graphene.String()

    def __init__(self, **kwargs):
        self._id = kwargs.pop('_id')
        super().__init__(**kwargs)


class CreateCommunity(graphene.Mutation):
    class Arguments:
        name = graphene.String(required=True)
        description = graphene.String()

    success = graphene.Boolean()
    community = graphene.Field(lambda: CommunitySchema)

    def mutate(self, info, **kwargs):
        community = Community(**kwargs)
        community.save()
        return CreateCommunity(community=community, success=True)


class CollectionSchema(graphene.ObjectType):
    name = graphene.String()
    description = graphene.String()

    def __init__(self, **kwargs):
        self._id = kwargs.pop('_id')
        super().__init__(**kwargs)


class CreateCollection(graphene.Mutation):
    class Arguments:
        name = graphene.String(required=True)
        description = graphene.String()

    success = graphene.Boolean()
    collection = graphene.Field(lambda: CollectionSchema)

    def mutate(self, info, **kwargs):
        collection = Collection(**kwargs)
        collection.save()
        return CreateCollection(collection=collection, success=True)


class Query(graphene.ObjectType):
    agent = graphene.Field(lambda: AgentSchema, name=graphene.String(required=True))
    community = graphene.Field(lambda: CommunitySchema, name=graphene.String())
    collection = graphene.Field(lambda: CollectionSchema, name=graphene.String())

    def resolve_agent(self, info, name):
        agent = Agent(name=name)
        return AgentSchema(**agent.as_dict())


class Mutations(graphene.ObjectType):
    create_agent = CreateAgent.Field()
    create_community = CreateCommunity.Field()
    create_collection = CreateCollection.Field()


schema = graphene.Schema(query=Query, mutation=Mutations, auto_camelcase=False)
# gallery/tests.py (MIT)
from django.test import TestCase
from .models import Posts, Location, Category


# Create your tests here.
class locationTest(TestCase):
    def setUp(self):
        self.new_location = Location(location="nairobi")

    def test_instance(self):
        self.assertTrue(isinstance(self.new_location, Location))

    def test_data(self):
        self.assertEqual(self.new_location.location, "nairobi")

    def test_save(self):
        self.new_location.save()
        location = Location.objects.all()
        self.assertTrue(len(location) > 0)

    def test_delete(self):
        location = Location.objects.filter(id=1)
        location.delete()
        locale = Location.objects.all()
        self.assertTrue(len(locale) == 0)

    def test_update_location(self):
        self.new_location.save()
        self.update_location = Location.objects.filter(location='nairobi').update(location='Kenya')
        self.updated_location = Location.objects.get(location='Kenya')
        self.assertEqual(self.updated_location.location, 'Kenya')

    def test_get_location_by_id(self):
        self.new_location.save()
        locale = Location.objects.get(id=1)
        self.assertEqual(locale.location, 'nairobi')


class CategoryTest(TestCase):
    def setUp(self):
        self.new_category = Category(name="test")

    def test_instance(self):
        self.assertTrue(isinstance(self.new_category, Category))

    def test_data(self):
        self.assertEqual(self.new_category.name, "test")

    def test_save(self):
        self.new_category.save()
        categories = Category.objects.all()
        self.assertTrue(len(categories) > 0)

    def test_delete(self):
        category = Category.objects.filter(id=1)
        category.delete()
        cat = Category.objects.all()
        self.assertTrue(len(cat) == 0)

    def test_update_category(self):
        self.new_category.save()
        self.update_cat = Category.objects.filter(name='test').update(name='wedding')
        self.updated_cat = Category.objects.get(name='wedding')
        self.assertEqual(self.updated_cat.name, 'wedding')

    def test_get_category_by_id(self):
        self.new_category.save()
        cat = Category.objects.get(id=1)
        self.assertEqual(cat.name, 'test')


class postsTest(TestCase):
    def setUp(self):
        self.new_location = Location(location="nairobi")
        # self.new_category = Category(name="test")
        self.new_location.save()
        # self.new_category.save()
        self.new_post = Posts(name="sheila", description="like eating", location=self.new_location)

    def test_instance(self):
        self.assertTrue(isinstance(self.new_post, Posts))

    def test_data(self):
        self.assertEqual(self.new_post.name, "sheila")
        self.assertEqual(self.new_post.description, "like eating")

    def test_save(self):
        self.new_post.save()
        posts = Posts.objects.all()
        self.assertTrue(len(posts) > 0)

    def test_delete(self):
        post = Posts.objects.filter(id=1)
        post.delete()
        posts = Posts.objects.all()
        self.assertTrue(len(posts) == 0)

    def test_update_post(self):
        self.new_post.save()
        self.update_post = Posts.objects.filter(name='sheila').update(name='cake')
        self.updated_post = Posts.objects.get(name='cake')
        self.assertEqual(self.updated_post.name, 'cake')

    def test_get_post_by_id(self):
        self.new_post.save()
        posts = Posts.objects.get(id=1)
        self.assertEqual(posts.name, 'sheila')
# py/obiwan/decals_sim_randoms.py (BSD-3-Clause)
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import os
import pickle


def add_scatter(ax, x, y, c='b', m='o', lab='', s=80, drawln=False, alpha=1):
    ax.scatter(x, y, s=s, lw=2., facecolors='none', edgecolors=c,
               marker=m, label=lab, alpha=alpha)
    if drawln:
        ax.plot(x, y, c=c, ls='-')


def draw_unit_sphere(ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000, seed=2015):
    '''from https://github.com/desihub/imaginglss/master/scripts/imglss-mpi-make-random.py'''
    rng = np.random.RandomState(seed)
    u1, u2 = rng.uniform(size=(2, Nran))
    cmin = np.sin(dcmin*np.pi/180)
    cmax = np.sin(dcmax*np.pi/180)
    RA = ramin + u1*(ramax - ramin)
    DEC = 90 - np.arccos(cmin + u2*(cmax - cmin))*180./np.pi
    return RA, DEC
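# The trick in draw_unit_sphere — sampling sin(dec) uniformly so the randoms have
# uniform surface density rather than piling up at high declination — also works with
# only the standard library. This hypothetical sketch mirrors the function without numpy:

```python
import math
import random

def draw_unit_sphere_stdlib(ramin=243., ramax=246., dcmin=7., dcmax=10.,
                            nran=1000, seed=2015):
    rng = random.Random(seed)
    # uniform in sin(dec) <=> uniform surface density on the sphere
    cmin = math.sin(math.radians(dcmin))
    cmax = math.sin(math.radians(dcmax))
    ra = [ramin + rng.random() * (ramax - ramin) for _ in range(nran)]
    dec = [90. - math.degrees(math.acos(cmin + rng.random() * (cmax - cmin)))
           for _ in range(nran)]
    return ra, dec

ra, dec = draw_unit_sphere_stdlib(nran=10000)
assert min(ra) >= 243. and max(ra) <= 246.
assert min(dec) > 7. - 1e-6 and max(dec) < 10. + 1e-6
```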


class QuickRandoms(object):
    '''Draw randomly from the unit sphere.

    Example:
        qran = QuickRandoms(ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000)
        qran.get_randoms()
        # save and plot
        qran.save_randoms()
        qran.plot(xlim=(244., 244.1), ylim=(8., 8.1))
    '''

    def __init__(self, ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000):
        self.ramin = ramin
        self.ramax = ramax
        self.dcmin = dcmin
        self.dcmax = dcmax
        self.Nran = Nran

    def get_randoms(self, fn='quick_randoms.pickle'):
        if os.path.exists(fn):
            ra, dec = self.read_randoms()
        else:
            ra, dec = draw_unit_sphere(ramin=self.ramin, ramax=self.ramax,
                                       dcmin=self.dcmin, dcmax=self.dcmax, Nran=self.Nran)
        self.ra, self.dec = ra, dec

    def save_randoms(self, fn='quick_randoms.pickle'):
        if not os.path.exists(fn):
            fout = open(fn, 'w')
            pickle.dump((self.ra, self.dec), fout)
            fout.close()
            print("Wrote randoms to: %s" % fn)
        else:
            print("WARNING: %s exists, not overwriting it" % fn)

    def read_randoms(self, fn='quick_randoms.pickle'):
        print("Reading randoms from %s" % fn)
        fobj = open(fn, 'r')
        ra, dec = pickle.load(fobj)
        fobj.close()
        return ra, dec

    def plot(self, xlim=None, ylim=None, text=''):
        fig, ax = plt.subplots()
        add_scatter(ax, self.ra, self.dec, c='b', m='.', alpha=0.5)
        ax.set_xlabel('RA')
        ax.set_ylabel('DEC')
        if xlim is not None and ylim is not None:
            ax.set_xlim(xlim)
            ax.set_ylim(ylim)
            text = '_xlim%0.5f_%.5f_ylim%.5f_%.5f' % (xlim[0], xlim[1], ylim[0], ylim[1])
        plt.savefig("quick_randoms%s.png" % text)
        plt.close()


class DesiRandoms(object):
    '''Draw randomly from the unit sphere & provide 2 masks:
    mask1: inbricks -- indices where ra,dec pts are in LegacySurvey bricks
    mask2: inimages -- union with inbricks and where we have legacy survey imaging data at these ra,dec pts

    Example:
        ran = DesiRandoms(ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000)
        ran.get_randoms()
        # save randoms if the file does not exist, then plot
        ran.save_randoms()
        ran.plot(xlim=(244., 244.1), ylim=(8., 8.1))
    '''

    def __init__(self, ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000):
        self.ramin = ramin
        self.ramax = ramax
        self.dcmin = dcmin
        self.dcmax = dcmax
        self.Nran = Nran

    def get_randoms(self, fn='desi_randoms.pickle'):
        if os.path.exists(fn):
            self.ra, self.dec, self.i_inbricks, self.i_inimages = self.read_randoms()
        else:
            self.ra, self.dec, self.i_inbricks, self.i_inimages = self.make_randoms()

    def save_randoms(self, fn='desi_randoms.pickle'):
        if not os.path.exists(fn):
            fout = open(fn, 'w')
            pickle.dump((self.ra, self.dec, self.i_inbricks, self.i_inimages), fout)
            fout.close()
            print("Wrote: %s" % fn)
        else:
            print("WARNING: not saving randoms b/c file already exists: %s" % fn)

    def make_randoms(self):
        '''Nran -- # of randoms'''
        import imaginglss
        from imaginglss.model import dataproduct
        from imaginglss.model.datarelease import contains
        import h5py
        print("Creating %d Randoms" % self.Nran)
        # dr2 cache
        decals = imaginglss.DECALS('/project/projectdirs/desi/users/burleigh/dr3_testdir_for_bb/imaginglss/dr2.conf.py')
        foot = decals.datarelease.create_footprint((self.ramin, self.ramax, self.dcmin, self.dcmax))
        print('Total sq.deg. covered by Bricks= ', foot.area)
        # Sample the full ra,dec box
        ra, dec = draw_unit_sphere(ramin=self.ramin, ramax=self.ramax,
                                   dcmin=self.dcmin, dcmax=self.dcmax, Nran=self.Nran)
        #randoms = np.empty(len(ra), dtype=dataproduct.RandomCatalogue)
        #randoms['RA'] = ra
        #randoms['DEC'] = dec
        # Get indices of inbrick points, copied from def filter()
        coord = np.array((ra, dec))
        bid = foot.brickindex.query_internal(coord)
        i_inbricks = contains(foot._covered_brickids, bid)
        i_inbricks = np.where(i_inbricks)[0]
        print('Number Density in bricks= ', len(ra)/foot.area)
        # Union of inbricks and have imaging data; evaluate depths at ra,dec
        coord = coord[:, i_inbricks]
        cat_lim = decals.datarelease.read_depths(coord, 'grz')
        depth = cat_lim['DECAM_DEPTH'] ** -0.5 / cat_lim['DECAM_MW_TRANSMISSION']
        nanmask = np.isnan(depth)
        nanmask = np.all(nanmask[:, [1, 2, 4]], axis=1)  # shape (Nran,)
        i_inimages = i_inbricks[nanmask == False]
        print('ra.size=%d, i_inbricks.size=%d, i_inimages.size=%d' % (ra.size, i_inbricks.size, i_inimages.size))
        # We are not using Yu's randoms dtype=dataproduct.RandomCatalogue
        #randoms['INTRINSIC_NOISELEVEL'][:, :6] = (cat_lim['DECAM_DEPTH'] ** -0.5 / cat_lim['DECAM_MW_TRANSMISSION'])
        #randoms['INTRINSIC_NOISELEVEL'][:, 6:] = 0
        #nanmask = np.isnan(randoms['INTRINSIC_NOISELEVEL'])
        #randoms['INTRINSIC_NOISELEVEL'][nanmask] = np.inf
        print('Total sq.deg. where have imaging data approx.= ', foot.area*(len(ra[i_inimages]))/len(ra))
        print('Number Density for sources where have images= ', len(ra[i_inimages])/foot.area)
        # save ra,dec,mask to file
        #with h5py.File('eboss_randoms.hdf5', 'w') as ff:
        #    ds = ff.create_dataset('_HEADER', shape=(0,))
        #    ds.attrs['FootPrintArea'] = decals.datarelease.footprint.area
        #    ds.attrs['NumberDensity'] = 1.0 * len(randoms) / decals.datarelease.footprint.area
        #    for column in randoms.dtype.names:
        #        ds = ff.create_dataset(column, data=randoms[column])
        #    ds = ff.create_dataset('nanmask', data=nanmask)
        return ra, dec, i_inbricks, i_inimages

    def plot(self, name='desirandoms.png'):
        fig, ax = plt.subplots(1, 3, sharey=True, sharex=True, figsize=(15, 5))
        add_scatter(ax[0], self.ra, self.dec, c='b', m='o')
        add_scatter(ax[1], self.ra[self.i_inbricks], self.dec[self.i_inbricks], c='b', m='.')
        add_scatter(ax[2], self.ra[self.i_inimages], self.dec[self.i_inimages], c='b', m='.')
        for i, title in zip(range(3), ['All', 'in Bricks', 'in Images']):
            ti = ax[i].set_title(title)
            xlab = ax[i].set_xlabel('ra')
            ax[i].set_ylim((self.dec.min(), self.dec.max()))
            ax[i].set_xlim((self.ra.min(), self.ra.max()))
        ylab = ax[0].set_ylabel('dec')
        plt.savefig(name, bbox_extra_artists=[ti, xlab, ylab], bbox_inches='tight', dpi=150)
        plt.close()
        print("wrote: %s" % name)


class Angular_Correlator(object):
    '''Compute w(theta) from observed ra,dec and random ra,dec.

    Uses the Landy-Szalay estimator: w = (DD - 2*DR + RR) / RR
    Two numerical methods: 1) Yu Feng's kdcount, 2) astroML

    Example:
        ac = Angular_Correlator(gal_ra, gal_dec, ran_ra, ran_dec)
        ac.compute()
        ac.plot()
    '''

    def __init__(self, gal_ra, gal_dec, ran_ra, ran_dec, ncores=1):
        self.gal_ra = gal_ra
        self.gal_dec = gal_dec
        self.ran_ra = ran_ra
        self.ran_dec = ran_dec
        self.ncores = ncores

    def compute(self):
        self.theta, self.w = {}, {}
        for key in ['astroML', 'yu']:
            self.theta[key], self.w[key] = self.get_angular_corr(whos=key)
        self.plot()

    def get_angular_corr(self, whos='yu'):
        if whos == 'yu':
            return self.ac_yu()
        elif whos == 'astroML':
            return self.ac_astroML()
        else:
            raise ValueError()

    def ac_astroML(self):
        '''from two_point_angular() in astroML/correlation.py'''
        from astroML.correlation import two_point, ra_dec_to_xyz, angular_dist_to_euclidean_dist
        # project to 3d
        data = np.asarray(ra_dec_to_xyz(self.gal_ra, self.gal_dec), order='F').T
        data_R = np.asarray(ra_dec_to_xyz(self.ran_ra, self.ran_dec), order='F').T
        # convert spherical bins to cartesian bins
        bins = 10 ** np.linspace(np.log10(1. / 60.), np.log10(6), 16)
        bins_transform = angular_dist_to_euclidean_dist(bins)
        w = two_point(data, bins_transform, method='landy-szalay', data_R=data_R)
        bin_centers = 0.5 * (bins[1:] + bins[:-1])
        return bin_centers, w

    def ac_yu(self):
        from kdcount import correlate
        from kdcount import sphere
        abin = sphere.AngularBinning(np.logspace(-4, -2.6, 10))
        D = sphere.points(self.gal_ra, self.gal_dec)
        R = sphere.points(self.ran_ra, self.ran_dec)  # weights=wt_array
        DD = correlate.paircount(D, D, abin, np=self.ncores)
        DR = correlate.paircount(D, R, abin, np=self.ncores)
        RR = correlate.paircount(R, R, abin, np=self.ncores)
        r = D.norm / R.norm
        w = (DD.sum1 - 2 * r * DR.sum1 + r ** 2 * RR.sum1) / (r ** 2 * RR.sum1)
        return abin.angular_centers, w

    def plot(self, name='wtheta.png'):
        fig, ax = plt.subplots()
        for key, col, mark in zip(['yu', 'astroML'], ['g', 'b'], ['o'] * 2):
            print("%s: theta,w" % key, self.theta[key], self.w[key])
            add_scatter(ax, self.theta[key], self.w[key], c=col, m=mark, lab=key, alpha=0.5)
        t = np.array([0.01, 10])
        plt.plot(t, 10 * (t / 0.01) ** -0.8, ':k', lw=1)
        ax.legend(loc='upper right', scatterpoints=1)
        xlab = ax.set_xlabel(r'$\theta$ (deg)')
        ylab = ax.set_ylabel(r'$\hat{w}(\theta)$')
        ax.set_xscale('log')
        ax.set_yscale('log')
        plt.savefig(name, bbox_extra_artists=[xlab, ylab], bbox_inches='tight', dpi=150)
        plt.close()
        print("wrote: %s" % name)


def ac_unit_test():
    '''angular correlation func unit test'''
    qran = QuickRandoms(ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000)
    qran.get_randoms()
    # subset
    index = np.all((qran.ra >= 244., qran.ra <= 244.5,
                    qran.dec >= 8., qran.dec <= 8.5), axis=0)
    ra, dec = qran.ra[index], qran.dec[index]
    # use these as Ducks for DesiRandoms
    ran = DesiRandoms()
    ran.ra, ran.dec = ra, dec
    index = np.all((ran.ra >= 244., ran.ra <= 244.25), axis=0)
    ran.i_inbricks = np.where(index)[0]
    index = np.all((index, ran.dec >= 8.1, ran.dec <= 8.4), axis=0)
    ran.i_inimages = np.where(index)[0]
    ran.plot()
    # wtheta
    ac = Angular_Correlator(ran.ra[ran.i_inimages], ran.dec[ran.i_inimages], ran.ra, ran.dec)
    ac.compute()
    ac.plot()
    print('finished unit_test')


if __name__ == '__main__':
    #ac_unit_test()
    #Nran = int(2.4e3*9.)
    #ran = DesiRandoms(ramin=243., ramax=246., dcmin=7., dcmax=10., Nran=216000)
    Nran = int(2.4e3*1.e2)
    ran = DesiRandoms(ramin=120., ramax=130., dcmin=20., dcmax=30., Nran=Nran)
    ran.get_randoms()
    # save randoms if the file does not exist, then plot
    ran.save_randoms(fn='desi_randoms_qual.pickle')
    ran.plot()
# purchase/migrations/0004_auto_20200928_0513.py (MIT)
# Generated by Django 3.1.1 on 2020-09-28 05:13
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('organization', '0004_auto_20200914_0713'),
        ('products', '0003_product_minimum_price'),
        ('purchase', '0003_auto_20200927_0929'),
    ]

    operations = [
        migrations.CreateModel(
            name='PurchaseInvoice',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=255)),
                ('date', models.DateField()),
                ('finalized', models.BooleanField(choices=[(0, 'Pending'), (1, 'Finalized')], default=0, max_length=20)),
                ('organization', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='purchaseinvoice', to='organization.organization')),
            ],
            options={
                'ordering': ['-id'],
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='PurchaseInvoiceEntry',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('price', models.DecimalField(decimal_places=2, max_digits=12)),
                ('quantity', models.DecimalField(decimal_places=2, max_digits=12)),
                ('product', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='purchaseinvoiceentry', to='products.product')),
                ('purchase', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='entries', to='purchase.purchaseinvoice')),
            ],
            options={
                'verbose_name_plural': 'Purchase Entries',
            },
        ),
        migrations.RemoveField(
            model_name='purchaseentry',
            name='product',
        ),
        migrations.RemoveField(
            model_name='purchaseentry',
            name='purchase',
        ),
        migrations.DeleteModel(
            name='Purchase',
        ),
        migrations.DeleteModel(
            name='PurchaseEntry',
        ),
    ]
# app/training/models/chromosome.py (MIT)
from utils import generate_random_range, generate_random_int_range, read_dict_from_json
from .neural_bird import NeuralBird
import random
class Chromosome:
def __init__(self, bird: NeuralBird, fitness=0, generations_alive=0, ancestor_generations=0):
self.bird = bird
self.fitness = fitness
self.generations_alive = generations_alive
self.ancestor_generations = ancestor_generations
def mutate(self, mutation_probability=0.2):
for index in range(len(self.bird.weights)):
if random.random() < mutation_probability:
self.bird.weights[index] = generate_random_range()
@staticmethod
def reproduce(father, mother, crossover_probability):
if random.random() < crossover_probability:
slice_index = generate_random_int_range(max_range=len(mother.bird.weights) - 2,
min_range=1)
weights = mother.bird.weights[:slice_index]
weights.extend(father.bird.weights[slice_index:])
return Chromosome(NeuralBird(weights),
ancestor_generations=max(mother.ancestor_generations, father.ancestor_generations))
return None
def get_fitness(self):
return self.fitness
def to_dict(self):
return {
"score": self.fitness,
"generations_alive": self.generations_alive,
"ancestor_generations": self.ancestor_generations,
"weights": self.bird.get_list_weights()
}
def complete_training(self, score):
self.ancestor_generations += 1
self.fitness = self.fitness * self.generations_alive + score
self.generations_alive += 1
self.fitness /= int(self.generations_alive)
# self.fitness = max(self.fitness, score)
def to_str(self):
return str(self.bird.weights)
def __str__(self):
return str(self.bird.weights)
def __lt__(self, other):
return self.fitness < other.fitness
@staticmethod
def read_from_file(filename, population_size):
data = read_dict_from_json(filename)
if data is None:
print("Error opening JSON file {}.".format(filename))
return Chromosome.generate_new_random_population(population_size)
population = [Chromosome(bird=NeuralBird(element["weights"]),
fitness=element["score"],
generations_alive=element["generations_alive"],
ancestor_generations=element["ancestor_generations"])
for element in data]
if len(population) < population_size:
for _ in range(population_size - len(population)):
population.append(Chromosome(NeuralBird()))
if len(population) > population_size:
population = population[:population_size]
return population
@staticmethod
def read_best_from_file(filename):
data = read_dict_from_json(filename)
return data[0]["weights"] if data else None
@staticmethod
def generate_new_random_population(population_size):
return [Chromosome(NeuralBird()) for _ in range(population_size)]
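The single-point crossover used in `Chromosome.reproduce` can be sketched in isolation. This is a hypothetical stand-in, not the project's code: `single_point_crossover` is an illustrative name, and the inclusive bounds of the random cut point are an assumption about what `generate_random_int_range` does.

```python
import random

def single_point_crossover(mother_w, father_w, crossover_probability=1.0):
    # Hypothetical stand-in for Chromosome.reproduce: the child takes the
    # mother's weight prefix and the father's suffix at a random cut point.
    if random.random() < crossover_probability:
        slice_index = random.randint(1, len(mother_w) - 2)  # assumed inclusive bounds
        return mother_w[:slice_index] + father_w[slice_index:]
    return None  # no crossover happened

random.seed(0)
child = single_point_crossover([0.0] * 6, [1.0] * 6)
print(len(child), child == sorted(child))  # 6 True (zeros before ones)
```

Because the cut point is strictly inside the list, every child mixes genes from both parents.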
| 39.228916 | 113 | 0.643428 | 345 | 3,256 | 5.814493 | 0.2 | 0.104187 | 0.04985 | 0.023928 | 0.155533 | 0.102692 | 0.033898 | 0.033898 | 0 | 0 | 0 | 0.004212 | 0.270885 | 3,256 | 82 | 114 | 39.707317 | 0.840775 | 0.011978 | 0 | 0.119403 | 1 | 0 | 0.041369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.179104 | false | 0 | 0.044776 | 0.089552 | 0.402985 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6c7f81bf1bdb6134fa5fcc1e8fd8269537afff4a | 24,402 | py | Python | bloodhound_theme/bhtheme/theme.py | HelionDevPlatform/bloodhound | 206b0d9898159fa8297ad1e407d38484fa378354 | [
"Apache-2.0"
] | null | null | null | bloodhound_theme/bhtheme/theme.py | HelionDevPlatform/bloodhound | 206b0d9898159fa8297ad1e407d38484fa378354 | [
"Apache-2.0"
] | null | null | null | bloodhound_theme/bhtheme/theme.py | HelionDevPlatform/bloodhound | 206b0d9898159fa8297ad1e407d38484fa378354 | [
"Apache-2.0"
] | null | null | null |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import sys
from genshi.builder import tag
from genshi.core import TEXT
from genshi.filters.transform import Transformer
from genshi.output import DocType
from trac.config import ListOption, Option
from trac.core import Component, TracError, implements
from trac.mimeview.api import get_mimetype
from trac.resource import get_resource_url, Neighborhood, Resource
from trac.ticket.model import Ticket, Milestone
from trac.ticket.notification import TicketNotifyEmail
from trac.ticket.web_ui import TicketModule
from trac.util.compat import set
from trac.util.presentation import to_json
from trac.util.translation import _
from trac.versioncontrol.web_ui.browser import BrowserModule
from trac.web.api import IRequestFilter, IRequestHandler, ITemplateStreamFilter
from trac.web.chrome import (add_stylesheet, INavigationContributor,
ITemplateProvider, prevnext_nav, Chrome)
from trac.wiki.admin import WikiAdmin
from themeengine.api import ThemeBase, ThemeEngineSystem
from bhdashboard.util import dummy_request
from bhdashboard.web_ui import DashboardModule
from bhdashboard import wiki
from multiproduct.env import ProductEnvironment
from multiproduct.web_ui import PRODUCT_RE, ProductModule
try:
from multiproduct.ticket.web_ui import ProductTicketModule
except ImportError:
ProductTicketModule = None
class BloodhoundTheme(ThemeBase):
"""Look and feel of Bloodhound issue tracker.
"""
template = htdocs = css = screenshot = disable_trac_css = True
disable_all_trac_css = True
BLOODHOUND_KEEP_CSS = set(
(
'diff.css', 'code.css'
)
)
BLOODHOUND_TEMPLATE_MAP = {
# Admin
'admin_accountsconfig.html': ('bh_admin_accountsconfig.html', '_modify_admin_breadcrumb'),
'admin_accountsnotification.html': ('bh_admin_accountsnotification.html', '_modify_admin_breadcrumb'),
'admin_basics.html': ('bh_admin_basics.html', '_modify_admin_breadcrumb'),
'admin_components.html': ('bh_admin_components.html', '_modify_admin_breadcrumb'),
'admin_enums.html': ('bh_admin_enums.html', '_modify_admin_breadcrumb'),
'admin_logging.html': ('bh_admin_logging.html', '_modify_admin_breadcrumb'),
'admin_milestones.html': ('bh_admin_milestones.html', '_modify_admin_breadcrumb'),
'admin_perms.html': ('bh_admin_perms.html', '_modify_admin_breadcrumb'),
'admin_plugins.html': ('bh_admin_plugins.html', '_modify_admin_breadcrumb'),
'admin_products.html': ('bh_admin_products.html', '_modify_admin_breadcrumb'),
'admin_repositories.html': ('bh_admin_repositories.html', '_modify_admin_breadcrumb'),
'admin_users.html': ('bh_admin_users.html', '_modify_admin_breadcrumb'),
'admin_versions.html': ('bh_admin_versions.html', '_modify_admin_breadcrumb'),
# no template substitutions below - use the default template,
# but call the modifier nonetheless
'repository_links.html': ('repository_links.html', '_modify_admin_breadcrumb'),
# Preferences
'prefs.html': ('bh_prefs.html', None),
'prefs_account.html': ('bh_prefs_account.html', None),
'prefs_advanced.html': ('bh_prefs_advanced.html', None),
'prefs_datetime.html': ('bh_prefs_datetime.html', None),
'prefs_general.html': ('bh_prefs_general.html', None),
'prefs_keybindings.html': ('bh_prefs_keybindings.html', None),
'prefs_language.html': ('bh_prefs_language.html', None),
'prefs_pygments.html': ('bh_prefs_pygments.html', None),
'prefs_userinterface.html': ('bh_prefs_userinterface.html', None),
# Search
'search.html': ('bh_search.html', '_modify_search_data'),
# Wiki
'wiki_delete.html': ('bh_wiki_delete.html', None),
'wiki_diff.html': ('bh_wiki_diff.html', None),
'wiki_edit.html': ('bh_wiki_edit.html', None),
'wiki_rename.html': ('bh_wiki_rename.html', None),
'wiki_view.html': ('bh_wiki_view.html', '_modify_wiki_page_path'),
# Ticket
'diff_view.html': ('bh_diff_view.html', None),
'manage.html': ('manage.html', '_modify_resource_breadcrumb'),
'milestone_edit.html': ('bh_milestone_edit.html', '_modify_roadmap_page'),
'milestone_delete.html': ('bh_milestone_delete.html', '_modify_roadmap_page'),
'milestone_view.html': ('bh_milestone_view.html', '_modify_roadmap_page'),
'query.html': ('bh_query.html', '_add_products_general_breadcrumb'),
'report_delete.html': ('bh_report_delete.html', '_add_products_general_breadcrumb'),
'report_edit.html': ('bh_report_edit.html', '_add_products_general_breadcrumb'),
'report_list.html': ('bh_report_list.html', '_add_products_general_breadcrumb'),
'report_view.html': ('bh_report_view.html', '_add_products_general_breadcrumb'),
'roadmap.html': ('roadmap.html', '_modify_roadmap_page'),
'ticket.html': ('bh_ticket.html', '_modify_ticket'),
'ticket_delete.html': ('bh_ticket_delete.html', None),
'ticket_preview.html': ('bh_ticket_preview.html', None),
# Attachment
'attachment.html': ('bh_attachment.html', None),
'preview_file.html': ('bh_preview_file.html', None),
# Version control
'browser.html': ('bh_browser.html', '_modify_browser'),
'dir_entries.html': ('bh_dir_entries.html', None),
'revisionlog.html': ('bh_revisionlog.html', '_modify_browser'),
# Multi Product
'product_view.html': ('bh_product_view.html', '_add_products_general_breadcrumb'),
# General purpose
'about.html': ('bh_about.html', None),
'history_view.html': ('bh_history_view.html', None),
'timeline.html': ('bh_timeline.html', None),
# Account manager plugin
'account_details.html': ('bh_account_details.html', None),
'login.html': ('bh_login.html', None),
'register.html': ('bh_register.html', None),
'reset_password.html': ('bh_reset_password.html', None),
'user_table.html': ('bh_user_table.html', None),
'verify_email.html': ('bh_verify_email.html', None),
}
BOOTSTRAP_CSS_DEFAULTS = (
# ('XPath expression', ['default', 'bootstrap', 'css', 'classes'])
("body//table[not(contains(@class, 'table'))]", # TODO: Accurate ?
['table', 'table-condensed']),
)
labels_application_short = Option('labels', 'application_short',
'Bloodhound', """A short version of the application name, most
commonly displayed in text, titles and labels""")
labels_application_full = Option('labels', 'application_full',
'Apache Bloodhound', """The full application name, including any
trademark; currently used only in footers and the about page""")
labels_footer_left_prefix = Option('labels', 'footer_left_prefix', '',
"""Text to display before full application name in footers""")
labels_footer_left_postfix = Option('labels', 'footer_left_postfix', '',
"""Text to display after full application name in footers""")
labels_footer_right = Option('labels', 'footer_right', '',
"""Text to use as the right aligned footer""")
_wiki_pages = None
Chrome.default_html_doctype = DocType.HTML5
implements(IRequestFilter, INavigationContributor, ITemplateProvider,
ITemplateStreamFilter)
from trac.web import main
main.default_tracker = 'http://issues.apache.org/bloodhound'
def _get_whitelabelling(self):
"""Gets the whitelabelling config values"""
return {
'application_short': self.labels_application_short,
'application_full': self.labels_application_full,
'footer_left_prefix': self.labels_footer_left_prefix,
'footer_left_postfix': self.labels_footer_left_postfix,
'footer_right': self.labels_footer_right,
'application_version': application_version
}
# ITemplateStreamFilter methods
def filter_stream(self, req, method, filename, stream, data):
"""Insert default Bootstrap CSS classes when rendering
legacy templates (determined by the template name prefix)
and rename wiki guide links.
"""
tx = Transformer('body')
def add_classes(classes):
"""Return a function ensuring CSS classes will be there for element.
"""
def attr_modifier(name, event):
attrs = event[1][1]
class_list = attrs.get(name, '').split()
self.log.debug('BH Theme : Element classes ' + str(class_list))
out_classes = ' '.join(set(class_list + classes))
self.log.debug('BH Theme : Inserting class ' + out_classes)
return out_classes
return attr_modifier
# Insert default bootstrap CSS classes if necessary
for xpath, classes in self.BOOTSTRAP_CSS_DEFAULTS:
tx = tx.end().select(xpath) \
.attr('class', add_classes(classes))
# Rename wiki guide links
tx = tx.end() \
.select("body//a[contains(@href,'/wiki/%s')]" % wiki.GUIDE_NAME) \
.map(lambda text: wiki.new_name(text), TEXT)
# Rename trac error
app_short = self.labels_application_short
tx = tx.end() \
.select("body//div[@class='error']/h1") \
.map(lambda text: text.replace("Trac", app_short), TEXT)
return stream | tx
# IRequestFilter methods
def pre_process_request(self, req, handler):
"""Pre process request filter"""
def hwiki(*args, **kw):
def new_name(name):
new_name = wiki.new_name(name)
if new_name != name:
if not self._wiki_pages:
wiki_admin = WikiAdmin(self.env)
self._wiki_pages = wiki_admin.get_wiki_list()
if new_name in self._wiki_pages:
return new_name
return name
a = tuple([new_name(x) for x in args])
return req.href.__call__("wiki", *a, **kw)
req.href.wiki = hwiki
return handler
def post_process_request(self, req, template, data, content_type):
"""Post process request filter.
Removes all trac provided css if required"""
if template is None and data is None and \
sys.exc_info() == (None, None, None):
return template, data, content_type
def is_active_theme():
is_active = False
active_theme = ThemeEngineSystem(self.env).theme
if active_theme is not None:
this_theme_name = self.get_theme_names().next()
is_active = active_theme['name'] == this_theme_name
return is_active
req.chrome['labels'] = self._get_whitelabelling()
if data is not None:
data['product_list'] = \
ProductModule.get_product_list(self.env, req)
links = req.chrome.get('links', {})
# replace favicon if appropriate
if self.env.project_icon == 'common/trac.ico':
bh_icon = 'theme/img/bh.ico'
new_icon = {'href': req.href.chrome(bh_icon),
'type': get_mimetype(bh_icon)}
if links.get('icon'):
links.get('icon')[0].update(new_icon)
if links.get('shortcut icon'):
links.get('shortcut icon')[0].update(new_icon)
is_active_theme = is_active_theme()
if self.disable_all_trac_css and is_active_theme:
if self.disable_all_trac_css:
stylesheets = links.get('stylesheet', [])
if stylesheets:
path = '/chrome/common/css/'
_iter = ([ss, ss.get('href', '')] for ss in stylesheets)
links['stylesheet'] = \
[ss for ss, href in _iter if path not in href or
href.rsplit('/', 1)[-1] in self.BLOODHOUND_KEEP_CSS]
template, modifier = \
self.BLOODHOUND_TEMPLATE_MAP.get(template, (template, None))
if modifier is not None:
modifier = getattr(self, modifier)
modifier(req, template, data, content_type, is_active_theme)
if is_active_theme and data is not None:
data['responsive_layout'] = \
self.env.config.getbool('bloodhound', 'responsive_layout',
'true')
data['bhrelations'] = \
self.env.config.getbool('components', 'bhrelations.*', 'false')
return template, data, content_type
# ITemplateProvider methods
def get_htdocs_dirs(self):
"""Ensure dashboard htdocs will be there even if
`bhdashboard.web_ui.DashboardModule` is disabled.
"""
if not self.env.is_component_enabled(DashboardModule):
return DashboardModule(self.env).get_htdocs_dirs()
def get_templates_dirs(self):
"""Ensure dashboard templates will be there even if
`bhdashboard.web_ui.DashboardModule` is disabled.
"""
if not self.env.is_component_enabled(DashboardModule):
return DashboardModule(self.env).get_templates_dirs()
# Request modifiers
def _modify_search_data(self, req, template, data, content_type, is_active):
"""Insert breadcrumbs and context navigation items in the search web UI
"""
if is_active:
# Insert query string in search box (see bloodhound_theme.html)
req.search_query = data.get('query')
# Context nav
prevnext_nav(req, _('Previous'), _('Next'))
# Breadcrumbs nav
data['resourcepath_template'] = 'bh_path_search.html'
def _modify_wiki_page_path(self, req, template, data, content_type,
is_active):
"""Override wiki breadcrumbs nav items
"""
if is_active:
data['resourcepath_template'] = 'bh_path_wikipage.html'
def _modify_roadmap_page(self, req, template, data, content_type,
is_active):
"""Insert roadmap.css + products breadcrumb
"""
add_stylesheet(req, 'dashboard/css/roadmap.css')
self._add_products_general_breadcrumb(req, template, data,
content_type, is_active)
data['milestone_list'] = [m.name for m in Milestone.select(self.env)]
req.chrome['ctxtnav'] = []
def _modify_ticket(self, req, template, data, content_type, is_active):
"""Ticket modifications
"""
self._modify_resource_breadcrumb(req, template, data, content_type,
is_active)
# add a creation event to the changelog if the ticket exists
if data['ticket'].exists:
data['changes'] = [{'comment': '',
'author': data['author_id'],
'fields': {u'reported': {'label': u'Reported'},
},
'permanent': 1,
'cnum': 0,
'date': data['start_time'],
},
] + data['changes']
# and set the default comment order
if not req.session.get('ticket_comments_order'):
req.session['ticket_comments_order'] = 'newest'
def _modify_resource_breadcrumb(self, req, template, data, content_type,
is_active):
"""Provides logic for breadcrumb resource permissions
"""
if data and ('ticket' in data.keys()) and data['ticket'].exists:
data['resourcepath_template'] = 'bh_path_ticket.html'
# determine path permissions
for resname, permname in [('milestone', 'MILESTONE_VIEW'),
('product', 'PRODUCT_VIEW')]:
res = Resource(resname, data['ticket'][resname])
data['path_show_' + resname] = permname in req.perm(res)
# add milestone list + current milestone to the breadcrumb
data['milestone_list'] = [m.name
for m in Milestone.select(self.env)]
mname = data['ticket']['milestone']
if mname:
data['milestone'] = Milestone(self.env, mname)
def _modify_admin_breadcrumb(self, req, template, data, content_type, is_active):
# override 'normal' product list with the admin one
def admin_url(prefix):
env = ProductEnvironment.lookup_env(self.env, prefix)
href = ProductEnvironment.resolve_href(env, self.env)
return href.admin()
global_settings = (None, _('(Global settings)'), admin_url(None))
data['admin_product_list'] = [global_settings] + \
ProductModule.get_product_list(self.env, req, admin_url)
if isinstance(req.perm.env, ProductEnvironment):
product = req.perm.env.product
data['admin_current_product'] = \
(product.prefix, product.name,
req.href.products(product.prefix, 'admin'))
else:
data['admin_current_product'] = global_settings
data['resourcepath_template'] = 'bh_path_general.html'
def _modify_browser(self, req, template, data, content_type, is_active):
"""Locate path to file in breadcrumbs area rather than title.
Add browser-specific CSS.
"""
data.update({
'resourcepath_template': 'bh_path_links.html',
'path_depth_limit': 2
})
add_stylesheet(req, 'theme/css/browser.css')
def _add_products_general_breadcrumb(self, req, template, data,
content_type, is_active):
if isinstance(req.perm.env, ProductEnvironment):
data['resourcepath_template'] = 'bh_path_general.html'
# INavigationContributor methods
def get_active_navigation_item(self, req):
return
def get_navigation_items(self, req):
if 'BROWSER_VIEW' in req.perm and 'VERSIONCONTROL_ADMIN' in req.perm:
bm = self.env[BrowserModule]
if bm and not list(bm.get_navigation_items(req)):
yield ('mainnav', 'browser',
tag.a(_('Browse Source'),
href=req.href.wiki('TracRepositoryAdmin')))
class QuickCreateTicketDialog(Component):
implements(IRequestFilter, IRequestHandler)
qct_fields = ListOption('ticket', 'quick_create_fields',
'product, version, type',
doc="""Multiple selection fields displayed in create ticket menu""")
# IRequestFilter(Interface):
def pre_process_request(self, req, handler):
"""Nothing to do.
"""
return handler
def post_process_request(self, req, template, data, content_type):
"""Append necessary ticket data
"""
try:
tm = self._get_ticket_module()
except TracError:
# no ticket module so no create ticket button
return template, data, content_type
if (template, data, content_type) != (None,) * 3: # TODO: Check !
if data is None:
data = {}
req = dummy_request(self.env)
ticket = Ticket(self.env)
tm._populate(req, ticket, False)
all_fields = dict([f['name'], f]
for f in tm._prepare_fields(req, ticket)
if f['type'] == 'select')
product_field = all_fields['product']
if product_field:
if self.env.product:
product_field['value'] = self.env.product.prefix
else:
# Global scope, now check default_product_prefix is valid
default_prefix = self.config.get('multiproduct',
'default_product_prefix')
try:
ProductEnvironment.lookup_env(self.env, default_prefix)
except LookupError:
product_field['value'] = product_field['options'][0]
else:
product_field['value'] = default_prefix
data['qct'] = {
'fields': [all_fields[k] for k in self.qct_fields
if k in all_fields],
'hidden_fields': [all_fields[k] for k in all_fields.keys()
if k not in self.qct_fields]
}
return template, data, content_type
# IRequestHandler methods
def match_request(self, req):
"""Handle requests sent to /qct
"""
m = PRODUCT_RE.match(req.path_info)
return req.path_info == '/qct' or \
(m and m.group('pathinfo').strip('/') == 'qct')
def process_request(self, req):
"""Forward new ticket request to `trac.ticket.web_ui.TicketModule`
but return plain text suitable for AJAX requests.
"""
try:
tm = self._get_ticket_module()
req.perm.require('TICKET_CREATE')
summary = req.args.pop('field_summary', '')
desc = ""
attrs = dict([k[6:], v] for k, v in req.args.iteritems()
if k.startswith('field_'))
product, tid = self.create(req, summary, desc, attrs, True)
except Exception, exc:
self.log.exception("BH: Quick create ticket failed %s" % (exc,))
req.send(str(exc), 'text/plain', 500)
else:
tres = Neighborhood('product', product)('ticket', tid)
href = req.href
req.send(to_json({'product': product, 'id': tid,
'url': get_resource_url(self.env, tres, href)}),
'application/json')
def _get_ticket_module(self):
ptm = None
if ProductTicketModule is not None:
ptm = self.env[ProductTicketModule]
tm = self.env[TicketModule]
if not (tm is None) ^ (ptm is None):
raise TracError('Unable to load TicketModule (disabled)?')
if tm is None:
tm = ptm
return tm
# Public API
def create(self, req, summary, description, attributes={}, notify=False):
""" Create a new ticket, returning the ticket ID.
PS: Borrowed from XmlRpcPlugin.
"""
t = Ticket(self.env)
t['summary'] = summary
t['description'] = description
t['reporter'] = req.authname
for k, v in attributes.iteritems():
t[k] = v
t['status'] = 'new'
t['resolution'] = ''
t.insert()
if notify:
try:
tn = TicketNotifyEmail(self.env)
tn.notify(t, newticket=True)
except Exception, e:
self.log.exception("Failure sending notification on creation "
"of ticket #%s: %s" % (t.id, e))
return t['product'], t.id
from pkg_resources import get_distribution
application_version = get_distribution('BloodhoundTheme').version
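The class merging done in `add_classes` round-trips through a `set`, so duplicate classes collapse but output order is unspecified. A minimal standalone sketch (hypothetical helper name, independent of Genshi):

```python
def merge_css_classes(existing_attr, extra_classes):
    # Mirrors add_classes above: split the current class attribute, union it
    # with the default classes, and re-join. Order is unspecified (set semantics).
    return ' '.join(set(existing_attr.split() + extra_classes))

merged = merge_css_classes('table striped', ['table', 'table-condensed'])
print(sorted(merged.split()))  # ['striped', 'table', 'table-condensed']
```

Note that `'table'`, present in both inputs, appears only once in the result.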
| 42.438261 | 110 | 0.60286 | 2,715 | 24,402 | 5.194843 | 0.190792 | 0.023398 | 0.024248 | 0.029353 | 0.198242 | 0.129041 | 0.099263 | 0.072178 | 0.06055 | 0.035167 | 0 | 0.001209 | 0.288091 | 24,402 | 574 | 111 | 42.512195 | 0.810672 | 0.073232 | 0 | 0.087179 | 0 | 0 | 0.239985 | 0.085088 | 0 | 0 | 0 | 0.003484 | 0 | 0 | null | null | 0.002564 | 0.074359 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6c829a5e205c26ec37374d7c16898408dd6b3e10 | 617 | py | Python | Lib/site-packages/altendpy/misc.py | fochoao/cpython | 3dc84b260e5bced65ebc2c45c40c8fa65f9b5aa9 | [
"bzip2-1.0.6",
"0BSD"
] | null | null | null | Lib/site-packages/altendpy/misc.py | fochoao/cpython | 3dc84b260e5bced65ebc2c45c40c8fa65f9b5aa9 | [
"bzip2-1.0.6",
"0BSD"
] | 20 | 2021-05-03T18:02:23.000Z | 2022-03-12T12:01:04.000Z | Lib/site-packages/altendpy/misc.py | fochoao/cpython | 3dc84b260e5bced65ebc2c45c40c8fa65f9b5aa9 | [
"bzip2-1.0.6",
"0BSD"
] | null | null | null | import itertools
def identifier_path(it):
return '__' + '_'.join(
it.__module__.split('.') + [it.__qualname__]
)
# https://docs.python.org/3/library/itertools.html
def pairwise(iterable):
's -> (s0,s1), (s1,s2), (s2, s3), ...'
a, b = itertools.tee(iterable)
next(b, None)
return zip(a, b)
# https://docs.python.org/3/library/itertools.html
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
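Restated inline so the snippet runs standalone, the two itertools recipes above behave like this:

```python
import itertools

def pairwise(iterable):
    # Two iterators over the same data, the second advanced by one step.
    a, b = itertools.tee(iterable)
    next(b, None)
    return zip(a, b)

def grouper(iterable, n, fillvalue=None):
    # n references to ONE iterator, so zip_longest pulls n items per tuple.
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)

print(list(pairwise('abcd')))            # [('a', 'b'), ('b', 'c'), ('c', 'd')]
print(list(grouper('ABCDEFG', 3, 'x')))  # [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
```

The `grouper` trick works because every element of `args` is the same iterator object, not n independent copies.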
| 25.708333 | 60 | 0.636953 | 83 | 617 | 4.578313 | 0.590361 | 0.047368 | 0.078947 | 0.094737 | 0.221053 | 0.221053 | 0.221053 | 0.221053 | 0.221053 | 0 | 0 | 0.017928 | 0.186386 | 617 | 23 | 61 | 26.826087 | 0.739044 | 0.367909 | 0 | 0 | 0 | 0 | 0.184322 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0.071429 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6c88dccf0278d3cda08b47e7b209dee9cebea2dd | 847 | py | Python | venv/lib/python3.7/site-packages/zope/site/tests/test_folder.py | leanhvu86/matrix-server | 6e16fc53dfebaeaf222ff5a371ccffcc65de3818 | [
"Apache-2.0"
] | null | null | null | venv/lib/python3.7/site-packages/zope/site/tests/test_folder.py | leanhvu86/matrix-server | 6e16fc53dfebaeaf222ff5a371ccffcc65de3818 | [
"Apache-2.0"
] | null | null | null | venv/lib/python3.7/site-packages/zope/site/tests/test_folder.py | leanhvu86/matrix-server | 6e16fc53dfebaeaf222ff5a371ccffcc65de3818 | [
"Apache-2.0"
] | null | null | null |
import doctest
import unittest
from zope.site.folder import Folder
from zope.site.testing import siteSetUp, siteTearDown, checker
from zope.site.tests.test_site import TestSiteManagerContainer
def setUp(test=None):
siteSetUp()
def tearDown(test=None):
siteTearDown()
class FolderTest(TestSiteManagerContainer):
def makeTestObject(self):
return Folder()
def test_suite():
flags = doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE
return unittest.TestSuite((
unittest.defaultTestLoader.loadTestsFromName(__name__),
doctest.DocTestSuite('zope.site.folder',
setUp=setUp, tearDown=tearDown),
doctest.DocFileSuite("folder.txt",
setUp=setUp, tearDown=tearDown,
checker=checker, optionflags=flags),
))
| 24.2 | 65 | 0.672963 | 81 | 847 | 6.950617 | 0.444444 | 0.056838 | 0.063943 | 0.092362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24085 | 847 | 34 | 66 | 24.911765 | 0.875583 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.227273 | 0.045455 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
665c80aeff3f68824d60fbf5efe1f7fb14c8d913 | 245 | py | Python | Regs/Block_1/R1600.py | BernardoB95/Extrator_SPEDFiscal | 10b4697833c561d24654251da5f22d044f03fc16 | [
"MIT"
] | 1 | 2021-04-25T13:53:20.000Z | 2021-04-25T13:53:20.000Z | Regs/Block_1/R1600.py | BernardoB95/Extrator_SPEDFiscal | 10b4697833c561d24654251da5f22d044f03fc16 | [
"MIT"
] | null | null | null | Regs/Block_1/R1600.py | BernardoB95/Extrator_SPEDFiscal | 10b4697833c561d24654251da5f22d044f03fc16 | [
"MIT"
] | null | null | null | from ..IReg import IReg
class R1600(IReg):
def __init__(self):
self._header = ['REG',
'COD_PART',
'TOT_CREDITO',
'TOT_DEBITO']
self._hierarchy = "2"
| 18.846154 | 38 | 0.428571 | 22 | 245 | 4.363636 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037879 | 0.461224 | 245 | 12 | 39 | 20.416667 | 0.689394 | 0 | 0 | 0 | 0 | 0 | 0.134694 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6661b76ee106456d1280d4a92aa82e22b084ae1a | 748 | py | Python | script.py | rohank63/SEC | 19db2f8d843712f3aad5e6fe6e94be0b0ea2acca | [
"Apache-2.0"
] | 1 | 2020-05-28T21:11:01.000Z | 2020-05-28T21:11:01.000Z | script.py | rohank63/SEC | 19db2f8d843712f3aad5e6fe6e94be0b0ea2acca | [
"Apache-2.0"
] | null | null | null | script.py | rohank63/SEC | 19db2f8d843712f3aad5e6fe6e94be0b0ea2acca | [
"Apache-2.0"
] | null | null | null | import infer_organism
import subprocess as sp
print(infer_organism.infer(
file_1="./first_mate.fastq",
min_match=2, factor=1,
transcript_fasta="transcripts.fasta.zip"
))
print(infer_organism.infer(
file_1="./SRR13496438.fastq.gz",
min_match=2, factor=1,
transcript_fasta="transcripts.fasta.zip"
))
'''
print(infer_read_orientation.infer(
file_1="./files/SRR13496438.fastq.gz",
fasta="transcripts.fasta.zip",
organism="oaries"
))
import subprocess as sp
file_1 = "./files/SRR13496438.fastq.gz"
quant_single = "kallisto quant -i transcripts.idx -o output" + \
" -l 100 -s 300 --single " + file_1
result = sp.run(quant_single, shell=True,capture_output=True, text=True)
print(result.stderr)
print(result.returncode)
'''
| 18.7 | 72 | 0.73262 | 106 | 748 | 5 | 0.424528 | 0.04717 | 0.056604 | 0.135849 | 0.418868 | 0.418868 | 0.226415 | 0.226415 | 0.226415 | 0.226415 | 0 | 0.059181 | 0.118984 | 748 | 39 | 73 | 19.179487 | 0.745068 | 0 | 0 | 0.666667 | 0 | 0 | 0.263666 | 0.205788 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6664a49daddeba9123acf862abe5792abe8bd30f | 706 | py | Python | setup.py | biocompibens/annotaread | 5aa4a37e731db91746e15b61693f87962afa61f4 | [
"MIT"
] | 12 | 2019-02-11T06:39:19.000Z | 2022-02-17T07:40:14.000Z | setup.py | biocompibens/annotaread | 5aa4a37e731db91746e15b61693f87962afa61f4 | [
"MIT"
] | 10 | 2019-01-11T10:17:44.000Z | 2022-01-28T11:11:26.000Z | setup.py | biocompibens/annotaread | 5aa4a37e731db91746e15b61693f87962afa61f4 | [
"MIT"
] | 3 | 2016-06-09T14:10:24.000Z | 2019-10-10T23:25:06.000Z | #!/usr/bin/env python
from distutils.core import setup
setup(name = "alfa",
py_modules = ["alfa"],
version = "1.1.1",
description = "A simple tool to get a quick overview of the features composing NGS dataset(s).",
author = "Mathieu Bahin",
author_email = "mathieu.bahin@biologie.ens.fr",
maintainer = "Mathieu Bahin",
maintainer_email = "mathieu.bahin@biologie.ens.fr",
url = "https://github.com/biocompibens/ALFA",
scripts=["alfa"],
long_description = open("README").read(),
install_requires=["numpy>=1.15,<1.16", "pysam>=0.15,<0.16", "pybedtools>=0.8,<0.9", "matplotlib>=3.0,<3.1", "progressbar2>=3.37,<3.40"],
license = "MIT"
)
| 37.157895 | 142 | 0.624646 | 96 | 706 | 4.541667 | 0.666667 | 0.110092 | 0.077982 | 0.114679 | 0.137615 | 0.137615 | 0 | 0 | 0 | 0 | 0 | 0.052448 | 0.189802 | 706 | 18 | 143 | 39.222222 | 0.70979 | 0.028329 | 0 | 0 | 0 | 0 | 0.471533 | 0.119708 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
66696dcc60485569857caf8d24f37e051ed45d28 | 2,113 | py | Python | python/rational-numbers/rational_numbers.py | sci-c0/exercism-learning | dd9fb1d2a407085992c3371c1d56456b7ebf9180 | [
"BSD-3-Clause"
] | null | null | null | python/rational-numbers/rational_numbers.py | sci-c0/exercism-learning | dd9fb1d2a407085992c3371c1d56456b7ebf9180 | [
"BSD-3-Clause"
] | null | null | null | python/rational-numbers/rational_numbers.py | sci-c0/exercism-learning | dd9fb1d2a407085992c3371c1d56456b7ebf9180 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import division
class Rational:
def __init__(self, numer, denom):
assert denom != 0, "The denominator of a Rational number cannot be 0"
gcd = self._gcd(abs(numer), abs(denom))
numer = numer // gcd
denom = denom // gcd
numer_sign = numer // abs(numer) if numer else 1
denom_sign = denom // abs(denom)
self.numer = abs(numer) if numer_sign == denom_sign else -abs(numer)
self.denom = abs(denom)
def _gcd(self, a, b):
if a == 0 or a == b:
return b
elif b == 0:
return a
elif a < b:
return self._gcd(a, b % a)
elif a >= b:
return self._gcd(a % b, b)
def __eq__(self, other):
return self.numer == other.numer and self.denom == other.denom
def __repr__(self):
return '{}/{}'.format(self.numer, self.denom)
def __add__(self, other):
return self.__class__(
self.numer * other.denom + other.numer * self.denom,
self.denom * other.denom
)
def __sub__(self, other):
return self.__class__(
self.numer * other.denom - other.numer * self.denom,
self.denom * other.denom
)
def __mul__(self, other):
return self.__class__(
self.numer * other.numer,
self.denom * other.denom
)
def __truediv__(self, other):
return self.__class__(
self.numer * other.denom,
self.denom * other.numer
)
def __abs__(self):
return self.__class__(
abs(self.numer),
abs(self.denom)
)
def __pow__(self, power):
if power != int(power):
# fractional exponent: fall back to a float result
return pow(self.numer, power) / pow(self.denom, power)
power = int(power)
sign = power // abs(power) if power else 1
# raise by |power|; a negative power swaps numerator and denominator
t = (pow(self.numer, abs(power)), pow(self.denom, abs(power)))
return self.__class__(*t[::sign])
def __rpow__(self, base):
return pow(pow(base, self.numer), 1 / self.denom)
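`_gcd` above is the recursive Euclidean algorithm; the equivalent iterative form, shown standalone:

```python
def gcd(a, b):
    # Iterative Euclid: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; a is then the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

print(gcd(12, 18), gcd(7, 5), gcd(0, 9))  # 6 1 9
```

The `gcd(0, b) == b` case matches the first branch of `_gcd`, which is what keeps the constructor safe when the numerator is 0.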
| 27.441558 | 91 | 0.538097 | 261 | 2,113 | 4.057471 | 0.180077 | 0.101983 | 0.084986 | 0.089707 | 0.355996 | 0.276676 | 0.276676 | 0.276676 | 0.240793 | 0.15864 | 0 | 0.005095 | 0.34974 | 2,113 | 76 | 92 | 27.802632 | 0.765648 | 0 | 0 | 0.131148 | 0 | 0 | 0.031708 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 1 | 0.180328 | false | 0 | 0.016393 | 0.131148 | 0.442623 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
# --- file: core/yasg_auto_schema.py | repo: HiroshiFuu/django-rest-drf-yasg-boilerplate | license: Apache-2.0 ---

from drf_yasg.inspectors import SwaggerAutoSchema
from drf_yasg.utils import swagger_settings

from core.yasg_inspector import ExampleSerializerInspector


class NameAsOperationIDAutoSchema(SwaggerAutoSchema):

    def get_operation_id(self, operation_keys):
        operation_id = super(NameAsOperationIDAutoSchema, self).get_operation_id(operation_keys)
        # print(operation_id, operation_keys)
        return operation_id


class SwaggerExampleAutoSchema(SwaggerAutoSchema):

    field_inspectors = [
        ExampleSerializerInspector,
    ] + swagger_settings.DEFAULT_FIELD_INSPECTORS
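The `field_inspectors` attribute above prepends a custom inspector to drf-yasg's defaults, so the custom one gets first chance at each field. The list-concatenation pattern can be sketched standalone (the inspector names here are stand-ins, not drf-yasg's real defaults):

```python
# Stand-ins for drf-yasg's default inspector chain; the real list comes
# from swagger_settings.DEFAULT_FIELD_INSPECTORS.
DEFAULT_FIELD_INSPECTORS = ["ReferencingSerializerInspector", "SimpleFieldInspector"]

class ExampleSchema:
    # Custom inspector first, so it is consulted before the defaults.
    field_inspectors = ["ExampleSerializerInspector"] + DEFAULT_FIELD_INSPECTORS

print(ExampleSchema.field_inspectors[0])  # ExampleSerializerInspector
```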
# --- file: globa_micrograph2np.py | repo: bioinsilico/EM_FILTER | license: Apache-2.0 ---

import numpy as np
import sys


def micrograph2np(width, shift):
    r = int(width / shift - 1)
    # I = np.load("../DATA_SETS/004773_ProtRelionRefine3D/kino.micrograph.numpy.npy")
    I = np.load("../DATA_SETS/004773_ProtRelionRefine3D/full_micrograph.stack_0001.numpy.npy")
    I = (I - I.mean()) / I.std()
    N = int(I.shape[0] / shift)
    M = int(I.shape[1] / shift)
    S = []
    for i in range(N - r):
        for j in range(M - r):
            x1 = i * shift
            x2 = x1 + width
            y1 = j * shift
            y2 = y1 + width
            w = I[x1:x2, y1:y2]
            S.append(w)
    S = np.array(S)
    np.save("../DATA_SETS/004773_ProtRelionRefine3D/fraction_micrograph.numpy", S)
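`micrograph2np` cuts the normalized micrograph into overlapping `width`-by-`width` windows stepped by `shift`; the window-count arithmetic (`N - r` rows by `M - r` columns) can be checked without NumPy on toy dimensions:

```python
def tile_count(n, width, shift):
    # Windows of size `width`, stepped by `shift`, that fit in `n` pixels:
    # the same N - r / M - r arithmetic used in micrograph2np above.
    r = width // shift - 1
    return n // shift - r

N, M = 10, 8             # toy micrograph dimensions
width, shift = 4, 2      # 4x4 windows with 50% overlap
tiles = [(i * shift, j * shift)
         for i in range(tile_count(N, width, shift))
         for j in range(tile_count(M, width, shift))]
print(len(tiles))  # -> 12
```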
# --- file: python/get_dinucleotides.py | repo: kadepettie/mike_tools | license: Unlicense ---

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Take a list of genome positions and return the dinucleotides around it.

For each position, will generate a list of + strand dinucleotides and - strand
dinucleotides.

Created:       2017-07-27 12:02
Last modified: 2017-10-18 00:17
"""
from __future__ import print_function
import os
import sys
import bz2
import gzip
from datetime import timedelta as _td
import logging as _log
import pandas as pd
import fyrd
from Bio import SeqIO as seqio

hg18 = "/godot/genomes/human/hg18"
hg19 = "/godot/genomes/human/hg19"

###############################################################################
#                              Core Algorithm                                 #
###############################################################################
def get_dinucleotides(positions, genome_file, base=0, return_as='list'):
    """Return a list of all + and - strand dinucleotides around each position.

    Will loop through each chromosome and search all positions in that
    chromosome in one batch. Lookup is serial per chromosome.

    Args:
        positions (dict):  Dictionary of {chrom->positions}
        genome_file (str): Location of a genome fasta file or directory of
                           files. If directory, file names must be
                           <chrom_name>.fa[.gz]. Gzipped OK.
        base (int):        Either 0 or 1, base of positions in your list
        return_as (str):   dict: Return a dictionary of:
                           {chrom->{position->{'ref': str, '+': tuple, '-': tuple}}}
                           list: just returns two lists with no positions.
                           df: return DataFrame

    Returns:
        (list, list): + strand dinucleotides, - strand dinucleotides. Returns
                      a dict or DataFrame instead if requested through
                      return_as.
    """
    if os.path.isdir(genome_file):
        chroms = positions.keys()
        files = []
        for chrom in chroms:
            files.append(get_fasta_file(genome_file, chrom))
        if return_as == 'df':
            final = []
        elif return_as == 'dict':
            final = {}
        else:
            final = ([], [])
        for chrom, fl in zip(chroms, files):
            pos = {chrom: positions[chrom]}
            res = get_dinucleotides(pos, fl, base, return_as)
            if return_as == 'df':
                final.append(res)
            elif return_as == 'dict':
                final.update(res)
            else:
                plus, minus = res
                final[0] += plus
                final[1] += minus
        if return_as == 'df':
            print('Converting to dataframe')
            final = pd.concat(final)
        return final
    done = []
    results = {} if return_as in ('dict', 'df') else ([], [])
    with open_zipped(genome_file) as fasta_file:
        for chrom in seqio.parse(fasta_file, 'fasta'):
            if chrom.id not in positions:
                continue
            else:
                done.append(chrom.id)
            if return_as in ('dict', 'df'):
                results[chrom.id] = {}
            for pos in positions[chrom.id]:
                pos = pos - base
                ref = chrom[pos]
                plus1 = chrom[pos-1:pos+1]
                plus2 = chrom[pos:pos+2]
                minus1 = plus1.reverse_complement()
                minus2 = plus2.reverse_complement()
                if return_as in ('dict', 'df'):
                    results[chrom.id][pos] = {
                        'ref': ref,
                        '+': (seq(plus1), seq(plus2)),
                        '-': (seq(minus1), seq(minus2))}
                else:
                    results[0] += [plus1, plus2]
                    results[1] += [minus1, minus2]
    if len(done) != len(positions.keys()):
        print('The following chromosomes were not in files: {}'
              .format([i for i in positions if i not in done]))
    if return_as == 'df':
        print('Converting to dataframe')
        results = dict_to_df(results, base)
    return results
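The extraction above relies on Biopython slicing and `reverse_complement`; the same dinucleotide logic around a 0-based position can be sketched with plain strings:

```python
COMP = str.maketrans('ACGT', 'TGCA')

def revcomp(s):
    # Reverse complement of a DNA string.
    return s.translate(COMP)[::-1]

chrom, pos = 'ACGTAC', 3                           # 0-based position of the 'T'
plus1, plus2 = chrom[pos-1:pos+1], chrom[pos:pos+2]  # dinucleotides ending/starting at pos
minus1, minus2 = revcomp(plus1), revcomp(plus2)
print(plus1, plus2, minus1, minus2)  # GT TA AC TA
```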
def dict_to_df(results, base):
    """Convert results dictionary into a DataFrame."""
    dfs = []
    for chrom, data in results.items():
        nuc_lookup = pd.DataFrame.from_dict(data, orient='index')
        nuc_lookup['chrom'] = chrom
        nuc_lookup['position'] = nuc_lookup.index.to_series().astype(int) + base
        nuc_lookup['snp'] = nuc_lookup.chrom.astype(str) + '.' + nuc_lookup.position.astype(str)
        nuc_lookup.set_index('snp', drop=True, inplace=True)
        dfs.append(nuc_lookup)
    result = pd.concat(dfs)
    dfs = None
    result = result[['ref', '+', '-']]
    # sort_index() returns a new frame, so the result must be reassigned
    result = result.sort_index()
    result.index.name = None
    return result
###############################################################################
#                              Parallelization                                #
###############################################################################


def get_dinucleotides_parallel(positions, genome_file, base=0, return_as='list'):
    """Return a list of all + and - strand dinucleotides around each position.

    Will loop through each chromosome and search all positions in that
    chromosome in one batch. Lookup is parallel per chromosome.

    Args:
        positions (dict):  Dictionary of {chrom->positions}
        genome_file (str): Location of a genome fasta file or directory of
                           files. If directory, file names must be
                           <chrom_name>.fa[.gz]. Gzipped OK. Directory is
                           preferred in parallel mode.
        base (int):        Either 0 or 1, base of positions in your list
        return_as (str):   dict: Return a dictionary of:
                           {chrom->{position->{'ref': str, '+': tuple, '-': tuple}}}
                           list: just returns two lists with no positions.
                           df: return DataFrame

    Returns:
        (list, list): + strand dinucleotides, - strand dinucleotides. Returns
                      a dict or DataFrame instead if requested through
                      return_as.
    """
    outs = []
    for chrom in positions.keys():
        if os.path.isdir(genome_file):
            fa_file = get_fasta_file(genome_file, chrom)
        else:
            # A single fasta file serves every chromosome.
            fa_file = genome_file
        if not os.path.isfile(fa_file):
            raise FileNotFoundError('{} not found.'.format(fa_file))
        mins = int(len(positions[chrom])/2000) + 45
        time = str(_td(minutes=mins))
        outs.append(
            fyrd.submit(
                get_dinucleotides,
                ({chrom: positions[chrom]}, fa_file, base, return_as),
                cores=1, mem='6GB', time=time,
            )
        )
    if return_as == 'df':
        final = []
    elif return_as == 'dict':
        final = {}
    else:
        final = ([], [])
    fyrd.wait(outs)
    print('Getting results')
    for out in outs:
        res = out.get()
        if return_as == 'df':
            if isinstance(res, dict):
                res = dict_to_df(res, base)
            final.append(res)
        elif return_as == 'dict':
            final.update(res)
        else:
            plus, minus = res
            final[0] += plus
            final[1] += minus
    if return_as == 'df':
        print('Joining dataframe')
        final = pd.concat(final)
    return final
###############################################################################
#                             Helper Functions                                #
###############################################################################


def seq(sequence):
    """Convert Bio.Seq object to string."""
    return str(sequence.seq.upper())


def get_fasta_file(directory, name):
    """Look in directory for name.fa or name.fa.gz and return path."""
    fa_file = os.path.join(directory, name + '.fa')
    gz_file = fa_file + '.gz'
    if os.path.isfile(fa_file):
        genome_file = fa_file
    elif os.path.isfile(gz_file):
        genome_file = gz_file
    else:
        raise FileNotFoundError(
            'No {f}.fa or {f}.fa.gz file found in {d}'.format(
                f=name, d=directory
            )
        )
    return genome_file


def open_zipped(infile, mode='r'):
    """Return file handle of file regardless of zipped or not.

    Text mode enforced for compatibility with python2.
    """
    mode = mode[0] + 't'
    p2mode = mode
    if hasattr(infile, 'write'):
        return infile
    if isinstance(infile, str):
        if infile.endswith('.gz'):
            return gzip.open(infile, mode)
        if infile.endswith('.bz2'):
            if hasattr(bz2, 'open'):
                return bz2.open(infile, mode)
            else:
                return bz2.BZ2File(infile, p2mode)
        return open(infile, p2mode)
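`open_zipped` picks an opener by file extension and forces text mode; a condensed round-trip demo with only the stdlib (the `open_zipped_lite` name and temp path are illustration, not part of the original):

```python
import gzip
import os
import tempfile

def open_zipped_lite(path, mode='r'):
    # Condensed version of open_zipped above: choose the opener by
    # extension and force text mode.
    mode = mode[0] + 't'
    opener = gzip.open if path.endswith('.gz') else open
    return opener(path, mode)

tmp = os.path.join(tempfile.mkdtemp(), 'demo.txt.gz')
with gzip.open(tmp, 'wt') as out:
    out.write('chr1\t100\n')
with open_zipped_lite(tmp) as fin:
    content = fin.read()
print(content)
```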
###############################################################################
#                               Run On Files                                  #
###############################################################################


def parse_location_file(infile, base=None):
    """Get a compatible dictionary from an input file.

    Args:
        infile (str): Path to a bed, vcf, or tsv. If tsv should be chrom\\tpos.
                      Filetype detected by extension. Gzipped/Bz2zipped OK.
        base (int):   Force base of file; if not set, bed/tsv assumed base-0,
                      vcf assumed base-1.

    Returns:
        dict: A dict of {chrom->pos}
    """
    if not isinstance(base, int):
        base = 1 if 'vcf' in infile.split('.') else 0
    out = {}
    for chrom, pos in tsv_bed_vcf(infile, base):
        if chrom not in out:
            out[chrom] = []
        out[chrom].append(pos)
    return out


def tsv_bed_vcf(infile, base=0):
    """Iterator for generic tsv, yields column1, column2 for every line.

    column1 is assumed to be string, column2 is converted to int and base is
    subtracted from it.
    """
    with open_zipped(infile) as fin:
        for line in fin:
            if line.startswith('#'):
                continue
            f = line.rstrip().split('\t')
            yield f[0], int(f[1]) - base
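The parsing contract of `tsv_bed_vcf` (skip `#` comment lines, split on tabs, shift by `base`) can be exercised in memory with `io.StringIO` instead of a file on disk:

```python
import io

def tsv_iter(handle, base=0):
    # Same contract as tsv_bed_vcf above, but over an already-open handle
    # so the example needs no file on disk.
    for line in handle:
        if line.startswith('#'):
            continue
        f = line.rstrip().split('\t')
        yield f[0], int(f[1]) - base

vcf_like = io.StringIO('#CHROM\tPOS\nchr1\t100\nchr2\t5\n')
positions = list(tsv_iter(vcf_like, base=1))  # VCF coordinates are 1-based
print(positions)  # [('chr1', 99), ('chr2', 4)]
```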
| 35.17931 | 96 | 0.50745 | 1,143 | 10,202 | 4.44357 | 0.223097 | 0.034652 | 0.019689 | 0.016539 | 0.383934 | 0.352432 | 0.339831 | 0.315613 | 0.303997 | 0.291396 | 0 | 0.013614 | 0.330425 | 10,202 | 289 | 97 | 35.301038 | 0.729908 | 0.315526 | 0 | 0.284884 | 1 | 0 | 0.058979 | 0.008237 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.05814 | 0 | 0.174419 | 0.034884 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6678343e89112fade7cd09060df449fe7f8bd1ed | 588 | py | Python | test/props.py | roks/snap-python | e316dfae8f0b7707756e0a6bf4237d448259d2d2 | [
"BSD-3-Clause"
] | null | null | null | test/props.py | roks/snap-python | e316dfae8f0b7707756e0a6bf4237d448259d2d2 | [
"BSD-3-Clause"
] | null | null | null | test/props.py | roks/snap-python | e316dfae8f0b7707756e0a6bf4237d448259d2d2 | [
"BSD-3-Clause"
] | 1 | 2019-11-11T20:25:19.000Z | 2019-11-11T20:25:19.000Z | import snap
G9 = snap.GenRndGnm(snap.PNGraph, 10000, 1000)
CntV = snap.TIntPrV()
snap.GetWccSzCnt(G9, CntV)
for p in CntV:
print "size %d: count %d" % (p.GetVal1(), p.GetVal2())
snap.GetOutDegCnt(G9, CntV)
for p in CntV:
print "degree %d: count %d" % (p.GetVal1(), p.GetVal2())
G10 = snap.GenPrefAttach(100, 3)
EigV = snap.TFltV()
snap.GetEigVec(G10, EigV)
nr = 0
for f in EigV:
nr += 1
print "%d: %.6f" % (nr, f)
diam = snap.GetBfsFullDiam(G10, 10)
print "diam", diam
triads = snap.GetTriads(G10)
print "triads", triads
cf = snap.GetClustCf(G10)
print "cf", cf
| 18.375 | 60 | 0.646259 | 93 | 588 | 4.086022 | 0.430108 | 0.031579 | 0.047368 | 0.052632 | 0.231579 | 0.231579 | 0.231579 | 0 | 0 | 0 | 0 | 0.072917 | 0.183673 | 588 | 31 | 61 | 18.967742 | 0.71875 | 0 | 0 | 0.090909 | 0 | 0 | 0.0954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.045455 | null | null | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
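`snap.GetOutDegCnt` fills a vector of (degree, count) pairs; the same tally can be sketched with the stdlib on a toy directed edge list (the edge list itself is made up for illustration):

```python
from collections import Counter

edges = [(0, 1), (0, 2), (1, 2), (3, 0)]          # toy directed graph on nodes 0..3
out_deg = Counter(src for src, _ in edges)         # out-degree per source node
deg_cnt = Counter(out_deg.get(n, 0) for n in range(4))
for degree, count in sorted(deg_cnt.items()):
    print("degree %d: count %d" % (degree, count))
```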
# --- file: harp2/make_kmeans/make_input_controller.py | repo: canesche/kmeans | license: MIT ---

from veriloggen import *


# Component that receives the input buffer BEGIN
def make_input_controller(external_data_width):
    m = Module('input_controller')

    # basic signals for the operation of the circuit
    clk = m.Input('clk')
    rst = m.Input('rst')
    start = m.Input('start')
    done_rd_data = m.Input('done_rd_data')

    # fifo_in control
    input_controller_available_read = m.Input('input_controller_available_read')
    input_controller_read_data = m.Input('input_controller_read_data', external_data_width)
    input_controller_read_data_valid = m.Input('input_controller_read_data_valid')
    input_controller_request_read = m.OutputReg('input_controller_request_read')

    # output
    input_controller_data_out = m.OutputReg('input_controller_data_out', external_data_width)
    input_controller_output_valid = m.OutputReg('input_controller_output_valid', 2)

    m.EmbeddedCode(' ')
    fsm_main = m.Reg('fsm_main', 3)
    FSM_IDLE = m.Localparam('FSM_IDLE', Int(0, fsm_main.width, 10))
    FSM_READ = m.Localparam('FSM_READ', Int(1, fsm_main.width, 10))
    FSM_DONE = m.Localparam('FSM_DONE', Int(2, fsm_main.width, 10))
    m.EmbeddedCode(' ')

    m.Always(Posedge(clk))(
        If(rst)(
            input_controller_data_out(Int(0, input_controller_data_out.width, 10)),
            input_controller_request_read(Int(0, 1, 2)),
            input_controller_output_valid(Int(0, input_controller_output_valid.width, 10)),
            fsm_main(FSM_IDLE),
        ).Elif(start)(
            input_controller_request_read(Int(0, 1, 2)),
            input_controller_output_valid(Int(0, input_controller_output_valid.width, 10)),
            Case(fsm_main)(
                When(FSM_IDLE)(
                    If(input_controller_available_read)(
                        input_controller_request_read(Int(1, 1, 2)),
                        fsm_main(FSM_READ),
                    ).Elif(AndList(done_rd_data, Not(input_controller_available_read)))(
                        fsm_main(FSM_DONE),
                    )
                ),
                When(FSM_READ)(
                    If(input_controller_read_data_valid)(
                        input_controller_data_out(input_controller_read_data),
                        input_controller_output_valid(Int(1, input_controller_data_out.width, 10)),
                        fsm_main(FSM_IDLE),
                        If(input_controller_available_read)(
                            input_controller_request_read(Int(1, 1, 2)),
                            fsm_main(FSM_READ),
                        ),
                    )
                ),
                When(FSM_DONE)(
                    input_controller_output_valid(Int(2, input_controller_data_out.width, 10)),
                    fsm_main(FSM_DONE),
                ),
            )
        )
    )
    return m
# --- file: cannula/helpers.py | repo: rmyers/cannula | license: MIT ---

import os
import pkgutil
import sys


def get_root_path(import_name):
    """Returns the path to a package or cwd if that cannot be found.

    Inspired by [flask](https://github.com/pallets/flask/blob/master/flask/helpers.py)
    """
    # Module already imported and has a file attribute. Use that first.
    mod = sys.modules.get(import_name)
    if mod is not None and hasattr(mod, '__file__'):
        return os.path.dirname(os.path.abspath(mod.__file__))
    # Next attempt: check the loader.
    loader = pkgutil.get_loader(import_name)
    # Loader does not exist or we're referring to an unloaded main module
    # or a main module without path (interactive sessions), go with the
    # current working directory.
    if loader is None or import_name == '__main__':
        return os.getcwd()
    filepath = loader.get_filename(import_name)
    return os.path.dirname(os.path.abspath(filepath))
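The first branch of `get_root_path` covers the common case of an already-imported module; condensed and checked against the stdlib `json` package (`root_of` is a name used here for illustration only):

```python
import json
import os
import sys

def root_of(import_name):
    # Condensed first branch of get_root_path above: an already-imported
    # module with a __file__ attribute resolves to its directory.
    mod = sys.modules.get(import_name)
    if mod is not None and hasattr(mod, '__file__'):
        return os.path.dirname(os.path.abspath(mod.__file__))
    return os.getcwd()

print(root_of('json') == os.path.dirname(os.path.abspath(json.__file__)))  # True
```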
# --- file: zufall/lib/funktionen/funktionen.py | repo: HBOMAT/AglaUndZufall | license: Apache-2.0 ---

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
#  zufall - functions
#
#
#  This file is part of zufall
#
#
#  Copyright (c) 2019 Holger Böttcher  hbomat@posteo.de
#
#
#  Licensed under the Apache License, Version 2.0 (the "License")
#  you may not use this file except in compliance with the License
#  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
#
#
#  Contents:
#
#    abs, sqrt, ...       - mathematical functions
#    is_zahl              - test for a number
#    mit_param            - test for parameters
#    permutationen, perm  - permutations
#    kombinationen, komb  - combinations
#    variationen          - variations
#    zuf_zahl             - generation of random numbers
#    anzahl               - number of occurrences of an element in
#                           a data series / list
#    anzahl_treffer       - number of hits
#    summe                - sum of the elements of a list / data series
#    ja_nein              - evaluation of logical expressions
#    auswahlen            - k-selections from n objects
#    gesetze              - some laws of probability theory
#    stochastisch         - test for a stochastic vector / matrix
#    löse                 - solver for equations / inequalities
#    einfach              - simplification of vectors / matrices
#    ja, nein, ...        - helper values for True/False
#    Hilfe                - help function


import importlib
from itertools import (product, permutations, combinations,
                       combinations_with_replacement)
from Lib.random import randint, sample

from IPython.display import display, Math
from sympy import (Symbol, nsimplify, simplify, solve, radsimp, trigsimp,
                   signsimp)
from sympy.core.compatibility import iterable
from sympy import (Integer, Rational, Float, Add, Mul, Pow, Mod,
                   N, factorial, binomial as Binomial)
from sympy.core.numbers import Zero, One, NegativeOne, Half, E
from sympy.core.sympify import sympify
from sympy.core.containers import Tuple
from sympy import (
    Abs, sqrt as Sqrt, exp as Exp, log as Log,
    sin as Sin, cos as Cos, tan as Tan, cot as Cot,
    asin as Asin, acos as Acos, atan as Atan, acot as Acot,
    sinh as Sinh, cosh as Cosh, tanh as Tanh,
    asinh as Asinh, acosh as Acosh, atanh as Atanh,
    re as Re, im as Im, conjugate as Conjugate)
from sympy.functions.elementary.miscellaneous import Max, Min
from sympy.printing.latex import latex
from sympy.matrices import Matrix as SympyMatrix
from sympy import solveset, S, pi

from zufall.lib.objekte.basis import ZufallsObjekt
from zufall.lib.objekte.ausnahmen import ZufallError
import zufall
# ------------------------------------
# Conversion from radians to degrees
# ------------------------------------

def deg(*number, **kwargs):
    if kwargs.get("h"):
        print("\nUmrechnung Bogen- in Gradmaß - Funktion\n")
        print("Aufruf deg( winkel )\n")
        print(" winkel Winkel in Bogenmaß\n")
        print("Synonymer Bezeichner grad\n")
        print("Rückgabe Winkel in Grad\n")
        print("Zusatz d=n Dezimaldarstellung")
        print(" n - Anzahl der Nachkommastellen\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = number * 180 / pi
    d = kwargs.get('d')
    if d:
        return wert_ausgabe(wert, d)
    return wert

grad = deg
# ------------------------------------
# Conversion from degrees to radians
# ------------------------------------

def rad(*number, **kwargs):
    if kwargs.get("h"):
        print("\nUmrechnung Grad- in Bogenmaß - Funktion\n")
        print("Aufruf rad( winkel )\n")
        print(" winkel Winkel in Grad\n")
        print("Synonymer Bezeichner bog\n")
        print("Rückgabe Winkel in Bogenmaß\n")
        print("Zusatz d=n Dezimaldarstellung")
        print(" n - Anzahl der Nachkommastellen\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = number / 180 * pi
    d = kwargs.get('d')
    if d:
        return wert_ausgabe(wert, d)
    return nsimplify(wert, [pi])

bog = rad
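`deg` and `rad` above return exact sympy expressions; a purely numeric counterpart of the same conversions, with stdlib `math` (the `_lite` names are illustration only):

```python
import math

def deg_lite(x):
    # numeric counterpart of deg(): radians -> degrees
    return x * 180 / math.pi

def rad_lite(x):
    # numeric counterpart of rad(): degrees -> radians
    return x * math.pi / 180

print(round(deg_lite(math.pi / 2), 9))  # 90.0
print(round(rad_lite(180), 9))
```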
# -----------------------------------
# General mathematical functions
# -----------------------------------

def abs(*number, **kwargs):
    if kwargs.get("h"):
        print("\nBetrags - Funktion\n")
        print("Aufruf abs( x )\n")
        print(" x Zahl\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Abs(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def sqrt(*number, **kwargs):
    if kwargs.get("h"):
        print("\nWurzel - Funktion\n")
        print("Aufruf sqrt( x )\n")
        print(" x Zahl\n")
        print("Rückgabe einer reellen Zahl bei x > 0\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Sqrt(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def exp(*number, **kwargs):
    if kwargs.get("h"):
        print("\nExponential - Funktion\n")
        print("Aufruf exp( x )\n")
        print(" x Zahl\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Exp(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def log(*number, **kwargs):
    if kwargs.get("h"):
        print("\nNatürlicher Logarithmus - Funktion\n")
        print("Aufruf ln( x )")
        print("oder log( x )\n")
        print(" x Zahl\n")
        print("Rückgabe einer reellen Zahl bei x > 0\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Log(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

ln = log

def lg(*number, **kwargs):
    if kwargs.get("h"):
        print("\nDekadischer Logarithmus - Funktion\n")
        print("Aufruf lg( x )\n")
        print(" x Zahl\n")
        print("Rückgabe einer reellen Zahl bei x > 0\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Log(number, 10)
    if kwargs.get("d"):
        return N(wert)
    return wert

def max(*numbers, **kwargs):
    if kwargs.get("h"):
        print("\nGrößte Zahl in einer Folge von Zahlen\n")
        print("Aufruf max( x1, x2, ... )\n")
        print(" x Zahl\n")
        return
    if isinstance(numbers[0], (list, tuple, Tuple, set, dict)):
        zahlen = [x for x in numbers[0]]
    else:
        zahlen = [x for x in numbers]
    if not all([is_zahl(x) for x in zahlen]):
        print("agla: nur Zahlen angeben")
        return
    wert = Max(*zahlen)
    return wert

def min(*numbers, **kwargs):
    if kwargs.get("h"):
        print("\nKleinste Zahl in einer Folge von Zahlen\n")
        print("Aufruf min( x1, x2, ... )\n")
        print(" x Zahl\n")
        return
    if isinstance(numbers[0], (list, tuple, Tuple, set, dict)):
        zahlen = [x for x in numbers[0]]
    else:
        zahlen = [x for x in numbers]
    if not all([is_zahl(x) for x in zahlen]):
        print("agla: nur Zahlen angeben")
        return
    wert = Min(*zahlen)
    return wert
def re(*number, **kwargs):
    if kwargs.get("h"):
        print("\nRealteil einer komplexen Zahl\n")
        print("Aufruf re( z )\n")
        print(" z komplexe Zahl\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine komplexe Zahl angeben")
        return
    wert = Re(number)
    return wert

def im(*number, **kwargs):
    if kwargs.get("h"):
        print("\nImaginärteil einer komplexen Zahl\n")
        print("Aufruf im( z )\n")
        print(" z komplexe Zahl\n")
        return
    if len(number) != 1:
        print("agla: eine komplexe Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Im(number)
    return wert

def conjugate(*number, **kwargs):
    if kwargs.get("h"):
        print("\nKonjugiert - komplexe Zahl\n")
        print("Aufruf conjugate( z )")
        print(" oder konjugiert( z )\n")
        print(" z komplexe Zahl\n")
        return
    if len(number) != 1:
        print("agla: eine komplexe Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Conjugate(number)
    return wert

konjugiert = conjugate
# ---------------------------------------------------
# Trigonometric and inverse functions - radians
# ---------------------------------------------------

def sin(*number, **kwargs):
    if kwargs.get("h"):
        print("\nSinus - Funktion\n")
        print("Aufruf sin( winkel )\n")
        print(" winkel Winkel in Bogenmaß\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Sin(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def arcsin(*number, **kwargs):
    if kwargs.get("h"):
        print("\nArkussinus - Funktion\n")
        print("Aufruf arcsin( x )")
        print(" oder asin( x )\n")
        print(" x Zahl\n")
        print("Rückgabe einer reellen Zahl bei x in [-1, 1]\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Asin(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

asin = arcsin

def cos(*number, **kwargs):
    if kwargs.get("h"):
        print("\nKosinus - Funktion\n")
        print("Aufruf cos( winkel )\n")
        print(" winkel Winkel in Bogenmaß\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Cos(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def arccos(*number, **kwargs):
    if kwargs.get("h"):
        print("\nArkuskosinus - Funktion\n")
        print("Aufruf arccos( x )")
        print(" oder acos( x )\n")
        print(" x Zahl\n")
        print("Rückgabe einer reellen Zahl bei x in [-1, 1]\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Acos(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

acos = arccos
def tan(*number, **kwargs):
    if kwargs.get("h"):
        print("\nTangens - Funktion\n")
        print("Aufruf tan( winkel )\n")
        print(" winkel Winkel in Bogenmaß\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Tan(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def arctan(*number, **kwargs):
    if kwargs.get("h"):
        print("\nArkustangens - Funktion\n")
        print("Aufruf arctan( x )")
        print(" oder atan( x )\n")
        print(" x Zahl\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Atan(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

atan = arctan

def cot(*number, **kwargs):
    if kwargs.get("h"):
        print("\nKotangens - Funktion\n")
        print("Aufruf cot( winkel )\n")
        print(" winkel Winkel in Bogenmaß\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Cot(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

def arccot(*number, **kwargs):
    if kwargs.get("h"):
        print("\nArkuskotangens - Funktion\n")
        print("Aufruf arccot( x )")
        print(" oder acot( x )\n")
        print(" x Zahl\n")
        print("Zusatz d=1 Dezimaldarstellung\n")
        return
    if len(number) != 1:
        print("agla: eine Zahl angeben")
        return
    number = number[0]
    if not is_zahl(number):
        print("agla: eine Zahl angeben")
        return
    wert = Acot(number)
    if kwargs.get("d"):
        return N(wert)
    return wert

acot = arccot
# ------------------------------------------------
# Trigonometrische und Umkehr-Funktionen - Gradmaß
# ------------------------------------------------
def sing(*number, **kwargs):
if kwargs.get("h"):
print("\nSinus für Gradwerte - Funktion\n")
print("Aufruf sing( winkel )\n")
print(" winkel Winkel in Grad\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = sin(number * pi /180)
if kwargs.get("d"):
return N(wert)
return wert
def cosg(*number, **kwargs):
if kwargs.get("h"):
print("\nKosinus für Gradwerte - Funktion\n")
print("Aufruf cosg( winkel )\n")
print(" winkel Winkel in Grad\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = cos(number * pi /180)
if kwargs.get("d"):
return N(wert)
return wert
def tang(*number, **kwargs):
if kwargs.get("h"):
print("\nTangens für Gradwerte - Funktion\n")
print("Aufruf tang( winkel )\n")
print(" winkel Winkel in Grad\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = tan(number * pi /180)
if kwargs.get("d"):
return N(wert)
return wert
def cotg(*number, **kwargs):
if kwargs.get("h"):
print("\nKotangens für Gradwerte - Funktion\n")
print("Aufruf cotg( winkel )\n")
print(" winkel Winkel in Grad\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = 1 / tan(number * pi /180)
if kwargs.get("d"):
return N(wert)
return wert
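The four degree-mode wrappers above all follow the same pattern: scale the angle by pi/180 (degrees to radians) and delegate to the radian function. A purely numeric sketch using only the standard library; the names `sing_num` and `cotg_num` are illustrative, not part of agla:

```python
import math

def sing_num(winkel):
    # degrees -> radians, then the ordinary sine
    return math.sin(math.radians(winkel))

def cotg_num(winkel):
    # cotangent in degrees as 1/tan; undefined at multiples of 180
    return 1.0 / math.tan(math.radians(winkel))
```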
def asing(*number, **kwargs):
if kwargs.get("h"):
print("\nArkussinus in Grad - Funktion\n")
print("Aufruf arcsing( x )")
print("oder asing( x )\n")
print(" x Zahl \n")
print("Rückgabe einer reellen Zahl bei x in [-1, 1]\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
try:
if len(number) != 1:
raise AglaError("eine Zahl angeben")
number = sympify(number[0])
if not is_zahl(number):
raise AglaError("eine Zahl angeben")
except AglaError as e:
print('agla:', str(e))
return
try:
number = nsimplify(number)
except RecursionError:
pass
wert = asin(number)*180/pi
if kwargs.get("d"):
return N(wert)
return wert
arcsing = asing
def acosg(*number, **kwargs):
if kwargs.get("h"):
print("\nArkuskosinus in Grad - Funktion\n")
print("Aufruf arccosg( x )")
print("oder acosg( x )\n")
print(" zahl Zahl \n")
print("Rückgabe einer reellen Zahl bei x in [-1, 1]\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
try:
if len(number) != 1:
raise AglaError("eine Zahl angeben")
number = sympify(number[0])
if not is_zahl(number):
raise AglaError("eine Zahl angeben")
number = re(number)
except AglaError as e:
print('agla:', str(e))
return
try:
number = nsimplify(number)
except RecursionError:
pass
wert = acos(number)*180/pi
if kwargs.get("d"):
return N(wert)
return wert
arccosg = acosg
def atang(*number, **kwargs):
if kwargs.get("h"):
print("\nArkustangens in Grad - Funktion\n")
print("Aufruf arctang( x )")
print("oder atang( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = sympify(number[0])
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
try:
number = nsimplify(number)
except RecursionError:
pass
wert = atan(number) * 180 / pi
if kwargs.get("d"):
return N(wert)
return wert
arctang = atang
def acotg(*number, **kwargs):
if kwargs.get("h"):
print("\nArkuskotangens in Grad - Funktion\n")
print("Aufruf arccotg( x )")
print("oder acotg( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = sympify(number[0])
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
try:
number = nsimplify(number)
except RecursionError:
pass
wert = acot(number) * 180 / pi
if kwargs.get("d"):
return N(wert)
return wert
arccotg = acotg
# -----------------------------------
# Hyperbolische und Umkehr-Funktionen
# -----------------------------------
def sinh(*number, **kwargs):
if kwargs.get("h"):
print("\nSinus hyperbolikus - Funktion\n")
print("Aufruf sinh( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = Sinh(number)
if kwargs.get("d"):
return N(wert)
return wert
def cosh(*number, **kwargs):
if kwargs.get("h"):
print("\nKosinus hyperbolikus - Funktion\n")
print("Aufruf cosh( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = Cosh(number)
if kwargs.get("d"):
return N(wert)
return wert
def tanh(*number, **kwargs):
if kwargs.get("h"):
print("\nTangens hyperbolikus - Funktion\n")
print("Aufruf tanh( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = Tanh(number)
if kwargs.get("d"):
return N(wert)
return wert
def asinh(*number, **kwargs):
if kwargs.get("h"):
print("\nAreasinus - Funktion\n")
print("Aufruf asinh( x )")
print(" oder arsinh( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = Asinh(number)
if kwargs.get("d"):
return N(wert)
return wert
arsinh = asinh
def acosh(*number, **kwargs):
if kwargs.get("h"):
print("\nAreakosinus - Funktion\n")
print("Aufruf acosh( x )")
print(" oder arcosh( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = Acosh(number)
if kwargs.get("d"):
return N(wert)
return wert
arcosh = acosh
def atanh(*number, **kwargs):
if kwargs.get("h"):
print("\nAreatangens - Funktion\n")
print("Aufruf atanh( x )")
print(" oder artanh( x )\n")
print(" x Zahl\n")
print("Zusatz d=1 Dezimaldarstellung\n")
return
if len(number) != 1:
print("agla: eine Zahl angeben")
return
number = number[0]
if not is_zahl(number):
print("agla: eine Zahl angeben")
return
wert = Atanh(number)
if kwargs.get("d"):
return N(wert)
return wert
artanh = atanh
# Test auf eine Zahl
# ------------------
def is_zahl(x):
if isinstance(x, str):
return False
x = sympify(x)
try:
if x.is_number:
return True
elif x.is_Function:
return True
except AttributeError:
pass
zahlen = (Integer, int, Float, float, Symbol, One, Zero, NegativeOne, Half,
sin, cos, tan, sinh, cosh, tanh, asin, acos, atan, exp, log,
Mul, Add, Pow)
return type(x) in zahlen
isZahl = is_zahl
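is_zahl first asks SymPy's `is_number`/`is_Function` and then falls back to a type whitelist. For plain Python values the same intent can be sketched with the stdlib `numbers` ABC; `is_number_like` is a hypothetical helper, not the library's own:

```python
import numbers

def is_number_like(x):
    # strings are rejected outright, as in is_zahl
    if isinstance(x, str):
        return False
    # every registered numeric type (int, float, complex, Fraction, ...) passes
    return isinstance(x, numbers.Number)
```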
# ------------------------
# Test auf freie Parameter
# ------------------------
def mit_param(obj):
nv = importlib.import_module('zufall.lib.objekte.normal_verteilung')
NormalVerteilung = nv.NormalVerteilung
if iterable(obj):
test = [mit_param(el) for el in obj]
return any(test)
obj = sympify(obj)
if is_zahl(obj):
try:
return bool(obj.free_symbols)
except AttributeError:
return False
elif isinstance(obj, NormalVerteilung):
return mit_param(obj.mu) or mit_param(obj.sigma)
mitParam = mit_param
# -------------------------
# Ausgabe numerischer Werte
# -------------------------
def wert_ausgabe(wert, d=None): # interne Funktion
if not isinstance(d, (Integer, int)):
d = None
else:
if d <= 0:
d = None
if not d:
if mit_param(wert):
return N(wert)
else:
return eval(format(float(wert)))
else:
if mit_param(wert):
return N(wert, d)
else:
return eval(format(float(wert), ".%df" %d ))
wertAusgabe = wert_ausgabe
# ---------
# Fakultaet
# ---------
def fakultaet(*args, **kwargs):
"""Fakultätsfunktion"""
if kwargs.get('h'):
print("\nfakultät - Fakultätsfunktion\n")
print("Kurzform fak\n")
print("Aufruf fak( n )\n")
print(" n ganze Zahl >= 0\n")
return
if len(args) != 1:
print('zufall: ein Argument angeben')
return
n = args[0]
if mit_param(n):
return factorial(n)
if not (isinstance(n, (int, Integer)) and n >= 0):
print('zufall: ganze nichtnegative Zahl angeben')
return
return factorial(n)
fak = fakultaet
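fakultaet accepts only nonnegative integers and otherwise prints an error; `math.factorial` enforces the same domain by raising. A numeric-only sketch (the name `fakultaet_num` is illustrative):

```python
import math

def fakultaet_num(n):
    # same domain check as fakultaet(), but raising instead of printing
    if not (isinstance(n, int) and n >= 0):
        raise ValueError("ganze nichtnegative Zahl angeben")
    return math.factorial(n)
```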
# -------------------
# Binomialkoeffizient
# -------------------
def binomial(*args, **kwargs):
"""Binomialkoeffizient"""
if kwargs.get('h'):
print("\nbinomial - Binomialkoeffizient\n")
print("Kurzform B\n")
print("Aufruf B( n, k )\n")
print(" n, k ganze Zahl >= 0\n")
print("Achtung - der Bezeichner B kann überschrieben werden\n")
return
if len(args) != 2:
print('zufall: zwei Argumente angeben')
return
n, k = args
if mit_param(n):
if mit_param(k):
return Binomial(n, k)
else:
if isinstance(k, (int, Integer)) and k >= 0:
return Binomial(n, k)
print('zufall: nichtnegative ganze Zahlen angeben')
return
else:
if isinstance(n, (int, Integer)) and n >= 0:
return Binomial(n, k)
print('zufall: nichtnegative ganze Zahlen angeben')
return
B = binomial
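For concrete integer arguments the binomial coefficient computed here is exactly `math.comb` (Python 3.8+); a minimal numeric counterpart, with the illustrative name `binomial_num`:

```python
import math

def binomial_num(n, k):
    # n over k for nonnegative integers; math.comb validates the domain
    return math.comb(n, k)
```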
# -------------
# Permutationen
# -------------
def permutationen(*args, **kwargs):
"""Permutationen einer Menge von Elementen"""
if kwargs.get('h'):
print("\nPermutationen der Elemente einer Menge\n")
print("Kurzform perm\n")
print("Aufruf perm( menge | n )\n")
print(" menge Liste/Tupel/Menge von Elementen | dictionary ")
print(" Elemente sind Zahlen, Symbole, Zeichenketten")
print(" ein dictionary enthält (element:anzahl)-Paare")
print(" n bei Angabe einer ganzen Zahl >0 wird die Menge")
print(" {1, 2,...,n} verwendet\n")
print("Zusatz k=ja Ausgabe der Permutationen in Kurzform")
print(" l=ja Ausgabe der Permutationen in Listenform")
print(" f=ja Formeln\n")
print("Beispiele")
print("perm( [a, b, c, d], k=ja)")
print("perm( { 0:3, 1:2 }, l=ja)")
print("perm( 5)\n")
return
if kwargs.get('f'):
i = Symbol('i')
print(' ')
display(Math('Anzahl\; der\; Permutationen\; ohne\; Wiederholungen = n!'))
display(Math('Anzahl\; der\; Permutationen\; mit\; Wiederholungen = \\frac{n!}{n_1!\: n_2!\: ... \:n_p!}'))
display(Math('n - Anzahl\; der\; Elemente \; der\; Grundgesamtheit'))
display(Math('n_i - Anzahl\; des\; Auftretens \; des\;' + latex(i) + \
'.\; Elementes\; in\; der\; Grundgesamtheit, \\quad \\sum\limits_{i=1}^{p}n_i = n'))
print(' ')
return
if len(args) != 1:
print('zufall: ein Argument angeben')
return
menge = args[0]
if not menge:
return []
if not isinstance(menge, (list, tuple, set, dict, int, Integer)):
raise ZufallError('Liste/Tupel/Menge von Elementen oder ganze positive Zahl angeben')
if isinstance(menge, (list, tuple, set)) and not all(map(lambda x: isinstance(x, \
(int, Integer, Symbol, str)), menge)):
raise ZufallError("Listenelemente können ganze Zahlen, Symbole oder Zeichenketten sein")
if isinstance(menge, dict):
if not all(map(lambda x: isinstance(x, (int, Integer)) and x > 0, menge.values())):
raise ZufallError("im dictionary als Werte Anzahlen angeben")
m = []
for it in menge:
m += [it for i in range(menge[it])]
menge = m
if isinstance(menge, (int, Integer)):
if menge <= 0:
raise ZufallError('ganze positive Zahl angeben')
else:
menge = range(1, menge+1)
menge = list(menge)
menge.sort(key=str)
di = {menge[0]:1}
wiederh = False
for it in menge[1:]:
try:
di[it] += 1
wiederh = True
except KeyError:
di[it] = 1
kwl = kwargs.get('l')
kwk = kwargs.get('k')
if not(kwl or kwk):
if not wiederh:
return factorial(len(menge))
else:
N = factorial(len(menge))
for it in di:
N = N / factorial(di[it])
return nsimplify(N)
if not wiederh:
pp = list(permutations(menge))
else:
def pmw(iterable):
L = [iterable[0]]
for i, it in enumerate(iterable):
if i == 0 or it not in L:
L += [it]
yield it
pp = list(pmw(list(permutations(menge))))
if kwl:
return pp
elif kwk:
return [kurz_form(x) for x in pp]
perm = permutationen
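The counting branch of permutationen computes n! and divides by the factorial of each element multiplicity (the multinomial coefficient). That step can be reproduced with `collections.Counter`; `anzahl_permutationen` is a hypothetical name:

```python
from collections import Counter
from math import factorial

def anzahl_permutationen(menge):
    # n! / (n_1! * n_2! * ... * n_p!) over the element multiplicities
    counts = Counter(menge)
    n = factorial(sum(counts.values()))
    for c in counts.values():
        n //= factorial(c)
    return n
```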
# -------------
# Kombinationen
# -------------
def kombinationen(*args, **kwargs):
"""k-Kombinationen aus einer Menge von Elementen"""
if kwargs.get('h'):
print("\nKombinationen - k-Kombinationen aus einer Menge von n Objekten\n")
print("Kurzform komb\n")
print("Aufruf komb( menge, k, wiederh, anordn )\n")
print(" menge Liste/Tupel/Menge von Elementen | dictionary |")
print(" ganze positive Zahl")
print(" Listenelemente sind Zahlen, Symbole, strings,")
print(" aber keine Listen")
print(" ein dictionary enthält (Objekt:Anzahl)-Paare")
print(" bei Angabe einer Zahl n wird die Menge")
print(" {1,2,...,n} verwendet")
print(" k Anzahl Elemente einer Kombination")
print(" wiederh Wiederholungen von Elementen in einer Kombina-")
print(" tion möglich (ja/nein)")
print(" anordn Beachtung der Anordnung/Reihenfolge der Elemen- ")
print(" te in einer Kombination (ja/nein)\n")
print("Zusatz k=ja Ausgabe der Kombinationen in Kurzform")
print(" l=ja Ausgabe der Kombinationen in Listenform")
print(" f=ja Formeln")
print(" b=ja Begriffe\n")
print("Beispiele")
print("komb( [a, b, c, d], 2, ja, nein)")
print("komb( { 0:3, 1:2 }, 4, ja, ja, k=ja)")
print("komb( 5, 2, nein, nein)\n")
return
if kwargs.get('b'):
print("\nMitunter werden Kombinationen mit Berücksichtigung der Anordnung Varia-")
print("tionen genannt, die ohne Berücksichtigung der Anordnung heißen dann Kom-")
print("binationen\n")
return
try:
if len(args) != 4:
raise ZufallError('vier Argumente angeben')
menge, k, wiederh, anordn = args
if not isinstance(menge, (list, tuple, set, dict, int, Integer)):
raise ZufallError('Liste/Tupel/Menge von Elementen oder ganze positive Zahl angeben')
if isinstance(menge, (list, tuple, set)) and not all(map(lambda x: isinstance(x, \
(int, Integer, Symbol, str)), menge)):
raise ZufallError("Listenelemente können Zahlen, Symbole eoder Zeichenketten sein")
if isinstance(menge, dict):
if not all(map(lambda x: isinstance(x, (int, Integer)) and x > 0, menge.values())):
raise ZufallError("im dictionary als Werte Anzahlen angeben")
m = []
for it in menge:
m += [it for i in range(menge[it])]
menge = m
if isinstance(menge, (int, Integer)):
if menge <= 0:
raise ZufallError('ganze positive Zahl angeben')
else:
menge = range(1, menge+1)
if not (isinstance(k, (int, Integer)) and k > 0):
raise ZufallError('für Anzahl Elemente ganze Zahl > 0 angeben')
if not isinstance(wiederh, bool):
raise ZufallError('Zulassen Wiederholungen mit ja/mit oder nein/ohne angeben')
if not isinstance(anordn, bool):
raise ZufallError('Beachten der Anordnung mit ja/mit oder nein/ohne angeben')
except ZufallError as e:
print('zufall:', str(e))
return
if kwargs.get('f'):
print(' ')
if wiederh and anordn:
display(Math('Anzahl\; der\; Kombinationen\; mit\; Wiederholungen, \; mit\; Anordnung = n^k'))
elif wiederh and not anordn:
display(Math('Anzahl\; der\; Kombinationen\; mit\; Wiederholungen, \; ohne\; Anordnung'))
display(Math('\\qquad {n+k-1 \\choose k} = \\frac{(k+n-1)!}{k!\,(n-1)!}'))
elif not wiederh and anordn:
display(Math('Anzahl\; der\; Kombinationen\; ohne\; Wiederholungen, \; mit\; Anordnung = ' + \
'\\frac{n!}{(n-k)! }'))
elif not wiederh and not anordn:
display(Math('Anzahl\; der\; Kombinationen\; ohne\; Wiederholungen, \; ohne\; Anordnung'))
display(Math('\\qquad {n \\choose k} = \\frac{n!}{k!\,(n-k)! }'))
display(Math('n - Anzahl\; der\; Elemente \; der\; Grundgesamtheit'))
display(Math('k - Anzahl\; der\; ausgewählten \; Elemente'))
print(' ')
return
if not menge:
return []
menge = list(menge)
menge.sort(key=str)
if not anordn and not wiederh:
kk = list(combinations(menge, k))
elif not anordn and wiederh:
kk = list(combinations_with_replacement(menge, k))
elif anordn and not wiederh:
kk = list(permutations(menge, k))
elif anordn and wiederh:
kk = list(product(menge, repeat=k))
kwl = kwargs.get('l')
kwk = kwargs.get('k')
n = len(menge)
if not(kwl or kwk):
if wiederh and anordn:
return n**k
elif wiederh and not anordn:
N = factorial(k+n-1) / (factorial(k) * factorial(n-1))
return nsimplify(N)
elif not wiederh and anordn:
N = factorial(n) / factorial(n-k)
return nsimplify(N)
elif not wiederh and not anordn:
N = factorial(n) / (factorial(k) * factorial(n-k))
return nsimplify(N)
if kwl:
return kk
elif kwk:
return [kurz_form(x) for x in kk]
komb = kombinationen
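The four counting formulas used above (with/without repetition × with/without order) can be collected in one small function; `anzahl_kombinationen` is an illustrative sketch of the same case analysis, not the library API:

```python
from math import comb, factorial

def anzahl_kombinationen(n, k, wiederh, anordn):
    if wiederh and anordn:          # n^k
        return n ** k
    if wiederh and not anordn:      # (n+k-1 over k)
        return comb(n + k - 1, k)
    if not wiederh and anordn:      # n!/(n-k)!
        return factorial(n) // factorial(n - k)
    return comb(n, k)               # n over k
```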
# -----------
# Variationen
# -----------
def variationen(*args, **kwargs):
"""k-Variationen aus einer Menge von Elementen"""
if kwargs.get('h'):
print("\nVariationen - k-Variationen aus einer Menge von n Objekten\n")
print("Aufruf variationen( menge, k, wiederh )\n")
print(" menge Liste/Tupel/Menge von Elementen | dictionary |")
print(" ganze positive Zahl")
print(" Listenelemente sind Zahlen, Symbole, strings,")
print(" aber keine Listen")
print(" ein dictionary enthält (Objekt:Anzahl)-Paare")
print(" bei Angabe einer Zahl n wird die Menge")
print(" {1,2,...,n} verwendet")
print(" k Anzahl Elemente einer Variation")
print(" wiederh Wiederholungen von Elementen in einer Variation")
print(" möglich (ja/nein)\n")
print("Zusatz k=ja Ausgabe der Variationen in Kurzform")
print(" l=ja Ausgabe der Variationen in Listenform")
print(" f=ja Formeln")
print(" b=ja Begriffe\n")
print("Beispiele")
print("variationen( [a, b, c, d], 2, ja)")
print("variationen( { 0:3, 1:2 }, 4, ja, k=ja)")
print("variationen( 5, 2, nein)\n")
return
if kwargs.get('b'):
print("\nVariationen sind Kombinationen mit Berücksichtigung der Anordnung/Reihenfolge")
print("der Elemente; wird der Begriff verwendet, heißen Kombinationen nur diejenigen ")
print("ohne Berücksichtigung der Anordnung\n")
return
try:
if len(args) != 3:
raise ZufallError('drei Argumente angeben')
menge, k, wiederh = args
if not isinstance(menge, (list, tuple, set, dict, int, Integer)):
raise ZufallError('Liste/Tupel/Menge von Elementen oder ganze positive Zahl angeben')
if isinstance(menge, (list, tuple, set)) and not all(map(lambda x: isinstance(x, \
(int, Integer, Symbol, str)), menge)):
raise ZufallError("Listenelemente können Zahlen, Symbole eoder Zeichenketten sein")
if isinstance(menge, dict):
if not all(map(lambda x: isinstance(x, (int, Integer)) and x > 0, menge.values())):
raise ZufallError("im dictionary als Werte Anzahlen angeben")
m = []
for it in menge:
m += [it for i in range(menge[it])]
menge = m
if isinstance(menge, (int, Integer)):
if menge <= 0:
raise ZufallError('ganze positive Zahl angeben')
else:
menge = range(1, menge+1)
if not (isinstance(k, (int, Integer)) and k > 0):
raise ZufallError('für Anzahl Elemente ganze Zahl > 0 angeben')
if not isinstance(wiederh, bool):
raise ZufallError('Zulassen Wiederholungen mit ja/mit oder nein/ohne angeben')
except ZufallError as e:
print('zufall:', str(e))
return
return kombinationen(menge, k, wiederh, True, **kwargs)
# -------------
# Zufallszahlen
# -------------
def zuf_zahl(*args, **kwargs):
"""Erzeuung von Zufallszahlen"""
if kwargs.get('h'):
print("\nzuf_zahl - Erzeugung von ganzzahligen Pseudo-Zufallszahlen\n")
print("Aufruf zuf_zahl( bereich1 /[, bereich2, ... ] /[, anzahl ] )\n")
print(" bereich Bereichsangabe z.B. (0, 9); [1, 6]")
print(" anzahl Anzahl der erzeugten Zahlen; Standard = 1\n")
print("Zusatz w=nein keine Wiederholung von Zahlen; Standard=ja")
print(" s=ja sortierte Ausgabe mehrerer Zufallszahlen; ")
print(" Standard=nein\n")
print("Rückgabe eine einzelne Zahl oder eine Liste mit anzahl Elementen")
print(" ist die Anzahl der Bereiche > 1, so ist jedes Element ein")
print(" Tupel, dessen i. Element aus dem i. Bereich ist\n")
print("Beispiele zuf_zahl( (0, 9) ) - eine Zufallsziffer 0, 1, ... oder 9")
print(" zuf_zahl( (1, 365), 6, w=nein ) - 6 Tage eines Jahres, ohne")
print(" Wiederh.")
print(" zuf_zahl( [0, 1], 3 ) - zur Simulation des 3-maligen Werfens")
print(" einer Münze")
print(" zuf_zahl( [1, 6], [1, 6], 100 ) - zur Simulation des 100-ma-")
print(" ligen Werfens zweier Würfel\n")
return
if not args:
print('zufall: mindestens ein Argument angeben')
return
if not iterable(args[0]):
print('zufall: mindestens einen Bereich angeben')
return
if iterable(args[-1]):
anzahl = 1
bereich = [*args]
else:
anzahl = args[-1]
bereich = [*args[:-1]]
for ber in bereich:
if not (iterable(ber) and len(ber) == 2):
print('zufall: Bereiche der Länge 2 und eventuell Anzahl angeben')
return
if not (isinstance(ber[0], (int, Integer)) and isinstance(ber[1], (int, Integer))):
print('zufall: die Bereichsgrenzen müssen ganzzahlig sein')
return
if ber[0] >= ber[1]:
print('zufall: es muss 1. Bereichsgrenze < 2. Bereichsgrenze sein')
return
w = kwargs.get('w')
if w is None:
w = True
s = kwargs.get('s')
if anzahl == 1:
if len(bereich) == 1:
return randint(*bereich[0])
else:
return [randint(*b) for b in bereich]
else:
if len(bereich) == 1:
b = bereich[0]
if w:
if not s:
return [randint(*b) for i in range(anzahl)]
return sorted([randint(*b) for i in range(anzahl)])
if anzahl > len(range(b[0], b[1]+1)):
print('zufall: es muss Anzahl <= Bereichsgröße sein')
return
if not s:
return sample(range(b[0], b[1]+1), anzahl)
return sorted(sample(range(b[0], b[1]+1), anzahl))
else:
if w:
samp = [[randint(*b) for b in bereich] for i in range(anzahl)]
if not s:
return samp
return sorted(samp)
anz = 1
for b in bereich:
g = b[1] - b[0] + 1
anz *= g
if anz < anzahl and not w:
print('zufall: die angegebene Anzahl ist größer als die Vorratsmenge')
return
samp, i = [], 0
while i < anzahl:
sa = []
for b in bereich:
sa += [randint(*b)]
samp += [tuple(sa)]
i += 1
if not s:
return samp
return sorted(samp)
zufZahl = zuf_zahl
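zuf_zahl's single-range behaviour boils down to repeated `randint` draws when repetition is allowed and `random.sample` when it is not. A reduced sketch (the name `zuf_zahlen` and the keyword `wiederholung` are illustrative):

```python
import random

def zuf_zahlen(lo, hi, anzahl, wiederholung=True):
    if wiederholung:
        # independent draws, repeats possible
        return [random.randint(lo, hi) for _ in range(anzahl)]
    # distinct values; requires anzahl <= hi - lo + 1
    return random.sample(range(lo, hi + 1), anzahl)
```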
# -------------------------------------
# Anzahl des Vorkommens eines Elementes
# in einer DatenReihe / Liste
# -------------------------------------
def anzahl(*args, **kwargs):
"""Anzahl von Elementen"""
if kwargs.get('h'):
print("\nanzahl - Anzahl des Vorkommens eines Elementes in einer DatenReihe /")
print(" Liste\n")
print("Aufruf anzahl( daten /[, elem ] )\n")
print(" daten Liste von Elementen | DatenReihe")
print(" elem Listen- / Datenelement")
print(" bei Fehlen wird die Anzahl der Elemente")
print(" von daten zurückgegeben")
print(" oder anzahl( elem )\n")
print(" es wird eine Funktion zurückgegeben, die die Anzahl")
print(" des Vorkommens des Elementes elem in einer Liste /")
print(" DatenReihe zählt")
print(" bei deren Aufruf ist die Liste / DatenReihe als")
print(" Argument anzugeben; ist elem selbst eine Liste, ist")
print(" der Zusatz el=ja anzugeben\n")
print("Beispiele")
print("anzahl( [ 1, 0, 0, 1, 1, 1 ], 1 ) ergibt 4")
print("anzahl( [ a, b, c ] ) ergibt 3")
print("anzahl(sp, W) ergibt die Anzahl der W[appen] in der Stichprobe sp beim")
print(" Münzwurf-ZufallsExperiment)")
print("anzahl( el ) ergibt eine Funktion zum Zählen des Elements el")
print(" anzahl(0)( [0,1,1,0,0] ) ergibt 3")
print(" ist el eine Liste, wird der Zusatz el=ja angegeben")
print(" anzahl([a, b], el=ja)( [[a, a], [a, b], [a, c], [a, b]]")
print(" ergibt 2\n")
return
dr = importlib.import_module('zufall.lib.objekte.datenreihe')
DatenReihe = dr.DatenReihe
if len(args) == 1:
a = args[0]
if isinstance(a, list) and not kwargs.get('el'):
return len(a)
elif isinstance(a, DatenReihe):
return a.n
else:
def fkt(*li):
liste = li[0]
if not liste or not isinstance(liste, (list, DatenReihe, \
tuple, Tuple)):
print('zufall: Liste oder DatenReihe angeben')
return
if isinstance(liste, DatenReihe):
liste = liste.daten
return len([x for x in liste if x == a])
return fkt
elif len(args) == 2:
liste, elem = args
if not isinstance(liste, (list, DatenReihe)):
print('zufall: als 1. Argument Liste oder DatenReihe angeben')
return
if isinstance(liste, DatenReihe):
liste = liste.daten
return len([x for x in liste if x == elem])
else:
print('zufall: ein oder zwei Argumente angeben')
return
# --------------
# Anzahl Treffer
# --------------
def anzahl_treffer(*args, **kwargs):
"""Anzahl Treffer"""
if kwargs.get('h'):
print("\nanzahl_treffer - Anzahl des Treffer\n")
print("Aufruf anzahl_treffer( treffer )\n")
print(" treffer Element, das als Treffer / Erfolg angesehen")
print(" wird (etwa Wappen oder W beim Münzwurf)\n")
print("Die Funktion ist nur als ZG-Funktion beim Erzeugen von ZufallsGröße-Ob-")
print("jekten verwendbar\n")
return
if len(args) != 1:
print('zufall: ein Element als Treffer angeben')
return
return anzahl(args[0])
anzahlTreffer = anzahl_treffer
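The closure returned by anzahl(elem), and reused by anzahl_treffer, simply counts matching entries. A self-contained sketch of that pattern (`zaehler` is a hypothetical name):

```python
def zaehler(elem):
    # returns a counting function, analogous to anzahl(elem)
    def fkt(folge):
        return sum(1 for x in folge if x == elem)
    return fkt
```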
# -----
# Summe
# -----
def summe(*args, **kwargs):
"""Summe der Elemente"""
if kwargs.get('h'):
print("\nsumme - Summe der Elemente einer Liste mit Daten / DatenReihe\n")
print("Aufruf summe( daten )\n")
print(" daten Liste mit Daten | DatenReihe\n")
print("Synonyme augen_summe, augenSumme\n")
print("Beispiel")
print("summe( [ 1, 0, 0, 1, 1, 1 ] ) ergibt 4\n")
return
dr = importlib.import_module('zufall.lib.objekte.datenreihe')
DatenReihe = dr.DatenReihe
if len(args) != 1 or not isinstance(args[0], (list, tuple, Tuple, DatenReihe)):
print('zufall: Liste oder DatenReihe angeben')
return
liste = args[0]
if isinstance(liste, DatenReihe):
liste = liste.daten
if not all([isinstance(x, (int, Integer, Rational, float, Float)) for x in liste]):
print('zufall: in der Liste nur Zahlen angeben')
return
return Add(*liste)
augen_summe = summe
augenSumme = summe
# ------------------
# Allgemeiner solver
# ------------------
def loese(*args, **kwargs):
if kwargs.get("h") == 1:
print("\nlöse - Funktion\n")
print("Zum Lösen von Gleichungen sowie von Ungleichungen\n")
print(" Aufruf löse( gleich /[, variable ] )\n")
print(" gleich linke Seite einer Gleichung der Form")
print(" ausdruck = 0 oder Liste mit solchen")
print(" Elementen (Gleichungssystem)")
print(" variable einzelne oder Liste von Variablen")
print(" ausdruck Ausdruck in den Variablen\n")
print(" oder löse( ungleich /[, variable ] )\n")
print(" ungleich Ungleichung der Form ausdruck rel ausdruck1")
print(" rel Relation '<' | '<=' | '>' | '>='\n")
print("Zusatz set=ja Verwendung von solveset; standardmäßig wird solve ver-")
print(" wendet (siehe SymPy-Dokumentation)\n")
print("Beispiele")
print("löse( 3*x^2 + 5*x - 3 ) - einzelne Gleichung")
print("löse( 3*x^2 + 5*x - 3, set=ja )")
print("löse( (1-1/3)^n > 0.01, set=ja ) - Ungleichung")
print("löse( [2*x-4*y-2, 3*x+5*y+1] ) - Gleichungssystem\n")
return
ve = importlib.import_module('agla.lib.objekte.vektor')
Vektor = ve.Vektor
if len(args) == 1:
gleich = args[0]
var = []
elif len(args) == 2:
gleich = sympify(args[0])
var = args[1]
else:
print('zufall: ein oder zwei Argumente angeben')
return
if not type(var) in (Symbol, list, tuple, Tuple):
print('zufall: einzelne Variable als Symbol, mehrere in einer' +
' Liste angeben')
return
se = kwargs.get('set')
if is_zahl(gleich):
if se:
if not var:
return solveset(gleich, domain=S.Reals)
return solveset(gleich, var, domain=S.Reals)
if not var:
res = solve(gleich, dict=True, rational=True)
else:
res = solve(gleich, var, dict=True, rational=True)
if isinstance(res, list) and len(res) == 1:
return res[0]
if not res:
return set()
return res
elif isinstance(gleich, _Gleichung):
gleich = gleich.lhs - gleich.rhs
if se:
if not var:
return solveset(gleich, domain=S.Reals)
return solveset(gleich, var, domain=S.Reals)
if not var:
res = solve(gleich, dict=True, rational=True)
else:
res = solve(gleich, var, dict=True, rational=True)
if isinstance(res, list) and len(res) == 1:
return res[0]
if not res:
return set()
return res
elif isinstance(gleich, Vektor):
gleich = [gleich.komp[i] for i in range(gleich.dim)]
if not var:
res = solve(gleich, dict=True, rational=True)
else:
res = solve(gleich, var, dict=True, rational=True)
if isinstance(res, list) and len(res) == 1:
return res[0]
if not res:
return set()
return res
elif isinstance(gleich, (list, tuple, Tuple)):
res = solve(gleich, rational=True)
if not res:
return set()
return res
elif '<' in str(gleich) or '>' in str(gleich):
if se:
if not var:
return solveset(gleich, domain=S.Reals)
return solveset(gleich, var, domain=S.Reals)
if not var:
res = solve(gleich)
else:
res = solve(gleich, var)
if isinstance(res, list) and len(res) == 1:
return res[0]
if not res:
return set()
return res
else:
print('zufall: linke Seite einer Gleichung oder einer ' +
'Vektorgleichung oder Gleichungssystem angeben')
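loese delegates everything to SymPy's solve/solveset. For the most common classroom case, a single quadratic `ausdruck = 0`, the closed form can be written without SymPy at all; `loese_quadratisch` is an illustrative stdlib-only sketch:

```python
import math

def loese_quadratisch(a, b, c):
    # real roots of a*x**2 + b*x + c = 0; empty list if the discriminant is negative
    d = b * b - 4 * a * c
    if d < 0:
        return []
    s = math.sqrt(d)
    return sorted({(-b - s) / (2 * a), (-b + s) / (2 * a)})
```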
# -------------
# Vereinfachung
# -------------
from zufall.lib.objekte.umgebung import UMG
def einfach(*x, **kwargs):
if kwargs.get('h') == 1:
print("\neinfach - Funktion\n")
print("Vereinfachung von Objekten\n")
print("Aufruf einfach( objekt )\n")
print(" objekt numm. Ausdruck, Vektor, Matrix\n")
print("Zusatz rad=ja Einsatz von radsimp")
print(" trig=ja Einsatz von trigsimp")
print(" num=ja Einsatz von nsimplify")
print(" sign=ja Einsatz von signsimp")
print(" (siehe SymPy-Dokumentation)\n")
return
Vektor = importlib.import_module('agla.lib.objekte.vektor').Vektor
if len(x) != 1:
print('zufall: ein Objekt angeben')
return
x = x[0]
if not UMG.SIMPL:
return x
if not (is_zahl(x) or isinstance(x, (Vektor, SympyMatrix))):
print('zufall: numerischen Wert, Vektor oder Matrix angeben')
return
if isinstance(x, Vektor):
li = [einfach(k, **kwargs) for k in x.komp]
return Vektor(li)
if isinstance(x, SympyMatrix):
Matrix = importlib.import_module('zufall.lib.objekte.matrix').Matrix
return Matrix(*[einfach(v, **kwargs) for v in x.vekt])
elif is_zahl(x):
if not kwargs:
return simplify(x)
elif kwargs.get('rad'):
return radsimp(x)
elif kwargs.get('trig'):
return trigsimp(x)
elif kwargs.get('num'):
try:
return nsimplify(x)
except RecursionError:
return x
elif kwargs.get('sign'):
return signsimp(x)
else:
return x
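The num=ja branch of einfach uses SymPy's nsimplify to turn a float back into an exact number. The stdlib offers the same idea for rationals via `Fraction.limit_denominator`; `einfach_num` is a hypothetical analogue:

```python
from fractions import Fraction

def einfach_num(x, max_nenner=1000):
    # closest fraction to x with denominator <= max_nenner
    return Fraction(x).limit_denominator(max_nenner)
```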
# --------------------------
# k-Auswahlen aus n Objekten
# --------------------------
def auswahlen(**kwargs):
"""k-Auswahlen aus n Objekten; Übericht"""
if kwargs.get('h'):
print("\nk-Auswahlen aus n Objekten (Übersicht)\n")
print("Aufruf auswahlen( )\n")
print("Zusatz a=ja Algorithmus als Pseudocode\n")
return
if not kwargs.get('a'):
dm = lambda x: display(Math(x))
print(' ')
dm('\\text{Tabelle der $k$-Auswahlen aus $n$ Objekten}')
print(' ')
dm('\\text{Bezeichnung $\\qquad\qquad\\quad$ Eigenschaften \
$\\qquad\\quad$ Formel $\\qquad\\quad$ Beispiel}')
dm('\\text{$k$-Kombination oW mA } \\quad\:\, k \\lt n \\qquad\\quad \
\\qquad\quad\; \\dfrac{n!}{(n-k)!} \\qquad\\quad\, \\text{Parkplatzbelegung}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\;\, \\text{15 Autos, 6 Plätze}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\; \, \\Rightarrow n=15, k=6')
dm('\\text{$k$-Kombination mW mA} \\quad\:\, k, n \; \\text{beliebig} \\qquad\\quad \
\,\quad n^k \\qquad\\qquad\\quad\, \\text{Fußballtoto}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\; \, \\Rightarrow n=3, k=11')
dm('\\text{$k$-Permutation oW } \\qquad\\quad\:\, \\text{mA; jedes Element} \\quad\:\;\, \
n! \\qquad\\qquad\\quad\, \\text{Startaufstellung}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\text{wird benutzt} \
\\qquad\\qquad\\qquad\\qquad\\quad\:\;\; \, \\text{8 Läufer auf 8 Bahnen}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\, k=n \\qquad\\qquad \
\\qquad\\qquad\\qquad\\quad\:\:\:\; \, \\Rightarrow n=k=8')
dm('\\text{$k$-Permutation mW } \\qquad\\quad\:\, \\text{mA; jedes Element} \\quad\:\;\, \
\\dfrac{n!}{n_1!\cdot \dots \cdot n_p!}\\quad\, \\text{Anagramm}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\text{wird benutzt} \
\\qquad\\qquad\\qquad\\qquad\\quad\:\;\; \, \\text{RENNEN}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\, k>n \\qquad\\qquad \
\\qquad\\qquad\\qquad\\quad\:\:\:\; \, \\Rightarrow p=3,n=6')
dm('\\text{$k$-Kombination oW oA } \\quad\:\,\:\; k \\lt n \\qquad\\quad \
\\qquad\quad\;\; \\dfrac{n!}{(n-k)! \cdot k!} \\quad\, \\text{Zahlenlotto}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\;\, \\text{6 aus 49}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\; \, \\Rightarrow n=49, k=6')
dm('\\text{$k$-Kombination mW oA } \\quad\:\,\: k, n \; \\text{beliebig} \\qquad\\quad \
\\quad \;\; {n+k-1 \choose k} \\quad \\text{Flaschenträger}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\;\, \\text{12 Flaschen aus 3 Sorten}')
dm('\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \
\\qquad\\qquad\; \, \\Rightarrow n=3, k=12')
print(' ')
dm('\\text{(mW/oW - mit/ohne Wiederholung, mA/oA - mit/ohne Anordnung)}')
print(' ')
dm('\\text{Oft werden Kombinationen mit Berücksichtigung der Anordnung Variationen genannt}')
dm('\\text{die ohne Berücksichtigung der Anordnung heißen dann Kombinationen}')
print(' ')
return
# Algorithmus
print(""" \
Algorithmus zur Berechnung der k-Auswahlen aus n Objekten (Python-ähn-
licher Pseudocode)
Analyse der Aufgabenstellung
WENN die Elemente 'angeordnet' sind:
WENN einzelne Elemente wiederholt werden dürfen:
WENN jedes Element mindestens einmal benutzt wird:
Permutation mW mit n > p
/ Aus n > p ergibt sich die Zuordnung von p und n:
/ Die Länge n der Anordnung ist größer als die Größe p
/ der Vorratsmenge ]
SONST:
Kombination mW mA
/ Zuordnung von k und n:
/ Das, was 'wiederholt' werden kann, gehört zur Vorrats-
/ menge
SONST:
WENN jedes Element genau einmal benutzt wird:
WENN k = n ist:
Permutation oW
SONST:
ES WURDE ETWAS ÜBERSEHEN
neu beginnen
SONST:
Kombination oW mA mit k < n
/ Aus k < n ergibt sich die Zuordnung von n und k:
/ Die Länge k der Anordnung ist kleiner als die Größe n
/ der Vorratsmenge
SONST:
WENN Elemente wiederholt werden dürfen:
Kombination mW oA
/ Zuordnung von n und k:
/ Das, was 'wiederholt' werden kann, gehört zur Vorratsmenge
SONST:
Kombination oW oA mit k < n
/ Aus k < n ergibt sich die Zuordnung von n und k:
/ Die Größe k der Teilmenge ist kleiner als die Größe n
/ der Vorratsmenge
Oft werden Kombinationen mit Berücksichtigung der Anordnung Variationen
genannt, die ohne Berücksichtigung der Anordnung heißen dann Kombinationen
Grundlage:
Wolfdieter Feix
mentor Abiturhilfe
Mathematik Oberstufe
Stochastik
mentor Verlag 2000
""")
return
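The counting formulas tabulated above can be checked numerically with the standard library. The following sketch is not part of the zufall package; the function names are invented for the example, and the three test cases are the ones from the overview (RENNEN, Zahlenlotto, Flaschenträger).

```python
from math import factorial, comb

def anagramme(*gruppen):
    """Permutation mW (jedes Element wird benutzt): n! / (n_1! * ... * n_p!)"""
    n = sum(gruppen)
    erg = factorial(n)
    for g in gruppen:
        erg //= factorial(g)
    return erg

def komb_ohne_wdh(n, k):
    """k-Kombination oW oA: n! / ((n-k)! * k!)"""
    return comb(n, k)

def komb_mit_wdh(n, k):
    """k-Kombination mW oA: C(n+k-1, k)"""
    return comb(n + k - 1, k)

# RENNEN: n = 6 letters in p = 3 groups R(1), E(2), N(3)
print(anagramme(1, 2, 3))    # 60
# Zahlenlotto '6 aus 49'
print(komb_ohne_wdh(49, 6))  # 13983816
# Flaschenträger: 12 bottles chosen from 3 kinds
print(komb_mit_wdh(3, 12))   # 91
```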
# ---------------------------------------
# Gesetze der Wahrscheinlichkeitsrechnung
# ---------------------------------------
def gesetze(**kwargs):
"""Gesetze der Wahrscheinlichkeitsrechnung"""
if kwargs.get('h'):
print("\nEinige Gesetze der Wahrscheinlichkeitsrechnung\n")
print("Aufruf gesetze( )\n")
return
dm = lambda x: display(Math(x))
print(' ')
dm('\\text{Einige Gesetze der Wahrscheinlichkeitsrechnung}')
print(' ')
dm('\\text{Additionssatz}')
dm('\\qquad\\text{Für beliebige Ereignisse} \; A \\text{ und } B \\text{ gilt } P(A \\cup B) = \
P(A)+P(B)-P(A \\cap B)')
dm('\\text{Satz von Bayes}')
dm('\\qquad\\text{Sei } A \\text{ ein Ereignis und } B \\text{ eine Bedingung, \
unter der das Ereignis betrachtet}')
dm('\\qquad\\text{wird. Dann berechnet sich die Wahrscheinlichkeit } P_B(A) \
\\text{ für } A \\text{ unter der Be-}')
dm('\\qquad \\text{dingung } B \\text{ nach der Formel }\; P_B(A) = \\dfrac{P(A \\cap B)}{P(B)}')
dm('\\text{Multiplikationssatz}')
dm('\\qquad\\text{Ist } P(A) \\neq 0 \\text{, so gilt } P(A \\cap B) = P(A) \cdot \
P_A(B)')
dm('\\text{Satz von der totalen Wahrscheinlichkeit}')
dm('\\qquad\\text{Für beliebige Ereignisse }A \\text{ und }B \\text{ gilt } P(B) = \
P(A \\cap B) + P(\\overline{A} \\cap B) = ')
dm('\\qquad P(A) \\cdot P_A(B) + P(\\overline{A}) \\cdot P_\\overline{A}(B)')
dm('\\qquad\\text{oder allgemeiner}')
dm('\\qquad\\text{Wenn } A_1 \\cup A_2 \\cup \\dots \\cup A_n = \\Omega, \; A_i\\cap A_j = \
\\emptyset \\text{ für } i,j=1\dots n, i \\neq j \\text{ gilt, dann ist}')
dm('\\qquad P(B) = \\sum_{i=1}^n P(A_i)\\cdot P_{A_i}(B)')
dm('\\text{Empirisches Gesetz der großen Zahlen}')
dm('\\qquad\\text{Bei langen Versuchsreihen, also bei häufiger Wiederholung eines Zufallsex-}')
dm('\\qquad\\text{perimentes verändern sich die relativen Häufigkeiten eines Ergebnisses in }')
dm('\\qquad\\text{der Regel nur noch wenig. Sie stabilisieren sich in der Nähe der Wahrschein-}')
dm('\\qquad\\text{lichkeit des Ergebnisses.}')
dm('\\text{Bernoullisches Gesetz der großen Zahlen}')
dm('\\qquad\\text{Gegeben sei ein } n \\text{-stufiges Bernoulli-Experiment mit der Trefferwahrschein-}')
dm('\\qquad\\text{lichkeit }p. X \\text{ sei die Zufallsgröße \'Anzahl der Treffer\'. Für jedes beliebige}')
dm('\\qquad\\text{positive } \\epsilon \\text{ gilt dann }\\lim\\limits_{n \\rightarrow \\infty} P \left( \
\left| \\frac{X}{n} - p \\right| \\le \
\\epsilon \\right) = 1')
dm('\\text{Tschebyschew - Ungleichung}')
dm('\\qquad\\text{Sei } X \\text{ eine beliebige Zufallsgröße mit Erwartungswert } \\mu \\text{ und Standardabwei-}')
dm('\\qquad\\text{chung }\\sigma. \\text{ Für die Wahrscheinlichkeit, dass } X \\text{ einen Wert annimmt, der um}')
dm('\\qquad\\text{mindestens } c\; (c \\gt 0) \\text{ vom Erwartungswert abweicht, gilt}')
dm('\\qquad P\\left(\\left|X - \\mu\\right| \\ge c \\right) \\le \\dfrac{\\sigma^2}{c^2}. \
\\qquad \\text{Daraus folgt}')
dm('\\qquad P(\\mu - \\sigma\cdot c \\le X \\le \\mu + \\sigma\\cdot c ) \\ge 1 -\\dfrac{1}{c^2}')
dm('\\dfrac{1}{\\sqrt{n}} \\text{ - Gesetz}')
dm('\\qquad X_1, X_2, \\dots , X_n \\text{ seien identisch verteilte unabhängige Zufallsgrößen mit dem }')
dm('\\qquad\\text{Erwartungswert } \\mu \\text{ und der Standardabweichung } \\sigma. \\text{ Für die Zufallsgröße }')
dm('\\qquad\\overline{X} = \\dfrac{1}{n} \, (X_1 + X_2 + \\dots + X_n) \ \\text{ gilt dann:} ')
dm('\\qquad\\text{Sie hat den Erwartungswert } \\mu \\text{ und die Standardabweichung } \
\\dfrac{\\sigma}{\\sqrt{n}}')
dm('\\text{Zentraler Grenzwertsatz}')
dm('\\qquad X_1, X_2, \\dots , X_n \\text{ seien unabhängige Zufallsgrößen. Die Zufallsgröße } \
\;X = X_1+')
dm('\\qquad \\dots + X_n \\text{ habe den Erwartungswert } \\mu \\text{ und die Standardabweichung } \\sigma. \
\\text{Dann}')
dm('\\qquad\\text{gilt unter gewissen Bedingungen, die fast immer erfüllt sind (insbesondere}')
dm('\\qquad\\text{für großes } n \\text{): }')
dm('\\qquad\\text{Die Zufallsgröße $X$ ist näherungsweise normalverteilt mit } \\mu \\text{ und } \\sigma')
print(' ')
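The multiplication rule, the law of total probability and Bayes' rule listed above can be traced on a small two-stage example. This is an illustrative sketch with freely invented numbers (1/3, 1/2, 1/4), not part of the package:

```python
from fractions import Fraction

# Hypothetical two-stage experiment: event A with P(A) = 1/3 and the
# conditional probabilities P_A(B) = 1/2, P_notA(B) = 1/4.
P_A = Fraction(1, 3)
P_notA = 1 - P_A
PB_given_A = Fraction(1, 2)
PB_given_notA = Fraction(1, 4)

# Multiplikationssatz: P(A ∩ B) = P(A) * P_A(B)
P_A_and_B = P_A * PB_given_A

# Satz von der totalen Wahrscheinlichkeit:
# P(B) = P(A)·P_A(B) + P(A-quer)·P_(A-quer)(B)
P_B = P_A * PB_given_A + P_notA * PB_given_notA

# Satz von Bayes: P_B(A) = P(A ∩ B) / P(B)
P_A_given_B = P_A_and_B / P_B

print(P_B, P_A_given_B)  # 1/3 1/2
```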
# ------------------
# ja-nein - Funktion
# ------------------
def ja_nein(*args, **kwargs):
    """Bewertung eines logischen Ausdruckes"""
    if kwargs.get('h'):
        print("\nja_nein - Bewertung eines logischen Ausdruckes\n")
        print("Aufruf ja_nein( ausdruck )\n")
        print(" ausdruck Ausdruck mit dem Wert True oder False\n")
        print("Rückgabe 1, wenn ausdruck==True")
        print(" 0, wenn ausdruck==False\n")
        return
    if len(args) != 1:
        print('zufall: ein Argument angeben')
        return
    ausdruck = args[0]
    if not isinstance(ausdruck, bool):
        print('zufall: der Ausdruck hat nicht den Wert True oder False')
        return
    if ausdruck:
        return 1
    return 0
jaNein = ja_nein
# --------------------------------------------------------
# stochastisch - Test auf stochastische(n) Vektor / Matrix
# --------------------------------------------------------
def stochastisch(*args, **kwargs):
    """Test auf stochastischen Vektor / Matrix"""
    if kwargs.get('h'):
        print("\nstochastisch - Test auf stochastische(n) Vektor / Matrix\n")
        print("Aufruf stochastisch( objekt )\n")
        print(" objekt Vektor, Matrix\n")
        print("Ein Vektor ist stochastisch, wenn alle Komponenten in [0, 1] liegen")
        print("und ihre Summe 1 ist\n")
        print("Eine quadratische Matrix ist stochastisch, wenn alle Spaltenvektoren")
        print("stochastisch sind\n")
        return
    if len(args) != 1:
        print('zufall: Vektor oder Matrix angeben')
        return
    obj = args[0]
    ve = importlib.import_module('agla.lib.objekte.vektor')
    Vektor = ve.Vektor
    if not isinstance(obj, (Vektor, SympyMatrix)):
        print('zufall: Vektor oder Matrix angeben')
        return
    if isinstance(obj, Vektor):
        if not all(k >= 0 for k in obj.komp):
            return False
        return sum(obj.komp) == 1
    if obj.shape[0] != obj.shape[1]:
        return False
    for i in range(obj.shape[0]):
        col = Vektor(*[obj[j, i] for j in range(obj.shape[1])])
        if not stochastisch(col):
            return False
    return True
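The definition used by stochastisch (components nonnegative, summing to 1; columns of a square matrix all stochastic) can also be expressed without the agla vector class. A dependency-free sketch with hypothetical names, operating on plain nested lists:

```python
from fractions import Fraction

def ist_stoch_vektor(v):
    # stochastisch: alle Komponenten in [0, 1], Summe gleich 1
    return all(0 <= k <= 1 for k in v) and sum(v) == 1

def ist_stoch_matrix(m):
    # quadratische Matrix, deren Spaltenvektoren alle stochastisch sind
    n = len(m)
    if any(len(zeile) != n for zeile in m):
        return False
    return all(ist_stoch_vektor([m[i][j] for i in range(n)])
               for j in range(n))

u = [[Fraction(1, 2), Fraction(1, 4)],
     [Fraction(1, 2), Fraction(3, 4)]]
print(ist_stoch_matrix(u))  # True
```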
# ------------------
# Kurzform für Tupel
# ------------------
def kurz_form(iterable):
    menge = list(iterable)
    symbole = all(isinstance(x, Symbol) for x in menge)
    ziffern = all(isinstance(x, (int, Integer)) for x in menge)
    if symbole or ziffern:
        return Symbol(''.join(str(el) for el in menge))
    return None
# ------------------------------------------
# Erzeugen der Baumstruktur einer Tupelmenge
# ------------------------------------------
def tupel2baum(liste):
    def kopf(liste):
        if not isinstance(liste, list):
            return liste
        elif len(liste) == 0:
            return None
        return liste[0]
    def rest(liste):
        if not isinstance(liste, list):
            return []
        elif len(liste) == 1:
            return []
        return liste[1:]
    def ibaum(liste):
        rliste = []
        if liste:
            rliste = ['o']
            li = [not isinstance(x, list) for x in liste]
            if all(li):
                rliste += [[x] for x in liste]
            else:
                nam = set([kopf(x) for x in liste if kopf(x) is not None])
                nam = list(nam)
                nam.sort(key=str)
                for nm in nam:
                    nm_liste = [nm]
                    nm_rest_liste = [x for x in liste if kopf(x) == nm]
                    nm_rest_liste = [rest(x) for x in nm_rest_liste]
                    nm_rest_baum = ibaum(nm_rest_liste)
                    nm_liste += rest(nm_rest_baum)
                    rliste += [nm_liste]
        return rliste
    return ibaum(liste)
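tupel2baum groups the outcome tuples by their first component and recurses on the remainders, yielding nested lists under the artificial root 'o'. A compact standalone sketch of the same idea (operating on plain tuples; the name is invented for the example):

```python
def baum_skizze(tupel):
    # nach erster Komponente gruppieren, auf den Resten rekursiv fortsetzen
    baum = ['o']
    for k in sorted({t[0] for t in tupel}, key=str):
        reste = [t[1:] for t in tupel if t[0] == k and len(t) > 1]
        ast = [k]
        if reste:
            ast += baum_skizze(reste)[1:]  # innere Wurzel 'o' verwerfen
        baum.append(ast)
    return baum

# zweimaliger Münzwurf: K/Z (Kopf/Zahl)
print(baum_skizze([('K', 'K'), ('K', 'Z'), ('Z', 'K'), ('Z', 'Z')]))
# ['o', ['K', ['K'], ['Z']], ['Z', ['K'], ['Z']]]
```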
# --------------
# Hilfe-Funktion
# --------------
def Hilfe(**kwargs):
h = kwargs.get('h')
if not h:
h = 1
if h == 1:
print("h=2 - Einleitung")
print("h=3 - Online-Hilfeinformationen")
print("h=4 - Bezeichner")
print("h=5 - Zugriff auf Eigenschaften und Methoden")
print("h=6 - Klassen")
print("h=7 - Funktionen")
print("h=8 - Operatoren")
print("h=9 - Jupyter-Notebook")
print("h=10 - Nutzung von SymPy-Anweisungen")
print("h=11 - Griechische Buchstaben")
print("h=12 - Kleiner Python-Exkurs")
print("h=13 - Bemerkungen für Programmierer/Entwickler")
return
if h == 2:
print(
"""
Einleitung
Python ist ein leistungsfähiger konventioneller Taschenrechner. Durch das CAS
SymPy werden seine Fähigkeiten vor allem um das symbolische Rechnen erwei-
tert. Mit dem Paket zufall sollen Berechnungen auf dem Gebiet der Stochastik
unterstützt werden, wobei es für den Gebrauch in der Schule vorgesehen ist
zufall ist ein Python-Paket und kann innerhalb von Jupyter-Notebooks benutzt
werden
In zufall werden die Objekte der Stochastik, wie Zufallsexperiment, Bernoul-
likette, Urne, Binomialverteilung usw. mit entsprechenden Python-Klassen dar-
gestellt. Über eine Konstruktor-/Erzeugerfunktion gleichen Namens können In-
stanzen dieser Klassen (Objekte) erzeugt werden. Mit diesen und ihren Eigen-
schaften + Methoden wird dann interaktiv gearbeitet. Weiterhin unterstützen
einige Funktionen die Arbeit
Das Paket basiert auf dem vollständig in Python geschriebenen CAS SymPy und
ist selbst ebenfalls (mit leichten Modifizierungen) in reinem Python ge-
schrieben. Für Grafiken wird das matplotlib-Paket benutzt
Die Programme von zufall werden im Quellcode für die Benutzung bereitgestellt\n
Die Syntax zur Handhabung von zufall ist so gestaltet, dass sie leicht er-
lernbar ist. Es sind nur geringe Python-Kenntnisse sowie Fähigkeiten zum
Bedienen eines Jupyter-Notebooks notwendig
Bei der Arbeit mit zufall kann auf den gesamten Leistungsumfang von Python
zugegriffen werden, der vor allem durch eine Vielzahl weiterer Pakete reali-
siert wird
""")
return
if h == 3:
print(
"""
Erhalten von Hilfe-Informationen
Unter dem Namen Hilfe steht eine Funktion zur Verfügung, über die zentrale
Hilfeinformationen erhalten werden können. Mit der Eingabe
In [..]: Hilfe() oder Hilfe(h=1)
in eine Zelle des Notebooks wird man auf einzelne Seiten geleitet
Weitere Hilfeinformationen können zu jedem zufall-Objekt und zu den Metho-
den eines Objektes gewonnen werden, indem bei der Erzeugung des Objektes
mit Hilfe seiner Erzeugerfunktion oder beim Aufruf der Methode als letzter
Eintrag in der Argumentenliste h=1 geschrieben wird. Man erhält dann unmit-
telbar die gewünschte Information oder wird auf eine andere Hilfeseite ge-
leitet
Analoges gilt für die Funktionen, die von zufall zur Verfügung gestellt wer-
den
Weiterhin ist für jedes Objekt eine Eigenschaft mit dem Namen h (Kurzform
von hilfe) vorhanden, bei deren Aufruf die verfügbaren Eigenschaften und
Methoden aufgelistet werden
Tritt in einer Syntaxdarstellung die Konstruktion /[...] auf, kann die Anga-
be zwischen den eckigen Klammern entfallen. Ein |-Zeichen bedeutet i.A.,
dass zwischen zwei Angaben ausgewählt werden kann
""")
return
if h == 4:
print(
"""
Bezeichner (Namen)
Die erzeugten zufall-Objekte können einem Bezeichner zugewiesen werden,
z.B. wird mit der Anweisung
In [..]: bv = BV(12, 0.3)
dem Bezeichner bv als Wert ein BinomialVerteilung-Objekt zugewiesen ('=' ist
in Python für Zuweisungen vorgesehen)
Ein Bezeichner kann in zufall aus allen Buchstaben des englischen Alphabets,
allen Ziffern 0, 1, ..., 9 und dem Unterstrich '_' bestehen, wobei er mit
einem Buchstaben beginnen muß. Der Name kann beliebig lang sein, es wird
zwischen großen und kleinen Buchstaben unterschieden. Auf diese Art gebil-
deten Namen kann jederzeit ein Objekt (zufall-Objekt oder anderes, z.B. ei-
ne Zahl) zugewiesen werden. Dabei darf es sich nicht um einen geschützten
Namen handeln (s.u.)
Anders verhält es sich bei den 'freien' Bezeichnern, denen unmittelbar kein
Wert zugewiesen wird und die als Variablen oder als Parameter u.a. in Glei-
chungen auftreten. Im Unterschied zu anderen CAS werden in dem von zufall
benutzten SymPy solche Bezeichner nicht einfach durch Hinschreiben erkannt
und akzeptiert, sondern sie müssen explizit als Symbole deklariert werden.
Für Buchstaben und kleine griechische Buchstaben wird das bereits innerhalb
von zufall erledigt, so dass Bezeichner wie r, g, b, A, X usw. jederzeit
frei verwendet werden können. Soll ein freier Bezeichner länger als ein
Zeichen sein, muss er mittels einer entsprechenden SymPy-Anweisung dekla-
riert werden, etwa durch
In [..]: xyz = Symbol('xyz')
Es gibt eine Reihe von Bezeichnern, die in zufall eine feste Bedeutung ha-
ben und nicht anderweitig verwendet werden können, indem sie einen anderen
Wert bekommen. Beim Versuch, einen anderen Wert an einen solchen Bezeichner
zu binden, warnt zufall mit einem Hinweis und verhindert das Überschreiben.
Ebenfalls in das Warnsystem aufgenommen wurden die Elemente der SymPy-Spra-
che, die innerhalb von zufall zur Verfügung des Nutzers gestellt werden\n
Besondere Beachtung erfordern die Bezeichner E, N und I, denen Konstanten zu-
gewiesen sind. Sie werden kommentarlos überschrieben, wenn ihnen ein
anderer Wert zugewiesen wird
Viele Eigenschaften/Methoden haben synonyme Bezeichner, die folgendermaßen
gebildet werden:
- ein '_' (Unterstrich) innerhalb des Bezeichners einer Eigenschaft oder
Methode wird eliminiert, indem der nächste Buchstabe groß geschrieben
wird, z.B. sch_el -> schEl ('Kamelschreibweise'; Methode
'Scharelement')
- ein '_' am Ende eines Bezeichners wird eliminiert, indem das erste Zei-
chen groß geschrieben wird, z.B.: umfang_ -> Umfang (Methode 'Umfang')
In einem zufall-Notebook kann explizit mit anderen Python-Paketen gearbeitet
werden, speziell mit SymPy, von dem einige Anweisungen dem System bereits
bekannt sind. Soll ein weiteres SymPy-Element benutzt werden, z.B. die Funk-
tion ceiling, so ist dieses mit der üblichen import-Anweisung zu importieren
und kann danach aufgerufen werden
In [..]: from sympy import ceiling
...
In [..]: ceiling(3.12) # das Ergebnis ist 4
""")
return
if h == 5:
print(
"""
Zugriff auf Eigenschaften und Methoden von Objekten
Die zufall-Objekte haben verschiedene Eigenschaften und Methoden (die letz-
teren erwarten für ihre Ausführung Argumente - ein weiteres Objekt, einen
Parameterwert o.ä.). Die implementierten Eigenschaften und Methoden eines
Objektes können über seine Hilfeseite wie etwa
In [..]: BV(h=1)
ermittelt werden. Ein BV-Objekt (BV ist der Kurzname von BinomialVerteilung)
hat z.B. die Eigenschaft erw (Erwartungswert) und die Methode P (zur Berech-
nung von Wahrscheinlichkeiten). Der Zugriff erfolgt mittels des '.' - Ope-
rators, der allgemein in der Objektorientierten Programmierung Verwendung
findet. Sei etwa dem Bezeichner bv ein BV-Objekt zugewiesen, etwa mit der
Anweisung
In [..]: bv = BV(50, 1/3)
so sind die Anweisungen für den Zugriff zu seinem Erwartungswert
In [..]: bv.erw
und zu der Methode für die Berechnung von Wahrscheinlichkeiten
In [..]: bv.P(25)
Eine Methode wird generell über einen Funktionsaufruf realisiert, der Argu-
mente erwartet, die in Klammern eingeschlossen werden. Hier wurde das Argu-
ment 25 angegeben, es soll die Wahrscheinlichkeit dafür berechnet werden,
dass eine Zufallsgröße mit der betrachteten Verteilung diesen Wert annimmt
Zu einer Reihe von Eigenschaften existiert eine Methode mit gleichem Namen,
der auf einen Unterstrich '_' endet. Damit besteht die Möglichkeit, mittels
des entsprechenden Funktionsaufrufes zusätzliche Informationen/Leistungen
anzufordern. Welche das sind, kann über die Hilfeanforderung (h=1 als letz-
ter Eintrag in der Argumentliste) erfahren werden. Diese zu Eigenschaften
gehörenden Methoden können auch über den Namen der Eigenschaft mit großem
Anfangsbuchstaben aufgerufen werden, also z.B. für die Eigenschaft erw von
bv
In [..]: bv.erw_(...) oder
In [..]: bv.Erw(...)
Das Ergebnis eines Eigenschafts-/Methodenaufrufes kann ein Tupel oder eine
Liste sein, etwa die Daten einer DatenReihe dr, die als Liste dr.daten vor-
liegen. Um auf ein einzelnes Element zuzugreifen, wird der Indexzugriff ver-
wendet, etwa
In [..]: dr.daten[3]
für das 4. Element der Liste (gemäß der Python-Konvention beginnt die Zählung
bei 0) oder
In [..]: dr.daten[:3]
für den Zugriff auf die ersten 3 Elemente
Wahrscheinlichkeits- und Häufigkeits-Verteilungen und anderes werden als
dictionary bereitgestellt (Schlüssel/Wert-Paare). Hier erfolgt der Zugriff
auf einen einzelnen Wert über den Schlüssel, z.B. bei der Methode vert
(Wahrscheinlichkeitsverteilung) der betrachteten BinomialVerteilung
In [..]: bv.vert[4]
""")
return
if h == 6:
print(
"""
Klassen in zufall
Kurz- Langname
ZE ZufallsExperiment
= ZV ZufallsVersuch
ZG ZufallsGröße
BK BernoulliKette
BV BinomialVerteilung
HGV HyperGeometrischeVerteilung
GLV GleichVerteilung
GV GeometrischeVerteilung
PV PoissonVerteilung\n
NV NormalVerteilung
EV ExponentialVerteilung
DR DatenReihe
EA EreignisAlgebra
VT VierFelderTafel
HB HäufigkeitsBaum
KI KonfidenzIntervall
AT AlternativTest
STP SignifikanzTestP
Urne
Münze
Würfel
Rad GlücksRad
MK MarkoffKette
Roulette
Chuck ChuckALuck
Craps
Toto FussballToto
Lotto
Skat SkatBlatt
Vektor analog zu agla
Matrix analog zu agla
""")
return
if h == 7:
print(
"""
Funktionen in zufall
Allgemeine Funktionen
Hilfe Hilfefunktion
fakultät = fak Fakultät
binomial = B Binomialkoeffizient
perm = permutationen Permutationen
komb = kombinationen Kombinationen
variationen Variationen
auswahlen Berechnung von k-Auswahlen
zuf_zahl = zufZahl Erzeugen von (Pseudo)-Zufallszahlen
anzahl Anzahl des Vorkommens eines Elementes in einer
Liste/DatenReihe
anzahl_treffer Anzahl Treffer in einer Liste
= anzahlTreffer
summe Summe der Elemente einer Liste/DatenReihe
gesetze Einige Gesetze der Wahrscheinlichkeitsrechnung
löse Allgemeiner Gleichungs-/Ungleichungs-Löser
ja_nein = jaNein Bewertung logischer Ausdrücke
stochastisch Test auf stochastische(n) Vektor/Matrix
einfach Vereinfachen von Objekten
ja, nein, mit, ohne, Hilfsgrößen
Ja, Nein, Mit, Ohne für True/False
Mathematische Funktionen
sqrt, exp, log, ln, lg, abs
sin, arcsin (= asin), sing, arcsing (= asing) / ...g:
cos, arccos (= acos), cosg, arccosg (= acosg) / Funktionen
tan, arctan (= atan), tang, arctang (= atang) / mit Grad-
cot, arccot (= acot), cotg, arccotg (= acotg) / werten
sinh, arsinh (= asinh)
cosh, arcosh (= acosh)
tanh, artanh (= atanh)
deg = grad Umrechnung Bogen- in Gradmaß
rad = bog Umrechnung Grad- in Bogenmaß
kug_koord (= kugKoord) Umrechnung in Kugelkoordinaten
min, max - Minimum bzw. Maximum von zwei oder mehr Zahlen
N oder Methode n - Umwandlung SymPy- in Dezimal-Ausdruck
re - Realteil einer komplexen Zahl
im - Imaginärteil einer komplexen Zahl
conjugate (= konjugiert) - Konjugiert-komplexe Zahl
Konstanten
pi - Zahl Pi (3.1415...)
E - Eulersche Zahl e (2.7182...)
I - imaginäre Einheit
ACHTUNG! B, E, N, I sind kommentarlos überschreibbar
""")
return
if h == 8:
print(
"""
Operatoren
Folgende Operatoren stehen zusätzlich zu den Python-Operatoren zur
Verfügung bzw. ersetzen diese
^ Potenzierung; zusätzlich zum Operator **; Umdefinition des
Python-Operators ^
° Skalarprodukt von Vektoren; zusätzlich zum Operator *
| Verkettung von Vektoren; Umdefinition des Python-Operators |
""")
return
if h == 9:
print(
"""
Jupyter-Notebook
+==================================================================+
| Um in einem Notebook mit zufall arbeiten zu können, muss zu |
| Beginn der Sitzung die (Jupyter-) Anweisung |
| |
| In [..]: %run zufall/start |
| |
| in einer Codezelle ausgeführt werden |
+==================================================================+
zufall benutzt als Bedienoberfläche Jupyter. Dieses wurde unter dem
Namen IPython ursprünglich als Entwicklungsumgebung für Python-
Anwendungen bereitgestellt, unterstützt aber inzwischen eine Vielzahl
weiterer Programmiersprachen. Der Name setzt sich aus den Namen von
drei Sprachen zusammen - Julia (eine Sprache, die sehr schnellen Code
erzeugt), Python und R (inzwischen ein leistungsfähiges Statistikpaket)
Ausschlaggebend für die Wahl dieser Plattform war das hier realisierte
Notebook-Konzept, wie es auch in kommerziellen CAS (z.B. Mathematica)
Verwendung findet
Jupyter läuft als lokale Anwendung auf dem Standardbrowser des
Computers, Kern (kernel) ist der Python-Interpreter
Ein Jupyter-Notebook ist in Zellen (cells) unterteilt, wobei drei
Zelltypen auftreten, die hier interessieren:
- Code-Zellen Kennzeichnung: In [..]
In diese Zellen werden Anweisungen in der benutzten
Programmiersprache (hier Python) geschrieben, also auch
Anweisungen zur Benutzung von zufall; die Zellen sind analog zu
einem Texteditor editierbar; beim Ausführen (run) einer solchen
Zelle wird ihr Inhalt an den Python-Interpreter übergeben, der
für seine Verarbeitung sorgt
Eine neue Zelle wird standardmäßig als Code-Zelle erzeugt; die
Umwandlung einer Markdown-Zelle in eine Code-Zelle ist über das
Code-Menü oder die Platzierung des Cursors im vorderen
Zellbereich und Drücken der Y-Taste erreichbar
- Ausgabe-Zellen Kennzeichnung: Out [..]
Die Zellen entstehen, wenn nach der Auswertung einer Codezelle
durch den Python-Interpreter eine Ausgabe erforderlich ist; in
diese Zellen kann der Benutzer nicht direkt schreiben
- Markdown-Zellen Ohne Kennzeichnung
Die Zellen dienen vor allem zur Aufnahme von Texten, wobei diese
mit Markdown- (eine einfache Auszeichnungssprache) oder HTML-
Anweisungen formatiert werden können; sie können auch
mathematische Formeln enthalten (Nutzung von LATEX), außerdem
können in solchen Zellen Grafiken und Bilder dargestellt und/
oder Audio- und Video-Dateien aktiv sein; beim Ausführen einer
solchen Zelle werden eventuell vorhandene Formatierungs-
Anweisungen ausgeführt und der Inhalt auf dem Ausgabemedium
präsentiert
Die Umwandlung einer Code-Zelle in eine Markdown-Zelle ist über
das entsprechende Menü oder die Platzierung des Cursors im
vorderen Zellbereich und Drücken der M-Taste erreichbar
Code- und Markdown-Zellen können beliebig erzeugt, gelöscht, kopiert,
eingefügt und verschoben werden
Es kann zu jeder dieser Zellen gesprungen werden, um sie zu verändern
und/oder erneut auszuführen
In einem Notebook kann in zwei Modi gearbeitet werden
- Editier-Modus: Einschalten mit Enter; oben rechts ist ein Stift
dargestellt
In diesem Modus kann der Inhalt der aktuellen Zelle editiert
werden
Das Editieren einer bestehenden Markdown-Zelle kann auch mit
einem Doppel-Klick eingeleitet werden
- Kommando-Modus: Einschalten mit ESC; der Stift rechts oben fehlt
in diesem Modus können Aktionen durchgeführt werden, die das
Notebook als Ganzes betreffen (Zellen erzeugen/kopieren/
löschen/verschieben, zwischen ihnen navigieren, Dateien öffnen
und speichern usw.)
Wenn der Kern beschäftigt ist, ist der schwarze Kreis rechts oben
gefüllt; auch in dieser Zeit kann editiert werden, die Ausführung
weiterer Zellen kann aber erst erfolgen, wenn der Kern wieder frei
ist
Eine Datei, in die der Inhalt eines Notebooks gespeichert wird,
erhält die Endung .ipynb
Für den Export eines Notebooks, z.B. in das .html- oder .pdf-
Format, ist das separat zu nutzende Werkzeug nbconvert vorgesehen
Die Bedienung eines Notebooks kann über das Menü und/oder über die Tastatur
erfolgen
Einige Tastatur-Kürzel für das Jupyter-Notebook
Umsch+Enter Zelle ausführen, zur nächsten gehen (diese wird even-
tuell neu angefügt)
Strg+Enter Zelle ausführen, in der Zelle verbleiben
Strg+M B Zelle unterhalb einfügen
Strg+M A Zelle oberhalb einfügen
Strg+M DD Zelle löschen (D 2-mal drücken)
Esc X Zelle löschen
Strg+Z Zurücksetzen beim Editieren\n
Esc Einschalten des Kommando-Modus
Enter Einschalten des Editier-Modus
Strg+M H Anzeigen aller Tastatur-Kürzel für die beiden Modi
Ausführen: (z.B. Strg-M B)
Strg-Taste drücken, dann M-Taste, Strg loslassen, dann B-Taste
durch mehrmaliges Drücken der B-Taste können mehrere Zellen eingefügt
werden
""")
return
if h == 10:
print(
"""
Nutzung von SymPy-Anweisungen
In zufall sind folgende Elemente von SymPy integriert:
Symbol, symbols - zur Definition von (mehrstelligen) Bezeichnern
Rational - zur Erzeugung von rationalen Zahlen (wird in zufall weitgehend
automatisch erledigt)
solve, solveset, expand, collect, factor, simplify, nsimplify
N [der Wert ist überschreibbar]
pi - die Kreiszahl
E - die Basis der natürlichen Logarithmen (e) [der Wert ist überschreibbar]
I - die imaginäre Einheit (i) [der Wert ist überschreibbar]
Sollen weitere Elemente benutzt werden, sind diese zu importieren, z.B.
In [..]: from sympy import Piecewise
(eventuell ist der Pfad im SymPy-Verzeichnis-Baum anzugeben)
""")
return
if h == 11:
print("\nGriechische Buchstaben\n")
print("Es werden die kleinen griechischen Buchstaben\n")
print("alpha, beta, gamma, delta, epsilon, zeta, eta, theta, iota, kappa ")
display(Math("\\alpha \qquad \\beta \qquad \\gamma \qquad \\delta \qquad \\epsilon \qquad \
\\zeta \qquad \\eta \qquad \\theta \qquad \\iota \qquad \\kappa "))
print("lamda (Schreibweise!), mu, nu, xi, omicron, pi, rho, sigma, tau ")
display(Math("\\lambda \qquad \\mu \qquad \\nu \qquad \\xi \qquad \\omicron \qquad \\pi \
\qquad \\rho \qquad \\sigma \qquad \\tau"))
print("upsilon, phi, chi, psi, omega\n")
display(Math("\\upsilon \qquad \\phi \qquad \\chi \qquad \\psi \qquad \\omega"))
print("bereitgestellt. Die Namen sind nicht überschreibbar\n")
return
if h == 12:
print(
"""
Kleiner Python-Exkurs
Eingabe von Code (in eine Code-Zelle des Jupyter-Notebooks):
Die Ausführung einer Zelle wird durch Umsch+Enter bzw. Strg+Enter
veranlaßt
Eine Zuweisung (eines Wertes an einen Bezeichner) wird mittels '='
realisiert:
In [..]: a = 4
Der Wert eines Bezeichners kann über eine Abfrage ermittelt werden
In [..]: a
Mehrere Zuweisungen in einer Zeile sind durch ';' zu trennen
In [..]: a = 4; b = 34; c = -8
Mehrere Abfrageanweisungen in einer Zeile sind durch ',' zu trennen
(ein ';' unterdrückt die Anzeige der vorausgehenden Elemente)
In [..]: a, b, c
Eine neue Zeile (innerhalb einer Zelle des Notebooks) wird über die
Enter-Taste erzeugt; in der neuen Zeile ist ab derselben Stelle zu
schreiben wie in der vorangehenden Zeile, wenn nicht ein eingerückter
Block entstehen soll (bzw. wenn nicht durch ein '\\' am Zeilenende ei-
ne Verlängerung der Zeile erreicht werden soll)
Das ist Teil der Python-Syntax und führt bei Nichtbeachten zu einem
Syntaxfehler
Eingerückte Blöcke sind z.B. bei Kontrollstrukturen (vor allem in Pro-
grammen benutzt) erforderlich. Dabei müssen alle Einrückungen die glei-
che Stellenanzahl (standardmäßig 4 Stellen) haben
Bei der if-else-Anweisung sieht das z.B. folgendermaßen aus:
In [..]: if a < 1:
b = 0 # 4 Stellen eingerückt
c = 3 # ebenso
else:
b = 1 # ebenso
oder bei einer Funktions-Definition:
In [..]: def summe(x):
sum = 0
for y in x:
sum += y # weitere Einrückung
return sum
Die Funktion berechnet die Summe der Elemente des Zahlen-Containers
x (eine Liste, ein Tupel oder eine Menge)
Mittels '#' können in Codezellen Kommentare geschrieben werden, sie wer-
den bei der Ausführung ignoriert
Einige Datentypen:
Zeichenkette (string) z.B.: 'Tab23' oder \"Tab23\"
Tupel (tuple) z.B.:
In [..]: t = ( 1, 2, 3 ); t1 = ( 'a', a, Rational(1, 2), 2.7 )
Zugriff auf Elemente t[0], t1[-1], Slicing (Zählung ab 0)
Liste (list) z.B.:
In [..]: L = [ 1, 2, 3 ]; L1 = [ 'a', a, Rational(1, 2), 2.7 ]
Zugriff auf Elemente L[0], L1[-1], Slicing (Zählung ab 0)
Schlüssel-Wert-Liste (dictionary, dict) z.B.:
In [..]: d = { a:4, b:34, c:-8 }
Zugriff auf Elemente d[a], d[c]
Menge (set) z.B.:
In [..]: m = { a, b, c }; m1 = set() (leere Menge)
Zugriff auf Elemente m.pop(), Indexzugriff mit list(m)[index] mög-
lich
Weitere nützliche Python-Elemente:
Mittels type(obj) kann der Datentyp eines Objektes obj erfragt werden
List-Comprehension
In [..]: tup = 1, 2, 3, 4, 5, 6 # oder anderer Datencontainer
In [..]: [ x**2 for x in tup ] # sehr mächtige Anweisung
Out[..]: [1, 4, 9, 16, 25, 36]
Funktionsdefinition mit anonymer Funktion
lambda arg1, arg2, ... : ausdruck in arg1, arg2, ...
Klasse Rational: da p/q in Python (und damit auch in SymPy) eine float-
Zahl ergibt, kann bei Bedarf eine rationale Zahl Rational(p, q) ver-
wendet werden (in zufall erfolgt das an den meisten Stellen automa-
tisch)
*liste als Argument einer Funktion packt den Container liste aus
Ersetzen des Wertes eines Bezeichners in einem Ausdruck durch
einen anderen Wert (eine SymPy-Anweisung)
ausdruck.subs(bez, wert)
In [..]: (x+y).subs(x, 2)
Out[..]: y+2
Die Ausgabe '<bound method ...>' weist auf eine an ein Objekt gebundene
Methode (eine Funktion) hin, die zu ihrer Ausführung in Klammern ein-
gefasste Parameter erwartet
""")
return
if h == 13:
print(
"""
Bemerkungen für Programmierer / Entwickler
Zur Unterstützung der Fehlersuche ist im Hauptprogramm die Variable _TEST
vorgesehen, die im Quelltext geändert werden kann; bei _TEST = True werden
bei Fehlern die vollständigen Python-Fehlermeldungen angezeigt
Durch das zufall-Paket wird die Python-Sprache an einigen Stellen modifiziert
(Umdefinition der Operatoren '^' und '|', Unterbinden der Zuweisung eines
Wertes an die Eigenschaft/Methode eines Objektes ('objekt.eigenschaft = wert'-
Konstrukt), Verwenden der deutschen Umlaute in Bezeichnern u.a.m.). Bei Ände-
rungen oder Ergänzungen der zufall-Quelltexte dürfen diese Modifizierungen nicht
benutzt werden. Ebenso ist es nicht ratsam, innerhalb eines zufall-Notebooks
eine allgemeine Python-Programmierung durchzuführen
Aus der Sicht des Autors sollten die Schwerpunkte der weiteren Entwicklung
des Paketes sein:
- Konfiguration der Jupyter-Oberfläche entsprechend den Bedürfnissen von
Lehrern und Schülern\n
- Vereinheitlichung der Schriftart und -größe für Ausgaben\n
- Aufnahme weiterer statistischer Tests in das Paket\n
- Gestaltung der EreignisAlgebra-Klasse auf der Basis von logischen Aus-
drücken\n
- Verbesserung der Fehlererkennung und -mitteilung\n
- Bessere Verknüpfung der Dokumentation mit den Programmen\n
- Eventuelle Anpassung an die SymEngine (nach deren Fertigstellung durch
die Entwickler)
""")
return
# ------------------------------
# Hilfsgroessen für True / False
# ------------------------------
Ja = ja = Mit = mit = True
Nein = nein = Ohne = ohne = False
# ---------------------------
# Helper class for equations
# ---------------------------
class _Gleichung(ZufallsObjekt):

    # printer method used for LaTeX output
    printmethod = '_latex'

    def __new__(cls, *args):
        try:
            if not args:
                raise ZufallError("mindestens die linke Seite der Gleichung angeben")
            if len(args) > 2:
                raise ZufallError("nur die beiden Seiten der Gleichung angeben")
            lhs = args[0]
            rhs = 0
            if len(args) > 1:
                rhs = args[1]
            ve = importlib.import_module('agla.lib.objekte.vektor')
            Vektor = ve.Vektor
            if not ((is_zahl(lhs) or isinstance(lhs, Vektor)) and
                    (is_zahl(rhs) or isinstance(rhs, Vektor))):
                raise ZufallError("nur arithmetische Ausdrücke oder Vektoren angeben")
            # base class name fixed (was ZufallObjekt, which is undefined)
            return ZufallsObjekt.__new__(cls, lhs, rhs)
        except ZufallError as e:
            print('zufall:', str(e))
            return

    def __str__(self):
        return str(self.lhs) + " = " + str(self.rhs)

    def __repr__(self):
        return 'gleichung(' + repr(self.lhs) + ', ' + repr(self.rhs) + ')'

    def _latex(self, printer):
        return latex(self.lhs) + '=' + latex(self.rhs)

    @property
    def lhs(self):
        return self.args[0]

    @property
    def rhs(self):
        return self.args[1]

    def __mul__(self, other):
        if not is_zahl(other):
            print('zufall: Zahlenwert als Faktor angeben')
            return
        return gleichung(other * self.lhs, other * self.rhs)

    # right-hand multiplication behaves identically
    __rmul__ = __mul__

    def __truediv__(self, other):
        if not is_zahl(other):
            print('zufall: Zahlenwert angeben')
            return
        return gleichung(self.lhs / other, self.rhs / other)

    def __add__(self, other):
        if not is_zahl(other):
            print('zufall: Zahlenwert als Summand angeben')
            return
        return gleichung(other + self.lhs, other + self.rhs)

    # right-hand addition behaves identically
    __radd__ = __add__

    def __neg__(self):
        return gleichung(-self.lhs, -self.rhs)

    def __pow__(self, other):
        if not is_zahl(other):
            print('zufall: Zahlenwert als Exponent angeben')
            return
        return gleichung(self.lhs**other, self.rhs**other)

    def __sub__(self, other):
        if not is_zahl(other):
            print('zufall: Zahlenwert angeben')
            return
        return gleichung(self.lhs - other, self.rhs - other)
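The operator overloads on `_Gleichung` all follow one pattern: apply the same operation to both sides and return a new equation. A minimal standalone sketch of that idea, with a hypothetical `Eq` class and plain numbers standing in for the library's symbolic expressions:

```python
class Eq:
    def __init__(self, lhs, rhs=0):
        self.lhs, self.rhs = lhs, rhs

    def __mul__(self, k):          # Eq(a, b) * k scales both sides
        return Eq(self.lhs * k, self.rhs * k)

    __rmul__ = __mul__             # k * Eq(a, b) behaves the same

    def __add__(self, k):          # Eq(a, b) + k shifts both sides
        return Eq(self.lhs + k, self.rhs + k)

    def __str__(self):
        return f"{self.lhs} = {self.rhs}"

# Each operator returns a new equation, so operations chain naturally:
e = Eq(2, 6) * 3 + 1
print(e)  # -> 7 = 19
```

Returning a new object from each operator (instead of mutating `self`) is what lets these transformations compose left to right.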
| 36.493086 | 122 | 0.55129 | 11,581 | 100,283 | 4.75503 | 0.138934 | 0.018414 | 0.02506 | 0.027965 | 0.387957 | 0.360936 | 0.339562 | 0.304097 | 0.272409 | 0.25474 | 0 | 0.008757 | 0.329288 | 100,283 | 2,747 | 123 | 36.506371 | 0.809934 | 0.041662 | 0 | 0.518079 | 0 | 0.028814 | 0.318688 | 0.011611 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002825 | 0.015254 | null | null | 0.267797 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
668f3876c8fdd49d31c1bb250f330ea8bb798338 | 1,329 | py | Python | app/forms.py | yahuishuo/alpha-flask | f5921e665737cddb583b6ab752d1154f9121638a | [
"Apache-2.0"
] | null | null | null | app/forms.py | yahuishuo/alpha-flask | f5921e665737cddb583b6ab752d1154f9121638a | [
"Apache-2.0"
] | null | null | null | app/forms.py | yahuishuo/alpha-flask | f5921e665737cddb583b6ab752d1154f9121638a | [
"Apache-2.0"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, BooleanField, SubmitField, ValidationError
from wtforms.validators import DataRequired, Length, Email, Regexp, EqualTo, URL, Optional
from models.profile import User


class LoginForm(FlaskForm):
    username = StringField()
    password = PasswordField()
    remember_me = BooleanField('Keep me logged in')


class RegisterForm(FlaskForm):
    users_in_db = User.objects
    name_rule = Regexp('^[A-Za-z0-9_.]*$', 0, 'User names must have only letters, numbers, dots or underscores')
    username = StringField('Username', validators=[DataRequired(), Length(1, 64), name_rule])
    email = StringField('Email', validators=[DataRequired(), Length(1, 128), Email()])
    password = PasswordField('Password', validators=[DataRequired(), EqualTo('password2', message='Does not match')])
    password2 = PasswordField('Confirm password', validators=[DataRequired()])
    register_submit = SubmitField('Register')

    def validate_username(self, field):
        if self.users_in_db.filter(username=field.data).count() > 0:
            raise ValidationError('Username already in use')

    def validate_email(self, field):
        if self.users_in_db.filter(email=field.data).count() > 0:
            raise ValidationError('Email already registered')
| 44.3 | 117 | 0.724605 | 153 | 1,329 | 6.202614 | 0.48366 | 0.092729 | 0.028451 | 0.061117 | 0.136986 | 0.136986 | 0.063224 | 0.063224 | 0 | 0 | 0 | 0.012489 | 0.156509 | 1,329 | 29 | 118 | 45.827586 | 0.834077 | 0 | 0 | 0 | 0 | 0 | 0.160271 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.181818 | 0.181818 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6690000285467cf29c337fb120d04ba8b2509782 | 463 | py | Python | setup.py | DavideAlwaysMe/link-shortcut | 23e59ccef8a21906cdbcde140df153f808d511ec | [
"MIT"
] | 1 | 2021-03-04T11:15:52.000Z | 2021-03-04T11:15:52.000Z | setup.py | DavideAlwaysMe/link-shortcut | 23e59ccef8a21906cdbcde140df153f808d511ec | [
"MIT"
] | null | null | null | setup.py | DavideAlwaysMe/link-shortcut | 23e59ccef8a21906cdbcde140df153f808d511ec | [
"MIT"
] | null | null | null | import os
from setuptools import setup

setup(
    name="link",
    version="0.1",
    author="Davide Rizzuto",
    author_email="yodadr01@gmail.com",
    license="MIT",
    url="https://github.com/DavideAlwaysMe/link-shortcut",
    packages=['link'],
    scripts=['link/link.py'],
    data_files=[
        ('/usr/share/applications', ['link.desktop']),
        ('/usr/share/pixmaps', ['icona.png']),
    ],
    install_requires=['requests', 'favicon'],
)
| 25.722222 | 90 | 0.602592 | 51 | 463 | 5.411765 | 0.784314 | 0.057971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01084 | 0.203024 | 463 | 17 | 91 | 27.235294 | 0.737127 | 0 | 0 | 0 | 0 | 0 | 0.393089 | 0.049676 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
66a440b8cde07927e07af471ab468bcbe3206969 | 377 | py | Python | car.py | Pscodium/python-gas-economy-project | 27f056fad6841c77c4c0f0ac8859112e6c593b71 | [
"MIT"
] | null | null | null | car.py | Pscodium/python-gas-economy-project | 27f056fad6841c77c4c0f0ac8859112e6c593b71 | [
"MIT"
] | null | null | null | car.py | Pscodium/python-gas-economy-project | 27f056fad6841c77c4c0f0ac8859112e6c593b71 | [
"MIT"
] | null | null | null | kilometer = float(input('Digite quantos KM você irá percorrer: '))
price_gas = float(input('Digite o preço da gasolina na sua região: R$'))
cars_consumption = [5, 6, 7, 8, 9, 10, 11, 12, 13]

for consumption in cars_consumption:
    total = (kilometer / consumption) * price_gas
    print(f'Se seu carro tem a autonomia de {consumption}km por litro, você vai gastar R${total:.2f}')
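The script computes cost as `(distance / consumption) * price` for each consumption value. The same calculation as a reusable function (the name `fuel_cost` is hypothetical, not from the script), with fixed inputs instead of `input()` so it can be checked directly:

```python
def fuel_cost(distance_km, price_per_liter, km_per_liter):
    # Liters needed for the trip, then total cost at the given pump price.
    liters_needed = distance_km / km_per_liter
    return liters_needed * price_per_liter

# 300 km at R$5.00/l in a car doing 10 km/l: 30 liters * R$5.00
print(f'R${fuel_cost(300, 5.0, 10):.2f}')  # -> R$150.00
```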
66a451b2d65deda4cbd97dd5880c83f60487ffef | 1,230 | py | Python | setup.py | JustBennnn/minecraftstats | d8170cb57464339d32d96d2f768cff3fabf93370 | [
"MIT"
] | 2 | 2021-09-14T20:35:39.000Z | 2022-03-21T18:35:27.000Z | setup.py | JustBennnn/minecraftstats | d8170cb57464339d32d96d2f768cff3fabf93370 | [
"MIT"
] | null | null | null | setup.py | JustBennnn/minecraftstats | d8170cb57464339d32d96d2f768cff3fabf93370 | [
"MIT"
] | null | null | null | from setuptools import setup

setup(
    name="minecraftstats",
    version="1.1.6",
    author="JustBen",
    author_email="justben009@gmail.com",
    description="A python library allowing the user to get stats from Hypixel in Minecraft.",
    keywords="minecraft api-wrapper mojang mojang-api".split(),
    python_requires=">=3.7",
    packages=["minecraftstats"],
    long_description=open("README.md", "r", encoding="utf-8").read(),
    long_description_content_type="text/markdown",
    url="https://github.com/JustBennnn/minecraftstats",
    project_urls={
        "Issue Tracker": "https://github.com/JustBennnn/minecraftstats/issues",
    },
    install_requires=["requests", "pydantic", "mojang"],
    license="MIT",
    classifiers=[
        "Programming Language :: Python :: 3",
        "Development Status :: 5 - Production/Stable",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
        "Natural Language :: English",
        "Topic :: Internet :: WWW/HTTP",
        "Topic :: Software Development :: Libraries",
        "Topic :: Software Development :: Libraries :: Python Modules",
        "Topic :: Utilities",
    ],
)
66b1fe37453f3cf15e9d2035a80308d5cf8f4498 | 2,352 | py | Python | tests/test_check_files_checksums_logging.py | adisbladis/geostore | 79439c06b33414e1e26b3aa4b93a72fd7cbbae83 | [
"MIT"
] | 25 | 2021-05-19T08:05:07.000Z | 2022-03-14T02:48:58.000Z | tests/test_check_files_checksums_logging.py | adisbladis/geostore | 79439c06b33414e1e26b3aa4b93a72fd7cbbae83 | [
"MIT"
] | 311 | 2021-05-17T23:04:56.000Z | 2022-03-31T10:41:44.000Z | tests/test_check_files_checksums_logging.py | adisbladis/geostore | 79439c06b33414e1e26b3aa4b93a72fd7cbbae83 | [
"MIT"
] | 1 | 2022-01-03T05:38:32.000Z | 2022-01-03T05:38:32.000Z | import sys
from os import environ
from unittest.mock import patch

from pynamodb.exceptions import DoesNotExist
from pytest import mark, raises
from pytest_subtests import SubTests

from geostore.api_keys import MESSAGE_KEY
from geostore.check_files_checksums.task import main
from geostore.check_files_checksums.utils import ARRAY_INDEX_VARIABLE_NAME
from geostore.error_response_keys import ERROR_KEY
from geostore.logging_keys import LOG_MESSAGE_VALIDATION_COMPLETE
from geostore.models import DATASET_ID_PREFIX, DB_KEY_SEPARATOR, VERSION_ID_PREFIX
from geostore.parameter_store import ParameterName, get_param
from geostore.processing_assets_model import ProcessingAssetType, ProcessingAssetsModelBase
from geostore.step_function import Outcome

from .aws_utils import get_s3_role_arn
from .general_generators import any_program_name
from .stac_generators import any_dataset_id, any_dataset_version_id


@mark.infrastructure
def should_log_missing_item(subtests: SubTests) -> None:
    # Given
    dataset_id = any_dataset_id()
    version_id = any_dataset_version_id()
    index = 0
    expected_log = {
        ERROR_KEY: {MESSAGE_KEY: ProcessingAssetsModelBase.DoesNotExist.msg},
        "parameters": {
            "hash_key": (
                f"{DATASET_ID_PREFIX}{dataset_id}"
                f"{DB_KEY_SEPARATOR}{VERSION_ID_PREFIX}{version_id}"
            ),
            "range_key": f"{ProcessingAssetType.DATA.value}{DB_KEY_SEPARATOR}{index}",
        },
    }

    sys.argv = [
        any_program_name(),
        f"--dataset-id={dataset_id}",
        f"--version-id={version_id}",
        f"--first-item={index}",
        f"--assets-table-name={get_param(ParameterName.PROCESSING_ASSETS_TABLE_NAME)}",
        f"--results-table-name={get_param(ParameterName.STORAGE_VALIDATION_RESULTS_TABLE_NAME)}",
        f"--s3-role-arn={get_s3_role_arn()}",
    ]

    # When/Then
    with patch("geostore.check_files_checksums.task.LOGGER.error") as logger_mock, patch.dict(
        environ, {ARRAY_INDEX_VARIABLE_NAME: "0"}
    ):
        with subtests.test(msg="Return code"), raises(DoesNotExist):
            main()
        with subtests.test(msg="Log message"):
            logger_mock.assert_any_call(
                LOG_MESSAGE_VALIDATION_COMPLETE,
                extra={"outcome": Outcome.FAILED, "error": expected_log},
            )
| 37.333333 | 97 | 0.721514 | 294 | 2,352 | 5.44898 | 0.336735 | 0.067416 | 0.033708 | 0.050562 | 0.160424 | 0.036205 | 0 | 0 | 0 | 0 | 0 | 0.002614 | 0.18665 | 2,352 | 62 | 98 | 37.935484 | 0.834814 | 0.006378 | 0 | 0 | 0 | 0 | 0.218509 | 0.183376 | 0 | 0 | 0 | 0 | 0.019231 | 1 | 0.019231 | false | 0 | 0.346154 | 0 | 0.365385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
66bdce42d2b0da59afe0cb16dce28d26a082a26c | 579 | py | Python | clb_nb_utils/oauth.py | HumanBrainProject/clb-nb-utils | 213715c3f96b1ce11101617892b86fbf22ae602e | [
"Apache-2.0"
] | 1 | 2021-11-04T18:32:41.000Z | 2021-11-04T18:32:41.000Z | clb_nb_utils/oauth.py | HumanBrainProject/clb-nb-utils | 213715c3f96b1ce11101617892b86fbf22ae602e | [
"Apache-2.0"
] | null | null | null | clb_nb_utils/oauth.py | HumanBrainProject/clb-nb-utils | 213715c3f96b1ce11101617892b86fbf22ae602e | [
"Apache-2.0"
] | null | null | null | '''This module gets fresh access tokens from the JupyterHub access token service.

See https://github.com/HumanBrainProject/jupyterhub-access-token-service
'''
import os

import requests

JUPYTERHUB_API_TOKEN = os.getenv("JUPYTERHUB_API_TOKEN")
# @TODO fix this
JUPYTERHUB_SERVICE_URL = 'http://jupyterhub:8080/services'


def get_token():
    headers = {'Authorization': f'Token {JUPYTERHUB_API_TOKEN}'}
    url = f'{JUPYTERHUB_SERVICE_URL}/access-token-service/access-token'
    resp = requests.get(url, headers=headers)
    return resp.json().get('access_token')
66c4235595d8974b0c06e0a9fd276b8f59f19204 | 3,087 | py | Python | rc_velo_vol.py | VincentCheungM/rc_velo_volt | 8aeabc32eae4f42fa6bb6c7252daa008b1b58ebc | [
"MIT"
] | null | null | null | rc_velo_vol.py | VincentCheungM/rc_velo_volt | 8aeabc32eae4f42fa6bb6c7252daa008b1b58ebc | [
"MIT"
] | 2 | 2019-04-13T02:27:50.000Z | 2019-04-21T01:49:50.000Z | rc_velo_vol.py | VincentCheungM/rc_velo_volt | 8aeabc32eae4f42fa6bb6c7252daa008b1b58ebc | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
A simple scraper for recording the power supply of velodyne LiDAR,
by getting the `diag.json` files.
@author vincent cheung
@file rc_velo_vol.py
"""
import argparse
import math
import time
import requests
import json
import logging
import os

from volt_temp import Volt_temp

url_1 = 'http://192.168.100.201/cgi/diag.json'
url_2 = 'http://192.168.100.202/cgi/diag.json'
## For test only
url_3 = 'http://127.0.0.1:8000/example_diag.json'
url_4 = 'http://127.0.0.1:8000/example_diag.json'

# Sleep a period after getting one diag, in seconds.
sleep_prd = 1.0


def volt_temp_logger(volts, lidar_id):
    """
    A simple level logger based on the voltage and lidar_id.
    """
    # Round the voltage to xx.xx
    volts = round(volts, 2)
    if volts >= 11.5 and volts <= 12.5:
        logger.info('Lidar:{} voltage:{}'.format(lidar_id, volts))
    elif volts >= 10.0 and volts < 11.5:
        logger.warning('Lidar:{} voltage:{}'.format(lidar_id, volts))
    elif volts >= 9.0 and volts < 10.0:
        logger.error('Lidar:{} voltage:{}'.format(lidar_id, volts))
    else:
        logger.critical('Lidar:{} voltage:{}'.format(lidar_id, volts))


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Velodyne LiDAR voltage logger.')
    parser.add_argument('--num', type=int, help='Num of LiDARs', default=2)
    parser.add_argument('--mode', choices=['run', 'test'], default='run')
    parser.add_argument('--version', action='version', version='%(prog)s alpha 1.0')
    args = parser.parse_args()

    if args.mode == 'test':
        url_lidar_1 = url_3
        url_lidar_2 = url_4
    else:
        url_lidar_1 = url_1
        url_lidar_2 = url_2

    # Define logger and logfile path
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    rq = time.strftime('velo_volt-%Y%m%d%H%M', time.localtime(time.time()))
    log_path = os.path.join(os.getcwd(), 'data', 'logs')
    log_name = os.path.join(log_path, rq + '.log')
    logfile = log_name
    # Check whether the path exists
    if not os.path.exists(log_path):
        os.makedirs(log_path)
    fh = logging.FileHandler(logfile, mode='w')
    fh.setLevel(logging.DEBUG)
    formatter = logging.Formatter("%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s")
    fh.setFormatter(formatter)
    logger.addHandler(fh)

    volt_temp_parser = Volt_temp()
    while True:
        # Get the `diag.json` file periodically, then parse and log it.
        req = requests.get(url_lidar_1, timeout=0.20)
        js = req.json()['volt_temp']
        volt_temp_parser.parse(js)
        volt_temp_logger(js['bot']['pwr_v_in'], 201)
        if args.num >= 2:
            # TODO: Not yet support more than two LiDARs
            req = requests.get(url_lidar_2, timeout=0.20)
            js = req.json()['volt_temp']
            volt_temp_parser.parse(js)
            volt_temp_logger(js['bot']['pwr_v_in'], 202)
        time.sleep(sleep_prd)
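The thresholds in `volt_temp_logger` map a supply voltage to a log severity. The same classification as a pure function (the name `volt_level` is hypothetical), so the bands can be checked without setting up a logging handler:

```python
def volt_level(volts):
    # Same bands as volt_temp_logger above, for a nominal 12 V supply.
    volts = round(volts, 2)
    if 11.5 <= volts <= 12.5:
        return 'info'      # nominal
    elif 10.0 <= volts < 11.5:
        return 'warning'   # sagging
    elif 9.0 <= volts < 10.0:
        return 'error'     # close to brown-out
    return 'critical'      # out of range, too low or too high

print(volt_level(12.1))  # -> info
print(volt_level(9.4))   # -> error
```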
| 32.494737 | 109 | 0.635892 | 457 | 3,087 | 4.133479 | 0.367615 | 0.046585 | 0.038115 | 0.048703 | 0.204341 | 0.181048 | 0.149285 | 0.149285 | 0.107994 | 0.07729 | 0 | 0.039175 | 0.214448 | 3,087 | 94 | 110 | 32.840426 | 0.739794 | 0.144153 | 0 | 0.1 | 0 | 0.016667 | 0.19403 | 0.011698 | 0 | 0 | 0 | 0.010638 | 0 | 1 | 0.016667 | false | 0 | 0.133333 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
66cfeed34d6ac341fdd7acad5244ee5ac346603e | 695 | py | Python | django_cradmin/templatetags/cradmin_icon_tags.py | appressoas/django_cradmin | 0f8715afdfe1ad32e46033f442e622aecf6a4dec | [
"BSD-3-Clause"
] | 11 | 2015-07-05T16:57:58.000Z | 2020-11-24T16:58:19.000Z | django_cradmin/templatetags/cradmin_icon_tags.py | appressoas/django_cradmin | 0f8715afdfe1ad32e46033f442e622aecf6a4dec | [
"BSD-3-Clause"
] | 91 | 2015-01-08T22:38:13.000Z | 2022-02-10T10:25:27.000Z | django_cradmin/templatetags/cradmin_icon_tags.py | appressoas/django_cradmin | 0f8715afdfe1ad32e46033f442e622aecf6a4dec | [
"BSD-3-Clause"
] | 3 | 2016-12-07T12:19:24.000Z | 2018-10-03T14:04:18.000Z | from django import template
import logging
from django.conf import settings
from django.template.defaultfilters import stringfilter
from django_cradmin import css_icon_map
register = template.Library()
log = logging.getLogger(__name__)
@register.simple_tag
@stringfilter
def cradmin_icon(iconkey):
"""
Returns the css class for an icon configured with the
given key in ``DJANGO_CRADMIN_CSS_ICON_MAP``.
"""
iconmap = getattr(settings, 'DJANGO_CRADMIN_CSS_ICON_MAP', css_icon_map.FONT_AWESOME)
icon_classes = iconmap.get(iconkey, '')
if not icon_classes:
log.warn('No icon named "%s" in settings.DJANGO_CRADMIN_ICONMAP.', iconkey)
return icon_classes
| 26.730769 | 89 | 0.761151 | 94 | 695 | 5.361702 | 0.478723 | 0.079365 | 0.079365 | 0.079365 | 0.09127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159712 | 695 | 25 | 90 | 27.8 | 0.863014 | 0.142446 | 0 | 0 | 0 | 0 | 0.140625 | 0.102431 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
66eafa7adef9b625c2e6b6dc2db3e5316ae781f5 | 496 | py | Python | stravenkovac/common_data.py | Katzeminze/Stravenkovac | c600f269327a885a80111a493ef3a9d4a75b41db | [
"BSD-Source-Code"
] | null | null | null | stravenkovac/common_data.py | Katzeminze/Stravenkovac | c600f269327a885a80111a493ef3a9d4a75b41db | [
"BSD-Source-Code"
] | null | null | null | stravenkovac/common_data.py | Katzeminze/Stravenkovac | c600f269327a885a80111a493ef3a9d4a75b41db | [
"BSD-Source-Code"
] | null | null | null | pdf_path_month_hour = "C:/Users/Nyrobtseva/Documents/Python_Parser_stravenky/Month hour registration_07_2020_David_Tampier.pdf"
csv_path_month_hour = "month_hours.csv" # should be changed to smth better
pdf_path_travel_costs = "C:/Users/Nyrobtseva/Documents/Python_Parser_stravenky/cz_travelexpenses_DavidTampier_July.pdf"
csv_path_travel_costs = "travel_costs.csv" # should be changed to smth better
"""Dictionaries"""
dictionary_WH = {}
dictionary_TH = {}
"""Constatnts"""
required_WH = 6 | 38.153846 | 127 | 0.8125 | 70 | 496 | 5.357143 | 0.528571 | 0.072 | 0.069333 | 0.133333 | 0.405333 | 0.405333 | 0.405333 | 0 | 0 | 0 | 0 | 0.015487 | 0.08871 | 496 | 13 | 128 | 38.153846 | 0.814159 | 0.131048 | 0 | 0 | 0 | 0 | 0.574684 | 0.481013 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dd07e6539d24fd747ad3a887d3dd4632d2206e64 | 14,988 | py | Python | Segger/promod_dialog.py | gregdp/segger | d4c112fd43f0b088145e225f976335800874ebe5 | [
"MIT"
] | 6 | 2019-03-27T22:53:12.000Z | 2021-11-19T09:02:05.000Z | Segger/promod_dialog.py | gregdp/segger | d4c112fd43f0b088145e225f976335800874ebe5 | [
"MIT"
] | 1 | 2017-03-07T16:52:30.000Z | 2019-11-25T21:37:21.000Z | Segger/promod_dialog.py | gregdp/segger | d4c112fd43f0b088145e225f976335800874ebe5 | [
"MIT"
] | 5 | 2019-05-30T19:10:01.000Z | 2022-02-09T07:04:59.000Z |
# Copyright (c) 2020 Greg Pintilie - pintilie@mit.edu
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import chimera
import os
import os.path
import Tkinter
from CGLtk import Hybrid
import VolumeData
import _multiscale
import MultiScale.surface
import _surface
import numpy
import _contour
import Matrix
import VolumeViewer
from sys import stderr
from time import clock
from axes import prAxes
import regions
import graph
from Segger import dev_menus, timing, seggerVersion
OML = chimera.openModels.list
REG_OPACITY = 0.45
from segment_dialog import current_segmentation, segmentation_map
def umsg ( txt ) :
    print txt
    status ( txt )


def status ( txt ) :
    txt = txt.rstrip('\n')
    msg.configure(text = txt)
    msg.update_idletasks()
class ProMod_Dialog ( chimera.baseDialog.ModelessDialog ):

    title = "ProMod - Probabilistic Models (Segger v" + seggerVersion + ")"
    name = "segger_promod"
    buttons = ( "Close" )
    help = 'https://github.com/gregdp/segger'

    def fillInUI(self, parent):

        self.group_mouse_mode = None

        tw = parent.winfo_toplevel()
        self.toplevel_widget = tw
        tw.withdraw()

        parent.columnconfigure(0, weight = 1)

        row = 0

        menubar = Tkinter.Menu(parent, type = 'menubar', tearoff = False)
        tw.config(menu = menubar)

        f = Tkinter.Frame(parent)
        f.grid(column=0, row=row, sticky='ew')
        l = Tkinter.Label(f, text=' ')
        l.grid(column=0, row=row, sticky='w')

        row += 1
        ff = Tkinter.Frame(f)
        ff.grid(column=0, row=row, sticky='w')
        if 1 :
            l = Tkinter.Label(ff, text = "1. Open all models to be considered, make them visible, hide other models", anchor = 'w')
            l.grid(column=0, row=0, sticky='ew', padx=5, pady=1)

        row += 1
        ff = Tkinter.Frame(f)
        ff.grid(column=0, row=row, sticky='w')
        if 1 :
            l = Tkinter.Label(ff, text = "2. Find (closest-to) average model", anchor = 'w')
            l.grid(column=0, row=0, sticky='ew', padx=5, pady=1)
            b = Tkinter.Button(ff, text="Find Average Model", command=self.AvgMod)
            b.grid (column=1, row=0, sticky='w', padx=5, pady=1)
            self.avgModLabel = Tkinter.Label(ff, text = " ", anchor = 'w')
            self.avgModLabel.grid(column=2, row=0, sticky='ew', padx=5, pady=1)

        row += 1
        ff = Tkinter.Frame(f)
        ff.grid(column=0, row=row, sticky='w')
        if 1 :
            l = Tkinter.Label(ff, text = "3. Calculate standard deviations at each residue ", anchor = 'w')
            l.grid(column=0, row=0, sticky='ew', padx=5, pady=1)
            b = Tkinter.Button(ff, text="Calculate", command=self.Calc)
            b.grid (column=1, row=0, sticky='w', padx=5, pady=1)

        row += 1
        ff = Tkinter.Frame(f)
        ff.grid(column=0, row=row, sticky='w')
        if 1 :
            l = Tkinter.Label(ff, text = " - standard deviations are stored for each residue atom as the b-factor", anchor = 'w')
            l.grid(column=0, row=0, sticky='ew', padx=5, pady=1)

        row += 1
        ff = Tkinter.Frame(f)
        ff.grid(column=0, row=row, sticky='w')
        if 1 :
            l = Tkinter.Label(ff, text = " - use Tools -> Depiction -> Render by Attribute to show deviations using", anchor = 'w')
            l.grid(column=0, row=0, sticky='ew', padx=5, pady=1)

        row += 1
        ff = Tkinter.Frame(f)
        ff.grid(column=0, row=row, sticky='w')
        if 1 :
            l = Tkinter.Label(ff, text = "   color and/or ribbon thickness. See tutorial by pressing Help below.", anchor = 'w')
            l.grid(column=0, row=0, sticky='ew', padx=5, pady=1)

        row += 1
        f = Tkinter.Frame(parent)
        f.grid(column=0, row=row, sticky='ew')
        l = Tkinter.Label(f, text=' ')
        l.grid(column=0, row=row, sticky='w')

        row += 1
        dummyFrame = Tkinter.Frame(parent, relief='groove', borderwidth=1)
        Tkinter.Frame(dummyFrame).pack()
        dummyFrame.grid(row=row, column=0, columnspan=7, pady=7, sticky='we')

        global msg
        row = row + 1
        msg = Tkinter.Label(parent, width = 60, anchor = 'w', justify = 'left', fg="red")
        msg.grid(column=0, row=row, sticky='ew', padx=5, pady=1)
        row += 1
    def Calc ( self ) :

        if hasattr ( self, 'avgMod' ) and hasattr ( self, 'mods' ) and len(self.mods) > 0 and self.avgMod != None :
            print "Average model: %s -- %d mods" % ( self.avgMod.name, len(self.mods) )
        else :
            umsg ("Find Average Model first.")
            return

        avgMod = self.avgMod
        mods = self.mods

        umsg ( "Calculating standard deviations..." )

        devs = []
        for ri, avgRes in enumerate ( avgMod.residues ) :
            status ( "Res %d/%d" % (ri+1, len(avgMod.residues)) )
            for avgAt in avgRes.atoms :
                # mean squared distance of this atom to its position in the average model
                mean = 0.0
                for m in mods :
                    res = m.residues[ri]
                    cat = res.atomsMap[avgAt.name][0]
                    v = cat.coord() - avgAt.coord()
                    d = v.length * v.length
                    mean += d
                mean /= len(mods)
                stdev = numpy.sqrt ( mean )
                devs.append ( stdev )
                # store the standard deviation as the atom's b-factor in every model
                for m in mods :
                    res = m.residues[ri]
                    cat = res.atomsMap[avgAt.name][0]
                    cat.bfactor = stdev

        umsg ( "%d models, %d residues - min deviation %.2f, max deviation %.2f" % (
            len(mods), len(avgMod.residues), numpy.min(devs), numpy.max(devs) ) )
    def Calc_CA ( self ) :

        if hasattr ( self, 'avgMod' ) and hasattr ( self, 'mods' ) and len(self.mods) > 0 and self.avgMod != None :
            print "Average model: %s -- %d mods" % ( self.avgMod.name, len(self.mods) )
        else :
            umsg ("Find Average Model first.")
            return

        avgMod = self.avgMod
        mods = self.mods

        umsg ( "Calculating standard deviations..." )

        devs = []
        for ri, resAvg in enumerate ( avgMod.residues ) :
            try :
                catAvg = resAvg.atomsMap["CA"][0]
            except :
                continue
            # mean squared distance of this CA atom to its position in the average model
            mean = 0.0
            for m in mods :
                res = m.residues[ri]
                cat = res.atomsMap["CA"][0]
                v = cat.coord() - catAvg.coord()
                d = v.length * v.length
                mean += d
            mean /= len(mods)
            stdev = numpy.sqrt ( mean )
            devs.append ( stdev )
            # store the per-residue deviation on every atom of the residue
            for m in mods :
                res = m.residues[ri]
                for at in res.atoms :
                    at.bfactor = stdev
                    #at.occupancy = stdev

        umsg ( "%d models, %d residues - min deviation %.2f, max deviation %.2f" % (
            len(mods), len(avgMod.residues), numpy.min(devs), numpy.max(devs) ) )
    def AvgMod0 ( self ) :

        self.avgMod = None
        self.mods = []

        import numpy

        for m in chimera.openModels.list() :
            if type(m) == chimera.Molecule and m.display == True :
                self.mods.append ( m )

        N = len(self.mods)
        if N < 2 :
            umsg ( "At least 2 models are needed - make sure they are shown" )
            self.avgModLabel.configure ( text = "" )
            return

        mod0 = self.mods[0]
        numRes = len(mod0.residues)
        umsg ( "Finding average of %d mods, %d residues" % ( len(self.mods), len(mod0.residues) ) )

        # accumulate CA positions over all models, indexed by residue
        avgPs = numpy.zeros ( [len(mod0.residues), 3] )
        for mod in self.mods :
            if numRes != len(mod.residues) :
                umsg ("All models should have the same number of residues")
                self.avgModLabel.configure ( text = "" )
                return
            for ri, res in enumerate ( mod.residues ) :
                cat = None
                try :
                    cat = res.atomsMap["CA"][0]
                except :
                    #print "carbon alpha not found in res ", ri, res.id.position
                    pass
                if cat :
                    avgPs[ri] += cat.coord().data()

        N = float ( len(self.mods) )
        for ri, res in enumerate ( mod0.residues ) :
            avgPs[ri] /= N

        # pick the model with the smallest total squared distance to the average
        minDist = -1.0
        minMod = None
        for mod in self.mods :
            modDist = 0.0
            for ri, res in enumerate ( mod.residues ) :
                try :
                    cat = res.atomsMap["CA"][0]
                except :
                    #print "carbon alpha not found in mod %s res " % mod.name, ri, res.id.position
                    continue
                dv = avgPs[ri] - cat.coord().data()
                modDist += numpy.sum ( dv * dv )
            if minMod == None or modDist < minDist :
                minMod = mod
                minDist = modDist

        print "Avg mod: %s, min dist to avg: %.2f" % (minMod.name, minDist)

        self.avgMod = minMod
        self.avgModLabel.configure ( text = " found: %s" % minMod.name )
        umsg ( "Average of %d models is %s" % (len(self.mods), minMod.name) )
        return minMod, avgPs
    def AvgMod ( self ) :

        self.avgMod = None
        self.mods = []

        import numpy

        for m in chimera.openModels.list() :
            if type(m) == chimera.Molecule and m.display == True :
                self.mods.append ( m )

        N = len(self.mods)
        if N < 2 :
            umsg ( "At least 2 models are needed - make sure they are shown" )
            self.avgModLabel.configure ( text = "" )
            return

        mod0 = self.mods[0]
        numRes = len(mod0.residues)
        umsg ( "Finding average of %d mods, %d residues" % ( len(self.mods), len(mod0.residues) ) )
        print "."

        # collect every atom's position in every model, keyed by chain / residue / atom name
        avg = {}
        for mod in self.mods :
            for res in mod.residues :
                for at in res.atoms :
                    if not res.id.chainId in avg :
                        avg[res.id.chainId] = {}
                    if not res.id.position in avg[res.id.chainId] :
                        avg[res.id.chainId][res.id.position] = {}
                    if not at.name in avg[res.id.chainId][res.id.position] :
                        avg[res.id.chainId][res.id.position][at.name] = []
                    avg[res.id.chainId][res.id.position][at.name].append ( numpy.array ( at.coord().data() ) )

        # replace each position list with its mean position
        for ci, rmap in avg.iteritems () :
            for ri, amap in rmap.iteritems () :
                for aname, plist in amap.iteritems () :
                    if len(plist) != len(self.mods) :
                        print " - at %s_%d.%s has only %d/%d pos" % ( aname, ri, ci, len(plist), len(self.mods) )
                    avgp = numpy.array ( [0.0, 0.0, 0.0] )   # float dtype, so += does not truncate
                    for p in plist :
                        avgp += p
                    avgp /= float ( len(plist) )
                    amap[aname] = avgp   # store the mean (was computed but discarded before)

        # pick the model with the smallest total squared distance to the average
        minDist = -1.0
        minMod = None
        for mod in self.mods :
            modDist = 0.0
            for ri, res in enumerate ( mod.residues ) :
                for at in res.atoms :
                    avgPos = avg[res.id.chainId][res.id.position][at.name]
                    dv = numpy.array ( at.coord().data() ) - avgPos
                    modDist += numpy.sum ( dv * dv )
            if minMod == None or modDist < minDist :
                minMod = mod
                minDist = modDist

        print "Avg mod: %s, min dist to avg: %.2f" % (minMod.name, minDist)

        self.avgMod = minMod
        self.avgModLabel.configure ( text = " found: %s" % minMod.name )
        umsg ( "Average of %d models is %s" % (len(self.mods), minMod.name) )
        return minMod
def Bring () :
    print "bring..."

    fromm, tom = None, None
    for m in chimera.openModels.list() :
        if type(m) == chimera.Molecule and m.display == True :
            if "promod" in m.name :
                fromm = m
            else :
                tom = m

    print " - from: %s" % fromm.name
    print " - to: %s" % tom.name

    bfs = []
    rid = {}
    for r in fromm.residues :
        rid[r.id.position] = r
        for at in r.atoms :
            bfs.append ( at.bfactor )

    print "devs mean: %.3f" % numpy.average(bfs)
    print "devs std: %.3f" % numpy.std(bfs)
    print "devs 3sig: %.3f" % (numpy.average(bfs) + 3.0*numpy.std(bfs))

    for r in tom.residues :
        rf = rid[r.id.position]
        for at in r.atoms :
            at.bfactor = rf.atomsMap[at.name][0].bfactor
def show_dialog (closeOld = True):

    from chimera import dialogs

    d = dialogs.find ( ProMod_Dialog.name, create=False )
    if d :
        if closeOld :
            d.toplevel_widget.update_idletasks ()
            d.Close()
            d.toplevel_widget.update_idletasks ()
        else :
            return d

    dialogs.register ( ProMod_Dialog.name, ProMod_Dialog, replace = True)

    d = dialogs.find ( ProMod_Dialog.name, create=True )

    # Avoid transient dialog resizing when created and mapped for first time.
    d.toplevel_widget.update_idletasks ()
    d.enter()

    return d
# -----------------------------------------------------------------------------
#
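The deviation measure that `Calc` stores as the b-factor is the square root of the mean squared distance of an atom to its position in the reference (average) model. A standalone Python 3 sketch of just that calculation, with a hypothetical helper name and plain coordinate tuples in place of Chimera atoms:

```python
import math

def rmsd_to_ref(ref, positions):
    # Mean of squared distances from each observed position to the reference,
    # then square root - the value Calc writes into each atom's b-factor.
    mean_sq = 0.0
    for p in positions:
        mean_sq += sum((a - b) ** 2 for a, b in zip(p, ref))
    mean_sq /= len(positions)
    return math.sqrt(mean_sq)

# Two models, each 1 unit from the reference along x: deviation is 1.0
print(rmsd_to_ref((0, 0, 0), [(1, 0, 0), (-1, 0, 0)]))  # -> 1.0
```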
# File: 6.0002/problem_sets/ps1/ps1b.py
# Repo: Haplo-Dragon/MIT (license: MIT)

###########################
# 6.0002 Problem Set 1b: Space Change
# Name: Ethan Fulbright
# Collaborators: Jesi Ross, Yale CS lecture - Computer Science 201a, Prof. Dana Angluin
# Time:
# Author: charz, cdenise
# ================================
# Part B: Golden Eggs
# ================================
# Problem 1
def dp_make_weight(egg_weights, target_weight, memo={}):
    """
    Find number of eggs to bring back, using the smallest number of eggs. Assumes there is
    an infinite supply of eggs of each weight, and there is always an egg of value 1.

    Parameters:
    egg_weights - tuple of integers, available egg weights sorted from smallest to largest
                  value (1 = d1 < d2 < ... < dk)
    target_weight - int, amount of weight we want to find eggs to fit
    memo - dictionary, OPTIONAL parameter for memoization (you may not need to use this
           parameter depending on your implementation)

    Returns: int, smallest number of eggs needed to make target weight
    """
    # Note: the shared mutable default is safe here only because each memo key
    # includes egg_weights, so results from different calls cannot collide.
    # This will be the key used to find answers in the memo
    subproblem = (egg_weights, target_weight)

    # If we've already stored this answer in the memo, return it
    if subproblem in memo:
        return memo[subproblem]

    # If no eggs are left or no space is left on ship, there's nothing left to do
    if egg_weights == () or target_weight == 0:
        return 0
    # If the next heaviest egg is too heavy to fit, consider the subset of lighter eggs
    elif egg_weights[-1] > target_weight:
        result = dp_make_weight(egg_weights[:-1], target_weight, memo)
    else:
        # Find the minimum number of eggs by testing both taking the heaviest egg
        # and not taking the heaviest egg.
        this_egg = egg_weights[-1]
        num_eggs_with_this_egg = 1 + dp_make_weight(
            egg_weights,
            target_weight - this_egg,
            memo)
        num_eggs_without_this_egg = dp_make_weight(egg_weights[:-1], target_weight, memo)

        if num_eggs_without_this_egg != 0:
            result = min(num_eggs_with_this_egg, num_eggs_without_this_egg)
        else:
            result = num_eggs_with_this_egg

    # Store this answer in the memo for future use.
    memo[subproblem] = result
    return result

# EXAMPLE TESTING CODE, feel free to add more if you'd like
if __name__ == "__main__":
    egg_weights = (1, 5, 10, 25)
    n = 99
    print("Egg weights = (1, 5, 10, 25)")
    print("n = 99")
    print("Expected output: 9 (3 * 25 + 2 * 10 + 4 * 1 = 99)")
    print("Actual output:", dp_make_weight(egg_weights, n))
    print()

    egg_weights = (1, 5, 10, 20)
    n = 99
    print("Egg weights = (1, 5, 10, 20)")
    print("n = 99")
    print("Expected output: 10 (4 * 20 + 1 * 10 + 1 * 5 + 4 * 1 = 99)")
    print("Actual output:", dp_make_weight(egg_weights, n))
    print()

    egg_weights = (1, 5, 10, 20, 25, 30)
    n = 99
    print("Egg weights = (1, 5, 10, 20, 25, 30)")
    print("n = 99")
    print("Expected output: 8 (3 * 30 + 1 * 5 + 4 * 1 = 99)")
    print("Actual output:", dp_make_weight(egg_weights, n))
    print()

    egg_weights = (1, 2, 6, 12, 20)
    n = 37
    print("Egg weights = (1, 2, 6, 12, 20)")
    print("n = 37")
    print("Expected output: 4 (0 * 20 + 3 * 12 + 0 * 6 + 0 * 2 + 1 * 1 = 37)")
    print("Actual output:", dp_make_weight(egg_weights, n))
    print()

    egg_weights = (1, 5)
    n = 6
    print("Egg weights = (1, 5)")
    print("n = 6")
    print("Expected output: 2 (1 * 5 + 1 * 1 = 6)")
    print("Actual output:", dp_make_weight(egg_weights, n))
    print()
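
# Cross-check (an addition, not part of the original assignment): the same
# minimum-egg counts can be computed bottom-up, filling table[w] with the
# fewest eggs that weigh exactly w. Handy for sanity-checking the memoized
# recursion above.

```python
def min_eggs_bottom_up(egg_weights, target_weight):
    """Iterative version of the recurrence used by dp_make_weight."""
    INF = float("inf")
    table = [0] + [INF] * target_weight   # table[w] = min eggs summing to w
    for w in range(1, target_weight + 1):
        for egg in egg_weights:
            if egg <= w and table[w - egg] + 1 < table[w]:
                table[w] = table[w - egg] + 1
    return table[target_weight]
```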
# File: algolib/graph/undirected.py
# Repo: niemmi/algolib (license: BSD-3-Clause)

"""Undirected graph that doesn't have multi-edges but may contain loops.

Both vertices and edges may have associated properties. Vertices are stored
as an adjacency structure built from dicts and as a separate dict that may be
iterated over.

Time complexity of the operations:
- check if edge (x, y) exists: O(1)
- check degree of vertex: O(1)
- insert/delete edge: O(1)
- insert vertex: O(1)
- delete vertex: O(number of connected edges)
- iterate vertices/edges: O(n)

Interface is loosely based on NetworkX (http://networkx.github.io/).
"""
from collections import defaultdict


class Undirected(object):
    """Undirected graph which may contain loops but not multiple edges.

    Attributes:
        vertices: Dictionary of vertices where keys are vertex names and
            values are dictionary of vertex properties.
        edges: Dictionary of edges where keys are tuples of vertex pairs in
            sorted order and values are dictionary of edge properties.
        _neighbors: Three level dictionary where first level keys are vertices,
            second level keys are neighboring vertices and third level is
            edge properties. Use index operator to access edges.
    """

    def __init__(self):
        """Initializer, initializes empty graph."""
        self.vertices = {}
        self.edges = {}
        self._neighbors = defaultdict(dict)

    @property
    def directed(self):
        """Returns boolean value telling if graph is directed or not.

        Returns:
            Always False.
        """
        return False

    @staticmethod
    def __key(x, y):
        # Note that on Python 3 frozenset would be a better option
        return tuple(sorted([x, y]))

    def insert_vertex(self, name, **kwargs):
        """Inserts vertex to graph.

        Args:
            name: Vertex name, any hashable object
            **kwargs: Optional properties, if vertex already exists then given
                properties will be used to update existing ones.
        """
        kwargs.update(self.vertices.get(name, {}))
        self.vertices[name] = kwargs
        self._neighbors.setdefault(name, {})

    def remove_vertex(self, name):
        """Removes vertex from graph. Removes also all the edges the vertex
        is part of.

        Args:
            name: Name of the vertex.
        """
        del self.vertices[name]
        # Iterate over neighbors without copying
        while self._neighbors[name]:
            self.remove_edge(name, next(iter(self._neighbors[name])))
        del self._neighbors[name]
    def insert_edge(self, x, y, **kwargs):
        """Inserts edge to graph. If vertices don't exist they are created.

        Args:
            x: First vertex.
            y: Second vertex.
            **kwargs: Optional properties for the edge
        """
        self.vertices.setdefault(x, {})
        self.vertices.setdefault(y, {})

        edge_key = self.__key(x, y)
        kwargs.update(self.edges.get(edge_key, {}))
        self._neighbors[x][y] = kwargs
        self._neighbors[y][x] = kwargs
        self.edges[edge_key] = kwargs

    def remove_edge(self, x, y):
        """Removes edge from vertex.

        Args:
            x: First vertex.
            y: Second vertex.
        """
        del self.edges[self.__key(x, y)]
        del self._neighbors[x][y]
        if x != y:
            del self._neighbors[y][x]

    def connected(self, x, y):
        """Returns boolean value telling if given vertices are connected by
        an edge.

        Args:
            x: First vertex.
            y: Second vertex.

        Returns:
            True if vertices are connected by edge, False if not
        """
        return self.__key(x, y) in self.edges

    def edges_between(self, x, y):
        """Returns iterator iterating over edges between given nodes. Note that
        with graph like this which doesn't allow multiple edges between the same
        nodes this doesn't make much sense, but if multi-edge graphs are
        supported it is easier to expose a similar interface.

        Args:
            x: First vertex.
            y: Second vertex.

        Returns:
            Iterator iterating over all the edges between given vertices.
        """
        if y in self._neighbors[x]:
            yield self.__key(x, y)
    def edges_from(self, vertex):
        """Returns iterator iterating over all the edges connected to given
        vertex.

        Args:
            vertex: Edge endpoint.

        Returns:
            Iterator iterating over all the edges connecting given vertex.
            Iterator returns (edge key, connected vertex) tuples where edge key
            can be used to index Undirected.edges.
        """
        for neighbor in self._neighbors[vertex]:
            yield self.__key(vertex, neighbor), neighbor

    def __getitem__(self, item):
        return self._neighbors[item]

    def degree(self, vertex):
        """Returns degree of given vertex.

        Args:
            vertex: Vertex whose degree is queried.

        Returns:
            Vertex degree, note that if vertex has a loop it is considered
            as degree of 2.
        """
        loop = vertex in self._neighbors[vertex]
        return len(self._neighbors[vertex]) + loop

    def __eq__(self, other):
        return isinstance(other, Undirected) and \
            self.edges == other.edges and \
            self.vertices == other.vertices

    def __ne__(self, other):
        return not self == other

    def __copy__(self):
        copy = Undirected()
        for vertex, data in self.vertices.items():
            copy.insert_vertex(vertex, **data)
        for (x, y), data in self.edges.items():
            copy.insert_edge(x, y, **data)
        return copy

    copy = __copy__
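
# Standalone illustration of the two conventions used above (plain-dict
# stand-ins, not the class itself): an undirected edge is stored once under a
# canonical sorted-tuple key, and a loop contributes 2 to a vertex's degree.

```python
def edge_key(x, y):
    # Canonical key: the same edge regardless of argument order.
    return tuple(sorted((x, y)))

def degree(neighbors, vertex):
    # 'neighbors' maps vertex -> set of adjacent vertices; a self-loop
    # appears as the vertex in its own set and counts twice.
    return len(neighbors[vertex]) + (vertex in neighbors[vertex])

neighbors = {"a": {"b", "a"}, "b": {"a"}}   # edge a-b plus a loop on a
```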
# File: pylibscrypt/test_properties.py
# Repo: jvarho/pylibscrypt (license: 0BSD)

# Copyright (c) 2017-2021, Jan Varho
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""Tests scrypt implementations using hypothesis"""
import sys
import unittest
from hypothesis import given, settings
from hypothesis.strategies import (
binary, integers, none, one_of, sampled_from, text)
from .common import (
SCRYPT_MCF_PREFIX_7, SCRYPT_MCF_PREFIX_s1,
SCRYPT_MCF_PREFIX_DEFAULT, SCRYPT_MCF_PREFIX_ANY)
# Strategies for producing parameters
def valid_pass():
return binary()
def valid_mcf_pass():
return one_of(binary().filter(lambda b: b'\0' not in b),
text().filter(lambda b: u'\0' not in b))
def valid_salt():
return binary()
def valid_mcf_salt():
return one_of(binary(min_size=1, max_size=16), none())
def valid_olen():
return integers(min_value=1, max_value=2**20)
def mcf_prefix():
return sampled_from([
SCRYPT_MCF_PREFIX_7,
SCRYPT_MCF_PREFIX_s1,
SCRYPT_MCF_PREFIX_DEFAULT,
SCRYPT_MCF_PREFIX_ANY,
])

class ScryptTests(unittest.TestCase):
    """Tests an scrypt implementation from module"""

    set_up_lambda = lambda self: None
    tear_down_lambda = lambda self: None
    module = None
    ref = None

    def setUp(self):
        if not self.module:
            self.skipTest('module not tested')
        self.set_up_lambda()

    def tearDown(self):
        self.tear_down_lambda()

    def invalidPass(self, pw):
        try:
            return pw + b'_'
        except TypeError:
            return pw + u'_'

    @given(valid_pass(), valid_salt(), valid_olen())
    @settings(deadline=500)
    def test_scrypt(self, pw, salt, olen):
        h1 = self.module.scrypt(pw, salt, 2, 2, 2, olen)
        self.assertEqual(olen, len(h1))
        if self.ref:
            h2 = self.ref.scrypt(pw, salt, 2, 2, 2, olen)
            self.assertEqual(h1, h2)
        if olen >= 16:  # short hashes can collide
            h2 = self.module.scrypt(self.invalidPass(pw), salt, 2, 2, 2, olen)
            h3 = self.module.scrypt(pw, salt + b'_', 2, 2, 2, olen)
            self.assertNotEqual(h1, h2)
            self.assertNotEqual(h1, h3)

    @given(valid_mcf_pass(), valid_mcf_salt(), mcf_prefix())
    @settings(deadline=500)
    def test_mcf_scrypt(self, pw, salt, prefix):
        m = self.module.scrypt_mcf(pw, salt, 2, 2, 2, prefix)
        self.assertTrue(self.module.scrypt_mcf_check(m, pw))
        self.assertFalse(self.module.scrypt_mcf_check(m, self.invalidPass(pw)))
        if self.ref:
            self.assertTrue(self.ref.scrypt_mcf_check(m, pw))
            self.assertFalse(self.ref.scrypt_mcf_check(m, self.invalidPass(pw)))
            if salt and prefix != SCRYPT_MCF_PREFIX_ANY:
                m2 = self.ref.scrypt_mcf(pw, salt, 2, 2, 2, prefix)
                self.assertEqual(m, m2)

def load_scrypt_suite(name, module, ref=None):
    tests = type(name, (ScryptTests,), {'module': module, 'ref': ref})
    return unittest.defaultTestLoader.loadTestsFromTestCase(tests)


if __name__ == "__main__":
    suite = unittest.TestSuite()
    ref = None

    try:
        from . import hashlibscrypt
        suite.addTest(load_scrypt_suite('hashlibscryptTests', hashlibscrypt, ref))
        ref = hashlibscrypt
    except ImportError:
        suite.addTest(load_scrypt_suite('hashlibscryptTests', None, ref))

    try:
        from . import pylibscrypt
        suite.addTest(load_scrypt_suite('pylibscryptTests', pylibscrypt, ref))
        ref = ref or pylibscrypt
    except ImportError:
        suite.addTest(load_scrypt_suite('pylibscryptTests', None, ref))

    try:
        from . import pyscrypt
        suite.addTest(load_scrypt_suite('pyscryptTests', pyscrypt, ref))
        ref = ref or pyscrypt
    except ImportError:
        suite.addTest(load_scrypt_suite('pyscryptTests', None, ref))

    try:
        from . import pylibsodium
        suite.addTest(load_scrypt_suite('pylibsodiumTests', pylibsodium, ref))

        from . import pylibscrypt
        loader = unittest.defaultTestLoader

        def set_up_ll(self):
            if not self.module._scrypt_ll:
                self.skipTest('no ll')
            self.tmp_ll = self.module._scrypt_ll
            self.tmp_scr = self.module.scr_mod
            self.module._scrypt_ll = None
            self.module.scr_mod = pylibscrypt

        def tear_down_ll(self):
            self.module._scrypt_ll = self.tmp_ll
            self.module.scr_mod = self.tmp_scr

        tmp = type(
            'pylibsodiumFallbackTests', (ScryptTests,),
            {
                'module': pylibsodium,
                'fast': False,  # supports only large parameters
                'set_up_lambda': set_up_ll,
                'tear_down_lambda': tear_down_ll,
            }
        )
        suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(tmp))
    except ImportError:
        suite.addTest(load_scrypt_suite('pylibsodiumTests', None, ref))

    try:
        from . import pypyscrypt_inline as pypyscrypt
        suite.addTest(load_scrypt_suite('pypyscryptTests', pypyscrypt, ref))
    except ImportError:
        suite.addTest(load_scrypt_suite('pypyscryptTests', None, ref))

    result = unittest.TextTestRunner().run(suite)
    sys.exit(not result.wasSuccessful())
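
# The suite above stamps out one TestCase subclass per scrypt backend with
# type(name, (ScryptTests,), {...}). A self-contained miniature of that
# pattern (the 'impl' backends here are invented for illustration):

```python
import unittest

class BaseTests(unittest.TestCase):
    impl = None

    def test_double(self):
        if self.impl is None:
            self.skipTest('no implementation')
        self.assertEqual(self.impl(2), 4)

def load_suite(name, impl):
    # staticmethod stops Python from binding the function as a method.
    attrs = {'impl': staticmethod(impl)} if impl else {'impl': None}
    tests = type(name, (BaseTests,), attrs)
    return unittest.defaultTestLoader.loadTestsFromTestCase(tests)

suite = unittest.TestSuite()
suite.addTest(load_suite('LambdaTests', lambda x: x * 2))
suite.addTest(load_suite('MissingTests', None))   # absent backend -> skipped
result = unittest.TextTestRunner(verbosity=0).run(suite)
```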
# File: finbot/apps/appwsrv/blueprints/base.py
# Repo: jean-edouard-boulanger/finbot (license: MIT)

from finbot.core.web_service import Route
from finbot.core import environment
from flask import Blueprint

API_V1 = Route("/api/v1")
base_api = Blueprint("api", __name__)


@base_api.route(API_V1.healthy(), methods=["GET"])
def healthy():
    return {"healthy": True}


@base_api.route(API_V1.system_report(), methods=["GET"])
def get_system_report():
    return {
        "system_report": {
            "finbot_version": "0.0.1",
            "runtime": environment.get_finbot_runtime(),
        }
    }
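
# The Route helper imported above is not shown in this file; one plausible
# dependency-free sketch of it (an assumption about finbot's actual API,
# included only to show how API_V1.healthy() can yield "/api/v1/healthy"):

```python
class Route:
    """Prefix holder: attribute access returns a callable producing a path."""
    def __init__(self, prefix):
        self.prefix = prefix.rstrip("/")

    def __getattr__(self, name):
        # Route("/api/v1").healthy() -> "/api/v1/healthy"
        return lambda: "%s/%s" % (self.prefix, name)

API_V1 = Route("/api/v1")
```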
# File: setup.py
# Repo: stkobsar/RandomForest (license: MIT)

import setuptools

setuptools.setup(
    name='RandomForest',
    version="0.1.0",
    url="https://github.com/stkobsar/RandomForest.git",
    description='Random Forest algorithm use case',
    author='Stephi Kobsar',
    author_email='stkobsar7@gmail.com',
    packages=setuptools.find_packages(),
    install_requires=["matplotlib", "scipy", "numpy", "seaborn", "sklearn"],
)

# -*- coding: utf-8 -*-
# File: spell_bee/migrations/0005_auto_20170522_1049.py
# Repo: haideralipunjabi/django_quiz (license: Apache-2.0)
# Generated by Django 1.11 on 2017-05-22 05:19
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('spell_bee', '0004_auto_20170522_1038'),
    ]

    operations = [
        migrations.AlterField(
            model_name='spellbeequestion',
            name='meaning',
            field=models.TextField(help_text='Meaning of the word.', max_length=500),
        ),
    ]
| 23.666667 | 85 | 0.635815 | 56 | 497 | 5.428571 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093834 | 0.249497 | 497 | 20 | 86 | 24.85 | 0.72118 | 0.132797 | 0 | 0 | 1 | 0 | 0.175234 | 0.053738 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dd2c1f2a01690cad019e69d0dc03886d86e069c7 | 607 | py | Python | setup.py | erfaneshrati/meta-transfer-learning | ba15db64db7b38f196717d3de3b066178dce8696 | [
"MIT"
] | 16 | 2018-09-25T16:32:34.000Z | 2020-10-12T12:59:17.000Z | setup.py | erfaneshrati/meta-transfer-learning | ba15db64db7b38f196717d3de3b066178dce8696 | [
"MIT"
] | null | null | null | setup.py | erfaneshrati/meta-transfer-learning | ba15db64db7b38f196717d3de3b066178dce8696 | [
"MIT"
] | 1 | 2019-04-25T01:46:10.000Z | 2019-04-25T01:46:10.000Z | """
Module configuration.
"""
from setuptools import setup
setup(
    name='supervised-mtl',
    version='0.0.1',
    description='Meta-transfer learning over Reptile and MAML',
    url='https://github.com/erfaneshrati/supervised-mtl',
    author='Amir Erfan Eshratifar',
    author_email='erfaneshrati@gmail.com',
    license='MIT',
    keywords='ai machine learning',
    packages=['meta-learning'],
    install_requires=[
        'numpy>=1.0.0,<2.0.0',
        'Pillow>=4.0.0,<5.0.0'
    ],
    extras_require={
        "tf": ["tensorflow>=1.0.0"],
        "tf_gpu": ["tensorflow-gpu>=1.0.0"],
    }
)
# File: tests/factories.py
# Repo: helloyxy/shopcarts (license: Apache-2.0)

# Copyright 2016, 2019 John J. Rofrano. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Test Factory to make fake objects for testing
"""
import factory
from factory.fuzzy import FuzzyChoice, FuzzyInteger
from services.models import Shopcart

class ShopcartFactory(factory.Factory):
    """ Creates fake shopcarts that you don't have to feed """

    class Meta:
        model = Shopcart

    product_id = FuzzyChoice(choices=[1001, 2002, 3003, 4747, 9999])
    customer_id = FuzzyChoice(choices=[1000, 2000, 3000, 8000])
    product_name = FuzzyChoice(choices=["a", "b", "d", "c", "e"])
    product_price = FuzzyChoice(choices=[10.01, 200.2, 30, 4747, 999])
    quantity = FuzzyInteger(0, 10, step=1)
# File: tidecv/data.py
# Repo: wuhandashuaibi/tide (license: MIT)

import os
from collections import defaultdict

import numpy as np
import cv2

from . import functions as f

class Data():
    """
    A class to hold ground truth or predictions data in an easy to work with format.

    Note that any time they appear, bounding boxes are [x, y, width, height] and masks
    are either a list of polygons or pycocotools RLEs.

    Also, don't mix ground truth with predictions. Keep them in separate data objects.

    'max_dets' specifies the maximum number of detections the model is allowed to output
    for a given image.
    """

    def __init__(self, name:str, max_dets:int=100):
        self.name = name
        self.max_dets = max_dets

        self.classes = {}      # Maps class ID to class name
        self.annotations = []  # Maps annotation ids to the corresponding annotation / prediction

        # Maps an image id to an image name and a list of annotation ids
        self.images = defaultdict(lambda: {'name': None, 'anns': []})

    def _get_ignored_classes(self, image_id:int) -> set:
        anns = self.get(image_id)

        classes_in_image = set()
        ignored_classes = set()
        for ann in anns:
            if ann['ignore']:
                if ann['class'] is not None and ann['bbox'] is None and ann['mask'] is None:
                    ignored_classes.add(ann['class'])
            else:
                classes_in_image.add(ann['class'])

        return ignored_classes.difference(classes_in_image)

    def _make_default_class(self, id:int):
        """ (For internal use) Initializes a class id with a generated name. """
        if id not in self.classes:
            self.classes[id] = 'Class ' + str(id)

    def _make_default_image(self, id:int):
        if self.images[id]['name'] is None:
            self.images[id]['name'] = 'Image ' + str(id)

    def _prepare_box(self, box:object):
        return box

    def _prepare_mask(self, mask:object):
        return mask

    def _add(self, image_id:int, class_id:int, box:object=None, mask:object=None, score:float=1, ignore:bool=False):
        """ Add a data object to this collection. You should use one of the below functions instead. """
        self._make_default_class(class_id)
        self._make_default_image(image_id)

        new_id = len(self.annotations)
        self.annotations.append({
            '_id'     : new_id,
            'score'   : score,
            'image_id': image_id,
            'class'   : class_id,
            'bbox'    : self._prepare_box(box),
            'mask'    : self._prepare_mask(mask),
            'ignore'  : ignore,
        })

        self.images[image_id]['anns'].append(new_id)

    def add_ground_truth(self, image_id:int, class_id:int, box:object=None, mask:object=None):
        """ Add a ground truth. If box or mask is None, this GT will be ignored for that mode. """
        self._add(image_id, class_id, box, mask)

    def add_detection(self, image_id:int, class_id:int, score:int, box:object=None, mask:object=None):
        """ Add a predicted detection. If box or mask is None, this prediction will be ignored for that mode. """
        self._add(image_id, class_id, box, mask, score=score)

    def add_ignore_region(self, image_id:int, class_id:int=None, box:object=None, mask:object=None):
        """
        Add a region inside of which background detections should be ignored.

        You can use these to mark a region that has deliberately been left unannotated
        (e.g., if there is a huge crowd of people and you don't want to annotate every
        single person in the crowd).

        If class_id is -1, this region will match any class. If the box / mask is None,
        the region will be the entire image.
        """
        self._add(image_id, class_id, box, mask, ignore=True)

    def add_class(self, id:int, name:str):
        """ Register a class name to that class ID. """
        self.classes[id] = name

    def add_image(self, id:int, name:str):
        """ Register an image name/path with an image ID. """
        self.images[id]['name'] = name

    def get(self, image_id:int):
        """ Collects all the annotations / detections for that particular image. """
        return [self.annotations[x] for x in self.images[image_id]['anns']]

    def cat_name(self, class_id):
        """ Maps a COCO category id to its human-readable name. """
        cat_map = {1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle', 5: 'airplane', 6: 'bus',
                   7: 'train', 8: 'truck', 9: 'boat', 10: 'traffic light', 11: 'fire hydrant',
                   13: 'stop sign', 14: 'parking meter', 15: 'bench', 16: 'bird', 17: 'cat',
                   18: 'dog', 19: 'horse', 20: 'sheep', 21: 'cow', 22: 'elephant', 23: 'bear',
                   24: 'zebra', 25: 'giraffe', 27: 'backpack', 28: 'umbrella', 31: 'handbag', 32: 'tie',
                   33: 'suitcase', 34: 'frisbee', 35: 'skis', 36: 'snowboard', 37: 'sports ball',
                   38: 'kite', 39: 'baseball bat', 40: 'baseball glove', 41: 'skateboard', 42: 'surfboard',
                   43: 'tennis racket', 44: 'bottle', 46: 'wine glass', 47: 'cup', 48: 'fork',
                   49: 'knife', 50: 'spoon', 51: 'bowl', 52: 'banana', 53: 'apple', 54: 'sandwich',
                   55: 'orange', 56: 'broccoli', 57: 'carrot', 58: 'hot dog', 59: 'pizza', 60: 'donut',
                   61: 'cake', 62: 'chair', 63: 'couch', 64: 'potted plant', 65: 'bed', 67: 'dining table',
                   70: 'toilet', 72: 'tv', 73: 'laptop', 74: 'mouse', 75: 'remote', 76: 'keyboard',
                   77: 'cell phone', 78: 'microwave', 79: 'oven', 80: 'toaster', 81: 'sink',
                   82: 'refrigerator', 84: 'book', 85: 'clock', 86: 'vase', 87: 'scissors',
                   88: 'teddy bear', 89: 'hair drier', 90: 'toothbrush'}
        return cat_map[class_id]
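
# Self-contained sketch of Data's storage layout (illustrative names): the
# annotations live in one flat list, while a defaultdict maps each image id to
# its name and the indices of that image's annotations.

```python
from collections import defaultdict

annotations = []
images = defaultdict(lambda: {"name": None, "anns": []})

def add(image_id, class_id, score=1.0):
    new_id = len(annotations)
    annotations.append({"_id": new_id, "image_id": image_id,
                        "class": class_id, "score": score})
    images[image_id]["anns"].append(new_id)   # store the index, not a copy

def get(image_id):
    return [annotations[i] for i in images[image_id]["anns"]]

add(0, 3)
add(0, 5, score=0.9)
add(1, 3)
```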
# File: src/syncro/__main__.py
# Repo: cav71/syncro (license: MIT)

"""starts a sync remote server
"""
import os
import getpass
import pathlib
import logging

import click
import paramiko
import paramiko.sftp_client

import syncro.support as support
import syncro.cli as cli

logger = logging.getLogger(__name__)


def add_arguments(parser):
    parser.add_argument("host")
    parser.add_argument("-u", "--username", default=getpass.getuser())
    parser.add_argument("-p", "--password")


def process_options(options):
    pass


@click.command()
@click.argument("host")
@click.option('--password', hide_input=True)
@click.option('--username', default=lambda: getpass.getuser())
@cli.standard(quiet=True)
def main(host, username, password):
    "starts the sync server on a remote host"
    logger.debug("A")
    logger.info("B")
    logger.warning("C")

    port = 22
    print("one", username, password)

    client = paramiko.client.SSHClient()
    client.load_system_host_keys()
    client.load_host_keys(pathlib.Path("~/.ssh/known_hosts").expanduser())
    client.connect(host, port, username=username, password=password)

    transport = client.get_transport()
    transport.set_keepalive(2)
    print(support.remote(transport, ["ls", "-la"])[1])

    # transfer the remote server script
    sftp = paramiko.sftp_client.SFTPClient.from_transport(transport)
    sftp.put(str(pathlib.Path(__file__).parent / "remote.py"), "remote.py")

    # connect the secure end points
    support.shell(transport)
# @cli.add_logging()
# def two(*args, **kwargs):
# print("two", args, kwargs)
#
# @cli.add_logging(1, b=2)
# def three(*args, **kwargs):
# print("three", args, kwargs)


if __name__ == '__main__':
    main()
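
# cli.standard is imported from syncro.cli and not shown here; a minimal
# stand-in for that kind of decorator (configure logging, then run the wrapped
# command) might look like this. Entirely an assumption about its real API.

```python
import functools
import logging

def standard(quiet=False):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Configure logging once before the wrapped command runs.
            logging.basicConfig(
                level=logging.WARNING if quiet else logging.DEBUG)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@standard(quiet=True)
def greet(name):
    return "hello %s" % name
```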
| 22.904762 | 74 | 0.680873 | 243 | 1,924 | 5.238683 | 0.395062 | 0.037706 | 0.056559 | 0.084839 | 0.105263 | 0.105263 | 0.051846 | 0.051846 | 0 | 0 | 0 | 0.006146 | 0.154366 | 1,924 | 83 | 75 | 23.180723 | 0.776275 | 0.190748 | 0 | 0.044444 | 0 | 0 | 0.079253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0.2 | 0.222222 | 0 | 0.311111 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
dd3806973c2aebf71b51f8ec05e5137d8ac0688c | 451 | py | Python | future-release/api/migrations/0006_auto_20210606_1218.py | shauray8/we_must_know_website | 3a024cfdb6d051f85a3d86ba6b559bfaed1147ce | [
"MIT"
] | null | null | null | future-release/api/migrations/0006_auto_20210606_1218.py | shauray8/we_must_know_website | 3a024cfdb6d051f85a3d86ba6b559bfaed1147ce | [
"MIT"
] | null | null | null | future-release/api/migrations/0006_auto_20210606_1218.py | shauray8/we_must_know_website | 3a024cfdb6d051f85a3d86ba6b559bfaed1147ce | [
"MIT"
] | null | null | null |
# Generated by Django 3.1.4 on 2021-06-06 06:48
import api.models
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0005_auto_20210528_1632'),
]
operations = [
migrations.AlterField(
model_name='room',
name='code',
field=models.CharField(default=api.models.generate_unique_code, max_length=200, unique=True),
),
]
dd3df301e53b1d64e214bc0d01100dcc2165cecc | 951 | py | Python | MultipleComparision.py | simplymanas/python-learning | 75bc99c0dce211fd1bce5f6ce1155e0f4c71d7d0 | [
"Apache-2.0"
] | 4 | 2020-08-18T05:29:38.000Z | 2021-03-13T19:01:10.000Z | MultipleComparision.py | simplymanas/python-learning | 75bc99c0dce211fd1bce5f6ce1155e0f4c71d7d0 | [
"Apache-2.0"
] | null | null | null | MultipleComparision.py | simplymanas/python-learning | 75bc99c0dce211fd1bce5f6ce1155e0f4c71d7d0 | [
"Apache-2.0"
] | 1 | 2020-08-29T12:57:17.000Z | 2020-08-29T12:57:17.000Z |
# Multiple Comparisons
# the way vs. the better way
# simplify chained comparison
# Manas Dash
# Raksha Bhandhan day of 2020
time_of_the_day = 6
day_of_the_week = 'mon'
# this way
if time_of_the_day < 12 and time_of_the_day > 6:
print('Good morning')
# a better way
if 6 < time_of_the_day < 12:
print('Good morning')
# this way
if day_of_the_week == "Mon" or day_of_the_week == "Wed" or day_of_the_week == "Fri" or day_of_the_week == "Sun":
print('its just a week day')
# a better way
if day_of_the_week in "Mon Wed Fri Sun".split(): # you can also specify a tuple ("Mon", "Wed", "Fri", "Sun")
print('its just a week day')
# this way
if time_of_the_day < 17 and time_of_the_day > 10 and day_of_the_week == 'mon':
print('its a working day')
# a better way
if all([time_of_the_day < 17, time_of_the_day > 10, day_of_the_week == 'mon']):  # all() takes a single iterable
print('its a working day')
# similar way use 'any' for logical operator 'or'
# The way is on the way
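The closing note mentions `any` as the counterpart of `all` for `or` chains; a minimal standalone sketch in the same spirit (the values are hypothetical):

```python
time_of_the_day = 20
day_of_the_week = 'sat'

# this way: a verbose 'or' chain
is_off = (day_of_the_week == 'sat' or day_of_the_week == 'sun'
          or time_of_the_day > 18)

# a better way: any() with an iterable of conditions
is_off_better = any([day_of_the_week == 'sat',
                     day_of_the_week == 'sun',
                     time_of_the_day > 18])

print(is_off, is_off_better)  # True True
```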
dd423106865aca419842316c9da357289b6afcb5 | 5,056 | py | Python | ibpy_native/interfaces/delegates/order.py | Devtography/ibpy_native | e3e2a406a8db9bb338953be6dc195b8099379acb | [
"Apache-2.0"
] | 6 | 2020-07-09T20:55:41.000Z | 2022-01-22T15:43:29.000Z | ibpy_native/interfaces/delegates/order.py | Devtography/ibpy_native | e3e2a406a8db9bb338953be6dc195b8099379acb | [
"Apache-2.0"
] | 1 | 2021-02-28T13:37:43.000Z | 2021-02-28T13:37:43.000Z | ibpy_native/interfaces/delegates/order.py | Devtography/ibpy_native | e3e2a406a8db9bb338953be6dc195b8099379acb | [
"Apache-2.0"
] | 5 | 2020-05-24T19:15:06.000Z | 2022-01-22T15:43:35.000Z |
"""Internal delegate module for orders related features."""
import abc
from typing import Dict, Optional
from ibapi import contract as ib_contract
from ibapi import order as ib_order
from ibapi import order_state as ib_order_state
from ibpy_native import error
from ibpy_native import models
from ibpy_native.utils import finishable_queue as fq
class OrdersManagementDelegate(metaclass=abc.ABCMeta):
"""Internal delegate protocol for handling orders."""
@property
@abc.abstractmethod
def next_order_id(self) -> int:
"""int: Next valid order ID. If is `0`, it means the connection
with IB has not been established yet.
"""
return NotImplemented
@property
@abc.abstractmethod
def open_orders(self) -> Dict[int, models.OpenOrder]:
""":obj:`Dict[int, models.OpenOrder]`: Open orders returned from IB
during this session.
"""
return NotImplemented
@abc.abstractmethod
def is_pending_order(self, order_id: int) -> bool:
"""Check if a identifier matches with an existing order in pending.
Args:
order_id (int): The order identifier to validate.
Returns:
bool: `True` if `order_id` matches with the order identifier of a
pending order. `False` if otherwise.
"""
return NotImplemented
#region - Internal functions
@abc.abstractmethod
def update_next_order_id(self, order_id: int):
"""INTERNAL FUNCTION! Update the next order ID stored.
Args:
order_id (int): The updated order identifier.
"""
return NotImplemented
@abc.abstractmethod
def get_pending_queue(self, order_id: int) -> Optional[fq.FinishableQueue]:
"""INTERNAL FUNCTION! Retrieve the queue for order submission task
completion status.
Args:
order_id (int): The order's identifier on TWS/Gateway.
Returns:
:obj:`Optional[ibpy_native.utils.finishable_queue.FinishableQueue]`:
Queue to monitor for the completion signal of the order
submission task. `None` should be returned if the `order_id`
passed in does not match with any queue stored.
"""
return NotImplemented
#region - Order events
@abc.abstractmethod
def order_error(self, err: error.IBError):
"""INTERNAL FUNCTION! Handles the error return from IB for the order
submitted.
Args:
err (:obj:`ibpy_native.error.IBError`): Error returned from IB.
"""
return NotImplemented
@abc.abstractmethod
def on_order_submission(self, order_id: int):
"""INTERNAL FUNCTION! Triggers while invoking the internal order
submission function.
Args:
order_id (int): The order's identifier on TWS/Gateway.
"""
return NotImplemented
@abc.abstractmethod
def on_open_order_updated(
self, contract: ib_contract.Contract, order: ib_order.Order,
order_state: ib_order_state.OrderState
):
"""INTERNAL FUNCTION! Handles the open order returned from IB
after an order is submitted to TWS/Gateway.
Args:
contract (:obj:`ibapi.contract.Contract`): The order's contract.
order (:obj:`ibapi.order.Order`): The current active order returned
from IB.
order_state (:obj:`ibapi.order_state.OrderState`): Order states/
status returned from IB.
"""
return NotImplemented
@abc.abstractmethod
def on_order_status_updated(
self, order_id: int, status: str, filled: float, remaining: float,
avg_fill_price: float, last_fill_price: float, mkt_cap_price: float
):
"""INTERNAL FUNCTION! Handles the `orderStatus` callback from IB.
Args:
order_id (int): The order's identifier on TWS/Gateway.
status (str): The current status of the order.
filled (float): Number of filled positions.
remaining (float): The remnant positions.
avg_fill_price (float): Average filling price.
last_fill_price (float): Price at which the last positions were
filled.
mkt_cap_price (float): If an order has been capped, this indicates
the current capped price.
"""
return NotImplemented
@abc.abstractmethod
def on_order_rejected(self, order_id: int, reason: str):
"""INTERNAL FUNCTION! Handles the order rejection error and message
received in `error` callback from IB.
Args:
order_id (int): The order's client identifier.
reason (str): Reason of order rejection.
"""
return NotImplemented
#endregion - Order events
@abc.abstractmethod
def on_disconnected(self):
"""INTERNAL FUNCTION! Handles the event of API connection dropped.
"""
return NotImplemented
#endregion - Internal functions
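The `metaclass=abc.ABCMeta` pattern above means a concrete delegate must override every `@abc.abstractmethod` before it can be instantiated; a generic sketch independent of `ibapi` (the names here are illustrative, not part of `ibpy_native`):

```python
import abc

class Delegate(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def on_order_submission(self, order_id: int) -> None:
        return NotImplemented

class ConcreteDelegate(Delegate):
    # overriding the abstract method makes the class instantiable
    def on_order_submission(self, order_id: int) -> None:
        print(f"submitted order {order_id}")

try:
    Delegate()  # fails: abstract method not implemented
except TypeError as exc:
    print(type(exc).__name__)  # TypeError

ConcreteDelegate().on_order_submission(42)  # prints "submitted order 42"
```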
dd4874a921d57f8d652632d58c5b6cfdd4fd5128 | 205 | py | Python | devml/__init__.py | jgonzal3/devml | e49a9bcf510fb25b8d59d7d09a9078c6e68c5d44 | [
"MIT"
] | 22 | 2017-10-15T15:17:53.000Z | 2022-01-14T22:06:08.000Z | devml/__init__.py | Jkoenes211/devml | 77902de0af041e1e272ed1356068fc101498b144 | [
"MIT"
] | 27 | 2017-10-15T04:55:35.000Z | 2021-04-08T02:08:17.000Z | devml/__init__.py | Jkoenes211/devml | 77902de0af041e1e272ed1356068fc101498b144 | [
"MIT"
] | 19 | 2017-10-21T20:19:00.000Z | 2021-01-24T22:09:23.000Z | """
API Example:
from devml import stats, mkdata
path = "/Users/noah/src/wulio/checkout/"
org_df = mkdata.create_org_df(path)
author_counts = stats.author_commit_count(org_df)
"""
__version__ = "0.5.1"
dd48bae45325888246c3b35520c43a81d9e8ad63 | 4,615 | py | Python | fwd_alg_both_graph.py | collaborative-robotics/ABT | 44649bfc4e7c44ecde03ff72e4a569ca2a35903a | [
"MIT"
] | 5 | 2020-12-02T20:55:21.000Z | 2022-01-25T14:58:16.000Z | fwd_alg_both_graph.py | collaborative-robotics/ABT | 44649bfc4e7c44ecde03ff72e4a569ca2a35903a | [
"MIT"
] | null | null | null | fwd_alg_both_graph.py | collaborative-robotics/ABT | 44649bfc4e7c44ecde03ff72e4a569ca2a35903a | [
"MIT"
] | 3 | 2020-12-02T22:56:34.000Z | 2020-12-02T23:30:45.000Z |
#!/usr/bin/env python
import numpy as np # operations on numerical arrays
import csv # file I/O
import math as m
import sys # for command line args
import operator # for sorting list of class instances
from scipy import stats
import datetime as dt
from dateutil import parser
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from abt_constants import *
def approx(a,b):
if abs(a-b) < abs(0.00001*a):
return True
return False
def figure_output(plt, task, modelstring, ratiostring='all'):
print 'Enter a filename for this plot: (.png will be added)'
rs = ratiostring.replace(' ','')
rs = rs.replace('=','-')
rs = rs.replace('.','p')
ms = modelstring.replace(' ','')
ms = ms.replace('Ratio','R_')
ms = ms.replace('-stateModel','')
fname = 'res_'+task+'_'+ ms +'_'+rs+'.png'
#fname.replace(' ','')
print 'proposed file name: (CR to accept)', fname
pfname = raw_input('new name:')
if(pfname == ''):
pfname = fname
plt.savefig(pfname)
return
names = ['fwd_res2_6state.csv', 'fwd_res2_16state.csv']
#################################################
#
# Basic graph params
plotH = 800
plotV = 900
Xticklabs = []
RatioL = []
nrow = 0
allrows = []
perts = [0, 0.1, 0.25, 0.50]
loop = 0
headrow = False
for ifn in names:
pert0 = []
pert1 = []
pert25 = []
pert50 = []
with open(ifn,'r') as f:
d1 = csv.reader(f,delimiter=',',quotechar='"')
for row in d1:
print '---------------------------------'
print row
if not headrow:
allrows.append(row)
#print row
nrow += 1
Xticklabs.append(row[0])
if loop ==0:
RatioL.append(float(row[0]))
pert0.append(float(row[1]))
pert1.append(float(row[2]))
pert25.append(float(row[3]))
pert50.append(float(row[4]))
headrow = False
N = len(pert0)
if(loop == 0):
p0 = np.array(pert0)
p1 = np.array(pert1)
p25 = np.array(pert25)
p50 = np.array(pert50)
if(loop == 1):
p01 = np.array(pert0)
p11 = np.array(pert1)
p251 = np.array(pert25)
p501 = np.array(pert50)
loop += 1
print pert0
print p0
#########################################################
#
# Basic lineplot
#
figno = 1
modelstring = 'ABT-like HMM'
ymax = 0.3
stXlabel = 'Output Ratio'
stYlabel = 'Log Probability per sequence'
stTitle = 'Forward LogP vs. Output Ratio, 6 & 16-state models'
listXticks = Xticklabs
ymax = 0
ymin = -40
#########################################################
#
# LogP vs perturbation
#
#
# Plot 1
fig1 = plt.figure(figno)
#figno += 1
#bp = plt.boxplot(data, notch=True,vert=True ,patch_artist=True)
#bp = plt.boxplot(box_data, notch=True,vert=True ,patch_artist=True)
bp = plt.plot(RatioL, p0, RatioL, p1, RatioL, p25, RatioL, p50, marker='s')
bp = plt.plot(RatioL, p01, RatioL, p11, RatioL, p251, RatioL, p501, marker='s')
#standardize graph size
#figptr = plt.gcf()
figptr = fig1
DPI = figptr.get_dpi()
figptr.set_size_inches(plotH/float(DPI),plotV/float(DPI))
#for b in bp['boxes']:
#b.set_facecolor('lightblue')
#plt.xlabel('Initial and Final RMS A-matrix Error')
#plt.ylabel('RMS Error')
#plt.ylim(0.0, ymax)
#plt.title('BW Parameter Estimation: A-matrix Improvement, '+modelstring)
#plt.xlabel('Perturbation in RMS A-matrix')
#plt.ylabel('Delta RMS Error')
#plt.ylim(-ymax, ymax)
#plt.title('BW Parameter Estimation: A-matrix Improvement, '+modelstring)
#locs, labels = plt.xticks()
#plt.xticks(locs, ['0.1','0.3','0.5'])
plt.xlabel(stXlabel)
plt.ylabel(stYlabel)
plt.ylim(ymin, ymax)
plt.title(stTitle)
#locs, labels = plt.xticks()
#plt.xticks(locs, listXticks)
plt.annotate('pert = 0.0, pert=0.1', (3.2, -6.6))
#plt.annotate('pert = 0.1', (3.2, -7.1))
plt.annotate('pert = 0.25', (3.2, -8))
plt.annotate('pert = 0.50', (3.2, -9))
plt.annotate('pert = 0.0', (3.2, -28))
plt.annotate('pert = 0.1', (3.2, -29.4))
plt.annotate('pert = 0.25', (3.2, -31.5))
plt.annotate('pert = 0.50', (3.2, -37))
plt.grid(color='lightgray', which='both')
plt.show(block=False)
figure_output(plt, 'Forward_Alg_LogP_vs_output_ratio_BOTHMODELS', '', '')
dd48c68da6d2f759e2ca19538a30a95ef949fe24 | 527 | py | Python | pdfmerge/migrations/0003_auto_20190616_1823.py | rupin/pdfmerger | fee19523e88362d215f1a29cdab0d140f4c9385c | [
"MIT"
] | null | null | null | pdfmerge/migrations/0003_auto_20190616_1823.py | rupin/pdfmerger | fee19523e88362d215f1a29cdab0d140f4c9385c | [
"MIT"
] | null | null | null | pdfmerge/migrations/0003_auto_20190616_1823.py | rupin/pdfmerger | fee19523e88362d215f1a29cdab0d140f4c9385c | [
"MIT"
] | null | null | null |
# Generated by Django 2.1.3 on 2019-06-16 12:53
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('pdfmerge', '0002_auto_20190616_1800'),
]
operations = [
migrations.RemoveField(
model_name='userdata',
name='field_type',
),
migrations.AddField(
model_name='userdata',
name='field_type',
field=models.ManyToManyField(default=0, to='pdfmerge.FormField'),
),
]
dd4ef7eb5e116fec9a494caf739d4fc17f3acea8 | 1,509 | py | Python | ImageConversion.py | nisargmshah/classification-project | b1725c4e072f11ca78e36dfa343e24fd70fe1991 | [
"Apache-2.0"
] | 1 | 2017-11-19T20:04:33.000Z | 2017-11-19T20:04:33.000Z | ImageConversion.py | nisargmshah/bme590classification | b1725c4e072f11ca78e36dfa343e24fd70fe1991 | [
"Apache-2.0"
] | null | null | null | ImageConversion.py | nisargmshah/bme590classification | b1725c4e072f11ca78e36dfa343e24fd70fe1991 | [
"Apache-2.0"
] | null | null | null |
import base64
class Image:
"""
This class takes in a base64 string representation of an image and
gives the user the ability to return it in base64 and binary form.
Note: Input to this class must be string; otherwise, will raise a TypeError
"""
# make default image a generic image to know bad?
def __init__(self, input_image, thefilename):
"""
:param input_image: base64 string representation of an image
"""
if isinstance(input_image, str) is False:
raise TypeError('input must be a string')
# ideally would better test for base64 (do some later in this init)
# could decode and re-encode, but that is working for all strings
self.__image = input_image
self.__filename = thefilename
try:
self.print2()
except ValueError:
raise ValueError("Input not in base64, or incorrectly padded")
# self.save_image_string(file=self.__filename)
# self.__image = self.encode_image_string(file=self.__filename)
def encode_image_string(self, file="example.jpg"):
with open(file, "rb") as image_file:
return base64.b64encode(image_file.read())
def save_image_string(self, file="example.jpg"):
with open(self.__filename, "wb") as image_out:
image_out.write(base64.b64decode(self.__image))
def print64(self):
return self.__image
def print2(self):
return base64.b64decode(self.__image)
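The validation in `__init__` relies on `base64.b64decode` raising on malformed input; a standalone round-trip sketch of what `encode_image_string`/`print2` do internally (the byte string is a made-up stand-in for real file contents):

```python
import base64

original = b"\x89PNG fake image bytes"         # stand-in for real image data
encoded = base64.b64encode(original).decode()  # the str form Image expects

# what print2() does internally: decode back to the raw bytes
assert base64.b64decode(encoded) == original

# malformed input raises binascii.Error (a ValueError subclass),
# which the class surfaces as "Input not in base64, or incorrectly padded"
try:
    base64.b64decode("not base64!!", validate=True)
except ValueError as exc:
    print(type(exc).__name__)  # Error
```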
dd4efce91994fd23292587adc979ea243bd5e030 | 672 | py | Python | setup.py | aws-samples/ml-lineage-helper | 3562fa35a5480e7f0a06c6de55a26407774a9edb | [
"Apache-2.0"
] | 7 | 2021-09-28T13:31:31.000Z | 2022-03-26T17:17:07.000Z | setup.py | aws-samples/ml-lineage-helper | 3562fa35a5480e7f0a06c6de55a26407774a9edb | [
"Apache-2.0"
] | null | null | null | setup.py | aws-samples/ml-lineage-helper | 3562fa35a5480e7f0a06c6de55a26407774a9edb | [
"Apache-2.0"
] | null | null | null |
from setuptools import setup
setup(
name="ml-lineage-helper",
version="0.1",
description="A wrapper around SageMaker ML Lineage Tracking extending ML Lineage to end-to-end ML lifecycles, including additional capabilities around Feature Store groups, queries, and other relevant artifacts.",
url="https://github.com/aws-samples/ml-lineage-helper",
author="Bobby Lindsey",
author_email="bwlind@amazon.com",
license="Apache-2.0",
packages=["ml_lineage_helper"],
install_requires=[
"numpy",
"boto3>=1.17.74",
"sagemaker>2.49.1",
"pandas",
"networkx",
"matplotlib",
],
)
dd532d5dc4cfb56a9fd1b9b0eb6cf240180eafc9 | 796 | py | Python | srdk/cy/lang_tools/get_stressed_phones_for_htk.py | techiaith/seilwaith-adnabod-lleferydd | 72e9a36eecae6e1fedb0015c2360ff3c7306a471 | [
"Apache-2.0"
] | 1 | 2018-10-18T15:53:25.000Z | 2018-10-18T15:53:25.000Z | srdk/cy/lang_tools/get_stressed_phones_for_htk.py | techiaith/seilwaith-adnabod-lleferydd | 72e9a36eecae6e1fedb0015c2360ff3c7306a471 | [
"Apache-2.0"
] | 1 | 2018-03-23T15:56:18.000Z | 2018-03-23T15:56:18.000Z | srdk/cy/lang_tools/get_stressed_phones_for_htk.py | techiaith/seilwaith-adnabod-lleferydd | 72e9a36eecae6e1fedb0015c2360ff3c7306a471 | [
"Apache-2.0"
] | 3 | 2017-08-28T05:09:30.000Z | 2018-10-04T13:55:10.000Z |
import sys, re, traceback
from llef.llef import get_stressed_phones
def get_stressed_phones_for_htk(word):
try:
stressed_phones = get_stressed_phones(word)
except (ValueError, TypeError):
return '','',''
lexiconword=word
if lexiconword.startswith("'"): lexiconword=lexiconword[1:]
if '/' in lexiconword: return '','',''
if '\\' in lexiconword: return '','',''
if 'tsh' in stressed_phones:
#print 'Ignored because of unsupported phone: %s' % lexiconword
return '','',''
phones = ' '.join(stressed_phones).encode('UTF-8')
phones = phones.replace('1','X')
phones = phones.replace('X','')
phones = phones.replace('i','I')
phones = phones.replace('o','O')
return lexiconword, word, phones
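Note the replace chain above: mapping `'1'` to `'X'` and then deleting `'X'` removes both `'1'` and any `'X'` already in the string; a standalone sketch with a made-up phone string (not real `llef` output):

```python
phones = "g w 1 X i o"  # hypothetical stressed-phone string

out = phones.replace('1', 'X')
out = out.replace('X', '')  # also removes the pre-existing 'X'
out = out.replace('i', 'I').replace('o', 'O')
print(repr(out))  # 'g w   I O' — both '1' and 'X' are gone
```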
dd54720408a48ead8269ae52bf0439a68064bf5b | 4,075 | py | Python | codes/RedSpider.py | MasterScott/hack4career | 2e1b815a083e3f50ddfc59b0e61d9dc5e7c6f856 | [
"Apache-2.0"
] | 96 | 2015-06-03T04:32:36.000Z | 2022-03-16T21:46:14.000Z | codes/RedSpider.py | MasterScott/hack4career | 2e1b815a083e3f50ddfc59b0e61d9dc5e7c6f856 | [
"Apache-2.0"
] | null | null | null | codes/RedSpider.py | MasterScott/hack4career | 2e1b815a083e3f50ddfc59b0e61d9dc5e7c6f856 | [
"Apache-2.0"
] | 30 | 2016-01-22T14:45:51.000Z | 2021-09-14T06:29:31.000Z |
# -*- coding: cp1254 -*-
# Expired Domain Check v1.0
# Author: Mert SARICA
# E-mail: mert [ . ] sarica [ @ ] gmail [ . ] com
# URL: https://www.mertsarica.com
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.http import Request
from urlparse import urlparse
import time
import os
import sys
import urllib, urllib2
import datetime
domains = []
debug = 0
logfile = "logs.txt"
proxy_info = {
'user' : '', # proxy username
'pass' : '', # proxy password
'host' : "", # proxy host (leave it empty if no proxy is in use)
'port' : 8080 # proxy port
}
# build a new opener that uses a proxy requiring authorization
proxy_support = urllib2.ProxyHandler({"http" : \
"http://%(user)s:%(pass)s@%(host)s:%(port)d" % proxy_info})
if proxy_info['host'] != "":
opener = urllib2.build_opener(proxy_support, urllib2.HTTPCookieProcessor())
else:
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor())
# install it
urllib2.install_opener(opener)
def log(txt):
try:
now = datetime.datetime.now()
time = now.strftime("%d-%m-%Y %H:%M:%S")
file = open(logfile, "a")
txt = str(time + " " + str(txt).encode("cp1254") + "\n")
file.write(txt)
file.close()
except Exception as e:
print str(e)
if debug:
log("|log() error: " + str(e))
pass
def cls():
if sys.platform == 'linux-i386' or sys.platform == 'linux2':
os.system("clear")
elif sys.platform == 'win32':
os.system("cls")
else:
os.system("cls")
def banner():
cls()
print "======================================================"
print u"Expired Domain Check v1.0 [https://www.mertsarica.com]"
print "======================================================"
def is_registered(domain):
url = "https://www.whois.com.tr/process.php"
post_data_dictionary = {"domain" : domain,
"tld" : "" ,
"sid" : "13"}
http_headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36 RS"}
post_data_encoded = urllib.urlencode(post_data_dictionary)
request_object = urllib2.Request(url, post_data_encoded, http_headers)
f = opener.open(request_object)
response = f.read().decode("utf-8")
if debug:
print "[*] Response:", response
time.sleep(2)
if domain.find("www.") >= 0:
domain = domain.split("www.")[1]
findStr="(" + domain + ")</h1>"
if response.find("No match for") > 0 and response.find(findStr) > 0:
return 0
return 1
class RedSpider(CrawlSpider):
name = 'RedSpider'
allowed_domains = ['mertsarica.com']
start_urls = ['https://www.mertsarica.com']
AUTOTHROTTLE_ENABLED = "True"
rules = (
Rule(LinkExtractor(unique=True), callback='parse_item', follow=True),
)
banner()
print "[*] Crawling:", "".join(start_urls)
def parse_item(self, response):
txt = ""
link = ""
# links = response.css('a[href*=http]::attr(href)').extract()
links = response.css('a::attr(href)').extract()
crawledLinks = []
for domain in links:
if debug:
print "URL: ", domain
try:
link = domain
domain = ".".join(urlparse(domain).hostname.split(".")[-2:]) #urlparse(domain).hostname
if domain.replace(".","").isdigit():
continue
if domain.find(".") < 0:
continue
except Exception as e:
if debug:
print str(e)
continue
if len(domain) > 0 and domain.find(".tr") < 0 and domain not in domains and domain.find("/") < 0:
domains.append(domain)
if debug:
print "Domain:", domain, "Page:", response.request.url
try:
if is_registered(domain):
print "[-] Domain:", domain, "Expired: NO"
txt = "Domain: " + domain + " Expired: NO"
log(txt)
else:
print "[+] Domain:", domain, "Expired: YES", "Page:", response.request.url
txt = "Domain: " + domain + " Expired: YES " + " Page: " + response.request.url
log(txt)
except Exception as e:
if debug:
print str(e)
continue
dd548a87e76f43c9d2964c7f9bace9b5688513e2 | 656 | py | Python | root/scripts/setup/01_0_backup.py | DragonCrafted87/docker-alpine-spigot | 7806d88dd24e0da7bf979249224305234df238ea | [
"MIT"
] | null | null | null | root/scripts/setup/01_0_backup.py | DragonCrafted87/docker-alpine-spigot | 7806d88dd24e0da7bf979249224305234df238ea | [
"MIT"
] | null | null | null | root/scripts/setup/01_0_backup.py | DragonCrafted87/docker-alpine-spigot | 7806d88dd24e0da7bf979249224305234df238ea | [
"MIT"
] | null | null | null |
#!/usr/bin/python3
# System Imports
from datetime import datetime
from os import getenv
from pathlib import PurePath
from tarfile import open as tar_open
# Local Imports
from python_logger import create_logger #pylint: disable=import-error
def main():
logger = create_logger(PurePath(__file__).stem)
if not getenv('SPIGOT_SKIP_BACKUP', 'False').lower() in ['true', 't', 'y', 'yes', '1']:
logger.info('Creating Backup')
date_stamp = datetime.now().strftime("%G-W%V-%u-%H-%M-%S")
with tar_open(f'/mnt/minecraft/spigot-backup-{date_stamp}.tar.lzma', 'w:xz') as tar:
tar.add('/mnt/minecraft/.')
if __name__ == "__main__":
main()
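The backup name above uses an ISO week-based timestamp (`%G` ISO year, `%V` ISO week, `%u` ISO weekday); a quick sketch with a fixed date to show how it can differ from the calendar year:

```python
from datetime import datetime

# 2021-01-01 is a Friday in ISO week 53 of ISO year 2020
stamp = datetime(2021, 1, 1, 13, 5, 9).strftime("%G-W%V-%u-%H-%M-%S")
print(stamp)  # 2020-W53-5-13-05-09
```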
dd5cf81323ccfb6fae834ddd8e196cefbedd628d | 1,024 | py | Python | Easy/MinStack.py | a-shah8/LeetCode | a654e478f51b2254f7b49055beba6b5675bc5223 | [
"MIT"
] | 1 | 2021-06-02T15:03:41.000Z | 2021-06-02T15:03:41.000Z | Easy/MinStack.py | a-shah8/LeetCode | a654e478f51b2254f7b49055beba6b5675bc5223 | [
"MIT"
] | null | null | null | Easy/MinStack.py | a-shah8/LeetCode | a654e478f51b2254f7b49055beba6b5675bc5223 | [
"MIT"
] | null | null | null | ## Designing MinStack
## 1. Using Linked List
## 2. Using Arrays/Lists
class MinStack:
def __init__(self):
"""
initialize your data structure here.
"""
self.head = None  # per-instance head; a class-level head would be shared across instances
def push(self, x: int) -> None:
if self.head==None:
self.head = self.Node(x, x, None)
else:
self.head = self.Node(x, min(self.head.min_val, x), self.head)
def pop(self) -> None:
self.head = self.head.next_node
def top(self) -> int:
return self.head.value
def getMin(self) -> int:
return self.head.min_val
class Node:
value = None
min_val = None
next_node = None
def __init__(self, value, min_val, next_node):
self.value = value
self.min_val = min_val
self.next_node = next_node
# Your MinStack object will be instantiated and called as such:
# obj = MinStack()
# obj.push(x)
# obj.pop()
# param_3 = obj.top()
# param_4 = obj.getMin()
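The commented usage above can be exercised for real; a self-contained sketch of the same linked-list `MinStack`, where each node caches the minimum of everything below it:

```python
class MinStack:
    def __init__(self):
        self.head = None  # top of the singly linked stack

    def push(self, x: int) -> None:
        if self.head is None:
            self.head = self.Node(x, x, None)
        else:
            # cache min of the new value and everything below it
            self.head = self.Node(x, min(self.head.min_val, x), self.head)

    def pop(self) -> None:
        self.head = self.head.next_node

    def top(self) -> int:
        return self.head.value

    def getMin(self) -> int:
        return self.head.min_val

    class Node:
        def __init__(self, value, min_val, next_node):
            self.value = value
            self.min_val = min_val
            self.next_node = next_node

obj = MinStack()
obj.push(3)
obj.push(1)
obj.push(2)
print(obj.top())     # 2
print(obj.getMin())  # 1
obj.pop()
print(obj.getMin())  # 1
```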
dd62876b9e7f767cfac4660c0ded8eb96fd46495 | 2,122 | py | Python | server/tests/test_models.py | NBanski/XSS-Catcher | c986a941dd3dec5d2617b46106d3e5dd665bffd2 | [
"MIT"
] | 98 | 2019-05-28T12:17:55.000Z | 2022-02-15T07:06:41.000Z | server/tests/test_models.py | NBanski/XSS-Catcher | c986a941dd3dec5d2617b46106d3e5dd665bffd2 | [
"MIT"
] | 18 | 2019-11-08T20:14:47.000Z | 2022-02-27T15:04:32.000Z | server/tests/test_models.py | NBanski/XSS-Catcher | c986a941dd3dec5d2617b46106d3e5dd665bffd2 | [
"MIT"
] | 13 | 2020-08-27T21:40:57.000Z | 2022-02-02T16:35:48.000Z |
import json
from app import db
from app.models import Blocklist, Client, Settings, User, init_app
from xss import app
from .fixtures import client, client_empty
from .functions import *
def test_client_to_dict_clients(client):
access_header, _ = login_get_headers(client, "admin", "xss")
create_client(client, access_header, name="name1", description="desc1")
client_name1 = Client.query.first()
get_x(client, access_header, "r", client_name1.uid, test_data="test")
rv = get_clients(client, access_header)
assert json.loads(rv.data)[0]["data"] == 1
def test_client_to_dict_client(client):
access_header, _ = login_get_headers(client, "admin", "xss")
new_user(client, access_header, username="test")
create_client(client, access_header, name="name1", description="desc1")
edit_client(client, access_header, 1, owner=2)
delete_user(client, access_header, id=2)
rv = get_client(client, access_header, id=1)
assert json.loads(rv.data)["owner"] == "Nobody"
def test_xss_to_dict(client):
access_header, _ = login_get_headers(client, "admin", "xss")
create_client(client, access_header, name="name1", description="desc1")
client_name1 = Client.query.first()
post_x(
client,
access_header,
"r",
client_name1.uid,
cookies="cookie=good",
local_storage='{"local":"good"}',
session_storage='{"session":"good"}',
param="good",
fingerprint='["good"]',
dom="<br />",
screenshot="O==",
)
rv = get_xss(client, access_header, 1)
json_data = json.loads(rv.data)
assert json_data["data"]["fingerprint"] == ""
assert json_data["data"]["dom"] == ""
assert json_data["data"]["screenshot"] == ""
def test_init_app_not_needed(client):
get_user(client, {})
init_app(app)
assert Settings.query.count() == 1
assert User.query.count() == 1
assert Blocklist.query.count() == 0
def test_init_app_needed(client_empty):
get_user(client_empty, {})
init_app(app)
assert Settings.query.count() == 1
assert User.query.count() == 1
| 31.671642 | 75 | 0.667766 | 281 | 2,122 | 4.790036 | 0.234875 | 0.124814 | 0.187221 | 0.106984 | 0.460624 | 0.401189 | 0.401189 | 0.401189 | 0.350669 | 0.274889 | 0 | 0.012724 | 0.185203 | 2,122 | 66 | 76 | 32.151515 | 0.765761 | 0 | 0 | 0.259259 | 0 | 0 | 0.085297 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 1 | 0.092593 | false | 0 | 0.111111 | 0 | 0.203704 | 0.037037 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dd6b3ba77e2a6e6d534f78002bc3b166eb910c67 | 829 | py | Python | driver/directory.py | koltenfluckiger/pyseleniummanagement | 46403adb98d0495b61f8273da326ba117178043f | [
"MIT",
"Unlicense"
] | null | null | null | driver/directory.py | koltenfluckiger/pyseleniummanagement | 46403adb98d0495b61f8273da326ba117178043f | [
"MIT",
"Unlicense"
] | null | null | null | driver/directory.py | koltenfluckiger/pyseleniummanagement | 46403adb98d0495b61f8273da326ba117178043f | [
"MIT",
"Unlicense"
] | null | null | null | try:
from enum import Enum
from pathlib import Path as path
import os
except ImportError as err:
print("Unable to import: {}".format(err))
exit()
class Directory(Enum):
    # %APPDATA% already resolves to ...\AppData\Roaming, and the Local
    # profile dirs live under %LOCALAPPDATA%, so no extra Roaming/Local
    # segment is appended here.
    DEFAULT_WINDOWS_FIREFOX = "{}\\Mozilla\\Firefox\\Profiles".format(
        os.getenv('APPDATA'))
    DEFAULT_WINDOWS_CHROME = "{}\\Google\\Chrome\\User Data".format(
        os.getenv('LOCALAPPDATA'))
    DEFAULT_WINDOWS_EDGE = "{}\\Microsoft\\Edge\\User Data\\Default".format(
        os.getenv('LOCALAPPDATA'))
    DEFAULT_LINUX_FIREFOX = "{}/.mozilla/firefox/".format(path.home())
    DEFAULT_LINUX_CHROME = "{}/.config/google-chrome/Default".format(path.home())
    DEFAULT_LINUX_EDGE = "{}/.config/microsoft-edge/Default".format(path.home())
def __str__(self):
return self.value
| 31.884615 | 83 | 0.657419 | 99 | 829 | 5.343434 | 0.414141 | 0.079395 | 0.079395 | 0.119093 | 0.434783 | 0.294896 | 0.162571 | 0.162571 | 0 | 0 | 0 | 0 | 0.176116 | 829 | 25 | 84 | 33.16 | 0.774524 | 0 | 0 | 0.15 | 0 | 0 | 0.313631 | 0.200241 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.25 | 0.05 | 0.7 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
dd6f8e0c6b9c17782e168b597bcd72824497e4f7 | 3,104 | py | Python | core/utils/serializer.py | chiaki64/Windless | 12eef67e7c49bd131104c223539445ccd841edc1 | [
"MIT"
] | 10 | 2016-11-30T12:15:00.000Z | 2018-10-04T01:13:45.000Z | core/utils/serializer.py | chiaki64/Windless | 12eef67e7c49bd131104c223539445ccd841edc1 | [
"MIT"
] | null | null | null | core/utils/serializer.py | chiaki64/Windless | 12eef67e7c49bd131104c223539445ccd841edc1 | [
"MIT"
] | 3 | 2017-11-01T09:17:18.000Z | 2018-09-25T02:07:40.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import time
from utils.period import todate
from utils.shortcut import (rebuild_html,
render)
class Serializer:
    # A form={} kwarg must always be passed in
def __init__(self, **kwargs):
self.data = self.serialize(kwargs.get('form'))
self.exclude = ()
self.is_valid()
def serialize(self, dit):
return dit
def is_valid(self):
for key in self.data:
if self.data[key] is None and key not in self.exclude:
# print('invalid')
return False
# print('valid')
return True
class ArticleSer(Serializer):
def __init__(self, **kwargs):
super(ArticleSer, self).__init__(**kwargs)
self.exclude = ('id', 'updated_date', 'pic_address', 'axis_y', 'desc', 'citation')
def serialize(self, form):
        # TODO: handle both update and creation
form['created_time'] = (
str(int(time.time())) if form.get('time') is None or form.get('time') == '' else form['time'])
if form.get('edit'):
form['updated_time'] = form['created_time']
if form.get('update') == 'on':
form['updated_time'] = str(int(time.time()))
form['html'], form['desc'] = rebuild_html(render(form['text']))
return dict(
id=None if form.get('id') == '' else form.get('id'),
created_time=form.get('created_time'),
updated_time=form.get('updated_time'),
date=todate(form['created_time'], '%b.%d %Y'), # form.get('date') or
# updated_date=form.get('updated_date') or todate(form['updated_time'], '%b.%d %Y %H:%M:%S'),
title=form.get('title'),
tag=form.get('tag'),
author=form.get('author'),
category=form.get('category'),
text=form.get('text'),
html=form['html'],
desc=form['desc'],
desc_text=((form.get('text'))[:(form.get('text')).find('-----', 1)]).replace('\n', ' ').replace('\"', '\''),
citation=form['citation'] if form.get('citation') else None,
top=form.get('top'),
open=form.get('open'),
pic=form.get('pic'),
pic_address=form.get('pic_address'),
axis_y=form.get('axis_y'),
comments=form.get('comments') or []
)
class ArchiveSer(Serializer):
def __init__(self, **kwargs):
super(ArchiveSer, self).__init__(**kwargs)
self.exclude = ()
def serialize(self, form):
return dict(
id=form.get('id'),
title=form.get('title'),
category=form.get('category'),
created_time=form.get('created_time'),
)
class LinkSer(Serializer):
def __init__(self, **kwargs):
super(LinkSer, self).__init__(**kwargs)
self.exclude = ()
def serialize(self, form):
return dict()
class ConfigSer(Serializer):
def __init__(self, **kwargs):
super(ConfigSer, self).__init__(**kwargs)
self.exclude = ()
def serialize(self, form):
return dict()
| 31.353535 | 120 | 0.541881 | 364 | 3,104 | 4.450549 | 0.239011 | 0.120988 | 0.033951 | 0.052469 | 0.254321 | 0.216667 | 0.101852 | 0.101852 | 0.101852 | 0.101852 | 0 | 0.001348 | 0.282861 | 3,104 | 98 | 121 | 31.673469 | 0.726415 | 0.068943 | 0 | 0.319444 | 0 | 0 | 0.107564 | 0 | 0 | 0 | 0 | 0.010204 | 0 | 1 | 0.152778 | false | 0 | 0.041667 | 0.055556 | 0.361111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dd702004e381b70df80e10128ffd0f112ddcd462 | 3,665 | py | Python | electripy/physics/charge_distribution.py | dylannalex/electripy | 9f16da35c71716025bedefc4b2d7d38bc77f68a0 | [
"MIT"
] | 21 | 2021-10-17T01:29:21.000Z | 2022-03-11T22:39:37.000Z | electripy/physics/charge_distribution.py | dylannalex/ElectriPy | 9f16da35c71716025bedefc4b2d7d38bc77f68a0 | [
"MIT"
] | null | null | null | electripy/physics/charge_distribution.py | dylannalex/ElectriPy | 9f16da35c71716025bedefc4b2d7d38bc77f68a0 | [
"MIT"
] | 3 | 2021-10-30T20:08:50.000Z | 2022-01-15T10:24:37.000Z | from numpy import ndarray, array
from electripy.physics.charges import PointCharge
class _ChargesSet:
"""
A _ChargesSet instance is a group of charges. The electric
field at a given point can be calculated as the sum of each
electric field at that point for every charge in the charge
set.
"""
def __init__(self, charges: list[PointCharge]) -> None:
self.charges = charges
def electric_field(self, point: ndarray) -> ndarray:
"""
Returns the electric field at the specified point.
"""
ef = array([0.0, 0.0])
for charge in self.charges:
ef += charge.electric_field(point)
return ef
def electric_force(self, charge: PointCharge) -> ndarray:
"""
Returns the force of the electric field exerted
on the charge.
"""
ef = self.electric_field(charge.position)
return ef * charge.charge
def __getitem__(self, index):
return self.charges[index]
class ChargeDistribution:
def __init__(self):
"""
There is one group for each charge in charges.
Each group is a two dimensional vector. The first element is
a charge, and the second element is the ChargeSet instance
containing all charges in charges except the charge itself.
"""
self.groups = []
self.charges_set = _ChargesSet([])
def add_charge(self, charge: PointCharge) -> None:
"""
Adds the charge to charges_set and updates the groups.
"""
self.charges_set.charges.append(charge)
self._update_groups(self.charges_set.charges)
def remove_charge(self, charge: PointCharge) -> None:
"""
Removes the charge to charges_set and updates the groups.
"""
self.charges_set.charges.remove(charge)
self._update_groups(self.charges_set.charges)
def _update_groups(self, charges: list[PointCharge]) -> None:
"""
Let X be a charge from the charge distribution. Computing X electric
force involves computing the electric force exerted on X by all
the other charges on the charge distribution.
This means that, in order to compute the electric force of X,
we need a two dimensional vector where the first component is
the charge X itself and the second component is a ChargeSet
        instance containing all charges on the charge distribution except
X. This vector is called 'group'.
"""
self.groups = []
for charge in charges:
self.groups.append(
[
charge,
_ChargesSet([c for c in charges if c is not charge]),
]
)
def get_electric_forces(self) -> list[tuple[PointCharge, ndarray]]:
"""
Returns a list of electric forces. There is one electric force for
each charge in charges. Each electric force is a two dimensional
vector. The first element is the charge and the second element is
the electric force the other charges make on it.
"""
electric_forces = []
for group in self.groups:
electric_forces.append((group[0], group[1].electric_force(group[0])))
return electric_forces
def get_electric_field(self, position: ndarray) -> ndarray:
"""
Returns the electric force array at the given point.
"""
return self.charges_set.electric_field(position)
def __len__(self):
return len(self.charges_set.charges)
def __getitem__(self, index):
return self.charges_set[index]
| 34.575472 | 81 | 0.631924 | 461 | 3,665 | 4.904555 | 0.219089 | 0.063246 | 0.049536 | 0.044228 | 0.3295 | 0.211411 | 0.188412 | 0.130031 | 0.130031 | 0.053958 | 0 | 0.002713 | 0.296044 | 3,665 | 105 | 82 | 34.904762 | 0.873643 | 0.382265 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.044444 | 0.066667 | 0.511111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dd7a40e573032b37343fff9728aef7aaa9daed44 | 327 | py | Python | wordle/config.py | marcotinacci/wordle-solver | cdcf16020ad969369ca29d9f2e2dfb749e46890a | [
"Apache-2.0"
] | 1 | 2022-01-23T14:36:26.000Z | 2022-01-23T14:36:26.000Z | wordle/config.py | marcotinacci/wordle-solver | cdcf16020ad969369ca29d9f2e2dfb749e46890a | [
"Apache-2.0"
] | null | null | null | wordle/config.py | marcotinacci/wordle-solver | cdcf16020ad969369ca29d9f2e2dfb749e46890a | [
"Apache-2.0"
] | null | null | null | import os
import logging
from typing import Final
from pathlib import Path
SYMBOL_MATCH: Final = "X"
SYMBOL_MISPLACED: Final = "."
SYMBOL_MISS: Final = "_"
MAX_ATTEMPTS: Final = 6
DATA_ROOT = Path(__file__).parent.parent / "data"
DEBUG = os.environ.get("DEBUG", False)
LOG_LEVEL = logging.DEBUG if DEBUG else logging.WARNING
| 23.357143 | 55 | 0.755352 | 48 | 327 | 4.916667 | 0.604167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003559 | 0.140673 | 327 | 13 | 56 | 25.153846 | 0.836299 | 0 | 0 | 0 | 0 | 0 | 0.036697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
dd8c083cdbbfe12ec346878733af8b9f620bff9f | 381 | py | Python | geoana/kernels/setup.py | simpeg/geoana | 417e23a0a689da19112e5fd361f823a2abd8785a | [
"MIT"
] | 11 | 2017-11-14T12:29:42.000Z | 2022-01-17T18:36:28.000Z | geoana/kernels/setup.py | simpeg/geoana | 417e23a0a689da19112e5fd361f823a2abd8785a | [
"MIT"
] | 28 | 2016-09-02T02:44:32.000Z | 2022-03-31T22:41:33.000Z | geoana/kernels/setup.py | simpeg/geoana | 417e23a0a689da19112e5fd361f823a2abd8785a | [
"MIT"
] | 4 | 2017-03-07T22:07:15.000Z | 2021-05-14T20:08:33.000Z | import os
def configuration(parent_package="", top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration("kernels", parent_package, top_path)
# Conditionally add subpackage if intending to build compiled components
if os.environ.get('BUILD_GEOANA_EXT', "0") != "0":
config.add_subpackage("_extensions")
return config
| 31.75 | 76 | 0.737533 | 47 | 381 | 5.787234 | 0.680851 | 0.095588 | 0.117647 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006289 | 0.165354 | 381 | 11 | 77 | 34.636364 | 0.849057 | 0.183727 | 0 | 0 | 0 | 0 | 0.116505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
dd8cf527dbcc63c33b3f41b5f2270a195f2e8c02 | 2,499 | py | Python | society/migrations/0008_auto_20190204_1104.py | JeekStudio/StudentPlatform | d2cccef6555a7c9d137ecab54dbbd4aa219be57b | [
"MIT"
] | 4 | 2019-02-23T13:34:48.000Z | 2019-04-09T12:44:19.000Z | society/migrations/0008_auto_20190204_1104.py | JeekStudio/StudentPlatform | d2cccef6555a7c9d137ecab54dbbd4aa219be57b | [
"MIT"
] | 134 | 2019-01-29T03:49:54.000Z | 2021-04-08T18:44:57.000Z | society/migrations/0008_auto_20190204_1104.py | JeekStudio/StudentPlatform | d2cccef6555a7c9d137ecab54dbbd4aa219be57b | [
"MIT"
] | null | null | null | # Generated by Django 2.1.4 on 2019-02-04 11:04
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('student', '0003_student_password_changed'),
('society', '0007_society_members'),
]
operations = [
migrations.CreateModel(
name='ActivityRequest',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=32)),
('content', models.TextField(blank=True, null=True)),
('place', models.CharField(max_length=32)),
('start_time', models.DateTimeField()),
('status', models.PositiveSmallIntegerField(choices=[(0, '审核中'), (1, '通过'), (2, '未通过')], default=0)),
],
),
migrations.CreateModel(
name='CreditReceivers',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('year', models.PositiveSmallIntegerField()),
('semester', models.PositiveSmallIntegerField()),
('receivers', models.ForeignKey(on_delete=django.db.models.deletion.DO_NOTHING, to='student.Student')),
],
),
migrations.CreateModel(
name='SocietyTag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('content', models.CharField(max_length=8)),
('color', models.CharField(max_length=16)),
],
),
migrations.AddField(
model_name='society',
name='credit',
field=models.PositiveSmallIntegerField(default=0),
),
migrations.AddField(
model_name='creditreceivers',
name='society',
field=models.ForeignKey(on_delete=django.db.models.deletion.DO_NOTHING, to='society.Society'),
),
migrations.AddField(
model_name='activityrequest',
name='society',
field=models.ForeignKey(on_delete=django.db.models.deletion.DO_NOTHING, to='society.Society'),
),
migrations.AddField(
model_name='society',
name='tags',
field=models.ManyToManyField(to='society.SocietyTag'),
),
]
| 39.046875 | 119 | 0.573029 | 229 | 2,499 | 6.126638 | 0.358079 | 0.02851 | 0.039914 | 0.062723 | 0.444048 | 0.406985 | 0.37206 | 0.37206 | 0.37206 | 0.37206 | 0 | 0.019652 | 0.287315 | 2,499 | 63 | 120 | 39.666667 | 0.768108 | 0.018007 | 0 | 0.508772 | 1 | 0 | 0.130506 | 0.011827 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.017544 | 0.035088 | 0 | 0.087719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06c469c3153c442ee4376207f2687e2b2ee2c599 | 1,088 | py | Python | mps_history/models/input_history.py | slaclab/mps_history | 225a00a3e079df2d288d99a1ea719703d7141bb4 | [
"BSD-3-Clause-LBNL"
] | null | null | null | mps_history/models/input_history.py | slaclab/mps_history | 225a00a3e079df2d288d99a1ea719703d7141bb4 | [
"BSD-3-Clause-LBNL"
] | null | null | null | mps_history/models/input_history.py | slaclab/mps_history | 225a00a3e079df2d288d99a1ea719703d7141bb4 | [
"BSD-3-Clause-LBNL"
] | null | null | null | from sqlalchemy import Column, Integer, String, DateTime
from mps_database.models import Base
import datetime
class InputHistory(Base):
"""
InputHistory class (input_history table)
Input data collected from the central node
All derived data is from the mps_configuration database.
Properties:
timestamp: the timestamp of the fault event. Format is as follows
in order to work with sqlite date/time functions: "YYYY-MM-DD HH:MM:SS.SSS"
new_state: the state that was transitioned to in this fault event (either a 0 or 1)
old_state: the state that was transitioned from in this fault event (either a 0 or 1)
channel:
device:
"""
__tablename__ = 'input_history'
id = Column(Integer, primary_key=True)
timestamp = Column(DateTime, default=datetime.datetime.utcnow, nullable=False)
    # Old and new states are based on named values
new_state = Column(String, nullable=False)
old_state = Column(String, nullable=False)
channel = Column(String, nullable=False) #DigitalChannel
device = Column(String, nullable=False) #DigitalDevice
| 34 | 88 | 0.750919 | 155 | 1,088 | 5.187097 | 0.516129 | 0.080846 | 0.099502 | 0.124378 | 0.221393 | 0.146766 | 0.067164 | 0.067164 | 0.067164 | 0 | 0 | 0.004479 | 0.179228 | 1,088 | 31 | 89 | 35.096774 | 0.895857 | 0.530331 | 0 | 0 | 0 | 0 | 0.027254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
06c6baaee7c7df92cdac14e6adc6f2876eb4d3cd | 515 | py | Python | intro/matplotlib/examples/plot_plot_ex.py | jorisvandenbossche/scipy-lecture-notes | 689105f90db641eb1e1f82692f4d8b8492e8245d | [
"CC-BY-3.0"
] | 3 | 2016-06-14T02:37:55.000Z | 2019-08-08T16:52:09.000Z | intro/matplotlib/examples/plot_plot_ex.py | jorisvandenbossche/scipy-lecture-notes | 689105f90db641eb1e1f82692f4d8b8492e8245d | [
"CC-BY-3.0"
] | null | null | null | intro/matplotlib/examples/plot_plot_ex.py | jorisvandenbossche/scipy-lecture-notes | 689105f90db641eb1e1f82692f4d8b8492e8245d | [
"CC-BY-3.0"
] | 2 | 2018-11-13T08:48:59.000Z | 2020-06-03T18:01:57.000Z | import pylab as pl
import numpy as np
n = 256
X = np.linspace(-np.pi, np.pi, n, endpoint=True)
Y = np.sin(2 * X)
pl.axes([0.025, 0.025, 0.95, 0.95])
pl.plot(X, Y + 1, color='blue', alpha=1.00)
pl.fill_between(X, 1, Y + 1, color='blue', alpha=.25)
pl.plot(X, Y - 1, color='blue', alpha=1.00)
pl.fill_between(X, -1, Y - 1, (Y - 1) > -1, color='blue', alpha=.25)
pl.fill_between(X, -1, Y - 1, (Y - 1) < -1, color='red', alpha=.25)
pl.xlim(-np.pi, np.pi)
pl.xticks(())
pl.ylim(-2.5, 2.5)
pl.yticks(())
pl.show()
| 22.391304 | 68 | 0.580583 | 112 | 515 | 2.642857 | 0.321429 | 0.047297 | 0.050676 | 0.202703 | 0.493243 | 0.493243 | 0.402027 | 0.402027 | 0.402027 | 0.402027 | 0 | 0.106481 | 0.161165 | 515 | 22 | 69 | 23.409091 | 0.578704 | 0 | 0 | 0 | 0 | 0 | 0.036893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06cafc99125fbe6c42a9d786cc628d54d412fd90 | 1,466 | py | Python | app/users/api/tests.py | DakobedBard/Bookings | 6738fd52d2bcd5ab16228b7bfe9c06fee3ee49aa | [
"MIT"
] | null | null | null | app/users/api/tests.py | DakobedBard/Bookings | 6738fd52d2bcd5ab16228b7bfe9c06fee3ee49aa | [
"MIT"
] | null | null | null | app/users/api/tests.py | DakobedBard/Bookings | 6738fd52d2bcd5ab16228b7bfe9c06fee3ee49aa | [
"MIT"
] | null | null | null | import json
from django.urls import reverse
from rest_framework.authtoken.models import Token
from rest_framework.test import APITestCase
from rest_framework import status
from rooms.models import Room
from utils.test_utils.date_seeder import DataSeeder
class RoomTestCase(APITestCase):
def setUp(self) -> None:
pass
def test_create_host(self):
host_create_response = self.client.post(
path="http://127.0.0.1:8000/users/create_host/",
data=json.dumps({
"username":"BennyAb",
"password":'iksarman',
'phone_number':'206-321-2211',
'state':'Michigan',
'city': 'Ann Arbor',
'address': '38 Oak street'
}),
content_type='application/json'
)
self.assertEqual(host_create_response.status_code, status.HTTP_201_CREATED)
def test_create_guest(self):
host_create_response = self.client.post(
path="http://127.0.0.1:8000/users/create_guest/",
data=json.dumps({
"username":"JmanJack",
"password":'iksarman',
'phone_number':'206-321-2211',
'state':'Michigan',
'city': 'Ann Arbor',
'address': '38 Oak street'
}),
content_type='application/json'
)
self.assertEqual(host_create_response.status_code, status.HTTP_201_CREATED) | 35.756098 | 83 | 0.587995 | 160 | 1,466 | 5.20625 | 0.425 | 0.048019 | 0.086435 | 0.052821 | 0.561825 | 0.561825 | 0.561825 | 0.561825 | 0.561825 | 0.561825 | 0 | 0.048591 | 0.29809 | 1,466 | 41 | 84 | 35.756098 | 0.760933 | 0 | 0 | 0.526316 | 0 | 0 | 0.215406 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.078947 | false | 0.078947 | 0.184211 | 0 | 0.289474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
06d26ca34dcf4ace06e0a071ab2c545d2a07f2a9 | 952 | py | Python | airdrop/alembic/versions/1468fd5ca2be_addreceiptonregistrationtable.py | anandrgitnirman/airdrop-services | 041118e986d595b2764a838af834bd08c283d374 | [
"MIT"
] | null | null | null | airdrop/alembic/versions/1468fd5ca2be_addreceiptonregistrationtable.py | anandrgitnirman/airdrop-services | 041118e986d595b2764a838af834bd08c283d374 | [
"MIT"
] | 5 | 2021-09-27T05:08:41.000Z | 2022-03-02T03:58:04.000Z | airdrop/alembic/versions/1468fd5ca2be_addreceiptonregistrationtable.py | anandrgitnirman/airdrop-services | 041118e986d595b2764a838af834bd08c283d374 | [
"MIT"
] | 8 | 2021-09-24T10:52:50.000Z | 2022-01-14T12:07:41.000Z | """AddreceiptOnRegistrationTable
Revision ID: 1468fd5ca2be
Revises: 3dd7097453f8
Create Date: 2022-02-24 22:33:14.628454
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '1468fd5ca2be'
down_revision = '3dd7097453f8'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('user_registrations', sa.Column('receipt_generated', sa.VARCHAR(length=250), nullable=True))
op.create_index(op.f('ix_user_registrations_receipt_generated'), 'user_registrations', ['receipt_generated'], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index(op.f('ix_user_registrations_receipt_generated'), table_name='user_registrations')
op.drop_column('user_registrations', 'receipt_generated')
# ### end Alembic commands ###
| 30.709677 | 127 | 0.741597 | 115 | 952 | 5.93913 | 0.504348 | 0.149341 | 0.140556 | 0.193265 | 0.254758 | 0.254758 | 0.254758 | 0.254758 | 0 | 0 | 0 | 0.064242 | 0.133403 | 952 | 30 | 128 | 31.733333 | 0.763636 | 0.326681 | 0 | 0 | 0 | 0 | 0.372517 | 0.129139 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06d2d9a68908e8cc5c73e24f9e5a1989fb784313 | 2,148 | py | Python | Hardware/ComputedPattern/computedDiffractionPattern.py | MarijnVenderbosch/MScProject | b82925d249e1c380995e1d5f60c0e636b52948d5 | [
"MIT"
] | null | null | null | Hardware/ComputedPattern/computedDiffractionPattern.py | MarijnVenderbosch/MScProject | b82925d249e1c380995e1d5f60c0e636b52948d5 | [
"MIT"
] | 1 | 2021-07-28T15:27:05.000Z | 2021-07-28T15:27:05.000Z | Hardware/ComputedPattern/computedDiffractionPattern.py | MarijnVenderbosch/MScProject | b82925d249e1c380995e1d5f60c0e636b52948d5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Jan 9 15:49:08 2022
Plots the computed diffraction pattern from the GSW algorithm, as well as the phase mask that produces it
@author: marijn
"""
#%% Imports
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.axes_grid1 import make_axes_locatable
#%% load data
# Load calculated pattern
pattern = Image.open('files/7x7_calc_pattern.bmp')
patternGrey = pattern.convert('L')
patternArray = np.array(patternGrey) / 255
# crop
def crop_center(img, cropx, cropy):
y,x = img.shape
startx = int(x / 2 - (cropx / 2))
starty = int(y / 2 - (cropy / 2))
return img[starty : starty + cropy, startx : startx + cropx]
patternCrop = crop_center(patternArray, 80, 50)
# load phasemask
mask = Image.open('files/7x7_mask.bmp')
maskArray = np.array(mask)
#%% Plotting
fig, (ax1,ax2) = plt.subplots(1, 2,
tight_layout = True,
                              figsize = (7.8, 3.5*2/3))
maskPlot = ax1.imshow(maskArray, cmap = 'gray')
ax1.set_xlabel(r'$x$ [pixels]')
ax1.set_ylabel(r'$y$ [pixels]')
ax1.text(-400,
         50,
         r'a)',
         fontsize = 12,
fontweight = 'bold'
)
twoDplot = ax2.imshow(patternCrop)
ax2.set_xlabel(r'$x$ [focal units]')
ax2.set_ylabel(r'$y$ [focal units]')
ax2.text(-20,
         1.8,
         r'b)',
         fontsize = 12,
fontweight = 'bold'
)
fig.colorbar(twoDplot,
pad=0.02,
             shrink = 0.7)
plt.savefig('exports/MaskAndComputedPattern.pdf',
dpi = 100,
pad_inches = 0,
bbox_inches = 'tight'
)
| 21.267327 | 86 | 0.558194 | 268 | 2,148 | 4.414179 | 0.496269 | 0.050719 | 0.023669 | 0.02874 | 0.077768 | 0.064243 | 0 | 0 | 0 | 0 | 0 | 0.05768 | 0.281657 | 2,148 | 100 | 87 | 21.48 | 0.709008 | 0 | 0 | 0.38806 | 0 | 0 | 0.086492 | 0.032034 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.074627 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06dd9184d218fc8501e6b1114c6a323ca19a20eb | 5,803 | py | Python | pyvo/dal/tests/test_params.py | tomdonaldson/pyvo | 229820bd04b243a092b13e25362a7e1b258519f5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 1 | 2019-11-12T22:38:36.000Z | 2019-11-12T22:38:36.000Z | pyvo/dal/tests/test_params.py | tomdonaldson/pyvo | 229820bd04b243a092b13e25362a7e1b258519f5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | pyvo/dal/tests/test_params.py | tomdonaldson/pyvo | 229820bd04b243a092b13e25362a7e1b258519f5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
Tests for pyvo.dal.datalink
"""
from functools import partial
from urllib.parse import parse_qsl
from pyvo.dal.adhoc import DatalinkResults
from pyvo.dal.params import find_param_by_keyword, get_converter
from pyvo.dal.exceptions import DALServiceError
import pytest
import numpy as np
import astropy.units as u
from astropy.utils.data import get_pkg_data_contents, get_pkg_data_fileobj
get_pkg_data_contents = partial(
get_pkg_data_contents, package=__package__, encoding='binary')
get_pkg_data_fileobj = partial(
get_pkg_data_fileobj, package=__package__, encoding='binary')
@pytest.fixture()
def proc(mocker):
def callback(request, context):
return get_pkg_data_contents('data/datalink/proc.xml')
with mocker.register_uri(
'GET', 'http://example.com/proc', content=callback
) as matcher:
yield matcher
@pytest.fixture()
def proc_ds(mocker):
def callback(request, context):
return b''
with mocker.register_uri(
'GET', 'http://example.com/proc', content=callback
) as matcher:
yield matcher
@pytest.fixture()
def proc_units(mocker):
def callback(request, context):
return get_pkg_data_contents('data/datalink/proc_units.xml')
with mocker.register_uri(
'GET', 'http://example.com/proc_units', content=callback
) as matcher:
yield matcher
@pytest.fixture()
def proc_units_ds(mocker):
def callback(request, context):
data = dict(parse_qsl(request.query))
if 'band' in data:
assert data['band'] == (
'6.000000000000001e-07 8.000000000000001e-06')
return b''
with mocker.register_uri(
'GET', 'http://example.com/proc_units_ds', content=callback
) as matcher:
yield matcher
@pytest.fixture()
def proc_inf(mocker):
def callback(request, context):
return get_pkg_data_contents('data/datalink/proc_inf.xml')
with mocker.register_uri(
'GET', 'http://example.com/proc_inf', content=callback
) as matcher:
yield matcher
@pytest.fixture()
def proc_inf_ds(mocker):
def callback(request, context):
data = dict(parse_qsl(request.query))
if 'band' in data:
assert data['band'] == (
'6.000000000000001e-07 +Inf')
return b''
with mocker.register_uri(
'GET', 'http://example.com/proc_inf_ds', content=callback
) as matcher:
yield matcher
@pytest.mark.usefixtures('proc')
@pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.W06")
@pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.W48")
@pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.E02")
def test_find_param_by_keyword():
datalink = DatalinkResults.from_result_url('http://example.com/proc')
proc = datalink[0]
input_params = {param.name: param for param in proc.input_params}
polygon_lower = find_param_by_keyword('polygon', input_params)
polygon_upper = find_param_by_keyword('POLYGON', input_params)
circle_lower = find_param_by_keyword('circle', input_params)
circle_upper = find_param_by_keyword('CIRCLE', input_params)
assert polygon_lower == polygon_upper
assert circle_lower == circle_upper
@pytest.mark.usefixtures('proc')
@pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.W06")
@pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.W48")
@pytest.mark.filterwarnings("ignore::astropy.io.votable.exceptions.E02")
def test_serialize():
datalink = DatalinkResults.from_result_url('http://example.com/proc')
proc = datalink[0]
input_params = {param.name: param for param in proc.input_params}
polygon_conv = get_converter(
find_param_by_keyword('polygon', input_params))
circle_conv = get_converter(
find_param_by_keyword('circle', input_params))
scale_conv = get_converter(
find_param_by_keyword('scale', input_params))
kind_conv = get_converter(
find_param_by_keyword('kind', input_params))
assert polygon_conv.serialize((1, 2, 3)) == "1 2 3"
assert polygon_conv.serialize(np.array((1, 2, 3))) == "1 2 3"
assert circle_conv.serialize((1.1, 2.2, 3.3)) == "1.1 2.2 3.3"
assert circle_conv.serialize(np.array((1.1, 2.2, 3.3))) == "1.1 2.2 3.3"
assert scale_conv.serialize(1) == "1"
assert kind_conv.serialize("DATA") == "DATA"
@pytest.mark.usefixtures('proc')
@pytest.mark.usefixtures('proc_ds')
def test_serialize_exceptions():
datalink = DatalinkResults.from_result_url('http://example.com/proc')
proc = datalink[0]
input_params = {param.name: param for param in proc.input_params}
polygon_conv = get_converter(
find_param_by_keyword('polygon', input_params))
circle_conv = get_converter(
find_param_by_keyword('circle', input_params))
band_conv = get_converter(
find_param_by_keyword('band', input_params))
with pytest.raises(DALServiceError):
polygon_conv.serialize((1, 2, 3, 4))
with pytest.raises(DALServiceError):
circle_conv.serialize((1, 2, 3, 4))
with pytest.raises(DALServiceError):
band_conv.serialize((1, 2, 3))
@pytest.mark.usefixtures('proc_units')
@pytest.mark.usefixtures('proc_units_ds')
def test_units():
datalink = DatalinkResults.from_result_url('http://example.com/proc_units')
proc = datalink[0]
proc.process(band=(6000*u.Angstrom, 80000*u.Angstrom))
@pytest.mark.usefixtures('proc_inf')
@pytest.mark.usefixtures('proc_inf_ds')
def test_inf():
datalink = DatalinkResults.from_result_url('http://example.com/proc_inf')
proc = datalink[0]
proc.process(band=(6000, +np.inf) * u.Angstrom)
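The converters exercised by these tests turn coordinate tuples into space-separated strings and raise `DALServiceError` when the wrong number of values is supplied. A minimal, hypothetical sketch of that behaviour (illustrative names only, not pyvo's actual implementation):

```python
class ServiceError(ValueError):
    """Stand-in for pyvo's DALServiceError in this sketch."""

class SequenceConverter:
    """Hypothetical converter mirroring the behaviour the tests check."""

    def __init__(self, arity=None):
        # arity=None means any number of values is accepted (e.g. polygon)
        self.arity = arity

    def serialize(self, values):
        values = list(values)
        if self.arity is not None and len(values) != self.arity:
            raise ServiceError(
                "expected %d values, got %d" % (self.arity, len(values)))
        # str() renders ints as "1" and floats as "1.1", matching the tests
        return " ".join(str(v) for v in values)
```

The same join works for NumPy arrays too, since iterating an array yields scalars whose `str()` matches the plain-Python values.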
| 30.703704 | 79 | 0.70274 | 778 | 5,803 | 5.020566 | 0.1491 | 0.047875 | 0.03661 | 0.059908 | 0.759089 | 0.701485 | 0.677163 | 0.601382 | 0.580389 | 0.552739 | 0 | 0.02783 | 0.170257 | 5,803 | 188 | 80 | 30.867021 | 0.783385 | 0.018956 | 0 | 0.533333 | 0 | 0 | 0.157108 | 0.067734 | 0 | 0 | 0 | 0 | 0.074074 | 1 | 0.125926 | false | 0 | 0.066667 | 0.02963 | 0.237037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06e1151beb39e232e87cdadb19a9e3cc960f57c7 | 386 | py | Python | DSA Learning Series/Divide and Conquer + Binary Search/Lowest Sum (LOWSUM)/lowest_sum.py | Ekalaivanpj/codechef | 0adabcabe1dde60be5ee822878ce01057a351fbb | [
"Apache-2.0"
] | 4 | 2021-05-20T08:21:36.000Z | 2022-03-26T03:56:20.000Z | DSA Learning Series/Divide and Conquer + Binary Search/Lowest Sum (LOWSUM)/lowest_sum.py | Ekalaivanpj/codechef | 0adabcabe1dde60be5ee822878ce01057a351fbb | [
"Apache-2.0"
] | 5 | 2021-03-30T05:07:16.000Z | 2021-05-02T04:09:39.000Z | DSA Learning Series/Divide and Conquer + Binary Search/Lowest Sum (LOWSUM)/lowest_sum.py | Ekalaivanpj/codechef | 0adabcabe1dde60be5ee822878ce01057a351fbb | [
"Apache-2.0"
] | 3 | 2021-03-27T12:20:09.000Z | 2021-10-05T16:53:16.000Z | for _ in range(int(input())):
k, q = map(int, input().split())
mot = sorted(list(map(int, input().split())))
sat = sorted(list(map(int, input().split())))
qs = []
for i in range(q):
qs.append(int(input()))
gen = [mot[i]+sat[j] for i in range(k) for j in range(min(k, 10001//(i+1)))]
gen.sort()
res = [gen[e-1] for e in qs]
print(*res)
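The list comprehension above caps `j` at `10001//(i+1)` because at most the 10000th smallest sum is ever queried, which keeps the candidate list small. A heap-based alternative (a hypothetical sketch, not the submitted solution) produces the n smallest pairwise sums directly without over-generating:

```python
import heapq

def smallest_pair_sums(a, b, n):
    """Return the n smallest values of a[i] + b[j] via a k-way merge.

    Seed the heap with every a[i] paired with b[0]; whenever (i, j) is
    popped, its successor (i, j + 1) is pushed.
    """
    a, b = sorted(a), sorted(b)
    heap = [(a[i] + b[0], i, 0) for i in range(len(a))]
    heapq.heapify(heap)
    out = []
    while heap and len(out) < n:
        s, i, j = heapq.heappop(heap)
        out.append(s)
        if j + 1 < len(b):
            heapq.heappush(heap, (a[i] + b[j + 1], i, j + 1))
    return out
```

This does O(n log k) work for n extracted sums instead of sorting the full candidate list.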
| 25.733333 | 80 | 0.520725 | 66 | 386 | 3.030303 | 0.378788 | 0.2 | 0.165 | 0.24 | 0.26 | 0.26 | 0 | 0 | 0 | 0 | 0 | 0.024221 | 0.251295 | 386 | 14 | 81 | 27.571429 | 0.66782 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06e3c1aeb681d761c58c83162ae190f781ff3012 | 561 | py | Python | Lecture1/euler.py | quao627/AGRI9999-Seminar-in-Python | c87a628d2866787192db8a949925f6f1d6747200 | [
"MIT"
] | 2 | 2021-05-18T09:49:01.000Z | 2021-07-01T07:54:06.000Z | Lecture1/euler.py | quao627/AGRI9999-Seminar-in-Python | c87a628d2866787192db8a949925f6f1d6747200 | [
"MIT"
] | null | null | null | Lecture1/euler.py | quao627/AGRI9999-Seminar-in-Python | c87a628d2866787192db8a949925f6f1d6747200 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Estimation methods for the Euler Number"""
def series(n_terms=1000):
"""Estimate e with series: 1/1 + 1/1 + 1/(1*2) + 1/(1*2*3) + ..."""
def factorial(n):
result = 1
for i in range(1, n+1):
result *= i
return result
    estimate = sum(1 / factorial(i) for i in range(n_terms))
    print(estimate)
    return estimate
def limit(n_limit=1000):
"""Estimate e with limit: (1 + 1/n) ^ n"""
    estimate = (1 + 1 / n_limit) ** n_limit
    print(estimate)
    return estimate
if __name__ == '__main__':
estimation_1 = series()
estimation_2 = limit() | 25.5 | 71 | 0.561497 | 89 | 561 | 3.370787 | 0.404494 | 0.053333 | 0.04 | 0.04 | 0.02 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073634 | 0.249554 | 561 | 22 | 72 | 25.5 | 0.638955 | 0.324421 | 0 | 0 | 0 | 0 | 0.022039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.333333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
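Both estimators converge to Euler's number, the series far faster than the limit form. A quick dependency-free check (a sketch that keeps a running `1/i!` term instead of recomputing the nested factorial each iteration):

```python
import math

def series_estimate(n_terms=20):
    """Sum 1/i! incrementally: `term` holds 1/i! at the top of each loop."""
    term, total = 1.0, 0.0
    for i in range(n_terms):
        total += term
        term /= (i + 1)
    return total
```

Twenty series terms already agree with `math.e` to machine precision, while `(1 + 1/n)**n` with n = 10^6 is only accurate to about six digits.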
06edf692570744bccdf6deec2bcb6156ea29b2f1 | 988 | py | Python | final_project/server.py | tarka-projects/xzceb-flask_eng_fr | 2461cea58904416fb290ef7eec450dcf1cb74bce | [
"Apache-2.0"
] | null | null | null | final_project/server.py | tarka-projects/xzceb-flask_eng_fr | 2461cea58904416fb290ef7eec450dcf1cb74bce | [
"Apache-2.0"
] | null | null | null | final_project/server.py | tarka-projects/xzceb-flask_eng_fr | 2461cea58904416fb290ef7eec450dcf1cb74bce | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Dec 8 12:16:53 2021
@author: M.Tarka
"""
from machinetranslation import translator
from flask import Flask, render_template, request
#import json
app = Flask("Web Translator")
@app.route("/englishToFrench")
def english_to_french():
textToTranslate = request.args.get('textToTranslate')
# Write your code here
# return "Translated text to French"
french_text = translator.english_to_french(textToTranslate)
return french_text
@app.route("/frenchToEnglish")
def french_to_english():
textToTranslate = request.args.get('textToTranslate')
# Write your code here
#return "Translated text to English"
english_text = translator.french_to_english(textToTranslate)
return english_text
@app.route("/")
def renderIndexPage():
# Write the code to render template
return render_template('index.html')
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8080) | 26.702703 | 65 | 0.698381 | 121 | 988 | 5.520661 | 0.446281 | 0.062874 | 0.04491 | 0.08982 | 0.248503 | 0.248503 | 0.248503 | 0.248503 | 0.248503 | 0.248503 | 0 | 0.025 | 0.190283 | 988 | 37 | 66 | 26.702703 | 0.81 | 0.238866 | 0 | 0.111111 | 0 | 0 | 0.145092 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 1 | 0.166667 | false | 0 | 0.111111 | 0.055556 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
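Both handlers above read `textToTranslate` from the query string via `request.args.get`. A Flask-free sketch of that lookup using only the standard library (a hypothetical helper, shown for illustration):

```python
from urllib.parse import parse_qs, urlparse

def get_query_param(url, name, default=None):
    """Return the first value of a query parameter, like request.args.get."""
    values = parse_qs(urlparse(url).query).get(name)
    return values[0] if values else default
```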
06ef39ff8253db655f448f8f2402723868136317 | 400 | py | Python | hata/discord/http/headers.py | Multiface24111/hata | cd28f9ef158e347363669cc8d1d49db0ff41aba0 | [
"0BSD"
] | 173 | 2019-06-14T20:25:00.000Z | 2022-03-21T19:36:10.000Z | hata/discord/http/headers.py | Multiface24111/hata | cd28f9ef158e347363669cc8d1d49db0ff41aba0 | [
"0BSD"
] | 52 | 2020-01-03T17:05:14.000Z | 2022-03-31T11:39:50.000Z | hata/discord/http/headers.py | Multiface24111/hata | cd28f9ef158e347363669cc8d1d49db0ff41aba0 | [
"0BSD"
] | 47 | 2019-11-09T08:46:45.000Z | 2022-03-31T14:33:34.000Z | __all__ = ()
from ...backend.utils import istr
AUDIT_LOG_REASON = istr('X-Audit-Log-Reason')
RATE_LIMIT_REMAINING = istr('X-RateLimit-Remaining')
RATE_LIMIT_RESET = istr('X-RateLimit-Reset')
RATE_LIMIT_RESET_AFTER = istr('X-RateLimit-Reset-After')
RATE_LIMIT_LIMIT = istr('X-RateLimit-Limit')
# to send
RATE_LIMIT_PRECISION = istr('X-RateLimit-Precision')
DEBUG_OPTIONS = istr('X-Debug-Options')
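`istr` makes header names compare case-insensitively, so `X-RateLimit-Remaining` matches however the server cases it. A minimal sketch of such a case-insensitive string (illustrative only, not hata's actual `istr` implementation):

```python
class CIStr(str):
    """A str subclass that compares and hashes case-insensitively."""

    def __eq__(self, other):
        if isinstance(other, str):
            return self.casefold() == other.casefold()
        return NotImplemented

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __hash__(self):
        # hash the folded form so equal keys collide in dict lookups
        return hash(self.casefold())
```

Because hashing and equality both use the folded form, these keys work in ordinary dicts regardless of casing.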
| 26.666667 | 56 | 0.765 | 59 | 400 | 4.881356 | 0.355932 | 0.121528 | 0.243056 | 0.131944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0875 | 400 | 14 | 57 | 28.571429 | 0.789041 | 0.0175 | 0 | 0 | 0 | 0 | 0.337596 | 0.16624 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06f5bf36c0f5f2c3625329aa5bf79fe25c4d0d2d | 7,943 | py | Python | RPN.py | elbert-xiao/RFCN-Pytorch | 481439bfc88b35a27c9c74aa64823c21dabb9c88 | [
"MIT"
] | 11 | 2021-02-10T11:41:54.000Z | 2021-08-11T12:45:47.000Z | RPN.py | elbert-xiao/RFCN-Pytorch | 481439bfc88b35a27c9c74aa64823c21dabb9c88 | [
"MIT"
] | 1 | 2021-03-30T04:14:48.000Z | 2021-03-30T06:42:02.000Z | RPN.py | elbert-xiao/RFCN-Pytorch | 481439bfc88b35a27c9c74aa64823c21dabb9c88 | [
"MIT"
] | 2 | 2021-03-20T01:54:06.000Z | 2021-05-21T04:22:46.000Z | import torch.nn as nn
import numpy as np
from torch.nn import functional as F
from utils.bbox_tools import generate_anchor_base
from utils.creator_tool import ProposalCreator
def _enumerate_shifted_anchor(anchor_base, feat_stride, height, width):
"""
Enumerate all shifted anchors:
:param anchor_base: base anchor,shape: (A, 4), here 4==(y1, x1, y2, x2)
:param feat_stride: int, stride
:param height: height of RPN input feature map
:param width: width of RPN input feature map
:return: all anchor
"""
shift_y = np.arange(0, height * feat_stride, feat_stride)
shift_x = np.arange(0, width * feat_stride, feat_stride)
shift_x, shift_y = np.meshgrid(shift_x, shift_y)
# offset of center
shift = np.stack((shift_y.ravel(), shift_x.ravel(),
shift_y.ravel(), shift_x.ravel()), axis=1)
A = anchor_base.shape[0] # the number of base anchor
K = shift.shape[0] # anchor group (==height * width)
# A (base) anchor on each pixel <----> K offset,==>K * A anchors
anchor = anchor_base.reshape((1, A, 4)) + \
shift.reshape((1, K, 4)).transpose((1, 0, 2)) # shape:(K, A, 4)
anchor = anchor.reshape((K * A, 4)).astype(np.float32) # shape:(K*A, 4)
return anchor
class RegionProposalNetwork(nn.Module):
"""Region Proposal Network introduced in Faster R-CNN.
This is Region Proposal Network introduced in Faster R-CNN [#]_.
This takes features extracted from images and propose
class agnostic bounding boxes around "objects".
.. [#] Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun. \
Faster R-CNN: Towards Real-Time Object Detection with \
Region Proposal Networks. NIPS 2015.
Args:
in_channels (int): The channel size of input.
mid_channels (int): The channel size of the intermediate tensor.
ratios (list of floats): This is ratios of width to height of
the anchors.
anchor_scales (list of numbers): This is areas of anchors.
Those areas will be the product of the square of an element in
:obj:`anchor_scales` and the original area of the reference
window.
feat_stride (int): Stride size after extracting features from an
image.
initialW (callable): Initial weight value. If :obj:`None` then this
function uses Gaussian distribution scaled by 0.1 to
initialize weight.
May also be a callable that takes an array and edits its values.
        proposal_creator_params (dict): Key valued parameters for
:class:`model.utils.creator_tools.ProposalCreator`.
.. seealso::
:class:`~model.utils.creator_tools.ProposalCreator`
"""
def __init__(self, in_channels=1024, mid_channels=512,
ratios=[0.5, 1, 2],
anchor_scales=[8, 16, 32],
feat_stride=16,
proposal_creator_params=dict()):
super(RegionProposalNetwork, self).__init__()
self.anchor_base = generate_anchor_base(anchor_scales=anchor_scales,
ratios=ratios)
self.feat_stride = feat_stride
self.proposal_layer = ProposalCreator(self, **proposal_creator_params)
# the number of base anchor
n_anchor = self.anchor_base.shape[0]
self.conv1 = nn.Conv2d(in_channels, mid_channels, (3, 3), 1, 1)
# confidence and regression params
score_out_channels = n_anchor * 2 # 2class(P/N) for each anchor
self.score = nn.Conv2d(mid_channels, score_out_channels, 1)
loc_out_channels = n_anchor * 4 # 4coords for each anchor
self.loc = nn.Conv2d(mid_channels, loc_out_channels, 1)
normal_init(self.conv1, 0, 0.01)
normal_init(self.score, 0, 0.01)
normal_init(self.loc, 0, 0.01)
def forward(self, x, img_size, scale=1., only_rpn=False):
"""Forward Region Proposal Network.
Here are notations.
* :math:`N` is batch size.
* :math:`C` channel size of the input.
        * :math:`H` and :math:`W` are height and width of the input feature.
* :math:`A` is number of anchors assigned to each pixel.
Args:
x (~torch.autograd.Variable): The Features extracted from images.
Its shape is :math:`(N, C, H, W)`.
img_size (tuple of ints): A tuple :obj:`height, width`,
which contains image size after scaling.
scale (float): The amount of scaling done to the input images after
reading them from files.
Returns:
(~torch.autograd.Variable, ~torch.autograd.Variable, array, array, array):
This is a tuple of five following values.
* **rpn_locs**: Predicted bounding box offsets and scales for \
anchors. Its shape is :math:`(N, H W A, 4)`.
* **rpn_scores**: Predicted foreground scores for \
anchors. Its shape is :math:`(N, H W A, 2)`.
* **rois**: A bounding box array containing coordinates of \
proposal boxes. This is a concatenation of bounding box \
arrays from multiple images in the batch. \
Its shape is :math:`(R', 4)`. Given :math:`R_i` predicted \
bounding boxes from the :math:`i` th image, \
:math:`R' = \\sum _{i=1} ^ N R_i`.
* **roi_indices**: An array containing indices of images to \
which RoIs correspond to. Its shape is :math:`(R',)`.
* **anchor**: Coordinates of enumerated shifted anchors. \
Its shape is :math:`(H W A, 4)`.
"""
n, _, hh, ww = x.shape
anchor = _enumerate_shifted_anchor(self.anchor_base,
self.feat_stride,
hh, ww)
n_anchor = self.anchor_base.shape[0]
mid_out = F.relu(self.conv1(x)) # Dimension reduction+relu
rpn_locs = self.loc(mid_out)
rpn_locs = rpn_locs.permute(0, 2, 3, 1).contiguous().view((n, -1, 4))
rpn_scores = self.score(mid_out)
rpn_scores = rpn_scores.permute(0, 2, 3, 1).contiguous()
rpn_softmax_scores = F.softmax(rpn_scores.view(n, hh, ww, n_anchor, 2), dim=4)
rpn_fg_scores = rpn_softmax_scores[:, :, :, :, 1].contiguous()
rpn_fg_scores = rpn_fg_scores.view(n, -1)
rpn_scores = rpn_scores.view(n, -1, 2)
if only_rpn:
# return reg and cls item of rpn
return rpn_locs, rpn_scores, anchor
rois_allbatch = list()
rois_indices = list()
for i in range(n):
rois = self.proposal_layer(
rpn_locs[i].cpu().data.numpy(),
rpn_fg_scores[i].cpu().data.numpy(),
anchor, img_size,
scale=scale) # shape:(S, 4)
batch_index = i * np.ones((len(rois),), dtype=np.int32) # shape: (S, )
rois_allbatch.append(rois) # [array[[], [], ...], array[[], [], ...] ]
rois_indices.append(batch_index) # roi batch index, [array[0, 0,...], array([1, 1,...], ...)]
rois_allbatch = np.concatenate(rois_allbatch,
axis=0) # array([[y11, x11, y12, x12], [y21, x21, y22, x22], ...])
rois_indices = np.concatenate(rois_indices, axis=0) # array([0, 0, ..., 1,1, 1, ...])
return rpn_locs, rpn_scores, rois_allbatch, rois_indices, anchor
def normal_init(m, mean, stddev, truncated=False):
"""
    weight initializer: truncated normal and random normal.
"""
# x is a parameter
if truncated:
m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean)
else:
m.weight.data.normal_(mean, stddev)
if m.bias is not None:
m.bias.data.zero_()
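`_enumerate_shifted_anchor` above tiles the A base anchors over every one of the K = height × width grid cells, producing K·A anchors in y-major, x-fast order. A dependency-free sketch of the same tiling (an illustrative re-implementation, useful for checking the expected count):

```python
def enumerate_shifted_anchors(anchor_base, feat_stride, height, width):
    """Pure-Python tiling: each of the A base anchors is shifted to every
    one of the K = height * width grid positions, feat_stride apart."""
    anchors = []
    for y in range(0, height * feat_stride, feat_stride):
        for x in range(0, width * feat_stride, feat_stride):
            for (y1, x1, y2, x2) in anchor_base:
                anchors.append((y1 + y, x1 + x, y2 + y, x2 + x))
    return anchors
```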
| 41.586387 | 110 | 0.591212 | 1,056 | 7,943 | 4.305871 | 0.273674 | 0.024192 | 0.013196 | 0.018474 | 0.155707 | 0.113481 | 0.044865 | 0.032989 | 0.032989 | 0.012316 | 0 | 0.023268 | 0.296613 | 7,943 | 190 | 111 | 41.805263 | 0.790585 | 0.480297 | 0 | 0.026316 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.065789 | 0 | 0.171053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
06f6d78ce704011152fca337b015e023a11c5710 | 3,180 | py | Python | DroneOS/buildroot/system/skeleton/root/mytest.py | TechV/DroneOS | b01366e9f658890436a7bbcc809739ea225b99e0 | [
"Apache-2.0"
] | 1 | 2021-06-27T12:31:21.000Z | 2021-06-27T12:31:21.000Z | DroneOS/buildroot/system/skeleton/root/mytest.py | TechV/DroneOS | b01366e9f658890436a7bbcc809739ea225b99e0 | [
"Apache-2.0"
] | null | null | null | DroneOS/buildroot/system/skeleton/root/mytest.py | TechV/DroneOS | b01366e9f658890436a7bbcc809739ea225b99e0 | [
"Apache-2.0"
] | 2 | 2015-12-12T04:57:05.000Z | 2018-09-11T09:39:26.000Z | #!/usr/bin/python
import time
from motor import motor
from RPIO import PWM
PWM.setup()
PWM.init_channel(0)
# where GPIO 17 = physical pin 11
# First we specify which GPIO pins our motors are on and set our PWM accordingly
mymotor1 = motor('m1', 23, simulation=False)
mymotor2 = motor('m2', 17, simulation=False)
mymotor3 = motor('m3', 24, simulation=False)
mymotor4 = motor('m4', 4, simulation=False)
print('Motors set, press ENTER')
res = raw_input()
# Here we set each motor to 7; most ESCs handle pairing by quickly
# increasing and decreasing throttle, and this implements that.
mymotor1.start()
mymotor1.setW(7)
mymotor2.start()
mymotor2.setW(7)
mymotor3.start()
mymotor3.setW(7)
mymotor4.start()
mymotor4.setW(7)
#NOTE:the angular motor speed W can vary from 0 (min) to 100 (max)
print('***Wait beep-beep')
print('***then press ENTER')
# here we throttle down to zero and wait for a longer beep to designate
# that our motors are all paired and ready for orders.
res = raw_input()
mymotor1.setW(0)
mymotor2.setW(0)
mymotor3.setW(0)
mymotor4.setW(0)
print('***Wait for long beeeeep')
print('***then press ENTER')
# My setup begins spining at a W level around 17 so we set baseline at 10
# You want this to be just under your minimum throttle level.
res = raw_input()
mymotor1.setW(10)
res = raw_input()
mymotor2.setW(10)
res = raw_input()
mymotor3.setW(10)
res = raw_input()
mymotor4.setW(10)
print ('increase W > q | decrease W > w | save Wh > e | set Wh > r | quit > 9 | cycle > c')
cycling = True
try:
while cycling:
res = raw_input()
if res == 'q':
mymotor1.increaseW(20)
mymotor2.increaseW(20)
mymotor3.increaseW(20)
mymotor4.increaseW(20)
if res == 'w':
mymotor1.decreaseW(25)
mymotor2.decreaseW(25)
mymotor3.decreaseW(25)
mymotor4.decreaseW(25)
if res == 'e':
mymotor1.saveWh()
mymotor2.saveWh()
mymotor3.saveWh()
mymotor4.saveWh()
if res == 'r':
mymotor1.setWh()
mymotor2.setWh()
mymotor3.setWh()
mymotor4.setWh()
if res == 'c':
# decrease by 100 since not all esc's can set W to zero after pairing
mymotor1.decreaseW(100)
mymotor2.decreaseW(100)
mymotor3.decreaseW(100)
mymotor4.decreaseW(100)
# spin motor 1 for 10 seconds
mymotor1.increaseW(25)
time.sleep(10)
#stop motor 1
mymotor1.decreaseW(25)
# spin motor 2 for 10 seconds
mymotor2.increaseW(25)
time.sleep(10)
#stop motor 2
mymotor2.decreaseW(25)
# spin motor 3 for 10 seconds
mymotor3.increaseW(25)
time.sleep(10)
#stop motor 3
mymotor3.decreaseW(25)
# spin motor 4 for 10 seconds
mymotor4.increaseW(25)
time.sleep(10)
#stop motor 4
mymotor4.decreaseW(25)
if res == '9':
cycling = False
finally:
# shut down cleanly
mymotor1.stop()
mymotor2.stop()
mymotor3.stop()
mymotor4.stop()
PWM.clear_channel_gpio(0, 23)
PWM.clear_channel_gpio(0, 17)
    PWM.clear_channel_gpio(0, 24)
PWM.clear_channel_gpio(0, 4)
print ("well done!")
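The arming and cycling sequence above needs real ESCs on the bench. A hardware-free sketch with a stub motor class (hypothetical, mirroring only the interface this script uses) lets the throttle logic be exercised on any machine:

```python
class StubMotor:
    """Hardware-free stand-in that records every throttle level it is set to."""

    def __init__(self, name):
        self.name = name
        self.w = 0
        self.history = []

    def setW(self, w):
        self.w = w
        self.history.append(w)

    def increaseW(self, delta):
        self.setW(self.w + delta)

    def decreaseW(self, delta):
        # clamp at 0, like ESCs that cannot go below idle
        self.setW(max(0, self.w - delta))

def arm(motor):
    # The ESC pairing pattern from the script: throttle up, then back to 0.
    motor.setW(7)
    motor.setW(0)
```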
| 24.84375 | 91 | 0.647799 | 457 | 3,180 | 4.472648 | 0.328228 | 0.043053 | 0.037671 | 0.039139 | 0.168787 | 0.060665 | 0.060665 | 0 | 0 | 0 | 0 | 0.074059 | 0.239937 | 3,180 | 127 | 92 | 25.03937 | 0.771618 | 0.255975 | 0 | 0.233333 | 0 | 0.011111 | 0.088311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.033333 | null | null | 0.077778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
660190b920eba2db692a28004d64fd484ef5d4a1 | 274 | py | Python | PythonDesafios/d046.py | adaatii/Python-Curso-em-Video- | 30b37713b3685469558babb93b557b53210f010c | [
"MIT"
] | null | null | null | PythonDesafios/d046.py | adaatii/Python-Curso-em-Video- | 30b37713b3685469558babb93b557b53210f010c | [
"MIT"
] | null | null | null | PythonDesafios/d046.py | adaatii/Python-Curso-em-Video- | 30b37713b3685469558babb93b557b53210f010c | [
"MIT"
] | null | null | null | #Faça um programa que mostre na tela uma contagem regressiva para
# o estouro de fogos de artifício, indo de 10 até 0, com uma pausa
# de 1 segundo entre eles.
from time import sleep
for i in range(10, -1, -1):
print('{}'.format(i))
sleep(1)
print('Bum, BUM, POW')
| 27.4 | 66 | 0.686131 | 50 | 274 | 3.76 | 0.76 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041284 | 0.20438 | 274 | 9 | 67 | 30.444444 | 0.821101 | 0.562044 | 0 | 0 | 0 | 0 | 0.128205 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6609c0576ff62418d3d4463d3773b85b21390d7a | 4,855 | py | Python | starter_simpleNN.py | MirunaPislar/Word2vec | e9dd01488f081a7b8d7c00a0b21efe0d401d4927 | [
"MIT"
] | 13 | 2018-05-19T22:29:27.000Z | 2022-03-25T13:28:17.000Z | starter_simpleNN.py | MirunaPislar/Word2vec | e9dd01488f081a7b8d7c00a0b21efe0d401d4927 | [
"MIT"
] | 1 | 2019-01-14T09:55:50.000Z | 2019-01-25T22:17:03.000Z | starter_simpleNN.py | MirunaPislar/Word2vec | e9dd01488f081a7b8d7c00a0b21efe0d401d4927 | [
"MIT"
] | 6 | 2018-05-19T22:29:29.000Z | 2022-03-11T12:00:37.000Z | import numpy as np
import random
# Softmax function, optimized such that larger inputs are still feasible
# softmax(x + c) = softmax(x)
def softmax(x):
orig_shape = x.shape
x = x - np.max(x, axis = 1, keepdims = True)
exp_x = np.exp(x)
x = exp_x / np.sum(exp_x, axis = 1, keepdims = True)
assert x.shape == orig_shape
return x
# Implementation for the sigmoid function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Derivative of sigmoid function
def sigmoid_grad(sigmoid):
return sigmoid * (1 - sigmoid)
# Gradient checker for a function f
# f is a function that takes a single argument and outputs the cost and its gradients
# x is the point to check the gradient at
def gradient_checker(f, x):
rndstate = random.getstate()
random.setstate(rndstate)
cost, grad = f(x) # Evaluate function value at original point
epsilon = 1e-4 # Tiny shift to the input to compute approximated gradient with formula
# Iterate over all indexes in x
it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
i = it.multi_index
# Calculate J(theta_minus)
x_minus = np.copy(x)
x_minus[i] = x[i] - epsilon
random.setstate(rndstate)
f_minus = f(x_minus)[0]
# Calculate J(theta_plus)
x_plus = np.copy(x)
x_plus[i] = x[i] + epsilon
random.setstate(rndstate)
f_plus = f(x_plus)[0]
numgrad = (f_plus - f_minus) / (2 * epsilon)
# Compare gradients
reldiff = abs(numgrad - grad[i]) / max(1, abs(numgrad), abs(grad[i]))
if reldiff > 1e-5:
            print("Gradient check failed.")
            print("First gradient error found at index %s" % str(i))
            print("Your gradient: %f \t Numerical gradient: %f" % (
                grad[i], numgrad))
            return
        it.iternext()  # Step to next dimension
    print("Gradient check passed!")
# Compute the forward and backward propagation for the NN model
def forward_backward_prop(data, labels, params, dimensions):
# Unpack the parameters
Dx, H, Dy = (dimensions[0], dimensions[1], dimensions[2])
offset = 0
W1 = np.reshape(params[offset : offset + Dx * H], (Dx, H))
offset += Dx * H
b1 = np.reshape(params[offset : offset + 1 * H], (1, H))
offset += 1 * H
W2 = np.reshape(params[offset : offset + H * Dy], (H, Dy))
offset += H * Dy
b2 = np.reshape(params[offset : offset + 1 * Dy], (1, Dy))
# Forward propagation
a0 = data
z1 = np.dot(a0, W1) + b1
a1 = sigmoid(z1)
z2 = np.dot(a1, W2) + b2
a2 = softmax(z2)
cost = - np.sum(labels * np.log(a2))
# Backward propagation
delta1 = a2 - labels
dW2 = np.dot(a1.T, delta1)
db2 = np.sum(delta1, axis = 0, keepdims = True)
delta2 = np.multiply(np.dot(delta1, W2.T), sigmoid_grad(a1))
dW1 = np.dot(a0.T, delta2)
db1 = np.sum(delta2, axis = 0, keepdims = True)
### Stack gradients
grad = np.concatenate((dW1.flatten(), db1.flatten(),dW2.flatten(), db2.flatten()))
return cost, grad
# ************** IMPLEMENTATION TESTS **************
def test_softmax():
    print("Running softmax tests...")
test1 = softmax(np.array([[1,2]]))
ans1 = np.array([0.26894142, 0.73105858])
assert np.allclose(test1, ans1, rtol=1e-05, atol=1e-06)
test2 = softmax(np.array([[1001,1002],[3,4]]))
ans2 = np.array([
[0.26894142, 0.73105858],
[0.26894142, 0.73105858]])
assert np.allclose(test2, ans2, rtol=1e-05, atol=1e-06)
test3 = softmax(np.array([[-1001,-1002]]))
ans3 = np.array([0.73105858, 0.26894142])
assert np.allclose(test3, ans3, rtol=1e-05, atol=1e-06)
    print("Passed!\n")
def test_sigmoid():
    print("Running sigmoid tests...")
x = np.array([[1, 2], [-1, -2]])
f = sigmoid(x)
g = sigmoid_grad(f)
f_ans = np.array([
[0.73105858, 0.88079708],
[0.26894142, 0.11920292]])
assert np.allclose(f, f_ans, rtol=1e-05, atol=1e-06)
g_ans = np.array([
[0.19661193, 0.10499359],
[0.19661193, 0.10499359]])
assert np.allclose(g, g_ans, rtol=1e-05, atol=1e-06)
    print("Passed!\n")
def test_gradient_descent_checker():
# Test square function x^2, grad is 2 * x
quad = lambda x: (np.sum(x ** 2), x * 2)
    print("Running gradient checker for quad function...")
gradient_checker(quad, np.array(123.456))
gradient_checker(quad, np.random.randn(3,))
gradient_checker(quad, np.random.randn(4,5))
    print("Passed!\n")
# Test cube function x^3, grad is 3 * x^2
cube = lambda x: (np.sum(x ** 3), 3 * (x ** 2))
    print("Running gradient checker for cube function...")
gradient_checker(cube, np.array(123.456))
gradient_checker(cube, np.random.randn(3,))
gradient_checker(cube, np.random.randn(4,5))
    print("Passed!\n")
if __name__ == "__main__":
test_softmax()
test_sigmoid()
test_gradient_descent_checker() | 31.732026 | 91 | 0.625953 | 735 | 4,855 | 4.061224 | 0.257143 | 0.025796 | 0.0134 | 0.020101 | 0.263987 | 0.198995 | 0.115243 | 0.063652 | 0.023451 | 0.023451 | 0 | 0.07966 | 0.224305 | 4,855 | 153 | 92 | 31.732026 | 0.712958 | 0.171164 | 0 | 0.064815 | 0 | 0 | 0.08175 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0 | null | null | 0.046296 | 0.018519 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
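The header comment in this file relies on the identity softmax(x + c) = softmax(x), which is what makes subtracting the row maximum numerically safe. A dependency-free one-dimensional sketch of that stabilised softmax, checking the shift invariance directly:

```python
import math

def softmax_1d(xs):
    """Numerically stable softmax over a 1-D list of floats."""
    m = max(xs)                       # subtract the max before exponentiating
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

Without the max subtraction, inputs like 1001 would overflow `exp`; with it, shifted and unshifted inputs give identical probabilities.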
660e77e4cd6f8356cc558856a3324dc6a1902eaa | 489 | py | Python | SimpleAddition/sum_classes.py | OpenGuide/Python---Beginner-s-Guide | 74b853be7f3eaeb490e464b549459bd877c6aa8b | [
"MIT"
] | 35 | 2017-10-09T14:45:34.000Z | 2021-11-11T08:48:52.000Z | SimpleAddition/sum_classes.py | jawachipcookie/Python-Guide-for-Beginners | 71f87df3a31044d9f6e4e2e7d9617a9e40c039ba | [
"MIT"
] | 35 | 2017-10-09T14:42:54.000Z | 2022-02-26T12:39:36.000Z | SimpleAddition/sum_classes.py | jawachipcookie/Python-Guide-for-Beginners | 71f87df3a31044d9f6e4e2e7d9617a9e40c039ba | [
"MIT"
] | 112 | 2017-10-09T14:45:42.000Z | 2022-02-25T13:03:30.000Z | # This example uses python classes for addition
class Numbers(object):
def __init__(self):
self.sum = 0
    def add(self, x):
        # Addition function
        self.sum += x
def total(self):
# Returns the total of the sum
return self.sum
if __name__ == "__main__":
# Prints 12 on the terminal when the file is run,
# you can even use input() to get numbers from
# users.
add = Numbers()
add.add(5)
add.add(7)
y = add.total()
    print("Total Sum:", y)
| 18.111111 | 51 | 0.619632 | 75 | 489 | 3.88 | 0.64 | 0.072165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014045 | 0.271984 | 489 | 26 | 52 | 18.807692 | 0.803371 | 0.390593 | 0 | 0 | 0 | 0 | 0.068729 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0 | 0.076923 | 0.384615 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
661016f5c180f4216248312796168ec5dd016391 | 1,074 | py | Python | examples/sensors.py | eirerocks/samsara-python-eu | e0f1bd8f42d083fc713f910b74123d3bc7408538 | [
"Apache-2.0"
] | 1 | 2019-09-17T14:11:52.000Z | 2019-09-17T14:11:52.000Z | examples/sensors.py | eirerocks/samsara-python-eu | e0f1bd8f42d083fc713f910b74123d3bc7408538 | [
"Apache-2.0"
] | null | null | null | examples/sensors.py | eirerocks/samsara-python-eu | e0f1bd8f42d083fc713f910b74123d3bc7408538 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
"""
This script retrieves all the sensors for a group and prints their ID, Name, Mac Address.
To use it, run:
./examples/sensors --access_token <SAMSARA_API_TOKEN> --group_id <GROUP_ID>
passing in your Samsara API access token and the group ID you want to access.
"""
import click
import samsara
from samsara.apis import SamsaraClient
@click.command()
@click.option('--access_token', type=str, required=True)
@click.option('--group_id', type=int, required=True)
def get_sensors(access_token, group_id):
# Create an instance of the SamsaraClient.
client = SamsaraClient()
# Get the sensors for the group.
response = client.get_sensors(access_token,
samsara.GroupParam(group_id))
for sensor in response.sensors:
        print('\nsensor ID: {}, name: {}, macAddress: {}'.format(
            sensor.id, sensor.name, sensor.mac_address))
if __name__ == "__main__":
get_sensors()
| 31.588235 | 89 | 0.616387 | 130 | 1,074 | 4.915385 | 0.469231 | 0.065728 | 0.084507 | 0.078247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283985 | 1,074 | 33 | 90 | 32.545455 | 0.830949 | 0.081937 | 0 | 0 | 0 | 0 | 0.102384 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1875 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
66107e108ec64097a2d7afae132b9d16d94357fd | 528 | py | Python | examples/websocket/http.py | FabianElsmer/rueckenwind | 255b026009edcdc41b6a5ad7cbae3e5e4970696c | [
"Apache-2.0"
] | 3 | 2015-09-03T07:39:57.000Z | 2020-01-28T09:14:04.000Z | examples/websocket/http.py | FabianElsmer/rueckenwind | 255b026009edcdc41b6a5ad7cbae3e5e4970696c | [
"Apache-2.0"
] | 6 | 2015-05-09T13:26:12.000Z | 2017-07-13T14:22:31.000Z | examples/websocket/http.py | FabianElsmer/rueckenwind | 255b026009edcdc41b6a5ad7cbae3e5e4970696c | [
"Apache-2.0"
] | 5 | 2015-05-13T08:58:22.000Z | 2020-09-10T14:49:43.000Z | import rw.websocket
import rw.http
from rw import gen
class WebSocketHandler(rw.websocket.WebSocketHandler):
@gen.engine
def open(self):
        print('open')
@gen.engine
def on_message(self, message):
        print('on message')
@gen.engine
def on_close(self):
        print('on close')
def __del__(self):
# XXX debugging
        print('bye bye')
root = rw.http.Module('websocket')
root.mount('/ws', WebSocketHandler)
@root.get('/')
def index():
root.render_template('index.html')
| 16.5 | 54 | 0.636364 | 67 | 528 | 4.910448 | 0.432836 | 0.082067 | 0.109422 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24053 | 528 | 31 | 55 | 17.032258 | 0.820449 | 0.024621 | 0 | 0.15 | 0 | 0 | 0.101365 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.15 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
66149430dade01f4455993e712a49045cab77edb | 2,194 | py | Python | todo/config.py | ruslan-ok/ruslan | fc402e53d2683581e13f4d6c69a6f21e5c2ca1f8 | [
"MIT"
] | null | null | null | todo/config.py | ruslan-ok/ruslan | fc402e53d2683581e13f4d6c69a6f21e5c2ca1f8 | [
"MIT"
] | null | null | null | todo/config.py | ruslan-ok/ruslan | fc402e53d2683581e13f4d6c69a6f21e5c2ca1f8 | [
"MIT"
] | null | null | null | from task.const import *
app_config = {
'name': APP_TODO,
'app_title': 'tasks',
'icon': 'check2-square',
'role': ROLE_TODO,
'main_view': 'planned',
'use_groups': True,
'use_selector': True,
'use_important': True,
'sort': [
('stop', 'termin'),
('name', 'name'),
('created', 'create date'),
('completion', 'completion date'),
('important', 'important'),
('in_my_day', 'my day'),
],
'views': {
'myday': {
'icon': 'sun',
'title': 'my day',
'sort': [
('stop', 'termin'),
('name', 'name'),
('created', 'create date'),
('important', 'important'),
],
},
'important': {
'icon': 'star',
'title': 'important tasks',
'sort': [
('stop', 'termin'),
('name', 'name'),
('created', 'create date'),
('in_my_day', 'my day'),
],
},
'planned': {
'icon': 'check2-square',
'title': 'planned tasks',
'use_sub_groups': True,
'sort': [
('stop', 'termin'),
('name', 'name'),
('created', 'create date'),
('important', 'important'),
('in_my_day', 'my day'),
],
},
'all': {
'icon': 'infinity',
'title': 'all tasks',
'use_sub_groups': True,
'hide_qty': True,
'sort': [
('stop', 'termin'),
('name', 'name'),
('created', 'create date'),
('important', 'important'),
('in_my_day', 'my day'),
],
},
'completed': {
'icon': 'check2-circle',
'title': 'completed tasks',
'hide_qty': True,
'sort': [
('completion', 'completion date'),
('name', 'name'),
('created', 'create date'),
('important', 'important'),
],
},
}
} | 28.128205 | 50 | 0.364631 | 159 | 2,194 | 4.893082 | 0.27044 | 0.057841 | 0.115681 | 0.161954 | 0.529563 | 0.465296 | 0.465296 | 0.410026 | 0.316195 | 0.260925 | 0 | 0.002465 | 0.445305 | 2,194 | 78 | 51 | 28.128205 | 0.636812 | 0 | 0 | 0.597403 | 0 | 0 | 0.339863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.116883 | 0 | 0.116883 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
661ee0f8b3d914a8c929bd99b89ca6b4d11b1811 | 5,463 | py | Python | contextual_encoders/aggregator.py | StuttgarterDotNet/contextual-encoders | 002923022ad03ec4af5159d7e434da5edffd7328 | [
"Apache-2.0"
] | null | null | null | contextual_encoders/aggregator.py | StuttgarterDotNet/contextual-encoders | 002923022ad03ec4af5159d7e434da5edffd7328 | [
"Apache-2.0"
] | null | null | null | contextual_encoders/aggregator.py | StuttgarterDotNet/contextual-encoders | 002923022ad03ec4af5159d7e434da5edffd7328 | [
"Apache-2.0"
] | null | null | null | """
Aggregator
====================================
*Aggregators* are used to combine multiple matrices to a single matrix.
This is used to combine similarity and dissimilarity matrices of multiple attributes to a single one.
Thus, an *Aggregator* :math:`\\mathcal{A}` is a mapping of the form
:math:`\\mathcal{A} : \\mathbb{R}^{n \\times n \\times k} \\rightarrow \\mathbb{R}^{n \\times n}`,
with :math:`n` being the amount of features and :math:`k` being the number of similarity or dissimilarity matrices
of type :math:`D \\in \\mathbb{R}^{n \\times n}`, i.e. the amount of attributes/columns of the dataset.
Currently, the following *Aggregators* are implemented:
=========== ===========
Name Formula
----------- -----------
mean :math:`\\mathcal{A} (D^1, D^2, ..., D^k) = \\frac{1}{k} \\sum_{i=1}^{k} D^i`
median      :math:`\\mathcal{A} (D^1, D^2, ..., D^k) = \\left\\{ \\begin{array}{ll} D^{\\frac{k+1}{2}} & \\mbox{, if } k \\mbox{ is odd} \\\\ \\frac{1}{2} \\left( D^{\\frac{k}{2}} + D^{\\frac{k}{2}+1} \\right) & \\mbox{, if } k \\mbox{ is even} \\end{array} \\right.`
max :math:`\\mathcal{A} (D^1, D^2, ..., D^k) = max_{ l} \\; D_{i,j}^l`
min :math:`\\mathcal{A} (D^1, D^2, ..., D^k) = min_{ l} \\; D_{i,j}^l`
=========== ===========
"""
import numpy as np
from abc import ABC, abstractmethod
class Aggregator(ABC):
"""
An abstract base class for *Aggregators*.
If custom *Aggregators* are created,
it is enough to derive from this class
and use it whenever an *Aggregator* is needed.
"""
@abstractmethod
def aggregate(self, matrices):
"""
The abstract method that is implemented by the concrete *Aggregators*.
:param matrices: a list of similarity or dissimilarity matrices as 2D numpy arrays.
:return: a single 2D numpy array.
"""
pass
class AggregatorFactory:
"""
The factory class for creating concrete instances of the implemented *Aggregators* with default values.
"""
@staticmethod
def create(aggregator):
"""
Creates an instance of the given *Aggregator* name.
:param aggregator: The name of the *Aggregator*, which can be ``mean``, ``median``, ``max`` or ``min``.
:return: An instance of the *Aggregator*.
:raise ValueError: The given *Aggregator* does not exist.
"""
if aggregator == "mean":
return MeanAggregator()
elif aggregator == "median":
return MedianAggregator()
elif aggregator == "max":
return MaxAggregator()
elif aggregator == "min":
return MinAggregator()
else:
raise ValueError(f"An aggregator of type {aggregator} does not exist.")
class MeanAggregator(Aggregator):
"""
This class aggregates similarity or dissimilarity matrices using the ``mean``.
Given :math:`k` similarity or dissimilarity matrices :math:`D^i \\in \\mathbb{R}^{n \\times n}`,
the *MeanAggregator* calculates
.. centered::
:math:`\\mathcal{A} (D^1, D^2, ..., D^k) = \\frac{1}{k} \\sum_{i=1}^{k} D^i`.
"""
def aggregate(self, matrices):
"""
Calculates the mean of all given matrices along the zero axis.
:param matrices: A list of 2D numpy arrays.
:return: A 2D numpy array.
"""
return np.mean(matrices, axis=0)
class MedianAggregator(Aggregator):
"""
This class aggregates similarity or dissimilarity matrices using the ``median``.
Given :math:`k` similarity or dissimilarity matrices :math:`D^i \\in \\mathbb{R}^{n \\times n}`,
the *MedianAggregator* calculates
.. centered::
:math:`\\mathcal{A} (D^1, D^2, ..., D^k) = \\left\\{ \\begin{array}{ll} D^{\\frac{k+1}{2}} & \\mbox{, if } k \\mbox{ is odd} \\\\ \\frac{1}{2} \\left( D^{\\frac{k}{2}} + D^{\\frac{k}{2}+1} \\right) & \\mbox{, if } k \\mbox{ is even} \\end{array} \\right.`
"""
def aggregate(self, matrices):
"""
Calculates the median of all given matrices along the zero axis.
:param matrices: A list of 2D numpy arrays.
:return: A 2D numpy array.
"""
return np.median(matrices, axis=0)
class MaxAggregator(Aggregator):
"""
This class aggregates similarity or dissimilarity matrices using the ``max``.
Given :math:`k` similarity or dissimilarity matrices :math:`D^i \\in \\mathbb{R}^{n \\times n}`,
the *MaxAggregator* calculates
.. centered::
:math:`\\mathcal{A} (D^1, D^2, ..., D^k) = max_{ l} \\; D_{i,j}^l`.
"""
def aggregate(self, matrices):
"""
Calculates the max of all given matrices along the zero axis.
:param matrices: A list of 2D numpy arrays.
:return: A 2D numpy array.
"""
return np.max(matrices, axis=0)
class MinAggregator(Aggregator):
"""
This class aggregates similarity or dissimilarity matrices using the ``min``.
Given :math:`k` similarity or dissimilarity matrices :math:`D^i \\in \\mathbb{R}^{n \\times n}`,
the *MinAggregator* calculates
.. centered::
:math:`\\mathcal{A} (D^1, D^2, ..., D^k) = min_{ l} \\; D_{i,j}^l`.
"""
def aggregate(self, matrices):
"""
Calculates the min of all given matrices along the zero axis.
:param matrices: A list of 2D numpy arrays.
:return: A 2D numpy array.
"""
return np.min(matrices, axis=0)
| 35.705882 | 267 | 0.58521 | 728 | 5,463 | 4.377747 | 0.18956 | 0.072482 | 0.037653 | 0.103546 | 0.534672 | 0.491999 | 0.463759 | 0.463759 | 0.463759 | 0.463759 | 0 | 0.01153 | 0.237964 | 5,463 | 152 | 268 | 35.940789 | 0.754024 | 0.701812 | 0 | 0.16129 | 0 | 0 | 0.057844 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.193548 | false | 0.032258 | 0.064516 | 0 | 0.709677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
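The `Aggregator` classes in the row above all reduce a stack of :math:`k` matrices of shape :math:`n \times n` to a single :math:`n \times n` matrix by applying a reduction along the stacking axis. A minimal self-contained sketch of the same pattern, using NumPy directly rather than the library's factory classes (the `aggregate` function here is an illustration, not part of the `contextual_encoders` API):

```python
import numpy as np

def aggregate(matrices, how="mean"):
    """Collapse a list of k (n x n) matrices into one (n x n) matrix.

    Mirrors the name-based dispatch of AggregatorFactory: 'how' selects
    the elementwise reduction applied along the stacking axis.
    """
    ops = {"mean": np.mean, "median": np.median, "max": np.max, "min": np.min}
    if how not in ops:
        raise ValueError(f"An aggregator of type {how} does not exist.")
    # Stack the k matrices into shape (k, n, n), then reduce over axis 0.
    return ops[how](np.stack(matrices), axis=0)

d1 = np.array([[0.0, 1.0], [1.0, 0.0]])  # toy dissimilarity matrix for attribute 1
d2 = np.array([[0.0, 3.0], [3.0, 0.0]])  # toy dissimilarity matrix for attribute 2
combined = aggregate([d1, d2], "mean")
```

Because the reduction is elementwise along axis 0, each cell :math:`(i, j)` of the result is the mean (or median/max/min) of that cell across all :math:`k` attribute matrices.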
6624d6cbdf35dd41d2cb1c12bee1f4a54b12e14b | 4,427 | py | Python | src/frame/mysql_manager.py | f304646673/scheduler_frame | 0a9ba45a6523cbf9bd50e9fa8e08c8bfd2a9204a | [
"Apache-2.0"
] | 9 | 2017-05-14T05:12:32.000Z | 2022-01-13T08:11:07.000Z | src/frame/mysql_manager.py | f304646673/scheduler_frame | 0a9ba45a6523cbf9bd50e9fa8e08c8bfd2a9204a | [
"Apache-2.0"
] | null | null | null | src/frame/mysql_manager.py | f304646673/scheduler_frame | 0a9ba45a6523cbf9bd50e9fa8e08c8bfd2a9204a | [
"Apache-2.0"
] | 7 | 2017-08-28T08:31:43.000Z | 2020-03-03T07:18:37.000Z | import json
import frame_tools
from collections import OrderedDict
import conf_keys
from mysql_conn import mysql_conn
from loggingex import LOG_WARNING
from loggingex import LOG_INFO
from singleton import singleton
class mysql_conn_info:
def __init__(self):
self.valid = 0
self.conns_dict = OrderedDict()
@singleton
class mysql_manager():
def __init__(self):
self._conns = {}
def modify_conns(self, conns_info):
for (conn_name, conn_info) in conns_info.items():
conn_info_hash = frame_tools.hash(json.dumps(conn_info))
if conn_name in self._conns.keys():
if conn_info_hash in self._conns[conn_name].conns_dict.keys():
continue
else:
self._conns[conn_name] = mysql_conn_info()
for key in conf_keys.mysql_conn_keys:
if key not in conn_info.keys():
continue
conn_obj = mysql_conn(conn_info["host"], conn_info["port"], conn_info["user"], conn_info["passwd"], conn_info["db"], conn_info["charset"])
self._conns[conn_name].conns_dict[conn_info_hash] = conn_obj
self._conns[conn_name].valid = 1
self._print_conns()
def add_conns(self, conns_info):
self.modify_conns(conns_info)
def remove_conns(self, conns_info):
for (conn_name, conn_info) in conns_info.items():
conn_info_hash = frame_tools.hash(json.dumps(conn_info))
if conn_name in self._conns.keys():
if conn_info_hash in self._conns[conn_name].conns_dict.keys():
self._conns[conn_name].valid = 0
self._print_conns()
def get_mysql_conn(self, conn_name):
if conn_name not in self._conns.keys():
return None
conn_info = self._conns[conn_name]
valid = self._conns[conn_name].valid
if 0 == valid:
return None
conns_dict_keys = list(self._conns[conn_name].conns_dict.keys())
if len(conns_dict_keys) == 0:
return None
key = conns_dict_keys[-1]
ret_conn = self._conns[conn_name].conns_dict[key]
return ret_conn
def _print_conns(self):
for (conn_name, conn_info) in self._conns.items():
out_str = "conn name: " + conn_name + "\n"
out_str = out_str + "conn info valid: " + str(conn_info.valid) + "\n"
for (key, value) in conn_info.conns_dict.items():
out_str = out_str + key + str(value) + "\n"
LOG_INFO(out_str)
def refresh_all_conns_tables_info(self):
for (conn_name, conn_info) in self._conns.items():
conn = self.get_mysql_conn(conn_name)
if conn is not None:
conn.refresh_tables_info()
if __name__ == "__main__":
import os
os.chdir("../../")
from j_load_mysql_conf import j_load_mysql_conf
from scheduler_frame_conf_inst import scheduler_frame_conf_inst
frame_conf_inst = scheduler_frame_conf_inst()
frame_conf_inst.load("./conf/frame.conf")
j_load_mysql_conf_obj = j_load_mysql_conf()
j_load_mysql_conf_obj.run()
a = mysql_manager()
print(a.get_mysql_conn("stock_db"))
print(a.get_mysql_conn("stock_part_35"))
#test_data_1 = {"a1":{"host":"127.0.0.1", "port":123, "user":"fangliang", "passwd":"fl_pwd", "db":"db1", "charset":"utf8"}}
#a.add_conns(test_data_1)
#print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
#a.add_conns(test_data_1)
#print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
#test_data_2 = {"a2":{"host":"127.0.0.2", "port":123, "user":"fangliang", "passwd":"fl_pwd", "db":"db1", "charset":"utf8"}}
#a.add_conns(test_data_2)
#print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
#test_data_3 = {"a2":{"host":"127.0.0.3", "port":123, "user":"fangliang", "passwd":"fl_pwd", "db":"db1", "charset":"utf8"}}
#a.modify_conns(test_data_3)
#print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
#test_data_4 = {"a2":{"host":"127.0.0.3", "port":123, "user":"fangliang", "passwd":"fl_pwd", "db":"db1", "charset":"utf8"}}
#a.remove_conns(test_data_4)
pass
| 38.833333 | 150 | 0.649424 | 581 | 4,427 | 4.583477 | 0.151463 | 0.072099 | 0.048817 | 0.063838 | 0.478032 | 0.44724 | 0.373639 | 0.324822 | 0.247841 | 0.247841 | 0 | 0.019062 | 0.229727 | 4,427 | 113 | 151 | 39.176991 | 0.761877 | 0.225435 | 0 | 0.265823 | 0 | 0 | 0.03308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.025316 | 0.151899 | null | null | 0.063291 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
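The `modify_conns` method in the row above deduplicates connections by hashing each connection's JSON-serialized config, so re-adding an identical config is a no-op. The same idea can be sketched with only the standard library (`conn_key` and the `pool` dict here are illustrative stand-ins, not part of the frame's API; the real code uses `frame_tools.hash`):

```python
import hashlib
import json

def conn_key(conn_info):
    """Hash a connection config so identical configs map to one cached entry.

    sort_keys=True makes the hash independent of dict insertion order,
    an assumption added here for determinism.
    """
    payload = json.dumps(conn_info, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

pool = {}
info = {"host": "127.0.0.1", "port": 3306, "user": "u", "passwd": "p", "db": "d"}

key = conn_key(info)
if key not in pool:
    pool[key] = object()  # stand-in for a real mysql_conn instance
```

Calling `conn_key` again with an equal config returns the same key, so the cached connection is reused instead of being reopened.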
66276434bf67465167e6057a31abd3d18429d11b | 13,793 | py | Python | src/scripts/experiment-1-searchstims/generate_source_data_csv.py | NickleDave/Nicholson-Prinz-2020 | 35d49c5330f9e5e9945eb2ea93302b60ee1f0c1f | [
"BSD-3-Clause"
] | 1 | 2021-05-17T15:30:11.000Z | 2021-05-17T15:30:11.000Z | src/scripts/experiment-1-searchstims/generate_source_data_csv.py | NickleDave/Nicholson-Prinz-2020 | 35d49c5330f9e5e9945eb2ea93302b60ee1f0c1f | [
"BSD-3-Clause"
] | 12 | 2021-07-03T19:41:59.000Z | 2021-07-29T02:01:33.000Z | src/scripts/experiment-1-searchstims/generate_source_data_csv.py | NickleDave/Nicholson-Prinz-2021 | 8ba8919c5c8203730fa86edaa4771f37d02d31dd | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
"""script that generates source data csvs for searchstims experiment figures"""
from argparse import ArgumentParser
from collections import defaultdict
from pathlib import Path
import pandas as pd
import pyprojroot
import searchnets
def main(results_gz_root,
source_data_root,
all_csv_filename,
acc_diff_csv_filename,
stim_acc_diff_csv_filename,
net_acc_diff_csv_filename,
acc_diff_by_stim_csv_filename,
net_names,
methods,
modes,
alexnet_split_csv_path,
VGG16_split_csv_path,
learning_rate=1e-3,
):
"""generate .csv files used as source data for figures corresponding to experiments
carried out with stimuli generated by searchstims library
Parameters
----------
results_gz_root : str, Path
path to root of directory that has results.gz files created by `searchnets test` command
source_data_root : str, path
path to root of directory where csv files
that are the source data for figures should be saved.
all_csv_filename : str
filename for .csv saved that contains results from **all** results.gz files.
Saved in source_data_root.
acc_diff_csv_filename : str
filename for .csv should be saved that contains group analysis derived from all results,
with difference in accuracy between set size 1 and 8.
Saved in source_data_root.
stim_acc_diff_csv_filename : str
filename for .csv saved that contains group analysis derived from all results,
with stimulus type column sorted by difference in accuracy between set size 1 and 8.
Saved in source_data_root.
net_acc_diff_csv_filename : str
filename for .csv saved that contains group analysis derived from all results,
with net name column sorted by mean accuracy across all stimulus types.
Saved in source_data_root.
acc_diff_by_stim_csv_filename : str
filename for .csv saved that contains group analysis derived from all results,
with difference in accuracy between set size 1 and 8,
pivoted so that columns are visual search stimulus type.
Saved in source_data_root.
net_names : list
of str, neural network architecture names
methods : list
of str, training "methods". Valid values are {"transfer", "initialize"}.
modes : list
of str, training "modes". Valid values are {"classify","detect"}.
alexnet_split_csv_path : str, Path
path to .csv that contains dataset splits for "alexnet-sized" searchstim images
VGG16_split_csv_path : str, Path
path to .csv that contains dataset splits for "VGG16-sized" searchstim images
    learning_rate : float
        learning rate value for all experiments. Default is 1e-3.
"""
results_gz_root = Path(results_gz_root)
source_data_root = Path(source_data_root)
if not source_data_root.exists():
raise NotADirectoryError(
f'directory specified as source_data_root not found: {source_data_root}'
)
df_list = []
for net_name in net_names:
for method in methods:
if method not in METHODS:
raise ValueError(
f'invalid method: {method}, must be one of: {METHODS}'
)
for mode in modes:
results_gz_path = sorted(results_gz_root.glob(f'**/*{net_name}*{method}*gz'))
if mode == 'classify':
results_gz_path = [results_gz for results_gz in results_gz_path if 'detect' not in str(results_gz)]
elif mode == 'detect':
results_gz_path = [results_gz for results_gz in results_gz_path if 'detect' in str(results_gz)]
else:
raise ValueError(
f'invalid mode: {mode}, must be one of: {MODES}'
)
if len(results_gz_path) != 1:
raise ValueError(f'found more than one results.gz file: {results_gz_path}')
results_gz_path = results_gz_path[0]
if net_name == 'alexnet' or 'CORnet' in net_name:
csv_path = alexnet_split_csv_path
elif net_name == 'VGG16':
csv_path = VGG16_split_csv_path
else:
raise ValueError(f'no csv path defined for net_name: {net_name}')
df = searchnets.analysis.searchstims.results_gz_to_df(results_gz_path,
csv_path,
net_name,
method,
mode,
learning_rate)
df_list.append(df)
df_all = pd.concat(df_list)
# Get just the transfer learning results,
# then group by network, stimulus, and set size,
# and compute the mean accuracy for each set size.
df_transfer = df_all[df_all['method'] == 'transfer']
df_transfer_acc_mn = df_transfer.groupby(['net_name', 'stimulus', 'set_size']).agg({'accuracy':'mean'})
df_transfer_acc_mn = df_transfer_acc_mn.reset_index()
# Make one more `DataFrame`
# where variable is difference of mean accuracies on set size 1 and set size 8.
# We use this to organize the figure,
# and to show a heatmap with a marginal distribution.
records = defaultdict(list)
for net_name in df_transfer_acc_mn['net_name'].unique():
df_net = df_transfer_acc_mn[df_transfer_acc_mn['net_name'] == net_name]
for stim in df_net['stimulus'].unique():
df_stim = df_net[df_net['stimulus'] == stim]
set_size_1_acc = df_stim[df_stim['set_size'] == 1]['accuracy'].values.item()
set_size_8_acc = df_stim[df_stim['set_size'] == 8]['accuracy'].values.item()
acc_diff = set_size_1_acc - set_size_8_acc
records['net_name'].append(net_name)
records['stimulus'].append(stim)
records['set_size_1_acc'].append(set_size_1_acc)
records['set_size_8_acc'].append(set_size_8_acc)
records['acc_diff'].append(acc_diff)
df_acc_diff = pd.DataFrame.from_records(records)
df_acc_diff = df_acc_diff[['net_name', 'stimulus', 'set_size_1_acc', 'set_size_8_acc', 'acc_diff']]
# columns will be stimuli, in increasing order of accuracy drop across models
stim_acc_diff_df = df_acc_diff.groupby(['stimulus']).agg({'acc_diff': 'mean', 'set_size_1_acc': 'mean'})
stim_acc_diff_df = stim_acc_diff_df.reset_index()
stim_acc_diff_df = stim_acc_diff_df.sort_values(by=['set_size_1_acc', 'acc_diff'], ascending=False)
# rows will be nets, in decreasing order of accuracy drops across stimuli
net_acc_diff_df = df_acc_diff.groupby(['net_name']).agg({'acc_diff': 'mean'})
net_acc_diff_df = net_acc_diff_df.reset_index()
net_acc_diff_df = net_acc_diff_df.sort_values(by='acc_diff', ascending=False)
# no idea how much I am abusing the Pandas API, just trying to make a pivot table into a data frame here
# https://stackoverflow.com/a/42708606/4906855
# want the columns to be (sorted) stimulus type,
# and rows be (sorted) network names,
# with values in cells being effect size
df_acc_diff_only = df_acc_diff[['net_name', 'stimulus', 'acc_diff']]
df_acc_diff_by_stim = df_acc_diff_only.pivot_table(index='net_name', columns='stimulus')
df_acc_diff_by_stim.columns = df_acc_diff_by_stim.columns.get_level_values(1)
df_acc_diff_by_stim = pd.DataFrame(df_acc_diff_by_stim.to_records())
df_acc_diff_by_stim = df_acc_diff_by_stim.set_index('net_name')
df_acc_diff_by_stim = df_acc_diff_by_stim.reindex(net_acc_diff_df['net_name'].values.tolist())
df_acc_diff_by_stim = df_acc_diff_by_stim[stim_acc_diff_df['stimulus'].values.tolist()]
# finally, save csvs
df_all.to_csv(source_data_root.joinpath(all_csv_filename), index=False)
df_acc_diff.to_csv(source_data_root.joinpath(acc_diff_csv_filename), index=False)
stim_acc_diff_df.to_csv(source_data_root.joinpath(stim_acc_diff_csv_filename), index=False)
net_acc_diff_df.to_csv(source_data_root.joinpath(net_acc_diff_csv_filename), index=False)
# for this csv, the index is "net names" -- we want to keep it
df_acc_diff_by_stim.to_csv(source_data_root.joinpath(acc_diff_by_stim_csv_filename))
ROOT = pyprojroot.here()
DATA_DIR = ROOT.joinpath('data')
RESULTS_ROOT = ROOT.joinpath('results')
SEARCHSTIMS_ROOT = RESULTS_ROOT.joinpath('searchstims')
RESULTS_GZ_ROOT = SEARCHSTIMS_ROOT.joinpath('results_gz')
LEARNING_RATE = 1e-3
NET_NAMES = [
'alexnet',
'VGG16',
'CORnet_Z',
'CORnet_S',
]
METHODS = [
'initialize',
'transfer'
]
MODES = ['classify']
SEARCHSTIMS_OUTPUT_ROOT = ROOT.joinpath('../visual_search_stimuli')
alexnet_split_csv_path = SEARCHSTIMS_OUTPUT_ROOT.joinpath(
'alexnet_multiple_stims/alexnet_multiple_stims_128000samples_balanced_split.csv')
VGG16_split_csv_path = SEARCHSTIMS_OUTPUT_ROOT.joinpath(
'VGG16_multiple_stims/VGG16_multiple_stims_128000samples_balanced_split.csv'
)
def get_parser():
parser = ArgumentParser()
parser.add_argument('--results_gz_root',
help='path to root of directory that has results.gz files created by searchstims test command')
parser.add_argument('--source_data_root',
help=('path to root of directory where "source data" csv files '
'that are generated should be saved'))
parser.add_argument('--all_csv_filename', default='all.csv',
help=('filename for .csv that should be saved '
'that contains results from **all** results.gz files. '
'Saved in source_data_root.'))
parser.add_argument('--acc_diff_csv_filename', default='acc_diff.csv',
help=("filename for .csv should be saved "
"that contains group analysis derived from all results, "
"with difference in accuracy between set size 1 and 8. "
"Saved in source_data_root"))
parser.add_argument('--stim_acc_diff_csv_filename', default='stim_acc_diff.csv',
help=("filename for .csv should be saved "
"that contains group analysis derived from all results, "
"with stimulus type column sorted by difference in accuracy between set size 1 and 8. "
"Saved in source_data_root"))
parser.add_argument('--net_acc_diff_csv_filename', default='net_acc_diff.csv',
help=("filename for .csv should be saved "
"that contains group analysis derived from all results, "
"with net name column sorted by mean accuracy across all stimulus types."
"Saved in source_data_root."))
parser.add_argument('--acc_diff_by_stim_csv_filename', default='acc_diff_by_stim.csv',
help=("filename for .csv should be saved "
"that contains group analysis derived from all results, "
"with difference in accuracy between set size 1 and 8, "
"pivoted so that columns are visual search stimulus type. "
"Saved in source_data_root"))
parser.add_argument('--net_names', default=NET_NAMES,
help='comma-separated list of neural network architecture names',
type=lambda net_names: net_names.split(','))
parser.add_argument('--methods', default=METHODS,
help='comma-separated list of training "methods", must be in {"transfer", "initialize"}',
type=lambda methods: methods.split(','))
parser.add_argument('--modes', default=MODES,
help='comma-separate list of training "modes", must be in {"classify","detect"}',
type=lambda modes: modes.split(','))
parser.add_argument('--learning_rate', default=LEARNING_RATE,
help=f'float, learning rate value for all experiments. Default is {LEARNING_RATE}')
parser.add_argument('--alexnet_split_csv_path', default=alexnet_split_csv_path,
help='path to .csv that contains dataset splits for "alexnet-sized" searchstim images')
parser.add_argument('--VGG16_split_csv_path', default=VGG16_split_csv_path,
help='path to .csv that contains dataset splits for "VGG16-sized" searchstim images')
return parser
if __name__ == '__main__':
parser = get_parser()
args = parser.parse_args()
main(results_gz_root=args.results_gz_root,
source_data_root=args.source_data_root,
all_csv_filename=args.all_csv_filename,
acc_diff_csv_filename=args.acc_diff_csv_filename,
stim_acc_diff_csv_filename=args.stim_acc_diff_csv_filename,
net_acc_diff_csv_filename=args.net_acc_diff_csv_filename,
acc_diff_by_stim_csv_filename=args.acc_diff_by_stim_csv_filename,
net_names=args.net_names,
methods=args.methods,
modes=args.modes,
alexnet_split_csv_path=args.alexnet_split_csv_path,
VGG16_split_csv_path=args.VGG16_split_csv_path,
learning_rate=args.learning_rate,
)
| 49.437276 | 119 | 0.645037 | 1,832 | 13,793 | 4.546397 | 0.132642 | 0.060511 | 0.042022 | 0.029655 | 0.529956 | 0.483852 | 0.40473 | 0.372794 | 0.316124 | 0.282867 | 0 | 0.00928 | 0.2734 | 13,793 | 278 | 120 | 49.615108 | 0.821792 | 0.215907 | 0 | 0.081522 | 1 | 0 | 0.264734 | 0.035587 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01087 | false | 0 | 0.032609 | 0 | 0.048913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
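The pivot step in `main()` above turns long-format accuracy-drop records into a table with network names as rows, stimulus types as columns, and the set-size-1 minus set-size-8 accuracy difference as cell values. A sketch on made-up numbers (column names mirror the script's; the stimulus labels and values are hypothetical):

```python
import pandas as pd

# Hypothetical records in the same long shape the script builds per net/stimulus.
df = pd.DataFrame({
    "net_name": ["alexnet", "alexnet", "VGG16", "VGG16"],
    "stimulus": ["RVvGV", "2_v_5", "RVvGV", "2_v_5"],
    "acc_diff": [0.05, 0.30, 0.02, 0.25],
})

# Rows = nets, columns = stimuli, cells = accuracy drop from set size 1 to 8.
table = df.pivot_table(index="net_name", columns="stimulus", values="acc_diff")
```

With one value per (net, stimulus) pair, `pivot_table`'s default mean aggregation just places each value in its cell; the script then reindexes rows and columns by the sorted group-level accuracy drops.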
6628516331bee00443b896aa90f62547f50ba151 | 1,252 | py | Python | archiv/migrations/0001_initial.py | acdh-oeaw/nerdpool-api | e4388d4b5b323113ba675a732952c2ecf5fcef6d | [
"MIT"
] | null | null | null | archiv/migrations/0001_initial.py | acdh-oeaw/nerdpool-api | e4388d4b5b323113ba675a732952c2ecf5fcef6d | [
"MIT"
] | null | null | null | archiv/migrations/0001_initial.py | acdh-oeaw/nerdpool-api | e4388d4b5b323113ba675a732952c2ecf5fcef6d | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-03-20 12:50
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='NerSource',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=250, unique=True)),
('info', models.JSONField()),
],
),
migrations.CreateModel(
name='NerSample',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ner_text', models.TextField(blank=True, help_text='text', null=True, verbose_name='text')),
('ner_sample', models.JSONField(blank=True, help_text='text', null=True, verbose_name='text')),
('ner_ent_exist', models.BooleanField(default=False, verbose_name='Contains Entities')),
('ner_source', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='archiv.nersource')),
],
),
]
| 36.823529 | 118 | 0.597444 | 133 | 1,252 | 5.488722 | 0.488722 | 0.075342 | 0.065753 | 0.060274 | 0.345205 | 0.345205 | 0.345205 | 0.345205 | 0.345205 | 0.345205 | 0 | 0.019481 | 0.261981 | 1,252 | 33 | 119 | 37.939394 | 0.770563 | 0.035942 | 0 | 0.384615 | 1 | 0 | 0.103734 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |